\section{Introduction} To adapt to the distributed generation and increasing penetration of volatile renewable energies, and to improve the overall efficiency of the power industry, power systems have undergone a substantial change from centrally controlled, vertically integrated organizations to decentrally controlled, deregulated systems. Market mechanisms have been introduced at various levels of power systems to create competition. In particular, a local electricity market can be created to motivate self-interested distributed energy resources (DERs) to realize efficient energy allocation and achieve system-level objectives. Several demonstration projects have been implemented to validate this idea, with promising results. For example, the GridWise\textregistered\ demonstration project by the Pacific Northwest National Laboratory showed that the market-based coordination of residential loads could reduce the utility demand and congestion at key times~\cite{FullerSchneiderChassin2011}. The AEP Ohio demonstration project~\cite{AEP} further showed the capability of enforcing both system-wide and local constraints, while optimizing both system and individual objectives~\cite{WidergrenSubbaraoFullerEtAl2014}. Motivated by these projects, recent studies have focused on market mechanism design for engaging different types of DERs~\cite{li2015,HaoCorbinKalsiEtAl2017,BejestaniAnnaswamySamad2014,LiChenLow2011}. For example, the authors in~\cite{li2015} proposed a market mechanism to coordinate a population of thermostatically controlled loads (TCLs) for demand response. The proposed bidding strategies incorporated the TCL dynamics in order to improve the accuracy and efficiency of the load coordination. For commercial HVAC (heating, ventilation, and air conditioning) systems, a double-auction market structure was designed that takes into account detailed nonlinear building models~\cite{HaoCorbinKalsiEtAl2017}. 
The proposed market was demonstrated to be very efficient at peak shaving and load shifting services. Other market models for demand response using batteries or PEVs (plug-in electric vehicles) were also proposed in~\cite{ChenLiLowEtAl2010,LiChenLow2011}. Despite the popularity of market-based coordination, there is growing concern about the risk of power market instability. Under some extreme conditions, the aggregate demand and the energy price can be unstable or exhibit high volatility over time. Various factors that contribute to the instability of market-coordinated TCLs have been examined in~\cite{NazirHiskens}. Some earlier works abstracted simple linear differential equation models to quantify power market stability~\cite{Alvarado1997,Alvarado1999,NutaroProtopopescu2009}. A discrete time nonlinear model based on the marginal cost pricing mechanism was proposed in~\cite{RoozbehaniDahlehMitter2012}. It assumed that the demand side did not bid into the market and that its utility function was unknown to the system operator. The focus was therefore on the market instability caused by the coordinator's uncertain demand prediction. Under the same framework as~\cite{RoozbehaniDahlehMitter2012}, the authors in~\cite{ZhouRoozbehaniDahlehEtAl2017} considered a more realistic dynamic consumption model. It was obtained from solving an optimal inventory control problem, and the market stability was found to be related to the ratio between the marginal backlog disutility and the marginal cost of supply. The aforementioned works only considered aggregate demand models in autoregressive form depending on the previous consumption or price history~\cite{RoozbehaniDahlehMitter2012,ZhouRoozbehaniDahlehEtAl2017}. However, in order to quantify the aggregate demand variation more accurately, the internal dynamics and operational limits of individual DERs must be considered. 
More importantly, the impact of the population dynamics of the DERs on the overall market dynamics must be investigated systematically. These features are nevertheless abstracted away in the existing literature. In this paper, we propose a market model which explicitly considers the individual dynamics of the DERs. Specifically, we model each DER by a general constrained linear system. Such models have been widely employed to describe the dynamics of a DER's internal energy state~\cite{Hao2015,ZhaoHaoZhang2016,ZhaoZhangHaoEtAl2017}. Moreover, we consider the bidding process of the DERs and model the bidding functions to be dependent on the energy state. Consequently, the market is cleared at the competitive equilibrium, which results in an efficient energy allocation for each individual DER. These features distinguish our work from~\cite{ZhouRoozbehaniDahlehEtAl2017,RoozbehaniDahlehMitter2012}, which assumed no bidding process and used an ex-ante price that may not clear the market. Under the proposed market structure with heterogeneous dynamic DER models, we translate the analysis of the market stability into the stability analysis of a closed-loop system of the DER dynamics. In general, this system can be viewed as a model predictive control (MPC) system~\cite{MayneRawlingsRaoEtAl2000,GrunePannek2011} or a system with an optimization-based controller~\cite{HeathWills2005,HeathLi2008,Primbs2001,KordaJones2017}. By assuming quadratic utility and cost functions, it can be further shown to be a piecewise linear system~\cite{BemporadMorariDuaEtAl2002}. Alternatively, it can be viewed as an input saturation system with state-dependent saturation limits. Most of the existing stability results considered only special cases of such systems~\cite{MayneRawlingsRaoEtAl2000,HeathWills2005,HeathLi2008,Primbs2001,Liu1992,HuLin2001,HuLin2001a}, and are generally not applicable to the system considered in this paper. 
In fact, even the analysis of these much simpler systems is very challenging. While general numerical stability tests via Lyapunov methods may be employed, they lend little insight into the market practice. In addition, they are very conservative and can become intractable as the number of DERs increases. To address the above challenges, we propose a contraction analysis based approach to analyzing the stability of such systems. This approach enables us to derive analytical conditions that guarantee the market stability. These conditions provide important insights into the design of the bidding functions. The key observation is that the market stability can always be guaranteed by selecting shallower bidding functions, that is, the linear region of the bidding function should have a relatively small slope. Moreover, these conditions are very mild, and thus leave full freedom to the individual users to design their desired bidding functions while ensuring a stable electricity market. The rest of this paper is organized as follows. The market structure and its equilibrium are described in Section~II. The characterization of the equilibrium and the stability results are presented in Section~III. Numerical examples illustrating the application of the stability results are provided in Section~IV. Finally, the paper is concluded in Section~V. \textbf{Notation}: We use ${\bf 1}_{m}$ to represent the $m$ dimensional column vector of all ones, and $I_{m}$ the $m$ dimensional identity matrix. The index set $\{1,2,...,m\}$ will be denoted by $\mathcal{M}$. Let $\mathcal{X}$ be a subset of $\mathbb{R}^{m}$; then for $x\in\mathbb{R}^{m}$, the set $\mathcal{X}-x$ is defined as $\{x'-x,\,\forall x'\in\mathcal{X}\}$. For a symmetric matrix $S$, the inequality $S\succ0$ ($S\succeq0$) means that the matrix is positive definite (positive semi-definite). 
We denote by $\mathcal{H}_{Q}$ with $Q\succ0$ the Hilbert space $\mathbb{R}^{m}$ with inner product $\left\langle \cdot,\cdot\right\rangle _{Q}:\,\mathbb{R}^{m}\times\mathbb{R}^{m}\mapsto\mathbb{R}$ defined as $\left\langle x,y\right\rangle _{Q}:=x^{T}Qy$, and induced norm $\left\Vert \cdot\right\Vert _{Q}:\,\mathbb{R}^{m}\mapsto\mathbb{R}_{\geq0}$ defined as $\left\Vert x\right\Vert _{Q}:=\sqrt{x^{T}Qx}$. The Euclidean 2-norm will be denoted by $\left\Vert \cdot\right\Vert _{2}$. The projection operator in $\mathcal{H}_{Q}$, denoted by $\text{Proj}_{C}^{Q}(x):\,\mathbb{R}^{m}\mapsto C$, is defined as $\text{Proj}_{C}^{Q}(x):=\arg\min_{y\in C}\left\Vert x-y\right\Vert _{Q}=\arg\min_{y\in C}\left\Vert x-y\right\Vert _{Q}^{2}$. When $Q=I_{m}$, we simply write $\text{Proj}_{C}(x)$, which is the standard projection in $\mathbb{R}^{m}$. \section{Problem Formulation} In this section, we first describe an electricity market model which involves the bidding and clearing processes at each market period. The problem of market stability analysis is then defined formally. \subsection{Market Structure} A system coordinator runs an electricity market to schedule and guide the power usage of $m$ DERs. We assume that the $i$th DER is modeled by the following discrete time scalar linear system subject to both state and input constraints, \begin{equation} x_{i}^{+}=a_{i}x_{i}+d_{i},\label{eq:sys} \end{equation} where $x_{i}$, $x_{i}^{+}\in\mathcal{X}_{i}$ represent the current and the successive energy states of the DER, respectively, and $d_{i}\in\mathcal{D}_{i}$ is its power consumption. The state and input constraint sets are denoted by $\mathcal{X}_{i}=[\underline{x}_{i},\bar{x}_{i}]$ and $\mathcal{D}_{i}=[\underline{d}_{i},\bar{d}_{i}]$, respectively. The constant $a_{i}\in(0,1]$ represents the energy dissipation rate. 
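As a quick illustration of the model~(\ref{eq:sys}), the energy state can be simulated directly; the parameter values below are purely illustrative and are not taken from any experiment in this paper.

```python
# Minimal simulation of the scalar DER model x+ = a*x + d in (eq:sys),
# with illustrative (made-up) parameters.
a = 0.9                    # energy dissipation rate, a in (0, 1]
x_lo, x_hi = 0.0, 10.0     # state constraint set X_i = [x_lo, x_hi]
d_lo, d_hi = 0.0, 2.0      # input constraint set D_i = [d_lo, d_hi]

x = 3.0
traj = [x]
for k in range(20):
    d = 1.0                # a fixed admissible consumption in D_i
    x = a * x + d
    traj.append(x)

# With constant d, the closed form is x(k) = a**k * x(0) + d*(1 - a**k)/(1 - a),
# which approaches the fixed point d/(1 - a) = 10.0, i.e., the upper state bound here.
print(traj[-1])
```

With this dissipative choice of $a$, the state approaches $d/(1-a)$ monotonically and remains inside the state constraint set for the entire horizon.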
This model has been widely used to describe the power flexibility of HVAC systems (when $a\in(0,1)$) or energy storage (when $a=1$); see, for example, \cite{Hao2015,ZhaoZhangHaoEtAl2017,ZhaoHaoZhang2016}. To ensure the controllability of the system~(\ref{eq:sys}), we impose the following conditions \begin{equation} a_{i}\underline{x}_{i}+\bar{d}_{i}>\underline{x}_{i},\,a_{i}\bar{x}_{i}+\underline{d}_{i}<\bar{x}_{i}.\label{eq:ctrl} \end{equation} Condition~(\ref{eq:ctrl}) guarantees that the DER can be controlled into $\mathcal{X}_{i}$ with $d_{i}\in\mathcal{D}_{i}$. Note that the demand of the DER changes dynamically with the current state. For example, the DER is less willing to procure power if its energy state is close to the upper bound. We assume that each DER submits a bid on the desired power and price to the coordinator in order to meet its own demand. Given the energy price, the demand of the DER is determined by the bidding function. The bidding function of the DER can be considered as the solution of a payoff maximization problem defined as follows, \begin{equation} \begin{array}{ll} \mbox{maximize} & v_{i}(d_{i},x_{i})-\lambda d_{i}\\ \mbox{subject to:} & \text{(\ref{eq:sys})},\,x_{i}^{+}\in\mathcal{X}_{i},\,d_{i}\in\mathcal{D}_{i}, \end{array}\label{eq:utility} \end{equation} over $d_{i}$, where $v_{i}:\,\,\mathbb{R}\times\mathbb{R}\mapsto\mathbb{R}$ is the consumer's utility function dependent on the current state $x_{i}$, and $\lambda$ is the energy price. Note that the optimal consumption $d_{i}^{*}(\lambda;x_{i})$ is solved only when $x_{i}\in\mathcal{X}_{i}$. In reality, the DER may not start with an initial condition in $\mathcal{X}_{i}$, in which case we simply assume the following consumption policy, \begin{equation} d_{i}=\begin{cases} \bar{d}_{i}, & \text{if }x_{i}<\underline{x}_{i},\\ \underline{d}_{i}, & \text{if }x_{i}>\bar{x}_{i}. 
\end{cases}\label{eq:outside} \end{equation} Under the controllability condition~(\ref{eq:ctrl}), the system converges exponentially to $\mathcal{X}_{i}$ from any initial state $x_{i}(0)\in\mathbb{R}\backslash\mathcal{X}_{i}$ under~(\ref{eq:outside}). Therefore, without loss of generality we will always assume that the initial state lies in $\mathcal{X}_{i}$. The bidding function $d_{i}^{*}(\lambda;x_{i})$ describes the price responsiveness of the DER's demand, that is, the change of the demand with respect to the price. As is common, we assume that the utility function $v_{i}$ is a concave function of $d_{i}$. The coordinator procures energy from electricity providers to meet the aggregate demand of the DERs. We assume that the providers have a fixed cost function $c:\,\mathbb{R}\mapsto\mathbb{R}$ for supplying the energy. As usual, the cost function $c$ is assumed to be convex, increasing, and to satisfy $\dot{c}(0)>0$. The providers are reimbursed at price $\lambda$ and therefore seek to maximize their profit by supplying \begin{equation} s^{*}(\lambda)=\arg\underset{s}{\max}\,\lambda s-c(s),\label{eq:cost} \end{equation} amount of power. The function $s^{*}(\lambda)$ describes the price sensitivity of the power supply. To clear the market, an aggregate demand curve is constructed by the coordinator from the submitted bidding functions~\cite{li2015}. It is the inverse mapping of the aggregate demand $\sum_{i\in\mathcal{M}}d_{i}^{*}(\lambda;x_{i})$, which describes the marginal utility. The supply curve is given by the optimality condition of~(\ref{eq:cost}), which is $\lambda=\dot{c}(s)$, i.e., the marginal cost as a function of the supply. Then the market is cleared at the intersection point $(\lambda_{c},\,s^{*})$ of the demand curve and the supply curve, where the aggregate demand equals the supply and the marginal utility equals the marginal cost, see Fig.~\ref{fig:MC}. 
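Since the aggregate demand is decreasing in the price and the supply is increasing, the clearing price can be found by a simple bisection. The sketch below illustrates this; the clipped-linear bid shape, quadratic cost, and all parameter values are illustrative assumptions for this example, not the paper's exact instances.

```python
# Sketch of the market clearing step: find the price lam at which the
# aggregate demand from the submitted bids equals the profit-maximizing
# supply s*(lam).  All functional forms and numbers here are illustrative.

def bid(lam, x, a=0.95, q=2.0, r=-0.5, c=3.0,
        x_box=(0.0, 10.0), d_box=(0.0, 2.0)):
    """State-dependent bid d_i*(lam; x_i): a linear-in-price demand
    clipped to the feasible interval (X_i - a_i x_i) ∩ D_i."""
    lo = max(x_box[0] - a * x, d_box[0])
    hi = min(x_box[1] - a * x, d_box[1])
    return min(max((-lam + r * x + c) / q, lo), hi)

def supply(lam, beta1=0.1, beta2=0.5):
    """Supply curve from lam = c'(s), assuming c(s) = beta1*s^2/2 + beta2*s."""
    return max((lam - beta2) / beta1, 0.0)

def clear_market(states, lo=0.0, hi=100.0, iters=60):
    """Bisection on the price: excess demand is decreasing in lam."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        excess = sum(bid(mid, x) for x in states) - supply(mid)
        if excess > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

lam_c = clear_market([2.0, 4.0, 6.0])
print(lam_c)   # market clearing price for this toy instance
```

At the returned price the aggregate demand matches the supply up to the bisection tolerance, which is exactly the intersection point described above.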
This marginal cost $\lambda_{c}$ is usually referred to as the market clearing price. The tuple $\{\lambda_{c},\,s^{*},\,d_{i}^{*},\forall i\in\mathcal{M}\}$ will be referred to as the market equilibrium. \subsection{Market Stability} As discussed in the previous subsection, at each market period the $i$th DER consumes $d_{i}^{*}(\lambda_{c};x_{i})$ amount of energy. Then the DER dynamics become \begin{equation} x_{i}^{+}=a_{i}x_{i}+d_{i}^{*}(\lambda_{c};x_{i}),\label{eq:GCL} \end{equation} for all $i\in\mathcal{M}$. As illustrated in Fig.~\ref{fig:CL}, this is a closed-loop system under the feedback of the market clearing process. In particular, notice that the market clearing price can also be viewed as a function of the energy states of the DERs. As a result, the stability of this closed-loop system must be investigated carefully. Ideally, in the absence of external disturbances, such as electricity cost variations, coordination signal changes, and weather changes, the system~(\ref{eq:GCL}) should converge to a steady state as fast as possible. In the rest of this paper, we will establish conditions under which~(\ref{eq:GCL}) is exponentially stable, that is, there exist an equilibrium $x^{*}\in\mathcal{X}$, $M>0$, and $\rho\in(0,1)$ such that for all initial conditions $x(0)\in\mathbb{R}^{m}$ and $\forall k\in\mathbb{Z}_{+}$, we have \[ \left\Vert x(k)-x^{*}\right\Vert \leq M\rho^{k}\left\Vert x(0)-x^{*}\right\Vert , \] where $x(k)$ is the system state at the $k$th period. Consequently, the inherent robustness of an exponentially stable system can reduce the price and power consumption volatility. Hereafter we will use the terms market stability and stability of the closed-loop system~(\ref{eq:GCL}) interchangeably. In the next section, we will first characterize the market equilibrium at each market period. This enables us to characterize the consumption profile $d_{i}^{*}(\lambda_{c};x_{i})$ of the DERs. 
Based on these characterizations, the stability of the closed-loop system~(\ref{eq:GCL}) is analyzed via contraction analysis. \begin{center} \begin{figure*}[!tp] \centering{} \begin{minipage}[t]{0.32\textwidth} \begin{center} \includegraphics[clip,width=0.9\linewidth]{MarketClearing}\caption{\label{fig:MC}Typical market clearing process} \par\end{center} \end{minipage}\hfill{} \begin{minipage}[t]{0.32\textwidth} \begin{center} \includegraphics[clip,width=0.95\linewidth]{ClosedLoop}\caption{\label{fig:CL}Closed-loop DER dynamics} \par\end{center} \end{minipage}\hfill{} \begin{minipage}[t]{0.32\textwidth} \begin{center} \includegraphics[clip,scale=0.6]{BiddingFcn}\caption{\label{fig:BidsCurve}Bidding function in~(\ref{eq:bids})} \par\end{center} \end{minipage} \end{figure*} \par\end{center} \section{Stability Analysis} In this section, we first define and characterize the market equilibrium. We show that it can be solved from a social welfare optimization problem. Under the assumption of quadratic utility and cost functions, we explicitly characterize~(\ref{eq:GCL}) as a discrete nonlinear system. We further derive analytical conditions which guarantee the stability of this nonlinear system. \subsection{Competitive Equilibrium} The intersection point of the demand curve and the supply curve represents an equilibrium of the market. A \emph{competitive equilibrium} of the above described electricity market is a tuple $\{\lambda^{*},\,s^{*},\,d_{i}^{*},\forall i\in\mathcal{M}\}$ such that \begin{itemize} \item $d_{i}^{*}$ maximizes the $i$th consumer's payoff, that is, it is an optimal solution to the problem~(\ref{eq:utility}), for $i\in\mathcal{M}$. \item $s^{*}$ maximizes the profit of the supplier, that is, it is an optimal solution to the problem~(\ref{eq:cost}). 
\item $\lambda^{*}$ clears the market, that is, $\sum_{i}d_{i}^{*}(\lambda^{*};x_{i})=c^{-1}(\lambda^{*}).$ \end{itemize} From the discussion of the last section, it is clear that given the current state of each DER, their bids are determined, and the market is cleared at a competitive equilibrium depending on the DERs' states. It is well known by the welfare theorems that the competitive equilibrium is Pareto efficient and every Pareto efficient allocation is attainable by a competitive equilibrium. The following lemma shows that any competitive equilibrium is efficient, that is, it is an optimal solution to a social welfare maximization problem. \begin{lem} The competitive equilibrium $\{\lambda^{*},s^{*},d_{i}^{*},\forall i\in\mathcal{M}\}$ of the electricity market is equivalent to the optimal solution of the following social welfare optimization problem, \begin{equation} \begin{array}{ll} \underset{d_{i},s}{\mbox{maximize}} & \sum_{i=1}^{m}v_{i}(d_{i},x_{i})-c(s)\\ \mbox{subject to:} & x_{i}^{+}=a_{i}x_{i}+d_{i},\\ & x_{i}^{+}\in\mathcal{X}_{i},\,d_{i}\in\mathcal{D}_{i},\,\forall i\in\mathcal{M}\\ & \sum_{i=1}^{m}d_{i}=s, \end{array}\label{eq:social} \end{equation} where the market clearing price $\lambda$ emerges as the Lagrangian dual variable associated with the demand-supply balance constraint (the last equality constraint in~(\ref{eq:social})). \end{lem} The proof of the above lemma follows the standard argument of checking that the Karush\textendash Kuhn\textendash Tucker (KKT) conditions of~(\ref{eq:social}) are equivalent to those of~(\ref{eq:utility}) and~(\ref{eq:cost}); see, for example,~\cite{HaoCorbinKalsiEtAl2017}. Note that the social welfare problem~(\ref{eq:social}) has to be solved at each market period after the update of the DERs' energy states. 
The DER dynamics~(\ref{eq:sys}) with the feedback of the optimal solution $d_{i}^{*},\,\forall i\in\mathcal{M}$ can be viewed as a one-horizon model predictive control (MPC) system, or alternatively, as a system with an optimization-based controller~\cite{Primbs2001,HeathWills2005,HeathLi2008}. It is well known that such systems can be unstable even when the open loop systems are stable~\cite{Maciejowski2000}. Although various analytical conditions have been proposed in the literature to guarantee the closed-loop stability of MPC systems~\cite{MayneRawlingsRaoEtAl2000}, they are generally not applicable to the system considered in this paper, which involves the market clearing process~(\ref{eq:social}). In fact, even the numerical verification of the closed-loop stability of~(\ref{eq:GCL}) with quadratic $v_{i}$ and $c$ can be very challenging~\cite{Primbs2001,HeathWills2005,HeathLi2008}. These works usually assume knowledge of the equilibrium, which is typically the origin. In addition, the numerical conditions proposed there do not scale with the number of DERs in the market. In the next subsection, we will work with quadratic utility and cost functions and obtain simple analytical conditions that guarantee the closed-loop system stability and hence the market stability. These conditions provide valuable insight into the market dynamics as well as guidance on the design of the bidding functions. \subsection{Closed-loop DER Dynamics} We assume that the utility function $v_{i}$ is a general quadratic function of the power consumption $d_{i}$, \begin{equation} v_{i}(d_{i},x_{i})=-\frac{1}{2}q_{i}d_{i}^{2}+(r_{i}x_{i}+c_{i})d_{i},\,\forall i\in\mathcal{M},\label{eq:quad_utility} \end{equation} where $q_{i}>0$, $r_{i},\,c_{i}\in\mathbb{R}$ are user-specified parameters reflecting their preferences. 
Such quadratic utility functions have been widely used in coordinating demand-side electric loads via mean-field game approaches; see, for example,~\cite{GrammaticoPariseColombinoEtAl2016} and the references therein. As discussed in the previous section, the bidding function of the $i$th consumer is given by the optimal solution of~(\ref{eq:utility}). It can be easily verified that it is the projection of the optimizer of the unconstrained problem onto the constraint set, which is \begin{equation} d_{i}^{*}(\lambda;x_{i})=\text{Proj}_{(\mathcal{X}_{i}-a_{i}x_{i})\cap\mathcal{D}_{i}}\left[\frac{-\lambda+r_{i}x_{i}+c_{i}}{q_{i}}\right],\label{eq:bids} \end{equation} where the non-emptiness of $(\mathcal{X}_{i}-a_{i}x_{i})\cap\mathcal{D}_{i}$ is guaranteed by the controllability condition~(\ref{eq:ctrl}). A typical bidding function of the form~(\ref{eq:bids}) is depicted in Fig.~\ref{fig:BidsCurve}. It contains a linear region in-between the saturated regions. In the figure, the two threshold prices are denoted by $\lambda_{\text{th}}^{1}$ and $\lambda_{\text{th}}^{2}$. A similar bidding function has also been considered in~\cite{li2015}. Note that the dependence on the load's state models the time-varying power demand of the consumer. The cost function is assumed to be \begin{equation} c(s)=\frac{1}{2}\beta_{1}s^{2}+\beta_{2}s,\label{eq:quadcost} \end{equation} where $\beta_{i}>0$, $i=1,2$. Then the market price, which is given by the marginal cost, is $\lambda=\beta_{1}s+\beta_{2}$. 
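The piecewise shape of~(\ref{eq:bids}), including its two threshold prices, can be evaluated numerically; the parameter values below are illustrative assumptions only.

```python
# Sketch of the bidding function (eq:bids): the unconstrained optimizer
# (-lam + r*x + c)/q clipped to (X_i - a_i*x_i) ∩ D_i.
# All parameter values are illustrative.
a, q, r, c = 0.9, 2.0, -0.5, 5.0
x_lo, x_hi = 0.0, 10.0     # X_i
d_lo, d_hi = 0.0, 2.0      # D_i

def bid(lam, x):
    lo = max(x_lo - a * x, d_lo)
    hi = min(x_hi - a * x, d_hi)
    return min(max((-lam + r * x + c) / q, lo), hi)

x = 4.0
lo = max(x_lo - a * x, d_lo)            # = 0.0 here
hi = min(x_hi - a * x, d_hi)            # = 2.0 here
# Threshold prices delimiting the linear region (cf. Fig. 3): below lam_th1
# the bid saturates at hi, above lam_th2 it saturates at lo.
lam_th1 = r * x + c - q * hi
lam_th2 = r * x + c - q * lo
print(lam_th1, lam_th2)                        # -1.0 3.0
print(bid(-2.0, x), bid(1.0, x), bid(5.0, x))  # 2.0 1.0 0.0
```

The three sample prices hit the upper saturation, the linear region, and the lower saturation of the curve, matching the shape sketched in Fig.~\ref{fig:BidsCurve}.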
Using~(\ref{eq:quad_utility}) and~(\ref{eq:quadcost}), we rewrite the social welfare problem~(\ref{eq:social}) in vector form as follows, \begin{equation} \begin{array}{ll} \underset{d,s}{\mbox{maximize}} & -\frac{1}{2}d^{T}Qd+d^{T}(Rx+c)-\frac{1}{2}\beta_{1}s^{2}-\beta_{2}s\\ \mbox{subject to:} & x^{+}=Ax+d,\,x^{+}\in\mathcal{X},\,d\in\mathcal{D},\\ & {\bf 1}_{m}^{T}d=s, \end{array}\label{eq:quad_social} \end{equation} where $d,\,x,\,c\in\mathbb{R}^{m}$ and their $i$th components are $d_{i},\,x_{i},\,c_{i}$, respectively. The diagonal matrices $A,\,Q,\,R$ are generated by the vectors $a=\{a_{i}\}$, $q=\{q_{i}\}$, and $r=\{r_{i}\}$, respectively. The state and input constraints are defined by $\mathcal{X}=\Pi_{i=1}^{m}\mathcal{X}_{i}$, $\mathcal{D}=\Pi_{i=1}^{m}\mathcal{D}_{i}$. Without the inequality constraints, the optimal solution to~(\ref{eq:quad_social}), which we shall refer to as the unconstrained maximizer, can be easily obtained as \begin{equation} \hat{d}(x)=\tilde{Q}^{-1}(Rx+\tilde{c}),\label{eq:unconsd} \end{equation} where $\tilde{Q}=Q+\beta_{1}{\bf 1}_{m}{\bf 1}_{m}^{T}$ and $\tilde{c}=c-\beta_{2}{\bf 1}_{m}$. The constrained optimal solution can be conveniently expressed using $\hat{d}(x)$, as given in the following lemma. \begin{lem} The optimal solution to~(\ref{eq:quad_social}) is given by \begin{equation} d^{*}(x)=\text{\emph{Proj}}_{(\mathcal{X}-Ax)\cap\mathcal{D}}^{\tilde{Q}}\hat{d}(x).\label{eq:dproj} \end{equation} \end{lem} \begin{proof} It follows from standard arguments; see, for example,~\cite[Lemma 4]{GrammaticoPariseColombinoEtAl2016}. \end{proof} Note that the feedback policy~(\ref{eq:dproj}) is nonlinear due to the projection. Overall, the closed-loop system becomes \begin{equation} x^{+}=Ax+d^{*}(x).\label{eq:closedloop} \end{equation} In particular, it can be shown that~(\ref{eq:closedloop}) is a piecewise linear system~\cite{BemporadMorariDuaEtAl2002}. 
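Concretely, the weighted projection in~(\ref{eq:dproj}) is a small box-constrained quadratic program, so one closed-loop step of~(\ref{eq:closedloop}) can be computed with any convex QP solver. The sketch below uses projected gradient descent, which is an illustrative solver choice rather than the paper's method, and all numbers are made up for the example.

```python
import numpy as np

# Sketch of one closed-loop step (eq:closedloop): the weighted projection
# in (eq:dproj) is the box-constrained QP
#     min_d (d - d_hat)^T Qt (d - d_hat)   s.t.   lo <= d <= hi,
# solved here by projected gradient descent (illustrative solver choice).
def weighted_proj(d_hat, Qt, lo, hi, iters=500):
    step = 1.0 / np.linalg.eigvalsh(Qt)[-1]   # step size 1/lambda_max(Qt)
    d = np.clip(d_hat, lo, hi)
    for _ in range(iters):
        d = np.clip(d - step * Qt @ (d - d_hat), lo, hi)
    return d

m = 3
q = np.array([2.0, 3.0, 4.0])
beta1, beta2 = 0.5, 0.2
Qt = np.diag(q) + beta1 * np.ones((m, m))       # Qt = Q + beta1 * 1 1^T
a = np.array([0.9, 0.95, 1.0])
x = np.array([3.0, 5.0, 8.0])
r = np.array([-0.5, -0.4, -0.6])
c = np.array([8.0, 9.0, 6.0])

d_hat = np.linalg.solve(Qt, r * x + c - beta2)  # unconstrained maximizer (eq:unconsd)
lo = np.maximum(0.0 - a * x, 0.0)               # (X - A x) with X_i = [0, 10],
hi = np.minimum(10.0 - a * x, 2.0)              # intersected with D_i = [0, 2]
d_star = weighted_proj(d_hat, Qt, lo, hi)       # d*(x) as in (eq:dproj)
x_next = a * x + d_star                         # one step of (eq:closedloop)
print(d_star, x_next)
```

In this instance the first DER saturates at its upper input limit and the third at its lower limit, while the second settles in the interior, illustrating how the projection activates different constraints for different DERs.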
We first observe that the existence of an equilibrium of~(\ref{eq:closedloop}) is guaranteed simply by the controllability condition~(\ref{eq:ctrl}). \begin{prop} \label{prop:uniquess}The closed-loop system~(\ref{eq:closedloop}) has an equilibrium $x^{*}\in\mathcal{X}$ if the controllability condition~(\ref{eq:ctrl}) holds. \end{prop} \begin{proof} The equilibria of $x^{+}=Ax+d$ satisfy $(I-A)x=d$. Set $x^{+}=x$ in~(\ref{eq:quad_social}), and replace $s$ with ${\bf 1}_{m}^{T}d={\bf 1}_{m}^{T}(I-A)x$. It can be seen that the resulting convex quadratic program (possibly degenerate) has a solution if the constraint set is non-empty, and the latter is guaranteed exactly by the controllability condition~(\ref{eq:ctrl}). \end{proof} \begin{rem} Note that we can further impose conditions to guarantee the uniqueness of the solution to~(\ref{eq:quad_social}) based on the KKT conditions. This is particularly relevant for the degenerate case where the quadratic objective is not strongly convex. However, we will not complicate the discussion here, since our stability results in the sequel will guarantee the uniqueness of the equilibrium. \end{rem} We will investigate the market stability by analyzing the stability of the discrete nonlinear system~(\ref{eq:closedloop}). Specifically, we will find conditions on the utility and the cost functions such that~(\ref{eq:closedloop}) is exponentially stable. In particular, this will guarantee that no limit cycle exists. This is desired since a limit cycle corresponds to oscillation of the power consumption and high volatility of the market clearing price. Denote by $x^{+}=T(x)$ the discrete time nonlinear dynamics~(\ref{eq:closedloop}). If we can show that $T$ is a contraction mapping on $\mathcal{X}$, then there exists a unique equilibrium in $\mathcal{X}$ which is exponentially stable. 
In the case of $a_{i}\in(0,1)$, global exponential stability can be concluded in view of the policy~(\ref{eq:outside}). Next, we will first consider the single DER case, and then extend the results to the multiple DER case. \subsection{Single DER} This scenario arises frequently when an aggregate DER model is obtained by the coordinator to facilitate power planning; see, for example,~\cite{RoozbehaniDahlehMitter2012,ZhaoHaoZhang2016,ZhaoZhangHaoEtAl2017,Hao2015}. Such a model represents the aggregate power flexibility of the DER population. It has the same form as~(\ref{eq:sys}) but with the aggregate power consumption as its input, and has much larger state and input constraint sets. For clarity, we will denote by (scalar) $a$ the system matrix $A$ in this section. First, we have the following characterization of the mapping $T$. \begin{lem} \label{lem:ddoubleP}Consider the market consisting of one aggregate DER model with $m=1$. Then the nonlinear mapping $T$ of the closed-loop system dynamics~(\ref{eq:closedloop}) is given by \begin{equation} \text{\emph{Proj}}_{\mathcal{X}}\left[ax+\emph{Proj}_{\mathcal{D}}\left[\hat{d}(x)\right]\right],\label{eq:newExp} \end{equation} where $\hat{d}(x)$ is defined in~(\ref{eq:unconsd}). \end{lem} \begin{proof} First note that for scalar $\tilde{Q}=q+\beta_{1}>0$, it is easy to verify that $\text{Proj}_{(\mathcal{X}-ax)\cap\mathcal{D}}^{\tilde{Q}}$ is equivalent to $\text{Proj}_{(\mathcal{X}-ax)\cap\mathcal{D}}$, i.e., the standard projection in $\mathbb{R}$. Then we need to show that for arbitrary $d$, the following holds. \begin{equation} ax+\text{Proj}_{(\mathcal{X}-ax)\cap\mathcal{D}}d=\text{Proj}_{\mathcal{X}}\left[ax+\text{Proj}_{\mathcal{D}}d\right].\label{eq:doubleProj} \end{equation} If $ax+\text{Proj}_{\mathcal{D}}d\in\mathcal{X}$, then $\text{Proj}_{\mathcal{D}}d\in\mathcal{X}-ax$, and $\text{Proj}_{\mathcal{D}}d=\text{Proj}_{(\mathcal{X}-ax)\cap\mathcal{D}}d$. Hence, the equality holds. 
Otherwise, if $ax+\text{Proj}_{\mathcal{D}}d\notin\mathcal{X}$, we assume without loss of generality that $ax+\text{Proj}_{\mathcal{D}}d>\bar{x}$. Then the right hand side of~(\ref{eq:doubleProj}) is $\bar{x}$. Since $\text{Proj}_{\mathcal{D}}d>\bar{x}-ax$, we have $\text{Proj}_{(\mathcal{X}-ax)\cap\mathcal{D}}d=\bar{x}-ax,$ and therefore the left hand side is also $\bar{x}$. This proves~(\ref{eq:doubleProj}) and hence~(\ref{eq:newExp}). \end{proof} With the help of Lemma~\ref{lem:ddoubleP}, we have the following theorem for the market stability. \begin{thm} \label{thm:OnePlayer}Consider the power market consisting of only one DER $(m=1)$ and suppose $a\in(0,1]$, $q,\,\beta_{1}>0$, $r\in\mathbb{R}$. Then the closed-loop system~(\ref{eq:closedloop}) is exponentially stable in $\mathcal{X}$ if the following condition holds \begin{equation} \left|a+\frac{r}{q+\beta_{1}}\right|<1.\label{eq:onecon} \end{equation} \end{thm} \begin{proof} By Lemma~\ref{lem:ddoubleP}, the operator is $T(x)=\text{Proj}_{\mathcal{X}}\left[ax+\text{Proj}_{\mathcal{D}}\hat{d}(x)\right]$, where $\hat{d}(x)=(rx+\tilde{c})/(q+\beta_{1})$. Without loss of generality, assume $x>y$. We then discuss the two cases $r\geq0$ and $r<0$. Case 1: $r\geq0$. Then $x>y$ implies $\hat{d}(x)\geq\hat{d}(y)$ and it follows that $\text{Proj}_{\mathcal{D}}\hat{d}(x)\geq\text{Proj}_{\mathcal{D}}\hat{d}(y)$, $ax+\text{Proj}_{\mathcal{D}}\hat{d}(x)\geq ay+\text{Proj}_{\mathcal{D}}\hat{d}(y)$, and $T(x)\geq T(y)$. Therefore, \begin{align*} \left|T(x)-T(y)\right| & =T(x)-T(y)\\ & \leq a(x-y)+\text{Proj}_{\mathcal{D}}\hat{d}(x)-\text{Proj}_{\mathcal{D}}\hat{d}(y)\\ & \leq a(x-y)+\hat{d}(x)-\hat{d}(y)\\ & =\left(a+\frac{r}{q+\beta_{1}}\right)(x-y)\\ & <x-y, \end{align*} where the first and second inequalities are by the non-expansive property of the projection operator~\cite[Proposition 4.8]{BauschkeCombettes2011}, and the last inequality is by condition~(\ref{eq:onecon}). Case 2: $r<0$. 
Then $x>y$ implies $\hat{d}(x)<\hat{d}(y)$ and it follows that $\text{Proj}_{\mathcal{D}}\hat{d}(x)\leq\text{Proj}_{\mathcal{D}}\hat{d}(y)$. Therefore, \begin{align*} \left|T(x)-T(y)\right| & \leq\left|a(x-y)+\text{Proj}_{\mathcal{D}}\hat{d}(x)-\text{Proj}_{\mathcal{D}}\hat{d}(y)\right|\\ & \leq\max\Bigl\{\text{Proj}_{\mathcal{D}}\hat{d}(y)-\text{Proj}_{\mathcal{D}}\hat{d}(x)+a(y-x),\\ & \quad a(x-y)\Bigr\}\\ & \leq\max\left\{ \hat{d}(y)-\hat{d}(x)+a(y-x),\,a(x-y)\right\} \\ & =\max\left\{ -a-\frac{r}{q+\beta_{1}},\,a\right\} (x-y)\\ & <x-y, \end{align*} where the first inequality is by the non-expansiveness of $\text{Proj}_{\mathcal{X}}$, the third by that of $\text{Proj}_{\mathcal{D}}$, and the last by condition~(\ref{eq:onecon}). Combining these two cases, we see that $T$ is a contraction mapping on $\mathcal{X}$. Hence, there exists a unique equilibrium in $\mathcal{X}$ which is exponentially stable, and the system is exponentially stable in $\mathcal{X}$ under condition~(\ref{eq:onecon}). \end{proof} \begin{rem} The condition~(\ref{eq:onecon}) is exactly the necessary and sufficient stability condition for the unconstrained closed-loop system, \[ x^{+}=Ax+\hat{d}(x). \] However, it is not necessary for the constrained closed-loop system~(\ref{eq:closedloop}). It is easy to give examples for which the system does not satisfy~(\ref{eq:onecon}) but is still globally exponentially stable (with the equilibrium at a boundary point). Also note that for the energy storage DER with $a=1$ (i.e., no energy dissipation), the coupling coefficient $r$ must be negative. \end{rem} \subsection{Multiple DERs} For the case of multiple DERs, the situation is much more complicated than in the single DER case. The difficulty mainly stems from the weighted projection $\text{Proj}_{(\mathcal{X}-Ax)\cap\mathcal{D}}^{\tilde{Q}}\hat{d}(x)$ (see~(\ref{eq:dproj})). 
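To see this difficulty concretely, note that with a non-diagonal $\tilde{Q}$ the weighted projection onto a box is not simply componentwise clipping. A small numeric check (all values illustrative, using a generic projected gradient solver rather than any method from the paper):

```python
import numpy as np

# With a non-diagonal Qt, the weighted projection Proj^{Qt} onto a box
# differs from clipping each component independently: the components are
# coupled through the off-diagonal entries.  Illustrative numbers only.
def weighted_proj(d_hat, Qt, lo, hi, iters=2000):
    step = 1.0 / np.linalg.eigvalsh(Qt)[-1]   # step size 1/lambda_max(Qt)
    d = np.clip(d_hat, lo, hi)
    for _ in range(iters):
        d = np.clip(d - step * Qt @ (d - d_hat), lo, hi)
    return d

q = np.array([1.0, 1.0])
beta1 = 2.0
Qt = np.diag(q) + beta1 * np.ones((2, 2))   # strong off-diagonal coupling
d_hat = np.array([3.0, 0.5])
lo, hi = np.zeros(2), np.ones(2)

coupled = weighted_proj(d_hat, Qt, lo, hi)  # Proj^{Qt} onto [0,1]^2
clipped = np.clip(d_hat, lo, hi)            # naive componentwise clipping
print(coupled, clipped)   # [1. 1.] vs [1. 0.5]: the second component differs
```

Saturating the first component drags the second component to its bound as well, which is exactly the cross-DER interaction through the market clearing that the single-DER characterization~(\ref{eq:newExp}) avoids.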
Even though the projection set $(\mathcal{X}-Ax)\cap\mathcal{D}$ is decoupled in $\mathbb{R}^{m}$, the projections of the components of $\hat{d}(x)$ are coupled since $\tilde{Q}\succ0$ is not diagonal. This reflects the fact that the optimal power consumption of each DER depends on those of the other DERs. Clearly, this is a result of the interaction between DERs through the market coordination. Due to this coupling, we do not have a characterization of $T$ as in~(\ref{eq:newExp}). However, when the number of DERs is large, they become weakly coupled. In fact, from the unconstrained maximizer $\hat{d}(x)$ we have by the Sherman\textendash Morrison formula~\cite{Meyer2000} that \begin{align*} \hat{d}(x) & =\tilde{Q}^{-1}(Rx+\tilde{c})\\ & =\left(I_{m}-\frac{\beta_{1}Q^{-1}{\bf 1}_{m}{\bf 1}_{m}^{T}}{1+\beta_{1}\sum_{i=1}^{m}q_{i}^{-1}}\right)Q^{-1}(Rx+\tilde{c})\\ & =Q^{-1}Rx-\frac{\beta_{1}Q^{-1}{\bf 1}_{m}}{1+\beta_{1}w_{1}}x_{a}+c_{o}, \end{align*} where $x_{a}=\sum_{i\in\mathcal{M}}\frac{r_{i}}{q_{i}}x_{i}$ is the aggregate state, $w_{1}=\sum_{i=1}^{m}q_{i}^{-1}$, and $c_{o}$ is a constant. Moreover, for the $j$th DER, the unconstrained power can be expressed as \begin{align*} \hat{d}_{j}(x) & =\frac{r_{j}}{q_{j}}x_{j}-\frac{\beta_{1}}{1+\beta_{1}w_{1}}\frac{x_{a}}{q_{j}}+c_{j}\\ & =\frac{1}{q_{j}}\left(r_{j}x_{j}-\frac{\beta_{1}}{1+\beta_{1}w_{1}}\sum_{i\in\mathcal{M}}\frac{r_{i}}{q_{i}}x_{i}\right)+c_{j}, \end{align*} where $c_{j}$ is the $j$th component of $c_{o}$. If $q_{i}^{-1}$ is not diminishing, then $w_{1}\rightarrow\infty$ as $m\rightarrow\infty$.
Hence, the $j$th DER is affected by any other DER $i\neq j$ only through the diminishing factor $\frac{\beta_{1}}{1+\beta_{1}w_{1}}\frac{r_{i}}{q_{i}}.$ This motivates us to approximate $\hat{d}(x)$, which couples all the DERs' states, by a decoupled one, given by \begin{equation} \tilde{d}(x):=\Lambda^{-1}Rx+\tilde{Q}^{-1}\tilde{c},\label{eq:dtilde} \end{equation} where $\Lambda$ is diagonal. The diagonal matrix $\Lambda$ will be chosen such that $\epsilon:=\left\Vert \tilde{Q}^{-1}-\Lambda^{-1}\right\Vert _{2}$ is sufficiently small. The following lemma gives an approximation matrix $\Lambda$ and derives its approximation error bound. \begin{lem} \label{lem:errorBound}Choose the approximation matrix $\Lambda^{-1}$ as \begin{equation} \Lambda^{-1}=Q^{-1}-\frac{1}{2}\frac{\beta_{1}w_{2}}{1+\beta_{1}w_{1}}I_{m},\label{eq:Lambda} \end{equation} where $w_{j}=\sum_{i=1}^{m}q_{i}^{-j}$, for $j=1,2$. Then the approximation error satisfies \[ \epsilon\leq\frac{1}{2}\frac{\beta_{1}w_{2}}{1+\beta_{1}w_{1}}. \] \end{lem} \begin{proof} Direct calculation yields \[ \left\Vert \tilde{Q}^{-1}-\Lambda^{-1}\right\Vert _{2}=\frac{\beta_{1}}{1+\beta_{1}w_{1}}\left\Vert \frac{1}{2}w_{2}I_{m}-q^{-1}q^{-T}\right\Vert _{2}, \] where we denote by $q^{-1}$ the column vector $[q_{1}^{-1},...,q_{m}^{-1}]^{T}$ and by $q^{-T}$ its transpose. Notice that since $q^{-1}q^{-T}$ is a rank-one matrix, it has eigenvalues $\{0,\,w_{2}\}$. It follows immediately that the spectral radius of the symmetric matrix $\frac{1}{2}w_{2}I_{m}-q^{-1}q^{-T}$ is $\frac{1}{2}w_{2}$, and hence $\epsilon\leq\frac{1}{2}\frac{\beta_{1}w_{2}}{1+\beta_{1}w_{1}}$. This completes the proof. \end{proof} Using Lemma~\ref{lem:errorBound}, we can easily obtain an error bound in the limit of a large number of DERs. The following corollary also reveals that this error bound can be arbitrarily small if all $q_{i}$'s are sufficiently large. \begin{cor} Suppose $q_{i}^{-1}$ is not diminishing.
Then the approximation error $\epsilon$ is bounded by $\frac{1}{2}\max_{i\in\mathcal{M}}q_{i}^{-1}$. \end{cor} \begin{proof} If $q_{i}^{-1}$ is not diminishing, then $w_{1}\rightarrow\infty$ as $m\rightarrow\infty$. It follows from Lemma~\ref{lem:errorBound} that \begin{align*} \epsilon & \leq\frac{1}{2}\frac{\beta_{1}w_{2}}{1+\beta_{1}w_{1}}\leq\frac{1}{2}\frac{\max_{i\in\mathcal{M}}q_{i}^{-1}\beta_{1}w_{1}}{1+\beta_{1}w_{1}}, \end{align*} and the right hand side of the last inequality goes to $\frac{1}{2}\max_{i\in\mathcal{M}}q_{i}^{-1}$ as $m\rightarrow\infty$. This completes the proof. \end{proof} It is worth mentioning that in practice, the coefficient $\beta_{1}$ is typically very small compared to $q_{i}$, since it is the quadratic coefficient of the total supply cost. Hence, the constant $1$ will dominate the denominator and the error $\epsilon$ will be very small even for a finite number of DERs. Now an error bound for using $\tilde{d}(x)$ to approximate $\hat{d}(x)$ is easily obtained as $\left\Vert \tilde{d}(x)-\hat{d}(x)\right\Vert _{2}\leq\epsilon\left\Vert x\right\Vert _{2}\max_{i\in\mathcal{M}}|r_{i}|$ by applying the matrix induced norm inequality. Note that since $\mathcal{X}$ is bounded, this error bound is diminishing as $\epsilon\rightarrow0$ if the $r_{i}$'s are bounded. Based on the above results, next we will approximate the constrained maximizer $d^{*}(x)$ in~(\ref{eq:dproj}) using $\tilde{d}(x)$ in~(\ref{eq:dtilde}). To avoid cluttered notation, let $\Omega(x)$ denote the projection set $(\mathcal{X}-Ax)\cap\mathcal{D}$. Intuitively, if $\epsilon$ is small, we may approximate $d^{*}(x)$ by $\text{Proj}_{\Omega(x)}^{\Lambda}\tilde{d}(x),$ that is, the projection of $\tilde{d}(x)$ onto $\Omega(x)$ in $\mathcal{H}_{\Lambda}$, where we recall that $\tilde{d}(x)$ is the approximation of $\hat{d}(x)$. This approximation can be justified from the KKT conditions corresponding to the projection $d^{*}(x)$.
Note that by definition, \begin{equation} d^{*}(x)=\arg\min_{z\in\Omega(x)}\left\Vert z-\hat{d}(x)\right\Vert _{\tilde{Q}}.\label{eq:defdstar} \end{equation} The KKT conditions for the above minimization problem are \[ \begin{cases} z=\hat{d}(x)-\tilde{Q}^{-1}L^{T}\mu,\\ Lz=b(x), \end{cases} \] where $Lz=b(x)$ collects the active inequality constraints in $z\in\Omega(x)$, and $\mu$ denotes the associated Lagrange multipliers. We see that the optimal projection of $\hat{d}(x)$ is an affine function of $\mu$ which depends explicitly on $\tilde{Q}^{-1}$. Hence, we will use the following quantity to approximate $d^{*}(x)$, \begin{equation} \tilde{d}^{*}:=\text{Proj}_{\Omega(x)}^{\Lambda}\tilde{d}(x)=\arg\min_{z\in\Omega(x)}\left\Vert z-\tilde{d}(x)\right\Vert _{\Lambda}.\label{eq:FinalD} \end{equation} It is worth mentioning that explicit expressions for the optimal primal and dual solutions $z$ and $\mu$ of~(\ref{eq:defdstar}) can be obtained, see for example~\cite{BemporadMorariDuaEtAl2002}. A more thorough analysis based on these explicit solutions will be our future work. Now based on the approximation~(\ref{eq:FinalD}), we will prove the following sufficient condition that guarantees market stability. The proof resembles that of the single DER case. The main point is that the approximation $\tilde{d}^{*}$ is fully decoupled among the DERs. \begin{thm} \label{thm:Nplayer}Suppose $a_{i}\in(0,1],\,q_{i}>0,\,r_{i}\in\mathbb{R}$. The closed-loop system~(\ref{eq:closedloop}) is exponentially stable if for all $i\in\mathcal{M}$, \begin{equation} \left|a_{i}+\phi_{i}r_{i}\right|<1,\label{eq:Ncon} \end{equation} where $\phi_{i}$ is the $i$th diagonal element of $\Lambda^{-1}$ in~(\ref{eq:Lambda}), given by \[ \phi_{i}=q_{i}^{-1}-\frac{1}{2}\frac{\beta_{1}w_{2}}{1+\beta_{1}w_{1}}, \] and $w_{j}:=\sum_{i=1}^{m}q_{i}^{-j}$, for $j=1,2$.
\end{thm} \begin{proof} We will prove that the mapping $T(x):=Ax+\text{Proj}_{\Omega(x)}^{\Lambda}\tilde{d}(x)$ is contractive. First note that since $\Lambda$ is diagonal and $\Omega(x)$ is a hyper-rectangle in $\mathbb{R}^{m}$, it can be easily shown from the definition of the projection that $\text{Proj}_{\Omega(x)}^{\Lambda}\tilde{d}(x)=\text{Proj}_{\Omega(x)}\tilde{d}(x)$. It then follows from the same argument as Lemma~\ref{lem:ddoubleP}, applied component-wise, that \begin{equation} Ax+\text{Proj}_{\Omega(x)}\tilde{d}(x)=\text{Proj}_{\mathcal{X}}\left[Ax+\text{Proj}_{\mathcal{D}}\tilde{d}(x)\right].\label{eq:imm} \end{equation} Using~(\ref{eq:imm}) and the non-expansive property of the projection operation~\cite[Proposition 4.8]{BauschkeCombettes2011}, we have for arbitrary $x,\,y\in\mathcal{X}$, $x\neq y$, \begin{align*} \left\Vert T(x)-T(y)\right\Vert _{2} & \leq\left\Vert A(x-y)+\text{Proj}_{\mathcal{D}}\tilde{d}(x)-\text{Proj}_{\mathcal{D}}\tilde{d}(y)\right\Vert _{2}. \end{align*} Now consider the $i$th element of the vector inside the norm on the right hand side of the above inequality. Similar to the proof of Theorem~\ref{thm:OnePlayer}, without loss of generality, we assume $x_{i}>y_{i}$. If $r_{i}\geq0$, we have by the non-expansiveness of the projection that \begin{multline} 0\leq a_{i}(x_{i}-y_{i})+\text{Proj}_{\mathcal{D}_{i}}\tilde{d}_{i}(x)-\text{Proj}_{\mathcal{D}_{i}}\tilde{d}_{i}(y)\\ \leq(a_{i}+\phi_{i}r_{i})(x_{i}-y_{i}),\label{eq:iterm1} \end{multline} where $\tilde{d}_{i}(x)$ is the $i$th element of $\tilde{d}(x)$ in~(\ref{eq:dtilde}).
If $r_{i}<0$, then \begin{multline} \left|a_{i}(x_{i}-y_{i})+\text{Proj}_{\mathcal{D}_{i}}\tilde{d}_{i}(x)-\text{Proj}_{\mathcal{D}_{i}}\tilde{d}_{i}(y)\right|\\ \leq\max\Bigl\{\text{Proj}_{\mathcal{D}_{i}}\tilde{d}_{i}(y)-\text{Proj}_{\mathcal{D}_{i}}\tilde{d}_{i}(x)+a_{i}(y_{i}-x_{i}),\,a_{i}(x_{i}-y_{i})\Bigr\}\\ \leq\max\left\{ \tilde{d}_{i}(y)-\tilde{d}_{i}(x)+a_{i}(y_{i}-x_{i}),\,a_{i}(x_{i}-y_{i})\right\} \\ =\max\left\{ -a_{i}-\phi_{i}r_{i},\,a_{i}\right\} (x_{i}-y_{i}).\label{eq:iterm2} \end{multline} Combining $(\ref{eq:iterm1})$ and (\ref{eq:iterm2}), we have \begin{align*} \left\Vert T(x)-T(y)\right\Vert _{2} & \leq\left\Vert \Xi(x-y)\right\Vert _{2}, \end{align*} where $\Xi$ is a diagonal matrix whose $i$th diagonal element is given by \[ \begin{cases} a_{i}+\phi_{i}r_{i}, & \text{if \ensuremath{r_{i}\geq0},}\\ \max\left\{ -a_{i}-\phi_{i}r_{i},\,a_{i}\right\} , & \text{if }\ensuremath{r_{i}<0.} \end{cases} \] Then by assumption~(\ref{eq:Ncon}), \[ \left\Vert T(x)-T(y)\right\Vert _{2}\leq\left\Vert \Xi\right\Vert _{2}\left\Vert x-y\right\Vert _{2}<\left\Vert x-y\right\Vert _{2}, \] which shows that $T$ is a contraction mapping on $\mathcal{X}$. Hence, the closed-loop system is exponentially stable. This completes the proof. \end{proof} We see from the above condition that if $q_{i}^{-1}$ or $|r_{i}|$ is sufficiently small, the stability of the closed-loop system~(\ref{eq:closedloop}) is always guaranteed. A small $q_{i}^{-1}$ corresponds to a bidding function with a relatively shallow slope.
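The quantities appearing above are simple to verify numerically. The following sketch (Python) checks the Sherman\textendash Morrison inverse of $\tilde{Q}=Q+\beta_{1}{\bf 1}_{m}{\bf 1}_{m}^{T}$, the error bound of Lemma~\ref{lem:errorBound}, and the certificate~(\ref{eq:Ncon}); the parameter values are randomly drawn assumptions for illustration, not taken from the analysis above.

```python
import numpy as np

rng = np.random.default_rng(0)
m = 100                                   # number of DERs (assumed)
q = rng.uniform(0.5, 2.0, size=m)         # bidding slopes q_i > 0 (assumed)
beta1 = 0.01                              # quadratic supply-cost coefficient (assumed)

w1, w2 = np.sum(1.0/q), np.sum(1.0/q**2)

# Sherman-Morrison inverse of Q-tilde = Q + beta1 * 1 1^T
Qt = np.diag(q) + beta1*np.ones((m, m))
Qt_inv = np.diag(1.0/q) - beta1*np.outer(1.0/q, 1.0/q)/(1.0 + beta1*w1)
print(np.allclose(Qt_inv, np.linalg.inv(Qt)))

# diagonal approximation Lambda^{-1} from (eq:Lambda) and the Lemma's bound
Lam_inv = np.diag(1.0/q - 0.5*beta1*w2/(1.0 + beta1*w1))
eps = np.linalg.norm(Qt_inv - Lam_inv, 2)
bound = 0.5*beta1*w2/(1.0 + beta1*w1)
print(eps <= bound + 1e-9)

# stability certificate (eq:Ncon): |a_i + phi_i r_i| < 1 for every DER
a = rng.uniform(0.9, 0.99, size=m)        # a_i in (0, 1]
r = rng.uniform(-0.02, 0.0, size=m)       # small negative coupling (assumed)
phi = 1.0/q - 0.5*beta1*w2/(1.0 + beta1*w1)
print(np.all(np.abs(a + phi*r) < 1.0))
```

For this parameter draw the bound of Lemma~\ref{lem:errorBound} is in fact attained with equality, since the spectral radius computation in its proof is exact.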
\begin{center} \begin{table} \caption{\label{tab:SimPara}Parameters for Unstable Market Dynamics} \centering{ \begin{tabular}{ccc|ccc} \hline \hline \multirow{2}{*}{Param.} & \multicolumn{2}{c|}{Value} & \multirow{2}{*}{Param.} & \multicolumn{2}{c}{Value}\tabularnewline \cline{2-3} \cline{5-6} & $m=1$ & $m=100$ & & $m=1$ & $m=100$\tabularnewline \hline $a_{i}$ & $0.95$ & U$[0.95,\,0.9]$ & $x_{i}(0)$ & U$[\underline{x}_{i},\,\bar{x}_{i}]$ & U$[\underline{x}_{i},\,\bar{x}_{i}]$\tabularnewline $x_{i}^{r}$ & - & U$[350,\,500]$ & $q_{i}$ & $0.005$ & $0.005$\tabularnewline $\underline{x}_{i}$ & $2500$ & $x_{i}^{r}-200$ & $r_{i}$ & $-0.1a_{i}$ & $-2a_{i}$\tabularnewline $\bar{x}_{i}$ & 7500 & $x_{i}^{r}+200$ & $c_{i}$ & 500 & $2x_{i}^{r}$\tabularnewline $\underline{d}_{i}$ & $0$ & $0$ & $\beta_{1}$ & $0.04$ & $0.008$\tabularnewline $\bar{d}_{i}$ & $500$ & U$[100,150]$ & $\beta_{2}$ & $20$ & $20$\tabularnewline \hline \hline \end{tabular} \end{table} \par\end{center} \begin{center} \begin{figure*}[!tp] \begin{centering} \subfloat[\label{subfig:Unstable1DPrice}Unstable price response]{\centering{}\includegraphics[bb=30bp 0bp 432bp 176bp,clip,width=0.45\linewidth]{Unstable_1D_Price}}\hfill{}\subfloat[\label{subfig:Unstable1DPower}Unstable aggregate power response]{\centering{}\includegraphics[bb=24bp 0bp 432bp 176bp,clip,width=0.46\linewidth]{Unstable_1D_Power}} \par\end{centering} \subfloat[\label{subfig:Stable1DPrice}Stable price response]{\centering{}\includegraphics[bb=28bp 0bp 432bp 176bp,clip,width=0.45\linewidth]{Stable_1D_Price}}\hfill{}\subfloat[\label{subfig:Stable1DPower}Stable aggregate power response]{\centering{}\includegraphics[bb=25bp 0bp 432bp 176bp,clip,width=0.45\linewidth]{Stable_1D_Power}} \caption{\label{fig:SingleDER}Single DER market under base price change} \begin{centering}
\subfloat[\label{subfig:UnStableNDPrice}Unstable price response]{\begin{centering} \includegraphics[bb=30bp 0bp 432bp 176bp,clip,width=0.45\linewidth]{Unstable_nD_Price} \par\end{centering} \centering{}}\hfill{}\subfloat[\label{subfig:UnStableNDPower}Unstable aggregate power response]{\centering{}\includegraphics[bb=35bp 0bp 432bp 176bp,clip,width=0.45\linewidth]{Unstable_nD_Power}} \par\end{centering} \subfloat[\label{subfig:StableNDPrice}Stable price response]{\centering{}\includegraphics[bb=30bp 0bp 432bp 176bp,clip,width=0.45\linewidth]{Stable_nD_Price}}\hfill{}\subfloat[\label{subfig:StableNDPower}Stable aggregate power response]{\centering{}\includegraphics[bb=27bp 0bp 432bp 176bp,clip,width=0.45\linewidth]{Stable_nD_Power}} \caption{\label{fig:MultiDER}Multi-DER market under base price change} \end{figure*} \par\end{center} \section{Numerical Examples} In this section, we present some examples to demonstrate the application of the derived stability conditions. We consider a scenario where the base price changes over time. Note that the base price corresponds to the constant term $\beta_{2}$ in the marginal cost $\lambda=\beta_{1}s+\beta_{2},$ which can be influenced by the external wholesale market price. \subsection{Single DER} We first consider a market consisting of only one aggregate DER. The columns below $m=1$ in Table~\ref{tab:SimPara} give the simulation parameters for the \emph{unstable} market. In the simulation, the base price $\beta_{2}$ is set to $\{20,40,10,30,20\}$ successively every 20 market periods. The price and the aggregate power evolution are depicted in Figs.~\ref{fig:SingleDER}\ref{subfig:Unstable1DPrice}-\ref{subfig:Unstable1DPower}, respectively. We can see that the resulting market clearing prices and aggregate power keep oscillating. Moreover, the base price as a coordinating signal fails to adjust the aggregate power consumption effectively.
Note that in this case, we have $q=0.005$ and the ratio $a+\frac{r}{q+\beta_{1}}=-1.1611$, which violates condition~(\ref{eq:onecon}). If we increase $q$ to $0.2$, then $a+\frac{r}{q+\beta_{1}}=0.5542$ and condition~(\ref{eq:onecon}) is satisfied. The resulting market clearing prices and aggregate power become stable in this case, as can be seen in Figs.~\ref{fig:SingleDER}\ref{subfig:Stable1DPrice}-\ref{subfig:Stable1DPower}. \subsection{Multiple DERs} We next assume that there are $100$ DERs modeled by~(\ref{eq:sys}). The parameters for the system dynamics, utility functions, and cost functions are generated randomly according to the distributions/values in the $m=100$ columns of Table~\ref{tab:SimPara}, where U$[h,l]$ denotes the uniform distribution within the range $[h,l]$, and the variable $x_{i}^{r}$ is used to generate several other variables in the table. The same base price change scenario as in the single DER case is simulated, where $\beta_{2}$ is set to $\{20,40,10,30,20\}$ successively every 20 market periods. We solve the social welfare problem~(\ref{eq:quad_social}) to obtain the market clearing prices and energy allocation at each market period. As shown in Figs.~\ref{fig:MultiDER}\ref{subfig:UnStableNDPrice}-\ref{subfig:UnStableNDPower}, both the market clearing prices and the aggregate power are highly volatile and oscillate with large amplitudes. In fact, condition~(\ref{eq:Ncon}) is violated for all DERs, with $a_{i}+\phi_{i}r_{i}\in(-190,-180)$. We next keep the other parameters the same, and choose $q_{i}=1.5$ in order to stabilize the market. With this new $q_{i}$, condition~(\ref{eq:Ncon}) is satisfied for all DERs, where the $a_{i}+\phi_{i}r_{i}$'s are around $-0.09$. The resulting market clearing price and aggregate demand evolution are illustrated in Figs.~\ref{fig:MultiDER}\ref{subfig:StableNDPrice}-\ref{subfig:StableNDPower}. We can see that the market converges to an equilibrium very quickly, within 10 market periods.
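The single-DER example above is easy to reproduce by iterating the closed-loop map $x^{+}=\text{Proj}_{\mathcal{X}}[ax+\text{Proj}_{\mathcal{D}}\hat{d}(x)]$ directly. The sketch below (Python) uses the Table~\ref{tab:SimPara} values for $a$, $r$, $\beta_{1}$ and the constraint sets, but the offsets $\tilde{c}$ are assumed constants chosen only to place the unconstrained equilibrium inside the state constraints, not values derived here.

```python
import numpy as np

def step(x, a, q, r, beta1, c_tilde, X=(2500.0, 7500.0), D=(0.0, 500.0)):
    # one market period of the single-DER closed loop:
    # x+ = Proj_X[ a x + Proj_D( r x/(q + beta1) + c_tilde ) ]
    d = np.clip(r*x/(q + beta1) + c_tilde, *D)
    return float(np.clip(a*x + d, *X))

a, r, beta1 = 0.95, -0.095, 0.04            # Table 1 values (r = -0.1 a)

rho_unstable = a + r/(0.005 + beta1)        # q = 0.005: magnitude 1.1611 > 1
rho_stable = a + r/(0.2 + beta1)            # q = 0.2:   magnitude 0.5542 < 1
print(rho_unstable, rho_stable)

# c_tilde values below are illustrative assumptions
x, traj = 4600.0, []
for _ in range(300):                        # q = 0.005: sustained oscillation
    x = step(x, a, 0.005, r, beta1, c_tilde=10000.0)
    traj.append(x)

y = 2500.0
for _ in range(300):                        # q = 0.2: geometric convergence
    y = step(y, a, 0.2, r, beta1, c_tilde=2229.0)
```

With these settings the first trajectory keeps bouncing inside the constraints while the second settles at its interior equilibrium, qualitatively mirroring the behavior in Fig.~\ref{fig:SingleDER}.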
It is worth mentioning that one could also change $r_{i}$ to stabilize the market, following the stability condition~(\ref{eq:Ncon}). Our extensive simulations show that condition~(\ref{eq:Ncon}) is an effective certificate of market stability under multiple heterogeneous DERs. \section{Conclusions and Future Work} This paper investigated electricity market stability under dynamic DER models. Each individual DER was modeled by a scalar linear system with both state and input constraints. Under the assumption of quadratic utility and cost functions, we characterized the competitive equilibrium of the market, and converted the market stability analysis into a stability problem for a discrete-time nonlinear system. The stability analysis of such systems is very challenging in general. We derived analytical conditions that guarantee stability via a contraction analysis approach. These conditions imply that market stability can be guaranteed by simply choosing a shallower bidding slope or a smaller coupling coefficient between the individual state and consumption. Numerical examples were provided to demonstrate the application of the stability results. Our future work includes further investigation of less conservative conditions for market stability and incorporating feeder capacity limits into the market model. \bibliographystyle{IEEEtranS}
\section{Introduction} The Asymptotic Giant Branch (AGB) is formed by stars with initial masses in the range between 0.8 and 8 M$_{\odot}$ in a late stage of their evolution. During most of the time H burning is the main source of energy for the AGB star but, occasionally, the inner He shell ignites in a ``thermal pulse'' and, eventually, the byproducts of He burning may reach the outer layers of the atmosphere (the so-called 3$^{rd}$ dredge-up). Thus, AGB stars, originally O-rich, can turn into C-rich AGB stars (C/O $>$ 1) after a few thermal pulses. Another important characteristic of AGB stars is the presence of neutron-rich elements (s-elements like Rb, Zr, Ba, Tc, Nd, etc.) in their atmospheres, which are the consequence of the slow neutron captures produced during the thermal pulsing phase. According to the most recent models two major neutron sources can operate in AGB stars depending on the stellar mass. The $^{13}$C($\alpha$,n)$^{16}$O reaction is the preferred neutron source for masses around 1$-$3 M$_{\odot}$ while for intermediate mass stars (M $>$ 3 M$_\odot$) the neutrons are thought to be mainly released by $^{22}$Ne($\alpha$,n)$^{25}$Mg (see e.g. Lattanzio \& Lugaro 2005 for a recent review). In the case of the more massive O-rich AGB stars (M $>$ 4 M$_\odot$), the convective envelope can penetrate the H-burning shell activating the so-called ``Hot bottom burning'' (HBB) process. HBB takes place when the temperature at the base of the convective envelope is high enough (T $\geq$ 2$\times$10$^{7}$ K) and $^{12}$C can be converted into $^{13}$C and $^{14}$N through the CN cycle (Sackmann \& Boothroyd 1992). HBB models (e.g. Mazzitelli, D'Antona \& Ventura 1999, hereafter MDV99) also predict the production of $^{7}$Li by the chain $^{3}$He($\alpha$,$\gamma$)$^{7}$Be (e$^{-}$,$\nu$)$^{7}$Li, through the so-called ``$^{7}$Be transport mechanism'' (Cameron \& Fowler 1971).
One of the predictions of these models is that Li should be detectable, at least for some time, on the stellar surface. The HBB activation in massive O-rich AGB stars is supported by studies of AGB stars in the Magellanic Clouds (hereafter, MCs) (e.g. Plez, Smith \& Lambert 1993). The detection of strong Li overabundances together with strong s-element enhancement in these massive (and luminous) AGB stars is the signature that they are indeed HBB stars which have undergone a series of thermal pulses and dredge-up episodes in their recent past. In our own Galaxy, only a handful of Li-rich stars have been found so far (e.g. Abia et al. 1993). Most of them are low mass C-rich AGB stars (e.g. Abia \& Isern 2000) and intermediate mass S- and SC-stars (e.g. Abia \& Wallerstein 1998) and not O-rich M-type stars. However, HBB is expected to be active in the most massive (and luminous) AGB stars (from $\sim$4 to 7 M$_{\odot}$ according to MDV99 HBB models), which might not be C-rich, but O-rich. The best candidates are the so-called {\it OH/IR stars}, luminous O-rich AGB stars extremely bright in the infrared, showing a characteristic double-peaked OH maser emission at 1612 MHz. These stars are also known to be very long period variables (LPVs), sometimes with periods of more than 500 days and large amplitudes of up to 2 bolometric magnitudes. However, they experience very strong mass loss rates (up to several times 10$^{-5}$ M$_{\odot}$yr$^{-1}$) and most of them are usually heavily obscured at this stage by thick circumstellar envelopes, making optical observations very difficult. Thus, no information exists yet on their Li abundances and/or possible s-process element enrichment. 
\section{Observations and results} A large sample (102) of long-period (300$-$1000 days), large amplitude variability (up to 8$-$10 magnitudes in the V band), late-type ($>$ M5) O-rich AGB stars displaying OH maser emission with a wide range of expansion velocities (from just a few km s$^{-1}$ to more than 20 km s$^{-1}$) was carefully selected. Stars were included in the sample if they satisfied at least one of the above criteria, and ideally as many of them as possible, which guarantees that they are actually massive stars. Consistently, stars in the sample were mainly members of the galactic disk population and displayed strong IR excesses detected by IRAS. High-resolution optical echelle spectra (R$\sim$40,000--50,000) were obtained for all stars in the sample during several observing periods in 1996--1997. The full log of the spectroscopic observations is shown in Table 1. The two-dimensional frames containing the echelle spectra were reduced to single-order one-dimensional spectra using the standard {\sc echelle} software package as implemented in IRAF\footnote{Image Reduction and Analysis Facility (IRAF) software is distributed by the National Optical Astronomy Observatories, which is operated by the Association of Universities for Research in Astronomy, Inc., under cooperative agreement with the National Science Foundation.}. Because of the very red colours of the sources observed, the S/N ratios achieved in the reduced spectra vary strongly from the blue to the red orders (10$-$20 at $\sim$6000 \AA~while $>$100 at $\sim$8000 \AA).
\begin{table} \footnotesize \begin{tabular}{cccc} \hline \tablehead{1}{c}{b}{Set-up\\} & \tablehead{1}{c}{b}{Date\\} & \tablehead{1}{c}{b}{$\Delta\lambda$\\(\AA/pix)} & \tablehead{1}{c}{b}{Range\\(\AA)} \\ \hline 4.2m WHT/UES &August 1996 &0.065 & 5300-9400\\ 4.2m WHT/UES &June 1997 &0.065 & 4700-10300\\ 4.2m WHT/UES &August 1997 &0.065 & 4700-10300\\ 3.6m ESO/CASPEC &February 1997 &0.085 & 6000-8200\\ \hline \end{tabular} \caption{Log of the spectroscopic observations} \label{tab:a} \end{table} We detected the presence of the Li I resonance line at 6708 \AA~in 25\% of the sources in the sample with a wide variety of strengths, while we did not find any signature of this line in 31\% of the stars. The remaining 44\% were heavily obscured by their thick circumstellar envelopes and were too red or not detected at optical wavelengths. In general, all stars (with or without lithium) show extremely red spectra with the flux level falling dramatically at wavelengths shorter than 6000 \AA. In addition, the spectra are severely dominated by strong molecular bands mainly due to titanium oxide (TiO), as a consequence of the very low temperature and the O-rich nature of these stars. Interestingly, the bandheads of ZrO seem to be absent in all spectra. This is shown in Figure 1, which displays the spectral region around the ZrO bandheads at 6474 and 6495 \AA. These ZrO bandheads (as well as those corresponding to other s-element oxides such as LaO or YO) are very strong in galactic S-stars and in massive MC AGB stars. \begin{figure} \centering \includegraphics[width=9cm,height=15cm,angle=-90]{dagh_proceeding_1.eps} \caption{High resolution optical spectra of sample stars displaying the lack of the ZrO absorption bands at 6474 and 6495 \AA~compared with two galactic S-stars (WY Cas and R And). WX Ser (IRAS 15255$+$1944) and V697 Her (IRAS 16260$+$3454) are Li-detected while S CrB (IRAS 15193$+$3132) and IRAS 16037$+$4218 are Li non-detected.
The absorption band at $\sim$6480 \AA~corresponds to the TiO molecule.} \end{figure} \section{Chemical analysis} Our analysis combines state-of-the-art line-blanketed model atmospheres and synthetic spectroscopy with extensive linelists. We have used the spherically symmetric, LTE, hydrostatic `MARCS' model atmospheres for cool stars and the `TURBOSPECTRUM' spectral synthesis code (Alvarez \& Plez 1998) to derive the Li and Zr (taken as representative of all other s-process elements) abundances in those stars for which an optical spectrum could be obtained. From an exhaustive study of the influence of the variations of the fundamental stellar parameters \textit{(e.g. T$_{eff}$, log g, M, z, $\xi$, C/O, etc.)} on the synthetic spectra and from our knowledge of the main characteristics of our stars, we obtained the most adequate initial set of parameters as well as their plausible range of variation, and we constructed a grid of MARCS model spectra. Thus, we first determined by $\chi^{2}$ minimisation which of the spectra from our grid of models provided the best fit to the observations in the 6670$-$6730 \AA~and the 6455$-$6499 \AA~spectral regions. The goal was to fit the overall shape of the spectra including the TiO bandheads, which are very sensitive to variations in the effective temperature. Then, the Li and Zr abundances were derived by fitting the Li I resonance line at $\sim$6708 \AA~and the ZrO molecular bands at 6474 \AA~and 6495 \AA, respectively. As an example, the best fit in the 6670$-$6730 \AA~spectral region around the Li I line is presented in Figure 2 for the star IRAS 11081$-$4203. \begin{figure} \centering \includegraphics[width=9cm,height=15cm,angle=-90]{dagh_proceeding_2.eps} \caption{Best model fit and observed spectrum in the region 6670$-$6730 \AA~for the star IRAS 11081$-$4203. The \textit{T$_{eff}$} and Li abundance derived from this spectrum were 3000 K and \textit{log $\varepsilon$(Li)}=1.3, respectively.
The parameters of the best model atmosphere fit are indicated in the top label.} \end{figure} \section{Li and Zr abundances} Our chemical abundance analysis shows that half of the stars show Li overabundances in the range \textit{log $\varepsilon$(Li)}$\sim$0.5$-$3.0\footnote{Li abundance in the scale 12$+$\textit{log N(Li)}. Note that the uncertainty in the Li abundances derived is estimated to be of the order of 0.4$-$0.6 dex. This error reflects mostly the sensitivity of the derived abundances to changes in the atmospheric parameters taken for the modelling.}. A very similar range of Li overabundances is found in the massive O-rich AGB stars studied in the MCs (e.g. Plez, Smith \& Lambert 1993). The Li overabundances observed are interpreted as a signature of the activation of the so-called ``Hot Bottom Burning'' (HBB), confirming that they are massive AGB stars (M $>$ 4 M$_\odot$ according to MDV99 HBB models). However, the non-detection of the ZrO molecular bands at 6474 \AA~and 6495 \AA~in any of the stars analysed imposed severe upper limits on the zirconium abundance ([Zr/Fe]$<$0.0$-$0.25 for \textit{T$_{eff}$} $\geq$ 3000 K and [Zr/Fe]$<$0.25$-$0.50 for \textit{T$_{eff}$} $<$ 3000 K). If the Zr enhancement is taken as representative of the s-process enrichment, our results indicate that the massive AGB stars in our Galaxy are not S-stars. \section{Comparison with the Magellanic Clouds} In contrast with their galactic analogues, the more massive AGB stars in the MCs are O-rich stars showing s-process element enhancement (S-stars). In addition, a higher proportion of them ($\sim$80\% compared to $\sim$50\% in our Galaxy) also shows Li enhancement. The Li enhancement indicates that they are also HBB stars, but why are these stars also enriched in s-process elements? The answer to this question must be related to the different metallicity of the stars in the MCs with respect to our Galaxy.
Actually, theoretical models predict a higher efficiency of the dredge-up in low metallicity atmospheres (e.g. Herwig 2004) with respect to those with solar metallicity (e.g. Lugaro et al. 2003). In addition, there is increasing observational evidence that lower metallicity environments are also less favourable to dust production, as suggested by the very small number of heavily obscured AGB stars in the MCs (e.g. Groenewegen et al. 2000). This is supported by the lower dust-to-gas ratios derived by van Loon (2000) in the few obscured MC AGB stars for which this analysis has been made. If mass loss is driven by radiation pressure on the dust grains, this might be less efficient with decreasing metallicity (Willson 2000). In that case, longer AGB lifetimes would be expected, which could increase the chance of nuclear-processed material reaching the stellar surface. The slow evolution predicted for AGB stars in the MCs as a consequence of the less efficient mass loss leaves time for more thermal pulses to occur during the AGB lifetime and, therefore, a more effective dredge-up of s-process elements to the surface can be expected before the envelope is completely gone at the end of the AGB. This would explain why even the more massive stars in the MCs show a strong s-process enrichment in contrast to their galactic counterparts. In our Galaxy the only AGB stars showing a similar overabundance in s-process elements seem to be the result of the evolution of low- to intermediate-mass stars (M $<$ 1.5$-$2.0 M$_\odot$), while no or very little s-process enhancement is observed in galactic AGB stars with higher main sequence masses. Finally, the lower critical mass needed to develop HBB (e.g. M $>$ 3 M$_\odot$ at the metallicity of the LMC, compared to the $\sim$4 M$_\odot$ limit in our Galaxy) would favour the simultaneous detection of s-process element and Li enrichment in a larger number of AGB stars in the MCs, as is actually observed.
In contrast to their MC counterparts, Li-rich massive AGB stars in our Galaxy would evolve so rapidly (because of the strong mass loss) that there is no time for a significant enhancement in s-process elements. In summary, our results suggest that the dramatically different abundance patterns found in AGB stars belonging to the MCs and to our Galaxy can be explained in terms of the different metallicity conditions under which these stars evolved. This is the first observational evidence that the chemical evolution during the AGB could be strongly modulated by metallicity. A complete description and discussion of these results as well as their evolutionary consequences will be given in Garc\'{\i}a-Hern\'andez et al. (2005, in preparation). \bibliographystyle{aipproc}
\section{Introduction} In quantum mechanics the time-dependent Schr\"{o}dinger equation is a second order linear partial differential equation \begin{eqnarray} i\hbar \frac{\partial}{\partial t}\Psi(x,t) = -\frac{\hbar^2}{2M}\nabla^2 \Psi(x,t) + U(x)\Psi(x,t), \label{SchrodingerEq} \end{eqnarray} \noindent where the single-particle wavefunction $\Psi(x,t)$ is a function of position $x$ and time $t$, $U(x)$ denotes the potential energy, and $M$ is the particle's mass. Though the Schr\"{o}dinger equation has a specific form in terms of continuous variables $x$ and $t$, there are different discrete analogues of the equation \cite{DiscreteQMBook, discreteSE,TarasovPLA}. For example, by using the standard central difference formula for the Laplace operator \begin{align} \nabla^2 \Psi \to \sum_{a=1}^3 \frac{1}{\ell^2}\{\Psi({x}+\hat{a}\ell,t)+ \Psi({x}-\hat{a}\ell,t)-2\Psi({x},t)\}, \end{align} \noindent a possible discretization of Eq.(\ref{SchrodingerEq}) is \begin{align} i\hbar \frac{\partial \Psi({x},t)}{\partial t}= -\frac{\hbar^2}{2M\ell^2}\sum_{a=1}^3 \{ \Psi({x}+\hat{a}\ell,t)+ \Psi({x}-\hat{a}\ell,t)-2\Psi({x},t)\}+U({x})\Psi({x},t). \label{discrete01} \end{align} \noindent Here $\Psi(x,t)$ is defined only on the discrete positions ${x}=\ell(\hat{1}n_1+\hat{2}n_2+\hat{3}n_3)$ ($n_1,n_2,n_3$ are integers), $\hat{a}$ ($a=1,2,3$) denote the unit base vectors of the Cartesian coordinate system, and $\ell$ is the lattice spacing between the nearest-neighboring spatial sites. Eq.(\ref{discrete01}) cannot be regarded as an exact discrete analogue of Eq.(\ref{SchrodingerEq}) since it has a different dispersion relation $\varepsilon(k)$ from that of the continuous Schr\"{o}dinger equation with zero potential $U(x)=0$ \cite{discreteSE}. In other words, the theory described by (\ref{discrete01}) is different from the theory described by the exact discrete analogue of the Schr\"{o}dinger equation, since the two theories have different Hamiltonian operators in discrete space.
They are equal to each other only in the zero-spacing limit $\ell \to 0$. An exact discretization of the Schr\"{o}dinger equation in one-dimensional space has been derived directly from the continuous Schr\"{o}dinger equation \cite{TarasovPLA} \begin{align} i\hbar\frac{d\Psi({x},t)}{dt}=\frac{\hbar^2}{M\ell^2}\{\frac{\pi^2}{6}\Psi(x,t) +\mathop{\sum_{m=-\infty}^\infty}_{m \neq 0}\frac{(-1)^m}{m^2}\Psi(x-m\ell,t)\}+ U(x)\Psi(x,t). \label{exactDiscrete01} \end{align} \noindent Different from the standard central difference formula, the exact discretized Schr\"{o}dinger equation involves differences of integer order represented by an infinite series, which suggests a long-range interaction in the discretized equation. In this paper, we show that the exact discretized Schr\"{o}dinger equation arises naturally from the Hamiltonian operator of the Schr\"{o}dinger field theory. It is known that the continuous Schr\"{o}dinger equation that describes the time evolution of the wavefunction can be derived from the following equation \begin{align} i\hbar \frac{\partial}{\partial t}|\Psi\rangle = H|\Psi\rangle, \label{Schrodinger02} \end{align} \noindent where the Hamiltonian operator $H$ plays the role of the generator of time evolution. The so-called wavefunction $\Psi(x,t)$ is the inner product between the position eigenket $|x\rangle$ and the quantum state $|\Psi\rangle$, $\Psi(x,t)=\langle x|\Psi\rangle$. Once the Hamiltonian operator is given, by taking the inner product on both sides of (\ref{Schrodinger02}), the Schr\"{o}dinger equation in (\ref{SchrodingerEq}) is obtained. Following the same line of thought, the exact discretized Schr\"{o}dinger equation can also be derived from the Hamiltonian operator of the quantum system in discrete space.
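As a quick consistency check, one can verify numerically that the kernel of (\ref{exactDiscrete01}) acts on a lattice plane wave $e^{ikx}$ with exactly the free-particle eigenvalue $\hbar^2k^2/2M$ for $|k\ell|<\pi$, whereas the central-difference kernel of (\ref{discrete01}) does not. Below is a sketch (with $\hbar=M=\ell=1$ assumed, so energies are in units of $\hbar^2/M\ell^2$; the infinite sum is truncated at a large cutoff):

```python
import math

def exact_kernel_eigenvalue(k, m_max=100000):
    """Plane-wave eigenvalue of the exact kernel:
    pi^2/6 + sum_{m != 0} (-1)^m / m^2 * exp(-i k m), which should equal k^2/2."""
    s = math.pi**2 / 6
    for m in range(1, m_max + 1):
        # the +m and -m terms combine into 2 cos(k m)
        s += (-1)**m / m**2 * 2.0 * math.cos(k * m)
    return s

def central_kernel_eigenvalue(k):
    """Plane-wave eigenvalue of the central-difference kernel: 1 - cos(k)."""
    return 1.0 - math.cos(k)

k = math.pi / 6
print(exact_kernel_eigenvalue(k))   # converges to k^2/2
print(k**2 / 2)
print(central_kernel_eigenvalue(k))
```

The exact kernel reproduces $k^2/2$ to the accuracy of the truncation, while the central-difference value $1-\cos(k\ell)$ deviates at order $(k\ell)^4$.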
For example, consider a Schr\"{o}dinger field theory with the Lagrangian density \cite{QMbook} \begin{align} {\cal L} &= \frac{i\hbar}{2}\left\{\Psi^*(x,t)\frac{\partial}{\partial t}\Psi(x,t)-[\frac{\partial}{\partial t}\Psi^*(x,t)] \Psi(x,t)\right \} - \frac{\hbar^2}{2M} \nabla \Psi^*(x,t)\cdot \nabla \Psi(x,t) \nonumber \\&+U(x)\Psi^*(x,t) \Psi(x,t),\label{Lagrangian} \end{align} \noindent from which the continuous Schr\"{o}dinger equation is derived as the equation of motion of the theory. Without the self-interaction term (i.e., $U(x)=0$), after the second quantization of the free theory in discrete space, the free Hamiltonian operator is diagonal in the discrete momentum space \begin{align} H_0=\sum_k \varepsilon_k a_k^\dagger a_k, \label{H01} \end{align} \noindent where $\varepsilon_k = \hbar^2k^2/2M$ denotes the energy of a free particle with momentum $\hbar k$, and $a_k^\dagger$ and $a_k$ are the creation and annihilation operators that satisfy the quantization relation \begin{align} [a_k, a_{k'}^\dagger] = \delta_{kk'}. \label{quantization} \end{align} \noindent Here the momentum vector $k$ takes only discrete values. Once the free Hamiltonian operator $H_0$ is obtained in momentum representation, it is straightforward to rewrite $H_0$ in position representation by using discrete Fourier transforms. Obviously, $H_0$ will not be diagonal in position representation due to Heisenberg's uncertainty relation, thus a spatially localized particle has a chance to hop to other locations instead of staying at the same position in the free theory. In the presence of the self-interaction $U(x)$, the Hamiltonian operator becomes \begin{align} H=H_0+\sum_x U(x) a_x^\dagger a_x. \label{PotentialOp} \end{align} \noindent Here the potential energy part of $H$ is diagonal in position representation.
Once the Hamiltonian operator $H$ in position representation is obtained, the exact discrete analogue of the Schr\"{o}dinger equation can be derived from (\ref{Schrodinger02}) by \begin{align} i\hbar \frac{\partial}{\partial t}\langle x|\Psi\rangle =\langle x| H|\Psi\rangle. \end{align} In the next section we show that the exact discretization of the Schr\"{o}dinger equation can be obtained by transforming $H_0$ from momentum representation into position representation. In the third section, we discuss the discrete version of the commutator relation $[\hat{X}, \hat{P}]$ between the position operator $\hat{X}$ and the momentum operator $\hat{P}$. Next we compare the exact discretized Schr\"{o}dinger equation with the standard central difference formula by numerically studying the problem of a wave packet passing through a potential barrier. In the last section we give our conclusion. \section{Derivation of the exact discretized Schr\"{o}dinger equation} \begin{figure} \centering \includegraphics{PositionLattice.png} \caption{The three-dimensional discrete position space with a square lattice structure. The dimension in each direction is $L=N\ell$.} \label{fig:PositionLattice} \end{figure} Consider a discrete position space with a square lattice structure that is described by \begin{align} {x} = \ell(\hat{1}n_1 +\hat{2}n_2+\hat{3}n_3), \quad -\frac{N}{2}\le n_1, n_2, n_3 \le\frac{N}{2}-1. \label{positionX} \end{align} \noindent Here ${x}$ denotes the position of the spatial lattice sites, $n_1, n_2$, and $n_3$ are integers, $\hat{a}$ ($a=1,2,3$) denote the unit base vectors of the Cartesian coordinate system, $\ell$ is the lattice spacing, and $N$ is a large even number. As shown in Fig.\ref{fig:PositionLattice}, the three-dimensional discrete position space can be viewed as a cube with side length $L=N\ell$ and total volume $L^3$.
Given a free Schr\"{o}dinger field theory in the discrete space with Hamiltonian $H_0$, the operator $H_0$ is diagonal in momentum space and has the form described in (\ref{H01}). The diagonal form of $H_0$ in momentum space is indeed a direct consequence of the translational symmetry of the free theory in position space. In the free theory, a normalized single-particle state with momentum $k$ is \begin{align} |{k}\rangle = a_k^\dagger|0\rangle, \label{particleState} \end{align} \noindent where $|0\rangle$ denotes the vacuum state, and the momentum $k$ takes only discrete values \begin{align} {k}&=\frac{2\pi}{L}(m_1, m_2, m_3), \quad -\frac{N}{2}\le m_1,m_2,m_3 \le\frac{N}{2}-1. \label{discreteK} \end{align} \noindent The integers $m_1,m_2$ and $m_3$ range from $-N/2$ to $N/2-1$. The discrete momentum space also has a square lattice structure, but with lattice spacing $2\pi/L$. To transform the Hamiltonian from momentum representation into position representation, we define the field operator $a_x$ and its Hermitian conjugate $a_x^\dagger$ as the discrete Fourier transforms of $a_k$ and $a_k^\dagger$ \begin{align} a_x &= \frac{1}{\sqrt{N^3}}\sum_k e^{i\vec{k}\cdot \vec{x}}a_k, \label{ax}\\ a_x^\dagger &= \frac{1}{\sqrt{N^3}}\sum_k e^{-i\vec{k}\cdot \vec{x}}a_k^\dagger. \label{axdagger} \end{align} \noindent Here the discrete momentum ${k}$ is defined in (\ref{discreteK}), and ${x}$ denotes the position of lattice sites in the three-dimensional discrete position space. From the quantization relation in (\ref{quantization}), the field operators $a_x$ and $a_x^\dagger$ are shown to satisfy the commutator relation \begin{align} [a_x, a_{x'}^\dagger] = \delta_{xx'}. \label{QRX} \end{align} \noindent The commutator relation in (\ref{QRX}) thus allows us to define the position eigenkets as \begin{align} |x\rangle = a_x^\dagger |0\rangle.
\label{Xstate} \end{align} \noindent The position eigenkets are normalized states and, like the momentum states $|k\rangle$, can also serve as single-particle states in the Schr\"{o}dinger field theory in discrete space. From (\ref{ax}) and (\ref{axdagger}), $a_k$ and $a_k^\dagger$ can be written as linear combinations of $a_x$ and $a_x^\dagger$ \begin{align} a_k &= \frac{1}{\sqrt{N^3}}\sum_x e^{-i\vec{k}\cdot \vec{x}}a_x, \label{ak}\\ a_k^\dagger &= \frac{1}{\sqrt{N^3}}\sum_x e^{i\vec{k}\cdot \vec{x}}a_x^\dagger. \label{akdagger} \end{align} \noindent Substituting Eqs.~(\ref{ak}) and (\ref{akdagger}) into $H_0$ in (\ref{H01}), we have \begin{align} H_0 = \frac{1}{N^3}\sum_{x}\sum_k \varepsilon_k a_x^\dagger a_{x} +\frac{1}{N^3}\mathop{\sum_{x,x'}}_{x\neq x'}\sum_k \varepsilon_k e^{i\vec{k}\cdot (\vec{x}-\vec{x}')}a_{x}^\dagger a_{x'}. \label{H02} \end{align} \noindent The form of $H_0$ in (\ref{H02}) can be simplified further once the single-particle energy $\varepsilon_k$ is known. In the Schr\"{o}dinger field theory, where $\varepsilon_k = \hbar^2 {k}^2/2M$, the sum $(1/N^3)\sum_k \varepsilon_k$ in (\ref{H02}) is found to be \begin{align} \frac{\hbar^2}{2MN^3}\sum_k k^2&= \frac{3\hbar^2}{2MN} \left(\frac{2\pi}{L}\right)^2\sum_{n=-N/2}^{N/2-1} n^2 =\frac{\hbar^2\pi^2}{2M\ell^2}(1+\frac{2}{N^2})\nonumber \\ &\longrightarrow \frac{\hbar^2 \pi^2}{2M\ell^2}, \quad \mbox{as } N\to \infty. \label{sumEk} \end{align} \noindent Using the fact that \begin{align} \sum_{k_a}e^{ik_a (n-n')\ell}= N\delta_{nn'}, \quad a=1,2,3, \end{align} \noindent where $k_a$ is the $a$-th component of the wave vector $k$, and $n$ and $n'$ are integers, the other sum $(1/2MN^3) \sum_k k^2 e^{i \vec{k} \cdot (\vec{x}-\vec{x}')}$ in (\ref{H02}) with nonzero spatial separation $(\vec{x}-\vec{x}')$ is found to be nonzero only when $(\vec{x}-\vec{x}')$ lies along one of the directions $\hat{1}, \hat{2}$, or $\hat{3}$.
Let us assume that $\vec{x}-\vec{x}'=\hat{1} m\ell$ ($m \neq 0$); then the sum is \begin{align} &\frac{1}{2MN^3}\sum_k k^2 e^{i\vec{k}\cdot (\vec{x}-\vec{x}')} = \frac{2\pi^2}{MNL^2}\sum_{n=-N/2}^{N/2-1} n^2 (e^{i2\pi m/N})^n \nonumber \\ &=\frac{2\pi^2}{MN^3\ell^2}\{(-1)^m\frac{N^2}{4}+ 2\sum_{n=1}^{N/2-1}n^2\cos\left({2\pi m n}/{N}\right) \} \nonumber \\&=\frac{(-1)^m\pi^2}{MN^2 \ell^2 \sin^2(m\pi/N)}, \quad m\neq 0. \label{sum02} \end{align} \noindent In the $N\to\infty$ limit, when $\vec{x}=\vec{x}'+\hat{a} m\ell$ ($a=1,2,3$), the sum in (\ref{sum02}) becomes \begin{align} \frac{1}{2MN^3}\sum_k k^2 e^{i\vec{k}\cdot (\vec{x}-\vec{x}')} \longrightarrow \frac{(-1)^m}{M\ell^2 m^2}. \label{sum03} \end{align} \noindent Finally, from Eqs.~(\ref{H02}, \ref{sumEk}, \ref{sum03}), the free Hamiltonian operator in the large-$N$ limit in position representation is \begin{align} H_0 =\frac{\hbar^2}{M\ell^2} \{\frac{\pi^2}{2} \mathbb{1}+ \sum_x \mathop{\sum_{m=-\infty}^{\infty}}_{m\neq 0} \sum_{a=1}^3 \frac{(-1)^m}{m^2} a^\dagger_{x+\hat{a}m\ell}a_x \}. \label{H0x} \end{align} \noindent Here $\mathbb{1}=\sum_x a_x^\dagger a_x$ denotes the identity operator. $H_0$ is non-diagonal in position representation due to the presence of the hopping interaction $a^\dagger_{x+\hat{a}m\ell}a_x$. The hopping interaction terms are responsible for the position-momentum uncertainty relation $\Delta x \Delta p \ge \hbar/2$. With the hopping terms, a spatially localized particle can hop to other places rather than just staying at the same location. Turning on the potential energy $U(x)$, the Hamiltonian operator for the Schr\"{o}dinger field theory is \begin{align} H=\frac{\hbar^2}{M\ell^2} \{\frac{\pi^2}{2} \mathbb{1}+ \sum_x \mathop{\sum_{m=-\infty}^{\infty}}_{m\neq 0} \sum_{a=1}^3 \frac{(-1)^m}{m^2} a^\dagger_{x+\hat{a}m\ell}a_x\} +\sum_x U(x)a_x^\dagger a_x.
\label{3DH} \end{align} \noindent Similarly, in two-dimensional and one-dimensional discrete space, the Hamiltonian operators for the corresponding Schr\"{o}dinger field theories are \begin{align} H^{2dim} &=\frac{\hbar^2}{M\ell^2} \{\frac{\pi^2}{3} \mathbb{1}+ \sum_x \mathop{\sum_{m=-\infty}^{\infty}}_{m\neq 0} \sum_{a=1}^2 \frac{(-1)^m}{m^2} a^\dagger_{x+\hat{a}m\ell}a_x \}+\sum_x U(x)a_x^\dagger a_x, \label{2dimH0} \\ H^{1dim} &=\frac{\hbar^2}{M\ell^2} \{\frac{\pi^2}{6} \mathbb{1}+ \sum_x \mathop{\sum_{m=-\infty}^{\infty}}_{m\neq 0} \frac{(-1)^m}{m^2} a^\dagger_{x+m\ell}a_x\}+\sum_x U(x)a_x^\dagger a_x. \label{1dimH0} \end{align} The exact discrete analogue of the Schr\"{o}dinger equation is derived as follows. Let us take the one-dimensional Schr\"{o}dinger theory as an example. Consider a single-particle quantum state $|\Psi\rangle$ in position representation \begin{align} |\Psi\rangle = \sum_x \Psi(x,t)a_x^\dagger |0\rangle=\sum_x \Psi(x,t)|x\rangle. \label{singlePtState} \end{align} \noindent From (\ref{Schrodinger02}) and (\ref{1dimH0}), we have \begin{align} &\sum_x i\hbar\frac{\partial \Psi(x,t)}{\partial t}|x\rangle \nonumber \\ =&\frac{\hbar^2}{M\ell^2} \{\frac{\pi^2}{6} \sum_x \Psi(x,t)|x\rangle + \sum_{x} \mathop{\sum_{m=-\infty}^{\infty}}_{m\neq 0} \frac{(-1)^m}{m^2}\Psi(x,t)|x+m\ell\rangle\} +\sum_{x}U(x)\Psi(x,t)|x\rangle. \label{SchrodingerEq03} \end{align} \noindent Comparing the coefficients of $|x\rangle$ on both sides of equation (\ref{SchrodingerEq03}), we get \begin{align} i\hbar\frac{\partial \Psi(x,t)}{\partial t} = \frac{\hbar^2}{M\ell^2} \{\frac{\pi^2}{6}\Psi(x,t) + \mathop{\sum_{m=-\infty}^{\infty}}_{m\neq 0} \frac{(-1)^m}{m^2}\Psi(x-m\ell,t)\}+U(x)\Psi(x,t). \label{1dimDisEq01} \end{align} \noindent The result in (\ref{1dimDisEq01}) is indeed the exact discrete analogue of the Schr\"{o}dinger equation given in (\ref{exactDiscrete01}).
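The hopping coefficients can also be checked numerically: on a finite one-dimensional lattice, the momentum sum $(1/2N)\sum_k k^2 e^{ikm\ell}$ should match the closed form in (\ref{sum02}) and approach the limit $(-1)^m/m^2$ of (\ref{sum03}) as $N$ grows. A sketch (with $M=\ell=1$ assumed):

```python
import cmath
import math

def hopping_coefficient(m, N):
    """(1/2N) * sum_k k^2 exp(i k m) over the lattice momenta k = 2 pi n / N."""
    total = 0j
    for n in range(-N // 2, N // 2):
        k = 2.0 * math.pi * n / N
        total += k * k * cmath.exp(1j * k * m)
    return total / (2 * N)

N = 2048
for m in (1, 2, 3):
    c = hopping_coefficient(m, N)
    closed_form = (-1)**m * math.pi**2 / (N**2 * math.sin(m * math.pi / N)**2)
    large_N_limit = (-1)**m / m**2
    print(m, c.real, closed_form, large_N_limit)
```

The finite-$N$ value agrees with the closed form up to numerical rounding, and deviates from the large-$N$ limit only at relative order $(m\pi/N)^2/3$.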
Using the identity \begin{align} \sum_{m=1}^\infty (-1)^m/m^2= -\frac{\pi^2}{12}, \label{12pi2} \end{align} \noindent the equation in (\ref{1dimDisEq01}) can be further written as \begin{align} &i\hbar\frac{\partial \Psi(x,t)}{\partial t} \nonumber \\ =&\frac{\hbar^2}{M}\sum_{m=1}^\infty\{ \frac{(-1)^m}{m^2\ell^2}[\Psi(x+m\ell,t)+\Psi(x-m\ell,t)-2\Psi(x,t)]\}+U(x)\Psi(x,t). \label{1dimDisEq02} \end{align} \noindent Eq.(\ref{1dimDisEq02}) shows that the second order differential operation $\partial^2 \Psi(x,t)/\partial x^2$ can be replaced by the infinite difference \begin{align} \frac{\partial^2 \Psi(x,t)}{\partial x^2} \to -2\sum_{m=1}^\infty \frac{(-1)^m}{m^2\ell^2}[\Psi(x+m\ell,t)+\Psi(x-m\ell,t)-2\Psi(x,t)], \label{infDiff} \end{align} \noindent in the discrete version of quantum mechanics. In two $(D=2)$ or three $(D=3)$ dimensional space, the exact discretized Schr\"{o}dinger equation is \begin{align} &i\hbar\frac{\partial \Psi(x,t)}{\partial t} \nonumber \\ =&\frac{\hbar^2}{M}\sum_{a=1}^D \sum_{m=1}^\infty\{ \frac{(-1)^m}{m^2\ell^2}[\Psi(x+\hat{a}m\ell,t)+\Psi(x-\hat{a}m\ell,t)-2\Psi(x,t)]\}+U(x)\Psi(x,t). \label{DdimDisEq02} \end{align} Now we know that Eq.(\ref{DdimDisEq02}) is an exact analogue of the continuous Schr\"{o}dinger equation, since it is derived directly from the Hamiltonian operator of the theory in discrete space. The standard central difference equation given in (\ref{discrete01}) is only an approximation. In fact, Eq.(\ref{discrete01}) describes another quantum theory that has the Hamiltonian operator \begin{align} H=H_0+\sum_x U(x)a_x^\dagger a_x, \label{H2} \end{align} \noindent with \begin{align} H_0&=\frac{3\hbar^2}{M\ell^2} \{\mathbb{1} -\frac{1}{6}\sum_x \sum_{b=1}^3 (a^\dagger _{x+\hat{b}\ell} a_x + a^\dagger_{x-\hat{b} \ell} a_x)\} \label{H2x}\\ &=\frac{3\hbar^2}{M\ell^2}\sum_k [1-\frac{1}{3}\sum_{b=1}^3\cos(k_b\ell)]a_k^\dagger a_k.
\label{H2k} \end{align} \noindent Obviously, the Hamiltonian operator given in (\ref{H2}, \ref{H2x}, \ref{H2k}) is different from the Hamiltonian operator given in (\ref{3DH}). The dispersion relation $\varepsilon_k$ in (\ref{H2k}) \begin{align} \varepsilon_k = \frac{3\hbar^2}{M\ell^2}(1-\frac{1}{3}\sum_{b=1}^3\cos(k_b\ell)), \label{dispersion} \end{align} \noindent is effectively the same dispersion relation given in \cite{discreteSE}. It differs from the free-particle energy $\hbar^2 k^2/2M$ by less than one percent if $|k_b\ell| < 0.35$. The two Hamiltonian operators indeed describe different quantum theories in discrete space. When $|k_b\ell|<0.35$ is satisfied, the theory described by the Hamiltonian in (\ref{H2k}) can be a good approximation of the theory described by the Hamiltonian in (\ref{H01}). The two are equal only in the $\ell \to 0$ limit. \section{Discrete analogue of canonical quantization relation} In the Schr\"{o}dinger field theory in discrete space, the position and momentum operators in the $\hat{b}$-direction $(b=1,2,3)$ are defined as \begin{align} \hat{P}_b &= {\sum_k} \hbar k_b a_k^\dagger a_k,\quad b=1,2,3. \label{Pop}\\ \hat{X}_b &={\sum_x} x_b a_x^\dagger a_x, \quad b=1,2,3. \label{Xop} \end{align} \noindent With this definition of the momentum operator, a position eigenstate $|x_0\rangle$ has a non-zero mean value of $\hat{P}_b$ \begin{align} \langle x_0|\hat{P}_b|x_0\rangle = -\frac{\hbar \pi}{L}. \label{meanP} \end{align} \noindent This is due to the asymmetric range of $k_b$ \begin{align} k_b=\frac{2\pi}{L}n_b, \quad n_b=\frac{-N}{2}, \frac{-N}{2}+1,...,\frac{N}{2}-1. \end{align} \noindent The non-zero mean value can be removed either by removing $k_b=-\pi/\ell$ from the allowable values of $k_b$ or by taking the limit $L\to\infty$. Similarly, a momentum state $|k_0\rangle$ also has a non-zero mean $\langle k_0|\hat{X}_b|k_0\rangle = -{\ell}/{2}$ due to the asymmetry of the range of $x_b$.
It can also be removed either by removing $x_b=-L/2$ from the allowable eigenvalues of $\hat{X}_b$ or by taking the $\ell\to 0$ limit. In position representation, the momentum operator becomes \begin{align} \hat{P}_b =& \frac{\hbar}{N^3} \sum_{x,y}\mathop{\sum_k} k_b e^{i\vec{k}\cdot (\vec{x}-\vec{y})}a_x^\dagger a_y \nonumber \\ =&\frac{\hbar}{N^3}\{\sum_x \mathop{\sum_k} k_b a_x^\dagger a_x + {\sum_k} \mathop{\sum_{x,y}}_{x\neq y}k_b e^{i\vec{k}\cdot(\vec{x}-\vec{y})} a_x^\dagger a_y \} \nonumber \\ =&-\frac{\hbar \pi}{L} \{\sum_x \sum_m(-1)^m a^\dagger_{x+\hat{b}m\ell} a_x +i\sum_x \sum_{m\neq 0} (-1)^m \cot(\frac{m\pi}{N}) a_{x+\hat{b}m\ell}^\dagger a_x\}. \label{Pop2} \end{align} \noindent The first part of the momentum operator in (\ref{Pop2}) \begin{align} -\frac{\hbar \pi}{L} \sum_x \sum_m(-1)^m a^\dagger_{x+\hat{b}m\ell} a_x \end{align} \noindent is also due to the asymmetry of the range of $k_b$. It disappears if a symmetric momentum operator is defined \begin{align} \hat{P}_b &= \mathop{\sum_k}_{k_b \neq -\pi/\ell} \hbar k_b a_k^\dagger a_k,\quad b=1,2,3. \label{Popsym} \end{align} \noindent For a spatially smooth wavefunction $\Psi(x,t)$, the first part of $\hat{P}_b$ makes a negligible contribution to $\langle x|\hat{P}_b|\Psi\rangle$, since $(-1)^m\Psi(x-\hat{b}m\ell,t)$ oscillates extremely rapidly in space as $m$ increases \begin{align} \langle x|\frac{\hbar \pi}{L}\sum_{x'} \sum_m (-1)^m a_{x'+\hat{b}m\ell}^\dagger a_{x'}|\Psi\rangle =\frac{\hbar \pi}{L} \sum_m (-1)^m\Psi(x-\hat{b}m\ell,t) \ll 1. \end{align} \noindent In the $\sqrt{N}\ell \to \infty$ limit \begin{align} &|\langle x|\frac{\hbar \pi}{L}\sum_{x'} \sum_m (-1)^m a_{x'+\hat{b}m\ell}^\dagger a_{x'}|\Psi\rangle| \le \frac{\hbar \pi}{L}\sum_m |\Psi(x-\hat{b}m\ell,t)| \le \frac{\hbar \pi} {\sqrt{N}\ell} \longrightarrow 0. \label{1stPartP} \end{align} \noindent Here $\Psi(x,t)$ is assumed to be a normalized wavefunction, $\sum_x |\Psi(x,t)|^2=1$, so that $\sum_m |\Psi(x-\hat{b}m\ell,t)| \le \sqrt{N}$.
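In this limit, the symmetric momentum operator acts on smooth wavefunctions as the exact lattice first derivative, with the convention $\partial_x\Psi(x) \to \sum_{m\neq 0} \frac{(-1)^m}{m\ell}\Psi(x-m\ell)$. A sketch checks this on a Gaussian (which is effectively band-limited for width $\sigma \gg \ell$, so the kernel should reproduce the continuum derivative almost exactly; $\ell=0.1$ and $\sigma=1$ are assumed):

```python
import math

def exact_derivative(f, x, l, m_max=400):
    """Exact lattice first derivative: sum_{m != 0} (-1)^m / (m l) * f(x - m l)."""
    s = 0.0
    for m in range(1, m_max + 1):
        # the +m and -m terms of the kernel, combined
        s += (-1)**m / (m * l) * (f(x - m * l) - f(x + m * l))
    return s

l = 0.1
f = lambda x: math.exp(-x * x / 2)             # Gaussian, negligible spectrum beyond pi/l
f_prime = lambda x: -x * math.exp(-x * x / 2)  # analytic derivative for comparison

x0 = 0.3
print(exact_derivative(f, x0, l))
print(f_prime(x0))
```

The two printed values agree to very high accuracy, since the truncated terms carry Gaussian-suppressed weights.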
Therefore, in the $\sqrt{N}\ell \to \infty$ limit, for any normalized quantum state $|\Psi\rangle$ the exact discrete analogue of the differential operation becomes \cite{TarasovPLA, TarasovCommutator} \begin{align} \frac{i}{\hbar}\langle x |\hat{P}_b|\Psi\rangle = \frac{\partial \Psi(x,t)}{\partial x_b} \Rightarrow \mathop{\sum_{m=-\infty}^\infty}_{m\neq 0} \frac{(-1)^m}{m \ell} \Psi(x-\hat{b}m\ell,t). \label{Pop4} \end{align} The commutator relation between $\hat{X}_a$ and $\hat{P}_b$ is found in a similar way. From (\ref{Pop}) and (\ref{Xop}), \begin{align} [\hat{X}_a, \hat{P}_b]&=\hbar \sum_{x}\sum_{k} x_ak_b [a_x^\dagger a_x, a_k^\dagger a_k] \nonumber \\ &=\frac{\hbar}{N^3}\sum_{x,x'}\sum_{k} k_b(x_a-x_a') e^{i\vec{k}\cdot(\vec{x}-\vec{x}')} a_x^\dagger a_{x'}. \label{XP01} \end{align} \noindent The result in (\ref{XP01}) is nonzero only when $a=b$. Without loss of generality, let us consider the case $a=b=1$. Using \begin{align} \sum_k k_1 e^{ik_1(x_1-x'_1)}&=\frac{2\pi}{L}\sum_{n=-N/2}^{N/2-1} n e^{i2\pi nm/N}=(-1)^{m+1}\frac{\pi}{\ell}(1+i\cot(\frac{\pi m}{N})),\label{sum04} \end{align} \noindent where $k_1=2\pi n/L$ and $x_1-x'_1=m\ell\neq 0$, the commutator $[\hat{X}_1, \hat{P}_1]$ is \begin{align} [\hat{X}_1, \hat{P}_1]&=-\hbar \sum_x \sum_{m\neq 0} (-1)^m (\frac{\pi m}{N}) (1+i\cot(\frac{\pi m}{N}))a_x^\dagger a_{x-\hat{1}m\ell} \nonumber \\ &=i\hbar \sum_x a_x^\dagger a_x -\hbar \sum_x \sum_m (-1)^m (\frac{\pi m}{N}) (1+i\cot(\frac{\pi m}{N}))a_x^\dagger a_{x-\hat{1}m\ell}. \label{XP02} \end{align} \noindent Here the integer $m$ is restricted by $-N/2 \le x_1/\ell-m \le N/2-1$. From the above results, we get the commutator relation \begin{align} [\hat{X}_a, \hat{P}_b]=i\hbar \delta_{ab}\mathbb{1} -\hbar\delta_{ab} \sum_x \sum_m (-1)^m (\frac{\pi m}{N}) (1+i\cot(\frac{\pi m}{N}))a_x^\dagger a_{x-\hat{a}m\ell}.
\label{XP03} \end{align} \noindent The commutator relation shown in (\ref{XP03}) differs from the conventional quantization relation $[\hat{X}_a, \hat{P}_b]=i\hbar\delta_{ab}\mathbb{1}$ in continuous space. For a Schr\"{o}dinger theory defined on a discrete space with a finite number of lattice sites (the total number of sites is $N^3$), all momentum eigenstates $|k\rangle$ and position eigenstates $|x\rangle$ are normalized states. This means that all $|k\rangle$ and $|x\rangle$ eigenkets can be used as single-particle quantum states in the theory if relativistic energy is allowed (for a highly localized state $|\Psi\rangle = |x\rangle$, the particle state has energy $\langle \Psi|H|\Psi\rangle = \pi^2 \hbar^2/2M\ell^2 \gg Mc^2$). Thus, the relations $\langle k|[\hat{X}_a, \hat{P}_b]|k\rangle=0=\langle x|[\hat{X}_a, \hat{P}_b]|x\rangle$ help to explain why the conventional commutator relation $[\hat{X}_a, \hat{P}_b]=i\hbar\delta_{ab}\mathbb{1}$ cannot be correct in discrete space. When the commutator acts upon a quantum state $|\Psi\rangle = \sum_x \Psi(x,t)|x\rangle$, it gives \begin{align} \langle x|[\hat{X}_a,\hat{P}_b] |\Psi\rangle=\hbar\delta_{ab}\{i \Psi(x,t)- \sum_m (-1)^m (\frac{\pi m}{N}) (1+i\cot(\frac{\pi m}{N}))\Psi(x-\hat{a}m\ell, t) \}. \label{XP04} \end{align} \noindent The infinite sum in (\ref{XP04}) makes an extremely small contribution when the normalized wavefunction $\Psi(x,t)$ is a localized and relatively smooth function in position space. In that situation, $(-1)^m (\frac{\pi m}{N}) (1+i\cot(\frac{\pi m}{N}))\Psi(x-\hat{a}m\ell, t)$ is not only a finite localized function but also oscillates very rapidly in space. So, in the limit that $N\gg 1$ and $\ell \to 0$, \begin{align} \sum_m (-1)^m (\frac{\pi m}{N}) (1+i\cot(\frac{\pi m}{N}))\Psi(x-\hat{a}m\ell, t) \longrightarrow 0.
\label{XP05} \end{align} \noindent Thus in the continuous limit, for any quantum state $|\Psi\rangle$ with a localized and relatively smooth wavefunction, the conventional commutator relation holds \begin{align} \langle x|[\hat{X}_a,\hat{P}_b] |\Psi\rangle=i\hbar\delta_{ab}\Psi(x,t). \label{XP06} \end{align} \section{The exact discretization versus the standard central difference equation} The problem of a Gaussian wave packet passing through one or two potential barriers has been studied both theoretically and numerically \cite{WPTunneling, TransTimeVaryB}. In this paper, a comparison between the exact discrete analogue of the Schr\"{o}dinger equation and the standard central difference equation is made by numerically studying the transmission probability for a Gaussian wave packet passing through a square potential barrier in one-dimensional space. We assume the initial wavefunction for a particle to be a Gaussian wave packet with its center located at $x_0$ \begin{align} \Psi(x,t=0) = \left(\frac{1}{\sqrt{2\pi}\sigma}\right)^{1/2}\exp[-\frac{(x-x_0)^2}{4\sigma^2}+ik_0 (x-x_0)].\label{iniPsi} \end{align} \noindent Here $\sigma$ is the standard deviation of the particle's probability distribution at time $t=0$. The initial wave packet $\Psi(x,0)$ actually consists of many plane waves with wave numbers $k$ around $k_0$ ($k_0>0$) \begin{align} \Psi(x,0) = \frac{1}{\sqrt{2\pi}} \int dk \Phi(k) e^{ik(x-x_0)}, \label{iniPsik} \end{align} \noindent where $\Phi(k)$ is \begin{align} \Phi(k)=\left(\frac{2\sigma^2}{\pi}\right)^{1/4}\exp[-\sigma^2(k-k_0)^2]. \label{iniPhi} \end{align} \noindent Since $\Phi(k)$ is symmetric about $k_0$, the average momentum of the wave packet is $\hbar k_0$, corresponding to the characteristic energy $E_0=\hbar^2k_0^2/2M$. The wavefunction at a later time is obtained by solving the Schr\"{o}dinger equation.
Using the exact discretized Schr\"{o}dinger equation in (\ref{DdimDisEq02}), the wavefunction $\Psi(x,t+\Delta t)$ at a later time $t+\Delta t$ can be obtained from the wavefunction at the previous time $t$ \begin{align} \Psi(x,t+\Delta t) &= \Psi(x,t) -i\Delta \tau \{ \sum_{m>0}\frac{(-1)^m}{m^2}[\Psi(x+m\ell,t)+\Psi(x-m\ell,t) \nonumber \\ &-2\Psi(x,t)] +\eta(x) \Psi(x,t) \}+O((\Delta \tau)^2), \label{Num01} \end{align} \noindent where $\Delta \tau \equiv \hbar \Delta t/(M\ell^2)$ denotes the time step parameter in the numerical calculation, and $\eta(x)$ is defined as \begin{align} \eta(x)=\frac{M\ell^2 U(x)}{\hbar^2}=\frac{(k_0\ell)^2 U(x)}{2E_0}. \label{eta} \end{align} \noindent As long as $\Delta \tau$ is small enough, the numerical calculation based on (\ref{Num01}) gives an accurate and sensible result that does not depend on the lattice spacing $\ell$ directly. On the other hand, if the standard central difference equation is used in the numerical calculation, then $\Psi(x,t+\Delta t)$ is determined merely by the wavefunction at the previous time $t$ at site $x$ and its nearest neighbors $x\pm \ell$ \begin{align} \Psi(x,t+\Delta t) &= \Psi(x,t) -i\Delta \tau \{\Psi(x,t)-\frac{1}{2}[\Psi(x+\ell,t)+\Psi(x-\ell,t)] \nonumber \\ & +\eta(x) \Psi(x,t) \}+O((\Delta \tau)^2). \label{Num02} \end{align} \noindent A numerical calculation based on (\ref{Num02}) requires not only a sufficiently small value of $\Delta \tau$ but also the condition $|k_0\ell|<0.35$. Unlike the case of the exact discretization in (\ref{Num01}), a suitable choice of the spatial lattice spacing $\ell$ is important for getting a reasonable numerical result. The condition $|k_0\ell|<0.35$ guarantees that the theory described by the standard central difference equation is close to the conventional Schr\"{o}dinger field theory. Actually, the discrepancy in the free-particle energy between the two theories is less than one percent if $|k_0\ell|<0.35$ is satisfied.
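The two update rules can be compared side by side. The following is a minimal pure-Python sketch (assuming $\hbar=M=\ell=1$, a free packet with $\eta=0$ far from the lattice boundaries, and the infinite sum in (\ref{Num01}) truncated at a finite cutoff in $m$, with out-of-range sites treated as empty):

```python
import cmath
import math

def step_exact(psi, dtau, eta, m_cut=40):
    """One forward-Euler step of the exactly discretized equation."""
    N = len(psi)
    out = []
    for x in range(N):
        hop = 0j
        for m in range(1, m_cut + 1):
            right = psi[x + m] if x + m < N else 0j
            left = psi[x - m] if x - m >= 0 else 0j
            hop += (-1)**m / m**2 * (right + left - 2 * psi[x])
        out.append(psi[x] - 1j * dtau * (hop + eta[x] * psi[x]))
    return out

def step_central(psi, dtau, eta):
    """One forward-Euler step of the central-difference equation."""
    N = len(psi)
    out = []
    for x in range(N):
        right = psi[x + 1] if x + 1 < N else 0j
        left = psi[x - 1] if x - 1 >= 0 else 0j
        lap = psi[x] - 0.5 * (right + left)
        out.append(psi[x] - 1j * dtau * (lap + eta[x] * psi[x]))
    return out

# Free Gaussian packet with k0 * l = pi/18, well inside the |k0 l| < 0.35 regime.
N, x0, sigma, k0 = 120, 60.0, 8.0, math.pi / 18
psi0 = [cmath.exp(-(x - x0)**2 / (4 * sigma**2) + 1j * k0 * x) for x in range(N)]
norm = math.sqrt(sum(abs(a)**2 for a in psi0))
psi0 = [a / norm for a in psi0]
eta = [0.0] * N

pa = pb = psi0
for _ in range(200):
    pa = step_exact(pa, 0.001, eta)
    pb = step_central(pb, 0.001, eta)

print(sum(abs(a)**2 for a in pa))               # total probability stays ~1
print(max(abs(a - b) for a, b in zip(pa, pb)))  # the two schemes stay close
```

For this small $k_0\ell$ the two evolutions remain nearly identical; pushing $k_0\ell$ toward and beyond $0.35$ makes the central-difference evolution visibly deviate from the exact one.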
From the standpoint of numerical computation, the most important difference between the two discretization methods is the non-local nature of the exact discretization. In (\ref{Num01}), $\Psi(x,t+\Delta t)$ is determined by the wavefunction at all spatial lattice sites at the previous time $t$. This is very different from the standard central discretization, in which $\Psi(x,t+\Delta t)$ is determined only by the wavefunction at the same and the nearest-neighboring sites at the previous time $t$. Thus a numerical calculation based on the exact discretization formula usually takes a much longer execution time than one based on the standard central difference equation. \begin{figure} \centering \includegraphics{BarrierTrans.png} \caption{Quantum scattering of a wave packet through a potential barrier of height $U$ and width $W=10 \ell$, with $\ell=1$. The bell-shaped incident wave packet (probability distribution: dashed line) has the average energy $E_0$ with $E_0/U=\pi^2/8.0$, $k_0=\pi/6$, and $\sigma=15\ell$. The reflected and transmitted waves (probability distribution) are shown as the solid line in the figure. The potential barrier is located between lattice sites 251 and 261. } \label{fig:BarrierTrans} \end{figure} Fig.\ref{fig:BarrierTrans} shows the reflected and transmitted waves of an incident wave packet passing through a potential barrier located at the center of the space, with height $U$ and barrier width $W=10\ell$. The space is one-dimensional with $N=500$ lattice sites, and the lattice spacing is set to $\ell=1$. The average energy $E_0$ of the incident particle is chosen as $E_0/U=\pi^2/8.0$ with $k_0=\pi/6$, and the standard deviation $\sigma$ of the probability distribution of the initial wave packet is $\sigma=15 \ell$.
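As an independent estimate, the expected transmission probability for this setup can be computed by averaging the textbook square-barrier transmission coefficient over the packet's momentum distribution $|\Phi(k)|^2$. A sketch (with $\hbar=M=\ell=1$ assumed, and the figure's parameters $E_0/U=\pi^2/8$, $k_0=\pi/6$, $\sigma=15$, $W=10$):

```python
import math

k0, sigma, W = math.pi / 6, 15.0, 10.0
E0 = k0**2 / 2
U = E0 / (math.pi**2 / 8)        # barrier height fixed by E0/U = pi^2/8

def transmission(k):
    """Plane-wave transmission coefficient for a square barrier (hbar = M = 1)."""
    eps = k * k / 2
    d = eps - U
    if d > 0:
        s2 = math.sin(math.sqrt(2 * d) * W)**2
    else:
        s2 = -math.sinh(math.sqrt(-2 * d) * W)**2   # sin(i y)^2 = -sinh(y)^2
    return 4 * eps * d / (4 * eps * d + U * U * s2)

def phi2(k):
    """|Phi(k)|^2 for the Gaussian packet, normalized to unit integral."""
    return math.sqrt(2 * sigma**2 / math.pi) * math.exp(-2 * sigma**2 * (k - k0)**2)

# Midpoint-rule average of the transmission over the packet (6 sigma each side).
n, lo, hi = 4000, k0 - 0.2, k0 + 0.2
dk = (hi - lo) / n
p = sum(transmission(lo + (i + 0.5) * dk) * phi2(lo + (i + 0.5) * dk)
        for i in range(n)) * dk
print(p)   # roughly 0.63 for these parameters
```

The oscillatory $\sin^2$ factor makes the packet-averaged value noticeably smaller than the single-mode transmission at $k_0$.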
By choosing the time step $\Delta \tau=0.001$, the numerical calculation based on (\ref{Num01}) gives the transmission probability \begin{align} \mbox{P}(transmission) = 0.654, \end{align} \noindent a value close to the $0.632$ obtained by theoretical calculation \begin{align} \mbox{P}_{th}(transmission) = \int dk \frac{4\varepsilon_k(\varepsilon_k-U)|\Phi(k)|^2}{4\varepsilon_k(\varepsilon_k-U)+U^2\sin^2(\alpha(k) W)}, \end{align} \noindent where $W$ denotes the width of the square potential, and \begin{align} \alpha(k)=\sqrt{\frac{2M(\varepsilon_k-U)}{\hbar^2}} \end{align} could be either real or imaginary depending on whether $(\varepsilon_k-U)$ is positive or negative. The numerical calculation based on the standard central difference equation gives $0.603$ for the transmission probability, about $5$ percent lower than the theoretical value. The accuracy can be improved further to give $0.633$ for the transmission probability if $\ell=1/3$ is chosen and the space has $1500$ lattice sites. The value of $k_0\ell$ is then reduced from $\pi/6>0.35$ for $\ell=1$ to $\pi/18<0.35$ for $\ell=1/3$. A smaller value of $k_0\ell$ usually leads to a more accurate numerical result, as expected. Beyond the aspect of numerical calculation, the exact discretized Schr\"{o}dinger equation also exhibits a non-local transport behavior of particles. In the Schr\"{o}dinger field theory, a spatially localized quantum particle at location $x$ can jump in a short time $\Delta t$ to any location $x+m\ell$ ($m\neq 0$) with the probability \begin{align} \mbox{Prob}(x \to x+m\ell)=\left(\frac{\hbar \Delta t}{M m^2\ell^2}\right)^2, \label{QJump} \end{align} \noindent not just to its nearest-neighboring sites. With this probability, the expected value of $m^2\ell^2$ is \begin{align} \langle m^2\ell^2 \rangle = \left( \frac{\hbar \pi \Delta t}{\sqrt{3}M\ell}\right)^2.
\label{ml2} \end{align} \noindent This implies that, in a short time interval $\Delta t$, the standard deviation of the jumping distance for the highly localized particle is $\hbar \pi \Delta t/(\sqrt{3}M\ell)$. Thus the standard deviation of the momentum $P$ for the localized particle is $\Delta P = \hbar\pi/(\sqrt{3}\ell)$, and the uncertainty relation holds \begin{align} (\Delta P)\ell =\frac{\hbar \pi}{\sqrt{3}}>\Delta P \Delta x \ge \frac{\hbar}{2}. \label{DPDx} \end{align} \section{Conclusion} In this paper we show that the exact discrete analogue of the Schr\"{o}dinger equation can be derived naturally from the Hamiltonian operator given in momentum representation after the second quantization of the Schr\"{o}dinger field theory. By defining the field operators $a_x$ and $a_x^\dagger$ as the discrete Fourier transforms of $a_k$ and $a_k^\dagger$, the Hamiltonian operator can be transformed into position representation. The exact discretized Schr\"{o}dinger equation is then easily derived from the time evolution of quantum states. The position and momentum operators are also constructed in position representation in this paper. The commutator relation between the two operators in discrete space is derived as well and found to be different from the conventional commutator relation in continuous space. This results from the fact that both momentum and position eigenkets can serve as single-particle quantum states of the Schr\"{o}dinger field theory in discrete space. In the continuous limit ($N\to \infty, \ell\to 0$), the commutator relation in discrete space is shown to reduce to the conventional one in continuous space.
Though in this paper we assume that the creation and annihilation operators satisfy the bosonic quantization relation $[a_k, a_{k'}^\dagger]=\delta_{kk'}$, the results in the previous sections, including the exact discrete analogue of the Schr\"{o}dinger equation and the commutation relation between the position and momentum operators, remain the same with the fermionic quantization relation $\{a_k, a_{k'}^\dagger\} = \delta_{kk'}$. A comparison between the exact discrete analogue of the Schr\"{o}dinger equation and the standard central difference equation is made by numerically studying the transmission probability for a particle passing through a square potential barrier in one-dimensional space. In this quantum scattering problem, both discretization schemes give sensible and accurate results compared to the theoretical calculation. Numerical calculations using the exact discretization formula usually take more computation time, since one needs to sum up contributions from the wavefunction at all spatial sites. On the other hand, the condition $|k_0\ell|<0.35$ must be satisfied to obtain an accurate numerical result with the standard central difference equation. This sometimes means that a smaller lattice spacing $\ell$, and thus a larger number of spatial lattice sites, must be chosen in the numerical calculation. Conceptually, the exact discretized Schr\"{o}dinger equation is more quantum-like than the standard central discretization, since it allows particles to jump to remote sites via the hopping interaction terms $a_{x+m\ell}^\dagger a_x$ in the Hamiltonian operator. The hopping interactions originate from the free Hamiltonian operator and play important roles in particle transport. The theory that leads to the standard central discretization simply does not have such hopping terms, except for jumps to the nearest-neighboring sites $a_{x\pm\ell}^\dagger a_x$. \input{Exact.bbl} \end{document}
\section{Introduction} Mali has a literacy rate of 35\%, fifth-to-last in the world, which coincidentally is also its human development index ranking \citep{con2019hdr}. A survey by \cite{bleck2015education} revealed a strong correlation between use of French and the knowledge needed for informed citizenship and political engagement. Bambara, the vehicular language of Mali, is spoken by about 80\% of the population; French, the official language of Mali and the language of written communication, is spoken by an estimated 21\% \citep{lafage1993french}. Machine translation is a promising technology \citep{DBLP:journals/corr/abs-1906-05685} for dramatically increasing Bambara speakers' access to information that can impact education, health, and economic development. Bambara has a writing system that is taught in many primary schools, but its actual use outside the classroom remains rare compared to French. It is a severely under-resourced language for machine translation, as the quantity of bilingual aligned texts is extremely small. We explore the feasibility of crowdsourcing to create aligned translations from French to Bambara. Our primary source for translators will likely be Malian university students, as they are necessarily fluent in French, the language of instruction in Mali. However, many students have not learned to read or write Bambara. Thus, we assessed the quality of translations from written French to spoken Bambara and compared them to those from written French to written Bambara. \section{Methods} Seven translators participated, four using the written-written and three the written-oral method. The texts were an excerpt of a Wikipedia article on a Mali-related topic (195 words, 10 sentences) and introductions to stories from Malian televised news broadcasts (428 words, 22 sentences). We chose a Bambara speaker who self-identified as having an advanced level of reading and writing knowledge to evaluate the translations.
Having confirmed with the Académie Malienne des Langues, the official government agency for national languages, that no standard assessment of Bambara competence exists, we obtained BLEU scores for the evaluator's translations of three sentences from a published bilingual medical text, using the text's Bambara translations as the reference. The evaluator obtained an average score of 0.40. The evaluator rated the translations qualitatively according to three criteria: the extent to which the translation accurately conveyed the information in the source text, the literal closeness of the Bambara translation to the source text, and the extent to which the translation used standard Bambara orthography (if written) and vocabulary (both written and oral). The evaluation was measured on a percentage scale where 0\% would be given to a translation that bore no relation to the source text and 100\% to a translation that perfectly represented the source text according to the specific criteria. We also measured the translations' BLEU scores. Since we lacked reference translations for these texts, we ranked our translations pairwise, with each one in turn serving as a reference to the others. We gave the translators minimal written guidelines in French, with additional oral guidance in Bambara and French. As we wanted to simulate conditions using a large number of unsupervised non-professional translators, we instructed the participants to make only a single pass at the translation, not to use reference sources, and to make a best approximation in the event that they were unsure of a translation. Our experiment showed that more detailed guidelines are necessary, and it also indicated various kinds of post-processing of the translations that would be helpful.
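The pairwise BLEU ranking described above can be sketched as follows. This is a minimal, illustrative re-implementation (add-one smoothed modified n-gram precisions with a brevity penalty, applied to invented English stand-in sentences), not the scoring tool actually used in the study.

```python
import math
from collections import Counter

def bleu(candidate, reference, max_n=4):
    """Toy BLEU sketch: geometric mean of add-one smoothed modified
    n-gram precisions times a brevity penalty. Illustrative only."""
    c, r = candidate.split(), reference.split()
    log_p = 0.0
    for n in range(1, max_n + 1):
        cand = Counter(tuple(c[i:i + n]) for i in range(len(c) - n + 1))
        ref = Counter(tuple(r[i:i + n]) for i in range(len(r) - n + 1))
        overlap = sum(min(cnt, ref[g]) for g, cnt in cand.items())
        total = max(sum(cand.values()), 1)
        log_p += math.log((overlap + 1) / (total + 1)) / max_n
    bp = 1.0 if len(c) >= len(r) else math.exp(1 - len(r) / len(c))
    return bp * math.exp(log_p)

# Pairwise scoring: each translation serves in turn as the reference.
# The sentences below are hypothetical stand-ins, not study data.
translations = {
    "t1": "the meeting aimed to discuss new strategies",
    "t2": "the meeting aimed to discuss fresh strategies",
    "t3": "people gathered to talk about many other things",
}
pair_scores = {(a, b): bleu(translations[a], translations[b])
               for a in translations for b in translations if a != b}
best_pair = max(pair_scores, key=pair_scores.get)
```

The two near-identical stand-ins score highest against each other, mirroring how the highest-scoring pair identifies the most mutually consistent translations.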
For example, proper nouns were translated in many different ways: kept in their French form; phonetically Bambara-ized, with forms reflecting differences in pronunciation; or, occasionally, written using the standard Bambara spelling. Numbers were sometimes written as words, other times as numerals, and sometimes as words followed by the numeric value in parentheses, a convention often used in Malian texts. Contractions and agglutinated words were sometimes contracted or agglutinated, sometimes not. It is likely that, under a more tightly controlled translation regime, the BLEU scores would be much higher. \section{Results} Table~\ref{tab:summary} shows the results for the Malian news broadcast and the Wikipedia article. The differences in average scores between the news broadcast and the Wikipedia article, aside from the small sample size, most probably reflect the different challenges of the texts. The news broadcast is essentially an oral text, and it is easier to reproduce the exact meaning with a more colloquial style. The Wikipedia article has long and complex sentences, making it easier to miss details and nuances while the translator hews closer to the French source and falls back on more formal standard Bambara. The translations of the news broadcast showed a relatively limited difference in meaning and use of standard Bambara between written and oral translations, but a significant difference in the literalness of the translations. The relatively large standard deviations shown in Table~\ref{tab:stddev} indicate a wide range of quality between translators and translations, suggesting that screening translations based on basic quality metrics may be necessary and effective.
\begin{table} \begin{tabular}{r|rrr|rrr} & \multicolumn{3}{c|}{Malian news} & \multicolumn{3}{c}{Wikipedia}\\ &Written&Oral&Overall&Written&Oral&Overall\\ \hline Exact meaning & 0.830 & 0.770 & 0.840 & 0.870 & 0.530 & 0.730\\ Literalness & 0.730 & 0.530 & 0.640 & 0.830 & 0.580 & 0.760\\ Standard Bambara & 0.740 & 0.790 & 0.760 & 0.830 & 0.850 & 0.830\\ Highest BLEU Pair & 0.408 & 0.363 & 0.408 & 0.645 & 0.377 & 0.645 \end{tabular} \caption{Malian news broadcast and Wikipedia article translation ratings and BLEU scores.} \label{tab:summary} \end{table} \begin{table} \begin{tabular}{r|rrr|rrr} & \multicolumn{3}{c|}{Malian news} & \multicolumn{3}{c}{Wikipedia}\\ &Written&Oral&Overall&Written&Oral&Overall\\ \hline Exact meaning & 0.181 & 0.208 & 0.197 & 0.206 & 0.186 & 0.220\\ Literalness & 0.130 & 0.298 & 0.243 & 0.238 & 0.211 & 0.244\\ Standard Bambara & 0.234 & 0.178 & 0.207 & 0.171 & 0.068 & 0.144\\ \end{tabular} \caption{Standard deviations of the translation scores.} \label{tab:stddev} \end{table} Here is an example of written and oral translations of a French text, followed by the qualitative and quantitative evaluation of quality. French Source\\ Objectif réfléchir à de nouvelles stratégies de lutte contre le terrorisme qui continue de faire des victimes dans le sahel. English\\ Objective to reflect on new strategies to fight terrorism which continues to claim victims in the Sahel. Written Translation\\ Laj$\varepsilon$ ni kun tun ye ka hakili jakab\rotatecharone{c} k$\varepsilon$ f$\varepsilon\varepsilon$r$\varepsilon$ kuraw la banbaanciw juguya la miniw b$\varepsilon$ ka ciy$\varepsilon$nni k$\varepsilon$ Saheli k\rotatecharone{c}n\rotatecharone{c}n\rotatecharone{c}n\rotatecharone{c}na la Oral Translation\\ A kun tun ye ka miriya kuraw ta ka banbaanciw k$\varepsilon$l$\varepsilon$li sira kan o mun bi ka k$\varepsilon$ sababu ye ka fagali caman k$\varepsilon$ saheli k\rotatecharone{c}n\rotatecharone{c}nana The highest scoring BLEU pairs in all but one of the aligned translations from the news source were between oral and written translation methods.
In the one remaining case, written-written and written-oral pairs had approximately the same high BLEU scores, the scores being the highest from among all the news source translations. The translations of the Wikipedia article show that the meaning of the text was captured much better in the written-to-written translations. With only one exception, the highest scoring BLEU pairs were the written-to-written translations. These results suggest that written-to-written translation may be best for more complex texts, while oral translation works well on simple texts. \section{Conclusion} Our experiments increased our confidence that oral translation using non-professional translators is a viable process for at least part of our effort to collect aligned texts for Bambara. Given that our pool of potential translators consists primarily of students who do not read or write Bambara, this lends support to the thesis that properly marshalling these resources can significantly contribute to a machine translation effort for our under-resourced, predominantly oral language. Our tests suggest that initial efforts with oral translation might best focus on simpler French texts imbued with oral speech patterns. Having collected and compared a small sample of translations, it was easy to identify a large number of semantically inconsequential differences in translation that might be normalized to produce better machine translation outcomes, either through translator guidelines or by automated text cleaning.
\section{Introduction} In the post-Moore era ahead, several design factors and fabrication constraints increasingly emphasize the requirements for in-circuit adaptation to as-built variations. These include device scaling trends towards further reductions in feature sizes~\cite{Nikonov2015}, the narrow operational tolerances associated with the deployment of hybrid Complementary Metal Oxide Semiconductor (CMOS) and post-CMOS devices~\cite{Ghosh2010,Ghosh2016}, and the noise sensitivity limits of analog-assisted neuromorphic computing paradigms \cite{Liu2013}. While many recent works have advanced new architectural approaches for the evaluation phase of neuromorphic computation utilizing emerging hardware devices, there have been comparatively fewer works investigating the hardware-based realization of the training and adaptation phases that will also be required to cope with these conditions. Thus, this paper develops one of the first viable approaches to in-situ post-fabrication adaptation and retraining of resistive weighted arrays in hardware, which are ubiquitous in post-Moore neuromorphic approaches. Namely, a tractable reconfiguration-based approach is developed that leverages in-field configurability to mitigate the impact of process variation. Reconfigurable fabrics are characterized by their fabric flexibility, which allows realization of logic elements at medium and fine granularities, as well as by their in-field adaptability, which can be leveraged to realize variation tolerance and fault resiliency, as widely demonstrated for CMOS-based approaches such as \cite{Oreifej2018,Ashraf2013}. Utilizing reconfigurable computing by applying hardware and time redundancy to digital circuits offers promising and robust techniques for addressing the above-mentioned reliability challenges.
For instance, it is shown in \cite{Ashraf2013} that a successful refurbishment of a circuit with 1,252 look-up tables (LUTs) can be achieved with only 10\% spare resources to accommodate both soft and hard faults. Within the post-Moore era, reconfigurable fabrics can also be expected to continue their transition towards embracing the benefits of increased heterogeneity along several cooperating dimensions to facilitate neuromorphic computation~\cite{DeMara2017}. Since the inception of the first field-programmable devices, various granularities of general-purpose reconfigurable logic blocks and dedicated function-specific computational units have been added to their structures. These have resulted in increased computational functionality compared to homogeneous architectures. In recent years, emerging technologies have been proposed for reconfigurable fabrics to advance new transformative opportunities for exploiting technology-specific advantages. Technology heterogeneity recognizes the cooperating advantages of CMOS devices for their rapid switching capabilities, while simultaneously embracing emerging devices for their non-volatility, near-zero standby power, high integration density, and radiation-hardness. For instance, spintronic-based LUTs are proposed in \cite{Zandphys,Suzuki2015,Yang2018} as the primary building blocks in reconfigurable fabrics, realizing significant area and energy savings. In this paper, we extend the transition toward heterogeneity along various logic paradigms by proposing a heterogeneous-technology fabric realizing both probabilistic and deterministic computational models. The cooperating advantages of each are leveraged to address the deficiencies of the other during the neuromorphic training and evaluation phases, respectively.
In this paper, we propose a spintronic neuromorphic reconfigurable array (SNRA) that uses probabilistic spin logic devices to realize deep belief network (DBN) architectures while leveraging deterministic computing paradigms to achieve in-circuit training and evaluation. Most previous DBN research has focused on software implementations, which provide flexibility but require significant execution time and energy due to large matrix multiplications that are relatively inefficient when implemented on standard von Neumann architectures. Previous hardware-based implementations of RBMs have sought to overcome these limitations by using FPGAs \cite{kim2010large,le2010high}, stochastic CMOS \cite{ardakani2017vlsi}, and hybrid memristor-CMOS designs \cite{bojnordi2016memristive}. Recently, Zand et al. \cite{ZandGLSVLSI} utilized a spintronic device that leverages intrinsic thermal noise within low energy barrier nanomagnets to provide a natural building block for RBMs. While most of the aforementioned designs focus only on the test operation, the work presented herein concentrates on leveraging technology heterogeneity to implement training and evaluation circuitry for DBNs with various network topologies on our proposed SNRA fabric.
\begin{figure} \centering \includegraphics[scale=0.42]{Fig1.png} \caption{(a) An RBM structure, (b) a 3$\times$3 RBM implemented by a 4$\times$4 crossbar architecture, (c) a DBN structure including multiple hidden layers.} \label{fig:crossbar} \end{figure} \section{Restricted Boltzmann Machines} Restricted Boltzmann machines (RBMs) are a class of recurrent stochastic neural networks, in which each state of the network, \textit{k}, has an energy determined by the connection weights between nodes and the node biases as described by (1), where $s_i^k$ is the state of node \textit{i} in \textit{k}, \textit{b\textsubscript{i}} is the bias, or intrinsic excitability, of node \textit{i}, and \textit{w\textsubscript{ij}} is the connection weight between nodes \textit{i} and \textit{j} \cite{ackley1985}. \begin{equation} E(k) = -\sum_{i} s_i^k b_i -\sum_{i<j} s_i^k s_j^k w_{ij} \end{equation} Each node in an RBM has a probability of being in state one according to (2), where $\sigma$ is the sigmoid function. RBMs, when given sufficient time, reach a Boltzmann distribution, where the probability of the system being in state \textit{\textbf{v}} is found by (3), where \textit{\textbf{u}} could be any possible state of the system. Thus, the system is most likely to be found in states that have the lowest associated energy. \begin{equation} P(s_i = 1) = \sigma (b_i + \sum_{j} w_{ij} s_j) \end{equation} \begin{equation} P(v) = \frac{e^{-E(v)}}{\sum_{u} e^{-E(u)}} \end{equation} RBMs are constrained to two fully-connected non-recurrent layers called the \textit{visible layer} and the \textit{hidden layer}. They can be readily implemented by a crossbar architecture, as shown in Fig.~\ref{fig:crossbar}(b). The most well-known approach for training RBMs is contrastive divergence (CD), which is an approximate gradient descent procedure using Gibbs sampling \cite{carreira2005}. CD operates in four steps as described below: 1.
\textit{Feed-forward:} the training input vector, \textbf{$v$}, is applied to the visible layer, and the hidden layer, \textbf{$h$}, is sampled. 2. \textit{Feed-back:} The sampled hidden layer output is fed back and the generated input is sampled, \textbf{$v'$}. 3. \textit{Reconstruct:} \textbf{$v'$} is applied to the visible layer and the reconstructed hidden layer is sampled to obtain \textbf{$h'$}. 4. \textit{Update:} The weights are updated according to (4), where $\eta$ is the learning rate and \textbf{$W$} is the weight matrix. \begin{equation} \Delta W = \eta (vh^T-v'h'^T) \end{equation} RBMs can be readily stacked to realize a DBN, which can be trained similarly to RBMs. Training a DBN involves performing CD on the visible layer and the first hidden layer for as many steps as desired, then fixing those weights and moving up a hierarchy as follows. The first hidden layer is now viewed as a visible layer, while the second hidden layer acts as a hidden layer with respect to the CD procedure identified above. Next, another set of CD steps is performed, and then the process is repeated for each additional layer of the DBN. \begin{figure} \centering \includegraphics[scale=0.50]{Fig2.png} \caption{(a) A 4$\times$2 RBM hardware implementation, (b) SHE-DWM based weighted connections, and (c) p-bit based probabilistic neuron \cite{Camsari2017}.} \label{fig:array} \end{figure} \section{Proposed RBM Structure} A feasible hardware implementation of a 4$\times$2 RBM structure is shown in Fig.~\ref{fig:array}(a), in which three-terminal spin Hall effect (SHE)-driven domain wall motion (DWM) devices \cite{Sengupta2016} are used as weights and biases, while probabilistic spin logic devices (p-bits) are utilized to produce a probabilistic output voltage that has a sigmoidal relation with the input currents of the devices, as shown in Fig.~\ref{fig:array}(b) and Fig.~\ref{fig:array}(c), respectively.
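The four CD steps above can be sketched in software as a single CD-1 update for a binary RBM. This is a hedged illustration of Eq. (4) only (the learning rate, initialization, and dimensions below are arbitrary), not the hardware procedure itself.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(W, b_v, b_h, v, eta=0.1):
    """One contrastive-divergence (CD-1) update for a binary RBM,
    following steps 1-4: feed-forward, feed-back, reconstruct, update."""
    p_h = sigmoid(b_h + W.T @ v)                        # 1. feed-forward
    h = (rng.random(p_h.shape) < p_h).astype(float)
    p_v = sigmoid(b_v + W @ h)                          # 2. feed-back
    v_prime = (rng.random(p_v.shape) < p_v).astype(float)
    p_h_prime = sigmoid(b_h + W.T @ v_prime)            # 3. reconstruct
    h_prime = (rng.random(p_h_prime.shape) < p_h_prime).astype(float)
    # 4. update, Eq. (4): Delta W = eta * (v h^T - v' h'^T)
    return W + eta * (np.outer(v, h) - np.outer(v_prime, h_prime))

# Toy 4x2 RBM matching the example dimensions in the text.
W = rng.normal(scale=0.1, size=(4, 2))
v = np.array([1.0, 0.0, 1.0, 0.0])
W_new = cd1_step(W, np.zeros(4), np.zeros(2), v)
```

Since the sampled states are binary, each weight changes by at most $\eta$ per step, mirroring how the training voltage amplitude bounds the per-iteration change of a hardware weight.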
The p-bit device consists of a SHE-driven magnetic tunnel junction (MTJ) with a circular near-zero energy barrier nanomagnet, which provides a natural sigmoidal activation function required for DBNs as studied in \cite{Camsari2017,Faria2017,sutton2017,behin2016}. Transmission gates (TGs) are used within the bit cell of the weighted connections to adjust the weights by changing the domain wall (DW) position in SHE-DWM devices, as well as to control the RBM operation phases. TGs can provide an energy-efficient and symmetric switching behavior \cite{zandTVLSI2017}, which is specifically desired during the training operation. \begin{table}[] \centering \caption{Required signaling to control the RBM operation phases.} \label{tab:signaling} \begin{tabular}{llllll} \hline \multicolumn{2}{l}{Operation Phase} & WWL & RWL & BL & SL \\ \hline \multicolumn{2}{l}{Feed-Forward / Test} & \multirow{3}{*}{GND} & \multirow{3}{*}{VDD} & \multirow{3}{*}{Hi-Z} & \multirow{3}{*}{Hi-Z} \\ \multicolumn{2}{l}{Reconstruct} & & & & \\ \multicolumn{2}{l}{Feed-Back} & & & & \\ \hline \multirow{2}{*}{Update} & Increase Weight & \multirow{2}{*}{VDD} & \multirow{2}{*}{GND} & Vtrain & GND \\ & Decrease Weight & & & GND & Vtrain \\ \hline \end{tabular} \end{table} Table~\ref{tab:signaling} lists the required signaling to control the RBM's training and test operations.
During the feed-forward, feed-back, and reconstruct operations, the write word line (WWL) is connected to ground (GND), and the bit line (BL) and source line (SL) are both in the high impedance (Hi-Z) state, disconnecting the write path. The read word line (RWL) is connected to VDD, which turns ON the read TGs in the weighted connection bit cell shown in Fig.~\ref{fig:array}(b). The voltage applied by the input neuron generates a current through TG1 and TG2, which is then injected into the output neuron and modulates the output probability of the p-bit device. The amplitude of the current depends on the resistance of the weighted connection, which is defined by the position of the DW in the SHE-DWM device. During the update phase, the RWL is connected to GND, which turns off TG1 and TG2 and disconnects the read path. Meanwhile, the WWL is set to VDD, which activates the write path. The resistance of the weighted connections can be adjusted by the BL and SL signals, as listed in Table~\ref{tab:signaling}. The amplitude of the training voltage (Vtrain) connected to BL and SL should be designed in a manner such that it can provide the desired learning rate, $\eta$, to the training circuit. For instance, a high amplitude $Vtrain$ results in a significant change in the DW position in each training iteration, which effectively reduces the number of different resistive states that can be realized by the SHE-DWM device. On the other hand, a higher SHE-DWM resistance leads to a smaller current injected into the p-bit device. Thus, the input signal connected to the weighted connection with higher resistance will have a lower impact on the output probability of the p-bit device, representing a lower weight for the corresponding connection between the input and output neurons.
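The read-phase behavior described above, in which each weighted connection's resistance sets the current injected into the p-bit and the p-bit's output probability follows a sigmoid of its total input current, can be sketched in normalized units. All device parameters below are hypothetical.

```python
import math

def pbit_probability(v_inputs, resistances, i0=1.0):
    """Normalized sketch: input-neuron voltages drive currents through
    weighted connections (Ohm's law), and the p-bit output probability
    is a sigmoid of the summed current. i0 is a hypothetical scale."""
    current = sum(v / r for v, r in zip(v_inputs, resistances))
    return 1.0 / (1.0 + math.exp(-current / i0))

# Raising one connection's resistance lowers its influence on the
# output probability, i.e., it represents a smaller weight.
p_low_r = pbit_probability([1.0, 1.0], [1.0, 1.0])
p_high_r = pbit_probability([1.0, 1.0], [1.0, 10.0])
```

With the second resistance raised tenfold, the output probability drops, illustrating the resistance-to-weight mapping in the text.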
\begin{figure} \centering \includegraphics[scale=0.48]{Fig4.png} \caption{FSM designed to control the training and test operations in a DBN.} \label{fig:FSM} \end{figure} \begin{figure*} \centering \includegraphics[scale=0.70]{Fig3.pdf} \vspace{-0.3cm} \caption{The hardware realization for the \textit{update} state in the FSM developed to train a 4$\times$2 RBM, (a) first clock cycle, and (b) second clock cycle.} \label{fig:update} \end{figure*} \section{Proposed Hardware Implementation of Contrastive Divergence Algorithm} To implement the contrastive divergence (CD) algorithm required for training the weights in an RBM structure, we have designed a four-state finite state machine (FSM) as shown in Fig.~\ref{fig:FSM}. The proposed FSM is in the \textit{feed-forward} state during the test operation. When the training begins, the input of the visible layer and the corresponding output of the hidden layer will be stored in the \textit{\textbf{v}} and \textit{\textbf{h}} registers, respectively. The sizes of the \textbf{\textit{v}} and \textbf{\textit{h}} registers depend on the number of neurons in the visible and hidden layers. For instance, in the sample 4$\times$2 RBM shown in Fig.~\ref{fig:array}, the sizes of the \textit{\textbf{v}} and \textbf{\textit{h}} registers are 4 bits and 2 bits, respectively. In the \textit{feed-back} state, the sampled hidden layer is fed back to the RBM array and the corresponding output of the visible layer is stored in the \textbf{\textit{v\_bar}} register. Next, the stored values in \textbf{\textit{v\_bar}} are applied to the RBM to reconstruct the hidden layer, and the obtained output of the hidden layer will be stored in the \textbf{\textit{h\_bar}} register. Finally, in the \textit{update} state, the data stored in the \textit{\textbf{v}}, \textbf{\textit{h}}, \textbf{\textit{v\_bar}}, and \textbf{\textit{h\_bar}} registers are used to provide the required BL and SL signals to adjust the weights according to (4).
Figure~\ref{fig:update} depicts the schematic of the hardware designed for the \textit{update} state of the FSM developed for a 4$\times$2 RBM. In each clock cycle, the designed circuit adjusts the weights in a single column of the RBM shown in Fig.~\ref{fig:array}. Thus, the number of clock cycles required to complete the update state depends on the number of neurons in the hidden layer of the RBM. A \textit{counter} register is used in the design to ensure that all of the columns in the RBM are updated. The counter value starts from zero and is incremented in each clock cycle until it reaches the $h_n$ value, which is the total number of nodes in the hidden layer. Once the counter reaches $h_n$, the update state is completed and the FSM goes to the \textit{feed-forward} state. Logical AND gates are used to implement the $vh^T$ and $v'h'^T$ expressions required to find $\Delta W$ for the weights in each column. The outputs of the Boolean gates implementing $vh^T$ and $v'h'^T$ are stored in the \textit{\textbf{BL\_reg}} and \textbf{\textit{SL\_reg}} registers, respectively, which provide the required signaling for adjusting the weights according to Table~\ref{tab:signaling}.
Herein, to better understand the functionality of the hardware developed for the \textit{update} state, we have used an example with the $v$, $h$, $v'$, and $h'$ matrices having the hypothetical values shown below: $$ v= \begin{bmatrix} v_0\\ v_1\\ v_2\\ v_3 \end{bmatrix} = \begin{bmatrix} 1\\ 0\\ 1\\ 0 \end{bmatrix} \quad h= \begin{bmatrix} 1 \\ 0 \end{bmatrix} \quad v'= \begin{bmatrix} 0\\ 0\\ 1\\ 0 \end{bmatrix} \quad h'= \begin{bmatrix} 0 \\ 1 \end{bmatrix} $$ Hence, $\Delta W$ can be calculated using (4) as shown below: $$ \Delta W= \eta (vh^T-v'h'^T) = \eta \begin{bmatrix} 1 & 0\\ 0 & 0\\ 1 & -1\\ 0 & 0 \end{bmatrix} = \begin{bmatrix} \delta w_{00} & \delta w_{01}\\ \delta w_{10} & \delta w_{11}\\ \delta w_{20} & \delta w_{21}\\ \delta w_{30} & \delta w_{31} \end{bmatrix} $$ \begin{figure} \centering \includegraphics[scale=0.6]{Fig5.png} \caption{The output signals generated by the proposed FSM. The clock frequency is 500~MHz, which can be modified based on the design requirements.} \label{fig:wave} \end{figure} \begin{figure*} \centering \includegraphics[scale=0.48]{Fig6.png} \caption{(a) The schematic of the hardware designed to control the testing and training operations of a 4$\times$2 RBM implemented on a Xilinx Kintex-7 FPGA family, (b) the structure of a 6-input SHE-MTJ based fracturable LUT used as the building block of the proposed SNRA architecture.} \label{fig:schematic} \end{figure*} According to the obtained $\Delta W$, $w_{21}$ should be decreased while $w_{00}$ and $w_{20}$ increase, and the remaining weight values remain unchanged. The hardware realization of the mentioned example is shown in Fig.~\ref{fig:update}, in which the values stored in the registers are \textit{\textbf{v}}=4'b0101, \textbf{\textit{h}}=2'b01, \textbf{\textit{v\_bar}}=4'b0100, and \textbf{\textit{h\_bar}}=2'b10.
It is worth noting that the $v_0$ element in the $v$ matrix is stored in the least significant bit of the \textit{\textbf{v}} register, while $v_3$ is stored in the most significant bit. The other matrices are stored in their corresponding registers in a similar manner. In this example, the RBM has two output neurons; therefore, $h_n$ is equal to two and the update operation can be completed in two clock cycles. In the first cycle shown in Fig.~\ref{fig:update}(a), the counter is equal to zero and the first bits of the \textbf{\textit{h}} and \textbf{\textit{h\_bar}} registers are selected by the multiplexers to be used as the inputs of the AND gates. Therefore, the following BL and SL signals are generated: $$ BL= \begin{bmatrix} BL0\\ BL1\\ BL2\\ BL3 \end{bmatrix} = \begin{bmatrix} 1\\ 0\\ 1\\ 0 \end{bmatrix} \quad SL= \begin{bmatrix} SL0\\ SL1\\ SL2\\ SL3 \end{bmatrix} = \begin{bmatrix} 0\\ 0\\ 0\\ 0 \end{bmatrix} $$ As listed in Table~\ref{tab:signaling}, the above BL and SL signals will increase the $w_{00}$ and $w_{20}$ weights shown in Fig.~\ref{fig:array}, if the WWL0 and WWL1 signals are ``1'' and ``0'', respectively. Similarly, in the second clock cycle, the counter is equal to one and the second bits of the \textbf{\textit{h}} and \textbf{\textit{h\_bar}} registers are used to produce the following BL and SL signals: $$ BL= \begin{bmatrix} BL0\\ BL1\\ BL2\\ BL3 \end{bmatrix} = \begin{bmatrix} 0\\ 0\\ 0\\ 0 \end{bmatrix} \quad SL= \begin{bmatrix} SL0\\ SL1\\ SL2\\ SL3 \end{bmatrix} = \begin{bmatrix} 0\\ 0\\ 1\\ 0 \end{bmatrix} $$ This results in a decrease in the $w_{21}$ weight, while the other weights remain unchanged. Thus, the proposed hardware provides the desired functionality required for the \textit{update} state according to (4). Herein, we have used the Verilog hardware description language (HDL) to implement our proposed four-state FSM. The ModelSim simulator is used to simulate the developed register-transfer level (RTL) Verilog codes.
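The column-by-column behavior of the worked example can be cross-checked in a few lines: per clock cycle, AND gates compute one column of $vh^T$ (driving BL) and one column of $v'h'^T$ (driving SL), and their difference reproduces $\Delta W/\eta$.

```python
import numpy as np

# Register contents from the worked example (v0 in the least significant bit).
v = np.array([1, 0, 1, 0])
h = np.array([1, 0])
vp = np.array([0, 0, 1, 0])   # v_bar
hp = np.array([0, 1])         # h_bar

delta_W = np.outer(v, h) - np.outer(vp, hp)   # Delta W / eta from Eq. (4)

for j in range(len(h)):       # one clock cycle per hidden node
    BL = v & h[j]             # AND gates: column j of v h^T
    SL = vp & hp[j]           # AND gates: column j of v' h'^T
    assert np.array_equal(BL - SL, delta_W[:, j])
```

Cycle 0 yields BL = [1,0,1,0] with SL all zero (increasing $w_{00}$ and $w_{20}$), and cycle 1 yields an SL pulse on the row of $v'_2$ (decreasing $w_{21}$), matching the signaling rules of Table II.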
Figure~\ref{fig:wave} shows the obtained waveforms required for training a 4$\times$2 RBM array with the hypothetical register values mentioned above. The results show that the desired BL, SL, RWL, and WWL control signals are generated in five clock cycles, which verifies the functionality of our proposed FSM. To obtain the hardware resources required for our proposed DBN control circuitry, we have synthesized and implemented it using Xilinx ISE Design Suite 14.7. The schematic of the hardware developed to control the testing and training operations for a 4$\times$2 RBM is shown in Fig.~\ref{fig:schematic}(a), in which 32 six-input fracturable look-up table (LUT) and flip-flop (FF) pairs are used to implement both sequential and combinational logic. It is worth noting that, of the 32 LUT-FF pairs, only three are utilized for the test operation; thus, roughly 90\% of the circuit can be power-gated during the test operation. However, in conventional homogeneous-technology FPGAs, volatile static random access memory (SRAM) cells are employed in LUTs to store the logic function configuration data. Therefore, by power-gating the SRAM-based LUTs, the configuration data would be lost and the FPGA would need to be re-programmed. In addition to volatility, SRAM cells also suffer from high static power and low logic density \cite{Kuon2007}. Hence, emerging memory technologies have been attracting considerable attention in recent years as alternatives to SRAM cells. \section{The Proposed SNRA Architecture} Herein, we propose a heterogeneous-technology spintronic neuromorphic reconfigurable array (SNRA), which combines both deterministic and probabilistic logic paradigms. The SNRA fabric is organized into islands of probabilistic modules surrounded by Boolean configurable logic blocks (CLBs). Both the probabilistic and deterministic elements are field programmable using a configuration bit-stream based on conventional FPGA programming paradigms.
Herein, the probabilistic modules consist of RBMs, which can be connected hierarchically within the field-programmable fabric to form various topologies of DBNs. Each RBM leverages SHE-MTJs with unstable nanomagnets ($\Delta \ll 40kT$) to generate the probabilistic sigmoidal activation function of the neurons. With respect to the deterministic logic, the CLBs are composed of LUTs that realize the training and evaluation circuitry. Non-volatile high energy barrier ($\Delta \geq 40kT$) SHE-MTJ devices are used as an alternative to SRAM cells within the LUT circuits. The routing networks include routing tracks, as well as switch and connection blocks, similar to those of conventional FPGAs. The feasibility of integrating MTJs and CMOS technologies in an FPGA chip was verified in 2015 by researchers at Tohoku University \cite{Suzuki2015}. They fabricated a nonvolatile FPGA with 3,000 six-input MTJ-based LUTs in 90nm CMOS and 75nm MTJ technologies. The measurements of the fabricated devices under representative applications exhibited significant improvements in terms of power consumption and area. Despite these improvements, conventional spin transfer torque (STT)-based MTJ devices suffer from high switching energy and reliability issues. Thus, we propose using SHE-MTJ based LUT circuits with reduced switching energy and increased reliability of the tunneling oxide barrier \cite{Manipatruni2014}. Readers are referred to \cite{Fong2016} for additional information regarding the STT-MTJ and SHE-MTJ devices. Figure~\ref{fig:schematic}(b) shows the structure of a six-input SHE-MTJ based fracturable LUT \cite{zandTNANO}, which can implement a six-input Boolean function or two five-input Boolean functions with common inputs. In general, a LUT is a memory with $2^m$ cells in which the truth table of an $m$-input Boolean function is stored.
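The truth-table semantics of a LUT, independent of the storage technology, can be sketched as a $2^m$-entry table addressed by the inputs. The MSB-first bit ordering below is an arbitrary choice for illustration.

```python
def lut_eval(config_bits, inputs):
    """Minimal LUT sketch: config_bits is a list of 2^m truth-table bits
    (standing in for the SHE-MTJ resistive states); the m input bits
    (MSB first, an illustrative convention) form the address of one bit."""
    addr = 0
    for b in inputs:
        addr = (addr << 1) | b
    return config_bits[addr]

# A 2-input XOR stored as a 4-entry table: inputs 00,01,10,11 -> 0,1,1,0.
xor_table = [0, 1, 1, 0]
```

A six-input LUT is the same structure with $2^6 = 64$ stored bits, and a fracturable LUT exposes two halves of that table under shared inputs.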
The logic function configuration data is stored in the SHE-MTJs in the form of different resistance levels determined by the magnetization configuration of the ferromagnetic layers in the MTJs, i.e., the parallel configuration results in a lower resistance representing logic ``0'' and vice versa. The LUT inputs can be considered as the address according to which the corresponding output of the Boolean function is returned through the select tree. The LUT circuit shown in Fig.~\ref{fig:schematic}(b) includes two pre-charge sense amplifiers (PCSAs) that are used to read the logic state of the SHE-MTJs. The PCSA compares the stored resistive value of the SHE-MTJ cells in the LUT circuit with a reference MTJ cell whose resistance is designed to lie between the low and high resistances of the LUT's SHE-MTJ cells. Therefore, if the resistive value of a SHE-MTJ cell in the LUT circuit is greater than the resistance of the reference cell, the output of the PCSA will be ``1'' and vice versa. The readers are referred to \cite{zandTNANO} for additional information regarding the functionality of a SHE-MTJ based LUT circuit. \section{Results and Discussions} Herein, we have modified a MATLAB implementation of a DBN developed in \cite{Tanaka2014} and utilized the MNIST data set \cite{Lecun1998} to calculate the error rate and evaluate the performance of our DBN architecture. The simplest belief network that can be used for MNIST digit recognition includes a single RBM with 784 nodes in the visible layer to handle the $28 \times 28$ pixels of the input images, and 10 nodes in the hidden layer representing the output classes. Herein, we have examined the error rate for five different network topologies using 1,000 test samples, as shown in Fig.~\ref{fig:err}. As expected, increasing the number of hidden layers, nodes, and training images improves the performance of the DBN; however, these improvements are realized at the cost of higher area and power dissipation.
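The area and storage cost of the deeper topologies can be made concrete by counting their parameters. The following sketch is illustrative only: the layer sizes are those examined above, the counting convention (one weight per visible-hidden pair in each stacked RBM, plus one bias per node) is the standard one for fully connected RBMs, and the function name is ours.

```python
# Illustrative parameter count for the DBN topologies examined in the text.
# Adjacent layers form a fully connected RBM, so each pair of layers
# contributes n_visible * n_hidden weights; every node carries one bias.

def dbn_parameter_count(layers):
    """Total number of weights and biases in a DBN with the given layer sizes."""
    weights = sum(a * b for a, b in zip(layers, layers[1:]))
    biases = sum(layers)
    return weights + biases

topologies = [
    [784, 10],
    [784, 500, 10],
    [784, 800, 10],
    [784, 500, 500, 10],
    [784, 800, 800, 10],
]

for layers in topologies:
    name = "x".join(str(n) for n in layers)
    print(f"{name}: {dbn_parameter_count(layers):,} parameters")
```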
\begin{figure} \centering \includegraphics[scale=0.7]{Fig7.png} \caption{Error rate vs. training samples for various DBN topologies \cite{ZandGLSVLSI}.} \label{fig:err} \end{figure} To compare the resource utilization between the five network topologies investigated in this paper, we have used Xilinx ISE Design Suite 14.7 to implement their control circuitry based on the FSM design proposed in Section IV. The obtained logic resource utilization for each of the mentioned DBN topologies is listed in Table~\ref{tab:resources}. Since the training operations in different layers of the DBN do not happen simultaneously, the resources can be shared for training each RBM. Therefore, the amount of logic resources utilized to implement the FSM of a DBN depends on the size of the largest RBM in the network. For instance, as listed in Table~\ref{tab:resources}, the resource utilization for training a 784$\times$500$\times$10 DBN is equal to that of a 784$\times$500$\times$500$\times$10 DBN, since the size of the largest RBM in both networks is 784$\times$500. To provide a fair power consumption comparison between the investigated DBN topologies, we have simulated an SRAM-based six-input fracturable LUT-FF pair in the SPICE circuit simulator using a 45nm CMOS library with 1V nominal voltage. The obtained static and dynamic power dissipation values are listed in Table~\ref{tab:compare}. Herein, we have focused only on the power dissipated by the LUT-FF pairs, and used the following relation to measure the power consumption for each topology: \begin{equation} P_{total} = \sum_{i} \left(A_{i}P_{read} + I_{i}P_{standby}\right) \end{equation} where $A_{i}$ and $I_{i}$ are the number of active and idle LUT-FF pairs in RBM $i$ of the DBN, respectively. The obtained power dissipation values for various DBN topologies are listed in the last column of Table~\ref{tab:resources}.
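This relation can be evaluated directly once the per-pair read and standby powers are known. A minimal sketch follows; the $P_{read}$ and $P_{standby}$ values are the SRAM-based figures from Table~\ref{tab:compare} (converted to watts), while the active/idle split used in the example call is hypothetical.

```python
# Minimal sketch of P_total = sum_i (A_i * P_read + I_i * P_standby).
# Per-pair powers are the SRAM-based values from Table 2 (in watts);
# the (active, idle) LUT-FF pair counts per RBM are hypothetical.

P_READ = 6.28e-6     # W, read power of one SRAM-based LUT-FF pair
P_STANDBY = 1.6e-6   # W, static power of one idle SRAM-based LUT-FF pair

def total_power(rbm_utilization, p_read=P_READ, p_standby=P_STANDBY):
    """rbm_utilization: iterable of (active, idle) LUT-FF pair counts per RBM."""
    return sum(a * p_read + i * p_standby for a, i in rbm_utilization)

# Hypothetical two-RBM network: 1771 active pairs in the first RBM,
# 51 active and 1720 idle pairs in the second.
p = total_power([(1771, 0), (51, 1720)])
print(f"P_total = {p * 1e3:.2f} mW")
```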
The provided trade-offs between the error rate and power consumption can be leveraged to design a desired DBN based on the application requirements. \begin{table}[] \centering \caption{FSM logic resource utilization and power dissipation for various DBN topologies.} \label{tab:resources} \begin{tabular}{lcccc} \hline Topology & \begin{tabular}[c]{@{}c@{}}Slice \\ Registers\end{tabular} & \begin{tabular}[c]{@{}c@{}}Slice \\ LUTs\end{tabular} & \begin{tabular}[c]{@{}c@{}}Fully-used \\ LUT-FFs\end{tabular} & \begin{tabular}[c]{@{}c@{}}Power \\ Consumption\end{tabular}\\ \hline 784$\times$10 & 3185 & 123 & 51 & 0.32 mW \\ 784$\times$500$\times$10 & 4655 & 3545 & 1771 & 14.2 mW \\ 784$\times$800$\times$10 & 5533 & 2449 & 2421 & 19.3 mW \\ 784$\times$500$\times$500$\times$10 & 4655 & 3545 & 1771 & 25.3 mW \\ \begin{tabular}[c]{@{}l@{}}784$\times$800$\times$800$\times$10\end{tabular} & 5617 & 2449 & 2421 & 34.5 mW \\ \hline \end{tabular} \end{table} \begin{table}[] \centering \caption{Performance comparison between six-input fracturable SRAM-based LUT and SHE-MTJ based LUT.} \label{tab:compare} \begin{tabular}{llcc} \hline \multicolumn{2}{c}{Features} & SRAM-LUT & SHE-MTJ LUT \\ \hline \multirow{2}{*}{Device Count} & MOS & 1163 & 565 \\ & MTJ & - & 66 \\ \hline \multicolumn{1}{l}{\multirow{3}{*}{\begin{tabular}[l]{@{}c@{}}Power ($\mu$W)\end{tabular}}} & Read & 6.28 & 1.1 \\ \multicolumn{1}{c}{} & Write & 28 & 188 \\ \multicolumn{1}{c}{} & Static & 1.6 & 0.21 \\ \hline \multirow{2}{*}{Delay} & Read & \textless{} 10 ps & \textless{} 30 ps \\ & Write & \textless{} 0.1 ns & \textless{} 2 ns \\ \hline \multirow{2}{*}{Energy} &
Read & $\sim$ 62.8 aJ & $\sim$ 33 aJ \\ & Write & $\sim$ 2.8 fJ & $\sim$ 376 fJ \\ \hline \end{tabular} \end{table} To investigate the effect of \textit{technology heterogeneity} on the performance of the proposed DBN control circuitry, we have simulated a SHE-MTJ based six-input fracturable LUT in SPICE using 45nm CMOS and 60nm MTJ technologies. The modeling approach proposed in \cite{zandTNANO, RoohiTCAD} is leveraged to model the behavior of the SHE-MTJ devices. In particular, first, a Verilog-A model of the device is developed and used in SPICE to obtain the write current, as well as the power dissipation of the read/write operations. Next, the write current is used in a descriptive MATLAB model of a SHE-MTJ device to extract the corresponding write delay. The simulation results obtained for a SHE-MTJ based six-input fracturable LUT circuit are listed in Table~\ref{tab:compare}. \begin{figure}[!t] \centering \includegraphics[width=3.2in, height=3in]{Fig8.png} \caption{Power dissipation of developed FSM for various DBN topologies.} \label{fig:power} \end{figure} Three types of power consumption profiles can be identified in FPGA LUTs. First, during the configuration phase the LUTs must be initialized and thus written; this incurs an initial write energy cost that recurs only infrequently thereafter. Second, once configured, the LUTs comprising active logic paths consume read power; these typically occupy only a fraction of the high gate-equivalent capacity of an FPGA chip. Third, the remainder of the LUTs, which can be a large number, may be inactive and consume standby power. SRAM-based FPGAs are challenged by the difficulty of power-gating LUTs that must retain their stored configuration, whereas a SHE-MTJ based LUT can be readily power-gated and incurs near-zero standby energy due to its non-volatility.
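The practical consequence of these three profiles can be sketched numerically with the per-LUT powers from Table~\ref{tab:compare}. The sketch below is only illustrative: the utilization figures are hypothetical, and the sole effect modeled is that idle non-volatile LUTs can be power-gated to near-zero standby power while idle SRAM LUTs keep leaking to retain their configuration.

```python
# Rough comparison of LUT power for an SRAM-based fabric vs a SHE-MTJ based
# one. Read/static powers per LUT are taken from Table 2 (in watts). The only
# difference modeled is power-gating: idle SHE-MTJ LUTs retain their
# configuration with near-zero standby power, while idle SRAM LUTs cannot
# be gated without losing it.

def fabric_power(active, idle, p_read, p_static, gated):
    """Total LUT power; gated=True zeroes the standby term (non-volatile LUTs)."""
    standby = 0.0 if gated else idle * p_static
    return active * p_read + standby

ACTIVE, IDLE = 100, 900  # hypothetical utilization of a 1000-LUT region

sram = fabric_power(ACTIVE, IDLE, p_read=6.28e-6, p_static=1.6e-6, gated=False)
she_mtj = fabric_power(ACTIVE, IDLE, p_read=1.10e-6, p_static=0.21e-6, gated=True)

print(f"SRAM-based: {sram * 1e6:.0f} uW, SHE-MTJ based: {she_mtj * 1e6:.0f} uW")
print(f"reduction: {100 * (1 - she_mtj / sram):.0f}%")
```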
On the other hand, replacing SRAM cells with SHE-MTJ devices results in a considerable reduction in the transistor count of the LUT circuit, since each SRAM cell includes six MOS transistors, while SHE-MTJ devices can be fabricated on top of the MOS circuitry, incurring very low area overhead. In particular, the SHE-MTJ based LUT circuit achieves at least a 51\% reduction in MOS transistor count compared to the conventional SRAM-based LUT, as listed in Table~\ref{tab:compare}. Transistors with minimum feature size are utilized in the SHE-MTJ based LUT circuit to control the SHE-MTJ write and read operations. Thus, the device count results provide a fair comparison between SHE-MTJ based LUTs and conventional SRAM-based LUTs in terms of area, since all of the MOS transistors used in both designs have the minimum feature size allowed by the 45nm CMOS technology. Figure~\ref{fig:power} provides a comparison between the conventional SRAM-based FPGA and the proposed SNRA, with a focus on the power dissipation induced by the LUT-FF pairs utilized to implement the developed DBN control circuitry. The combined improvements in the read and standby modes of the proposed SNRA result in at least an 80\% reduction in power consumption compared to conventional CMOS-based reconfigurable fabrics for the various DBN topologies. The results obtained for the read operation are comparable to those of the STT-MTJ based FPGA proposed by Suzuki et al. \cite{Suzuki2015}. However, the utilization of SHE-MTJ based LUTs within the SNRA architecture instead of STT-MTJ based LUTs can result in at least a 20\% reduction in configuration energy, as demonstrated in \cite{zandTNANO}. \section{Conclusion} The concept of SNRA offers an intriguing architectural approach to realize beyond von-Neumann paradigms which embrace both probabilistic and Boolean computation.
As developed herein, the inclusion of in-field programmability offers several practical benefits beyond simulation towards a feasible post-Moore fabric. Most importantly, it can accommodate process variations that would otherwise invalidate baseline training values that differ from those of the manufactured component. To coordinate training, a four-state FSM is shown to be sufficient to implement the contrastive divergence (CD) algorithm, as well as the control circuitry for the test operation of DBNs with various topologies. The proposed FSM is capable of unsupervised training of an RBM in $N+3$ clock cycles, where $N$ denotes the number of nodes in the hidden layer of the RBM. Interpolating the synthesis results from the Xilinx toolchain indicates that a conventional FPGA footprint can accommodate training circuitry for significantly deeper belief networks. This is facilitated by the flexible allocation and routing of layers and their downstream destinations, which is a central tenet of CD training. For instance, it was shown that the FSM for both the 784$\times$500$\times$10 and 784$\times$500$\times$500$\times$10 DBN topologies can be implemented with 1,771 LUTs, since the size of the largest RBM in both networks is 784$\times$500. Beyond the flexible architectural approach, within the SNRA fabric the device parameters are tuned to realize either stochastic switching or deterministic behavior. In particular, near-zero energy barrier SHE-MTJ devices are used to provide the natural probabilistic sigmoidal function required to implement the neuron's activation function within an RBM structure. Meanwhile, non-volatile SHE-MTJ devices with a high energy barrier ($\Delta \geq 40kT$) can be used to implement LUTs. The use of SHE-MTJ based LUTs achieves more than 80\% and 50\% reductions in power dissipation and area, respectively, compared to conventional SRAM-based reconfigurable fabrics.
These improvements are achieved at the cost of higher energy consumption during the reconfiguration operation, which occurs rarely and can be tolerated due to the significant area and power reductions realized during the normal operation of the SNRA. \section*{Acknowledgment} This work was supported in part by the Center for Probabilistic Spin Logic for Low-Energy Boolean and Non-Boolean Computing (CAPSL), one of the Nanoelectronic Computing Research (nCORE) Centers as task 2759.006, a Semiconductor Research Corporation (SRC) program sponsored by the NSF through CCF 1739635. \iffalse \begin{IEEEbiographynophoto} {Ramtin Zand} received B.Sc. degree in Electrical Engineering in 2010 from IKIU, Qazvin, Iran. He also received his M.Sc. degree in Digital Electronics at Sharif University of Technology, Tehran, Iran, in 2012. He is currently working toward the Ph.D. degree in Computer Engineering at the University of Central Florida, Orlando, USA. His research interests are in Reconfigurable and Adaptive Computing Architectures with emphasis on spintronic devices. \end{IEEEbiographynophoto} \begin{IEEEbiographynophoto} {Ronald F. DeMara} (S'87-M'93-SM'05) received the Ph.D. degree in Computer Engineering from the University of Southern California in 1992. Since 1993, he has been a full-time faculty member at the University of Central Florida where he is a Professor of Electrical and Computer Engineering, and joint faculty of Computer Science, and has served as Associate Chair, ECE Graduate Coordinator, and Computer Engineering Program Coordinator. His research interests are in computer architecture with emphasis on reconfigurable logic devices, evolvable hardware, and emerging devices, on which he has published approximately 200 articles and holds one patent. He received IEEE's Joseph M. Bidenbach Outstanding Engineering Educator Award in 2008. 
He is a Senior Member of IEEE and has served on the Editorial Boards of IEEE Transactions on VLSI Systems, Journal of Circuits, Systems, and Computers, the journal of Microprocessors and Microsystems, and as Associate Guest Editor of ACM Transactions on Embedded Computing Systems, as well as a Keynote Speaker of the International Conference on Reconfigurable Computing and FPGAs (ReConFig). He is lead Guest Editor of IEEE Transactions on Computers joint with IEEE Transactions on Emerging Topics in Computing 2017 Special Section on Innovation in Reconfigurable Computing Fabrics: from Devices to Architectures. He is currently an Associate Editor of IEEE Transactions on Computers, and serves on various IEEE conference program committees, including ISVLSI and SSCI. \end{IEEEbiographynophoto} \fi \bibliographystyle{IEEEtran}
\section{Summary} Seismic inversion helps geophysicists build accurate reservoir models for exploration and production purposes. Deep learning-based seismic inversion works by training a neural network to learn a mapping from seismic data to rock properties, using well log data as the labels. However, well logs are often very limited in number due to the high cost of drilling wells. Machine learning models can suffer from overfitting and poor generalization if trained on limited data. In such cases, well log data from other surveys can provide much needed useful information for better generalization. We propose a learning scheme where we simultaneously train two network architectures, each on a different dataset. By placing a soft constraint on the weight similarity between the two networks, we make them learn from each other where useful, for better generalization performance on their respective datasets. Using less than 3$\%$ of the available training data, we were able to achieve an average $r^{2}$ coefficient of 0.8399 on the acoustic impedance pseudologs of the SEAM dataset via joint learning with the Marmousi dataset. \section{Introduction} Seismic inversion refers to the process of estimating rock properties in the subsurface. This allows geophysicists to build accurate reservoir models for hydrocarbon exploration and production. While these properties can be measured directly at the well locations, they must be estimated using seismic data at the non-well locations. Classical seismic inversion usually works by starting with a smooth model of subsurface properties and forward modeling it to generate synthetic seismic data. The synthetic seismic is compared to the actual seismic, and the difference between the two is used to update the model parameters. A detailed overview of classical seismic inversion methods is provided in \citep{Veeken2004SeismicIM}.
\\ Deep learning, a subset of machine learning, has in the recent past led to groundbreaking advancements in image classification \citep{Krizhevsky2017}, object detection \citep{objdetect}, image segmentation \citep{segmentation}, image and video captioning \citep{captioning}, speech recognition \citep{speech}, and machine translation \citep{DBLP:conf/emnlp/ChoMGBBSB14}. The success of deep learning in the computer vision and natural language processing domains has of late inspired geophysicists to replicate these successes in the field of seismic interpretation. Machine learning has been used to solve problems in salt body delineation \citep{haibinSaltbodyDetection, AsjadSaltDetection, AmirSaltDetection}, fault detection \citep{haibinFaultDetection, HaibinFaultDetection2}, facies classification \citep{YazeedFaciesClassification, YazeedFaciesWeakClassification}, and structural similarity based seismic image retrieval and segmentation \citep{YazeedStructurelabelPrediction}. Recently, there has been a lot of interest in developing deep learning-based workflows for seismic inversion. Convolutional Neural Networks (CNNs) have been used for estimating Acoustic and Elastic Impedance from seismic data \citep{BiswasPhysicsGuidedCNN, DasCNNInversion}. \cite{motazRNN1} and \cite{mustafaTCN} introduced sequence-modeling neural networks based on Recurrent Neural Networks (RNNs) and Temporal Convolutional Networks (TCNs), respectively, for the estimation of various rock properties from seismic data. They demonstrated that such networks were more capable of learning temporal relationships between seismic traces for efficient rock property estimation. It has also been shown that incorporating the forward model into the network architecture results in an implicit regularization of the network, thereby improving the quality of property estimations \citep{motazSemiSupervisedAcoustic, motazSemiSupervisedElastic}.
All such methods are based upon learning a mapping from seismic data to well log measurements at the well positions, and then using the learned mapping to estimate the properties at the non-well positions. One limitation of such approaches is that they require a lot of labeled training data to achieve satisfactory generalization performance. However, most surveys have only a limited number of wells, due to the high cost of drilling them. This makes machine learning models prone to overfitting if trained on such limited well log data. One way of overcoming this is to use knowledge gained from learning on well logs from other surveys when estimating rock properties on the target survey. Transfer learning is a very popular machine learning framework that uses knowledge from a source dataset while training a machine learning model on the target dataset. It has been shown to help achieve better generalization performance and quicker convergence on the target dataset. It also reduces the effort expended in manually labeling training examples in the target survey, especially when doing so is costly and time-consuming. For a comprehensive review of transfer learning methodologies that have been used in the past, see \citep{transferlearning}. In this paper, we propose a transfer learning scheme for seismic inversion that jointly learns on multiple datasets using identical copies of the same network architecture. In addition to optimizing the losses on their respective datasets, we also impose a soft constraint on the weights of the network copies to be similar to each other. This effectively results in a knowledge sharing scheme where the two networks learn from each other when it is mutually beneficial, while still being able to adapt to the specific nature of their respective datasets.
\section{Methodology} \subsection{2-D Temporal Convolutional Network} As mentioned above, our algorithm employs two-dimensional temporal convolutional blocks for estimating rock properties from seismic data. The architecture is shown in Figure~\ref{fig:architecture}. It consists of a feature extractor module that uses a series of 2-D convolutional kernels to extract increasingly abstract features from the input seismic image. The convolutional kernels in this module use an exponentially increasing dilation factor in depth while staying constant in width. Using increasingly dilated convolutions in depth allows us to model the input seismic data temporally and efficiently capture long-term dependencies, leading to better estimation of the desired rock property. The kernels being 2-D allows us to inject spatial context awareness into the network's estimations. The output of the feature extractor block is fed simultaneously into a regression module and a reconstruction module. The latter is responsible for reconstructing the input seismic image while the former outputs the desired rock property. This is an example of multi-task learning via representation sharing, where multiple tasks (output estimation and input reconstruction in this case) are learnt simultaneously in the hope that the network can learn more generalizable feature representations, leading to better performance on all tasks. This is especially the case when the tasks are highly related to each other. \begin{figure*}[htbp] \centering \includegraphics[width=2\columnwidth]{architecture.png} \caption{The architecture uses a series of 2-D temporal convolutional blocks to extract features from the input. The input is a 2-D patch of seismic data centered at the well position.
The output of the feature extractor is fed simultaneously into the regression module and the reconstruction module for the estimation of the rock property and the reconstruction of the seismic input, respectively.} \label{fig:architecture} \end{figure*} \subsection{Soft Weight Sharing} As discussed before, another major component of our deep learning-based seismic inversion algorithm is learning from related datasets for rock property prediction. This is achieved by simultaneously training identical copies of the same architecture, one for each dataset. Each network receives a batch of input training examples from its respective dataset, processes them to get the outputs, and uses the corresponding ground-truths to backpropagate the losses through the network to update the network weights. In addition to this, we also force the network weights in all corresponding layers to be close to each other in the L2 norm sense. By doing this, we effectively bias the networks to search the parameter space for a solution where the architecture will generalize better to inputs sampled from different distributions. However, by not constraining the weights to be exactly the same, each copy of the architecture is also free to find the optimal set of weights for its dataset in the vicinity of this solution space. Moreover, in the situation where the two datasets are very different from each other and learning on one will not help the other, the networks can choose to not learn from each other at all. The process is illustrated in Figure~\ref{fig:weight_sharing}. Consider the two networks to be represented by $\mathcal{F}$ and $\mathcal{G}$, respectively. Both $\mathcal{F}$ and $\mathcal{G}$ consist of trainable weights organized into a set of $L$ convolutional layers. Let $\theta_{\mathcal{A}}^{l}$ denote the weight tensor in the $l$-th layer of network $\mathcal{A} \in \{\mathcal{F}, \mathcal{G}\}$, where $l\in [0, L-1]$.
Then both $\mathcal{F}$ and $\mathcal{G}$ can be represented as follows: \begin{equation} \mathcal{F} = [\theta_{\mathcal{F}}^{0}, \theta_{\mathcal{F}}^{1},\cdots, \theta_{\mathcal{F}}^{L-1}] \label{eq:network1} \end{equation} \begin{equation} \mathcal{G} = [\theta_{\mathcal{G}}^{0}, \theta_{\mathcal{G}}^{1},\cdots, \theta_{\mathcal{G}}^{L-1}] \label{eq:network2} \end{equation} The weight mismatch loss is then defined as: \begin{equation} l_{WML} = \sum_{l=0}^{L-1} \|\theta_{\mathcal{F}}^{l} - \theta_{\mathcal{G}}^{l}\|_{2}^{2} \label{eq:weight_mismatch} \end{equation} \begin{figure}[htbp] \centering \includegraphics[width=\columnwidth]{weight_sharing.png} \caption{The two architectures are constrained to have weights in all corresponding layers close to each other in the L2 norm sense.} \label{fig:weight_sharing} \end{figure} \subsection{Network Training} Consider $D_{1} = \{X_{1}, Y_{1}\}$ and $D_{2} = \{X_{2}, Y_{2}\}$ to represent the two datasets, where the subscript refers to the dataset. $X = \{x^{1}, ..., x^{N}| x^{i} \in \mathbb{R}^{d \times m}\}$ represents the collection of $N$ seismic images in a dataset, where each $x^{i}$ is a $d\times m$ dimensional image. $d$ refers to the depth of the image while $m$ is the width. $Y = \{y^{1}, ..., y^{N}|y^{i}\in\mathbb{R}^{d}\}$ refers to the collection of well log properties corresponding to each $x^{i} \in X$, where each $y^{i}$ is a $d$ dimensional rock property trace.
A batch of seismic images from each dataset is processed by its respective network to get the estimated well properties, $\hat{y}^{i}$, as well as the reconstructed seismic images, $\hat{x}^{i}$, as shown below: \begin{equation} \hat{y}_{1}^{i}, \hat{x}_{1}^{i} = \mathcal{F}_{\Theta}(x_{1}^{i}) \label{eq:4} \end{equation} \begin{equation} \hat{y}_{2}^{i}, \hat{x}_{2}^{i} = \mathcal{G}_{\Theta}(x_{2}^{i}) \label{eq:5} \end{equation} The regression and reconstruction losses are then defined as: \begin{equation} l_{reg} = \frac{1}{N_{1}}\sum_{i=1}^{N_{1}}\|\hat{y}_{1}^{i} - y_{1}^{i}\|_{2}^{2} + \frac{1}{N_{2}}\sum_{i=1}^{N_{2}}\|\hat{y}_{2}^{i} - y_{2}^{i}\|_{2}^{2} \end{equation} and \begin{equation} l_{recon} = \frac{1}{N_{1}}\sum_{i=1}^{N_{1}}\|\hat{x}_{1}^{i} - x_{1}^{i}\|_{2}^{2} + \frac{1}{N_{2}}\sum_{i=1}^{N_{2}}\|\hat{x}_{2}^{i} - x_{2}^{i}\|_{2}^{2} \end{equation} where $N_{1}$ and $N_{2}$ are the batch sizes in the two datasets. The total loss is then obtained as: \begin{equation} \textrm{Total Loss} = l_{reg} + l_{recon} + \alpha\times l_{WML}, \end{equation} where $\alpha$ is a hyperparameter that controls the influence of the weight mismatch loss on the training of the two networks. At each training iteration, the loss obtained above is backpropagated through both networks and the weights are updated to reduce the training error at the next iteration. If $\alpha$ is set too high, it forces the networks to look for the same solution, which might not be optimal for each dataset individually. If $\alpha$ is set too low, it makes the training of the two networks effectively independent of each other. An intermediate value of $\alpha$ results in the two networks learning from each other when it is useful for optimization on their own datasets, and ignoring each other when knowledge sharing is not useful.
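As a concrete illustration, the objective above can be sketched in a few lines of NumPy. Everything here is a toy stand-in: the layer shapes, the random tensors playing the role of network outputs and targets, and the value of $\alpha$ are all hypothetical, and in practice the gradients of this objective would be handled by an automatic differentiation framework.

```python
import numpy as np

# Toy sketch of the joint objective: per-dataset regression and reconstruction
# losses plus the weight-mismatch penalty l_WML that softly ties the two
# network copies together.

rng = np.random.default_rng(0)

def mse(pred, target):
    # Mean over the batch of squared L2 trace errors.
    return np.mean(np.sum((pred - target) ** 2, axis=-1))

def weight_mismatch_loss(theta_f, theta_g):
    # l_WML = sum_l || theta_F^l - theta_G^l ||_2^2 over corresponding layers.
    return sum(np.sum((wf - wg) ** 2) for wf, wg in zip(theta_f, theta_g))

# Two identical architectures, independently initialized (toy conv kernels).
shapes = [(3, 3, 1, 8), (3, 3, 8, 1)]
theta_F = [rng.standard_normal(s) for s in shapes]
theta_G = [rng.standard_normal(s) for s in shapes]

# Toy batches standing in for (y_hat, y, x_hat, x) on each of the two datasets.
b1 = [rng.standard_normal((4, 16)) for _ in range(4)]
b2 = [rng.standard_normal((4, 16)) for _ in range(4)]

l_reg   = mse(b1[0], b1[1]) + mse(b2[0], b2[1])
l_recon = mse(b1[2], b1[3]) + mse(b2[2], b2[3])
l_wml   = weight_mismatch_loss(theta_F, theta_G)

alpha = 0.1  # hypothetical; controls the strength of the soft weight sharing
total_loss = l_reg + l_recon + alpha * l_wml
print(f"l_WML = {l_wml:.3f}, total loss = {total_loss:.3f}")
```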
\section{Results and Discussion} We demonstrate our workflow for the estimation of Acoustic Impedance from poststack, migrated seismic data on the open source Marmousi and SEAM datasets. We set up a network architecture as shown in Figure~\ref{fig:architecture} for each dataset, and train them jointly for 900 epochs. We use ADAM \citep{kingma2014adam} as the optimization algorithm, which adaptively sets the learning rate during training. We also impose a weight decay of 0.001, which helps prevent overfitting by constraining the L2 norm of the network weights. We uniformly sample 12 acoustic impedance pseudologs and their corresponding seismic trace data from the crossline section in SEAM located at North 23900m. For Marmousi, we uniformly sample 51 acoustic impedance pseudologs and the corresponding seismic data in the dataset. Marmousi is a synthetic dataset with the seismic data generated by simple convolutional forward modeling, while SEAM contains migrated seismic data made to simulate real world acquisition conditions and artifacts. This makes it a much harder dataset to learn on, especially with only a limited number of pseudologs available. We use a greater number of pseudologs in Marmousi to provide the network training on SEAM with sufficient information to learn from. The results of this training scheme are illustrated in Figure~\ref{fig:seam}. One can clearly see that our algorithm is able to delineate, in sufficient detail, the vertical variations in acoustic impedance, especially in the left half of the section. The top of the salt dome has also been marked out to a high degree of accuracy, despite us not having access to many pseudologs there. We have also been able to mark out the top and the bottom of the high impedance arch occurring around a depth of 12000m to a reasonable degree of detail. One can see that the estimations get noisier in the bottom-right portion of the section.
This is to be expected since the seismic data in these regions is extremely weak and sometimes not sensitive at all to variations in acoustic impedance. Despite this, our algorithm is still able to capture the general increasing trend in acoustic impedance values well. Figure~\ref{fig:traces} shows some individual acoustic impedance traces extracted from both the estimated and ground truth acoustic impedance sections at select positions and overlaid on top of each other. As explained before, the two largely agree with each other for the most part, except around a depth of 12000m, where we have a sudden jump in the acoustic impedance value along with weak seismic data in the corresponding part of the section. The $r^{2}$ coefficient, also called the coefficient of determination, gives information about the goodness of fit of a model. Given a set of observed values and a corresponding set of predicted values, the $r^{2}$ coefficient is an indicator of how well the predicted values approximate the real ones. A value of 1 indicates a perfect fit. We calculated the average $r^{2}$ coefficient between the estimated and ground truth acoustic impedance sections on SEAM, and it turns out to be 0.8399, which indicates that our model has been able to model acoustic impedance well, given that we had only 12 training samples to train it with, which is around 2$\%$ of the total available training data. \begin{figure}[htbp] \centering \includegraphics[width=\columnwidth]{seam_AI.png} \caption{Estimated Acoustic Impedance section (top) vs the groundtruth (bottom).} \label{fig:seam} \end{figure} \begin{figure}[htbp] \centering \includegraphics[width=\columnwidth]{trace_plots.png} \caption{Trace plots for select positions in both the estimated and ground-truth acoustic impedance sections.} \label{fig:traces} \end{figure} \section{Conclusion} In this work, we demonstrate a deep learning-based seismic inversion workflow where we jointly train identical copies of a neural network on two different datasets.
We show how, by placing a soft constraint on the network weights to be similar, we allow the transfer of useful knowledge from one dataset to the other while simultaneously letting the networks adapt to their specific datasets. An important implication of this workflow is that one need not use a large number of training examples in either dataset, since the mutual information between them serves to compensate for the scarcity. Another implication of this work is that the workflow can be scaled to any number of datasets. We demonstrate the utility of this approach by estimating Acoustic Impedance on the SEAM dataset, although the methodology would be equally valid for other rock properties. \bibliographystyle{seg}
\title[Equivalence of Collet--Eckmann conditions]{Equivalence of Collet--Eckmann conditions for slowly recurrent rational maps} \author{Mats Bylund} \address{Centre for Mathematical Sciences, Lund University, Box 118, 221 00 Lund, Sweden} \email{mats.bylund@math.lth.se} \subjclass[2010]{37F10, 37F15, 37B20} \begin{document} \begin{abstract} In this short note we observe that within the family of slowly recurrent rational maps on the Riemann sphere, the Collet--Eckmann, second Collet--Eckmann, and topological Collet--Eckmann conditions are equivalent and also invariant under topological conjugacy. \end{abstract} \maketitle \section{Introduction} The Collet--Eckmann condition first appeared in the seminal papers by P.~Collet and J.-P.~Eckmann \cite{CE1,CE2}, where they studied chaotic behaviour of certain non-uniformly expanding maps on the interval. This condition, which requires exponential growth of the derivative along the critical orbit(s), was later introduced in \cite{P1} to the study of holomorphic (rational) maps on the Riemann sphere. The Collet--Eckmann condition, which often implies the existence of absolutely continuous invariant measures with strong ergodic properties, is known to be abundant in both the real \cite{BC1,BC2,AM} and complex \cite{Rees, Asp} settings. A related and purely topological condition was introduced in \cite{PR1}, where it was proved to be implied by the Collet--Eckmann condition. Much work has been done to identify the relationships between the \emph{Collet--Eckmann condition} (abbr.~CE), the \emph{second Collet--Eckmann condition} (abbr.\ CE2), and the \emph{topological Collet--Eckmann condition} (abbr.\ TCE) (see below for definitions). Notably, these conditions are known to be equivalent within the family of unicritical maps (see \cite{PR-LS} and references therein).
In \cite{PR-LS} examples are given of maps which satisfy TCE but not CE and/or CE2, and maps which satisfy CE but not CE2, and vice versa. The main problem arises when critical points come close to other critical points of high multiplicity. By assuming a recurrence condition on the critical orbits, known as the \emph{slow recurrence condition} (abbr.\ SR), we observe in this note that these conditions become equivalent; in a sense, slow recurrence takes the r\^ole of unicriticality. The slow recurrence condition is defined as follows. \begin{Def} A rational map $f \colon \hat{\mathbb{C}} \to \hat{\mathbb{C}}$ of degree $\geq 2$ is said to satisfy the \emph{slow recurrence condition} if for each $\alpha > 0$ there exists $C > 0$ such that, for every critical point $c \in \operatorname{Crit}(f) \cap J(f)$, \[ \operatorname{dist}\left(f^n (c), \operatorname{Crit}(f) \cap J(f)\right) \geq Ce^{-\alpha n} \quad (n \geq 1). \] \end{Def} \begin{Rem} Note that if $f$ satisfies SR then no critical point is mapped onto another critical point. \end{Rem} The SR condition is generally believed to be a typical property among rational Collet--Eckmann maps, and it should be noted that within the real quadratic family this is known to be true due to a result by A.~Avila and C.~G.~Moreira \cite{AM}. In fact, they proved that for a typical non-hyperbolic (the critical point does not tend to an attracting cycle) real quadratic map $F$ one has \[ \operatorname{dist}(F^n (c),c) \geq \frac{C}{n^{1+\epsilon}} \quad (n \geq 1), \] for any $\epsilon > 0$ and $C = C(\epsilon) > 0$ a constant. Moreover, in the multimodal setting, B.~Gao and W.~Shen \cite{GS14} proved that for one-parameter families the slow recurrence condition is satisfied on a set of positive Lebesgue measure. 
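Let us spell out, for completeness, why this polynomial bound implies SR in the unicritical setting: since exponentials decay faster than any polynomial, for every $\alpha > 0$ the quantity \[ c(\alpha,\epsilon) = \inf_{n \geq 1} \frac{e^{\alpha n}}{n^{1+\epsilon}} \] is strictly positive, and hence \[ \operatorname{dist}(F^n(c),c) \geq \frac{C}{n^{1+\epsilon}} \geq C\,c(\alpha,\epsilon)\,e^{-\alpha n} \quad (n \geq 1), \] which is precisely the SR condition (with constant $C\,c(\alpha,\epsilon)$), as $c$ is the only critical point of $F$ in this case. 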
We also mention that for complex unicritical polynomials $z \mapsto z^d + c$, it follows from a result by J.~Graczyk and G.~\'Swi\c atek \cite{GS15} that the slow recurrence condition is satisfied for a typical parameter $c$ with respect to harmonic measure on the boundary of the connectedness locus. The SR condition is also natural in the sense that CE+SR is invariant under topological conjugacy, as was observed by H.~Li (Theorem~A.1 in \cite{Li}, see also \cite{LW06}). The short proof of the invariance is given at the end of this note. The following is our main observation. \begin{Prop}\label{mainProp} Within the family of slowly recurrent rational maps of degree $\geq 2$ on the Riemann sphere, CE, CE2, and TCE are equivalent. Moreover, these conditions are invariant under topological conjugacy. \end{Prop} In \cite{PR-LS} examples of real polynomials of degree 5 that satisfy CE but not CE2 (and vice versa) are given, and also examples of real polynomials of degree 3 that satisfy TCE but neither CE nor CE2. We therefore conclude that none of these examples satisfies SR. We make a final remark that it would be interesting to investigate the set of rational maps satisfying TCE+SR. Indeed, if almost every topological Collet--Eckmann map is slowly recurrent then TCE and CE are equivalent up to a set of measure zero. Below we indicate the changes needed in three existing lemmas in order to reach the above-stated result of Proposition~\ref{mainProp}. For completeness we provide a minimal set of definitions and proofs, but refer to the relevant articles for greater detail. Throughout this note the standing assumption is that $f$ is a slowly recurrent rational map on the Riemann sphere $\hat{\mathbb{C}}$ of degree $\geq 2$, and by $f^n$ we mean $f$ composed with itself $n$ times. 
We let $B(z,r) = \{w : \operatorname{dist}(z,w) < r\} \subset \hat{\mathbb{C}}$ denote the disk of radius $r > 0$ centred at $z$, and we let $\operatorname{Crit}'(f) = \operatorname{Crit}(f) \cap J(f)$ with $\operatorname{Crit}(f)$ the set of critical points of $f$, and $J(f)$ the Julia set of $f$. Distances, diameters, and derivatives are taken with respect to the spherical metric on $\hat{\mathbb{C}}$. \section{Equivalence of CE+SR and CE2+SR} The Collet--Eckmann condition and second Collet--Eckmann condition are defined as follows. \begin{Def} A rational map $f \colon \hat{\mathbb{C}} \to \hat{\mathbb{C}}$ of degree $\geq 2$ without parabolic periodic points is said to satisfy the \emph{Collet--Eckmann condition} (CE) if there exist constants $\lambda_1 > 1$ and $C_1 > 0$ such that, for each critical point $c \in \operatorname{Crit}'(f)$, \[ \vert (f^n)^\prime (f(c)) \vert \geq C_1 \lambda_1^n \quad (n \geq 0). \] \end{Def} \begin{Def} A rational map $f \colon \hat{\mathbb{C}} \to \hat{\mathbb{C}}$ of degree $\geq 2$ is said to satisfy the \emph{second Collet--Eckmann condition} (CE2) if there exist constants $\lambda_2 > 1$ and $C_2 > 0$ such that, for every $n \geq 1$ and every $w \in f^{-n}(c)$ for $c \in \operatorname{Crit}'(f)$ not in the forward orbit of other critical points, \[ \vert (f^n)^\prime(w) \vert \geq C_2 \lambda_2^n. \] \end{Def} In \cite{GS} it was proved that these two conditions are equivalent for critical points of maximal (dynamical) multiplicity. This was achieved through the so-called \emph{(reversed) telescope construction}. At the heart of these techniques lie the \emph{shrinking neighbourhoods} (first introduced in \cite{P1}) which are defined as follows. Fix a decreasing sequence of positive real numbers $(\delta_n)$ satisfying $\prod_{n}(1-\delta_n) > 1/2$. Let $B_r = B(z,r)$, and consider a sequence $(f^{-n}(z))$ of consecutive preimages of $z$. 
With $\Delta_n = \prod_{k < n}(1-\delta_k)$, the $n^\text{th}$ shrinking neighbourhoods of $z$ are now defined as $U_n = \operatorname{Comp}_{f^{-n}(z)} f^{-n}B_{r\Delta_n}$ and $U_n' = \operatorname{Comp}_{f^{-n}(z)} f^{-n}B_{r\Delta_{n+1}}$. Here, $\operatorname{Comp}_w$ denotes the connected component containing $w$. With the right \emph{scale} around each critical point, using these shrinking neighbourhoods, one gets distortion and expansion estimates. The scale is defined by the choice of two positive numbers $R' \ll R \ll 1$, and the correct choice of these two numbers is crucial for the local analysis. We refer to \cite{GS} for details; for our purposes it is enough to keep in mind that these two numbers are fixed throughout the analysis. \subsection{CE+SR\texorpdfstring{$\implies$}{text}CE2+SR} Let $(c_{-k})_{k=1}^n$ be a sequence of consecutive preimages of $c_0 = c \in \operatorname{Crit}'(f)$ of length $n \geq 1$, i.e. $f(c_{-k}) = c_{-k+1}$ and $f^k(c_{-k}) = c$. In \cite{GS}, the authors inductively define an increasing sequence of numbers $0 = n_0 < n_1 < \cdots < n_m = n$, and each (backward) orbit of length $n_{k+1}-n_k$ is classified as either a \emph{type $1$}, \emph{type $2$}, or \emph{type $3$} orbit. For orbits of type $\dots 2$, $\dots 3$, or $1\dots 13$ (one reads from the right), one has exponential growth of the derivative (a $12$ block is not allowed by construction). The only problem thus arises when a given backward orbit begins with a block of $1$'s which is not preceded by a $3$. For clarity we give the definition of a type 1 orbit. 
\begin{Def} A sequence $z_0 = z, z_{-1} \in f^{-1}(z),\dots, z_{-n} \in f^{-n}(z)$ of consecutive preimages of $z$ is of the first type with respect to the critical points $c'$ and $c''$ if \begin{enumerate}[1)] \item Shrinking neighbourhoods $U_k$ for $B(z,r)$, $1 \leq k \leq n$, avoid critical points for some $r < 2R'$, \item The critical point $c'' \in \partial U_n$, \item The critical value of $c'$ is close to $z$ with $f(c') \in B(f(z),R)$. \end{enumerate} \end{Def} The situation of having a block of $1$'s not preceded by a $3$ can only happen in the beginning, and given such a situation the authors prove that there is a constant $\lambda > 1$ such that \begin{equation}\label{CECE2} \vert (f^n)^\prime (c_{-n}) \vert^{\mu_{\max}} \geq \operatorname{const} \lambda^n r_1^{\mu_{\max}-\mu(c)}, \end{equation} where $\mu(c)$ is the multiplicity of $c$, $\mu_{\max} = \max_{c\in \operatorname{Crit}'(f)} \mu(c)$, and $r_1 < 2R'$ is the radius of a disk centred at $c$. Here $r_1$ cannot be chosen freely if the inductive definition of the $n_k$'s is to work; thus, for large $n$, (\ref{CECE2}) might not yield expansion. The authors assume $\mu(c) = \mu_{\max}$ and in doing so prove that CE implies CE2 for critical points of maximal multiplicity (Proposition~1 in \cite{GS}). If one assumes SR this problem is easily seen to vanish, since the slow recurrence condition dictates how small $r_1$ can be. \begin{Lemma} If a slowly recurrent rational map $f \colon \hat{\mathbb{C}} \to \hat{\mathbb{C}}$ of degree $\geq 2$ satisfies CE then it satisfies CE2. \end{Lemma} \begin{proof} Suppose the situation is as described above, and let $n_1$ be the length of the first type $1$ orbit. By definition of a type $1$ orbit there exists a critical point $c'' \in \partial U_{n_1}$ which is mapped into $B(c,r_1)$. From SR we get that \[ r_1 \geq \operatorname{dist}(f^{n_1}(c''),c) \geq C e^{-\alpha n_1}. 
\] Since $n_1 \leq n$, inserting the above in (\ref{CECE2}) we find that \[ \vert (f^n)^\prime (c_{-n}) \vert^{\mu_{\max}} \geq \operatorname{const} \lambda^n \left(Ce^{-\alpha n}\right)^{\mu_{\max} - \mu(c)} \geq C_2 \lambda_2^n, \] where we can make $\lambda_2$ arbitrarily close to $\lambda$ by decreasing $\alpha$ (and thus also decreasing $C_2$). \end{proof} \subsection{CE2+SR\texorpdfstring{$\implies$}{text}CE+SR} Pick $c \in \operatorname{Crit}'(f)$, fix $n$, and consider a sequence of images $z_0 = f^n(f(c)), z_{-1} = f^{n-1}(f(c)),\dots, z_{-(n+1)} = c$. Similarly to the previous case, the authors inductively define an increasing subsequence $n_0 < n_1 < \dots < n_m = n$. Here $n_0$ is the smallest positive integer such that $z_{-(n_0 + 1)}$ is in the $R$-neighbourhood of some critical point. Due to the \emph{exponential shrinking of components} (see below for a definition), which is implied by CE2 (see \cite{PR-LS}), one can prove that during this last orbit of length $n_0$ one has expansion. (In \cite{GS} \emph{R-expansion} was taken as an assumption.) The conditions imposed on $n_j$, $j \neq 0$, are as follows: \begin{enumerate}[I)] \item The sequence $z_{-n_{j-1}},\dots,z_{-n_j}$ is of the \emph{first reversed type}, \item Some critical point $c^{(j)} \in B(z_{-(n_j+1)},R)$. \end{enumerate} The definition of a \emph{first reversed type} orbit is as follows. \begin{Def} A sequence $z_0 = z, z_{-1} \in f^{-1}(z),\dots, z_{-n} \in f^{-n}(z)$ of consecutive preimages of $z$ is of the reversed first type with respect to two critical points $c'$ and $c''$ if \begin{enumerate}[1)] \item Shrinking neighbourhoods $U_k$ for $B(z_{-1},r)$, $1 \leq k \leq n-1$, avoid critical points, \item $\operatorname{dist}(z_{-1},c') = r/2 < R$, \item $c'' \in U_n$. 
\end{enumerate} \end{Def} The authors continue and prove (Proposition~5 in \cite{GS}) that there is a constant $\lambda > 1$ such that \begin{equation}\label{CE2CE} \vert (f^n)^\prime(f(c))\vert \geq \operatorname{const} \lambda^n \left(\operatorname{diam}(U_m)\right)^{\mu_{\max} - \mu(c)}. \end{equation} Here $\operatorname{diam}(U_m)$ is the diameter of a shrinking neighbourhood around $c$. As in the previous case, this factor might interfere with expansion for large $n$ unless $c$ is assumed to be a critical point of maximal multiplicity $\mu(c) = \mu_{\max}$. Again, assuming SR, we get a lower bound for the diameter. \begin{Lemma} If a slowly recurrent rational map $f \colon \hat{\mathbb{C}} \to \hat{\mathbb{C}}$ of degree $\geq 2$ satisfies CE2 then it satisfies CE. \end{Lemma} \begin{proof} It is given that $f^m(c) \in B(c',R)$ for a critical point $c'$, and $U_m$ is the shrinking neighbourhood of $B(f^m(c),r)$ of radius $r = 2\operatorname{dist}(f^m(c),c')$. By definition of a reversed type 1 orbit $f^{-(m-1)} \colon B(f^m(c),r/2) \to f(U_m)$ is univalent, and with an application of Koebe's~$\frac{1}{4}$-lemma we find that \[ \operatorname{diam}(U_m) \geq \operatorname{diam}(f(U_m)) \geq \frac{1}{C}r\vert (f^{m-1})^\prime (f(c))\vert^{-1}. \] (Here $C > 1$ is a constant depending on the scale $R$ we are working with, and it shows up since we are adapting Koebe's~$\frac{1}{4}$-lemma to the spherical metric.) The first inequality follows since $c \in U_m$ and thus the image of $U_m$ under $f$ is contracted. Since $r/2 = \operatorname{dist}(f^m(c),c')$, invoking SR and that $m \leq n$, we find by inserting the above in (\ref{CE2CE}) that \[ \vert (f^n)^\prime(f(c))\vert \geq \operatorname{const}\left[\frac{1}{C} \vert (f^{m-1})^\prime (f(c))\vert^{-1}\right]^{\mu_{\max} - \mu(c)} \lambda^n \left(Ce^{-\alpha n}\right)^{\mu_{\max} - \mu(c)}. 
\] We observe that $\vert (f^{m-1})^\prime(f(c)) \vert \leq K$ with $K = K(R)$ a constant depending only on the choice of $R$. Indeed, for each critical point $c$ under consideration, and for a fixed $R$, there exists a unique smallest integer $m = m(c,R)$ for which $f^m(c) \in B(c'',R)$, for some critical point $c''$. We simply let \[ K = \max_{c \in \operatorname{Crit}'(f)} \vert (f^{m(c,R)-1})^\prime (f(c)) \vert. \] Thus we get that \[ \vert (f^n)^\prime(f(c))\vert \geq C_1 \lambda_1^n, \] where we can make $\lambda_1$ arbitrarily close to $\lambda$ by decreasing $\alpha$ (and thus also decreasing $C_1$). \end{proof} \section{Equivalence of CE+SR and TCE+SR} The \emph{topological Collet--Eckmann condition} for rational maps on the Riemann sphere was first introduced in \cite{PR1} and is defined as follows. Recall that for a connected set $\Omega$, $\operatorname{Comp}_w g^{-1}(\Omega)$ denotes the connected component of $g^{-1}(\Omega)$ containing $w$. \begin{Def} A rational map $f \colon \hat{\mathbb{C}} \to \hat{\mathbb{C}}$ of degree $\geq 2$ is said to satisfy the \emph{topological Collet--Eckmann condition} (TCE) if there exist $M \geq 0$, $P \geq 0$ and $r > 0$ such that for every $z \in J(f)$ there exists a strictly increasing sequence of positive integers $n_j$, for $j = 1,2, \dots$, such that $n_j \leq P \cdot j$, and for each $j$ \[ \# \{k : 0 \leq k < n_j, \operatorname{Comp}_{f^k(z)} f^{-(n_j-k)} \left(B(f^{n_j}(z),r)\right) \cap \operatorname{Crit}(f) \neq \emptyset\} \leq M. \] \end{Def} Since TCE is formulated in purely topological terms, it is a topological invariant. One of the useful properties of this condition is its many equivalent formulations (see \cite{PR-LS} and also \cite{PR-L,R-L10}). Here we make use of the following equivalent condition. 
\begin{Def} A rational map $f \colon \hat{\mathbb{C}} \to \hat{\mathbb{C}}$ of degree $\geq 2$ is said to satisfy \emph{exponential shrinking of components} (ExpShrink) if there exist $\lambda_{\operatorname{Exp}} > 1$ and $r_{\operatorname{Exp}} > 0$ such that for every $x \in J(f)$, every $n > 0$, and every connected component $W$ of $f^{-n}(B(x,r_{\operatorname{Exp}}))$ \[ \operatorname{diam}(W) \leq (\lambda_{\operatorname{Exp}}^{-1})^n. \] \end{Def} It was first proved in \cite{PR1} that CE implies TCE, and in \cite{PR2} it was proved that under the assumption that for every $c \in \operatorname{Crit}'(f)$ whose forward trajectory does not meet any other critical point \begin{equation}\label{nonrecurrent} \operatorname{cl}\bigcup_{n > 0} f^n(c) \cap \left(\operatorname{Crit}(f)\smallsetminus \{c\}\right) = \emptyset, \end{equation} TCE implies CE. This latter result clearly implies that CE+(\ref{nonrecurrent}) is a topological invariant; in particular CE is a topological invariant in the case of unicritical maps. Another proof of this result was obtained in \cite{P2}. We will effectively replace condition (\ref{nonrecurrent}) with SR, thus proving that TCE+SR implies CE+SR. This constitutes an obvious modification in the proof of Lemma~4.5 in \cite{P2}; for completeness we give a sketch of this proof. (See also Proposition~3.4 in \cite{Li}.) \begin{Lemma} If a slowly recurrent rational map $f \colon \hat{\mathbb{C}} \to \hat{\mathbb{C}}$ of degree $\geq 2$ satisfies ExpShrink then it satisfies CE. \end{Lemma} \begin{proof} Let $\alpha$ be the exponent in SR and let $n_0 = n_0(\alpha)$ be large enough such that for every $n \geq n_0$ \[ \operatorname{dist}(f^j(f(c)), \operatorname{Crit}(f)) > e^{-2\alpha n} \quad (j = 0,1,\dots, n-1). \] This condition is assumed in Lemma~4.5 in \cite{P2}, and the proof now continues as follows. 
Fix an arbitrary $\epsilon > 0$ and let \[ s = \left[ \frac{-\log \epsilon}{\log \lambda_{\operatorname{Exp}}} + \frac{2\alpha n}{\log \lambda_{\operatorname{Exp}}} \right] + 1, \] where $\left[x\right]$ denotes the integral part of $x$. By ExpShrink we have that for all $0 \leq j \leq n$ \begin{align*} \operatorname{diam}\left( \operatorname{Comp}_{f^{n-j}(f(c))} f^{-s-j}\left(B(f^{n+s}(f(c)),r_{\operatorname{Exp}})\right) \right) &\leq (\lambda_{\operatorname{Exp}}^{-1})^{s+j} \\ &\leq (\lambda_{\operatorname{Exp}}^{-1})^{s} \\ &\leq \epsilon e^{-2\alpha n}. \end{align*} Let $B = B(f^n(f(c)),r_{\operatorname{Exp}} e^{-3\alpha M n})$, where \[ M = \left[\frac{\log \sup_{\hat{\mathbb{C}}}\vert f^\prime \vert}{\log \lambda_{\operatorname{Exp}}} \right] + 1. \] Then for $n$ large enough we get that \[ B \subset \operatorname{Comp}_{f^n(f(c))} f^{-s}\left(B(f^{n+s}(f(c)), r_{\operatorname{Exp}}) \right). \] Let $W_n = \operatorname{Comp}_{f(c)} f^{-n}(B)$. Then there exists $w \in W_n$ such that \[ \vert (f^n)^\prime(w) \vert \geq \frac{\operatorname{diam} B}{\operatorname{diam} W_n} \geq (2r_{\operatorname{Exp}} e^{-3\alpha M n})\lambda_{\operatorname{Exp}}^n. \] If $\epsilon$ is sufficiently small we have bounded distortion and can switch from $w$ to $f(c)$, hence \[ \vert (f^n)^\prime(f(c)) \vert \geq \lambda_1^n, \] where we can make $\lambda_1$ arbitrarily close to $\lambda_{\operatorname{Exp}}$ by decreasing $\alpha$ and $\epsilon$ (and thus increasing $n_0$). \end{proof} \section{Topological invariance} We finish by giving the short proof of the topological invariance, as outlined in Theorem~A.1 in \cite{Li}. \begin{Lemma} Let $f$ and $g$ be topologically conjugate rational maps on the Riemann sphere of degree $\geq 2$. If $f$ satisfies TCE+SR then so does $g$. \end{Lemma} \begin{proof} Since $f$ is TCE+SR it is CE+SR and therefore, by Theorem~A in \cite{PR2}, the conjugacy is quasi-conformal and therefore bi-H\"older. 
Let $h$ denote this conjugacy, and let $A > 0$ and $B > 0$ be the associated constant and exponent from the H\"older condition, respectively. Let $c_1'$ and $c_2'$ be distinct critical points of $g$; then $c_1 = h^{-1}(c_1')$ and $c_2 = h^{-1}(c_2')$ are distinct critical points of $f$. Since TCE is preserved under topological conjugacy, $g$ satisfies TCE. The fact that $g$ is also SR follows from \begin{align*} A \operatorname{dist}\left(g^n(c_1'),c_2'\right)^B &\geq \operatorname{dist}\left(h^{-1}(g^n(c_1')),h^{-1}(c_2')\right) \\ &= \operatorname{dist}\left(f^n(c_1),c_2\right) \\ &\geq C e^{-\alpha n}. \end{align*} \end{proof} \begin{small} \begin{acknowledgement} I thank M.~Aspenberg and W.~Cui for discussions. \end{acknowledgement} \end{small} \bibliographystyle{alpha}
\section*{Acknowledgements} The authors acknowledge the financial support of this work by the Department of Science and Technology of the Government of India under the Project Code SR/S2/CMP-71/2012. We would also like to thank the Paul-Drude-Institute, Berlin, Germany for the samples and Prof. S. Ghosh of TIFR, Mumbai for help in certain characterizations. \bibliographystyle{apsrev4-1}
\section{Introduction} In the context of extensions to General Relativity, Horndeski's theories~\cite{1974IJTP...10..363H} stand out since they constitute the most general four-dimensional, diffeomorphism covariant theories leading to second order equations of motion~\cite{1974IJTP...10..363H,Woodard:2006nt,Deffayet:2011gz}. These theories, where the gravitational degrees of freedom are expressed in terms of a metric tensor together with a scalar field, have a distinctive feature: namely, the equations of motion they define are of {\em second order type}. This property ensures the absence of Ostrogradski ghosts (e.g.~\cite{Woodard:2006nt}), which is arguably a necessary condition for any physical theory. The freedom allowed by this family of theories has been exploited on many fronts, for instance with the goal of addressing dark energy (see e.g.~\cite{Nicolis:2008in,Deffayet:2009wt,Ferreira:2019xrr,Kobayashi:2011nu} and references therein) and in incipient explorations of extensions of GR in binary mergers (e.g.~\cite{Damour:1996ke,Barausse:2012da,Mirshekari:2013vb,Sagunski:2017nzb,Julie:2017pkb,Hirschmann:2017psw,Bernard:2018hta,Witek:2018dmd}). On the former front, particular examples include quintessence \cite{Ratra:1987rm, Caldwell:1997ii}; kinetic quintessence or k-essence \citep{ArmendarizPicon:2000dh,ArmendarizPicon:2000ah}; and chameleon/galileon \citep{Khoury:2003aq,Khoury:2003rn} theories. Typically, applications have been analyzed in the context of linearized studies over specific backgrounds endowed with special symmetries, where consistency with observations in different regimes is addressed. In particular, the subset of Horndeski's theories allowed by observation has been severely constrained by the gravitational wave event GW170817~\cite{Baker:2017hug,Creminelli:2017sry,Ezquiaga:2017ekz}, with constraints derived from linearized studies. 
In spite of the relevant discussions and consequences derived in this context, it is important, however, to understand the fully nonlinear behavior of the allowed theories within this family. For instance, a successful observation derived at the linear level which is not derivable from the nonlinear system would arguably call into question the original action as providing a true model for nature\footnote{In such a case, the linear system should instead be regarded as the fundamental building block.} or the extent to which a linear analysis yields comprehensive/sensible constraints. Consistency of solutions to the linear problem with those of the full problem in the linear regime is not always a given, as non-linear solutions might give rise to phenomenology completely absent at the linear level --even when initial conditions are chosen within the linear regime. In the case of General Relativity and its application in cosmology, requirements for this being the case have been discussed in~\cite{fischer1973}. Naturally, the extent to which a linear solution can be trusted --with respect to the original action-- depends on one's understanding of the non-linear regime. To gain such understanding, which in turn can help decide which subset of the theory can be considered physical, a stricter and certainly natural condition must be satisfied. Namely, {\em well posedness}~\cite{jH02}. This condition implies that a given problem has a unique solution that depends continuously on initial and boundary data\footnote{The absence of Ostrogradski ghosts is necessary but certainly not sufficient for well posedness.}. The satisfaction of these properties is crucial for the physical understanding of the problem under consideration. Consequently, insisting that a valid theory yield well-posed initial and boundary value problems is a powerful general requirement to restrict potential options and identify possible alternatives to General Relativity. 
The analysis of this condition in specific theories is typically complex, which has hindered drawing straightforward conclusions for Horndeski's theories in the past. Recent works, however, have begun to explore this issue and, in particular, have uncovered significant restrictions~\cite{Papallo:2017qvl,Ijjas:2018cdm,Kovacs:2019jqj} even locally. We stress that the importance of this condition cannot be overstated, even in linearized regimes. Failure to satisfy it implies that possible solutions cannot be trusted, let alone sought in the first place, regardless of the method employed to find them. In this note, both strongly motivated by and building on these recent works, we revisit the problem from a slightly different angle by performing our analysis in a different frame --the so-called Einstein frame. As a result, a simpler system is obtained where the complexity of the analysis lies primarily in understanding the equation of motion for the scalar field. In such a frame it is arguably easier to elucidate the degree to which these theories display a more involved behavior when contrasted with General Relativity, and the obstacles that might arise in obtaining global solutions. We recall that much has been discussed on the issue of frames in scalar-tensor gravity theories such as Horndeski's (e.g.~\cite{Faraoni:1999hp,Flanagan:2004bz}). Whether or not the Ricci scalar appears ``clean'' in the action is often taken to signal a description in the Einstein (former) or Jordan (latter) frame. Here we explore (a subset of) Horndeski's theories from the initial value problem (IVP) point of view in the Einstein frame, discuss the existence of delicate issues, and illustrate their consequences via numerical simulations. As well, and for concreteness, we do not concern ourselves at this time with the ``non-vacuum case'' --i.e.\ scenarios which include matter. 
Here further issues come into play that in and of themselves raise further concerns for the viability of the theory from observational points of view. This, in turn, requires invoking mechanisms like ``Vainshtein screening'' which bring about further mathematical difficulties, see e.g.~\cite{Brito:2014ifa}. This work is organized as follows. In section II we revisit the special case of Horndeski's theories that has been identified as well posed, re-analyze it in the Einstein frame drawing general conclusions about such a case, and discuss issues related to well posedness in the nonlinear regime. In sections III and IV we illustrate our discussion both analytically and numerically in a couple of special cases. \section{Horndeski's theory, special case analyzed} Horndeski's theories describe gravitational interactions in terms of a metric tensor $g_{ab}$ and a scalar field $\phi$. Their equations of motion are determined from the action, \begin{equation} S = \frac{1}{16\pi G} \int d^4x \sqrt{-g} \, \Bigl(\sum_{i=1}^{5} \, {\cal L}_i\Bigr) \end{equation} where, \begin{eqnarray} {\cal L}_1 &=& R + X - V(\phi) \, , \\ {\cal L}_2 &=& {\cal G}_2(\phi,X) \, , \\ {\cal L}_3 &=& {\cal G}_3(\phi,X) \Box \phi \, , \\ {\cal L}_4 &=& {\cal G}_4(\phi,X) R + \partial_X {\cal G}_4(\phi,X) \delta^{ac}_{bd} \nabla_a \nabla^b \phi \nabla_c \nabla^d \phi \, , \\ {\cal L}_5 &=& {\cal G}_5(\phi,X) G_{ab} \nabla^a \nabla^b \phi - \frac{1}{6} \partial_X {\cal G}_5(\phi,X) \delta^{ace}_{bdf} \nabla_a \nabla^b \phi \nabla_c \nabla^d \phi \nabla_e \nabla^f \phi \, . \end{eqnarray} with $X=-1/2 \nabla_a \phi \nabla^a \phi$, $G_{ab}$ the Einstein tensor, ${\cal G}_i$ functions of the scalars $\{ \phi,X \}$, $V$ a potential, and $\delta_{a_1..a_n}^{b_1..b_n}$ the generalized Kronecker delta symbol. A thorough analysis of hyperbolicity properties of the resulting equations of motion, given the complexity of the PDE system, is naturally a difficult task. 
One such study has been presented recently in~\cite{Papallo:2017qvl} (see also~\cite{Ijjas:2018cdm}). Here, following steps taken to establish local well posedness of Einstein equations~\cite{fourès-bruhat1952} --whereby the introduction of harmonic coordinates renders Einstein equations manifestly symmetric hyperbolic-- a judicious coordinate choice is found to guarantee strong hyperbolicity. Within this context, it is shown that only a special subset of Horndeski's theories leads to strong hyperbolicity in harmonic gauge in the nonlinear regime. This subset is given by the action, \begin{equation}\label{jordanH1} S=\frac{1}{16\pi}\int d^4x \sqrt{-g} \left[ (1+{\cal G}_4(\phi))R + X - V(\phi) + {\cal G}_2(\phi,X) \right] \; , \end{equation} Notice that the action above corresponds to the so-called Jordan frame (as the Ricci scalar appears multiplied by a non-trivial function of $\phi$). The equations of motion obtained from this action can be found in~\cite{Papallo:2017qvl}. A conformal transformation, of the form $\tilde g_{ab} = \Omega^2 g_{ab}$ with $\Omega=\sqrt{1+{\cal G}_4(\phi)}$, can be exploited to obtain the equations in the Einstein frame. We assume that the conformal factor $\Omega$ never vanishes, which ensures that the transformation is well-defined and the two formulations of the theory are equivalent. This allows one to rewrite the above action as, \begin{equation}\label{einsteinH1} S=\frac{1}{16\pi}\int d^4x \sqrt{-\tilde g} \left\{ \tilde R + \frac{1}{(1+{\cal G}_4(\phi))^2}\left[\left(3[{\cal G}'_4(\phi)]^2 +1+{\cal G}_4(\phi)\right)\tilde{X} - V(\phi) + {\cal G}_2\left(\phi,(1+{\cal G}_4(\phi))\tilde{X}\right)\right] \right\} \; , \end{equation} where $\tilde X=-1/2 \tilde{\nabla}_c \phi \, \tilde{\nabla}^c \phi$. 
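For the reader's convenience, we record the standard four-dimensional conformal identities behind this computation. With $\tilde g_{ab} = \Omega^2 g_{ab}$ one has \[ R = \Omega^{2}\left[\tilde R + 6\,\tilde{\Box} \ln\Omega - 6\,\tilde g^{ab}\,\tilde{\nabla}_a \ln\Omega\, \tilde{\nabla}_b \ln\Omega\right], \qquad \sqrt{-g} = \Omega^{-4}\sqrt{-\tilde g}, \] so that, discarding the total derivative $\sqrt{-\tilde g}\,\tilde{\Box}\ln\Omega$ and using $\ln\Omega = \frac{1}{2}\ln(1+{\cal G}_4(\phi))$ together with $\tilde g^{ab}\,\partial_a\phi\,\partial_b\phi = -2\tilde{X}$, \[ \sqrt{-g}\,(1+{\cal G}_4(\phi))\,R = \sqrt{-\tilde g}\left[\tilde R + \frac{3[{\cal G}'_4(\phi)]^2}{(1+{\cal G}_4(\phi))^2}\,\tilde{X}\right] + \text{total derivative}\,. \] The remaining terms of~\eqref{einsteinH1} follow directly from $X = (1+{\cal G}_4(\phi))\,\tilde{X}$ and $\sqrt{-g} = (1+{\cal G}_4(\phi))^{-2}\sqrt{-\tilde g}$. 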
From this action, the equations of motion are \begin{align} & \tilde{G}_{ab} = \left[\frac{3[{\cal G}'_4(\phi)]^2+1+{\cal G}_4(\phi)}{2(1+{\cal G}_4(\phi))^2}\,\tilde{X} +\frac{- V(\phi) + {\cal G}_2(\phi,X)}{2(1+{\cal G}_4(\phi))^2}\right]\tilde{g}_{ab} +\left[\frac{3[{\cal G}'_4(\phi)]^2}{2(1+{\cal G}_4(\phi))^2}+\frac{1+\partial_{X}{\cal G}_{2}(\phi,X)}{2(1+{\cal G}_4(\phi))}\right]\tilde{\nabla}_a \phi\tilde{\nabla}_b \phi \;,\label{einsteingabpregauge} \\[7pt] \nonumber & \biggl[\tilde{g}^{ab} - \frac{(1+{\cal G}_{4}(\phi))^2 \partial^2_{XX}{\cal G}_{2}(\phi,X)}{3[{\cal G}'_4(\phi)]^2+(1+{\cal G}_4(\phi))(1+\partial_{X}{\cal G}_{2}(\phi,X))} \tilde{\nabla}^a \phi\tilde{\nabla}^b \phi \biggr] \tilde{\nabla}_a\tilde{\nabla}_b \phi \\ \nonumber &\qquad\qquad = \frac{1}{3[{\cal G}'_4(\phi)]^2+(1+{\cal G}_4(\phi))(1+\partial_{X}{\cal G}_{2}(\phi,X))} \Biggl\{V'(\phi)-\partial_{\phi}{\cal G}_{2}(\phi,X)-2\,{\cal G}'_4(\phi)\frac{V(\phi)-{\cal G}_{2}(\phi,X)}{1+{\cal G}_{4}(\phi)} \\ \nonumber &\qquad\qquad\qquad +\biggl[2(1+{\cal G}_{4}(\phi))\partial^{2}_{\phi X}{\cal G}_{2}(\phi,X)+{\cal G}'_{4}(\phi) \Bigl( 6{\cal G}_4''(\phi)-1-3\partial_{X}{\cal G}_{2}(\phi,X)-6\frac{[{\cal G}'_4(\phi)]^2}{1+{\cal G}_{4}(\phi)}\Bigr)\biggr]\tilde{X} \\ &\qquad\qquad\qquad +2{\cal G}'_{4}(\phi)(1+{\cal G}_{4}(\phi))\partial^{2}_{XX}{\cal G}_{2}(\phi,X)\tilde{X}^{2} \Biggr\} \label{einsteinphipregauge} \;. \end{align} In order to write the scalar field equation in the convenient form~\eqref{einsteinphipregauge}, we have divided it by the overall factor ${{3[{\cal G}'_4(\phi)]^2+(1+{\cal G}_4(\phi))(1+\partial_{X}{\cal G}_{2}(\phi,X))}\,(1+{\cal G}_{4}(\phi))^{-2}}$. In the following, we will assume that this factor is non-zero. 
Notice that neither of the right-hand sides in eqns~\eqref{einsteingabpregauge}-\eqref{einsteinphipregauge} involves second order derivatives of the relevant fields (metric or scalar), and the hyperbolicity properties of the system can be assessed independently for $g_{ab}$ and $\phi$. In the case of the metric tensor, such properties only depend on the metric tensor itself, and we can draw from the vast knowledge about properties of Einstein equations (see e.g.~\cite{Sarbach:2012pr}). In particular, we recall that they can be straightforwardly rendered into symmetric hyperbolic form. Indeed, following again~\cite{fourès-bruhat1952}, one can introduce harmonic coordinates ($\tilde \Gamma_a = 0$), and equation~\eqref{einsteingabpregauge} becomes symmetric hyperbolic. Further, we recall the speed of propagation of perturbations is {\em independent of the metric tensor itself} (thus the equation is linearly degenerate and no shocks can arise from smooth initial data). Importantly, the observations above with regard to well posedness (at least locally) and linear degeneracy are certainly valid for other gauges. As we shall discuss below, regardless of the gauge choice, the scalar field equation has particularly `worrisome' properties. The principal part of the scalar field equation depends on $\{g_{ab},\phi,\partial_a \phi\}$. Indeed, the principal part of the equation for $\phi$, equation (\ref{einsteinphipregauge}), is that of a wave equation with respect to a modified metric \begin{equation} \gamma^{ab} = \tilde{g}^{ab} - \frac{(1+{\cal G}_{4}(\phi))^2 \partial^2_{XX}{\cal G}_{2}(\phi,X)}{3[{\cal G}'_4(\phi)]^2+(1+{\cal G}_4(\phi))(1+\partial_{X}{\cal G}_{2}(\phi,X))} \tilde{\nabla}^a \phi\tilde{\nabla}^b \phi \label{effectiveinversemetric}\, . \end{equation} Thus, propagation speeds of scalar field perturbations depend on the state of the field and its gradient. 
As a consequence, shocks can develop from smooth initial data, at which point uniqueness of the solution is lost and, with it, well posedness~\footnote{To recover it, further conditions would need to be imposed, see discussions in~\cite{Reall:2014sla,Allwright:2018rut}.}. Another potential problem is that the equation itself might change character point-wise in the spacetime. Indeed, the character of this equation, i.e. hyperbolic, elliptic or parabolic, is determined by the eigenvalues of $\gamma^{ab}$. Namely, if no eigenvalue is zero and the sign of only one of them is opposite to the others, the equation is hyperbolic~\footnote{If more than one is of opposite sign, the equation would be ultra-hyperbolic in character.} (with mostly-$+$ signature, this means exactly one negative eigenvalue). If all signs are the same the equation is elliptic, and if at least one eigenvalue is zero it is parabolic. For a well defined initial value problem describing a small departure from General Relativity, the equation would be hyperbolic. Notice that at the linear level, equation~\eqref{einsteinphipregauge} is symmetric hyperbolic, linearly degenerate, and the scalar field perturbations propagate at the speed of light of the metric $\tilde g_{ab}$. However, at the non-linear level --even with smooth initial data-- if dispersion does not win and gradients grow (assuming $\partial_{XX}{\cal G}_2 \neq 0$), the character of the equation can change and, by continuity, it would do so by turning --locally-- parabolic and then elliptic. Thus, either through a change of character or through loss of uniqueness due to shocks, well posedness could be lost. Interestingly, a change in character in spherically symmetric non-linear studies of subclasses of Horndeski's theories has been identified, for instance in k-essence~\cite{Akhoury:2011hr} and Einstein-Dilaton-Gauss-Bonnet~\cite{Ripley:2019hxt}.
The theories studied in these references are seemingly different from Eq.~\eqref{jordanH1}, but they can be linked to a Horndeski theory through the following mappings. In the former case, only the kinetic term ${\cal G}_{2}(X)$ is present, while in the latter we only have ${\cal{G}}_{5}(\phi,X)=-\lambda\ln|X|$ where $\lambda$ is the coupling constant~\footnote{Although such a function ${\cal G}_{5}$ is not smooth at $X=0$, the equations of motion are well defined everywhere~\cite{Papallo:2017ddx}.}. Notice, however, that the potential change in character or the development of shocks might be absent in special cases. To assess this, consider the following transformation for the scalar field \begin{equation}\label{scalartransfornation} \tilde{\phi} = \int \frac{3[{\cal G}'_4(\phi)]^2+(1+{\cal G}_4(\phi))(1+\partial_{X}{\cal G}_{2}(\phi,X))}{(1+{\cal G}_{4}(\phi))^2}\,\mathrm{d}\phi\,. \end{equation} The scalar equation of motion becomes \begin{align} \nonumber \tilde{g}^{ab} \tilde{\nabla}_a\tilde{\nabla}_b \tilde{\phi} = \frac{1}{(1+{\cal G}_4(\phi))^2} \Biggl\{& V'(\phi)-\partial_{\phi}{\cal G}_{2}(\phi,X)-2\,{\cal G}'_4(\phi)\frac{V(\phi)-{\cal G}_{2}(\phi,X)}{1+{\cal G}_{4}(\phi)} \\ & -{\cal G}'_{4}(\phi) \biggl[ 6{\cal G}_4''(\phi)-1+\partial_{X}{\cal G}_{2}(\phi,X)-6\frac{[{\cal G}'_4(\phi)]^2}{1+{\cal G}_{4}(\phi)}\biggr]\tilde{X} \Biggr\} \;, \end{align} where $\phi$ is to be understood as a function of $\tilde{\phi}$: $\phi(\tilde{\phi})$, provided the relation~\eqref{scalartransfornation} is invertible. Then, the scalar field $\tilde{\phi}$ obeys a wave equation of the original metric $\tilde{g}_{ab}$, and no pathologies would arise (unless $\tilde g^{ab}$ itself becomes singular). However, the equivalence between the new scalar field and the old one is a nontrivial question, as the transformation~\eqref{scalartransfornation} may not always be well defined.
In particular, the requirement that the newly defined scalar field should verify $\tilde{\nabla}_{[\mu}\tilde\nabla_{\nu]}\tilde{\phi} = 0$ further implies that $\tilde{\nabla}_{[\mu}\left(\tilde{X}\,\partial_{\nu]}\phi\right) = 0$, thus $\tilde X \partial_a \phi$ is twist-free. Such a condition could be regarded as an external constraint to ensure well posedness. In the simple example of Sec.~\ref{Ex1}, we perform a similar redefinition of the scalar field which is always well defined, as it does not depend on $X$. As an illustration, we show in the nonlinear example of Sec.~\ref{Ex2} how the twist evolves for several representative cases. \\ Notice that by working in the Einstein frame, we have straightforwardly recovered the conclusions from~\cite{Papallo:2017qvl}, i.e. {\em local well posedness} of this class of Horndeski's theories by virtue of the equations of motion for $g_{ab}$ and $\phi$ being symmetric hyperbolic. The question of global solutions to this theory is, naturally, far more involved, which is not unexpected, as this is already a complex question in General Relativity! Nevertheless, some relevant conclusions can be drawn. Namely, \begin{itemize} \item At the nonlinear level for {\em weak data}, the equation satisfies Klainerman's {\em null condition}~\cite{2014arXiv1407.6276H} if ${\cal G}$ is at least of order $X$ ($\propto \tilde X$). Consequently, the stability of Minkowski results~\cite{Christodoulou:1993uv} (or the satisfaction of the weak null condition by the Einstein equations~\cite{Lindblad:2004ue}), together with the contributions of $\phi$ satisfying Strauss' conjecture~\cite{2011arXiv1110.4454W}, imply that the (subclass of) Horndeski's theories considered has a global solution in the small data case. Beyond the weak case, however, little is known; though, as mentioned, the dependence of the propagation speed on the field and its gradient implies a high likelihood of shocks arising and/or a change in character.
Should such issues arise, would they be ``invisible'' to far observers? This would depend on whether they generically form {\em inside} a black hole. In such a case, pathological issues might be shielded from problematic consequences at the classical level. A priori this seems far from guaranteed; indeed, in the context of ref~\cite{Ripley:2019hxt}, a change in character of the equations is encountered prior to a black hole being formed. We will also illustrate such a behavior in section \ref{Ex2}. \item Since the speed of propagation of (perturbations of) the metric tensor and the scalar field can be different, black holes are defined by the fastest outward propagation speeds. Additionally, gravitational Cherenkov radiation would be possible, and high energy cosmic rays can help to draw constraints on this process (e.g.~\cite{Kimura:2011qn}). \end{itemize} Lastly, we can also examine what can be drawn from adopting the harmonic gauge in the Einstein frame, and its implication in the Jordan frame. For starters, it is trivial to determine that $\tilde \Gamma^a = \Omega^{-2} \, ( \Gamma^a -2 \nabla^a \ln \Omega)$. Thus, in the Jordan frame the harmonic condition from the Einstein frame calls for adopting coordinates that satisfy instead $\Gamma^a = 2 \nabla^a \ln \Omega$. This implies \begin{equation} \Gamma^a = \frac{{\cal G}'_4}{1+{\cal G}_4} \nabla^a \phi \, , \end{equation} which is precisely the condition derived in~\cite{Papallo:2017qvl} in the Jordan frame to obtain a strongly hyperbolic system of equations and establish local well posedness. \section{Illustration in specific cases} \subsection{Jordan and Einstein frames equations of motion. Hyperbolicity and implications}\label{Ex1} Within the class of Horndeski's theories, one of the simplest is given by \begin{equation}\label{jordan} S=\frac{1}{16\pi}\int d^4x \sqrt{-g} \left[ \phi R - \frac{\omega}{\phi} g^{\alpha\beta} \nabla_{\alpha}\phi \nabla_{\beta}\phi \right] \; , \end{equation} where $\omega$ is a function of $\phi$ only.
A comparison with Horndeski's Lagrangian implies, \begin{equation} {\cal G}_2 = \frac{(2 \omega - \phi)}{\phi} X \, \, , \, \, {\cal G}_4 = \phi - 1 \nonumber \, ; \end{equation} with all the other functions (including the potential) set to zero. From our previous discussion, since $\partial_{XX} {\cal G}_2 = 0$, it is clear that in the Einstein frame the characteristics of both the metric tensor and the scalar field are determined by the metric. This theory has recently been the subject of fully non-linear studies in the context of binary neutron star mergers~\cite{Barausse:2012da,Palenzuela:2013hsa,Shibata:2013pra}. In such scenarios global solutions describing several orbits, merger and aftermath have been successfully achieved. This suggests an underlying robustness of the equations of motion which can be understood at the analytical level rather simply. To fix ideas, let us consider the vacuum case. The field equations derived from the (Jordan frame) action (\ref{jordan}) are \begin{equation} \label{jordaneqmetric} R_{\mu\nu}-\frac{1}{2} g_{\mu\nu} R= \frac{\omega}{\phi^2} \left( \nabla_{\mu}\phi \nabla_{\nu} \phi -\frac{1}{2} g_{\mu\nu} \nabla^{\alpha}\phi \nabla_{\alpha}\phi \right) +\frac{1}{\phi} \left( \nabla_{\mu}\nabla_{\nu} \phi-g_{\mu\nu} \Box \phi \right) \; , \end{equation} \begin{equation} \label{jordaneqfield} \Box \phi = -\frac{\phi R}{2 \omega} +\left(\frac{1}{2 \phi} -\frac{\omega'}{2\omega}\right)(\nabla \phi)^2 \; . \end{equation} Upon replacing the Ricci scalar, one re-expresses equation (\ref{jordaneqfield}) as \begin{equation} \label{jordaneqfieldreplacedricci} \Box \phi = -\frac{\omega'}{3+2\omega}(\nabla \phi)^2\; , \end{equation} which satisfies the null condition in the weak case. However, a non-trivial coupling --at the level of the principal part-- is present in equation (\ref{jordaneqmetric}).
Furthermore, notice that the right hand side of this equation contains second derivatives of the scalar field --thus such terms do belong to the principal part of the system. Moreover, because of such terms, the right hand side does not manifestly satisfy the null energy condition. Both these observations indicate that it is not a priori clear that solutions obtained from this system are well behaved. However, through the conformal transformation~\cite{BD61}, \begin{equation} \label{CT} g_{\mu\nu} \longrightarrow \tilde{g}_{\mu\nu}=\phi \, g_{\mu\nu} \; , \end{equation} and the scalar field redefinition \begin{equation} \label{SFredefinition} \phi \longrightarrow \tilde{\phi}= \int \frac{(3+2\omega)^{1/2}}{\phi} \, d\phi \; , \end{equation} one recasts the theory in the Einstein frame. In this frame, the theory is defined by the standard Einstein-Hilbert action with an extra field, \begin{equation} \label{einstein} S=\int d^4x \sqrt{-\tilde{g}} \left[ \frac{\tilde{R}}{16\pi} -\frac{1}{2} \, \tilde{g}^{\mu\nu} \tilde{\nabla}_{\mu}\tilde{\phi} \tilde{\nabla}_{\nu}\tilde{\phi} \right] \; . \end{equation} The field equations are the usual Einstein equations with the scalar field as a source, together with a rather trivial equation for the scalar field itself, \begin{equation} \tilde{R}_{\mu\nu}-\frac{1}{2} \tilde{g}_{\mu\nu} \tilde{R}= 8\pi \left( \tilde{\nabla}_{\mu}\tilde{\phi} \tilde{\nabla}_{\nu} \tilde{\phi} -\frac{1}{2} \, \tilde{g}_{\mu\nu} \tilde{\nabla}^{\alpha} \tilde{\phi} \tilde{\nabla}_{\alpha}\tilde{\phi} \right) \; , \end{equation} \begin{equation} \tilde{\Box} \tilde{\phi}=0 \; . \end{equation} The equation for the (conformal) metric $\tilde g_{ab}$ is amenable to the standard analysis of well posedness in Einstein equations (e.g.~\cite{Sarbach:2012pr}). In particular, adopting {\em harmonic coordinates} ($\tilde \Gamma_a = 0$), the principal part of the field equations derived from (\ref{einstein}) becomes just ten wave equations.
Further, the right hand side now obeys the null energy condition. Thus, in the Einstein frame it follows that at least a local in time solution will exist, and standard geometrical arguments can be exploited to assess general features of the spacetime behavior. What does this imply in the Jordan frame? Here, since $\tilde \Gamma_a = \phi^{-2} \, (\phi \Gamma_a - \nabla_a \phi)$, the discussion above suggests adopting coordinates satisfying $\Gamma_a = \phi^{-1} \nabla_a \phi$. With this choice, the equations of motion in the Jordan frame can be re-expressed in the following way. Beginning with \begin{equation} R_{ab} = \frac{\omega}{\phi^2} \nabla_a \phi \nabla_b \phi + \frac{1}{2 \phi} g_{ab} \Box \phi + \frac{1}{\phi} \nabla_a \nabla_b \phi\,, \end{equation} we then define $\hat R_{ab} + \nabla_{(a} \Gamma_{b)} \equiv R_{ab}$ (i.e., taking out the covariant derivative of the trace of the Christoffels). Now, replacing in such a term the condition on the coordinates, we obtain \begin{equation} \hat R_{ab} = \frac{\omega+1}{\phi^2} \nabla_a \phi \nabla_b \phi + \frac{1}{2} g_{ab} \phi^{-1} \Box \phi \,. \end{equation} A priori we still have second order derivatives in the right hand side of the above equation, but --on shell-- we can still use the equation for the field $\phi$. Recall, \begin{equation} \Box \phi = -\frac{\omega'}{(3+2\omega)}(\nabla \phi)^2\,. \end{equation} Thus, the metric equation results in \begin{equation} \hat R_{ab} = \frac{(1+\omega)}{\phi^2} \nabla_a \phi \nabla_b \phi -\frac{\omega'}{2\phi(3+2\omega)}g_{ab} (\nabla \phi)^2\,. \label{newjordan} \end{equation} It is then evident that the right hand side can satisfy the null energy condition for $\omega \ge 1$. \subsection{Exploring the non-linear behavior. Case with $\partial_{XX} {\cal G}_2 \neq 0$}\label{Ex2} We now turn our attention to Horndeski's theories with a nonlinear kinetic term ${\cal G}_2(\phi, X) = - g X^2$, with all other functions, as well as the potential, set to zero for simplicity.
This choice, similar to those adopted in \citep{Akhoury:2011hr}, can be thought of as the first nonlinear term in a Taylor expansion of the kinetic term in a $k$-essence theory, \begin{equation} S = \int d^4 x \sqrt{-g} \left[ R + X - g X^2\right] \label{G2action}\, . \end{equation} Our goal is to study the nonlinear behavior of the theory and explore the possible phenomenology that can arise. While we are restricting to a rather special case, as we shall see, a number of possible pitfalls can appear which are likely to manifest in more general cases. To simplify the treatment and presentation, we concentrate on spherically symmetric scenarios and present several cases defined by different initial conditions as well as the value of the coupling $g$. For simplicity we adopt Schwarzschild coordinates, where the metric can be written as \begin{equation} ds^2 = - \alpha^2 dt^2 + a^2 dr^2 + r^2 d\Omega^2\, \label{G2metric}. \end{equation} Thus the only dynamical metric functions are the lapse function $\alpha(t, r)$ and $a(t,r)$. Recall that these coordinates become singular when a horizon forms. Such a scenario takes place when $l^\mu \nabla_\mu r = 0$, where $l^\mu$ is a null vector \citep{Akhoury:2011hr}. In the gauge (\ref{G2metric}), this is simply $\alpha = 0$. Consequently, with our current implementation we can explore up to black hole formation. Despite this limitation, as we shall see below, one can identify several problematic scenarios arising either outside the black hole or even prior to its formation. Thus, severe restrictions to well posedness arise which are not cloaked by a horizon for asymptotic observers. To simplify the discussion and the numerical implementation, we introduce standard first order variables as used in \citep{Choptuik:1992jv}, \begin{equation} \Phi \equiv \phi'\, , \qquad \Pi \equiv \frac{a}{\alpha} \dot \phi \label{G2firstorderdef}\, , \end{equation} using the notation $\dot f = \partial_t f$ and $f' = \partial_r f$.
In the special case of ${\cal G}_2(\phi, X) = {\cal G}_2(X)$, as in (\ref{G2action}), equations (\ref{einsteingabpregauge}) and (\ref{einsteinphipregauge}), respectively, take the form \begin{equation} R_{\mu \nu} - \frac12 g_{\mu \nu} R = \left[\frac{X + {\cal G}_2(X)}{2}\right] g_{\mu \nu} + \left[\frac{1 + \partial_X {\cal G}_2(X)}{2}\right] \nabla_\mu \phi \nabla_\nu \phi \label{G2gabeom}\, , \end{equation} \begin{equation} \left[g^{\mu\nu} - \frac{\partial^2_{XX} {\cal G}_2(X)}{1 + \partial_X {\cal G}_2(X)} \nabla^\mu \phi \nabla^\nu \phi \right] \nabla_\mu \nabla_\nu \phi = 0 \label{G2phieom} \end{equation} where the effective inverse metric $\gamma^{\mu \nu}$, as in equation (\ref{effectiveinversemetric}), is given by \begin{equation} \gamma^{\mu \nu} = g^{\mu\nu} - \frac{\partial^2_{XX} {\cal G}_2(X)}{1 + \partial_X {\cal G}_2(X)} \nabla^\mu \phi \nabla^\nu \phi \label{G2gamma}\, . \end{equation} Now, in order to monitor the character of the equation of motion for the scalar field (\ref{G2phieom}), the eigenvalues of the effective inverse metric must be computed. In particular, at any given time we extract the two eigenvalues, here labeled $\lambda_\pm$, at every spatial point. Since we are mainly interested in one of the eigenvalues going to zero, the relevant quantities will be $\min(\lambda_+)$ and $\max(\lambda_-)$, where $\min(\cdot)$ and $\max(\cdot)$ refer to the minimum and maximum in the spatial (radial) direction, at any given time. It is important to keep in mind that although $\lambda_+ > 0$ and $\lambda_- < 0$ for $\phi = 0$, this is not necessarily the case for arbitrary configurations. In fact, the equations will change character when these conditions cease to be satisfied.
The two eigenvalues can be expressed as \begin{equation} \lambda_\pm = \frac{\gamma^{tt} + \gamma^{rr}}{2} \pm \sqrt{\left(\frac{\gamma^{tt} + \gamma^{rr}}{2}\right)^2 - \gamma^{tt}\gamma^{rr} + (\gamma^{tr})^2} = \frac{\gamma^{tt} + \gamma^{rr}}{2} \pm \sqrt{\left(\frac{\gamma^{tt} + \gamma^{rr}}{2}\right)^2 - \det (\gamma^{\mu\nu})}\label{lambdaeigs}\; . \end{equation} It is evident that the system will become parabolic when $\det (\gamma^{\mu\nu}) = 0$, as expected. Additionally, it is important to keep track of the characteristic speeds, or propagation velocities, of the scalar field. This can be done by extracting the eigenvalues, here labeled $V_\pm$, of the principal part of the (first order) equations of motion for $\Phi$ and $\Pi$. These eigenvalues determine the shape of the light cones for the scalar field, and can be used to identify features such as sound horizons (horizons for the scalar field \citep{Akhoury:2011hr}). With our conventions, asymptotically $V_+ \rightarrow 1$ while $V_- \rightarrow -1$, describing, respectively, the incoming and outgoing modes of the field. A sound horizon --with respect to asymptotic observers-- will appear\footnote{Naturally the opposite condition still defines a local sound horizon, cloaking some local region from being reached by scalar field perturbations.} when $V_- = 0, V_+ \ge 0$. Again, as in the case of the effective metric, we are interested in $\min(V_+)$ and $\max(V_-)$. Explicitly, \begin{equation} V_\pm = - \frac{\gamma^{tr}}{\gamma^{tt}} \pm \sqrt{\left(\frac{\gamma^{tr}}{\gamma^{tt}}\right)^2 - \frac{\gamma^{rr}}{\gamma^{tt}}} = - \frac{\gamma^{tr} }{\gamma^{tt}} \pm \sqrt{-\frac{\det(\gamma^{\mu\nu})}{(\gamma^{tt})^2}}\label{Veigs}\, . \end{equation} As mentioned, when $\det (\gamma^{\mu\nu}) = 0$ the equation changes character. However, the rate at which $(\gamma^{tt})^2 \rightarrow 0$ distinguishes two important cases with respect to the {\em type} of change.
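As a concrete cross-check of the expressions above, extracting $\lambda_\pm$ amounts to diagonalizing the symmetric $t$--$r$ block of $\gamma^{\mu\nu}$. The following minimal sketch (with illustrative helper names; this is not the code used to produce the results below) verifies that $\lambda_+\lambda_- = \det(\gamma^{\mu\nu})$, so a vanishing determinant signals the parabolic transition:

```python
import math

def eigenvalues_2x2(g_tt, g_tr, g_rr):
    """Eigenvalues lambda_{+/-} of the symmetric t-r block of the
    effective inverse metric, via the quadratic formula above."""
    mean = 0.5 * (g_tt + g_rr)
    det = g_tt * g_rr - g_tr ** 2
    disc = math.sqrt(mean ** 2 - det)
    return mean + disc, mean - disc

# A Lorentzian (hyperbolic) sample: one eigenvalue of each sign.
lam_p, lam_m = eigenvalues_2x2(-1.0, 0.3, 1.0)
assert lam_p > 0.0 > lam_m

# The eigenvalue product is the determinant: parabolic exactly when det = 0.
assert abs(lam_p * lam_m - (-1.0 * 1.0 - 0.3 ** 2)) < 1e-12
```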
Recall that mixed character equations can often be classified in comparison to two standard equations~\cite{Ripley:2019irj}. These are the Tricomi equation \begin{equation} \partial_y^2 u(x, y) + y \partial_x^2 u(x, y) = 0 \label{Tricomi}\, , \end{equation} where the characteristic speeds, $\pm{{y}^{1/2}}$, go to zero at the character transition line $y=0$, and the Keldysh equation \begin{equation} \partial_y^2 u(x, y) + \frac{1}{y} \partial_x^2 u(x, y) = 0 \label{Keldysh}\, , \end{equation} where the speeds, $\pm{{y}^{-1/2}}$, diverge at the transition line. Notice that the discriminant between the two characteristic speeds (\ref{Veigs}) turns out to be proportional to $-\det(\gamma^{\mu\nu})$. Therefore, as long as $(\gamma^{tt})^2 \rightarrow 0$ slower than $\det(\gamma^{\mu\nu}) \rightarrow 0$, the characteristic speeds $V_+,V_-$ will coincide and the scalar field light cone becomes degenerate. Thus, there must exist some instant of time, before the system becomes --at least locally-- parabolic, when either $V_+$ or $V_-$ is zero (the latter case implying a sound horizon), indicative of a Tricomi-type transition. On the other hand, if $(\gamma^{tt})^2 \rightarrow 0$ faster than $\det(\gamma^{\mu\nu}) \rightarrow 0$, the characteristic speeds diverge, indicating a transition of Keldysh type. This case is more delicate to treat numerically, as the diverging speeds imply that, with an explicit integration algorithm, the time-step should be adjusted to decrease inversely with the maximum speed. (Note: an implicit update could be implemented to bypass this issue, but at the expense of missing physics taking place at smaller scales than the time-step adopted). Interestingly, in \cite{Ripley:2019irj}, only a Tricomi-type behavior is observed. Anticipating our results, we observe both cases depending on the value of the coupling $g$: Tricomi-like transitions for $g < 0$ and Keldysh-type transitions for $g > 0$.
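The two limiting behaviors can be made concrete with a toy check of the model equations (a sketch in our own sign conventions, taking $y<0$ as the hyperbolic region; it is independent of the evolution code):

```python
import math

def tricomi_speeds(y):
    # Characteristic speeds of u_yy + y u_xx = 0 in its hyperbolic region y < 0.
    v = math.sqrt(-y)
    return v, -v

def keldysh_speeds(y):
    # Characteristic speeds of u_yy + (1/y) u_xx = 0 in its hyperbolic region y < 0.
    v = 1.0 / math.sqrt(-y)
    return v, -v

# Approaching the transition line y -> 0^- from the hyperbolic side:
for y in (-1e-2, -1e-4, -1e-6):
    vt, _ = tricomi_speeds(y)
    vk, _ = keldysh_speeds(y)
    assert vt < 0.2    # Tricomi: the light cone collapses (speeds -> 0)
    assert vk > 5.0    # Keldysh: the light cone opens up (speeds diverge)
```

The Keldysh branch is what forces the time-step reduction mentioned above: an explicit scheme must keep $\Delta t \lesssim \Delta r / \max|V_\pm|$.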
The well posedness of the Tricomi equation has been explored in \cite{doi:10.1002/cpa.3160230404,Otway} and, as discussed in~\cite{Ripley:2019irj}, the initial/boundary conditions to ensure well posedness would be rather unnatural from a time-development point of view. \subsubsection*{Implementation details} In the first order variables (\ref{G2firstorderdef}) we can extract from the $rr$ and $tt$ components of equation (\ref{G2gabeom}), respectively, the first order constraint equations \begin{equation} \alpha' = \frac{\alpha}{8r} \left[4(a^2 - 1) + r^2 (\Phi^2 + \Pi^2) \right] - g\frac{r \alpha}{16 a^2} \left[(\Phi^2 + \Pi^2)^2 - 4\Phi^4\right]\label{G2alpha}\, , \end{equation} \begin{equation} a' = \frac{a}{8r} \left[4(1 - a^2) + r^2 (\Phi^2 + \Pi^2) \right] + g\frac{r}{16 a} \left[(\Phi^2 + \Pi^2)^2 - 4\Pi^4\right] \label{G2a}\, . \end{equation} Equation (\ref{G2phieom}), in terms of the first order variables, is given by \begin{align} \dot \Pi = \frac{1}{r^2}\left(r^2 \frac{\alpha}{a} \Phi\right)' + \frac{2 g}{a^2 + g \left(\Phi^2 - 3 \Pi^2\right)} \frac{\alpha}{a} \Bigg[ (\Phi^2 + \Pi^2) \Phi ' - 2 \Phi \Pi \Pi' \nonumber \\ + \left( \frac{r}{4}\Pi ^2 - \frac{a'}{a}\right) \left(\Phi^2-\Pi ^2\right)\Phi + \frac{g r}{4 a^2} \left(\Phi^2- \Pi^2\right)^2 \Phi \Pi ^2 + \frac{2}{r} \Phi \Pi ^2 \Bigg] \label{G2Pi}\, , \end{align} together with the condition that $\partial_t \partial_r \phi = \partial_r \partial_t \phi$, namely \begin{equation} \dot \Phi = \left(\frac{\alpha}{a}\Pi\right)' \label{G2Phi}\, .
\end{equation} The effective inverse metric from equation (\ref{G2gamma}) reads \begin{equation} \gamma^{tt} = -\frac{1}{\alpha ^2} \left( 1-g \frac{2 \Pi ^2}{a^2+g \left(\Phi ^2 -\Pi ^2 \right)}\right)\, , \qquad \gamma^{rr} = \frac{1}{a^2} \left(1 + g\frac{2 \Phi ^2}{a^2+g \left(\Phi ^2-\Pi ^2\right)} \right)\, , \end{equation} \begin{equation} \gamma^{tr} = -g\frac{2 \Pi \Phi }{a \alpha \left(a^2+g \left(\Phi ^2-\Pi ^2\right)\right)}\, , \end{equation} and the matrix defining the principal part of equations (\ref{G2Pi}) and (\ref{G2Phi}) is \begin{equation} M = \left( \begin{array}{cc} 0 & \frac{\alpha}{a} \\ -\frac{a \gamma^{rr}}{\alpha \gamma^{tt}} & - 2 \frac{\gamma^{tr}}{\gamma^{tt}} \\ \end{array} \right) = \frac{\alpha}{a}\left( \begin{array}{cc} 0 & 1 \\ 1 + 2g\frac{\Phi ^2 + \Pi ^2}{a^2 + g \left(\Phi ^2-3 \Pi ^2\right)} & -4 g \frac{ \Pi \Phi }{a^2 + g \left(\Phi ^2-3 \Pi ^2\right)} \\ \end{array} \right)\; . \end{equation} The equations of motion are solved in a constrained evolution scheme. Both $\alpha(t, r)$ and $a(t, r)$ are obtained through a spatial integration, while the scalar field is integrated in time through a fourth-order Runge-Kutta (RK4) time integrator. At each time step (intermediate or full), given a spatial profile for the fields $\Phi$ and $\Pi$, the constraint equations (\ref{G2alpha}) and (\ref{G2a}) are integrated in space, also using an RK4 method. First, $a$ is integrated radially outwards from $r = 0$ to $r = r_\text{max}$ with the initial condition $a(r = 0) = 1$. This condition ensures regularity ($\alpha' = a' = 0$) at the origin. Then, $\alpha$ is integrated radially inwards with the condition $\alpha(r_\text{max}) = 1 / a(r_\text{max})$. Notice that, as these integrations are carried out, the fields $\Phi$ and $\Pi$, which are needed at `virtual' radial points in between grid points, are obtained through fourth order (second order near the spatial boundaries) spatial interpolation at any given time step.
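To illustrate the constrained integration just described, the following sketch integrates the $a$-constraint outwards with RK4 in the simplified setting $g = 0$, $\Pi = 0$, using an analytic Gaussian-like profile for $\Phi$ in place of interpolated grid data (the amplitude and grid parameters are illustrative; this is not the production code):

```python
import math

def Phi_profile(r, A=0.02, r0=55.0, sigma=15.0):
    """Analytic stand-in for the grid data Phi (the production scheme
    interpolates Phi and Pi to the RK4 half-steps instead)."""
    return A * math.exp(-((r - r0) ** 2) / sigma ** 2) * math.cos(math.pi * r / 10.0)

def a_rhs(r, a):
    """Right-hand side of the a-constraint with Pi = 0 and g = 0
    (the g-dependent term then drops out)."""
    P = Phi_profile(r)
    return a / (8.0 * r) * (4.0 * (1.0 - a ** 2) + r ** 2 * P ** 2)

# RK4 radially outwards from r ~ 0 with the regularity condition a(0) = 1.
dr, r_max = 1.0 / 80.0, 100.0
r, a = dr, 1.0
a_profile = [a]
while r < r_max:
    k1 = a_rhs(r, a)
    k2 = a_rhs(r + 0.5 * dr, a + 0.5 * dr * k1)
    k3 = a_rhs(r + 0.5 * dr, a + 0.5 * dr * k2)
    k4 = a_rhs(r + dr, a + dr * k3)
    a += dr / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
    a_profile.append(a)
    r += dr

# a never drops below its flat-space value, and the pulse sources a > 1.
assert min(a_profile) >= 1.0 - 1e-9
assert max(a_profile) > 1.0 + 1e-4
```

Since $a' \propto r^2\Phi^2 \ge 0$ at $a = 1$, the integration cannot cross below the flat-space value, which the final assertions check.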
Evolution of $\Phi$ and $\Pi$ forward in time is carried out via the method of lines with an RK4 integration using equations (\ref{G2Pi}) and (\ref{G2Phi}). Spatial derivatives are computed with second order (first order near the boundaries) finite-difference operators satisfying summation by parts. Regularity at the origin is addressed by using l'H\^opital's rule at $r=0$ to regularize the equations, and outgoing boundary conditions are imposed at the outer radial boundary. A small amount of fourth order (second order near the boundaries) artificial dissipation is added for convenience. For further details see \cite{Calabrese:2003yd,Calabrese:2003vx,Guzman:2007ua}. The numerical results displayed in this paper are obtained in a spatial domain ranging from $r = 0$ to $r = r_\text{max} = 100$, with a spatial resolution of $\Delta r = 1/80$ (convergence and consistency of the solutions obtained are checked with resolutions of $\Delta r = 1/20$ and $\Delta r = 1/40$). The Courant number is initially taken to be $C = 1/10$, and therefore $\Delta t = C \Delta r = 1/800$. Numerical output is produced every 40 time steps. For cases displaying very fast changes, or a high speed of propagation of the scalar field, we switch to a Courant parameter of $C = 1/100$ ($\Delta t = 1/8000$) in the last part of the simulation, and we produce output of the solution every 4 time steps. For reference, the times when these refinements are initiated are listed in the Appendix. \subsubsection*{Initial conditions and coupling parameters} As mentioned, our goal is to explore the possible phenomenology that can arise in this theory. We have performed extensive studies to try to isolate different scenarios and, for concreteness in our presentation, we present three representative cases for positive and negative coupling values.
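As an aside, before specifying those cases, the spatial derivative operator described above can be sketched as follows (a minimal version of the second-order summation-by-parts operator with first-order one-sided closures; illustrative only, not the production code):

```python
def sbp_derivative(f, dr):
    """Second-order centered first derivative with first-order one-sided
    stencils at the two boundary points (the simplest SBP operator)."""
    n = len(f)
    df = [0.0] * n
    df[0] = (f[1] - f[0]) / dr
    df[n - 1] = (f[n - 1] - f[n - 2]) / dr
    for i in range(1, n - 1):
        df[i] = (f[i + 1] - f[i - 1]) / (2.0 * dr)
    return df

# The operator is exact on linear data, including at the boundary points.
dr = 1.0 / 80.0
f = [3.0 + 2.0 * (i * dr) for i in range(101)]
df = sbp_derivative(f, dr)
assert all(abs(d - 2.0) < 1e-10 for d in df)
```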
In particular, we adopt initial data for the (first order variables of the) scalar field given by \begin{equation} \Phi(t = 0, r) = A \exp \left( - \frac{(r - r_0)^2}{\sigma^2}\right) \cos \left( \frac{\pi}{10} r \right)\, , \qquad \Pi(t = 0, r) = 0\, , \end{equation} with $r_0 = 55$. The three cases, labeled {\bf A}, {\bf B} and {\bf C}, are defined by the following parameters: \begin{itemize} \item Case {\bf A}: $A = 0.02$, $\sigma = 15.0$ \item Case {\bf B}: $A = 0.14$, $\sigma = 1.5$ \item Case {\bf C}: $A = 0.045$, $\sigma = 15.0$ \end{itemize} For each of these parameter sets, we have obtained solutions for $g = +1$ (labeled A+, B+ and C+) and for $g = -1$ (A-, B- and C-). Naturally, the scale over which a non-trivial physical behavior occurs depends on: (i) the initial location and amplitude of the pulse --as it travels towards the origin in spherical symmetry, its associated energy density naturally grows-- and (ii) the strength of the coupling parameter $g$. \subsection*{Negative coupling constant: $g = -1$} Setting $g = -1$, we observe three different outcomes depending on the initial conditions of the wave pulse, as illustrated in FIG. \ref{fig:EigenvaluesMinus}. If the data is weak enough, case A-, the ingoing pulse reaches the origin, bounces off it and disperses as it propagates to infinity. For configuration B-, the eigenvalue $\lambda_+$ of the effective inverse metric crosses zero at $t \approx 56.63, r \approx 1.75$ while the lapse remains bounded from below by $\alpha \approx 0.62$. This indicates the system has become parabolic before a light horizon forms. Further, as predicted by equation (\ref{Veigs}), the characteristic speeds of the scalar field merge together as $\lambda_+ \to 0$ and acquire an imaginary part after that. Before the transition point, the eigenvalue $V_-$ crosses zero at $t \approx 56.52, r \approx 1.90$, and therefore a sound horizon is indeed produced.
However, since the lapse function $\alpha$ is positive everywhere, there is no light horizon and perturbations of the metric tensor can still propagate through the sound horizon; thus, the transition point is not disconnected from outside observers. This is not the only possible outcome for strong enough initial data: in configuration C- a light horizon does form, together with a sound horizon at $r \approx 6.5$, without any change in character of the scalar field equation. In case C- of FIG. \ref{fig:EigenvaluesMinus}, the final state at and outside this region is described by a black hole with an outwards propagating field. As mentioned, we cannot comment on what takes place inside the horizon. Interestingly, case B- displays characteristic speeds going to zero before becoming imaginary, where the equation changes character to parabolic. This, as discussed in \cite{Ripley:2019irj}, is an indication that the equation is of Tricomi type. \begin{figure}[h!] \includegraphics[width=150pt]{Plots/EigenvaluesAminus} \includegraphics[width=150pt]{Plots/EigenvaluesBminus} \includegraphics[width=150pt]{Plots/EigenvaluesCminus} \caption{Eigenvalues for $g = -1$, in cases A- (left), B- (center) and C- (right). The upper three plots show the (max/min of) eigenvalues $\lambda_\pm$ of the effective inverse metric $\gamma^{\mu\nu}$ and the minimum of $\alpha$. The lower three plots show the eigenvalues $V_\pm$ of the principal part of the scalar field equations, corresponding to the characteristic speeds of propagation of the scalar. In each plot, the upper red curves correspond to the spatial maximum (red dashed) and minimum (red solid) values of $\lambda_+$ and $V_+$, while the lower blue curves depict the spatial maximum (blue solid) and minimum (blue dashed) of $\lambda_-$ and $V_-$. The thick black solid line is the lapse function $\alpha$, used to identify the formation of a black hole.
A gray line at 0 is added as a guide to the eye.\label{fig:EigenvaluesMinus}} \end{figure} \subsection*{Positive coupling constant: $g = +1$} For $g = +1$, the delicate features in the solution develop, for the same initial conditions, in a more marked and, arguably, more violent way. The obtained behavior is illustrated in FIG. \ref{fig:EigenvaluesPlus}. Naturally, there is not much qualitative difference in the A+ configuration. This is to be expected since, for weak enough data, the impact of the scalar field is considerably suppressed. In cases B+ and C+, however, $\lambda_-$ crosses zero and the system becomes parabolic in a rather sharp, abrupt way. The transition occurs at $t \approx 54.82, r \approx 1.70$ for case B+, and at $t \approx 68.63, r \approx 0$ for case C+. In contrast to the previous case, cases B+, C+ display rapidly growing characteristic speeds right before they become imaginary, where the equation changes character to parabolic. This, as discussed in~\cite{Ripley:2019irj}, is an indication that the equation is of Keldysh type. Moreover, this implies these regimes have a natural causal horizon significantly larger than that of light (e.g.~\cite{Bonvin:2006vc}). Nevertheless, the change of character in the equation signals that well-behaved solutions can only be obtained within a finite range of time. Furthermore, this change of character --for both values of the coupling-- takes place {\em prior to a shock being formed}. \begin{figure}[h!] \includegraphics[width=150pt]{Plots/EigenvaluesAplus} \includegraphics[width=150pt]{Plots/EigenvaluesBplus} \includegraphics[width=150pt]{Plots/EigenvaluesCplus} \caption{Eigenvalues for $g = +1$, in cases A+ (left), B+ (center) and C+ (right). The upper three plots show the (max/min of) eigenvalues $\lambda_\pm$ of the effective inverse metric $\gamma^{\mu\nu}$ and the minimum of $\alpha$.
The lower three plots show the eigenvalues $V_\pm$ of the principal part of the scalar field equations, corresponding to the characteristic speeds of propagation of the scalar. In each plot, the upper red curves correspond to the spatial maximum (red dashed) and minimum (red solid) values of $\lambda_+$ and $V_+$, while the lower blue curves depict the spatial maximum (blue solid) and minimum (blue dashed) of $\lambda_-$ and $V_-$. The thick black solid line is the lapse function $\alpha$, used to identify the formation of a black hole. A gray line at 0 is added as a guide to the eye.\label{fig:EigenvaluesPlus}} \end{figure} Finally, we illustrate the behavior of the (only non-trivial) component, $\tau_{tr}$, of the twist \begin{equation} \tau_{\mu \nu} = \nabla_{[\mu}(X\partial_{\nu]}\phi)\, , \end{equation} in FIGs. \ref{fig:TwistMinus} and \ref{fig:TwistPlus} for the negative and positive couplings adopted. As is evident in the figures, in the weak cases (A-, A+) the twist remains bounded and relatively small throughout the evolution. In contrast, in all of the remaining cases but C- the twist grows without bound. In case C-, however, the twist remains bounded since the large value of $a$ at the horizon causes $X = a^{-2}(\Pi^2 - \Phi^2)/2$ to approach zero. \begin{figure}[h!] \includegraphics[width=150pt]{Plots/TwistAminus} \includegraphics[width=150pt]{Plots/TwistBminus} \includegraphics[width=150pt]{Plots/TwistCminus} \caption{$\max|\tau_{tr}|$ for cases A- (left), B- (center) and C- (right).\label{fig:TwistMinus}} \end{figure} \begin{figure}[h!] \includegraphics[width=150pt]{Plots/TwistAplus} \includegraphics[width=150pt]{Plots/TwistBplus} \includegraphics[width=150pt]{Plots/TwistCplus} \caption{$\max|\tau_{tr}|$ for cases A+ (left), B+ (center) and C+ (right).\label{fig:TwistPlus}} \end{figure} \section{Final Comments} In this work we explored the subset of Horndeski's theories identified as being able to define locally well-posed problems.
The analysis we build upon, described in~\cite{Papallo:2017qvl,Ijjas:2018cdm}, relied on identifying and exploiting a specific gauge. Such a choice might a priori be regarded as restrictive; however, when seen from the Einstein-frame point of view, it can be argued to be quite natural. Further, note that the discussion --and the problems identified that can arise-- for the dynamics of the scalar field holds regardless of the gauge chosen to consider the evolution of the metric sector. In particular, one can argue for the existence of globally well-behaved solutions in the weak data case. Beyond this regime, however, the truly non-linear character of the equations can induce phenomenology which presents serious roadblocks. Avoiding such issues requires satisfying a twist-free condition, but such a case might be too restrictive depending on the application and context of interest. In the general case, the strong possibility of a change in character of the equation --from hyperbolic to elliptic through a parabolic stage-- as well as the loss of uniqueness through the appearance of shocks further question the ability to define well-posed problems with these theories. (In simplified settings, similar deficiencies have been identified~\cite{Appleby:2011aa,DeFelice:2011bh,Brito:2014ifa}.) We mention in passing that, since the effective metric depends on the gradient of the scalar field, the transition to parabolic/elliptic regimes is likely to take place prior to the formation of shocks in generic situations (as also highlighted in~\cite{Ripley:2019hxt}). Hence, considering Horndeski's theories as the leading order in a gradient expansion, problems might still arise within the a priori assumed regime of applicability. The timescale for the identified pathologies to arise depends, naturally, on the coupling value considered and the initial data adopted.
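The character classification invoked throughout (hyperbolic, parabolic or elliptic, according to whether the characteristic speeds are real and distinct, degenerate, or imaginary) can be made concrete with a small numerical sketch. This is an illustrative reconstruction, not the authors' code; it assumes the standard quadratic characteristic equation $\gamma^{tt} V^2 + 2\gamma^{tr} V + \gamma^{rr} = 0$ for a second-order operator $\gamma^{\mu\nu}\partial_\mu\partial_\nu$ in $(t,r)$:

```python
import numpy as np

def characteristic_speeds(gtt, gtr, grr):
    """Roots of g^tt V^2 + 2 g^tr V + g^rr = 0: the radial characteristic
    speeds V_+/- of a second-order operator g^{mu nu} d_mu d_nu."""
    disc = gtr**2 - gtt * grr
    # sqrt is nan where disc < 0: the speeds turn imaginary (elliptic region)
    root = np.sqrt(np.where(disc >= 0, disc, np.nan))
    return (-gtr + root) / gtt, (-gtr - root) / gtt, disc

def character(disc, tol=1e-12):
    """Pointwise character: hyperbolic (disc > 0), parabolic (~0), elliptic (< 0)."""
    return np.where(disc > tol, "hyperbolic",
                    np.where(disc < -tol, "elliptic", "parabolic"))

# Sanity check on flat space, g^tt = -1, g^tr = 0, g^rr = 1: speeds are +-1.
v_plus, v_minus, disc = characteristic_speeds(
    np.array([-1.0]), np.array([0.0]), np.array([1.0]))
```

Tracking the sign of the discriminant over the grid reproduces the qualitative diagnostic used above: speeds merging at zero before turning imaginary (Tricomi-like) versus speeds growing rapidly before turning imaginary (Keldysh-like).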
Due to these difficulties, global solutions obtained within the linearized regime, and the information one can draw from them with respect to the original action, can be regarded as suspect. This observation, which is arguably in tension with interesting observations drawn at the linearized level in the cosmological context, perhaps calls for a different philosophy with respect to Horndeski's theories. For instance, one could use the linearized equations of motion as a starting point to build a new theory, free of the (many) problems identified at the nonlinear level, through the addition of further suitable operators (for a related discussion, see~\cite{deRham:2019wjj}). However, this might come at the expense of higher derivatives being introduced. A complementary or alternative approach would be to identify the set of behaviors which can be considered physical and, armed with a suitable justification, modify the non-linear equations of motion to control unphysical pathologies (e.g.~\cite{Cayuso:2017iqc,Allwright:2018rut}). \acknowledgements We are indebted to Gustav Holzegel and Jonathan Luk for discussions on the Null Condition and the Strauss conjecture. We also thank Anna Ijjas, Frans Pretorius, Oscar Reula and Olivier Sarbach for discussions. R.~L. gratefully acknowledges the hospitality of the Perimeter Institute for Theoretical Physics, where this work was done. R.~L. is supported by the Spanish Ministerio de Ciencia, Innovaci\'on y Universidades Grant No. FPU15/01414 and by MEC grant FPA2016-76005-C2-2-P. This research was supported in part by NSERC, CIFAR and the Perimeter Institute for Theoretical Physics. Research at Perimeter Institute is supported by the Government of Canada through the Department of Innovation, Science and Economic Development Canada, and by the Province of Ontario through the Ministry of Research and Innovation.
\section{Conclusion} \label{sec:conclusion} Motivated by the development of communicative DSRs, we developed a method for generating natural fetching instructions. The proposed method (Multi-ABN) is a multimodal attention branch network that generates natural sentences including referring expressions. The important contributions of this paper are summarized as follows: \begin{itemize} \item The Multi-ABN is an attention branch network that generates sentences based on visual and linguistic attention. The Multi-ABN yields better results under the BLEU, ROUGE and CIDEr metrics than baseline methods such as \cite{yu2017joint}. \item The Multi-ABN outperforms visual-only and linguistic-only attention branch networks, which emphasizes the contribution of both the linguistic and visual modalities. \item The Multi-ABN is able to generate perspective-free fetching instructions through the use of several visual attention branches related to different viewpoints of the same scene. \end{itemize} In future work, we plan to extend our work with a physical experimental study with non-expert users. Additionally, we plan to apply the Multi-ABN in a fully communicative DSR scheme that couples sentence generation and natural language comprehension for fetching tasks. \section{Experiments} \label{sec:experiment} \subsection{Dataset} We evaluated our method with the WRS-VS dataset introduced in Section \ref{sec:statement} and illustrated in Fig. \ref{fig:samples}. We collected $308 \times M$ images. In the following experiments, we set $M = 3$, which means that there were three different images of each given scene. We annotated $1015$ targets with $2015$ sentences in the training set and $34$ targets with $74$ different sentences in the validation set. The annotation was performed by an expert user. This dataset has an average of $3.4$ targets per image and $9.5$ words per instruction. The vocabulary set $V$ is composed of 233 unique words.
\subsection{Experimental Setup} The parameter settings of the Multi-ABN are summarized in Table \ref{tab:param}. We first describe the different attention branches that compose the Multi-ABN. Each visual attention branch (denoted Vis. AB) uses 2D convolutional layers of size $3 \times 3 \times \|V\|$ before the global average pooling layer. To generate each visual attention map, a convolutional layer of dimension $1 \times 1 \times 1$ is used. Because we consider $M = 3$ different images, a two-layer $MLP_{a}$ is used to encode the different weighted visual feature maps concatenated with the linguistic feature map. In parallel, the linguistic attention branch (denoted Ling. AB) uses 1D convolutional layers of size $3\times 3\times \|V\|$ followed by a single-layer embedding of dimension $\|V\|$. Similarly to the visual case, the linguistic attention map is obtained by processing the features with a convolutional layer of dimension $1\times 1\times 1$. \begin{table}[h] \small \centering \caption{\small Parameter settings of the Multi-ABN}\label{tab:param} \begin{tabular}{|c|l|} \hline Multi-ABN Opt. method & Adam (Learning rate= $5e^{-4}$, $\beta_1=0.99$, $\beta_2=0.9$) \\ \hline LSTM &$3$ layers, $1024$-cell\\ \hline MLP num. nodes & $MLP_{a}$: $1024$, $1024$ \\ \hline Vis. AB & Conv: $3\text{x}3\text{x}\|V\|$, Att. Conv : $1\text{x}1\text{x}1$\\ \hline Ling. AB & Conv: $3\text{x}3\text{x}\|V\|$, Att. Conv : $1\text{x}1\text{x}1$, MLP: $\|V\|$ \\ \hline Batch size&$32$ \\ \hline \end{tabular} \end{table} In the perception branch, the LSTM has $N =3$ layers, with each cell having dimension $d=1,024$. The network is trained with an Adam optimizer with a learning rate of $5e^{-4}$ and a batch size of 32 samples. \subsection{Quantitative results} As mentioned in Section \ref{sec:statement}, we use standard image captioning metrics to evaluate the performance of Multi-ABN.
For exhaustiveness, we report the results of the baseline BLEU metric (1-gram to 4-gram) as well as the more evolved scoring systems ROUGE, CIDEr and METEOR. These scores are reported in each column of Table \ref{tab:results}. The Multi-ABN was compared with a baseline method \citep{yu2017joint}, in which a speaker model is used to generate sentences. For a fair comparison, because the speaker model is only adapted to $M = 1$ images, we concatenated the $M = 3$ images into a single one to form the input of the speaker model. The same method was applied for comparison to another baseline, the visual semantic embedding architecture \cite{vinyals2015show}. As reported in Table \ref{tab:results}, Multi-ABN outperforms the speaker model under the ROUGE, CIDEr and BLEU scores, while the METEOR evaluation does not show any significant difference. In comparison to the VSE method, Multi-ABN performs significantly better for all metrics; in particular, the CIDEr score is improved by 0.46 points. \begin{table}[t] \normalsize \caption{\small Evaluation of Multi-ABN sentence generation.
The Multi-ABN is compared with the speaker model \cite{yu2017joint} using reinforcement learning as well as a baseline method using visual semantic embedding (VSE) \cite{vinyals2015show}, Multi-ABN with the visual attention branch (VAB) only, and Multi-ABN with the linguistic attention branch (LAB) only.} \label{tab:results} \centering \begin{tabular}{l|ccccccc} \hline {\bf Method }& \multicolumn{7}{c}{Evaluation metric} \\ \cline{2-8} &\multicolumn{1}{c|}{BLEU-1} &\multicolumn{1}{c|}{BLEU-2} &\multicolumn{1}{c|}{BLEU-3} &\multicolumn{1}{c|}{BLEU-4} &\multicolumn{1}{c|}{ROUGE} &\multicolumn{1}{c|}{METEOR} &\multicolumn{1}{c}{CIDEr} \\ \hline \hline Speaker \cite{yu2017joint}& 0.319 & 0.201 &0.132 & 0.102 & 0.309 & {\bf 0.195} &0.802 \\ \hline VSE & 0.306& 0.199& 0.123& 0.073& 0.285& 0.108 & 0.588 \\ \hline Ours (VAB only) & 0.323 & 0.216 & 0.143 & 0.102 & 0.333 & 0.165 & 0.824 \\ \hline Ours (LAB only) & 0.301 & 0.250 & 0.123 & 0.099 & 0.353 & 0.142 & 0.902 \\ \hline Ours (Multi-ABN) & \bf 0.390 & \bf 0.287& \bf 0.184& \bf 0.142& \bf 0.359& 0.193 & \bf 1.048 \\ \hline \end{tabular} \end{table} In addition, several ablation tests were conducted to isolate and emphasize the contribution of each attention branch mechanism. We compare our full method with Multi-ABN using the visual attention branch only (VAB) and Multi-ABN using the linguistic attention branch only (LAB). The results in Table \ref{tab:results} show that the Multi-ABN drastically improved all the metrics. The visual and linguistic attention branches each improved on the baseline visual semantic architecture. The LAB improved the sentence generation quality more in terms of ROUGE and CIDEr, while the BLEU and METEOR scores were better with the VAB only. This suggests that the LAB leads to better generalization and variety in the produced sentences, while the VAB is better able to mimic the dataset sentences.
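The scoring implementation is not included in the paper; as a hedged illustration of how the reported BLEU numbers are typically obtained (real evaluations normally rely on the standard COCO caption toolkit), a minimal sentence-level BLEU with clipped $n$-gram counts and a brevity penalty can be sketched in plain Python:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(references, hypothesis, max_n=4):
    """Sentence-level BLEU: geometric mean of clipped n-gram precisions
    times a brevity penalty (orders with no hypothesis n-grams are skipped)."""
    hyp = hypothesis.split()
    refs = [r.split() for r in references]
    log_prec, used = 0.0, 0
    for n in range(1, max_n + 1):
        hyp_counts = Counter(ngrams(hyp, n))
        if not hyp_counts:
            continue
        # clip each n-gram count by its maximum count over the references
        max_ref = Counter()
        for r in refs:
            for g, c in Counter(ngrams(r, n)).items():
                max_ref[g] = max(max_ref[g], c)
        clipped = sum(min(c, max_ref[g]) for g, c in hyp_counts.items())
        if clipped == 0:
            return 0.0
        log_prec += math.log(clipped / sum(hyp_counts.values()))
        used += 1
    # brevity penalty against the reference length closest to the hypothesis
    ref_len = min((abs(len(r) - len(hyp)), len(r)) for r in refs)[1]
    bp = 1.0 if len(hyp) >= ref_len else math.exp(1.0 - ref_len / len(hyp))
    return bp * math.exp(log_prec / used)
```

BLEU-1 through BLEU-4 in Table \ref{tab:results} correspond to `max_n` from 1 to 4, averaged over the validation pairs.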
\subsection{Qualitative results} \subsubsection{Sentence generation} In the following, because of limited space, we illustrate our results with only a single image ({\it i.e.}, ${\bf x}_v^{1}$); however, two auxiliary images were used to train Multi-ABN and generate the fetching instruction. Qualitative results of our method are illustrated in Fig. \ref{fig:wrs_samples}. The Multi-ABN can be applied in a framework (see subfigure (a)) where the generated sentences are instructed to a robot. While completing the fetching task, the robot collects additional data that are used to generate more accurate or new fetching instructions. Subfigures (b) and (c) present correct fetching instructions, while subfigure (d) shows an erroneous sentence generation. Indeed, in the latter, the referring expression does not disambiguate the target from the other large bottle. \begin{figure*}[h] \subfloat[]{\includegraphics[height=2.4cm, width=3.3cm]{figures/Multi_ABN.png}} \enskip \subfloat[I want a rabbit item on the upper part of shelf]{\includegraphics[scale = 0.19]{figures/sample1.png}} \enskip \subfloat[I want an empty large plastic bottle placed on the left hand sofa]{\includegraphics[scale = 0.19]{figures/sample2.png}} \enskip \subfloat[Can you get a large plastic bottle on the side table]{\includegraphics[scale = 0.19]{figures/sample3.png}} \caption{\small Generated sentences by our method. Solid green rectangles represent the target. Only the first image ${\bf x}_v^{1}$ of each sample is given. Subfigures (b) and (c) show correct predictions, while subfigure (d) shows an erroneous generation.} \label{fig:wrs_samples} \end{figure*} \subsubsection{Visualization of attention maps} Similarly to methods based on visual attention, Multi-ABN can also exhibit the visual alignment between text and image. These alignments are depicted in Fig. \ref{fig:aligment}, where the attention map for each generated word is depicted on the image ${\bf x}_v^{1}$.
\begin{figure*}[h] \subfloat[Bring me the small item on the right-sided armchair.]{\includegraphics[scale = 0.34]{figures/att_sample1.png}}\\ \subfloat[Pick up the yellow toy from the white shelf.]{\includegraphics[scale = 0.34]{figures/att_sample2.png}}\\ \subfloat[Take the tea on the lower row of the shelf.]{\includegraphics[scale = 0.34]{figures/att_sample3.png}}\\ \caption{\small Multi-ABN visual attention evolution for different steps of sentence generation. The visual attention is updated to the relevant parts of the image. } \label{fig:aligment} \end{figure*} These results confirm the relevance of the visual attention for generating a sequence of words. Multi-ABN is able to learn the correspondence between linguistic and visual features. Indeed, the visual attention was able to focus on the relevant parts of the image. \section{Introduction} The growth in the aged population has steadily increased the need for daily care and support. Robots that can physically assist people with disabilities \citep{brose2010role} offer an alternative to overcoming the shortage of home care workers. This context has boosted the need for standardized domestic service robots (DSRs) that can provide the necessary support functions, as shown by \citep{piyathilaka2015human, smarr2014domestic,iocchi2015robocup}. Nonetheless, one of the main limitations of DSRs is their inability to naturally interact through language. Specifically, most DSRs do not allow users to instruct them with various expressions relating to an object for fetching tasks. By tackling this limitation, a user-friendly way to interact with DSRs could be provided to non-expert users. Solving this task is particularly important for robots that perform manipulation tasks in home environments.
Indeed, to understand ambiguous carry-and-place \citep{magassouba2018multimodal} or fetching \citep{magassouba2019understanding} instructions in an end-to-end approach, a large number of samples of natural manipulation instructions is required. Unfortunately, it is costly to obtain such data. Hence, methods to automatically augment or generate instruction data could drastically reduce the cost of building a large-scale dataset for DSRs. In this light, our work addresses the task of automatic sentence generation for fetching instructions. This task consists of generating various natural fetching instructions given a target object in an image, {\it e.g.}, ``{\it Go get me the empty bottle from the armchair on the right side}.'' Natural sentences often contain referring expressions to designate a given target. However, generating referring expressions is challenging. Indeed, the many-to-many nature of the mapping between language and the real world makes it difficult to generate such sentences but, at the same time, offers flexibility in sentence structure. In this paper, we propose the multimodal attention branch network (Multi-ABN), which is an extension of the attention branch network proposed in \citep{Fukui_2019_CVPR}. The initial attention branch network was proposed as an image classifier, inspired by class activation mapping (CAM) \citep{zhou2016cvpr} structures, to infer attention maps. It is composed of an attention branch that predicts an attention map and a perception branch that classifies images. This architecture is extended in Multi-ABN, where several visual attention branches and a linguistic attention branch respectively infer the visual and linguistic attention maps. Indeed, instead of using a single image of a given scene, several snapshots of the same scene from different viewpoints are processed to generate perspective-free referring expressions. Our aim is to generate sentences that are not tied to the viewpoint of the human-annotated training image.
From these attention maps, a long short-term memory (LSTM) network generates fetching instructions in the perception branch. The main contributions of this paper are summarized as follows: \begin{itemize} \item[$\bullet$] We propose the Multi-ABN, which generates fetching instructions based on multiple images from different perspectives of a fixed scene (Section \ref{sec:method}). \item[$\bullet$] Multi-ABN extends the existing methods by adopting visual and linguistic attention mechanisms based on class activation mapping structures. \item[$\bullet$] Multi-ABN outputs a visual explanation for the generated fetching instructions. \end{itemize} \section{Proposed method} Multi-ABN is composed of a linguistic attention branch as well as one visual attention branch for each viewpoint. In the following, we detail the input features as well as the different branches (attention and perception branches) that are used to handle multimodality. The full network structure is given in Fig. \ref{fig:abn}. The aim of this network is to generate a sequence $Y = \{ {\bf y}_1, {\bf y}_2 \dots {\bf y}_T \}$, where $T$ is the length of a generated sentence and ${\bf y}_k \in \mathbf{R}^{d}$ for an embedding dimension $d$. \label{sec:method} \subsection{Input features} Let us consider $X = \{ {\bf x}_n | n = 1, \dots, N \}$, a dataset composed of $N$ samples. Hereinafter, for readability, we voluntarily omit the sample index $n$, so that ${\bf x}_n$ is written as ${\bf x}$ when further clarity is not required. Each sample ${\bf x}$ is characterized by the set of inputs: \begin{equation}\label{equ:inputs} {\bf x}=\{ {\bf x}_{v}^{1}, {\bf x}_{v}^{2}, \dots, {\bf x}_{v}^{M}, {\bf x}_{src}, {\bf x}_{targ}, {\bf x}_{rel}\}. \end{equation} The input ${\bf x}_{v}^{j}$ defines the set of image inputs taken from different viewpoints, where the superscript $j \in \{1, \dots, M \}$ defines the camera ID.
Additionally, ${\bf x}_{src}$, ${\bf x}_{targ}$ and ${\bf x}_{rel}$ respectively denote the source image, target image and relation features of the target in the environment. It should be noted that the source, target and relation features are extracted only once, from the main image ${\bf x}_{v}^{1}$. Inputs ${\bf x}_{targ}$ and ${\bf x}_{src}$ are the cropped images of the target and source, respectively. Input ${\bf x}_{rel}$ denotes the target-source, target-image and source-image spatial relations. Each of these relations is characterized by the following: \begin{equation}\label{equ:x} {\bf r}_{\sfrac{m}{n}}=\begin{bmatrix} \frac{x_m}{W_n}, & \frac{y_m}{H_n}, &\frac{w_m}{W_n},&\frac{h_m}{H_n},&\frac{w_m h_m}{W_n H_n} \end{bmatrix} \end{equation} where $(x_m, y_m, w_m, h_m)$ are the horizontal position, vertical position, width and height of component $m$, while $W_n$ and $H_n$ are the width and height of component $n$. As a result, the relation features are defined as ${\bf x}_{rel} = \{{\bf r}_{\sfrac{targ}{src}}, {\bf r}_{\sfrac{targ}{v}}, {\bf r}_{\sfrac{src}{v}}\}$, with a dimension $d_{rel}=15$. \subsection{Attention branches} \subsubsection{Visual attention branches} The attention branch \citep{Fukui_2019_CVPR} allows us to build both the linguistic and visual attention maps, based on the extracted feature maps. We consider two different types of feature maps, linguistic and visual, detailed below. Multi-ABN is composed of $M$ visual attention branches, one for each image ${\bf x}_{v}^{j}$. Each of these visual attention branches takes as input visual feature maps denoted ${\bf f}_k$. These feature maps are obtained from a convolutional feature extraction of ${\bf x}_{v}^{j}$. In this paper, we base our feature extractor on VGG16 \citep{simonyan2014very}. Note that other feature extractors such as ResNet \citep{he2016deep} could also be used. ${\bf f}_k$ then corresponds to the output of the $5^{th}$ convolutional block of the VGG network.
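A minimal sketch of these relation features, reading the last entry of Eq. \eqref{equ:x} as the area ratio $w_m h_m / (W_n H_n)$ and representing each component by an $(x, y, w, h)$ bounding box (illustrative helper names, not from the paper), is:

```python
def relation_feature(box_m, box_n):
    """r_{m/n}: position and size of component m relative to component n.
    A box is (x, y, w, h); the last entry is the relative area."""
    x, y, w, h = box_m
    _, _, W, H = box_n
    return [x / W, y / H, w / W, h / H, (w * h) / (W * H)]

def relation_features(target, source, image):
    """x_rel = {r_{targ/src}, r_{targ/v}, r_{src/v}}: dimension d_rel = 15."""
    return (relation_feature(target, source)
            + relation_feature(target, image)
            + relation_feature(source, image))
```

Concatenating the three pairwise relations yields the 15-dimensional ${\bf x}_{rel}$ quoted in the text.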
Each feature map ${\bf f}_k$ has a dimension of $14 \times 14 \times 512$. A visual attention branch outputs visual feature maps ${\bf v}_{k}$ weighted by a visual attention mask. To do so, inspired by the CAM structure \citep{zhou2016cvpr}, the visual feature maps are encoded through four convolutional layers. These convolutional layers are followed by a global average pooling (GAP) layer and a two-layer multilayer perceptron (MLP) denoted as $\text{MLP}_a$. Prior to the first layer of $MLP_a$, the visual features are concatenated with the linguistic feature map ${\bf h}_k$ (see next section). The likelihood $p_v({\bf y}_{k+1})$ is then predicted. In parallel, a visual attention map ${\bf a}_k$ is created by an additional convolution and sigmoid normalization of the third convolutional layer of the visual attention branch. This attention map allows the network to selectively focus on certain parts of an image related to the predicted sequence. The output visual feature maps are then obtained by a masking process given by: \begin{equation}\label{equ:vis_att} {\bf v}_{k}= {\bf a}_k \odot {\bf f}_{k}, \end{equation} where $\odot$ denotes the Hadamard product. \subsubsection{Linguistic attention branch} In addition, a linguistic attention branch takes as input the linguistic feature map ${\bf h}_{k}$. ${\bf h}_{k}$ is simply defined as the output (or hidden state) of an LSTM generating the instruction sequence $Y$. This LSTM network is detailed in the next section. The linguistic feature map ${\bf h}_{k}$ is encoded through 1-dimensional convolutional layers followed by a single fully connected layer so as to output the likelihood $p_l({\bf y}_{k+1})$. The linguistic attention map ${\bf a}_l$ is obtained from the second convolutional layer, which is processed by an additional convolutional layer and normalized by a sigmoid activation function. This attention map allows the network to selectively focus on an area of the LSTM state, which also encodes all the previous states.
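The masking of Eq. \eqref{equ:vis_att} is a simple elementwise gating of the feature map by the sigmoid-normalized attention map. A framework-agnostic NumPy sketch, assuming an $(H, W, C)$ feature layout (a modeling choice on our part, not stated in the paper), is:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def attention_mask(feature_map, att_logits):
    """v_k = a_k (Hadamard) f_k: the (H, W) attention map a_k = sigmoid(logits)
    gates the (H, W, C) feature map, broadcast over the channel axis."""
    a = sigmoid(att_logits)            # attention map, entries in (0, 1)
    return feature_map * a[..., None]  # Hadamard product per channel
```

The linguistic masking of the next subsection has the same form, with the 1D feature map ${\bf h}_k$ in place of ${\bf f}_k$.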
Similarly to the visual attention branches, the output ${\bf l}_{k}$ of the linguistic attention branch is given by: \begin{equation}\label{equ:ling_att} {\bf l}_{k}= {\bf a}_l \odot {\bf h}_{k}. \end{equation} \subsection{Perception branch} The perception branch is a classifier that predicts the likelihood $p({\bf y}_{k+1})$ in a sequence of length $T$. The perception branch takes as input the concatenation of all the weighted visual feature maps ${\bf v}_{k}^{j}$, which is referred to as ${\bf c}_{k}$, the weighted linguistic feature map ${\bf l}_k$, as well as the target, source and relation features. The architecture of the perception branch is based on a multilayer LSTM network. The perception branch also outputs the linguistic feature map ${\bf h}_{k}$, which is simply the hidden state of each LSTM cell. Note that because each embedded word ${\bf y}_k$ is predicted sequentially, the last hidden state also corresponds to the output of the LSTM. More precisely, the LSTM is initialized by the latent space feature ${\bf x}_{f}$ obtained by embedding and concatenating the target ${\bf x}_{targ}$, the source ${\bf x}_{src}$ and the relation feature ${\bf x}_{rel}$. In a compact formulation, the first hidden state can be written as \begin{equation}\label{equ:LSTM_init} {\bf h}_1= \text{LSTM}({\bf x}_{f}). \end{equation} It should be mentioned that the forget, memory, output and hidden state variables are voluntarily omitted for concision. In the following steps, considering an iteration $k$, with $k>0$, each hidden state is defined as follows \begin{equation}\label{equ:LSTM_cells} {\bf h}_k=\text{LSTM}(E({\bf c}_{k} \oplus {\bf y}_{k-1})), \end{equation} where $\oplus$ indicates a concatenation operation and $E(\cdot)$ is an embedding function. In this configuration, ${\bf c}_{k}$ can be considered as the visual context of the current LSTM state.
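Structurally, Eqs. \eqref{equ:LSTM_init} and \eqref{equ:LSTM_cells} describe a context-conditioned greedy decoding loop. A framework-agnostic sketch (the step, embedding and classification functions are injected placeholders standing in for the paper's modules, not its actual code) is:

```python
def generate(lstm_step, embed, classify, x_f, context, bos, eos, max_len=20):
    """Greedy decoding loop: seed the recurrent state with the fused feature
    x_f (h_1 = LSTM(x_f)), then feed E(c_k concat y_{k-1}) at every step and
    take the most likely next token until <eos> or max_len."""
    state = lstm_step(x_f, None)          # h_1 = LSTM(x_f)
    y, out = bos, []
    for _ in range(max_len):
        state = lstm_step(embed(context, y), state)  # h_k = LSTM(E(c_k + y_{k-1}))
        y = classify(state)               # argmax of p_o(y_{k+1})
        if y == eos:
            break
        out.append(y)
    return out
```

In the full model, `classify` corresponds to the embedding layer applied to ${\bf l}_k$ described next, and `context` to the concatenated weighted visual feature maps ${\bf c}_k$.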
Eventually, to predict the likelihood $p_o({\bf y}_{k+1})$ in the sequence, the weighted linguistic feature map ${\bf l}_{k}$ is processed in an embedding layer. \subsection{Loss functions} The global training loss function of the network is the sum of the attention branch loss $L_{att}$ and the perception branch loss $L_{per}$, so that \begin{equation}\label{equ:loss} L = L_{att} + L_{per}. \end{equation} The perception loss $L_{per}$ is defined as a cross-entropy function in which the class of ${\bf y}_{k+1}$ is predicted through \begin{align} \label{equ:J} L_{per} &= -\sum_n \sum_{m} y^{*}_{nm} \log p(y_{nm}), \end{align} where $y^{*}_{nm}$ denotes the label given to the $m$-th dimension of the $n$-th sample, and $y_{nm}$ denotes its prediction. The attention loss $L_{att}$ depends on the visual attention loss and the linguistic attention loss, which are also both cross-entropy loss functions of the form of Eq. \eqref{equ:J} and enable the corresponding attention maps to be built. \section{Problem statement} \label{sec:statement} \subsection{Task Description} This study targets the generation of sentences for fetching instructions including referring expressions. Referring expressions usually describe an object using its properties with respect to landmark objects. A typical generated fetching instruction is ``go get me the pink doll on the upper part of the shelf''. In this instruction, the landmark is ``the shelf'', while ``upper part'' is a spatial referring expression. To generate such a sentence, our system assumes the following inputs and outputs: \begin{itemize} \item[$\bullet$]{\bf Input}: a fixed scene observed from several perspectives. \item[$\bullet$]{\bf Output}: the most likely generated sentence for a given target and source. \end{itemize} The inputs of our system are more thoroughly described in Section \ref{sec:method}. The terms {\it target} and {\it source} are defined as follows.
\begin{itemize} \item[$\bullet$]{\bf Target}: the daily life object ({\it e.g.}, an apple or a bottle) that the user intends for the robot to fetch. \item[$\bullet$]{\bf Source}: the origin of the target, generally a piece of furniture such as a shelf or a drawer. \end{itemize} Unlike most methods proposed in the literature, our sentence generation method is based on several images of a fixed scene. Indeed, using a single image to build a fetching instruction introduces a drawback when considering DSRs for manipulation tasks. This limitation is mainly related to DSRs interacting in a three-dimensional environment: there may be a mismatch between the user's perspective (a single 2D image) and the robot's perspective \cite{cohen2019grounding}. A DSR's view of a given scene is dynamic, {\it e.g.}, a target can be behind, on the left side of, or on the right side of the same landmark depending on the current pose. Hence, to avoid generating referring expressions that are tied to a given point of view of the scene, {\it e.g.}, ``the apple on the left side of the table'', we use images from different perspectives of the same fixed scene. In this configuration, referring expressions such as ``left of'' or ``right side of'' are correct only if they are valid for all observations. Several challenges should be tackled to generate valid fetching instructions. First, several objects may be of the same type as the target, so referring expressions should be used to disambiguate the target from the other objects. Second, several existing objects and sources may be used as landmarks for generating the referring expressions. However, the generated sentence should use referring expressions that do not imply any ambiguity of the target, independently of the point of view. The standard evaluation of our approach is based on the automatic metrics of image captioning, namely BLEU, ROUGE, CIDEr and METEOR, as reported in the experimental section.
\begin{figure}[tp] \centering \subfloat{\includegraphics[scale = 0.14]{sample11.jpg}} \caption{\small Source (blue) and target (green) samples of the WRS-VS dataset considering several perspectives. The perspective influences the validity of the instruction and the referring expressions that can be used. {\bf Valid sentence}: ``Bring me the apple that is near the glass on the kitchen table''. {\bf Invalid sentence}: ``Bring me the apple on the left side of the blue glass''.} \label{fig:samples} \end{figure} \subsection{Task Environments} The sentence generation system should be general and flexible enough to be used for various scenarios. We therefore consider a simulated environment in which task repeatability and various situations can be addressed at low cost. In this study, we use the simulated environments that were provided in the World Robot Summit 2018 Virtual Space (WRS-VS) challenge. The simulator is based on SIGVerse \cite{inamura2013development}, a three-dimensional environment based on the Unity engine that is able to simulate interactions between agents and the environment. The WRS-VS consists of typical indoor environments, as illustrated in Fig. \ref{fig:samples}, from which we built a dataset. In this environment, we use a DSR that records several snapshots of a given observable scene. From this context, our method should generate fetching instructions such as ``{\it Give me the rabbit doll from the upper part of the shelf}''. \section{Related work} Building communicative robots that can understand ambiguous manipulation instructions generally requires the fusion of multiple modalities, typically visual and linguistic. Several studies focus on understanding manipulation instructions in an end-to-end approach. For instance, \citep{hatori2018interactively} proposed a target object prediction method from natural language in a pick-and-place task environment, using a visual semantic embedding model.
Similarly, \citep{Shridhar-RSS-18} tackled the same kind of problem using a two-stage model to predict the likely target from the language expression and the pairwise relationships between different target candidates. More recently, in a context related to DSRs, \citep{magassouba2019understanding} proposed to use both the target and source candidates to predict the likely target in a supervised manner. In \citep{magassouba2018multimodal}, the placing task was addressed through a GAN classifier network predicting the most likely destination from the initial instruction. These systems mainly focus on multimodal language grounding through referring expression comprehension. Complementary to these works, some recent studies have also focused on generating referring expressions to identify a target. In \citep{kunze2017spatial}, the authors proposed several algorithms to generate referring expressions in a rule-based approach. In contrast, in \citep{dougan2019learning}, the authors used deep learning for estimating spatial relations to describe an object in a sentence. However, the set of spatial relationships is hand-crafted and known beforehand. We, instead, target an end-to-end approach that does not require hand-crafted or rule-based methods. Multi-ABN is inspired by the attention branch network (ABN) \citep{Fukui_2019_CVPR}. The ABN is based on the CAM structure \citep{zhou2016cvpr, Selvaraju_2017_ICCV} to build visual attention maps for image classification. In essence, a CAM is built to identify the salient regions used for a given class in an image classifier. Attention mechanisms have also been used in different ways in image processing and natural language processing. In the context of image captioning, the authors of \citep{xu2015show} proposed to generate image captions with hard and soft visual attention. This approach learns the alignment between the salient area of an image and the generated sequence of words.
Multiple visual attention networks have also been proposed in \citep{Yang_2016_CVPR} to solve visual question answering. However, most of these approaches use only a single modality for attention: visual attention. In contrast, we claim in this work that both linguistic and visual branch attention improve the sentence generation process. To do so, we use annotated data obtained from the simulation environment SIGVerse \cite{inamura2013development}. Nowadays, many studies use simulated environments to collect synthetic data. Synthetic data tend to be increasingly photo-realistic and have the advantage of task repeatability as well as environment variation for a relatively low cost. Using such environments, various tasks such as grasping \cite{bousmalis2018using} or motion control \cite{tan2018sim} have been addressed.
\section{Introduction} Evanescent light offers a very effective means of atom manipulation because of the strong induced dipole interaction between atoms and the evanescent field when the field is far from resonance. Blue-detuned evanescent light, that is, light tuned above the atomic resonance, can be used to guide atoms in a hollow fiber \cite{Ito1,Ito2,Renn}. With blue detuning, atoms are repelled from the high-intensity field region near the fiber wall. The intensity in the evanescent field is significant over a distance of approximately $\lambda$ in the hollow region. If the inner diameter is several microns, the motion of atoms will be influenced by the evanescent field over a large transverse area inside the fiber. This system is well suited to studying nonlinear quantum and classical dynamics with two degrees of freedom in the transverse plane of the fiber. The maximum confined transverse atomic velocities are only several centimeters per second; therefore the atomic dynamics will be governed by the Schr\"{o}dinger equation with an electric dipole coupling to the evanescent field. There are few experimental studies of quantum nonlinear dynamics for systems with two degrees of freedom, and most of our understanding of quantum nonlinear dynamics and quantum chaos is based on simpler one-degree-of-freedom systems, such as an atom in a modulated standing wave \cite{Moore94}, or the microwave ionisation of hydrogen \cite{Jensen91}. A notable example of the experimental study of quantum chaos in two degrees of freedom is the recent experiments on chaotic mesoscopic billiards \cite{Marcus92,Taylor97}. It is clear that quite new phenomena can appear in two-degree-of-freedom systems \cite{Kovlosky94} that are absent in one-degree-of-freedom systems. Given the success of atom optics in providing tests of quantum chaos in one dimension, it is worth considering what might be achieved using similar techniques for two-dimensional systems.
In this paper we will consider the two-dimensional quantum and classical transverse motion of cold atoms in a hollow fiber with a periodically modulated evanescent field. \section{the optical potential and Hamiltonian} A two-level atom interacting with a far-off-resonant inhomogeneous laser field has an effective potential \cite{cohen-Tannoudji} of the form \begin{equation} U({\bf r}\/)=\frac{\hbar\Delta}{2}\ln(1+p), \end{equation} where $\Delta$ is the detuning and $p=\frac{\Omega^2/2}{\Delta^2+\Gamma^2/4}$ is the saturation parameter, with $\Omega$ the Rabi frequency. For far-off-resonant light, $p \ll 1$, and thus \begin{equation} U({\bf r}\/)=\frac{\hbar\Omega({\bf r}\/)^2}{4\Delta}. \end{equation} In experiments \cite{Ito1,Ito2,Renn} this potential has been used to guide cold atoms by reflecting them from evanescent light fields on the surface of the glass. For light striking the glass--vacuum interface at angle $\theta$, the evanescent intensity profile is \begin{equation} I(r)=I(0)\alpha^2 \exp[2\kappa(r-r_{1})], \end{equation} where $I(0)$ is the input laser intensity at the fiber entrance, $r_1$ is the inner radius of the fiber, and the factors $\alpha$ and $\kappa$ are given in terms of the index of refraction $n$, the inner reflection angle $\theta$ and the laser wavelength $\lambda$ by $\alpha=2\sqrt{n^2/(n^2-1)}\cos{\theta}$ and $\kappa=(2\pi/{\lambda})\sqrt{n^2\sin^2{\theta}-1}$. The Hamiltonian in the transverse $(x,y)$ plane for the system is \begin{equation} H_{0}=\frac{{p^2_x+p^2_y}}{2M}+K\exp[2\kappa(r-r_{1})], \end{equation} where $K=\frac{\hbar\Gamma^2}{8\Delta}\frac{I(0)}{I_{s}}\alpha^2$ and $I_s$ is the saturation intensity. When the laser intensity is periodically modulated, $I(0)$ becomes time dependent, $I(0)[1+\epsilon\cos(\omega t)]$, and the dynamics of the atom can be chaotic for certain initial conditions.
There are two factors to consider in treating the two-dimensional classical chaotic dynamics of evanescent-wave guided atoms in a hollow fiber. Firstly, the boundary of the hollow fiber will limit the divergence of the atomic motion in the radial direction, making it easier to simulate the two-dimensional dynamics of the atomic system (and also easier to probe the two-dimensional distribution in experiment). Secondly, due to the small inner radius of a hollow fiber, atoms with a small transverse velocity will be selected at the entry to the hollow fiber, and it is not necessary to pre-cool the transverse motion. The effective optical potential depends on the inner radius $r_{1}$ and the decay coefficient $\kappa$. If $r_{1}$ is much larger than the wavelength $\lambda$ or $\theta$ is too large, the potential will be very steep at the boundary and decay rapidly away from the inner surface of the fiber. In order to observe two-dimensional chaotic dynamics in the $(x,y)$ plane, $r_{1}$ should be of the same magnitude as $1/\kappa$, so that the potential decays slowly away from the surface of the hollow fiber. It is possible to make the reflection angle approach the critical angle $\sin^{-1}(1/n)$ by using several micro hollow fibers to guide the atoms and polishing the incoming end of the hollow fiber at a specific acute angle \cite{Renn}. We take $r_{1}=2\,\mu\mathrm{m}$ and $\theta=45^\circ$ and consider helium as an example. The parameters for helium are: line width $\Gamma/{2\pi}=1.6\,\mathrm{MHz}$, mass $M=4m_p$, wavelength $\lambda=1.083\,\mu\mathrm{m}$, saturation intensity $I_{s}=0.16\,\mathrm{mW/cm^2}$ and recoil velocity $v_{r}=9\,\mathrm{cm/s}$. We now define the dimensionless parameters $(\tilde{x},\tilde{y})=(2\kappa x,2\kappa y)$, $\tilde{r}=2\kappa r$, $(\tilde{p_x}, \tilde{p_y})=(\frac{2\kappa p_x}{M\omega_0},\frac{2\kappa p_y}{M\omega_0})$, $\tilde{H}=H \frac{(2\kappa)^2}{M\omega_0^2}$, $\tilde{\omega}=\omega/\omega_0$ and $\tilde{t}=\omega_0 t$, where $\omega_0$ is a reference frequency.
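As a consistency check on these orders of magnitude, the factors $\alpha$ and $\kappa$ can be evaluated numerically. The refractive index $n$ of the fiber wall is not specified in the text, so the value below (fused silica, $n \approx 1.45$) is an assumption made purely for illustration; a minimal sketch:

```python
import math

# Evanescent-field enhancement factor alpha and decay constant kappa
# (formulas from the text). The refractive index n is NOT given in the
# text; n = 1.45 (fused silica) is an assumed, illustrative value.
n = 1.45
theta = math.radians(45.0)   # inner reflection angle
lam = 1.083e-6               # helium transition wavelength, m
r1 = 2e-6                    # inner radius of the fiber, m

alpha = 2.0 * math.sqrt(n**2 / (n**2 - 1.0)) * math.cos(theta)
kappa = (2.0 * math.pi / lam) * math.sqrt(n**2 * math.sin(theta)**2 - 1.0)

decay_length = 1.0 / kappa   # evanescent decay length, m (sub-micron)
r1_dimensionless = 2.0 * kappa * r1   # the rescaled inner radius 2*kappa*r1
```

With this assumed $n$, the decay length $1/\kappa$ comes out below a micron, i.e. of the same magnitude as $r_1 = 2\,\mu\mathrm{m}$, consistent with the requirement stated above.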
Omitting the tildes and defining $\xi=\frac{K(2\kappa)^2}{M\omega_0^2}$, the Hamiltonian can be rewritten as \begin{equation} H(t)=\frac{{ p^2_x+p^2_y}}{2}+\xi e^{\sqrt{x^2+y^2}-r_{1}}(1+\epsilon\cos{\omega t}), \end{equation} with the canonical commutation relations \begin{equation} [{q_{j}}, {p_{k}}]=i\shortstack{\hspace*{0.02em}\_\\*[-1.0ex]\it k}\delta_{jk}, \end{equation} where $q_j, p_k$ represent $x,y$ and $\shortstack{\hspace*{0.02em}\_\\*[-1.0ex]\it k}=\frac{\hbar(2\kappa)^2} {M \omega_0}$ plays the role of a dimensionless Planck constant. If the dimensionless Planck constant $\shortstack{\hspace*{0.02em}\_\\*[-1.0ex]\it k}$ approaches $1$, the atomic motion in small hollow fibers must be described by quantum dynamics. \section{classical chaotic dynamics and 2D distribution} Although the flux of guided atoms entering the fiber is very small, we still have the possibility to observe classical chaotic dynamics by integrating the signal over many periods of the modulation (fig. 1). The integration time can be taken long enough to ensure that it includes a large number of atoms. We will discuss the spatial distribution of the atoms when the classical motion is chaotic. Using Hamilton's equations we find that the motion in the transverse plane without modulation ($\epsilon=0$) is described by the equations \begin{equation} \dot{p_{x}}=-\xi \frac{x}{\sqrt{x^2+y^2}} e^{\sqrt{x^2+y^2}-r_{1}}, \end{equation} \begin{equation} \dot{p_{y}}=-\xi \frac{y}{\sqrt{x^2+y^2}} e^{\sqrt{x^2+y^2}-r_{1}}, \end{equation} \begin{equation} \dot{x}=p_{x}, \end{equation} \begin{equation} \dot{y}=p_{y}. \end{equation} Clearly there is only one stable fixed point, on the axis ($r=0$). The choice of modulation frequency $\omega$ depends on the frequency of the unperturbed periodic motion. For simplicity we assume $y=0$ and $p_{y}=0$, so the expression for $H$ simplifies to a one-dimensional Hamiltonian system.
The period of motion for the unperturbed Hamiltonian $H_{0}$ is \cite{lich} \begin{equation} T_{0}=\oint{\frac{dx}{\partial H_{0}/\partial p_{x}}} = 2 \int_{-x_M}^{x_M} \frac{dx}{\sqrt{2(H_{0}-\xi e^{\sqrt{x^2}-r_{1}})}}, \end{equation} where $x_{M}$ is determined by $H_{0}= \xi e^{\sqrt{x^2_{M}}-r_{1}}$. Therefore \begin{equation} \omega_{0}=\frac{\pi}{\int_{-x_M}^{x_M} \{2[H_{0}-\xi e^{\sqrt{x^2}-r_{1}}]\} ^{-1/2}dx}. \end{equation} The graphs of $\omega_{0}$ versus $H_{0}$ and of $x_{M}$ versus $H_{0}$ are shown in fig. 2. We can select the modulation frequency $\omega$ to control the position of the fixed points. Here we take $\omega_0=265\,\mathrm{kHz}$, dimensionless modulation frequency $\omega=2$ and set $\xi=50$. The new fixed points will appear at $x \approx 0.41$ and $x \approx 2.87$. We use a symplectic integration routine \cite{forest,dyrting} to solve the equations of motion so as to preserve the Poisson bracket relation $\{x(t),p_x(t)\}=1$, and thus maintain the Hamiltonian character of the motion. We plot the stroboscopic portrait of the system at multiples of the period of modulation, $t=(2\pi/\omega)s$, where $s$ is an integer referred to as the strobe number. From fig. 3 we can see that regions of globally chaotic motion arise when $\epsilon$ is large, together with some regular regions. A broad initial phase-space distribution of atoms will enable some atoms to become trapped in these stable regions. Laser cooling and trapping techniques have the ability to cool atoms to very low velocities and trap them with well-localized momentum; however, the position distribution is not so well localized. Therefore the appropriate description of the initial conditions is in terms of a probability density on the phase space $(x,y,p_x,p_y)$. We define a classical state to be a probability measure on phase space of the form $Q(x,y,p_x,p_y)dxdydp_xdp_y$.
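The stroboscopic portraits can be generated with any second-order symplectic scheme. The sketch below uses a velocity-Verlet (kick-drift-kick) integrator for the dimensionless equations of motion; this is a standard symplectic method, not necessarily the particular routine of the cited references, and the dimensionless inner radius $r_1 \approx 5.25$ is an assumed value:

```python
import math

# Velocity-Verlet (kick-drift-kick) integration of the dimensionless
# equations of motion, strobed at multiples of the modulation period
# T = 2*pi/omega. A sketch only: the paper uses a symplectic routine
# from its references; parameters xi = 50, omega = 2, epsilon = 0.7
# follow the text, while r1 ~ 5.25 (dimensionless) is assumed.
xi, omega, eps, r1 = 50.0, 2.0, 0.7, 5.25

def force(x, y, t):
    """Gradient force of the modulated evanescent potential."""
    r = math.hypot(x, y)
    if r == 0.0:
        return 0.0, 0.0
    f = -xi * math.exp(r - r1) * (1.0 + eps * math.cos(omega * t)) / r
    return f * x, f * y

def strobe_map(x, y, px, py, n_strobe=50, steps_per_period=2000):
    """Advance one phase-space point through n_strobe modulation
    periods, returning the stroboscopic samples at t = (2*pi/omega)*s."""
    T = 2.0 * math.pi / omega
    dt = T / steps_per_period
    t = 0.0
    samples = []
    for _ in range(n_strobe):
        for _ in range(steps_per_period):
            fx, fy = force(x, y, t)            # half kick
            px += 0.5 * dt * fx
            py += 0.5 * dt * fy
            x += dt * px                        # drift
            y += dt * py
            fx, fy = force(x, y, t + dt)        # half kick
            px += 0.5 * dt * fx
            py += 0.5 * dt * fy
            t += dt
        samples.append((x, y, px, py))
    return samples
```

Scattering many initial points through `strobe_map` and plotting the returned samples reproduces the kind of stroboscopic portrait described above; for $\epsilon = 0$ the scheme conserves the energy to second order in the step size.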
The probability density satisfies the Liouville equation \begin{equation} \frac{\partial{Q}}{\partial{t}}={\{H, Q\}}_{q_i,p_i}, \end{equation} where ${\{\,,\}}_{q_i,p_i}$ is the Poisson bracket. This equation can be solved by the method of characteristics. To simulate an experiment, we assume the atoms are initially distributed uniformly at random on $x^2+y^2 < {r_{1}}^2$. The momentum distributions for $p_x$ and $p_y$ are assumed to be Gaussian. Therefore \begin{equation} Q_{0}(x,y,p_x,p_y)=Q_{0}(x)Q_{0}(y)Q_{0}(p_x)Q_{0}(p_y), \end{equation} where \begin{equation} Q_{0}(p_i)=\frac{1}{\sqrt{2\pi\sigma_{p_{i}}}}\exp[-{(p_i-p_i(0))^2}/{2\sigma_{p_{i}}}]. \end{equation} The variances of $p_x$ and $p_y$ are related to the temperatures $T_{i}$ by \begin{equation} \sigma_{p_{i}}=k_{B} T_{i}/[M\omega_0^2/(2\kappa)^2]. \end{equation} We simulated a system of $10^4$ atoms and took $\sigma_{p_{i}}=0.1$, which corresponds to a radial rms velocity of $2\,\mathrm{cm/s}$ for helium if $\theta=45^\circ$ and $\omega=2$. The time evolution of the $Q$ function is \begin{equation} Q({\bf r}, {\bf p}, t)=Q_{0}[ \bar{\bf r}({\bf r}, {\bf p}, -t), \bar{\bf p}({\bf r}, {\bf p}, -t)]. \end{equation} In the case of no modulation, atoms will accumulate around the fixed point $x=y=0$. When the modulation is added, the atoms will diffuse in regions of chaotic motion, but some will accumulate around several rings corresponding to fixed points at non-zero radius (fig. 4). Because the inner bore of the hollow fiber is very small, the flux of guided atoms entering the fiber will be low. However, we can use a large blue-detuned shield laser, with the period of its square wave equal to the modulation period $T={2\pi}/{\omega}$ and the open time interval $dT \ll T$. When atoms exit the hollow fiber, they travel freely without the optical potential. When the shield laser is on, it blocks atoms from entering the detection region.
In the short time interval $dT$, atoms enter the detection region and the detector records the atomic distribution of momenta and positions. This procedure can be repeated until enough atoms have been accumulated over many snapshots. The simulation shows that integrating the atomic signal at different integer strobe numbers produces similar results (fig. 5). This is because the rings (fig. 4) represent new fixed points in phase space and the atomic radial momenta are nearly zero (fig. 3), so when atoms exit and travel freely the shape of the rings does not change much in the detection region. The atomic velocity orientations away from the fixed points, however, are randomly distributed. In order to measure the rings and the small spatial distribution of atoms, high-precision position measurements are required. The Raman-induced resonance imaging method \cite{Thomas,Stokes,Gardner} can be used to measure the positions with nanometer spatial resolution, limited by the quantum uncertainty principle. \section{Two dimensional quantum nonlinear dynamics} If the transverse temperature of the atoms in the hollow fiber is very low, quantum nonlinear dynamics will result. Here we take the same parameters and the same definitions of the dimensionless parameters as in the classical dynamics, with the dimensionless Planck constant $\frac{\hbar(2\kappa)^2} {M \omega_0} \simeq 1$. The dimensionless 2D Schr\"{o}dinger equation for atoms in the hollow fiber is \begin{equation} i\shortstack{\hspace*{0.02em}\_\\*[-1.0ex]\it k}\frac{\partial\psi({\bf r}\/,t)}{\partial t}=\hat{H}(t)\psi({\bf r}\/,t), \end{equation} where \begin{equation} \hat{H}(t)=-\frac{\shortstack{\hspace*{0.02em}\_\\*[-1.0ex]\it k}^2}{2}(\frac{\partial^2}{\partial x^2}+\frac{\partial^2}{\partial y^2})+V(x, y,t), \end{equation} and \begin{equation} V(x, y, t)=\xi e^{\sqrt{x^2+y^2}-r_{1}}(1+\epsilon\cos{\omega t}), \end{equation} with $x^2+y^2 \leq {r_{1}}^2$.
For simplicity we assume the boundary condition for eq.~(18) is \begin{equation} \psi(x, y, t)|_{\sqrt{x^2+y^2} = r_{1}}=0. \end{equation} The split-operator method \cite{Kosloff} and the 2D FFT (fast Fourier transform) \cite{Press} are used to obtain the numerical solution of Schr\"{o}dinger eq.~(18). In this scheme the kinetic and potential operators are used to propagate the wave function separately: \begin{equation} \exp[-i\hat{\bf H}\/{\delta}t/\shortstack{\hspace*{0.02em}\_\\*[-1.0ex]\it k}]\sim \exp[-i(\hat{\bf P}\/)^2{\delta}t/{4\shortstack{\hspace*{0.02em}\_\\*[-1.0ex]\it k}}] \exp[-i\hat{\bf V}\/{\delta}t/{\shortstack{\hspace*{0.02em}\_\\*[-1.0ex]\it k}}] \exp[-i(\hat{\bf P}\/)^2{\delta}t/{4\shortstack{\hspace*{0.02em}\_\\*[-1.0ex]\it k}}], \end{equation} and the computing errors are of $O(\delta t^3)$. We assume that at $t=0$ the wave function is a minimum uncertainty wave function. The initial variances of $x$ and $y$ are equal, $\sigma_{x}=\sigma_{y}=\sigma$. The expression for this wave function is \begin{equation} \psi(x, y) =\frac{1}{\sqrt{2\pi\sigma}}e^{-\frac{(x-x_{0})^2}{4\sigma}} e^{-\frac{(y-y_{0})^2}{4\sigma}}e^{iP_{0x}x/\shortstack{\hspace*{0.02em}\_\\*[-1.0ex]\it k}}e^{iP_{0y}y/\shortstack{\hspace*{0.02em}\_\\*[-1.0ex]\it k}}. \end{equation} In order to observe genuine quantum nonlinear behavior we have to wait for a time that is longer than the classical period. The time scale for quantum nonlinear dynamics, $T_{rev}$, is given by the expression \cite{chen} \begin{equation} T_{rev}=T_{0} (\frac{\shortstack{\hspace*{0.02em}\_\\*[-1.0ex]\it k}}{2}|\partial \omega_{0}/\partial{\bar{E}}|)^{-1}. \end{equation} In fig. 6 we have plotted the momentum mean $<p_{x}>$ and its variance as a function of time for $\epsilon=0$. Quantum collapses and revivals appear in the quantum nonlinear dynamics, as expected.
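One time step of this kinetic-potential-kinetic splitting can be sketched as follows, with the kinetic factors applied in Fourier space via the 2D FFT. The grid size and the choice of dimensionless Planck constant (written `kbar` in the code) are illustrative; note also that the FFT imposes periodic rather than hard-wall boundary conditions, so in practice the wall at $r = r_1$ would be approximated by a mask or a steep potential, which is omitted here:

```python
import numpy as np

# One Strang-split step for the dimensionless 2D Schrodinger equation:
#   psi -> exp(-i P^2 dt/(4 kbar)) exp(-i V dt/kbar) exp(-i P^2 dt/(4 kbar)) psi,
# with P = kbar * k on the Fourier grid, so each kinetic half step is
# exp(-i kbar k^2 dt / 4). Grid size, kbar = 1 and the box length are
# illustrative choices; the hard wall at r = r1 is not enforced here.
N, L = 256, 12.0
kbar = 1.0
xi, omega, eps, r1 = 50.0, 2.0, 0.7, 5.25

x = np.linspace(-L / 2, L / 2, N, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
k = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)   # angular wavenumbers
KX, KY = np.meshgrid(k, k, indexing="ij")
K2 = KX**2 + KY**2

def V(t):
    """Modulated evanescent potential on the grid at time t."""
    return xi * np.exp(np.sqrt(X**2 + Y**2) - r1) * (1.0 + eps * np.cos(omega * t))

def step(psi, t, dt):
    half_kin = np.exp(-1j * kbar * K2 * dt / 4.0)
    psi = np.fft.ifft2(half_kin * np.fft.fft2(psi))       # half kinetic step
    psi = np.exp(-1j * V(t + dt / 2.0) * dt / kbar) * psi  # full potential step
    psi = np.fft.ifft2(half_kin * np.fft.fft2(psi))       # half kinetic step
    return psi
```

Since every factor is a unit-modulus multiplication in either position or momentum space, the scheme conserves the norm of $\psi$ to machine precision, which makes a convenient sanity check during long propagations.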
In order to understand the influence of the modulation of the potential, we have plotted the variances $<x^2>-<x>^2$ and $<{p_{x}}^2>-<{p_{x}}>^2$ versus strobe number. In fig. 7 we find that at integer strobe numbers the modulation increases the variance of the momentum but suppresses the variance of the position. The fluctuations of the variances of position and momentum are consistent with the Heisenberg uncertainty relation in this quantum system. If we rewrite $\hat{H}$ in polar coordinates $(r, \theta)$, we get \begin{equation} \hat{H}=-\frac{\shortstack{\hspace*{0.02em}\_\\*[-1.0ex]\it k}^2}{2}(\frac{\partial^2}{\partial r^2}+\frac{1}{r}\frac{\partial}{\partial r}+\frac{1}{r^2}\frac{\partial^2}{\partial\theta^2})+V(r, t). \end{equation} The general solution of eq.~(19) will be \cite{Collatz} \begin{equation} \psi(r, \theta, t)=\sum_{m}{{a_{m} (r, t)} e^{im\theta}}, \end{equation} where $m$ is the angular momentum quantum number. This shows that the evolution of the wave function is related to the atomic angular momentum. In general, for $t > 0$ the probability ${|\psi(x, y)|}^2$ will not be symmetric unless the initial wave function has no angular momentum. In fig. 8, for the initial conditions $x_{0}=y_{0}=0$ and $p_{0x}=p_{0y}=0$, we have plotted the probability in the $y=0$ plane as a function of $x$ at strobe number $50$; it stays symmetric both for $\epsilon =0$ and $\epsilon=0.7$. In fig. 9, for the initial conditions $x_{0}=y_{0}=0$ and $p_{0x}=p_{0y}=1$, the probability distribution for $t > 0$ is not symmetric, because the initial atomic wave function contains nonzero angular-momentum components, and the interaction between the atomic momenta and the optical potential destroys the spatial symmetry of the probability. Therefore the atomic angular momentum plays an important role in the evolution of the wave function.
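The angular-momentum components $a_m(r,t)$ in this expansion can be extracted numerically from a wave function sampled on a polar grid by a discrete Fourier transform in $\theta$; a minimal sketch with illustrative grid sizes:

```python
import numpy as np

# Extract the angular-momentum components a_m(r) of a wave function
# psi(r, theta) = sum_m a_m(r) exp(i m theta) by an FFT over theta on
# a polar grid. Grid sizes and r1 = 5.25 are illustrative choices.
Nr, Nth = 64, 128
r1 = 5.25
r = np.linspace(0.0, r1, Nr, endpoint=False)
theta = np.linspace(0.0, 2.0 * np.pi, Nth, endpoint=False)
R, TH = np.meshgrid(r, theta, indexing="ij")

def angular_components(psi_polar):
    """psi_polar has shape (Nr, Nth); column m of the result is a_m(r)
    in numpy's FFT ordering (m = 0, 1, ..., then negative m)."""
    return np.fft.fft(psi_polar, axis=1) / Nth

# Example: a pure m = 1 state puts all its weight in the m = 1 channel.
psi = np.exp(-(R - 2.0) ** 2) * np.exp(1j * TH)
a = angular_components(psi)
```

Monitoring the weights $\int |a_m(r)|^2\, r\, dr$ during propagation shows directly how much angular momentum the initial state carries and which $m$ channels the modulated potential couples.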
\section{conclusion and discussion} The atomic motion in a small-diameter hollow fiber is quantum mechanical if the dimensionless Planck constant $\shortstack{\hspace*{0.02em}\_\\*[-1.0ex]\it k}$ approaches $1$. Quantum collapses and revivals appear in the dynamics of the mean values. If the intensity is periodically modulated, we find that the modulation increases the variance of the momentum but suppresses the variance of the position at integer strobe numbers. The fluctuations of the variances of position and momentum are consistent with the Heisenberg uncertainty relation in this system. In this two-degree-of-freedom quantum system, the atomic angular momentum influences the evolution of the wave function. Although the flux of guided atoms entering the fiber is low, it is still possible to observe the classical chaotic dynamics by integrating the signals at different strobe numbers in experiment. The integration time can be taken long enough to ensure that it includes a large number of atoms. We have shown that an atom moving in an intensity-modulated evanescent-wave field in a hollow fiber can exhibit chaotic dynamics in the transverse plane. For atomic momenta $p_{x}$, $p_{y}$ with Gaussian distributions, some atoms will become trapped in rings corresponding to radial fixed points of the modulated system at integer strobe numbers. Since the average atomic radial velocity is around the recoil velocity and the spatial distribution is no larger than the inner diameter of the fiber, high-resolution position and velocity measurements must be used. The Raman-induced resonance imaging method can reach nanometer spatial resolution, limited by the uncertainty principle. The atomic momentum distribution can be measured by the time-of-flight absorption imaging method \cite{Davis,Bradley}, which is used to measure the temperature of ultracold atoms in BEC experiments, or by atomic velocity selection using stimulated Raman transitions \cite{Kasevich}.
It is quite interesting that both the classical chaotic dynamics and the quantum dynamics can be realized in experiment for atoms propagating in a hollow fiber. Because the fiber's inner bore is very small, it selects atoms with very small radial velocity at the entrance to the fiber, and there is no need to pre-cool the radial motion. In addition, the fiber's boundary limits the divergence of the radial motion, making it easier to observe both the two-dimensional classical and quantum dynamics in experiment. We believe this is a far more practical scheme for observing the classical and quantum nonlinear dynamics of radially confined atoms than other schemes, such as those that use a donut beam \cite{liu}. \section{Acknowledgment} One of the authors (XML) would like to thank Dr. Kenneth Baldwin at the Australian National University for useful discussions about the physical parameters of helium and hollow fibers. \thebibliography{99} \bibitem{Ito1} H. Ito, K. Sakaki, W. Jhe, M. Ohtsu, Opt. Commun. 141, 43-47 (1997). \bibitem{Ito2} H. Ito, T. Nakata, K. Sakaki, M. Ohtsu, Phys. Rev. Lett. 76, 4500 (1996). \bibitem{Renn} Michael J. Renn, Elizabeth A. Donley, Eric Cornell, Carl E. Wieman, Dana Z. Anderson, Phys. Rev. A 53, R648 (1996). \bibitem{Moore94} F. L. Moore, J. C. Robinson, C. Bharucha, P. E. Williams and M. G. Raizen, Phys. Rev. Lett. {\bf 73}, 2974 (1994). \bibitem{Jensen91} R. V. Jensen, S. M. Susskind and M. M. Sanders, Phys. Rep. {\bf 201}, 1 (1991). \bibitem{Marcus92} C. M. Marcus, A. J. Rimberg, R. M. Westervelt, P. F. Hopkins, and A. C. Gossard, Phys. Rev. Lett. {\bf 69}, 506 (1992). \bibitem{Taylor97} R. P. Taylor, R. Newbury, A. S. Sachrajda, Y. Feng, P. T. Coleridge, C. Dettmann, Ningjia Zhu, Hong Guo, A. Delage, P. J. Kelly, and Z. Wasilewski, Phys. Rev. Lett. {\bf 78}, 1952 (1997). \bibitem{Kovlosky94} A. R. Kovlosky, Phys. Rev. E {\bf 50}, 3569 (1994). \bibitem{chen} Wenyu Chen, S. Dyrting and G. J. Milburn, Australian Journal of Physics, 49, 777-818 (1996). \bibitem{cohen-Tannoudji} C.
Cohen-Tannoudji, in {\em Fundamental Systems in Quantum Optics}, Proceedings of the Les Houches Summer School (North-Holland, Amsterdam, 1992). \bibitem{lich} A. J. Lichtenberg and M. A. Lieberman, {\em Regular and Stochastic Motion} (Springer-Verlag, New York, Heidelberg, Berlin, 1983). \bibitem{forest} Etienne Forest and Martin Berz, Canonical integration and analysis of periodic maps using non-standard analysis and Lie methods, in Kurt Bernardo Wolf, editor, {\em Lie Methods in Optics II}, page 47 (Springer-Verlag, Berlin, Heidelberg, 1989). \bibitem{dyrting} S. Dyrting, Ph.D. thesis, Department of Physics, University of Queensland (1995). \bibitem{liu} X. M. Liu and G. J. Milburn, Phys. Rev. E 59, 2842 (1999). \bibitem{Kosloff} Ronnie Kosloff, J. Phys. Chem. 92, 2087 (1988). \bibitem{Collatz} Lothar Collatz, {\em Differential Equations: An Introduction with Applications} (John Wiley \& Sons, 1986). \bibitem{Press} William H. Press et al., {\em Numerical Recipes in C} (Cambridge University Press, 1992). \bibitem{Thomas} J. E. Thomas, Phys. Rev. A 42, 5652 (1990). \bibitem{Stokes} K. D. Stokes, C. Schnurr, J. R. Garner, M. Marable, G. R. Welch, and J. E. Thomas, Phys. Rev. Lett. 67, 1997 (1991). \bibitem{Gardner} J. R. Gardner, M. L. Marable, G. R. Welch, and J. E. Thomas, Phys. Rev. Lett. 70, 3404 (1993). \bibitem{Kasevich} Mark Kasevich, David S. Weiss, Erling Riis, Kathryn Moler, Steven Kasapi and Steven Chu, Phys. Rev. Lett. 66, 2297 (1991). \bibitem{Davis} K. B. Davis, M.-O. Mewes, M. R. Andrews, N. J. van Druten, D. S. Durfee, D. M. Kurn, and W. Ketterle, Phys. Rev. Lett. 75, 3969 (1995). \bibitem{Bradley} C. C. Bradley, C. A. Sackett, J. J. Tollett, and R. G. Hulet, Phys. Rev. Lett. 75, 1687 (1995). \newpage \begin{figure}[htbp] \caption{{\em The diagram of the proposed experiment. The large blue-detuned shield laser blocks atoms from entering the detection region except during the short time interval $dT \ll T$.
The integration of many snapshots has a similar effect on the atomic distribution as a single snapshot with a large number of atoms.}} \protect\label{fig1} \end{figure} \begin{figure}[htbp] \caption{{\em The relations between the frequency of motion and the Hamiltonian, $\omega_{0} \sim H_{0}$ and $x_{M}\sim H_{0}$.}} \protect\label{fig2} \end{figure} \begin{figure}[htbp] \caption{{\em Stroboscopic portrait of the system with $\epsilon=0.7$, $p_x(0)=0$, $p_{y}=0$ and $y=0$. The maximum strobe number is $500$.}} \protect\label{fig3} \end{figure} \begin{figure}[htbp] \caption{{\em The atomic distribution in the $(x,y)$ plane at strobe number 50 for $\epsilon=0.7$. $10^4$ atoms were taken in phase space. The atoms were initially distributed on the region $x^2+y^2 \leq r^2_{1}$. The momenta $p_x$, $p_{y}$ have Gaussian distributions with $\sigma_{p_{x}}=\sigma_{p_{y}}=0.1$.}} \protect\label{fig4} \end{figure} \begin{figure}[htbp] \caption{{\em The atomic distributions of $10^4$ atoms in the $(x,y)$ plane. After strobe number 50, we count 200 atoms at every integer strobe number until $s=150$.}} \protect\label{fig5} \end{figure} \begin{figure}[htbp] \caption{{\em The evolution with time of the average momentum $<p_{x}>$ and the momentum variance $<{p_{x}^2}>-{<p_{x}>}^2$. The initial wave function is the minimum uncertainty state, $\sigma_{x}=\sigma_{y}=0.1$. For (a), $x_{0}=y_{0}=0$, $p_{0x}=p_{0y}=1.0$. For (b), $x_{0}=y_{0}=0.5$, $p_{0x}=p_{0y}=0.0$.}} \protect\label{fig6} \end{figure} \begin{figure}[htbp] \caption{{\em The position variance $<{x}^2>-<x>^2$ and the momentum variance $<{p_{x}^2}>-{<p_{x}>}^2$ versus strobe number. The initial wave function is the minimum uncertainty state, $\sigma_{x}=\sigma_{y}=0.1$. For (a) and (b), $x_{0}=y_{0}=0$, $p_{0x}=p_{0y}=1.0$.
For (c) and (d), $x_{0}=y_{0}=0.5$, $p_{0x}=p_{0y}=0.0$.}} \protect\label{fig7} \end{figure} \begin{figure}[htbp] \caption{{\em The probability distributions ${|\psi(x, 0)|}^2$ at $y=0$. The initial wave function is the minimum uncertainty state, $\sigma_{x}=\sigma_{y}=0.1$, $x_{0}=y_{0}=0$, $p_{0x}=p_{0y}=0$. The dotted line is the distribution at $t=0$; the dashed line is the distribution at $t=50T$ with no modulation, $\epsilon=0$; and the solid line is the distribution at $t=50T$ with $\epsilon=0.7$.}} \protect\label{fig8} \end{figure} \begin{figure}[htbp] \caption{{\em The probability distributions ${|\psi(x, 0)|}^2$ at $y=0$. The initial wave function is the minimum uncertainty state, $\sigma_{x}=\sigma_{y}=0.1$, $x_{0}=y_{0}=0$, $p_{0x}=p_{0y}=0.1$. The dotted line is the distribution at $t=0$; the dashed line is the distribution at $t=50T$ with no modulation, $\epsilon=0$; and the solid line is the distribution at $t=50T$ with $\epsilon=0.7$.}} \protect\label{fig9} \end{figure} \end{document}
\section{Introduction} \label{sect:intro} Let $Z$ be a variety over a finite field. The triangulated category of $\ell$-adic sheaves on $Z$ has a full subcategory $\dbmc Z$ of ``mixed sheaves,'' defined in terms of eigenvalues of the Frobenius morphism. The existence and good formal properties of this category are among the most important consequences of Deligne's proof of the Weil conjectures. It plays a major role in the theory of perverse sheaves and their applications in representation theory. An important part of the formalism of mixed sheaves is a certain filtration of $\dbmc Z$ by full subcategories $\{\dbmc Z_{\le w}\}_{w \in \mathbb{Z}}$, known as the \emph{weight filtration}. Let us now turn our attention to the world of equivariant coherent sheaves. Let $X$ be a scheme (say, of finite type over a field), and let $G$ be an affine group scheme acting on $X$ with finitely many orbits. In~\cite{a}, the first author introduced a class of $t$-structures, called \emph{staggered $t$-structures}, on the bounded derived category $\dgb X$ of $G$-equivariant coherent sheaves on $X$. These $t$-structures depend on the choice of a certain kind of filtration of the abelian category of equivariant coherent sheaves. These filtrations, known as \emph{$s$-structures}, bear an at least superficial resemblance to the weight filtration of $\dbmc Z$. The main goal of this paper is to try to make this resemblance into a precise statement, and thereby to place these two kinds of structures in a unified setting. We do this by introducing the notion of a \emph{baric structure} on a triangulated category. The usual weight filtration on $\dbmc Z$ is not a baric structure, but a modified version of it due to S.~Morel~\cite{mor} is. (Indeed, the definition of a baric structure is largely motivated by Morel's results.) An $s$-structure is not a baric structure either: for one thing, it is a filtration of an abelian category, not of a triangulated category.
We show in this paper how to construct baric structures on $\dgb X$ using an $s$-structure on $X$. We also exhibit several other examples of baric structures that have appeared in the literature. The second goal of the paper is to recast the construction in~\cite{a} as an instance of an abstract operation that can be done on any triangulated category. Specifically, given a triangulated category with ``compatible'' $t$- and baric structures, we outline a procedure, which we call \emph{staggering}, for producing a new $t$-structure. Note that in~\cite{a}, ``staggered'' was simply a name assigned to certain specific $t$-structures by definition, whereas in this paper, ``to stagger'' is a verb. We prove that these two uses of the word are consistent: that is, that the $t$-structures of~\cite{a} arise by staggering the standard $t$-structure on $\dgb X$ with respect to a suitable baric structure. (The staggering operation can also be applied to the weight baric structure on $\dbmc Z$, as well as to other baric structures. This yields a new $t$-structure that has not previously been studied.) An outline of the paper is as follows. We begin in Section~\ref{sect:baric} by giving the definition of a baric structure and of the staggering operation. In Section~\ref{sect:examples}, we give examples of baric structures, including Morel's version of the weight filtration. Next, in Section~\ref{sect:baric-coh1}, we begin the study of baric structures on derived categories of equivariant coherent sheaves, especially those that behave well with respect to the geometry of the underlying scheme. The next three sections are devoted to the relationship between baric structures and $s$-structures. First, in Section~\ref{sect:stagt}, we review relevant definitions and results from~\cite{a}. Section~\ref{sect:baric-coh2} contains the main result of the paper, showing how $s$-structures on the abelian category of coherent sheaves give rise to baric structures on the derived category. 
In Section~\ref{sect:mult}, we briefly consider the reverse problem, that of producing $s$-structures from baric structures. Finally, in Section~\ref{sect:stag2}, we study staggered $t$-structures associated to the baric structures produced in Section~\ref{sect:baric-coh2}. Specifically, we prove that their hearts are finite-length categories, and we give a description of their simple objects. This was done in some cases in~\cite{a}, but remarkably, the machinery of baric structures allows us to remove the assumptions that were imposed in~{\it loc.~cit}. We conclude by mentioning an application of the machinery developed in this paper. The language of baric structures allows one to define a notion of ``purity,'' similar to the one for $\ell$-adic mixed constructible sheaves. In a subsequent paper~\cite{at}, the authors prove that every simple staggered sheaf is pure, and that every pure object in the derived category is a direct sum of shifts of simple staggered sheaves. These results are analogous to the well-known Purity and Decomposition Theorems for $\ell$-adic mixed perverse sheaves. \section{Baric structures} \label{sect:baric} In this section we introduce baric structures on triangulated categories (Definition~\ref{defn:baric}), and the operation of \emph{staggering} a $t$-structure with respect to a baric structure (Definition~\ref{defn:stag}). Staggering produces, out of a $t$-structure $(\mathfrak{D}^{\leq 0}, \mathfrak{D}^{\geq 0})$ on a triangulated category $\mathfrak{D}$, a new pair of orthogonal subcategories $({}^s\!\fD^{\leq 0},{}^s\!\fD^{\geq 0})$. Our main result is a criterion which guarantees that $({}^s\!\fD^{\leq 0},{}^s\!\fD^{\geq 0})$ is itself a $t$-structure (Theorem~\ref{thm:stag-gen}). \subsection{Baric structures} \begin{defn}\label{defn:baric} Let $\mathfrak{D}$ be a triangulated category. 
A \emph{baric structure} on $\mathfrak{D}$ is a pair of collections of thick subcategories $(\{\mathfrak{D}_{\le w}\}, \{\mathfrak{D}_{\ge w}\})_{w \in \mathbb{Z}}$ satisfying the following axioms: \begin{enumerate} \item $\mathfrak{D}_{\le w} \subset \mathfrak{D}_{\le w+1}$ and $\mathfrak{D}_{\ge w} \supset \mathfrak{D}_{\ge w+1}$ for all $w$. \item $\Hom(A,B) = 0$ whenever $A \in \mathfrak{D}_{\le w}$ and $B \in \mathfrak{D}_{\ge w+1}$. \item For any object $X \in \mathfrak{D}$ and any $w \in \mathbb{Z}$, there is a distinguished triangle $A \to X \to B \to$ with $A \in \mathfrak{D}_{\le w}$ and $B \in \mathfrak{D}_{\ge w+1}$.\label{it:dt} \end{enumerate} \end{defn} This definition is at least superficially very similar to that of a \emph{$t$-structure}, and in fact arguments identical to those given in~\cite[\S\S 1.3.3--1.3.5]{bbd} yield the following basic properties of baric structures. \begin{prop}\label{prop:baric-basic} Let $\mathfrak{D}$ be a triangulated category equipped with a baric structure $(\{\fD_{\leq w}\},\{\fD_{\geq w}\})_{w \in \Z}$. The inclusion $\mathfrak{D}_{\le w} \hookrightarrow \mathfrak{D}$ admits a right adjoint $\beta_{\le w}: \mathfrak{D} \to \mathfrak{D}_{\le w}$, and the inclusion $\mathfrak{D}_{\ge w} \hookrightarrow \mathfrak{D}$ admits a left adjoint $\beta_{\ge w}: \mathfrak{D} \to \mathfrak{D}_{\ge w}$. There is a distinguished triangle \[ \beta_{\le w}X \to X \to \beta_{\ge w+1}X \to, \] and any distinguished triangle as in Axiom~(3) above is canonically isomorphic to this one.
Furthermore, if $v \le w$, then we have the following isomorphisms of functors, where the compositions in the second column vanish whenever $v < w$: \begin{align*} \beta_{\le v} \circ \beta_{\le w} &\cong \beta_{\le v} & \beta_{\ge v} \circ \beta_{\le w} &\cong \beta_{\le w} \circ \beta_{\ge v} \\ \beta_{\ge w} \circ \beta_{\ge v} &\cong \beta_{\ge w} & \beta_{\le v} \circ \beta_{\ge w} &\cong \beta_{\ge w} \circ \beta_{\le v} \qedhere \end{align*} \end{prop} Note that in a baric structure, unlike in a $t$-structure, the subcategories $\mathfrak{D}_{\le w}$ and $\mathfrak{D}_{\ge w}$ are required to be stable under shifts in both directions, and it is not assumed that there is an autoequivalence $\mathfrak{D} \to \mathfrak{D}$ taking $\mathfrak{D}_{\le w}$ to, say, $\mathfrak{D}_{\le w+1}$. Moreover, baric truncation functors enjoy the following important property. \begin{prop} The baric truncation functors $\beta_{\le w}$ and $\beta_{\ge w}$ take distinguished triangles to distinguished triangles. \end{prop} \begin{proof} Let $X \to Y \to Z \to$ be a distinguished triangle in $\mathfrak{D}$, and consider the natural morphism $\beta_{\le w}X \to X$. The composition of this morphism with $X \to Y$ factors through $\beta_{\le w}Y \to Y$ (since $\Hom(\beta_{\le w}X, Y) \cong \Hom(\beta_{\le w}X, \beta_{\le w}Y)$), so we obtain a commutative diagram \[ \xymatrix@=10pt{ \beta_{\le w}X \ar[r]\ar[d] & \beta_{\le w}Y \ar[d] \\ X \ar[r] & Y} \] Let us complete this diagram using the $9$-lemma~\cite[Proposition~1.1.11]{bbd}: \[ \xymatrix@=10pt{ \beta_{\le w}X \ar[r]\ar[d] & \beta_{\le w}Y \ar[r]\ar[d] & Z' \ar[r]\ar[d] & \\ X \ar[r]\ar[d] & Y\ar[r]\ar[d] & Z \ar[r]\ar[d] & \\ \beta_{\ge w+1}X \ar[r]\ar[d] & \beta_{\ge w+1}Y \ar[r]\ar[d] & Z'' \ar[r]\ar[d] & \\ &&&} \] Since $\mathfrak{D}_{\le w}$ and $\mathfrak{D}_{\ge w+1}$ are full triangulated subcategories of $\mathfrak{D}$, we see that $Z' \in \mathfrak{D}_{\le w}$ and $Z'' \in \mathfrak{D}_{\ge w+1}$.
But then Proposition~\ref{prop:baric-basic} tells us that $Z' \cong \beta_{\le w}Z$ and $Z'' \cong \beta_{\ge w+1}Z$, so we obtain distinguished triangles \[ \beta_{\le w}X \to \beta_{\le w}Y \to \beta_{\le w}Z \to \qquad\text{and}\qquad \beta_{\ge w+1}X \to \beta_{\ge w+1}Y \to \beta_{\ge w+1}Z \to, \] as desired. \end{proof} \begin{defn} Let $\mathfrak{D}$ be a triangulated category equipped with a baric structure $(\{\fD_{\leq w}\},\{\fD_{\geq w}\})_{w \in \Z}$. We will use the following terminology: \begin{enumerate} \item The adjoints $\beta_{\le w}$ and $\beta_{\ge w}$ to the inclusions $\mathfrak{D}_{\le w} \hookrightarrow \mathfrak{D}$ and $\mathfrak{D}_{\ge w} \hookrightarrow \mathfrak{D}$ are called \emph{baric truncation functors}. \item The baric structure is \emph{bounded} if for each object $A \in \mathfrak{D}$, there exist integers $v, w$ such that $A \in \mathfrak{D}_{\ge v} \cap \mathfrak{D}_{\le w}$. \item It is \emph{nondegenerate} if there is no nonzero object belonging to all $\mathfrak{D}_{\le w}$ or to all $\mathfrak{D}_{\ge w}$. Note that a bounded baric structure is automatically nondegenerate. \item Let $\mathfrak{D}'$ be another triangulated category, and suppose it is equipped with a baric structure $(\{\mathfrak{D}'_{\le w}\}, \{\mathfrak{D}'_{\ge w}\})$. A functor of triangulated categories $F: \mathfrak{D} \to \mathfrak{D}'$ is said to be \emph{left baryexact} if $F(\mathfrak{D}_{\ge w}) \subset \mathfrak{D}'_{\ge w}$ for all $w \in \mathbb{Z}$, and \emph{right baryexact} if $F(\mathfrak{D}_{\le w}) \subset \mathfrak{D}'_{\le w}$ for all $w \in \mathbb{Z}$. \end{enumerate} \end{defn} Let us also record the following definitions, though we will not use them until later in the paper. \begin{defn} Let $\mathfrak{D}$ be a triangulated category equipped with a baric structure $(\{\fD_{\leq w}\},\{\fD_{\geq w}\})_{w \in \Z}$.
\begin{enumerate} \item Suppose $\mathfrak{D}$ is equipped with an involutive antiequivalence $\mathbb{D}: \mathfrak{D} \to \mathfrak{D}$. The baric structure is \emph{self-dual} if $\mathbb{D}(\mathfrak{D}_{\le w}) = \mathfrak{D}_{\ge -w}$. \item Suppose $\mathfrak{D}$ has the structure of a tensor category, with tensor product $\otimes$. The baric structure is \emph{multiplicative} with respect to $\otimes$ if for any $A \in \mathfrak{D}_{\le v}$ and $B \in \mathfrak{D}_{\le w}$, we have $A \otimes B \in \mathfrak{D}_{\le v+w}$. \item Suppose $\mathfrak{D}$ has an internal Hom functor $\cHom$. The baric structure is \emph{multiplicative} with respect to $\cHom$ if for any $A \in \mathfrak{D}_{\leq v}$ and $B \in \mathfrak{D}_{\geq w}$, we have $\cHom(A,B) \in \mathfrak{D}_{\geq w-v}$. \end{enumerate} Note that whenever we have an adjunction between $\otimes$ and $\cHom$, the multiplicativity conditions are equivalent. \end{defn} \subsection{Staggering} Below, if $\mathfrak{D}$ is equipped with a $t$-structure $(\mathfrak{D}^{\le 0}, \mathfrak{D}^{\ge 0})$, we write $\mathfrak{C} = \mathfrak{D}^{\le 0} \cap \mathfrak{D}^{\ge 0}$ for its heart, and we denote the associated truncation functors by $\tau^{\le n}$ and $\tau^{\ge n}$. The $n$th cohomology functor associated to the $t$-structure is denoted $h^n: \mathfrak{D} \to \mathfrak{C}$. \begin{defn}\label{defn:compat} Let $\mathfrak{D}$ be a triangulated category equipped with both a $t$-structure and a baric structure. These structures are said to be \emph{compatible} if $\tau^{\le n}$ and $\tau^{\ge n}$ are right baryexact, and $\beta_{\le w}$ and $\beta_{\ge w}$ are left $t$-exact. \end{defn} \begin{rmk} Of course there is a dual notion of compatibility, but it does not seem to arise as often. \end{rmk} \begin{defn}\label{defn:stag} Let $\mathfrak{D}$ be a triangulated category equipped with compatible $t$- and baric structures.
Define two full subcategories of $\mathfrak{D}$ as follows: \begin{align*} {}^s\!\fD^{\le 0} &= \{A \in \mathfrak{D} \mid \text{$h^k(A) \in \mathfrak{D}_{\le -k}$ for all $k \in \mathbb{Z}$} \}, \\ {}^s\!\fD^{\ge 0} &= \{B \in \mathfrak{D} \mid \text{$\beta_{\le k}B \in \mathfrak{D}^{\ge -k}$ for all $k \in \mathbb{Z}$} \}. \end{align*} Assume that the pair $({}^s\!\fD^{\le 0}, {}^s\!\fD^{\ge 0})$ constitutes a $t$-structure. It is called the \emph{staggered $t$-structure}, or the $t$-structure obtained by \emph{staggering} the original $t$-structure with respect to the given baric structure. \end{defn} As usual, we let ${}^s\!\fD^{\le n} = {}^s\!\fD^{\le 0}[-n]$ and ${}^s\!\fD^{\ge n} = {}^s\!\fD^{\ge 0}[-n]$. \begin{lem}\label{lem:compat} Let $\mathfrak{D}$ be a triangulated category equipped with compatible $t$- and baric structures. Assume the $t$-structure is nondegenerate. \begin{enumerate} \item $A \in \mathfrak{D}_{\le w}$ if and only if $h^k(A) \in \mathfrak{D}_{\le w}$ for all $k$. \label{it:lth} \item $B \in \mathfrak{D}_{\ge w}$ if and only if $\beta_{\le w-1}\tau^{\le k}B \in \mathfrak{D}^{\ge k+2}$ for all $k$.\label{it:gth} \item We have \label{it:gthom} \[ \mathfrak{D}_{\ge w} \cap \mathfrak{C} = \{ B \in \mathfrak{C} \mid \text{$\Hom^k(A,B) = 0$ for all $A \in \mathfrak{D}_{\le w-1} \cap \mathfrak{C}$ and all $k \ge 0$} \}. \] \item $\mathfrak{D}_{\le w} \cap \mathfrak{C}$ is a Serre subcategory of $\mathfrak{C}$, and $\mathfrak{D}_{\ge w} \cap \mathfrak{C}$ is stable under extensions.\label{it:ltserre} \item ${}^s\!\fD^{\le 0}$ and ${}^s\!\fD^{\ge 0}$ are stable under extensions. \label{it:ssext} \item $\mathfrak{D}^{\le k} \cap \mathfrak{D}_{\le w} \subset {}^s\!\fD^{\le k+w}$, and $\mathfrak{D}^{\ge k} \cap \mathfrak{D}_{\ge w} \subset {}^s\!\fD^{\ge k+w}$.
\label{it:sscap} \end{enumerate} \end{lem} \begin{proof} \eqref{it:lth}~Since $\mathfrak{D}_{\le w}$ is stable under $\tau^{\le k}$ and $\tau^{\ge k}$, it is clear that $A \in \mathfrak{D}_{\le w}$ implies that $h^k(A) \in \mathfrak{D}_{\le w}$. Conversely, suppose $h^k(A) \in \mathfrak{D}_{\le w}$ for all $k$. Recall (e.g. \cite[Proposition 4.4.6]{verdier}) that we have a spectral sequence \begin{equation}\label{eqn:e2-1} E_2^{ab} = \Hom(h^{-b}(A), B[a]) \quad\Longrightarrow\quad \Hom(A,B[a+b]). \end{equation} Since $\Hom(h^{-b}(A), B[a]) = 0$ for all $B \in \mathfrak{D}_{\ge w+1}$ and all $a, b \in \mathbb{Z}$, we see that $\Hom(A,B) = 0$ for all $B \in \mathfrak{D}_{\ge w+1}$, and hence that $A \in \mathfrak{D}_{\le w}$. \eqref{it:gth}~Consider the distinguished triangle \[ \beta_{\le w-1}\tau^{\le k}B \to \beta_{\le w-1}B \to \beta_{\le w-1}\tau^{\ge k+1}B \to. \] The last term is always in $\mathfrak{D}^{\ge k+1}$ by the left $t$-exactness of $\beta_{\le w-1}$. If $B \in \mathfrak{D}_{\ge w}$, so that $\beta_{\le w-1}B = 0$, then $\beta_{\le w-1}\tau^{\le k}B \cong (\beta_{\le w-1}\tau^{\ge k+1}B)[-1] \in \mathfrak{D}^{\ge k+2}$. Conversely, if the $t$-structure is nondegenerate, and if $\beta_{\le w-1}\tau^{\le k}B \in \mathfrak{D}^{\ge k+2}$ for all $k$, the distinguished triangle above shows that $\beta_{\le w-1}B \in \mathfrak{D}^{\ge k+1}$ for all $k$, and hence that $\beta_{\le w-1}B = 0$, so $B \in \mathfrak{D}_{\ge w}$, as desired. \eqref{it:gthom}~If $B \in \mathfrak{D}_{\ge w} \cap \mathfrak{C}$, then clearly $\Hom(A[-k],B) = 0$ for all $A \in \mathfrak{D}_{\le w-1} \cap \mathfrak{C}$ and all $k \ge 0$, since $A[-k] \in \mathfrak{D}_{\le w-1}$ for all $k$. Conversely, if $\Hom(A,B[k]) = 0$ for all $A \in \mathfrak{D}_{\le w-1} \cap \mathfrak{C}$ and all $k \ge 0$, the spectral sequence~\eqref{eqn:e2-1} shows that $\Hom(A,B) = 0$ for all $A \in \mathfrak{D}_{\le w-1}$, and hence that $B \in \mathfrak{D}_{\ge w}$. 
\eqref{it:ltserre}~Suppose we have a short exact sequence \[ 0 \to A \to B \to C \to 0 \] in $\mathfrak{C}$. If $A$ and $C$ are in $\mathfrak{D}_{\le w}$, then $B$ must be as well, since $\mathfrak{D}_{\le w}$ is stable under extensions. Conversely, suppose $B \in \mathfrak{D}_{\le w}$, so that $\beta_{\ge w+1}B = 0$. Applying $\beta_{\ge w+1}$ (which takes distinguished triangles to distinguished triangles) to the triangle $A \to B \to C \to$, we see that $\beta_{\ge w+1}A \cong \beta_{\ge w+1}C[-1]$. Since $\beta_{\ge w+1}$ is left $t$-exact and $C \in \mathfrak{C}$, we have $\beta_{\ge w+1}C \in \mathfrak{D}^{\ge 0}$, so $\beta_{\ge w+1}A \in \mathfrak{D}^{\ge 1}$, and in particular $h^0(\beta_{\ge w+1}A) = 0$. By left $t$-exactness of the baric truncation functors, we have an exact sequence \[ 0 \to h^0(\beta_{\le w}A) \to A \to h^0(\beta_{\ge w+1}A) = 0, \] which shows that $A \cong h^0(\beta_{\le w}A) \in \mathfrak{D}_{\le w}$. Hence $\beta_{\ge w+1}A = 0$, and therefore $\beta_{\ge w+1}C \cong \beta_{\ge w+1}A[1] = 0$ as well, so $C \in \mathfrak{D}_{\le w}$. Thus, $A$ and $C$ are in $\mathfrak{D}_{\le w}$, as desired. That $\mathfrak{D}_{\ge w} \cap \mathfrak{C}$ is stable under extensions follows immediately from the fact that $\mathfrak{D}_{\ge w}$ is stable under extensions. \eqref{it:ssext}~Let $A \to B \to C \to$ be a distinguished triangle with $A \in {}^s\!\fD^{\le 0}$ and $C \in {}^s\!\fD^{\le 0}$, and consider the exact sequence \[ h^k(A) \overset{f}{\to} h^k(B) \overset{g}{\to} h^k(C). \] Since $h^k(A) \in \mathfrak{D}_{\le -k}$, its quotient $\im f$ is in $\mathfrak{D}_{\le -k}$ as well. Similarly, $\im g \in \mathfrak{D}_{\le -k}$ because it is a subobject of $h^k(C)$. Now, from the short exact sequence $0 \to \im f \to h^k(B) \to \im g \to 0$, we deduce that $h^k(B) \in \mathfrak{D}_{\le -k}$. Thus, $B \in {}^s\!\fD^{\le 0}$.
On the other hand, if $A \to B \to C \to$ is a distinguished triangle with $A, C \in {}^s\!\fD^{\ge 0}$, consider the distinguished triangle \[ \beta_{\le k}A \to \beta_{\le k}B \to \beta_{\le k}C \to. \] Since $\beta_{\le k}A$ and $\beta_{\le k}C$ lie in $\mathfrak{D}^{\ge -k}$, $\beta_{\le k}B \in \mathfrak{D}^{\ge -k}$ as well, so $B \in {}^s\!\fD^{\ge 0}$. \eqref{it:sscap}~If $A \in \mathfrak{D}^{\le k} \cap \mathfrak{D}_{\le w}$, then $h^i(A[k+w]) = h^{i+k+w}(A) = 0$ if $i > -w$, and $h^i(A[k+w]) \in \mathfrak{D}_{\le w} \subset \mathfrak{D}_{\le -i}$ if $i \le -w$. Thus, $A[k+w] \in {}^s\!\fD^{\le 0}$, or $A \in {}^s\!\fD^{\le k+w}$. Next, suppose $B \in \mathfrak{D}^{\ge k} \cap \mathfrak{D}_{\ge w}$. Then $\beta_{\le i}B[k+w] = 0$ if $i < w$, and $\beta_{\le i}B[k+w] \in \mathfrak{D}^{\ge k}[k+w] = \mathfrak{D}^{\ge -w} \subset \mathfrak{D}^{\ge -i}$ if $i \ge w$. Hence, $B[k+w] \in {}^s\!\fD^{\ge 0}$, or $B \in {}^s\!\fD^{\ge k+w}$. \end{proof} \begin{prop}\label{prop:compat} Let $\mathfrak{D}$ be a triangulated category equipped with compatible $t$- and baric structures. Assume the $t$-structure is nondegenerate. \begin{enumerate} \item $\Hom(A,B) = 0$ for all $A \in {}^s\!\fD^{\le 0}$ and $B \in {}^s\!\fD^{\ge 1}$. \label{it:sshom} \item If $\Hom(A,B) = 0$ for all $B \in {}^s\!\fD^{\ge 1}$, then $A \in {}^s\!\fD^{\le 0}$. If $\Hom(A,B) = 0$ for all $A \in {}^s\!\fD^{\le 0}$, then $B \in {}^s\!\fD^{\ge 1}$. \label{it:ssorth} \item ${}^s\!\fD^{\le 0} \subset {}^s\!\fD^{\le 1}$ and ${}^s\!\fD^{\ge 0} \supset {}^s\!\fD^{\ge 1}$. \label{it:ssshift} \item If the baric structure is also nondegenerate, there is no nonzero object belonging to all ${}^s\!\fD^{\le n}$ or to all ${}^s\!\fD^{\ge n}$. \label{it:ssnondeg} \item If the $t$- and baric structures are bounded, then for any $A \in \mathfrak{D}$, there are integers $n, m$ such that $A \in {}^s\!\fD^{\ge n} \cap {}^s\!\fD^{\le m}$.
\label{it:ssbdd} \end{enumerate} \end{prop} \begin{proof} \eqref{it:sshom}~For any $k \in \mathbb{Z}$, $h^{-k}(A) \in \mathfrak{D}_{\le k}$, and therefore $\Hom(h^{-k}(A),B[k]) \cong \Hom(h^{-k}(A), \beta_{\le k}B[k])$. But $\beta_{\le k}B \in \mathfrak{D}^{\ge k+1}$, so $\Hom(h^{-k}(A), \beta_{\le k}B[k]) = 0$ for all $k$. It follows from the spectral sequence~\eqref{eqn:e2-1} that $\Hom(A,B) = 0$. \eqref{it:ssorth}~Suppose $\Hom(A,B) = 0$ for all $B \in {}^s\!\fD^{\ge 1}$, and suppose for some $k$, $h^k(A) \notin \mathfrak{D}_{\le -k}$. That implies that $\tau^{\ge k}A \notin \mathfrak{D}_{\le -k}$, so $\beta_{\ge -k+1}\tau^{\ge k}A \ne 0$. In particular, the natural adjunction morphism $A \to \beta_{\ge -k+1}\tau^{\ge k}A$ is nonzero. However, $\beta_{\ge -k+1}\tau^{\ge k}A \in \mathfrak{D}^{\ge k} \cap \mathfrak{D}_{\ge -k+1} \subset {}^s\!\fD^{\ge 1}$. This contradicts the assumption that $\Hom(A,B) = 0$ for all $B \in {}^s\!\fD^{\ge 1}$, so we must have $h^k(A) \in \mathfrak{D}_{\le -k}$ for all $k$, and hence $A \in {}^s\!\fD^{\le 0}$. On the other hand, if $\Hom(A,B) = 0$ for all $A \in {}^s\!\fD^{\le 0}$, a similar argument involving the morphism $\tau^{\le -k}\beta_{\le k}B \to B$ shows that $B \in {}^s\!\fD^{\ge 1}$. \eqref{it:ssshift}~If $A \in {}^s\!\fD^{\le 0}$, then $h^k(A[1]) = h^{k+1}(A) \in \mathfrak{D}_{\le -k-1} \subset \mathfrak{D}_{\le -k}$, so $A[1] \in {}^s\!\fD^{\le 0}$, and hence ${}^s\!\fD^{\le 0} \subset {}^s\!\fD^{\le 1}$. Similarly, if $B \in {}^s\!\fD^{\ge 0}$, then $\beta_{\le k}B[-1] \in \mathfrak{D}^{\ge -k+1} \subset \mathfrak{D}^{\ge -k}$, so $B[-1] \in {}^s\!\fD^{\ge 0}$. \eqref{it:ssnondeg}~Suppose $A \in {}^s\!\fD^{\le n}$ for all $n$. Then $h^k(A) \in \mathfrak{D}_{\le n-k}$ for all $n$ and all $k$. The nondegeneracy of the baric structure implies that $h^k(A) = 0$; then, the nondegeneracy of the $t$-structure implies that $A = 0$. Next, suppose $A \in {}^s\!\fD^{\ge n}$ for all $n$, and assume $A \ne 0$. 
Choose some $w$ such that $\beta_{\le w}A \ne 0$, and then choose some $k$ such that $\tau^{\le k}\beta_{\le w}A \ne 0$. By right baryexactness of $\tau^{\le k}$, we know that $\tau^{\le k}\beta_{\le w}A \in \mathfrak{D}_{\le w}$, so we obtain a sequence of isomorphisms \[ \Hom(\tau^{\le k}\beta_{\le w}A, \tau^{\le k}\beta_{\le w}A) \cong \Hom(\tau^{\le k}\beta_{\le w}A, \beta_{\le w}A) \cong \Hom(\tau^{\le k}\beta_{\le w}A, A). \] In particular, the natural map $\tau^{\le k}\beta_{\le w}A \to A$ is nonzero. But clearly $\tau^{\le k}\beta_{\le w}A \in {}^s\!\fD^{\le k+w}$, so $A \notin {}^s\!\fD^{\ge k+w+1}$, a contradiction. \eqref{it:ssbdd}~This follows from Lemma~\ref{lem:compat}\eqref{it:sscap}. \end{proof} We will not prove in general that $({}^s\!\fD^{\le 0}, {}^s\!\fD^{\ge 0})$ is a $t$-structure; instead, the following theorem gives a sufficient condition for this to hold. \begin{thm}\label{thm:stag-gen} Let $\mathfrak{D}$ be a triangulated category endowed with compatible bounded, nondegenerate $t$- and baric structures. Suppose we have a function $\mu: \mathfrak{D} \to \mathbb{N}$ with the following properties: \begin{enumerate} \item $\mu(X) = 0$ if and only if $X = 0$. \item If $X \in \mathfrak{D}^{\ge n}$ but $X \notin \mathfrak{D}^{\ge n+1}$, then $\mu(\tau^{\ge n+1}\beta_{\le -n}X) < \mu(X)$. \end{enumerate} Then $({}^s\!\fD^{\le 0}, {}^s\!\fD^{\ge 0})$ is a bounded, nondegenerate $t$-structure on $\mathfrak{D}$. \end{thm} \begin{proof} It will be convenient to use the ``$*$'' operation on triangulated categories ({\it cf.} \cite[\S 1.3.9]{bbd}): given two classes of objects $\mathcal{A}, \mathcal{B} \subset \mathfrak{D}$, we denote by $\mathcal{A} * \mathcal{B}$ the class of all objects $X \in \mathfrak{D}$ such that there exists a distinguished triangle $A \to X \to B \to$ with $A \in \mathcal{A}$ and $B \in \mathcal{B}$. In view of the preceding proposition, the present theorem will be proved once we show that every object of $\mathfrak{D}$ belongs to ${}^s\!\fD^{\le 0} * {}^s\!\fD^{\ge 1}$. We proceed by induction on $\mu(X)$.
If $\mu(X) = 0$, then $X = 0$, and there is nothing to prove. Otherwise, let $n$ be the smallest integer such that $h^n(X) \ne 0$. Let $A_1 = \tau^{\le n}\beta_{\le -n} X$, $X' = \tau^{\ge n+1}\beta_{\le -n} X$, and $B_1 = \beta_{\ge -n+1} X$. It follows from the right baryexactness of $\tau^{\le n}$ that $A_1 \in {}^s\!\fD^{\le 0}$, and, similarly, it follows from the left $t$-exactness of $\beta_{\ge -n+1}$ that $B_1 \in {}^s\!\fD^{\ge 1}$. Recall~\cite[Proposition~1.3.10]{bbd} that the ``$*$'' operation is associative. By construction, we have \[ X \in \{A_1\} * \{X'\} * \{B_1\} \subset {}^s\!\fD^{\le 0} * \{X'\} * {}^s\!\fD^{\ge 1}. \] Since $\mu(X') < \mu(X)$ by assumption, we know that $X' \in {}^s\!\fD^{\le 0} * {}^s\!\fD^{\ge 1}$, and hence \[ X \in {}^s\!\fD^{\le 0} * {}^s\!\fD^{\le 0} * {}^s\!\fD^{\ge 1} * {}^s\!\fD^{\ge 1}. \] Since ${}^s\!\fD^{\le 0}$ and ${}^s\!\fD^{\ge 1}$ are stable under extensions, we have ${}^s\!\fD^{\le 0} * {}^s\!\fD^{\le 0} = {}^s\!\fD^{\le 0}$ and ${}^s\!\fD^{\ge 1} * {}^s\!\fD^{\ge 1} = {}^s\!\fD^{\ge 1}$, so $X \in {}^s\!\fD^{\le 0} * {}^s\!\fD^{\ge 1}$, as desired. \end{proof} \section{Examples} \label{sect:examples} In this section, we exhibit several examples of baric structures occurring ``in nature.'' In the first one, the staggering operation of Definition~\ref{defn:stag} is a new approach to a known $t$-structure. In two others, this operation gives what appears to be a previously unknown $t$-structure. The main example of this paper---baric structures on derived categories of coherent sheaves---will be discussed in the next section. \subsection{Perverse sheaves} Let $X$ be a topologically stratified space (as in~\cite{gm:ih}), with all strata of even real dimension. (This example can be easily modified to relax that condition, or to treat stratified varieties over a field instead.) 
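For concreteness, the reader may wish to keep in mind a minimal instance of such a space (our illustration, not part of the original construction): the complex line with a point stratum.

```latex
% Minimal illustration (our addition): a two-stratum topologically
% stratified space with all strata of even real dimension, namely
X \;=\; \{0\} \;\sqcup\; (\mathbb{C} \smallsetminus \{0\}),
% where the closed stratum \{0\} has real dimension 0 and the open
% stratum \mathbb{C} \smallsetminus \{0\} has real dimension 2.
```

In this example the closed subspaces appearing in the construction below are $X_w = \varnothing$ for $w < 0$, $X_0 = \{0\}$, and $X_w = X$ for $w \ge 1$.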
Let $D = D^b_c(X)$ be the bounded derived category of sheaves of complex vector spaces that are constructible with respect to the given stratification. For any $w \in \mathbb{Z}$, let $X_w$ be the union of all strata of dimension at most $2w$. (Thus, $X_w = \varnothing$ if $w < 0$.) This is a closed subspace of $X$. Let $i_w: X_w \to X$ be the inclusion map. Let $D_{\le w}$ be the full subcategory consisting of complexes whose support is contained in $X_w$, and let $D_{\ge w+1}$ be the full subcategory of complexes $\mathcal{F}$ such that $i_w^!\mathcal{F} = 0$. If $\mathcal{F} \in D_{\le w}$ and $\mathcal{G} \in D_{\ge w+1}$, then $\mathcal{F} \cong i_{w*}i_w^{-1}\mathcal{F}$, and \[ \Hom(\mathcal{F},\mathcal{G}) \cong \Hom(i_{w*}i_w^{-1}\mathcal{F},\mathcal{G}) \cong \Hom(i_w^{-1}\mathcal{F}, i_w^!\mathcal{G}) = 0. \] Next, let $j_{w+1}: (X \smallsetminus X_w) \to X$ be the open inclusion of the complement of $X_w$. For any complex $\mathcal{F}$, the distinguished triangle \[ i_{w*}i_w^!\mathcal{F} \to \mathcal{F} \to (j_{w+1})_*j_{w+1}^{-1}\mathcal{F} \to \] is one whose first term lies in $D_{\le w}$ and whose last term lies in $D_{\ge w+1}$. Thus, we see that $(\{D_{\le w}\}, \{D_{\ge w}\})_{w \in \mathbb{Z}}$ is a baric structure on $D^b_c(X)$, with baric truncation functors \[ \beta_{\le w} = i_{w*}i_w^! \qquad\text{and}\qquad \beta_{\ge w} = j_{w*}j_w^{-1}. \] It is easy to see that this baric structure is compatible with the standard $t$-structure on $D$. If $\mathcal{F}$ is supported on $X_w$, it is obvious that any truncation of it is as well, so $D_{\le w}$ is stable under $\tau^{\le n}$ and $\tau^{\ge n}$. On the other hand, it is clear from the formulas above that $\beta_{\le w}$ and $\beta_{\ge w}$ are both left $t$-exact. In the associated staggered $t$-structure $({}^s D^{\le 0}, {}^s D^{\ge 0})$, we have $\mathcal{F} \in {}^s D^{\le 0}$ if and only if $h^k(\mathcal{F}) \in D_{\le -k}$, or, in other words, \[ \dim \supp h^k(\mathcal{F}) \le -2k. 
\] The staggered $t$-structure in this case is none other than the perverse $t$-structure of middle perversity. \subsection{Quasi-exceptional sets} Let $\mathfrak{D}$ be a triangulated category. A set of objects $\{\nabla^w\}_{w \in \mathbb{N}}$ in $\mathfrak{D}$ indexed by nonnegative integers is called a \emph{quasi-exceptional set} if the following conditions hold: \begin{enumerate} \item If $v < w$, then $\Hom(\nabla^v, \nabla^w[k]) =0$ for all $k \in \mathbb{Z}$.\label{it:exc-hom} \item For any $w \in \mathbb{N}$, $\Hom(\nabla^w, \nabla^w[k]) = 0$ if $k < 0$, and $\End(\nabla^w)$ is a division ring.\label{it:exc:end} \end{enumerate} For $w \in \mathbb{N}$, let $\mathfrak{D}_{\le w}$ be the full triangulated subcategory of $\mathfrak{D}$ generated by $\nabla^0, \ldots, \nabla^w$, and for an integer $w < 0$, let $\mathfrak{D}_{\le w}$ be the full triangulated subcategory containing only zero objects. (Here, we are following the notation of~\cite{bez:qes}, but this will turn out to be consistent with our notation for baric structures as well.) A quasi-exceptional set is \emph{dualizable} if there is another collection of objects $\{\Delta_w\}_{w \in \mathbb{N}}$ such that \begin{enumerate} \setcounter{enumi}{2} \item If $v > w$, $\Hom(\Delta_v, \nabla^w[k]) = 0$ for all $k \in \mathbb{Z}$.\label{it:dexc-hom} \item For any $w \in \mathbb{N}$, we have $\Delta_w \cong \nabla^w \mod \mathfrak{D}_{\le w-1}$.\label{it:dexc-iso} \end{enumerate} The last condition means that $\Delta_w$ and $\nabla^w$ give rise to isomorphic objects in the quotient category $\mathfrak{D}_{\le w}/\mathfrak{D}_{\le w-1}$. Next, let $\mathfrak{D}_{\ge w}$ be the full triangulated subcategory generated by the objects $\{\nabla^k \mid k \ge w\}$. If $A \in \mathfrak{D}_{\le w}$ and $B \in \mathfrak{D}_{\ge w+1}$, then Axiom~\eqref{it:exc-hom} above implies that $\Hom(A,B) = 0$. 
In addition, by~\cite[Lemma~4(e)]{bez:qes}, each inclusion $\mathfrak{D}_{\le w} \to \mathfrak{D}_{\le w+1}$ admits a right adjoint $\iota_w$. By a straightforward argument, these functors can be used to construct distinguished triangles as in Definition~\ref{defn:baric}\eqref{it:dt}. Thus, $(\{\mathfrak{D}_{\le w}\}, \{\mathfrak{D}_{\ge w}\})_{w \in \mathbb{Z}}$ is a baric structure on $\mathfrak{D}$. It is nondegenerate and bounded by construction. A key result of~\cite{bez:qes} is the construction of a bounded, nondegenerate $t$-structure $(\mathfrak{D}^{\le 0}, \mathfrak{D}^{\ge 0})$ associated to a quasi-exceptional set. This $t$-structure is defined as follows (see~\cite[Proposition~1]{bez:qes}): \begin{align*} \mathfrak{D}^{\le 0} &= \langle \{ \Delta_w[n] \mid w \in \mathbb{N},\ n \ge 0 \} \rangle, \\ \mathfrak{D}^{\ge 0} &= \langle \{ \nabla^w[n] \mid w \in \mathbb{N},\ n \le 0 \} \rangle. \end{align*} Here, the notation $\langle S \rangle$ stands for the smallest strictly full subcategory of $\mathfrak{D}$ that is stable under extensions and contains all objects in the set $S$. We claim that this $t$-structure and the baric structure defined above are compatible. It follows from Axiom~\eqref{it:exc-hom} above that \[ \beta_{\le w}\nabla^v = \begin{cases} 0 & \text{if $w < v$,} \\ \nabla^v & \text{if $w \ge v$,} \end{cases} \qquad\text{and}\qquad \beta_{\ge w}\nabla^v = \begin{cases} 0 & \text{if $w > v$,} \\ \nabla^v & \text{if $w \le v$.} \end{cases} \] This calculation shows that the baric truncation functors preserve $\mathfrak{D}^{\ge 0}$. On the other hand, Axiom~\eqref{it:dexc-hom} implies that $\tau^{\le 0}\nabla^w$ is contained in the subcategory generated by $\Delta_0, \ldots, \Delta_w$, and that subcategory coincides with $\mathfrak{D}_{\le w}$ by Axiom~\eqref{it:dexc-iso}. Thus, $\tau^{\le 0}$ preserves $\mathfrak{D}_{\le w}$, so $\tau^{\ge 0}$ does as well.
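The same calculation also determines the baric truncations of the standard objects. Here is a sketch (our addition): since $\beta_{\ge v}$ kills $\mathfrak{D}_{\le v-1}$, on objects of $\mathfrak{D}_{\le v}$ it factors through the quotient $\mathfrak{D}_{\le v}/\mathfrak{D}_{\le v-1}$, where $\Delta_v$ and $\nabla^v$ become isomorphic by Axiom~(4).

```latex
% Sketch (our addition): \Delta_v lies in \fD_{\le v} and agrees with
% \nabla^v in \fD_{\le v}/\fD_{\le v-1}, so applying \beta_{\ge v} and
% using \beta_{\ge v}\nabla^v = \nabla^v from the display above gives
\beta_{\ge v}\Delta_v \;\cong\; \beta_{\ge v}\nabla^v \;=\; \nabla^v,
% and hence the baric truncation triangle of \Delta_v takes the form
\beta_{\le v-1}\Delta_v \to \Delta_v \to \nabla^v \to.
```

Similarly, $\beta_{\ge w}\Delta_v = 0$ for $w > v$, since $\Delta_v \in \mathfrak{D}_{\le v} \subset \mathfrak{D}_{\le w-1}$.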
Finally, given a nonzero object $X \in \mathfrak{D}$, let $a(X)$ be the smallest integer $n$ such that $X \in \mathfrak{D}^{\ge -n}$, and let $b(X)$ be the smallest integer $w$ such that $X \in \mathfrak{D}_{\le w}$. Note that $b(X) \ge 0$. Let \[ \mu(X) = \begin{cases} \max \{a(X)+1,b(X)\} + 1 & \text{if $X \ne 0$,} \\ 0 & \text{if $X = 0$.} \end{cases} \] Clearly, $\mu$ takes nonnegative integer values, and $\mu(X) = 0$ if and only if $X = 0$. Moreover, if $a(X) = -n$ (which implies $\mu(X) \ge -n+2$), then $a(\tau^{\ge n+1}\beta_{\le -n}X) \le -n-1$ and $b(\tau^{\ge n+1}\beta_{\le -n}X) \le -n$, so $\mu(\tau^{\ge n+1}\beta_{\le-n}X) \le -n+1$. Thus, the conditions of Theorem~\ref{thm:stag-gen} are satisfied, and there is a staggered $t$-structure $({}^s\!\fD^{\le 0}, {}^s\!\fD^{\ge 0})$ on $\mathfrak{D}$. \subsection{Weight truncation for $\ell$-adic mixed constructible sheaves} Let $X$ be a scheme of finite type over a finite field $\mathbb{F}_q$, and let $\ell$ be a fixed prime number distinct from the characteristic of $\mathbb{F}_q$. Let $D = D^b_m(X,\mathbb{Q}_\ell)$ be the bounded derived category of mixed constructible $\mathbb{Q}_\ell$-sheaves on $X$. Let ${}^p\! h^n$ denote the $n$th cohomology functor with respect to the perverse $t$-structure (for the middle perversity) on $D$. Let $D_{\le w}$ (resp.~$D_{\ge w}$) be the full subcategory of $D^b_m(X,\mathbb{Q}_\ell)$ consisting of objects $\mathcal{F}$ such that ${}^p\! h^n(\mathcal{F})$ is of weight $\le w$ (resp.~$\ge w$) for all $n \in \mathbb{Z}$. S.~Morel has shown~\cite[Proposition~4.1.1]{mor} that $(\{D_{\le w}\}, \{D_{\ge w}\})_{w\in\mathbb{Z}}$ is a baric structure on $D^b_m(X, \mathbb{Q}_\ell)$. Since all objects in the heart of the perverse $t$-structure have finite length, we may attach a nonnegative integer $\mu(\mathcal{F})$ to each complex $\mathcal{F}$ by the formula \[ \mu(\mathcal{F}) = \sum_{n \in \mathbb{Z}} (\text{length of ${}^p\! h^n(\mathcal{F})$}).
\] Moreover, by~\cite[Proposition~4.1.3]{mor}, the baric truncation functors are $t$-exact for the perverse $t$-structure. This implies that $\mu$ satisfies the assumptions of Theorem~\ref{thm:stag-gen}: by $t$-exactness, ${}^p\! h^k(\tau^{\ge n+1}\beta_{\le -n}\mathcal{F}) \cong \beta_{\le -n}\,{}^p\! h^k(\mathcal{F})$ is a subobject of ${}^p\! h^k(\mathcal{F})$ for $k \ge n+1$ and vanishes for $k \le n$, so if ${}^p\! h^n(\mathcal{F}) \ne 0$, then $\mu(\tau^{\ge n+1}\beta_{\le -n}\mathcal{F}) < \mu(\mathcal{F})$. The perverse $t$-structure on $D^b_m(X, \mathbb{Q}_\ell)$ can therefore be staggered with respect to Morel's baric structure to obtain a new $t$-structure. The authors are not aware of any previous appearance of this ``staggered-perverse'' $t$-structure on $\ell$-adic mixed constructible sheaves. \subsection{Diagonal complexes} We conclude with an example, due to T.~Ekedahl~\cite{eke}, of a $t$-structure that closely resembles a staggered $t$-structure, although it does not in general arise by staggering with respect to a baric structure. (The authors thank N.~Ramachandran for pointing out this work to them.) Let $\mathfrak{D}$ be a triangulated category with a bounded, nondegenerate $t$-structure $(\mathfrak{D}^{\le 0}, \mathfrak{D}^{\ge 0})$, and as usual, let $\mathfrak{C} = \mathfrak{D}^{\le 0} \cap \mathfrak{D}^{\ge 0}$. Suppose $\{\mathfrak{C}_{\le w}\}_{w \in \mathbb{Z}}$ is an increasing collection of Serre subcategories of $\mathfrak{C}$, and let $\mathfrak{C}_{\ge w} = \{ B \in \mathfrak{C} \mid \text{$\Hom(A,B) = 0$ for all $A \in \mathfrak{C}_{\le w-1}$} \}$. Following Ekedahl, the collection $\{\mathfrak{C}_{\le w}\}$ is called a \emph{radical filtration} of the pair $(\mathfrak{D}, \mathfrak{C})$ if the following axioms hold: \begin{enumerate} \item For each object $A \in \mathfrak{C}$, there exist integers $v, w$ such that $A \in \mathfrak{C}_{\ge v} \cap \mathfrak{C}_{\le w}$. \item If $A \in \mathfrak{C}_{\le w}$ and $B \in \mathfrak{C}_{\ge v}$, then $\Hom^{v-w-1}(A, B) = 0$ in $\mathfrak{D}$.
\end{enumerate} If $(\mathfrak{D}, \mathfrak{C})$ is equipped with a radical filtration, Ekedahl shows that the categories \begin{align*} \tilde \mathfrak{D}^{\le 0} &= \{ A \in \mathfrak{D} \mid \text{$h^k(A) \in \mathfrak{C}_{\le-k}$ for all $k \in \mathbb{Z}$} \}, \\ \tilde \mathfrak{D}^{\ge 0} &= \{ B \in \mathfrak{D} \mid \text{$h^k(B) \in \mathfrak{C}_{\ge-k}$ for all $k \in \mathbb{Z}$} \} \end{align*} constitute a bounded, nondegenerate $t$-structure on $\mathfrak{D}$. This is called the \emph{diagonal $t$-structure}, and the objects in its heart are called \emph{diagonal complexes}. These formulas are, of course, strongly reminiscent of those in Definition~\ref{defn:stag}. Let us comment briefly on the relationship between the two constructions. Given a radical filtration, one could hope to define a baric structure by setting $\mathfrak{D}_{\le w} = \{ A \in \mathfrak{D} \mid h^k(A) \in \mathfrak{C}_{\le w}\text{ for all }k \in \mathbb{Z} \}$. However, the construction of a baric truncation functor turns out to require a stronger Hom-vanishing condition between $\mathfrak{C}_{\le w}$ and $\mathfrak{C}_{\ge w+1}$ than that stated above: one needs something like Lemma~\ref{lem:compat}\eqref{it:gthom}. Conversely, given a baric structure, one could hope to define a radical filtration by setting $\mathfrak{C}_{\le w} = \mathfrak{D}_{\le w} \cap \mathfrak{C}$. This also fails, because a baric structure imposes no higher Hom-vanishing conditions on the right-orthogonal of $\mathfrak{C}_{\le w}$. \section{Baric Structures on Coherent Sheaves, I} \label{sect:baric-coh1} In this section, we will investigate baric structures on derived categories of coherent sheaves. Let $X$ be a scheme of finite type over a noetherian base scheme, and let $G$ be an affine group scheme over the same base, acting on $X$. We adopt the convention that all statements about subschemes are to be understood in the $G$-invariant sense. 
Thus, ``open subscheme'' will always mean ``$G$-stable open subscheme,'' and ``irreducible'' will mean ``not a union of two proper $G$-stable closed subschemes.'' This convention will remain in effect for the remainder of the paper. Let $\cg X$ and $\qg X$ denote the categories of $G$-equivariant coherent and quasicoherent sheaves, respectively, on $X$. One of the headaches of the subject is the need to work with three closely related triangulated categories, which we denote as follows: \begin{enumerate} \item[(1)] $\dgb X$ is the bounded derived category of $\cg X$. \item[(2)] $\dgm X$ is the bounded-above derived category of $\cg X$. \item[(3)] $\dgp X$ is the full subcategory of the bounded-below derived category of $\qg X$ consisting of objects with coherent cohomology sheaves. \end{enumerate} $\dgb X$ will be the focus of our attention, but it will be necessary to work with $\dgm X$ and $\dgp X$ as well, simply because most operations on sheaves take values in one of those categories, even when acting on bounded complexes. \begin{defn} A \emph{baric structure} on $X$ is a baric structure on $\dgb X$ which is compatible with the standard $t$-structure. \end{defn} \begin{rmk} \label{rmks:schemebaric} Implicit in this definition are some finiteness conditions; {\it e.g.}, it is conceivable that there are interesting baric structures on $\dgp X$ that take advantage of the fact that the functors $\beta_{\leq w}$ can take bounded complexes to unbounded complexes. Nevertheless, this is the definition we will work with. \end{rmk} Inspired by parts~\eqref{it:lth} and~\eqref{it:gth} of Lemma~\ref{lem:compat}, we define the following subcategories of $\dgm X$ and $\dgp X$: \begin{align*} \dsml Xw &= \{ \mathcal{F} \in \dgm X \mid \text{$h^k(\mathcal{F}) \in \dsl Xw$ for all $k$} \}, \\ \dspg Xw &= \{ \mathcal{F} \in \dgp X \mid \text{$\beta_{\le w-1}\tau^{\le k}\mathcal{F} \in \dgbg X{k+2}$ for all $k$} \}.
\end{align*} It is unknown whether these categories constitute parts of baric structures on $\dgm X$ or on $\dgp X$. Nevertheless, they will be useful in the sequel, in part because they admit the alternate characterization given in the lemma below. If $Y$ is another scheme endowed with a baric structure, we will, by a minor abuse of terminology, call a functor $\dgm X \to \dgm Y$ \emph{right baryexact} if it takes objects of $\dsml Xw$ to objects of $\dsml Yw$. Similarly, we call a functor $\dgp X \to \dgp Y$ \emph{left baryexact} if it takes objects of $\dspg Xw$ to $\dspg Yw$. \begin{lem}\label{lem:unbdd-orth} \begin{enumerate} \item For $\mathcal{F} \in \dgm X$, we have $\mathcal{F} \in \dsml Xw$ if and only if $\Hom(\mathcal{F},\mathcal{G}) = 0$ for all $\mathcal{G} \in \dsg X{w+1}$. \item For $\mathcal{F} \in \dgp X$, we have $\mathcal{F} \in \dspg Xw$ if and only if $\Hom(\mathcal{G},\mathcal{F}) = 0$ for all $\mathcal{G} \in \dsl X{w-1}$. \end{enumerate} \end{lem} In particular, we see from this lemma that \begin{equation}\label{eqn:pm-bdd} \begin{aligned} \dsml Xw \cap \dgb X &= \dsl Xw, \\ \dspg Xw \cap \dgb X &= \dsg Xw. \end{aligned} \end{equation} \begin{proof} (1)~Suppose $\mathcal{F} \in \dsml Xw$. By Lemma~\ref{lem:compat}\eqref{it:lth}, $\tau^{\ge k}\mathcal{F} \in \dsl Xw$ for all $k$. In particular, given $\mathcal{G} \in \dsg X{w+1}$, let $k$ be such that $\mathcal{G} \in \dgbg Xk$. Then $\Hom(\mathcal{F},\mathcal{G}) \cong \Hom(\tau^{\ge k}\mathcal{F}, \mathcal{G}) = 0$. Conversely, suppose $\mathcal{F} \in \dgm X$ but $\mathcal{F} \notin \dsml Xw$, so that for some $k$, $h^k(\mathcal{F}) \notin \dsl Xw$. Then $\tau^{\ge k}\mathcal{F} \notin \dsl Xw$. Let $\mathcal{G} = \beta_{\ge w+1}\tau^{\ge k}\mathcal{F}$. We then have a nonzero morphism $\tau^{\ge k}\mathcal{F} \to \mathcal{G}$. 
Moreover, since the baric structure on $\dgb X$ is compatible with the standard $t$-structure, we have that $\mathcal{G} \in \dgbg Xk$, so there is a natural isomorphism $\Hom(\tau^{\ge k}\mathcal{F},\mathcal{G}) \cong \Hom(\mathcal{F},\mathcal{G})$. Thus, $\Hom(\mathcal{F},\mathcal{G}) \ne 0$. (2)~Suppose $\mathcal{F} \in \dspg Xw$. Given $\mathcal{G} \in \dsl X{w-1}$, let $k$ be such that $\mathcal{G} \in \dgbl Xk$. Then $\Hom(\mathcal{G},\mathcal{F}) \cong \Hom(\mathcal{G},\tau^{\le k}\mathcal{F}) \cong \Hom(\mathcal{G},\beta_{\le w-1}\tau^{\le k}\mathcal{F}) = 0$. Conversely, if $\mathcal{F} \in \dgp X$ but $\mathcal{F} \notin \dspg Xw$, then for some $k$, $\beta_{\le w-1}\tau^{\le k}\mathcal{F} \notin \dgbg X{k+2}$. Let $\mathcal{G} = \tau^{\le k+1}\beta_{\le w-1}\tau^{\le k}\mathcal{F}$. Then clearly $\mathcal{G} \in \dgbl X{k+1}$ and $\mathcal{G} \in \dsl X{w-1}$, and there is a nonzero morphism $\mathcal{G} \to \beta_{\le w-1}\tau^{\le k}\mathcal{F}$. In particular, the group $\Hom(\mathcal{G}, \beta_{\le w-1}\tau^{\le k}\mathcal{F}) \cong \Hom(\mathcal{G}, \tau^{\le k}\mathcal{F})$ is nonzero. Now, consider the exact sequence \[ \Hom(\mathcal{G}, (\tau^{\ge k+1}\mathcal{F})[-1]) \to \Hom(\mathcal{G}, \tau^{\le k}\mathcal{F}) \to \Hom(\mathcal{G}, \mathcal{F}). \] The first term vanishes because $(\tau^{\ge k+1}\mathcal{F})[-1] \in \dgpg X{k+2}$, so the natural map $\Hom(\mathcal{G}, \tau^{\le k}\mathcal{F}) \to \Hom(\mathcal{G}, \mathcal{F})$ is injective. It follows that $\Hom(\mathcal{G},\mathcal{F}) \ne 0$. \end{proof} \subsection{HLR baric structures} \label{sect:hlr} We do not wish to work with arbitrary baric structures on $\dgb X$; rather, we want them to be well-behaved in relation to the scheme structure on $X$. We have already imposed the condition that the baric structure be compatible with the standard $t$-structure. We may also ask that it give rise to baric structures on subschemes, in the following sense.
\begin{defn}\label{defn:induced} Suppose $X$ is equipped with a baric structure, and let $\kappa: Y \hookrightarrow X$ be a locally closed subscheme. A baric structure on $Y$ is said to be \emph{induced} by the one on $X$ if $L\kappa^*$ is right baryexact and $R\kappa^!$ is left baryexact. \end{defn} The class of ``HLR (hereditary, local, and rigid) baric structures,'' defined below, is particularly well-behaved. For instance, every locally closed subscheme of a scheme with an HLR baric structure admits a unique induced baric structure. (See Theorem~\ref{thm:hlr-induced}.) The remainder of Section~\ref{sect:baric-coh1} is devoted to establishing various properties of HLR baric structures, and the main result of the paper, Theorem~\ref{thm:main}, is a statement about a class of nontrivial HLR baric structures. \begin{defn}\label{defn:hlr} A baric structure on $X$ is said to be \emph{hereditary} if every closed subscheme admits an induced baric structure. A hereditary baric structure on $X$ is said to be \emph{local} if every open subscheme admits an induced baric structure that is also hereditary. Next, a hereditary baric structure on $X$ is \emph{rigid} if for every sequence of closed subschemes $Z \overset{t}{\hookrightarrow} Z_1 \hookrightarrow X$ where $Z_1$ is a nilpotent thickening of $Z$ ({\it i.e.}, $Z_1$ has the same underlying topological space as $Z$), the induced baric structures on $Z$ and $Z_1$ are related as follows: \begin{equation}\label{eqn:rigid} \begin{aligned} \dsl {Z_1}w &= \text{the thick closure of $t_*(\dsl Zw)$,} \\ \dsg {Z_1}w &= \text{the thick closure of $t_*(\dsg Zw)$.} \end{aligned} \end{equation} Finally, a baric structure that is hereditary, local, and rigid is called an \emph{HLR baric structure}. \end{defn} It turns out that the ``local'' and ``rigid'' conditions on an HLR baric structure are redundant: \begin{thm}\label{thm:hlr} Every hereditary baric structure is HLR. 
\end{thm} This theorem will be proved in Section~\ref{sect:hlrproof}. We first require a couple of preliminary lemmas about induced baric structures, proved below. Following that, in Section~\ref{sect:hlrprop}, we will establish a number of useful properties of HLR baric structures. \begin{lem}\label{lem:induced} Let $\schemebaric X$ be a baric structure on $X$, and let $i: Z \hookrightarrow X$ be a closed subscheme. If $Z$ admits an induced baric structure, it is given by \begin{equation}\label{eqn:hered} \begin{aligned} \dsl Zw &= \{ \mathcal{F} \in \dgb Z \mid i_* \mathcal{F} \in \dsl Xw \}, \\ \dsg Zw &= \{ \mathcal{F} \in \dgb Z \mid i_* \mathcal{F} \in \dsg Xw \}. \end{aligned} \end{equation} Conversely, if the categories~\eqref{eqn:hered} constitute a baric structure on $Z$, then that baric structure is induced from the one on $X$. If an open subscheme $j: U \hookrightarrow X$ admits an induced baric structure, it is given by \begin{equation}\label{eqn:local} \begin{aligned} \dsl Uw &= \{ \mathcal{F} \in \dgb U \mid \text{$\mathcal{F} \cong j^*\mathcal{F}_1$ for some $\mathcal{F}_1 \in \dsl Xw$} \}, \\ \dsg Uw &= \{ \mathcal{F} \in \dgb U \mid \text{$\mathcal{F} \cong j^*\mathcal{F}_1$ for some $\mathcal{F}_1 \in \dsg Xw$} \}. \end{aligned} \end{equation} Conversely, if the categories~\eqref{eqn:local} constitute a baric structure on $U$, then that baric structure is induced from the one on $X$. \end{lem} \begin{proof} Let $\schemebaric Z$ be an induced baric structure on a closed subscheme $i: Z \hookrightarrow X$. If $\mathcal{F} \in \dsl Zw$, then for all $\mathcal{G} \in \dspg X{w+1}$, we have (by Lemma~\ref{lem:unbdd-orth}) that $\Hom(\mathcal{F}, Ri^!\mathcal{G}) = 0$, and therefore $\Hom(i_*\mathcal{F}, \mathcal{G}) = 0$. The latter implies that $i_*\mathcal{F} \in \dsl Xw$. 
Similarly, if $\mathcal{F} \in \dsg Zw$, then $\Hom(Li^*\mathcal{G}, \mathcal{F}) = \Hom(\mathcal{G}, i_*\mathcal{F}) = 0$ for all $\mathcal{G} \in \dsml X{w-1}$, so $i_*\mathcal{F} \in \dsg Xw$. For the opposite inclusion, given an object $\mathcal{F} \in \dgb Z$, form the distinguished triangle \[ i_*\beta_{\le w}\mathcal{F} \to i_*\mathcal{F} \to i_*\beta_{\ge w+1}\mathcal{F} \to \] in $\dgb X$. By the reasoning above, we have $i_*\beta_{\le w}\mathcal{F} \in \dsl Xw$ and $i_*\beta_{\ge w+1}\mathcal{F} \in \dsg X{w+1}$, so the first and last terms above must be the baric truncations of $i_*\mathcal{F}$: \[ i_*\beta_{\le w}\mathcal{F} \cong \beta_{\le w}i_*\mathcal{F} \qquad\text{and}\qquad i_*\beta_{\ge w+1}\mathcal{F} \cong \beta_{\ge w+1}i_*\mathcal{F}. \] Thus, if $i_*\mathcal{F} \in \dsl Xw$, then $\beta_{\ge w+1}i_*\mathcal{F} = i_*\beta_{\ge w+1}\mathcal{F} = 0$. Since $i_*$ is faithful, this implies that $\beta_{\ge w+1}\mathcal{F} = 0$, so that $\mathcal{F} \in \dsl Zw$. The same argument shows that $i_*\mathcal{F} \in \dsg Xw$ implies that $\mathcal{F} \in \dsg Zw$. Next, assume the categories~\eqref{eqn:hered} constitute a baric structure on $Z$. We will show that this baric structure is induced from the one on $X$. If $\mathcal{F} \in \dsml Xw$, then $\Hom(\mathcal{F}, i_*\mathcal{G}) = 0$ for all $\mathcal{G} \in \dsg Z{w+1}$ by Lemma~\ref{lem:unbdd-orth}, so $\Hom(Li^*\mathcal{F},\mathcal{G}) = 0$, and hence $Li^*\mathcal{F} \in \dsml Zw$. Similarly, if $\mathcal{F} \in \dspg Xw$, then $\Hom(i_*\mathcal{G},\mathcal{F}) = \Hom(\mathcal{G}, Ri^!\mathcal{F}) = 0$ for all $\mathcal{G} \in \dsl Z{w-1}$, so $Ri^!\mathcal{F} \in \dspg Zw$. Thus, $Li^*$ is right baryexact, and $Ri^!$ is left baryexact, as desired. We turn now to open subschemes. Suppose $\schemebaric U$ is an induced baric structure on an open subscheme $j: U \hookrightarrow X$.
In view of the equalities~\eqref{eqn:pm-bdd}, the definition of ``induced'' implies that $j^*: \dgb X \to \dgb U$ is baryexact. In other words, if $\mathcal{F}_1 \in \dsl Xw$, then $j^*\mathcal{F}_1 \in \dsl Uw$, and if $\mathcal{F}_1 \in \dsg Xw$, then $j^*\mathcal{F}_1 \in \dsg Uw$. Conversely, if $\mathcal{F} \in \dsl Uw$, then there exists some object $\mathcal{F}' \in \dgb X$ such that $j^*\mathcal{F}' \cong \mathcal{F}$. Form the distinguished triangle $\beta_{\le w}\mathcal{F}' \to \mathcal{F}' \to \beta_{\ge w+1}\mathcal{F}' \to$, and apply $j^*$ to it. We know that $j^*\beta_{\le w}\mathcal{F}' \in \dsl Uw$ and that $j^*\beta_{\ge w+1}\mathcal{F}' \in \dsg U{w+1}$. Since $j^*\mathcal{F}' \cong \mathcal{F}$, we see from the triangle \[ j^*\beta_{\le w}\mathcal{F}' \to \mathcal{F} \to j^*\beta_{\ge w+1}\mathcal{F}' \to \] that $j^*\beta_{\ge w+1}\mathcal{F}' \cong \beta_{\ge w+1}\mathcal{F} = 0$, and hence that $\mathcal{F} \cong j^*\beta_{\le w}\mathcal{F}'$. Thus, setting $\mathcal{F}_1 = \beta_{\le w}\mathcal{F}'$, we have found an $\mathcal{F}_1 \in \dsl Xw$ such that $j^*\mathcal{F}_1 \cong \mathcal{F}$. The argument for $\dsg Uw$ is similar. Finally, assume the categories~\eqref{eqn:local} constitute a baric structure on $U$. We must show that this baric structure is induced. Clearly, $j^*$ is baryexact as a functor of bounded derived categories $\dgb X \to \dgb U$. Since $j^*$ is also exact, it commutes with truncation and cohomology functors, and it takes $\dgbg Xw$ to $\dgbg Uw$. It follows from these observations that it takes $\dsml Xw$ to $\dsml Uw$ and $\dspg Xw$ to $\dspg Uw$. \end{proof} \begin{lem}\label{lem:hlr-baric-res} Let $j: U \hookrightarrow X$ be the inclusion of an open subscheme, and let $i: Z \hookrightarrow X$ be the inclusion of a closed subscheme. Assume that $U$ and $Z$ are equipped with baric structures induced from one on $X$. Then: \begin{enumerate} \item $j^*$ takes $\dsml Xw$ to $\dsml Uw$ and $\dspg Xw$ to $\dspg Uw$. 
\item $Li^*$ takes $\dsml Xw$ to $\dsml Zw$. \item $Ri^!$ takes $\dspg Xw$ to $\dspg Zw$. \item $i_*$ takes $\dsml Zw$ to $\dsml Xw$ and $\dspg Zw$ to $\dspg Xw$. \end{enumerate} \end{lem} \begin{proof} Parts~(1), (2), and~(3) hold by definition. (4)~We saw in the proof of Lemma~\ref{lem:induced} that as a functor of bounded derived categories $\dgb Z \to \dgb X$, $i_*$ is baryexact. Since $i_*$ is also an exact functor, we have $h^k(i_*\mathcal{F}) \cong i_*h^k(\mathcal{F})$ for any $\mathcal{F} \in \dgm Z$. Thus, if $\mathcal{F} \in \dsml Zw$, we have $h^k(i_*\mathcal{F}) \in \dsl Xw$ for all $k$; in other words, $i_*\mathcal{F} \in \dsml Xw$. On the other hand, suppose $\mathcal{F} \in \dspg Zw$. Since $i_*$ is exact and baryexact on $\dgb Z$, we have $i_*\beta_{\le w-1}\tau^{\le k}\mathcal{F} \cong \beta_{\le w-1}\tau^{\le k}i_*\mathcal{F}$. Moreover, the fact that $\beta_{\le w-1}\tau^{\le k}\mathcal{F} \in \dgbg Z{k+2}$ for all $k$ implies that $i_*\beta_{\le w-1}\tau^{\le k}\mathcal{F} \in \dgbg X{k+2}$ for all $k$. Thus, $i_*\mathcal{F} \in \dspg Xw$. \end{proof} \begin{lem}\label{lem:closed-hered} Let $\schemebaric X$ be a hereditary baric structure on $X$, and let $i: Z \hookrightarrow X$ be the inclusion of a closed subscheme. The induced baric structure on $Z$ is also hereditary. \end{lem} \begin{proof} Let $\kappa: Y \hookrightarrow Z$ be a closed subscheme of $Z$. We must show that $Y$ admits a baric structure induced from the one on $Z$. In fact, we claim that the baric structure on $Y$ induced from the one on $X$ (via $i \circ \kappa: Y \hookrightarrow X$) has the desired property. Suppose $\mathcal{F} \in \dsml Zw$. If $L\kappa^*\mathcal{F} \notin \dsml Yw$, then there is some $\mathcal{G} \in \dsg Y{w+1}$ such that $\Hom(L\kappa^*\mathcal{F},\mathcal{G}) \ne 0$. Then $\Hom(\mathcal{F}, \kappa_*\mathcal{G}) \ne 0$ and, because $i_*$ is faithful, $\Hom(i_*\mathcal{F}, i_*\kappa_*\mathcal{G}) \ne 0$.
But this is impossible, because according to Lemma~\ref{lem:hlr-baric-res}, $i_*\mathcal{F} \in \dsml Xw$ and $(i\circ \kappa)_*\mathcal{G} \in \dsg X{w+1}$. Thus, $L\kappa^*\mathcal{F} \in \dsml Yw$. Similarly, if $\mathcal{F} \in \dspg Zw$, a consideration of $\Hom(\mathcal{G}, R\kappa^!\mathcal{F})$ and $\Hom(i_*\kappa_*\mathcal{G}, i_*\mathcal{F})$ for $\mathcal{G} \in \dsl Y{w-1}$ shows that $R\kappa^!\mathcal{F} \in \dspg Yw$. Thus, $L\kappa^*$ is right baryexact and $R\kappa^!$ is left baryexact, so the baric structure on $Y$ induced from the one on $X$ is also induced from the one on $Z$. The induced baric structure on $Z$ is therefore hereditary. \end{proof} \subsection{Properties of HLR baric structures} \label{sect:hlrprop} In this section, we prove three useful results about HLR baric structures. First, we prove that the HLR property is inherited by induced baric structures on subschemes. Next, we prove an additional rigidity property for nilpotent thickenings of closed subschemes. Finally, we prove a ``gluing theorem'' that states that an HLR baric structure is determined by the baric structures it induces on a closed subscheme and the complementary open subscheme. It should be noted that the proofs of these results depend on Theorem~\ref{thm:hlr}. \begin{thm}\label{thm:hlr-induced} Suppose $X$ is endowed with an HLR baric structure. Every locally closed subscheme $\kappa: Y \hookrightarrow X$ admits a unique induced baric structure. Moreover, this baric structure is also HLR. \end{thm} \begin{proof} We have already seen the uniqueness of the induced baric structure in the case of open or closed subschemes, in Lemma~\ref{lem:induced}. For a general locally closed subscheme, let us factor the inclusion map $\kappa: Y \to X$ as a closed imbedding $i: Y \hookrightarrow U$ followed by an open imbedding $j: U \hookrightarrow X$. 
Then $U$ acquires a unique induced hereditary baric structure from the baric structure on $X$, and it in turn induces a unique baric structure on its closed subscheme $Y$. This baric structure is also induced from the one on $X$: clearly, $L\kappa^* = Li^* \circ j^*$ is right baryexact, and $R\kappa^! = Ri^! \circ j^*$ is left baryexact. To show that this is the unique baric structure on $Y$ induced from the one on $X$, we must show that the baryexactness assumptions on $L\kappa^*$ and $R\kappa^!$ imply the same conditions on $Li^*$ and $Ri^!$. (It then follows that any baric structure induced from the one on $X$ is actually induced from the one on $U$.) Suppose $\mathcal{F} \in \dsml Uw$, and consider a distinguished triangle of the form \[ Li^*\tau^{\le k-1}\mathcal{F} \to Li^*\mathcal{F} \to Li^*\tau^{\ge k}\mathcal{F} \to. \] Since $Li^*\tau^{\le k-1}\mathcal{F} \in \dgml Y{k-1}$, we see that $h^k(Li^*\mathcal{F}) \cong h^k(Li^*\tau^{\ge k}\mathcal{F})$. Now, $\tau^{\ge k}\mathcal{F}$ is an object in $\dsl Uw$, so there exists an object $\mathcal{F}_1 \in \dsl Xw$ such that $j^*\mathcal{F}_1 \cong \tau^{\ge k}\mathcal{F}$. By assumption, $L\kappa^*\mathcal{F}_1 \in \dsml Yw$. But $L\kappa^*\mathcal{F}_1 \cong Li^*\tau^{\ge k}\mathcal{F}$, so we conclude that $h^k(Li^*\tau^{\ge k}\mathcal{F}) \cong h^k(Li^*\mathcal{F}) \in \dsl Yw$. Thus, $Li^*\mathcal{F} \in \dsml Yw$. On the other hand, suppose that $\mathcal{F} \in \dspg Uw$, and consider a distinguished triangle of the form \[ Ri^!\tau^{\le k}\mathcal{F} \to Ri^!\mathcal{F} \to Ri^!\tau^{\ge k+1}\mathcal{F} \to. \] Since $Ri^!\tau^{\ge k+1}\mathcal{F} \in \dgpg Y{k+1}$, we see that $\tau^{\le k}Ri^!\mathcal{F} \cong \tau^{\le k}Ri^!\tau^{\le k}\mathcal{F}$. Next, consider the distinguished triangle \[ Ri^!\beta_{\le w-1}\tau^{\le k}\mathcal{F} \to Ri^!\tau^{\le k}\mathcal{F} \to Ri^! \beta_{\ge w}\tau^{\le k}\mathcal{F} \to.
\] By assumption, $\beta_{\le w-1}\tau^{\le k}\mathcal{F} \in \dgbg U{k+2}$, so $Ri^!\beta_{\le w-1}\tau^{\le k}\mathcal{F} \in \dgpg Y{k+2}$. It follows that $\tau^{\le k}Ri^!\tau^{\le k}\mathcal{F} \cong \tau^{\le k}Ri^!\beta_{\ge w}\tau^{\le k}\mathcal{F}$. Now, $\beta_{\ge w}\tau^{\le k}\mathcal{F} \in \dsg Uw$, so there is some $\mathcal{F}_1 \in \dsg Xw$ such that $j^*\mathcal{F}_1 \cong \beta_{\ge w}\tau^{\le k}\mathcal{F}$. Since $R\kappa^!\mathcal{F}_1$ belongs to $\dspg Yw$ by assumption, we have $\beta_{\le w-1}\tau^{\le k}R\kappa^!\mathcal{F}_1 \in \dgbg Y{k+2}$. But we also have $R\kappa^!\mathcal{F}_1 \cong Ri^!\beta_{\ge w}\tau^{\le k}\mathcal{F}$, and from the chain of isomorphisms \[ \tau^{\le k}Ri^!\mathcal{F} \cong \tau^{\le k}Ri^!\tau^{\le k}\mathcal{F} \cong \tau^{\le k}Ri^!\beta_{\ge w}\tau^{\le k}\mathcal{F} \cong \tau^{\le k}R\kappa^!\mathcal{F}_1, \] we see that $\beta_{\le w-1}\tau^{\le k}Ri^!\mathcal{F} \in \dgbg Y{k+2}$. Thus, $Ri^!\mathcal{F} \in \dspg Yw$. We now conclude that any baric structure on $Y$ induced from the one on $X$ is also induced from the one on $U$, and is therefore uniquely determined. To show that the induced baric structure on a locally closed subscheme is HLR, it suffices, by Theorem~\ref{thm:hlr}, to show that it is hereditary. In the case of a closed subscheme, this was done in Lemma~\ref{lem:closed-hered}, and in the case of an open subscheme, there is nothing to prove: this property is part of the definition of ``local.'' The assertion then follows for a general locally closed subscheme, since, by construction, the induced baric structure on such a subscheme is obtained by first passing to an open subscheme, and then to a closed subscheme of that. \end{proof} Next, we turn to nilpotent thickenings of a closed subscheme. 
\begin{prop}\label{prop:rigid} Suppose $X$ is endowed with an HLR baric structure, and let $Z \overset{t}{\hookrightarrow} Z_1 \hookrightarrow X$ be a sequence of closed subschemes of $X$ with the same underlying topological space. Then: \begin{enumerate} \item For $\mathcal{F} \in \dgm {Z_1}$, $\mathcal{F} \in \dsml {Z_1}w$ if and only if $Lt^*\mathcal{F} \in \dsml Zw$. \item For $\mathcal{F} \in \dgp {Z_1}$, $\mathcal{F} \in \dspg {Z_1}w$ if and only if $Rt^!\mathcal{F} \in \dspg Zw$. \end{enumerate} \end{prop} \begin{proof} If $\mathcal{F} \in \dsml {Z_1}w$, it is obvious that $Lt^*\mathcal{F} \in \dsml Zw$, since the baric structure on $Z$ is induced from that on $Z_1$. Conversely, suppose $\mathcal{F} \in \dgm {Z_1}$ and $Lt^*\mathcal{F} \in \dsml Zw$. Then $\Hom(\mathcal{F}, t_*\mathcal{G}) \cong \Hom(Lt^*\mathcal{F},\mathcal{G}) = 0$ for all $\mathcal{G} \in \dsg Z{w+1}$. But by the definition of ``rigid,'' $\dsg {Z_1}{w+1}$ is generated by objects of the form $t_*\mathcal{G}$ with $\mathcal{G} \in \dsg Z{w+1}$, so it follows that $\Hom(\mathcal{F},\mathcal{G}') = 0$ for all $\mathcal{G}' \in \dsg {Z_1}{w+1}$, and hence that $\mathcal{F} \in \dsml {Z_1}w$. The proof of part~(2) is entirely analogous and will be omitted. \end{proof} Finally, we prove a ``gluing theorem'' for HLR baric structures. \begin{thm} Suppose $X$ is endowed with an HLR baric structure. Let $i:Z \hookrightarrow X$ be a closed subscheme of $X$, and let $j:U \hookrightarrow X$ be its open complement. Endow $U$ and $Z$ with the baric structures induced from that on $X$. Then we have \begin{align*} \dsl Xw &= \{ \mathcal{F} \in \dgb X \mid \text{$j^*\mathcal{F} \in \dsl Uw$ and $Li^*\mathcal{F} \in \dsml Zw$} \}, \\ \dsg Xw &= \{ \mathcal{F} \in \dgb X \mid \text{$j^*\mathcal{F} \in \dsg Uw$ and $Ri^!\mathcal{F} \in \dspg Zw$} \}. \end{align*} In particular, there is a unique HLR baric structure on $X$ which induces the baric structures $\schemebaric U$ and $\schemebaric Z$ on $U$ and $Z$. 
\end{thm} \begin{proof} If $\mathcal{F} \in \dsl Xw$, then $j^*\mathcal{F} \in \dsl Uw$ and $Li^*\mathcal{F} \in \dsml Zw$ by the definition of the induced baric structure. For the other direction, suppose that $j^* \mathcal{F} \in \dsl Uw$ and $Li^* \mathcal{F} \in \dsml Zw$. We will prove that $\mathcal{F} \in \dsl Xw$ by showing that $\Hom(\mathcal{F},\mathcal{G}) = 0$ for all $\mathcal{G} \in \dsg X{w+1}$. Fix $\mathcal{G} \in \dsg X{w+1}$. We have an exact sequence \[ \lim_{\substack{\to \\ {Z_1}}} \Hom(i_{Z_1*}Li_{Z_1}^*\mathcal{F},\mathcal{G}) \to \Hom(\mathcal{F},\mathcal{G}) \to \Hom(j^*\mathcal{F},j^*\mathcal{G}), \] where the limit runs over nilpotent thickenings of $Z$. (See, for instance,~\cite[Proposition~2 and Lemma~3(a)]{bez:pcs} for an explanation of this exact sequence.) We have $j^* \mathcal{F} \in \dsl Uw$ and $j^*\mathcal{G} \in \dsg U{w+1}$, and by Lemma~\ref{lem:hlr-baric-res}, we have $i_{Z_1 *} Li_{Z_1}^* \mathcal{F} \in \dsml Xw$, so the first and third terms vanish. We conclude that $\Hom(\mathcal{F},\mathcal{G})$ also vanishes. The argument for $\dsg Xw$ is similar. \end{proof} \subsection{Proof of Theorem~\ref{thm:hlr}} \label{sect:hlrproof} In this section, we will prove that hereditary baric structures are automatically also local and rigid. We begin with a result about baric truncation functors with respect to a hereditary baric structure. If $X$ is endowed with a hereditary baric structure, and $\mathcal{F} \in \dgb X$ is actually supported on some closed subscheme $i: Z \hookrightarrow X$, then the baric truncations of $\mathcal{F}$ are obtained by taking baric truncations in the induced baric structure on $Z$, and then pushing them forward by $i_*$. In other words, hereditary baric structures have the property that baric truncation functors preserve support. More precisely: \begin{prop} \label{prop:settheorysupport} Let $\schemebaric X$ be a hereditary baric structure on $X$.
Then \begin{enumerate} \item If $\mathcal{F} \in \dgb X$ has set-theoretic support on a closed set $Z \subset X$, then so do $\beta_{\leq w} \mathcal{F}$ and $\beta_{\geq w} \mathcal{F}$. \item If a morphism $u:\mathcal{F} \to \mathcal{G}$ in $\dgb X$ has set-theoretic support on $Z$, in the sense that $u \vert_{X \smallsetminus Z} = 0$, then so do $\beta_{\leq w}(u)$ and $\beta_{\geq w}(u)$. \end{enumerate} \end{prop} \begin{proof} If $\mathcal{F}$ is set-theoretically supported on $Z$, then there is a closed subscheme $i:Z_1 \hookrightarrow X$ of $X$, whose underlying closed set is $Z$, such that $\mathcal{F} \cong i_* \mathcal{F}'$ for some $\mathcal{F}' \in \dgb {Z_1}$. Form the distinguished triangle \[ \beta_{\leq w} \mathcal{F}' \to \mathcal{F}' \to \beta_{\geq w+1} \mathcal{F}' \to. \] By Lemma~\ref{lem:induced}, we have that $i_* \beta_{\leq w}\mathcal{F}' \in \dsl Xw$ and $i_* \beta_{\geq w+1}\mathcal{F}' \in \dsg X{w+1}$. Since we have a distinguished triangle \[ i_* \beta_{\leq w} \mathcal{F}' \to \mathcal{F} \to i_* \beta_{\geq w+1} \mathcal{F}'\to, \] we must have $i_* \beta_{\leq w} \mathcal{F}' \cong \beta_{\leq w} \mathcal{F}$ and $i_* \beta_{\geq w+1} \mathcal{F}' \cong \beta_{\geq w+1} \mathcal{F}$. In particular these objects are set-theoretically supported on $Z$, proving the first assertion. To prove the second assertion, consider the exact sequence \[ \lim_{\substack{\to \\ Z'}} \Hom(\mathcal{F}, i_{Z'*}Ri^!_{Z'}\mathcal{G}) \to \Hom(\mathcal{F},\mathcal{G}) \to \Hom(\mathcal{F}|_{X\smallsetminus Z},\mathcal{G}|_{X\smallsetminus Z}), \] where $i_{Z'}: Z' \hookrightarrow X$ ranges over all closed subscheme structures on $Z$. By assumption, $u \in \Hom(\mathcal{F},\mathcal{G})$ vanishes upon restriction to $X \smallsetminus Z$, so we see from the exact sequence above that it must factor through $i_{Z'*}Ri^!_{Z'}\mathcal{G} \to \mathcal{G}$ for some closed subscheme structure $i_{Z'}: Z' \hookrightarrow X$ on $Z$.
Now, $i_{Z'*}Ri^!_{Z'}\mathcal{G}$ is in general an object of $\dgp X$, but since $\mathcal{F}$ lies in $\dgb X$, any morphism $\mathcal{F} \to i_{Z'*}Ri^!_{Z'}\mathcal{G}$ factors through $\tau^{\le n}i_{Z'*}Ri^!_{Z'}\mathcal{G}$ for sufficiently large $n$. It follows that $\beta_{\le w}(u)$ and $\beta_{\ge w}(u)$ factor through $\beta_{\le w}\tau^{\le n}i_{Z'*}Ri^!_{Z'}\mathcal{G}$ and $\beta_{\ge w}\tau^{\le n}i_{Z'*}Ri^!_{Z'}\mathcal{G}$, respectively. These objects have set-theoretic support on $Z$ by the first part of the proposition, so $\beta_{\le w}(u)$ and $\beta_{\ge w}(u)$ have set-theoretic support on $Z$ as well, as desired. \end{proof} We may use this fact to prove the following: \begin{thm}\label{thm:openlocal} Every hereditary baric structure is local. \end{thm} We will prove this theorem over the course of the following three propositions. Recall from Lemma~\ref{lem:induced} that in a local baric structure, the induced baric structures on open subschemes necessarily have the form given in the proposition below. \begin{prop}\label{prop:hered-open} Let $\schemebaric X$ be a hereditary baric structure on $X$, and let $U$ be an open subscheme of $X$. For any $w \in \mathbb{Z}$, define full subcategories of $\dgb U$ as follows: \begin{align*} \dsl Uw &= \{ \mathcal{F} \in \dgb U \mid \text{$\mathcal{F} \cong j^*\mathcal{F}_1$ for some $\mathcal{F}_1 \in \dsl Xw$} \}, \\ \dsg Uw &= \{ \mathcal{F} \in \dgb U \mid \text{$\mathcal{F} \cong j^*\mathcal{F}_1$ for some $\mathcal{F}_1 \in \dsg Xw$} \}. \end{align*} Then $\dsl Uw$ and $\dsg Uw$ are thick subcategories of $\dgb U$. \end{prop} \begin{proof} Suppose that $\mathcal{F}$ and $\mathcal{G}$ belong to $\dsl Uw$, so that there exist $\mathcal{F}_1$ and $\mathcal{G}_1$ in $\dsl Xw$ with $\mathcal{F}_1 \vert_U \cong \mathcal{F}$ and $\mathcal{G}_1 \vert_U \cong \mathcal{G}$.
Since $\dgb U$ is a localization of $\dgb X$, we may find for every morphism $u:\mathcal{F} \to \mathcal{G}$ an object $\mathcal{G}_2 \in \dgb X$ and a diagram $\mathcal{F}_1 \to \mathcal{G}_2 \leftarrow \mathcal{G}_1$ such that $(\mathcal{G}_2 \leftarrow \mathcal{G}_1)\vert_U$ is an isomorphism, and the composition \[ \mathcal{F} \cong \mathcal{F}_1 \vert_U \to \mathcal{G}_2 \vert_U \cong \mathcal{G}_1 \vert_U \cong \mathcal{G} \] coincides with $u$. We claim that the diagram \[ \beta_{\leq w} \mathcal{F}_1 \to \beta_{\leq w} \mathcal{G}_2 \leftarrow \beta_{\leq w} \mathcal{G}_1 \] has the same property. In that case, the cone on the composition $\mathcal{F}_1 \cong \beta_{\leq w} \mathcal{F}_1 \to \beta_{\leq w} \mathcal{G}_2$ belongs to $\dsl Xw$, which shows that the cone on $u:\mathcal{F} \to \mathcal{G}$ belongs to $\dsl Uw$. To prove the claim, note that the cone on the map $\mathcal{G}_1 \to \mathcal{G}_2$ is set-theoretically supported on the closed set $X \smallsetminus U$, and since the baric structure $\schemebaric X$ is hereditary, the same must be true for the cone on $\beta_{\leq w} \mathcal{G}_1 \to \beta_{\leq w} \mathcal{G}_2$; in particular, the restriction of the latter map to $U$ is an isomorphism. We have shown that $\dsl Uw \subset \dgb U$ is a triangulated subcategory. To show that it is thick, we must show that it is also closed under direct summands; that is, that if $\mathcal{F} \oplus \mathcal{G} \in \dsl Uw$, then $\mathcal{F}$ and $\mathcal{G}$ also belong to $\dsl Uw$. Thus, suppose that $\mathcal{F} \oplus \mathcal{G}$ belongs to $\dsl Uw$.
Since $\dgb U$ is a localization of $\dgb X$, we may find a triangle $$\mathcal{F}_1 \to \mathcal{H} \to \mathcal{G}_1 \to$$ with $\mathcal{H} \in \dsl Xw$, whose restriction to $U$ is isomorphic to the triangle $$\mathcal{F} \to \mathcal{F} \oplus \mathcal{G} \to \mathcal{G} \to$$ In particular, the map $\mathcal{G}_1 \to \mathcal{F}_1[1]$ is set-theoretically supported on $X \smallsetminus U$, so by Proposition~\ref{prop:settheorysupport} the same must be true of its baric truncations $\beta_{\leq w} \mathcal{G}_1 \to (\beta_{\leq w} \mathcal{F}_1)[1]$ and $\beta_{\geq w+1} \mathcal{G}_1 \to (\beta_{\geq w+1} \mathcal{F}_1)[1]$. From the diagram $$ \xymatrix{ \beta_{\leq w} \mathcal{F}_1 \ar[r] \ar[d] & \mathcal{H} \ar[r] \ar[d] & \beta_{\leq w} \mathcal{G}_1 \ar[r] \ar[d]& \\ \mathcal{F}_1 \ar[r] \ar[d] & \mathcal{H} \ar[r] \ar[d] & \mathcal{G}_1 \ar[r] \ar[d]& \\ \beta_{\geq w+1} \mathcal{F}_1 \ar[r] & 0 \ar[r]& \beta_{\geq w+1} \mathcal{G}_1 \ar[r] & \\ } $$ whose rows and columns are distinguished triangles, we see that the connecting map $\beta_{\geq w+1} \mathcal{G}_1 \to (\beta_{\geq w+1} \mathcal{F}_1)[1]$ is an isomorphism. But since this morphism has set-theoretic support on $X \smallsetminus U$, the objects $\beta_{\geq w+1} \mathcal{F}_1$ and $\beta_{\geq w+1} \mathcal{G}_1$ must have set-theoretic support on $X \smallsetminus U$, which implies that there are isomorphisms $\beta_{\leq w} \mathcal{F}_1 \vert_U \cong \mathcal{F}$ and $\beta_{\leq w} \mathcal{G}_1 \vert_U \cong \mathcal{G}$. Thus, $\mathcal{F}$ and $\mathcal{G}$ belong to $\dsl Uw$. A similar proof shows that the subcategories $\dsg Uw$ are thick. \end{proof} \begin{prop} Let $\schemebaric X$ be a hereditary baric structure on $X$, let $U$ be an open subscheme of $X$, and let $\schemebaric U$ be as in Proposition~\ref{prop:hered-open}. Then $\schemebaric U$ is a baric structure on $\dgb U$, compatible with the standard $t$-structure. \end{prop} \begin{proof} It is clear that $\dsl Uw \subset \dsl U{w+1}$ and $\dsg Uw \supset \dsg U{w+1}$ and that $\dgb U = \dsl Uw * \dsg U{w+1}$.
If $\mathcal{F} \in \dsl Uw$ and $\mathcal{G} \in \dsg U{w+1}$, then we have an exact sequence \[ \Hom(\mathcal{F}_1,\mathcal{G}_1) \to \Hom(\mathcal{F},\mathcal{G}) \to \varinjlim_{i:Z \hookrightarrow X} \Hom(i_*Li^* \mathcal{F}_1, \mathcal{G}_1[1]) \to \] where $\mathcal{F}_1$ is an extension of $\mathcal{F}$ to $\dsl Xw$, $\mathcal{G}_1$ is an extension of $\mathcal{G}$ to $\dsg X{w+1}$, and $i:Z \hookrightarrow X$ runs over all subscheme structures on $X \smallsetminus U$. The first term above vanishes automatically, and each of the terms $\Hom(i_* Li^* \mathcal{F}_1,\mathcal{G}_1[1])$ vanishes because, by Lemma~\ref{lem:hlr-baric-res}, $i_*Li^*\mathcal{F}_1 \in \dsml Xw$. Thus, $\Hom(\mathcal{F},\mathcal{G}) = 0$ and $\schemebaric U$ is a baric structure on $\dgb U$. By assumption, the baric structure $\schemebaric X$ is compatible with the standard $t$-structure on $\dgb X$. Thus, if $\mathcal{F}_1$ belongs to $\dsl Xw$, then so do $\tau^{\leq n} \mathcal{F}_1$ and $\tau^{\geq n} \mathcal{F}_1$. The objects $\mathcal{F}_1 \vert_U$, $(\tau^{\leq n} \mathcal{F}_1) \vert_U \cong \tau^{\leq n} (\mathcal{F}_1 \vert_U)$ and $(\tau^{\geq n} \mathcal{F}_1)\vert_U \cong \tau^{\geq n} (\mathcal{F}_1 \vert_U)$ therefore all belong to $\dsl Uw$. Similarly, we have $(\beta_{\leq w} \mathcal{F}_1)\vert_U \cong \beta_{\leq w} (\mathcal{F}_1\vert_U)$ and $(\beta_{\geq w} \mathcal{F}_1)\vert_U \cong \beta_{\geq w} (\mathcal{F}_1 \vert_U)$, so that the baric truncation functors preserve $\dgb U^{\geq 0}$. Thus the baric structure $\schemebaric U$ is compatible with the standard $t$-structure on $\dgb U$. \end{proof} \begin{prop} Let $\schemebaric X$ be a hereditary baric structure on $X$, and let $U$ be an open subscheme of $X$. Then the collection of categories $\schemebaric U$ defined in Proposition~\ref{prop:hered-open} constitutes a hereditary baric structure on $U$.
\end{prop} \begin{proof} Using Lemma~\ref{lem:induced} and the previous proposition, we know that the baric structure $\schemebaric U$ is induced from the one on $X$. It remains only to show that this baric structure is hereditary. Let $i:Y \hookrightarrow U$ be a closed subscheme of $U$. By Lemma~\ref{lem:induced}, we must prove that the following categories constitute a baric structure on $Y$: \begin{align*} \dsl Yw &= \{ \mathcal{F} \in \dgb Y \mid \text{$i_*\mathcal{F} \cong \mathcal{F}_1|_U$ for some $\mathcal{F}_1 \in \dsl Xw$} \}, \\ \dsg Yw &= \{ \mathcal{F} \in \dgb Y \mid \text{$i_*\mathcal{F} \cong \mathcal{F}_1|_U$ for some $\mathcal{F}_1 \in \dsg Xw$} \}. \end{align*} Let $\overline{Y}$ be the closure of $Y$ in $X$, and let $i_1: \overline Y \hookrightarrow X$ be the inclusion map, so that we have a commutative square of inclusions \[ \xymatrix@=12pt{ Y \ar@{^{(}->}[r]^i \ar@{^{(}->}[d] & U \ar@{^{(}->}[d] \\ \overline{Y} \ar@{^{(}->}[r]^{i_1} & X} \] By definition, the hereditary baric structure on $X$ induces a baric structure on $\overline{Y}$. This baric structure is itself hereditary, by Lemma~\ref{lem:closed-hered}. Thus, by the previous proposition, the baric structure on $\overline Y$ induces one on its open subscheme $Y$. This is given by \begin{align*} (\dsl Yw)' &= \{ \mathcal{F} \mid \text{$\mathcal{F} \cong \mathcal{F}_2|_Y$ for some $\mathcal{F}_2 \in \dgb {\overline Y}$ with $i_{1*}\mathcal{F}_2 \in \dsl Xw$} \}, \\ (\dsg Yw)' &= \{ \mathcal{F} \mid \text{$\mathcal{F} \cong \mathcal{F}_2|_Y$ for some $\mathcal{F}_2 \in \dgb {\overline Y}$ with $i_{1*}\mathcal{F}_2 \in \dsg Xw$} \}. \end{align*} It suffices now to show that $\dsl Yw = (\dsl Yw)'$ and $\dsg Yw = (\dsg Yw)'$. 
If $\mathcal{F} \in \dgb Y$ is such that we may find $\mathcal{F}_2 \in \dgb{\overline{Y}}$ with $\mathcal{F}_2 \vert_Y \cong \mathcal{F}$ and $i_{1*} \mathcal{F}_2 \in \dsl Xw$, then $\mathcal{F}_1 := i_{1*} \mathcal{F}_2$ has the property that $\mathcal{F}_1 \vert_U \cong i_* \mathcal{F}$. Thus, $(\dsl Yw)' \subset \dsl Yw$. To show the reverse inclusion, let $\mathcal{F} \in \dgb Y$ and $\mathcal{F}_1 \in \dsl Xw$ be such that $\mathcal{F}_1 \vert_U \cong i_* \mathcal{F}$, and let $\mathcal{F}_2' \in \dgb {\overline{Y}}$ be such that there exists a map $i_{1*} \mathcal{F}_2' \to \mathcal{F}_1$ which is an isomorphism over $U$. Then $i_{1*} \beta_{\leq w} \mathcal{F}_2' \to \mathcal{F}_1$ is also an isomorphism over $U$, and $\mathcal{F}_2 := \beta_{\leq w} \mathcal{F}_2'$ has the property that $\mathcal{F}_2 \vert_Y \cong \mathcal{F}$ and $i_{1*} \mathcal{F}_2 \in \dsl Xw$. Thus, $(\dsl Yw)' = \dsl Yw$. A similar argument shows that $(\dsg Yw)' = \dsg Yw$. \end{proof} Let us finally show that hereditary baric structures are rigid. \begin{prop} \label{prop:HimpliesR1} Let $\schemebaric X$ be a hereditary baric structure on $X$. Then $\schemebaric X$ is rigid. \end{prop} \begin{proof} Let $Z$ be a subscheme of $X$ and let $Z_1$ be a nilpotent thickening of $Z$ in $X$, and write $t$ for the inclusion of $Z$ into $Z_1$. If $\mathcal{F}$ is a bounded chain complex of coherent sheaves on $Z_1$, then we may find a filtration of $\mathcal{F}$ by subcomplexes $\mathcal{F}_k$ whose subquotients are scheme-theoretically supported on $Z$. Thus, in $\dgb {Z_1}$ we may find a sequence of objects and maps \[ 0 = \mathcal{F}_0 \to \mathcal{F}_1 \to \mathcal{F}_2 \to \cdots \to \mathcal{F}_n = \mathcal{F} \] such that the cone on $\mathcal{F}_{k-1} \to \mathcal{F}_k$ is of the form $t_* \mathcal{G}_k$. Now suppose $\mathcal{F}$ belongs to $\dsl {Z_1}w$.
Then we may apply $\beta_{\leq w}$ to the sequence to obtain \[ 0 = \beta_{\leq w} \mathcal{F}_0 \to \beta_{\leq w} \mathcal{F}_1 \to \cdots \to \beta_{\leq w} \mathcal{F}_n = \mathcal{F} \] and distinguished triangles \[ \beta_{\leq w} \mathcal{F}_{k-1} \to \beta_{\leq w} \mathcal{F}_k \to \beta_{\leq w} t_* \mathcal{G}_k \to. \] It follows from Lemma~\ref{lem:induced} that the object $\beta_{\leq w} t_* \mathcal{G}_k$ is isomorphic to $t_* \beta_{\leq w} \mathcal{G}_k$. Thus, $\mathcal{F}$ is in the thick closure of the image of $\dsl Zw$ under $t_*$. A similar proof gives the same result for $\dsg {Z_1}w$. \end{proof} This completes the proof of Theorem~\ref{thm:hlr}. \section{Background on $s$-structures and Staggered Sheaves} \label{sect:stagt} In this section, we review the $t$-structures on derived categories of equivariant coherent sheaves that were introduced in~\cite{a}. (They were called ``staggered $t$-structures'' in {\it loc.~cit.}; in Section~\ref{sect:stag2}, we will prove that they usually arise by the staggering construction of Definition~\ref{defn:stag}.) These $t$-structures depend on two auxiliary data: an \emph{$s$-structure}, and a \emph{perversity function}. After fixing notation, we briefly recall some facts about these objects, and we then describe the $t$-structures themselves. We will also prove a few useful lemmas about these objects. As before, let $X$ be a scheme of finite type over a noetherian base scheme, acted on by an affine group scheme $G$ over the same base. We adopt the additional assumptions that the base scheme admits a dualizing complex in the sense of~\cite[Chap.~V]{har}, and that the category $\cg X$ has enough locally free objects. It follows (see~\cite[Proposition~1]{bez:pcs}) that $X$ admits an equivariant dualizing complex. Fix one, and denote it $\omega_X \in \dgb X$. Next, let $\mathbb{D} = \cRHom(\cdot, \omega_X)$ denote the equivariant Serre--Grothendieck duality functor. 
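Since the proofs below repeatedly pass between an object and its dual, let us record two standard properties of the Serre--Grothendieck duality functor (see~\cite[Chap.~V]{har}): there is a natural isomorphism
\[
\mathbb{D} \circ \mathbb{D} \cong \mathrm{id},
\]
and $\mathbb{D}$ exchanges the subcategories $\dgm X$ and $\dgp X$ while preserving $\dgb X$. We will use these facts without further comment.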
Let $X^{\mathrm{gen}}$ denote the set of generic points of $G$-invariant subschemes of $X$, and for any $x \in X^{\mathrm{gen}}$, we denote by $\overline{Gx}$ the smallest $G$-stable closed subset of $X$ containing $x$. (We do not usually regard $\overline{Gx}$ as having a fixed subscheme structure.) For any point $x \in X^{\mathrm{gen}}$ and any closed subscheme structure $i: Z \hookrightarrow X$ on $\overline{Gx}$, there is an open subscheme $V \subset Z$ such that $Ri^!\omega_X|_V$ is concentrated in a single degree in $\dgb V$. Let $\cod \overline{Gx}$ be the unique integer such that $h^{\cod \overline{Gx}}(Ri^!\omega_X|_V) \ne 0$. This number is independent of the choice of closed subscheme structure $i: Z \hookrightarrow X$ and of open subscheme $V \subset Z$. If $X$ is, say, an equidimensional scheme of finite type over a field, $\omega_X$ may be normalized so that $\cod \overline{Gx}$ is the ordinary (Krull) codimension of $\overline{Gx}$. An \emph{$s$-structure} on the scheme $X$ is a pair of collections of full subcategories $(\{\cgl Xw\}, \{\cgg Xw\})_{w \in \mathbb{Z}}$ of $\cg X$ satisfying a list of ten axioms, called (S1)--(S10) in~\cite{a}. We will not review all the axioms here, but we do recall some of the key properties of $s$-structures: \begin{itemize} \item Each $\cgl Xw$ is a Serre subcategory, and each $\cgg Xw$ is closed under extensions and subobjects. \item $\cgg Xw$ is the right orthogonal to $\cgl X{w-1}$. \item Each sheaf $\mathcal{F}$ contains a unique maximal subsheaf in $\cgl Xw$, denoted $\sigma_{\le w}\mathcal{F}$. The quotient $\sigma_{\ge w+1}\mathcal{F} \cong \mathcal{F}/\sigma_{\le w}\mathcal{F}$ is the largest quotient of $\mathcal{F}$ in $\cgg X{w+1}$. \item An $s$-structure on $X$ induces $s$-structures on all locally closed subschemes of $X$. \end{itemize} Assume henceforth that $X$ is equipped with a fixed $s$-structure.
Given a point $x \in X^{\mathrm{gen}}$ and a closed subscheme structure $i: Z \hookrightarrow X$ on $\overline{Gx}$, choose an open subscheme $V \subset Z$ such that $Ri^!\omega_X|_V$ is concentrated in degree $\cod \overline{Gx}$. There is a unique integer, called the \emph{altitude} of $\overline{Gx}$ and denoted $\alt\overline{Gx}$, such that \[ Ri^!\omega_X|_V[\cod\overline{Gx}] \in \cgl V{\alt\overline{Gx}} \cap \cgg V{\alt\overline{Gx}}. \] Again, $\alt\overline{Gx}$ is independent of the choice of $i$ and $V$. The \emph{staggered codimension} of $\overline{Gx}$ is defined by \[ \scod \overline{Gx} = \alt \overline{Gx} + \cod \overline{Gx}. \] A (\emph{staggered}) \emph{perversity function} is a function $p: X^{\mathrm{gen}} \to \mathbb{Z}$ such that \[ 0 \le p(x) - p(y) \le \scod \overline{Gx} - \scod \overline{Gy} \qquad\text{if $x \in \overline{Gy}$.} \] Given a perversity $p: X^{\mathrm{gen}} \to \mathbb{Z}$, the function $\bar p: X^{\mathrm{gen}} \to \mathbb{Z}$ given by \[ \bar p (x) = \scod \overline{Gx} - p(x) \] is also a perversity function, known as the \emph{dual perversity}. Given a staggered perversity function $p$, we define a full subcategory of $\dgm X$ by \[ {}^p\! \dgml X0 = \left\{ \mathcal{F} \,\Bigg| \begin{array}{c} \text{for any $x \in X^{\mathrm{gen}}$, any closed subscheme structure} \\ \text{$i: Z\hookrightarrow X$ on $\overline{Gx}$, and any $k \in \mathbb{Z}$, there is a dense open} \\ \text{subscheme $V \subset Z$ such that $h^k(Li^*\mathcal{F})|_V \in \cgl V{p(x)-k}$} \end{array} \right\}, \] and a full subcategory of $\dgp X$ by \[ {}^p\! \dgpg X0 = \mathbb{D}({}^{\bar p}\! \dgml X0). \] The $t$-structure associated in~\cite{a} to the given $s$-structure and to a perversity $p$ is the pair $({}^p\!\dgbl X0, {}^p\!\dgbg X0)$, where \[ {}^p\!\dgbl X0 = {}^p\!\dgml X0 \cap \dgb X \qquad\text{and}\qquad {}^p\!\dgbg X0 = {}^p\!\dgpg X0 \cap \dgb X. 
\] The remainder of the section will be spent establishing a number of useful lemmas about these objects. Let $q: X^{\mathrm{gen}} \to \mathbb{Z}$ be a function such that \begin{equation}\label{eqn:g-const} q(x) = q(y) \qquad\text{whenever}\qquad \overline{Gx} = \overline{Gy}. \end{equation} Given such a function, let \[ {}_q\cgl Xw = \left\{ \mathcal{F} \in \cg X \,\Bigg| \begin{array}{c} \text{for any closed subscheme $i: \overline{Gx} \hookrightarrow X$ with} \\ \text{$x \in X^{\mathrm{gen}}$, there is a dense open subscheme} \\ \text{$V \subset \overline{Gx}$ such that $i^*\mathcal{F}|_V \in \cgl V{w+q(x)}$} \end{array} \right\}. \] One may either regard this definition as a condition only on \emph{reduced} closed subschemes of the form $\overline{Gx}$, or as a condition on all possible closed subscheme structures on the various closed sets $\overline{Gx}$. These two interpretations are equivalent by~\cite[Proposition~4.1]{a}, however, so there is no ambiguity in the definition. The first viewpoint is more convenient for checking explicit examples, but the second is sometimes more useful in proofs. \begin{lem}\label{lem:qcgl-res} Let $x \in X^{\mathrm{gen}}$, and let $i: Z \hookrightarrow X$ be a closed subscheme structure on $\overline{Gx}$. For any sheaf $\mathcal{F} \in {}_q\cgl Xw$ and any $r \ge 0$, there is a dense open subscheme $V \subset Z$ such that $h^{-r}(Li^*\mathcal{F})|_V \in \cgl V{w+q(x)}$. \end{lem} \begin{proof} The proof of this lemma follows that of~\cite[Lemma~8.2]{a} nearly verbatim. By the definition of ${}_q\cgl Xw$, we know that there is a dense open subset $Z'\subset Z$ such that $i^*\mathcal{F}|_{Z'} \in \cgl {Z'}{w+q(x)}$. Let $X' = X \smallsetminus (Z \smallsetminus Z')$. Then $X'$ is a dense open subset of $X$, and $Z'$ is a closed subscheme of $X'$. It clearly suffices to prove the lemma in the case where $X$ and $Z$ are replaced by $X'$ and $Z'$.
We therefore henceforth assume, without loss of generality, that $i^*\mathcal{F} \in \cgl Z{w+q(x)}$. We now proceed by induction on $r$. For $r = 0$, the lemma is trivial: we have $i^*\mathcal{F} \in \cgl Z{w+q(x)}$ by assumption. Now, suppose $r > 0$. According to Axiom~(S10) in the definition of an $s$-structure~\cite{a}, there is an open subscheme $V' \subset Z$ such that for any open set $U \subset X$ with $U \cap Z \subset V'$, we have $\Ext^r(\mathcal{F}|_U, i_*\mathcal{G}|_U) = 0$ for all $\mathcal{G} \in \cgg Z{w+q(x)+1}$. (In fact, Axiom~(S10) guarantees this vanishing for all $\mathcal{G}$ in a slightly larger category, denoted $\tcgg Z{w+q(x)+1}$, but we will not require that additional information.) Equivalently, for any open $V \subset V'$, we have $\Hom(Li^*\mathcal{F}|_V, \mathcal{G}[r]|_V) = 0$ for all $\mathcal{G} \in \cgg Z{w+q(x)+1}$. We also have $\Hom(Li^*\mathcal{F}|_V, \mathcal{G}[r]|_V) \cong \Hom(\tau^{\ge -r}Li^*\mathcal{F}|_V, \mathcal{G}[r]|_V)$, and then from the distinguished triangle \[ \tau^{\le -r}\tau^{\ge -r}Li^*\mathcal{F} \to \tau^{\ge -r}Li^*\mathcal{F} \to \tau^{\ge -r+1}Li^*\mathcal{F} \to \] we obtain the exact sequence \begin{multline*} \cdots \to \Hom(\tau^{\ge -r}Li^*\mathcal{F}|_V, \mathcal{G}[r]|_V) \to \Hom(\tau^{\le -r}\tau^{\ge -r}Li^*\mathcal{F}|_V, \mathcal{G}[r]|_V) \to \\ \Hom(\tau^{\ge -r+1}Li^*\mathcal{F}[-1]|_V, \mathcal{G}[r]|_V) \to \cdots. \end{multline*} Since $\tau^{\le -r}\tau^{\ge -r}Li^*\mathcal{F} \cong h^{-r}(Li^*\mathcal{F})[r]$, the sequence above can be rewritten as \begin{multline*} \cdots \to \Hom(Li^*\mathcal{F}|_V, \mathcal{G}[r]|_V) \to \Hom(h^{-r}(Li^*\mathcal{F})|_V, \mathcal{G}|_V) \to \\ \Hom(\tau^{\ge -(r-1)}Li^*\mathcal{F}|_V, \mathcal{G}[r+1]|_V) \to \cdots. \end{multline*} The first term above vanishes. 
Note that \[ h^k(\tau^{\ge -(r-1)}Li^*\mathcal{F}) \cong \begin{cases} h^k(Li^*\mathcal{F}) & \text{if $-(r-1) \le k \le 0$,} \\ 0 & \text{otherwise.} \end{cases} \] Thus, by the inductive assumption, the cohomology sheaves of $\tau^{\ge -(r-1)}Li^*\mathcal{F}$ have the property that for each $k$, there is a dense open subscheme $V_k \subset Z$ such that $h^k(\tau^{\ge -(r-1)}Li^*\mathcal{F})|_{V_k} \in \cgl {V_k}{w+q(x)}$. This property is precisely the hypothesis of~\cite[Lemma~8.1]{a}, which then tells us that there is a dense open subscheme $V'' \subset Z$ such that the last term in the exact sequence above vanishes whenever $V \subset V''$. In particular, let us take $V = V' \cap V''$. The middle term above then clearly vanishes. Since $\Hom(h^{-r}(Li^*\mathcal{F})|_V, \mathcal{G}_1) = 0$ for all $\mathcal{G}_1 \in \cgg V{w+q(x)+1}$, we have $h^{-r}(Li^*\mathcal{F})|_V \in \cgl V{w+q(x)}$, as desired. \end{proof} \begin{lem}\label{lem:qcgl-serre} ${}_q\cgl Xw$ is a Serre subcategory of $\cg X$. \end{lem} \begin{proof} Suppose we have a short exact sequence $0 \to \mathcal{F}' \to \mathcal{F} \to \mathcal{F}'' \to 0$ in $\cg X$. Given $x \in X^{\mathrm{gen}}$ and a closed subscheme structure $i: Z \hookrightarrow X$ on $\overline{Gx}$, consider the exact sequence \[ h^{-1}(Li^*\mathcal{F}'') \to i^*\mathcal{F}' \to i^*\mathcal{F} \to i^*\mathcal{F}'' \to 0. \] Suppose $\mathcal{F}'$ and $\mathcal{F}''$ are in ${}_q\cgl Xw$. Then there are dense open subschemes $V', V'' \subset Z$ such that $i^*\mathcal{F}'|_{V'} \in \cgl {V'}{w+q(x)}$ and $i^*\mathcal{F}''|_{V''} \in \cgl {V''}{w+q(x)}$. Let $V = V' \cap V''$. Then, since $\cgl V{w+q(x)}$ is a Serre subcategory of $\cg V$, we see that $i^*\mathcal{F}|_V \in \cgl V{w+q(x)}$, so $\mathcal{F} \in {}_q\cgl Xw$. Conversely, if $\mathcal{F} \in {}_q\cgl Xw$, then there is a dense open subscheme $V \subset Z$ such that $i^*\mathcal{F}|_V \in \cgl V{w+q(x)}$.
It follows that $i^*\mathcal{F}''|_V \in \cgl V{w+q(x)}$ as well, so $\mathcal{F}'' \in {}_q\cgl Xw$. Next, by Lemma~\ref{lem:qcgl-res}, there is some dense open subscheme $V' \subset Z$ such that $h^{-1}(Li^*\mathcal{F}'')|_{V'} \in \cgl {V'}{w+q(x)}$, and it follows that $i^*\mathcal{F}'|_{V \cap V'} \in \cgl{V \cap V'}{w+q(x)}$. Thus, $\mathcal{F}' \in {}_q\cgl Xw$ as well. \end{proof} Next, let $p$ be a staggered perversity function. The following alternate characterization of ${}^p\!\dgml X0$ will be useful. \begin{lem}\label{lem:dgml} We have \[ {}^p\!\dgml X0 = \{ \mathcal{F} \in \dgm X \mid \text{$h^k(\mathcal{F}) \in {}_p\cgl X{-k}$ for all $k \in \mathbb{Z}$} \}. \] \end{lem} \begin{rmk} Note the similarity between the right-hand side of this equation and the definition of ${}^s\!\fD^{\leq 0}$ in Definition~\ref{defn:stag}. \end{rmk} \begin{proof} Throughout the proof, $x$ will denote a point of $X^{\mathrm{gen}}$, and $i: Z \hookrightarrow X$ will denote a closed subscheme structure on $\overline{Gx}$. First, suppose $\mathcal{F}$ is concentrated in a single degree with respect to the standard $t$-structure, say in degree $n$, and that $h^n(\mathcal{F}) \in {}_p\cgl X{-n}$. If $k > n$, then of course $h^k(Li^*\mathcal{F}) = 0$. If $k \le n$, then by Lemma~\ref{lem:qcgl-res}, there is a dense open subscheme $V \subset Z$ such that $h^k(Li^*\mathcal{F})|_V \in \cgl V{p(x)-n} \subset \cgl V{p(x)-k}$, so $\mathcal{F} \in {}^p\!\dgml X0$. Next, if $\mathcal{F} \in \dgb X$ and $h^k(\mathcal{F}) \in {}_p\cgl X{-k}$ for all $k$, it follows that $\mathcal{F} \in {}^p\!\dgml X0$ by the preceding paragraph and a standard induction argument on the number of nonzero cohomology sheaves of $\mathcal{F}$. Finally, suppose that $\mathcal{F} \in \dgm X$ and that $h^k(\mathcal{F}) \in {}_p\cgl X{-k}$ for all $k$. For any $k \in \mathbb{Z}$, $\tau^{\ge k}\mathcal{F}$ is in $\dgb X$, so we already know that $\tau^{\ge k}\mathcal{F} \in {}^p\!\dgml X0$.
But consideration of the distinguished triangle \[ Li^*\tau^{\le k-1}\mathcal{F} \to Li^*\mathcal{F} \to Li^*\tau^{\ge k}\mathcal{F} \to \] shows that $h^k(Li^*\mathcal{F}) \cong h^k(Li^*\tau^{\ge k}\mathcal{F})$, so in particular, there is a dense open subscheme $V \subset Z$ with $h^k(Li^*\mathcal{F})|_V \in \cgl V{p(x)-k}$, so $\mathcal{F} \in {}^p\!\dgml X0$, as desired. Conversely, suppose $\mathcal{F} \in {}^p\!\dgml X0$. Let $a$ be the largest integer such that $h^a(\mathcal{F}) \ne 0$. Then of course $h^a(Li^*\mathcal{F}) \cong h^a(Li^*\tau^{\ge a}\mathcal{F}) \cong i^*h^a(\mathcal{F})$, and we know that there is a dense open subscheme $V \subset Z$ such that $i^*h^a(\mathcal{F})|_V \in \cgl V{p(x)-a}$, so $h^a(\mathcal{F}) \in {}_p\cgl X{-a}$. Now, we will prove by downward induction on $k$ that $h^k(\mathcal{F}) \in {}_p\cgl X{-k}$ and that $\tau^{\le k-1}\mathcal{F} \in {}^p\!\dgml X0$ for all $k$. These statements hold trivially if $k > a$. Suppose we know that $h^{k+1}(\mathcal{F}) \in {}_p\cgl X{-k-1}$ and $\tau^{\le k}\mathcal{F} \in {}^p\!\dgml X0$. By the preceding paragraph, we know that $h^k(\mathcal{F}) = h^k(\tau^{\le k}\mathcal{F}) \in {}_p\cgl X{-k}$. Next, from the distinguished triangle $\tau^{\le k-1}\mathcal{F} \to \tau^{\le k}\mathcal{F} \to \tau^{[k,k]}\mathcal{F} \to$, we obtain the exact sequence \[ h^{r-1}(Li^*\tau^{[k,k]}\mathcal{F}) \to h^r(Li^*\tau^{\le k-1}\mathcal{F}) \to h^r(Li^*\tau^{\le k}\mathcal{F}). \] Assume $r \le k-1$ (otherwise, the middle term above vanishes). By Lemma~\ref{lem:qcgl-res}, for some dense open $V \subset Z$, $h^{r-1}(Li^*\tau^{[k,k]}\mathcal{F})|_V \in \cgl V{p(x)-k} \subset \cgl V{p(x)-r}$. Replacing $V$ by a smaller open subscheme if necessary, we may also assume that $h^r(Li^*\mathcal{F})|_V \in \cgl V{p(x)-r}$. It follows that $h^r(Li^*\tau^{\le k-1}\mathcal{F})|_V \in \cgl V{p(x)-r}$, so $\tau^{\le k-1}\mathcal{F} \in {}^p\!\dgml X0$. 
In particular, $h^k(\mathcal{F}) \in {}_p\cgl X{-k}$ for all $k$, as desired. \end{proof} In the course of the preceding proof, we have also established the following statement. \begin{cor}\label{cor:stag-trunc} The category ${}^p\!\dgml X0$ is stable under all standard truncation functors $\tau^{\le k}$ and $\tau^{\ge k}$.\qed \end{cor} \section{Baric Structures on Coherent Sheaves, II} \label{sect:baric-coh2} In this section, we achieve the main goal of the paper: the construction of a class of baric structures on derived categories of equivariant coherent sheaves. These baric structures depend on a function on $X^{\mathrm{gen}}$ that plays a role analogous to that played by a staggered perversity in Section~\ref{sect:stagt}. \begin{defn}\label{defn:rec} Suppose $G$ acts on $X$ with finitely many orbits. For each orbit $C \subset X$, let $\mathcal{I}_C \subset \mathcal{O}_X$ denote the ideal sheaf corresponding to the reduced closed subscheme structure on $\overline C \subset X$. An $s$-structure on $X$ is said to be \emph{recessed} if for each $C$, $\mathcal{I}_C/\mathcal{I}_C^2 \in \cgl X{-1}$. \end{defn} For the remainder of the paper, we assume that $G$ acts on $X$ with finitely many orbits, and that $X$ is endowed with a recessed $s$-structure. (See Remarks~\ref{rmk:ineq} and~\ref{rmk:ineq2}, however.) The assumption that the $s$-structure is recessed is a mild one: ``most'' of the $s$-structures appearing in~\cite{t} are recessed, as is the one used in~\cite{as}. Note that $\mathcal{I}_C/\mathcal{I}_C^2$ is always at least in $\cgl X0$, since it is a subquotient of $\mathcal{O}_X \in \cgl X0$. In addition, since the coherent pullback functor to a locally closed subscheme is right $s$-exact, it follows that the restriction of a recessed $s$-structure to any locally closed subscheme is also recessed. \begin{rmk} It is certainly possible to define the notion of ``recessed $s$-structure'' in a way that does not assume finiteness of the number of orbits.
(One simply imposes a condition on the ideal sheaf of $\overline{Gx}$ for every $x \in X^{\mathrm{gen}}$, not just for every orbit closure.) However, it seems likely that when there are infinitely many orbits, there are no recessed $s$-structures. \end{rmk} Given a function $q: X^{\mathrm{gen}} \to \mathbb{Z}$ satisfying~\eqref{eqn:g-const}, define a new function $\hat q: X^{\mathrm{gen}} \to \mathbb{Z}$ by \[ \hat q(x) = \alt \overline{Gx} - q(x). \] Note that when $G$ acts on $X$ with finitely many orbits, a function $q: X^{\mathrm{gen}} \to \mathbb{Z}$ satisfying~\eqref{eqn:g-const} may be regarded as a $\mathbb{Z}$-valued function on the set of orbits. It will sometimes be convenient to adopt this point of view, and, given an orbit $C \subset X$, we sometimes write \[ q(C) = q(x_C) \qquad\text{where $x_C \in X^{\mathrm{gen}}$ is any generic point of $C$}. \] \begin{lem}\label{lem:q-ext} Let $\mathcal{G} \in \cg X$, and let $j: U \hookrightarrow X$ be an open subscheme. Suppose $\mathcal{F}_1 \subset \mathcal{G}|_U$ is such that $\mathcal{F}_1 \in {}_q\cgl Uw$. Then there exists a subsheaf $\mathcal{F} \subset \mathcal{G}$ such that $\mathcal{F}|_U \cong \mathcal{F}_1$ and $\mathcal{F} \in {}_q\cgl Xw$. \end{lem} \begin{proof} If $U$ is closed ({\it i.e.}, if $U$ is a union of connected components of $X$), then $j_*\mathcal{F}_1$ is naturally a subsheaf of $\mathcal{G}$, so we simply take $\mathcal{F} \cong j_*\mathcal{F}_1$. Otherwise, let $C$ be an open orbit in $\overline U \smallsetminus U$, and let $V$ be the open subscheme $U \cup C$. By induction on the number of orbits in $\overline U \smallsetminus U$, it suffices to find $\mathcal{F} \subset \mathcal{G}|_V$ such that $\mathcal{F} \in {}_q\cgl Vw$ and $\mathcal{F}|_U \cong \mathcal{F}_1$. Let $\kappa: C \hookrightarrow V$ be the inclusion map, and let $\mathcal{I}_C$ be the ideal sheaf of $C$ in $V$.
Finally, let $\mathcal{F}'$ be some subsheaf of $\mathcal{G}|_V$ such that $\mathcal{F}'|_U \cong \mathcal{F}_1$. Suppose $\kappa^*\mathcal{F}' \in \cgl Cv$. If $v \le w+q(C)$, we may take $\mathcal{F} = \mathcal{F}'$, and we are finished. On the other hand, if $v > w+q(C)$, let $\mathcal{F} = \mathcal{I}_C^{v-w-q(C)}\mathcal{F}'$. Since $\mathcal{I}_C|_U \cong \mathcal{O}_U$, we clearly still have $\mathcal{F}|_U \cong \mathcal{F}_1$. The fact that the $s$-structure is recessed means that $\kappa^*\mathcal{I}_C \in \cgl C{-1}$, so $\kappa^*\mathcal{I}_C^{\otimes v-w-q(C)} \in \cgl C{-v+w+q(C)}$, and therefore $\kappa^*\mathcal{I}_C^{\otimes v-w-q(C)} \otimes \kappa^*\mathcal{F}' \in \cgl C{w+q(C)}$. Now, $\kappa^*\mathcal{F}$ is a quotient of $\kappa^*\mathcal{I}_C^{\otimes v-w-q(C)} \otimes \kappa^*\mathcal{F}'$, so $\kappa^*\mathcal{F} \in \cgl C{w+q(C)}$, as desired. \end{proof} Given a function $q: X^{\mathrm{gen}} \to \mathbb{Z}$, we define a full subcategory of $\dgm X$ by \[ {}_q\dsml Xw = \{ \mathcal{F} \in \dgm X \mid \text{$h^k(\mathcal{F}) \in {}_q\cgl Xw$ for all $k \in \mathbb{Z}$} \}. \] We also define a full subcategory of $\dgp X$ by \[ {}_q\dspg Xw = \mathbb{D}({}_{\hat q}\dsml X{-w}). \] Finally, we put \[ {}_q\dsl Xw = {}_q\dsml Xw \cap \dgb X \quad\text{and}\quad {}_q\dsg Xw = {}_q\dspg Xw \cap \dgb X. \] The main result of the paper is the following. \begin{thm}\label{thm:main} The collection of subcategories $(\{{}_q\dsl Xw\}, \{{}_q\dsg Xw\})_{w \in \mathbb{Z}}$ is a bounded, nondegenerate HLR baric structure on $X$. \end{thm} The proof of this theorem will occupy the rest of this section. Note that the definition of ${}_q\dsml Xw$ is consistent with the notation used in Section~\ref{sect:baric-coh1}. We will see in Corollary~\ref{cor:qdspg} that the same holds for ${}_q\dspg Xw$. \begin{lem}\label{lem:subcat} ${}_q\dsl Xw$ and ${}_q\dsg Xw$ are thick subcategories of $\dgb X$. Moreover, ${}_q\dsl Xw \subset {}_q\dsl X{w+1}$, and ${}_q\dsg Xw \supset {}_q\dsg X{w+1}$.
\end{lem} \begin{proof} It is obvious that ${}_q\dsl Xw$ is stable under shift. Since it is defined by the requirement that cohomology sheaves belong to a Serre subcategory of $\cg X$ (see Lemma~\ref{lem:qcgl-serre}), it is stable under extensions as well, so it is indeed a thick subcategory of $\dgb X$. It follows that ${}_q\dsg Xw$ is as well. It is obvious that ${}_q\dsl Xw \subset {}_q\dsl X{w+1}$, and hence that ${}_q\dsg Xw \supset {}_q\dsg X{w+1}$. \end{proof} \begin{lem}\label{lem:baric-res} Let $j: U \hookrightarrow X$ be the inclusion of an open subscheme, and $i: Z \hookrightarrow X$ the inclusion of a closed subscheme. Then: \begin{enumerate} \item $j^*$ takes ${}_q\dsml Xw$ to ${}_q\dsml Uw$ and ${}_q\dspg Xw$ to ${}_q\dspg Uw$. \item $Li^*$ takes ${}_q\dsml Xw$ to ${}_q\dsml Zw$. \item $Ri^!$ takes ${}_q\dspg Xw$ to ${}_q\dspg Zw$. \item $i_*$ takes ${}_q\dsml Zw$ to ${}_q\dsml Xw$ and ${}_q\dspg Zw$ to ${}_q\dspg Xw$. \end{enumerate} \end{lem} This statement closely resembles Lemma~\ref{lem:hlr-baric-res}; indeed, it would merely be an instance of that lemma if Theorem~\ref{thm:main} were already known. However, the proof of Theorem~\ref{thm:main} depends on this lemma, so we must give it an independent proof. \begin{proof} (1)~It is immediate from the definition of ${}_q\cgl Xw$ that $j^*$ takes ${}_q\cgl Xw$ to ${}_q\cgl Uw$. Since $j^*$ is an exact functor, it follows that it takes ${}_q\dsml Xw$ to ${}_q\dsml Uw$. Since $j^*$ commutes with $\mathbb{D}$, we also see that it takes ${}_q\dspg Xw$ to ${}_q\dspg Uw$. (2)~We proceed by noetherian induction: assume the statement is known if $X$ is replaced by a proper closed subscheme, or if $X$ is retained and $Z$ is replaced by a proper closed subscheme. Suppose $\mathcal{F} \in {}_q\dsml Xw$. We show by downward induction on $k$ that $h^k(Li^*\mathcal{F}) \in {}_q\cgl Zw$. For large $k$, $h^k(Li^*\mathcal{F}) = 0$, so this holds trivially.
Now, assume that $h^r(Li^*\mathcal{F}) \in {}_q\cgl Zw$ for all $r > k$, and consider the distinguished triangle $\tau^{\le k}Li^*\mathcal{F} \to Li^*\mathcal{F} \to \tau^{\ge k+1}Li^*\mathcal{F} \to$. Then $\tau^{\ge k+1}Li^*\mathcal{F}$ is an object of ${}_q\dsl Zw$, so for any $x \in Z^{\mathrm{gen}}$ and any closed subscheme structure $\kappa: Y \hookrightarrow Z$ on $\overline{Gx}$, we know that $L\kappa^*\tau^{\ge k+1}Li^*\mathcal{F} \in {}_q\dsml Yw$. Consider the exact sequence \[ h^{k-1}(L\kappa^*\tau^{\ge k+1}Li^*\mathcal{F}) \to h^k(L\kappa^*\tau^{\le k}Li^*\mathcal{F}) \to h^k(L\kappa^*Li^*\mathcal{F}). \] The first term above belongs to ${}_q\cgl Yw$. Observe that $h^k(L\kappa^*\tau^{\le k}Li^*\mathcal{F}) \cong \kappa^*h^k(Li^*\mathcal{F})$. Thus, to prove that $h^k(Li^*\mathcal{F}) \in {}_q\cgl Zw$, we must show that there is a dense open subscheme $V \subset Y$ such that $h^k(L\kappa^*\tau^{\le k}Li^*\mathcal{F})|_V \in \cgl V{w+q(x)}$. If $Y$ is a proper closed subscheme of $Z$, then we have assumed inductively that $L(i \circ \kappa)^*\mathcal{F} \in {}_q\dsml Yw$, and in that case, the last term in the sequence above belongs to ${}_q\cgl Yw$ as well. By Lemma~\ref{lem:qcgl-serre}, the middle term belongs to ${}_q\cgl Yw$ as well, and the existence of the desired open subscheme $V \subset Y$ follows. On the other hand, if $Y = Z$, and $\kappa$ is the identity map, then Lemma~\ref{lem:qcgl-res} gives us a dense open subscheme $V' \subset Z$ such that $h^k(Li^*\mathcal{F})|_{V'} \in \cgl {V'}{w+q(x)}$. The fact that $h^{k-1}(\tau^{\ge k+1}Li^*\mathcal{F}) \in {}_q\cgl Zw$ implies that there is a dense open subscheme $V'' \subset Z$ with $h^{k-1}(\tau^{\ge k+1}Li^*\mathcal{F})|_{V''} \in \cgl {V''}{w+q(x)}$. If we let $V = V' \cap V''$, then we see from the exact sequence above that $h^k(\tau^{\le k}Li^*\mathcal{F})|_V \in \cgl V{w+q(x)}$, as desired.
(3)~If $\mathcal{F} \in {}_q\dspg Xw$, let $\mathcal{F}' \in {}_{\hat q}\dsml X{-w}$ be such that $\mathbb{D}\mathcal{F}' \cong \mathcal{F}$. Then $Ri^! \mathcal{F} \cong \mathbb{D}(Li^*\mathcal{F}') \in {}_q\dspg Zw$ since $Li^*\mathcal{F}' \in {}_{\hat q}\dsml Z{-w}$. (4)~Since ${}_q\dsml Zw$ and ${}_q\dsml Xw$ are defined by conditions on their cohomology sheaves, the first statement follows from the fact that $i_*$ is an exact functor taking ${}_q\cgl Zw$ to ${}_q\cgl Xw$. The second statement follows by duality. \end{proof} \begin{prop}\label{prop:baric-orth} If $\mathcal{F} \in {}_q\dsml Xw$ and $\mathcal{G} \in {}_q\dspg X{w+1}$, then $\Hom(\mathcal{F},\mathcal{G}) = 0$. \end{prop} \begin{proof} We proceed by noetherian induction: assume the result is known for all proper closed subschemes of $X$. Let $a$ and $b$ be such that $\mathcal{G} \in \dgpg Xa$ and $\mathcal{F} \in \dgml Xb$. Since $\Hom(\mathcal{F},\mathcal{G}) \cong \Hom(\tau^{\ge a}\mathcal{F}, \mathcal{G})$, we may replace $\mathcal{F}$ by $\tau^{\ge a}\mathcal{F}$ and assume that $\mathcal{F} \in {}_q\dsl Xw$. Next, let $\mathcal{G}' \in {}_{\hat q}\dsml X{-w-1}$ be such that $\mathbb{D}\mathcal{G}' \cong \mathcal{G}$. For a sufficiently small integer $c$, we will have $\mathbb{D}(\tau^{\le c}\mathcal{G}') \in \dgpg X{b+1}$. From this, it follows that $\Hom(\mathcal{F},\mathcal{G}) \cong \Hom(\mathcal{F},\mathbb{D}(\tau^{\ge c+1}\mathcal{G}'))$. Replacing $\mathcal{G}$ by $\mathbb{D}(\tau^{\ge c+1}\mathcal{G}')$, we may assume that $\mathcal{G} \in {}_q\dsg X{w+1}$. With $\mathcal{F}$ and $\mathcal{G}$ both in $\dgb X$, induction on the number of cohomology sheaves allows us to reduce to the case where both $\mathcal{F}$ and $\mathcal{G}' := \mathbb{D}\mathcal{G}$ are concentrated in a single degree. By shifting both objects simultaneously, we may assume without loss of generality that $\mathcal{F} \in \cg X$. Let $x$ be a generic point of $X$.
There is an open subscheme $U \subset X$ containing $x$ such that $\mathcal{G}'|_U \in \cgl U{\alt \overline{Gx} - q(x)-w-1}$. By~\cite[Remark~3.2 and Lemmas~6.1--6.2]{a}, we may replace $U$ by a smaller open subscheme containing $x$ such that $\mathcal{G}|_U$ is concentrated in a single degree, say $d$, and such that $\mathcal{G}[d]|_U \in \cgg U{q(x)+w+1}$. If $d > 0$, then clearly $\Hom(\mathcal{F}|_U, \mathcal{G}|_U) = 0$. Otherwise, we invoke~\cite[Axiom~(S9)]{a} to replace $U$ by a smaller open subscheme such that $\Hom(\mathcal{F}|_U, \mathcal{G}|_U) = 0$. Let $Z$ be the complementary closed subspace to $U$, and consider the exact sequence \[ \lim_{\substack{\to \\ Z'}} \Hom(Li^*_{Z'}\mathcal{F}, Ri^!_{Z'}\mathcal{G}) \to \Hom(\mathcal{F},\mathcal{G}) \to \Hom(\mathcal{F}|_U,\mathcal{G}|_U), \] where $i_{Z'}: Z' \hookrightarrow X$ ranges over all closed subscheme structures on $Z$. We have just seen that the last term vanishes. Since $Li^*_{Z'}\mathcal{F} \in {}_q\dsml {Z'}w$ and $Ri^!_{Z'}\mathcal{G} \in {}_q\dspg {Z'}{w+1}$, the first term vanishes by induction. So $\Hom(\mathcal{F},\mathcal{G}) = 0$, as desired. \end{proof} \begin{prop}\label{prop:baric-dt} For any $\mathcal{F} \in \dgb X$, there is a distinguished triangle $\mathcal{F}' \to \mathcal{F} \to \mathcal{F}'' \to$ with $\mathcal{F}' \in {}_q\dsl Xw$ and $\mathcal{F}'' \in {}_q\dsg X{w+1}$. Moreover, if $\mathcal{F} \in \dgbg X0$, then $\mathcal{F}'$ and $\mathcal{F}''$ lie in $\dgbg X0$ as well. \end{prop} \begin{proof} Once again, we proceed by noetherian induction, and assume the result is known for all proper closed subschemes of $X$. Now, assume first that $\mathcal{F}$ is a sheaf. Let $C \subset X$ be an open (and possibly nonreduced) orbit, and let $i: \overline C \hookrightarrow X$ be the inclusion of its closure. 
By Lemma~\ref{lem:q-ext}, there exists a subsheaf $\mathcal{F}_1 \subset \mathcal{F}$ such that $\mathcal{F}_1 \in {}_q\cgl Xw$ and $\mathcal{F}_1|_C \cong \sigma_{\le w+q(C)}(\mathcal{F}|_C)$. Next, form a short exact sequence \[ 0 \to \mathcal{F}_1 \to \mathcal{F} \to \mathcal{G} \to 0. \] Let $b = \cod \overline C$. Then $i_*Ri^!\mathbb{D}\mathcal{G} \in \dgpg Xb$, and, by~\cite[Lemma~6.1]{a}, we know that $i_*Ri^!\mathbb{D}\mathcal{G}|_C \cong \mathbb{D}\mathcal{G}|_C$ is concentrated in degree $b$. Furthermore,~\cite[Proposition~6.8]{a} tells us that $\mathbb{D}\mathcal{G}[b]|_C \in \cgl C{\alt \overline C - q(C) -w - 1}$. (If $C$ is reduced, these assertions about $\mathbb{D}\mathcal{G}|_C$ are immediate from the fact that $\mathbb{D}$ is an exact functor, but in general, we must invoke~\cite[Lemma~6.1 and Proposition~6.8]{a}.) Now, we use Lemma~\ref{lem:q-ext} again to find a subsheaf $\mathcal{G}_1 \subset h^b(i_*Ri^!\mathbb{D}\mathcal{G})$ such that $\mathcal{G}_1 \in {}_{\hat q}\cgl X{-w-1}$ and $\mathcal{G}_1|_C \cong \mathbb{D}\mathcal{G}[b]|_C$. Form the composition \[ \mathcal{G}_1[-b] \to h^b(i_*Ri^!\mathbb{D}\mathcal{G})[-b] \cong \tau^{\le b}i_*Ri^!\mathbb{D}\mathcal{G} \to i_*Ri^!\mathbb{D}\mathcal{G} \to \mathbb{D}\mathcal{G}, \] and then complete it to a distinguished triangle \[ \mathcal{G}_1[-b] \to \mathbb{D}\mathcal{G} \to \mathcal{G}' \to. \] Here, $\mathcal{G}'$ is necessarily supported on the complement of $C$. Let $\mathcal{F}_2 = \mathbb{D}(\mathcal{G}_1[-b])$, and let $\mathcal{H} = \mathbb{D}\mathcal{G}'$, so we have a distinguished triangle \[ \mathcal{H} \to \mathcal{G} \to \mathcal{F}_2 \to. \] Since $\cod \overline C = b$, we see that $\mathcal{F}_2 \in \dgbg X0$. This distinguished triangle then implies that $\mathcal{H} \in \dgbg X0$ as well. Note also that $\mathcal{F}_2 \in {}_q\dsg X{w+1}$, and that \[ \mathcal{F} \in \{\mathcal{F}_1\} * \{\mathcal{H}\} * \{\mathcal{F}_2\}. 
\] Since $\mathcal{F}_1 \in {}_q\dsl Xw$, $\mathcal{F}_2 \in {}_q\dsg X{w+1}$, and $\mathcal{H}$ is supported on a proper closed subscheme, we conclude that $\mathcal{F} \in {}_q\dsl Xw * {}_q\dsg X{w+1}$, as desired. The last statement of the proposition holds by noetherian induction as well, since $\mathcal{F}_1$, $\mathcal{H}$, and $\mathcal{F}_2$ all lie in $\dgbg X0$ by construction. The result also follows for any object of $\dgb X$ that is concentrated in a single degree. Finally, for general objects $\mathcal{F} \in \dgb X$, we proceed by induction on the number of nonzero cohomology sheaves. Let $a \in \mathbb{Z}$ be such that $\tau^{\le a}\mathcal{F}$ and $\tau^{\ge a+1}\mathcal{F}$ are both nonzero. Then, they both have fewer nonzero cohomology sheaves than $\mathcal{F}$, and we assume inductively that there exist distinguished triangles \begin{gather*} \mathcal{F}'_1 \to \tau^{\le a}\mathcal{F} \to \mathcal{F}''_1 \to, \\ \mathcal{F}'_2 \to \tau^{\ge a+1}\mathcal{F} \to \mathcal{F}''_2 \to \end{gather*} with $\mathcal{F}'_1, \mathcal{F}'_2 \in {}_q\dsl Xw$ and $\mathcal{F}''_1, \mathcal{F}''_2 \in {}_q\dsg X{w+1}$. Consider the composition \[ \mathcal{F}'_2[-1] \to (\tau^{\ge a+1}\mathcal{F})[-1] \to \tau^{\le a}\mathcal{F} \to \mathcal{F}''_1. \] By Proposition~\ref{prop:baric-orth}, this composition is $0$, so we see from the exact sequence \[ \Hom(\mathcal{F}'_2[-1], \mathcal{F}'_1) \to \Hom(\mathcal{F}'_2[-1], \tau^{\le a}\mathcal{F}) \to \Hom(\mathcal{F}'_2[-1],\mathcal{F}''_1) \] that the morphism $\mathcal{F}'_2[-1] \to (\tau^{\ge a+1}\mathcal{F})[-1] \to \tau^{\le a}\mathcal{F}$ factors through $\mathcal{F}'_1$. 
That is, we have a commutative square \[ \xymatrix@=10pt{ \mathcal{F}'_2[-1] \ar[r]\ar[d] & (\tau^{\ge a+1}\mathcal{F})[-1] \ar[d] \\ \mathcal{F}'_1 \ar[r] & \tau^{\le a}\mathcal{F}} \] We define objects $\mathcal{F}', \mathcal{F}'' \in \dgb X$ by completing this diagram as follows, using the $9$-lemma~\cite[Proposition~1.1.11]{bbd}: \[ \xymatrix@=10pt{ \mathcal{F}'_2[-1] \ar[r]\ar[d] & (\tau^{\ge a+1}\mathcal{F})[-1] \ar[r]\ar[d] & \mathcal{F}''_2[-1] \ar[r]\ar[d] & {}\\ \mathcal{F}'_1 \ar[r]\ar[d] & \tau^{\le a}\mathcal{F} \ar[r]\ar[d] & \mathcal{F}''_1 \ar[r]\ar[d] &{}\\ \mathcal{F}' \ar[r]\ar[d] & \mathcal{F} \ar[r]\ar[d] & \mathcal{F}'' \ar[r]\ar[d] & {} \\ &&&} \] Since ${}_q\dsl Xw$ and ${}_q\dsg X{w+1}$ are stable under shift and extensions, we see that $\mathcal{F}' \in {}_q\dsl Xw$ and $\mathcal{F}'' \in {}_q\dsg X{w+1}$, as desired. Moreover, if $\mathcal{F}$ lies in $\dgbg X0$, then so do $\tau^{\le a}\mathcal{F}$ and $\tau^{\ge a+1}\mathcal{F}$, and hence, by induction, the objects $\mathcal{F}'_1$, $\mathcal{F}''_1$, $\mathcal{F}'_2$, and $\mathcal{F}''_2$ all lie in $\dgbg X0$ as well. It then follows that $\mathcal{F}'$ and $\mathcal{F}''$ are in $\dgbg X0$, as desired. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:main}] Lemma~\ref{lem:subcat} and Propositions~\ref{prop:baric-orth} and~\ref{prop:baric-dt} together state that all the axioms for a baric structure hold. Moreover, the last part of Proposition~\ref{prop:baric-dt} tells us that the baric truncation functors are left $t$-exact (with respect to the standard $t$-structure), and it is obvious from the definition of ${}_q\dsl Xw$ that it is preserved by the truncation functors $\tau^{\le n}$ and $\tau^{\ge n}$. Thus, the baric structure $(\{{}_q\dsl Xw\}, \{{}_q\dsg Xw\})_{w\in \mathbb{Z}}$ is compatible with the standard $t$-structure.
Next, for any closed subscheme $i: Z \hookrightarrow X$, Lemma~\ref{lem:baric-res} tells us that $Li^*$ is right baryexact and that $Ri^!$ is left baryexact. Thus, this baric structure is hereditary, and hence HLR by Theorem~\ref{thm:hlr}. It remains to prove that the baric structure is bounded (and therefore nondegenerate). Every sheaf in $\cg X$ belongs to some $\cgl Xn$, and hence to some ${}_q\cgl Xw$ (simply take $w$ to be the maximum value of $n - q(x)$). Since an object $\mathcal{F} \in \dgb X$ has finitely many nonzero cohomology sheaves, we can clearly find a $w$ such that all its cohomology sheaves belong to ${}_q\cgl Xw$, so that $\mathcal{F} \in {}_q\dsl Xw$. The same reasoning yields an integer $v$ such that $\mathbb{D}\mathcal{F} \in {}_{\hat q}\dsl X{-v}$, and hence $\mathcal{F} \in {}_q\dsg Xv$. Thus, the baric structure is bounded and nondegenerate. \end{proof} We can now verify that the notation ${}_q\dspg Xw$ is consistent with the notation of Section~\ref{sect:baric-coh1}. \begin{cor}\label{cor:qdspg} We have \[ {}_q\dspg Xw = \{ \mathcal{F} \in \dgp X \mid \text{${}_q\beta_{\le w-1}\tau^{\le k}\mathcal{F} \in \dgbg X{k+2}$ for all $k$} \}. \] \end{cor} \begin{proof} We have already observed that the definition of ${}_{\hat q}\dsml X{-w}$ is consistent with the notation of Section~\ref{sect:baric-coh1}, so by Lemma~\ref{lem:unbdd-orth}, for $\mathcal{F} \in \dgm X$, we have $\mathcal{F} \in {}_{\hat q}\dsml X{-w}$ if and only if $\Hom(\mathcal{F},\mathcal{G}) = 0$ for all $\mathcal{G} \in {}_{\hat q}\dsg X{-w+1}$. Applying $\mathbb{D}$, we have $\mathcal{F} \in {}_q\dspg Xw$ if and only if $\Hom(\mathbb{D}\mathcal{F},\mathbb{D}\mathcal{G}) = 0$ for all $\mathcal{G} \in {}_q\dsl X{w-1}$, or, equivalently, if $\Hom(\mathcal{G},\mathcal{F}) = 0$ for all $\mathcal{G} \in {}_q\dsl X{w-1}$. The corollary follows by another application of Lemma~\ref{lem:unbdd-orth}. 
\end{proof} \begin{rmk}\label{rmk:ineq} The proof of Lemma~\ref{lem:q-ext} depends in an essential way on the assumption of finitely many orbits and a recessed $s$-structure, but no other arguments given in this section do. (The role of the orbit closure $\overline C$ in the proof of Proposition~\ref{prop:baric-dt} could instead have been played by $\overline{Gx}$ for some generic point $x$.) By imposing additional conditions that permit us to evade Lemma~\ref{lem:q-ext}, we can find a version of Theorem~\ref{thm:main} that holds in much greater generality. Specifically, assume that the function $q: X^{\mathrm{gen}} \to \mathbb{Z}$ is \emph{monotone}: that is, if $x \in \overline{Gy}$, then $q(x) \ge q(y)$. Suppose we have a coherent sheaf $\mathcal{G} \in \cg X$, an open subscheme $j: U \hookrightarrow X$, and a subsheaf $\mathcal{F}_1 \subset \mathcal{G}|_U$ with $\mathcal{F}_1 \in {}_q\cgl Uw$. By replacing $U$ by a smaller open subscheme, we may assume that $\mathcal{F}_1 \in \cgl U{q(x)+w}$, where $x$ is a generic point of $U$. Then $\mathcal{F}_1$ is a subsheaf of $\sigma_{\le q(x)+w}\mathcal{G}|_U$, and standard arguments show that there is a subsheaf $\mathcal{F} \subset \sigma_{\le q(x)+w}\mathcal{G}$ supported on $\overline U$ such that $\mathcal{F}|_U \cong \mathcal{F}_1$. The monotonicity assumption then implies that $\mathcal{F} \in {}_q\cgl Xw$. This reasoning can be substituted for invocations of Lemma~\ref{lem:q-ext} for ${}_q\cgl Xw$. Similarly, if $q$ is \emph{comonotone}, meaning that $\hat q$ is monotone, then the reasoning above can replace invocations of Lemma~\ref{lem:q-ext} for the category ${}_{\hat q}\cgl Xw$. The proof of Theorem~\ref{thm:main} uses Lemma~\ref{lem:q-ext} in both these ways. We thus obtain the following result: suppose $X$ is a scheme satisfying the assumptions of Section~\ref{sect:stagt}, equipped with an $s$-structure. 
In particular, we do not assume that $G$ acts with finitely many orbits, or that the $s$-structure is recessed. If $q: X^{\mathrm{gen}} \to \mathbb{Z}$ is both monotone and comonotone, then the collection of subcategories $(\{{}_q\dsl Xw\}, \{{}_q\dsg Xw\})_{w \in \mathbb{Z}}$ is a bounded, nondegenerate HLR baric structure on $X$. \end{rmk} \section{Multiplicative Baric Structures and $s$-structures} \label{sect:mult} In this section we study the relationship between multiplicative baric structures on the triangulated category $\dgb X$ and $s$-structures on the abelian category $\cg X$. The authors had originally hoped that under appropriate conditions the two notions would be equivalent, and that the developments in Sections~\ref{sect:stagt} and~\ref{sect:baric-coh2} could be simplified by replacing the latter concept with the former. In other words, the hope was that there would be a one-to-one correspondence between multiplicative HLR baric structures and $s$-structures on a $G$-scheme $X$. This turns out not to be quite correct. Rather, we prove here that there is a one-to-one correspondence between multiplicative baric structures and a certain class of \emph{pre}-$s$-structures, including all $s$-structures. (A pre-$s$-structure is a collection of subcategories of $\cg X$ satisfying the first six of the ten axioms for an $s$-structure in~\cite{a}.) It would be interesting to look for an additional axiom on multiplicative baric structures that is satisfied precisely by those baric structures corresponding to $s$-structures, but we have not pursued this here. We say that a baric structure $\schemebaric X$ is \emph{multiplicative} if either of the following two equivalent conditions holds: \begin{enumerate} \item If $\mathcal{F} \in \dsl Xw$ and $\mathcal{G} \in \dsl Xv$, then $\mathcal{F} \Lotimes \mathcal{G} \in \dsml X{w+v}$. \item If $\mathcal{F} \in \dsl Xw$ and $\mathcal{G} \in \dsg Xv$, then $\cRHom(\mathcal{F},\mathcal{G}) \in \dspg X{v-w}$.
\end{enumerate} \begin{thm}\label{thm:mult} Suppose $\schemebaric X$ is a multiplicative baric structure on $X$. Then the categories \begin{align*} \cgl Xw &= \cg X \cap \dsl Xw, \\ \cgg Xw &= \{ \mathcal{F} \in \cg X \mid \text{$\Hom(\mathcal{G},\mathcal{F}) = 0$ for all $\mathcal{G} \in \cgl X{w-1}$} \} \end{align*} constitute a pre-$s$-structure on $X$. Conversely, given an $s$-structure $(\{\cgl Xw\}, \{\cgg Xw\})_{w\in\mathbb{Z}}$ on a scheme $X$ with finitely many $G$-orbits, the categories \begin{align*} \dsl Xw &= \{ \mathcal{F} \in \dgb X \mid \text{$h^k(\mathcal{F}) \in \cgl Xw$ for all $k \in \mathbb{Z}$} \}, \\ \dsg Xw &= \{ \mathcal{F} \in \dgb X \mid \text{$\Hom(\mathcal{G},\mathcal{F}) = 0$ for all $\mathcal{G} \in \dsl X{w-1}$} \} \end{align*} constitute a multiplicative baric structure on $X$. \end{thm} \begin{proof} Suppose first that $\schemebaric X$ is a multiplicative baric structure on $X$. To show that the categories above constitute a pre-$s$-structure, we must verify axioms (S1)--(S6) from~\cite{a}. (The reader is referred to~\cite{a} for the statements of these axioms.) Axioms (S2) and (S3) are clear from the definitions, and axiom (S1) follows from the fact that $\schemebaric X$ is compatible with the standard $t$-structure. Let us prove axiom (S4). Let $\mathcal{F}$ be an object of $\cg X$. Since $\mathcal{F}$ is noetherian, and $\cgl Xw$ is a Serre subcategory, there is a largest subobject $\mathcal{F}' \subset \mathcal{F}$ belonging to $\cgl Xw$. Then $\mathcal{F}/\mathcal{F}'$ must belong to $\cgg X{w+1}$: otherwise, there is a nonzero map $\mathcal{G} \to \mathcal{F}/\mathcal{F}'$ with $\mathcal{G} \in \cgl Xw$, whose image $\mathcal{I} \neq 0$ belongs to $\cgl Xw$; but then the inverse image of $\mathcal{I}$ in $\mathcal{F}$ is an extension of $\mathcal{I}$ by $\mathcal{F}'$, so it belongs to $\cgl Xw$ and strictly contains the maximal $\mathcal{F}'$, a contradiction.
Axiom (S5) follows from the fact that the baric structure on $\dgb X$ is bounded, and Axiom (S6) follows from the multiplicativity of the baric structure and the fact that for $\mathcal{F}, \mathcal{G} \in \cg X$, we have $\mathcal{F} \otimes \mathcal{G} \cong h^0(\mathcal{F} \Lotimes \mathcal{G})$. Now, suppose we are given an $s$-structure $(\{\cgl Xw\}, \{\cgg Xw\})_{w \in \mathbb{Z}}$. Let $\boldsymbol{0}$ denote the constant function $X^{\mathrm{gen}} \to \mathbb{Z}$ of value $0$. We claim that ${}_{\bz}\cgl Xw = \cgl Xw$. It is clear from the definition that $\cgl Xw \subset {}_{\bz} \cgl Xw$. Conversely, if $x \in X^{\mathrm{gen}}$ is a generic point of the support of an object $\mathcal{F} \notin \cgl Xw$, it follows from the gluing theorem for $s$-structures~\cite[Theorem~5.3]{a} that there is no open subscheme $V \subset \overline{Gx}$ such that the restriction of $\mathcal{F}$ to $V$ lies in $\cgl Vw$, so $\mathcal{F} \notin {}_{\bz} \cgl Xw$. Since ${}_{\bz}\cgl Xw = \cgl Xw$, we see that the categories $\schemebaric X$ defined in the statement of the theorem coincide with the baric structure constructed in Theorem~\ref{thm:main} by taking $q = \boldsymbol{0}$. The fact that this baric structure is multiplicative is a consequence of Proposition~\ref{prop:baric-mult} below. \end{proof} \begin{prop}\label{prop:baric-mult} Let $X$ be a scheme with finitely many $G$-orbits, and let $p,q: X^{\mathrm{gen}} \to \mathbb{Z}$ be functions satisfying~\eqref{eqn:g-const}. Suppose $\mathcal{F} \in {}_p\dsml Xw$. \begin{enumerate} \item If $\mathcal{G} \in {}_q\dsml Xv$, then $\mathcal{F} \Lotimes \mathcal{G} \in {}_{p+q}\dsml X{w+v}$. \item If $\mathcal{G} \in {}_q\dspg Xv$, then $\cRHom(\mathcal{F},\mathcal{G}) \in {}_{\hat p + q}\dspg X{v-w}$. \end{enumerate} \end{prop} \begin{proof} (1)~We will show by noetherian induction that $\Hom(\mathcal{F} \Lotimes \mathcal{G}, \mathcal{H}) = 0$ for all $\mathcal{H} \in {}_{p+q}\dsg X{w+v+1}$. 
Assume the result is known for all proper closed subschemes of $X$, and let $C \subset X$ be an open orbit. Let $Z$ denote the closed subset $X \smallsetminus C$, and consider the exact sequence \[ \lim_{\substack{\to \\ Z'}} \Hom(Li^*(\mathcal{F} \Lotimes \mathcal{G}), Ri^!\mathcal{H}) \to \Hom(\mathcal{F} \Lotimes \mathcal{G}, \mathcal{H}) \to \Hom((\mathcal{F} \Lotimes \mathcal{G})|_C,\mathcal{H}|_C), \] where $i: Z' \hookrightarrow X$ ranges over all closed subscheme structures on $Z$. Now, $Li^*(\mathcal{F} \Lotimes \mathcal{G}) \cong Li^*\mathcal{F} \Lotimes Li^*\mathcal{G}$. We have $Li^*\mathcal{F} \in {}_p\dsml {Z'}w$ and $Li^*\mathcal{G} \in {}_q\dsml {Z'}v$ by Lemma~\ref{lem:baric-res}, and then $Li^*\mathcal{F} \Lotimes Li^*\mathcal{G} \in {}_{p+q}\dsml {Z'}{w+v}$ by the inductive assumption. We also have $Ri^!\mathcal{H} \in {}_{p+q}\dspg {Z'}{w+v+1}$, so the first term above clearly vanishes. It now suffices to show that $(\mathcal{F} \Lotimes \mathcal{G})|_C \in {}_{p+q}\dsml C{w+v}$: that implies the vanishing of the last term in the exact sequence above, and hence of the middle term as well. Recall that on a single $G$-orbit, the tensor product functor is exact (because all objects of $\cg C$ are locally free), so there is a natural isomorphism \[ h^r((\mathcal{F} \Lotimes \mathcal{G})|_C) \cong \bigoplus_{i+j = r} h^i(\mathcal{F}|_C) \otimes h^j(\mathcal{G}|_C). \] We know that $h^i(\mathcal{F}|_C) \in \cgl C{w+p(C)}$ for all $i$, and that $h^j(\mathcal{G}|_C) \in \cgl C{v+q(C)}$ for all $j$. It follows that $h^r((\mathcal{F} \Lotimes \mathcal{G})|_C) \in \cgl C{w+v+p(C)+q(C)}$ for all $r$, and hence that $(\mathcal{F} \Lotimes \mathcal{G})|_C \in {}_{p+q}\dsml C{w+v}$, as desired. (2)~Consider $\mathbb{D}\mathcal{G} \in {}_{\hat q}\dsml X{-v}$. By part~(1), $\mathcal{F} \Lotimes \mathbb{D}\mathcal{G} \in {}_{p + \hat q}\dsml X{w-v}$. Since $\cRHom(\mathcal{F},\mathcal{G}) \cong \mathbb{D}(\mathcal{F} \Lotimes \mathbb{D}\mathcal{G})$, the result follows.
\end{proof} \section{Staggered Sheaves} \label{sect:stag2} In this section, we retain the assumptions that $G$ acts on $X$ with finitely many orbits, and that $X$ is equipped with a recessed $s$-structure. Given a function $q: X^{\mathrm{gen}} \to \mathbb{Z}$, let us define full subcategories of $\dgm X$ and $\dgp X$ as follows: \begin{align*} {}^q\dgml X0 &= \{ \mathcal{F} \in \dgm X \mid \text{$h^k(\mathcal{F}) \in {}_q\dsl X{-k}$ for all $k \in \mathbb{Z}$} \}, \\ {}^q\dgpg X0 &= \{ \mathcal{G} \in \dgp X \mid \text{$\Hom(\mathcal{F}[1],\mathcal{G}) = 0$ for all $\mathcal{F} \in {}^q\dgml X0$}\}. \end{align*} We also define bounded versions of these categories: \[ {}^q\dgbl X0 = {}^q\dgml X0 \cap \dgb X \quad\text{and}\quad {}^q\dgbg X0 = {}^q\dgpg X0 \cap \dgb X. \] Let $\mathcal{G} \in {}^q\dgpg X0$. There is some integer $n$ such that $\mathcal{G} \in \dgpg Xn$, and then for any $\mathcal{F} \in \dgm X$, we have $\Hom(\mathcal{F}[1],\mathcal{G}) \cong \Hom(\tau^{\ge n}(\mathcal{F}[1]),\mathcal{G})$, with $\tau^{\ge n}(\mathcal{F}[1]) \in \dgb X$. Thus, the definition of ${}^q\dgpg X0$ could be changed to require $\Hom(\mathcal{F}[1],\mathcal{G})$ to vanish only when $\mathcal{F} \in {}^q\dgbl X0$. By Proposition~\ref{prop:compat}\eqref{it:ssorth}, it follows that \[ {}^q\dgbg X0 = \{ \mathcal{G} \in \dgb X \mid \text{${}_q\beta_{\le k}\mathcal{G} \in \dgbg X{-k}$ for all $k \in \mathbb{Z}$} \}. \] The categories ${}^q\dgbl X0$ and ${}^q\dgbg X0$ are none other than the categories associated in Definition~\ref{defn:stag} to the standard $t$-structure on $\dgb X$ with the baric structure $(\{{}_q\dsl Xw\}, \{{}_q\dsg Xw\})_{w\in \mathbb{Z}}$. \begin{thm}\label{thm:stag-coh} The categories $({}^q\dgbl X0,{}^q\dgbg X0)$ constitute a bounded, nondegenerate $t$-structure on $\dgb X$. 
\end{thm} \begin{defn} The $t$-structure $({}^q\dgbl X0, {}^q\dgbg X0)$ is called the \emph{staggered $t$-structure of perversity $q$.} Its heart, denoted ${}^q\mathcal{M}(X)$, is the category of \emph{staggered sheaves}. \end{defn} This terminology and notation are consistent with those of~\cite{a} by Lemma~\ref{lem:dgml}. That is, if $q$ happens to be a perversity function in the sense of~\cite{a}, then the $t$-structure constructed here coincides with the $t$-structure associated to $q$ in~\cite{a}. However, neither this theorem nor the main result of~\cite{a} encompasses the other: in~\cite{a}, no assumptions were made on the number of orbits or the $s$-structure; here, no restrictions are imposed on the function $q: X^{\mathrm{gen}} \to \mathbb{Z}$. Note that if $q$ happens to be a perversity function in the sense of~\cite{a}, then Theorem~\ref{thm:stag-coh} follows immediately from Lemma~\ref{lem:dgml}, but in general, Theorem~\ref{thm:stag-coh} produces $t$-structures that are not given by the construction of~\cite{a}. \begin{proof}[Proof of Theorem~\ref{thm:stag-coh}] We will prove this theorem by invoking Theorem~\ref{thm:stag-gen}. To that end, we must define an invariant $\mu(\mathcal{F})$ for any object $\mathcal{F} \in \dgb X$ satisfying the hypotheses of that theorem. For any nonzero object $\mathcal{F} \in \dgb X$, let \[ m(\mathcal{F}) = \min \{ k \in \mathbb{Z} \mid h^k(\mathcal{F}) \ne 0 \}. \] Let $C$ be the maximum value of $\cod Z$ as $Z$ ranges over all closed subschemes of $X$. (Of course, $\cod Z$ takes only finitely many distinct values, since $X$ has finite Krull dimension.) Note that $\mathbb{D}(\dgbg X0) \subset \dgbl XC$, and, more generally, $\mathbb{D}(\dgbg Xn) \subset \dgbl X{C-n}$. Let \[ \mu(\mathcal{F}) = \begin{cases} C + 1 - m(\mathcal{F}) - m(\mathbb{D}\mathcal{F}) & \text{if $\mathcal{F} \ne 0$,} \\ 0 & \text{if $\mathcal{F} = 0$.} \end{cases} \] We first prove that $\mu(\mathcal{F}) > 0$ whenever $\mathcal{F} \ne 0$.
If $m(\mathcal{F}) = n$, then $\mathcal{F} \in \dgbg Xn$, so $\mathbb{D}\mathcal{F} \in \dgbl X{C-n}$, and in particular, $m(\mathbb{D}\mathcal{F}) \le C-n$. It follows that \[ \mu(\mathcal{F}) = C + 1 - m(\mathcal{F}) - m(\mathbb{D}\mathcal{F}) \ge C + 1 - n - (C-n) = 1, \] as desired. Next, the left $t$-exactness of ${}_q\beta_{\le -n}$ implies that if $m(\mathcal{F}) = n$, then ${}_q\beta_{\le -n}\mathcal{F} \in \dgbg Xn$, so $m({}_q\beta_{\le -n}\mathcal{F}) \ge n$. Now, consider the distinguished triangle \[ \mathbb{D}{}_q\beta_{\ge -n+1}\mathcal{F} \to \mathbb{D}\mathcal{F} \to \mathbb{D}{}_q\beta_{\le -n}\mathcal{F} \to. \] Since $\mathbb{D}{}_q\beta_{\ge -n+1}\mathcal{F} \in {}_{\hat q}\dsl X{n-1}$ and $\mathbb{D}{}_q\beta_{\le -n}\mathcal{F} \in {}_{\hat q}\dsg Xn$, we have canonical isomorphisms \[ \mathbb{D}{}_q\beta_{\ge -n+1}\mathcal{F} \cong {}_{\hat q}\beta_{\le n-1}\mathbb{D}\mathcal{F} \qquad\text{and}\qquad \mathbb{D}{}_q\beta_{\le -n}\mathcal{F} \cong {}_{\hat q}\beta_{\ge n}\mathbb{D}\mathcal{F}. \] Now, the left $t$-exactness of ${}_{\hat q}\beta_{\ge n}$ shows that $m(\mathbb{D}{}_q\beta_{\le -n}\mathcal{F}) \ge m(\mathbb{D}\mathcal{F})$. Finally, consider $\tau^{\ge n+1}{}_q\beta_{\le -n}\mathcal{F}$. Clearly, \[ m(\tau^{\ge n+1}{}_q\beta_{\le -n}\mathcal{F}) \ge n+1 > m(\mathcal{F}). \] Now, let $k = m(\mathbb{D}{}_q\beta_{\le -n}\mathcal{F})$. By definition, ${}_q\beta_{\le -n}\mathcal{F} \in \mathbb{D}(\dgbg Xk)$. The $t$-structure $(\mathbb{D}(\dgbg Xk), \mathbb{D}(\dgbl Xk))$, which is dual to (a shift of) the standard $t$-structure, is an example of a perverse coherent $t$-structure~\cite{bez:pcs}, and therefore of a staggered $t$-structure in the sense of~\cite{a}, so Corollary~\ref{cor:stag-trunc} tells us that $\mathbb{D}(\dgbg Xk)$ is stable under $\tau^{\ge n+1}$. 
In particular, $\tau^{\ge n+1}{}_q\beta_{\le -n}\mathcal{F} \in \mathbb{D}(\dgbg Xk)$, so \[ m(\mathbb{D}\tau^{\ge n+1}{}_q\beta_{\le -n}\mathcal{F}) \ge k = m(\mathbb{D}{}_q\beta_{\le -n}\mathcal{F}) \ge m(\mathbb{D}\mathcal{F}). \] We conclude that if $m(\mathcal{F}) = n$, then $\mu(\tau^{\ge n+1}{}_q\beta_{\le -n}\mathcal{F}) < \mu(\mathcal{F})$. Thus, the hypotheses of Theorem~\ref{thm:stag-gen} are satisfied. \end{proof} \begin{rmk}\label{rmk:ineq2} If $G$ does not act with finitely many orbits, or if the $s$-structure is not recessed, Remark~\ref{rmk:ineq} tells us that $(\{{}_q\dsl Xw\}, \{{}_q\dsg Xw\})_{w\in\mathbb{Z}}$ still constitutes a baric structure if we require $q$ to be monotone and comonotone. The proof of Theorem~\ref{thm:stag-coh} goes through in this setting. However, the conditions imposed on $q$ are more restrictive than the conditions imposed on perversity functions in~\cite{a}, so in this case, the theorem we obtain is actually just a special case of~\cite[Theorem~7.4]{a}. Similar remarks apply to Theorems~\ref{thm:stag-dual} and~\ref{thm:stag-fl} below; {\it cf.}~\cite[Theorems~9.7 and~9.9]{a}. \end{rmk} Next, we study how the duality functor $\mathbb{D}$ interacts with the staggered $t$-structure. Let $j: U \hookrightarrow X$ be an open subscheme. The following functions defined in terms of $q: X^{\mathrm{gen}} \to \mathbb{Z}$ will be useful in the sequel. \begin{equation}\label{eqn:qpm} {}^\flat q(x) = \begin{cases} q(x) & \text{if $x \in U^{\mathrm{gen}}$,} \\ q(x) - 1 & \text{if $x \notin U^{\mathrm{gen}}$,} \end{cases} \qquad {}^\sharp q(x) = \begin{cases} q(x) & \text{if $x \in U^{\mathrm{gen}}$,} \\ q(x) + 1 & \text{if $x \notin U^{\mathrm{gen}}$.} \end{cases} \end{equation} \begin{lem}\label{lem:stag-res} Let $j: U \hookrightarrow X$ be the inclusion of an open subscheme, and $i: Z \hookrightarrow X$ the inclusion of a closed subscheme.
Then: \begin{enumerate} \item $j^*$ takes ${}^q\dgml X0$ to ${}^q\dgml U0$ and ${}^q\dgpg X0$ to ${}^q\dgpg U0$. \item $Li^*$ takes ${}^q\dgml X0$ to ${}^q\dgml Z0$. \item $Ri^!$ takes ${}^q\dgpg X0$ to ${}^q\dgpg Z0$. \item $i_*$ takes ${}^q\dgml Z0$ to ${}^q\dgml X0$ and ${}^q\dgpg Z0$ to ${}^q\dgpg X0$. \end{enumerate} \end{lem} \begin{proof} We will prove the parts of this lemma in the order~(2), (4), (3), (1). (2)~First, suppose $\mathcal{F} \in {}^q\dgbl X0$ is concentrated in a single degree, say $\mathcal{F} \cong h^k(\mathcal{F})[-k]$. Then $\mathcal{F} \in {}_q\dsl X{-k}$, so by Lemma~\ref{lem:baric-res}, $Li^*\mathcal{F} \in {}_q\dsml Z{-k}$. We also clearly have $Li^*\mathcal{F} \in \dgml Zk$, so it follows that $Li^*\mathcal{F} \in {}^q\dgml Z0$. Next, for general $\mathcal{F} \in {}^q\dgbl X0$, induction on the number of nonzero cohomology sheaves (together with the fact that ${}^q\dgml Z0$ is stable under extensions) allows us to reduce to the case already considered, and we conclude that $Li^*$ takes ${}^q\dgbl X0$ to ${}^q\dgml Z0$. Finally, if $\mathcal{F} \in {}^q\dgml X0$, consider the distinguished triangle \[ Li^*\tau^{\le k-1}\mathcal{F} \to Li^*\mathcal{F} \to Li^*\tau^{\ge k}\mathcal{F} \to. \] Since $\tau^{\ge k}\mathcal{F} \in {}^q\dgbl X0$, we know that $Li^*\tau^{\ge k}\mathcal{F} \in {}^q\dgml Z0$. Moreover, we see from the long exact cohomology sequence associated to this distinguished triangle that $h^k(Li^*\mathcal{F}) \cong h^k(Li^*\tau^{\ge k}\mathcal{F}) \in {}_q\dsl Z{-k}$. Thus, $Li^*\mathcal{F} \in {}^q\dgml Z0$, as desired. (4)~Because $i_*$ is $t$-exact (with respect to the standard $t$-structure), and because ${}^q\dgml Z0$ and ${}^q\dgml X0$ are defined by conditions on cohomology sheaves, it follows from Lemma~\ref{lem:baric-res} that $i_*$ takes ${}^q\dgml Z0$ to ${}^q\dgml X0$. (The same argument shows that $j^*$ takes ${}^q\dgml X0$ to ${}^q\dgml U0$.)
On the other hand, if $\mathcal{F} \in {}^q\dgpg Z0$, then for any $\mathcal{G} \in {}^q\dgml X0$, we have \[ \Hom(\mathcal{G}[1], i_*\mathcal{F}) \cong \Hom(Li^*\mathcal{G}[1], \mathcal{F}) = 0, \] where, in the last step, we have used the fact that $Li^*\mathcal{G} \in {}^q\dgml Z0$. Thus, $i_*\mathcal{F} \in {}^q\dgpg X0$. (3)~Let $\mathcal{F} \in {}^q\dgpg X0$. For any $\mathcal{G} \in {}^q\dgml Z0$, we have \[ \Hom(\mathcal{G}[1], Ri^!\mathcal{F}) \cong \Hom(i_*\mathcal{G}[1], \mathcal{F}) = 0. \] Here, we have used the fact that $i_*\mathcal{G} \in {}^q\dgml X0$. Thus, $Ri^!\mathcal{F} \in {}^q\dgpg Z0$. (1)~It was observed in the proof of part~(4) that $j^*$ takes ${}^q\dgml X0$ to ${}^q\dgml U0$. Next, suppose $\mathcal{F} \in {}^q\dgpg X0$. To show that $j^*\mathcal{F} \in {}^q\dgpg U0$, it suffices, as noted at the beginning of this section, to show that $\Hom(\mathcal{G}[1],j^*\mathcal{F}) = 0$ for all $\mathcal{G} \in {}^q\dgbl U0$. But since ${}^q\dgbl U0$ is stable under $\tau^{\ge n}$ and $\tau^{\le n}$, we can further reduce to showing that $\Hom(\mathcal{G}[1],j^*\mathcal{F}) = 0$ whenever $\mathcal{G}$ is an object of ${}^q\dgbl U0$ that is concentrated in a single degree. Suppose $\mathcal{G} \cong h^k(\mathcal{G})[-k]$. We then have $h^k(\mathcal{G}) \cong \mathcal{G}[k] \in {}_q\cgl U{-k}$. Let ${}^\flat q: X^{\mathrm{gen}} \to \mathbb{Z}$ be as in~\eqref{eqn:qpm}. Then of course ${}_q\cgl U{-k} = {}_{\flat q}\cgl U{-k}$, and by Lemma~\ref{lem:q-ext}, $\mathcal{G}[k]$ may be extended to a sheaf $\mathcal{G}' \in {}_{\flat q}\cgl X{-k}$. Consider the exact sequence \[ \Hom(\mathcal{G}'[-k+1],\mathcal{F}) \to \Hom(j^*\mathcal{G}'[-k+1],j^*\mathcal{F}) \to \lim_{\substack{\to \\ Z'}} \Hom(L\kappa^*_{Z'}\mathcal{G}'[-k], R\kappa^!_{Z'}\mathcal{F}), \] where $\kappa_{Z'}: Z' \hookrightarrow X$ ranges over all closed subscheme structures on the complement of $U$.
We clearly have that $\mathcal{G}'[-k] \in {}^{\flat q}\dgbl X0 \subset {}^q\dgbl X0$, so $\Hom(\mathcal{G}'[-k+1],\mathcal{F}) = 0$. We already know that $R\kappa^!_{Z'}\mathcal{F} \in {}^q\dgpg {Z'}0$, and that $L\kappa^*_{Z'}\mathcal{G}'[-k] \in {}^{\flat q}\dgml {Z'}0 = {}^q\dgml {Z'}{-1}$, so the last term above vanishes as well. Therefore, $\Hom(j^*\mathcal{G}'[-k+1], j^*\mathcal{F}) \cong \Hom(\mathcal{G}[1],j^*\mathcal{F}) = 0$, so $j^*\mathcal{F} \in {}^q\dgpg U0$, as desired. \end{proof} \begin{prop} We have $\mathbb{D}({}^q\dgml X0) = {}^{\bar q}\dgpg X0$. \end{prop} \begin{proof} We proceed by noetherian induction, and assume the result is known for all proper closed subschemes of $X$. To show that $\mathbb{D}({}^q\dgml X0) \subset {}^{\bar q}\dgpg X0$, let us begin by considering the special case where $\mathcal{F} \in {}^q\dgml X0$ is concentrated in a single degree, say $\mathcal{F} \cong h^k(\mathcal{F})[-k]$. Then $\mathcal{F} \in {}_q\dsl X{-k}$, so $\mathbb{D}\mathcal{F} \in {}_{\hat q} \dsg Xk$. Choose an open orbit $C \subset X$. By~\cite[Lemma~6.6]{a}, $\mathbb{D}\mathcal{F}|_C$ is concentrated in a single degree, {\it viz.}, in degree $\cod \overline C -k$. We claim that $\mathbb{D}\mathcal{F}|_C \in {}^{\bar q}\dgpg C0$. To prove this, it suffices to show that if $\mathcal{G} \in {}^{\bar q}\dgbl C0$, then $\Hom(\mathcal{G}[1],\mathbb{D}\mathcal{F}|_C) = 0$. Consider the exact sequence \begin{multline*} \Hom((\tau^{\ge \cod \overline C -k +1}\mathcal{G})[1],\mathbb{D}\mathcal{F}|_C) \to \Hom(\mathcal{G}[1],\mathbb{D}\mathcal{F}|_C) \to \\ \Hom((\tau^{\le \cod \overline C - k}\mathcal{G})[1],\mathbb{D}\mathcal{F}|_C). \end{multline*} The last term clearly vanishes because $\mathbb{D}\mathcal{F}|_C \in \dgbg C{\cod \overline C -k}$. On the other hand, note that $\tau^{\ge \cod \overline C -k +1}\mathcal{G} \in {}_{\bar q}\dsl C{k-\cod \overline C -1}$.
Over the single orbit $C$, the functions $\bar q$ and $\hat q$ differ simply by the constant $\cod \overline C$, and thus ${}_{\bar q}\dsl C{k-\cod \overline C-1} = {}_{\hat q}\dsl C{k-1}$. Since $\mathbb{D}\mathcal{F}|_C \in {}_{\hat q} \dsg Ck$, the first term above vanishes, and hence so does the middle term. We have shown that $\mathbb{D}\mathcal{F}|_C \in {}^{\bar q}\dgpg C0$. Next, let $\mathcal{G} \in {}^{\bar q}\dgbl X0$, and consider the exact sequence \[ \lim_{\substack{\to \\ Z'}} \Hom(Li^*_{Z'}\mathcal{G}[1], Ri^!_{Z'}\mathbb{D}\mathcal{F}) \to \Hom(\mathcal{G}[1],\mathbb{D}\mathcal{F}) \to \Hom(\mathcal{G}[1]|_C,\mathbb{D}\mathcal{F}|_C), \] where $i_{Z'}:Z' \hookrightarrow X$ ranges over all closed subscheme structures on the complement of $C$. We have just seen that the last term vanishes. Also, $Li^*_{Z'}\mathcal{G} \in {}^{\bar q}\dgml {Z'}0$ and $Ri^!_{Z'}\mathbb{D}\mathcal{F} \cong \mathbb{D}(Li^*_{Z'}\mathcal{F}) \in {}^{\bar q}\dgpg {Z'}0$ by Lemma~\ref{lem:stag-res} and the inductive assumption, so the first term vanishes as well. Therefore, the middle term vanishes, and we conclude that for $\mathcal{F} \in {}^q\dgml X0$ concentrated in a single degree, we have $\mathbb{D}\mathcal{F} \in {}^{\bar q}\dgbg X0$. It follows by induction on the number of nonzero cohomology sheaves that $\mathbb{D}$ also takes all objects of the bounded category ${}^q\dgbl X0$ to ${}^{\bar q}\dgbg X0$. Finally, let us consider a general object $\mathcal{F} \in {}^q\dgml X0$. We wish to show that $\Hom(\mathcal{G}[1], \mathbb{D}\mathcal{F}) = 0$ for all $\mathcal{G} \in {}^{\bar q}\dgbl X0$. By the previous paragraph, $\mathbb{D}\mathcal{G} \in {}^q\dgbg X0 \subset {}^q\dgpg X0$, so \[ \Hom(\mathcal{G}[1],\mathbb{D}\mathcal{F}) \cong \Hom(\mathcal{F}, \mathbb{D}(\mathcal{G}[1])) \cong \Hom(\mathcal{F}[1],\mathbb{D}\mathcal{G}) = 0. \] Thus, $\mathbb{D}({}^q\dgml X0) \subset {}^{\bar q}\dgpg X0$. 
The argument for the opposite inclusion is similar, and we again use noetherian induction, but we cannot begin with the case of an object concentrated in one degree, since ${}^{\bar q}\dgpg X0$ is not stable under the standard truncation functors. The bounded category ${}^{\bar q}\dgbg X0$ is, however, stable under the baric truncation functors ${}_{\bar q}\beta_{\le k}$ and ${}_{\bar q}\beta_{\ge k}$. Suppose, then, that $\mathcal{F} \in {}^{\bar q}\dgbg X0$ is ``baric-pure'': that is, $\mathcal{F} \in {}_{\bar q}\dsl Xk \cap {}_{\bar q}\dsg Xk$ for some $k$. If we prove that $\mathbb{D}\mathcal{F} \in {}^q\dgbl X0$, then it will follow by induction on ``baric length'' that $\mathbb{D}$ takes all objects of ${}^{\bar q}\dgbg X0$ to ${}^q\dgbl X0$. The assumptions on $\mathcal{F}$ imply that $\mathcal{F} \in \dgbg X{-k}$. Once again, let $C \subset X$ be an open orbit. It follows from~\cite[Lemma~6.6]{a} that $\mathbb{D}\mathcal{F}|_C \in \dgbl C{\cod \overline C +k}$. We also know that $\mathbb{D}\mathcal{F} \in {}_{\Hat{\Bar q}}\dsl X{-k}$, where \[ \Hat{\Bar q}(C) = \alt \overline C - (\alt \overline C + \cod \overline C - q(C)) = q(C) - \cod \overline C. \] In particular, ${}_{\Hat{\Bar q}}\dsl C{-k} = {}_q\dsl C{-\cod \overline C-k}$. Since $\mathbb{D}\mathcal{F}|_C \in \dgbl C{\cod \overline C +k}$, we see that $\mathbb{D}\mathcal{F}|_C \in {}^q\dgbl C0$. To show that $\mathbb{D}\mathcal{F} \in {}^q\dgbl X0$, it suffices, by Proposition~\ref{prop:compat}, to show that $\Hom(\mathbb{D}\mathcal{F}[1],\mathcal{G}) = 0$ for all $\mathcal{G} \in {}^q\dgbg X0$. Consider the exact sequence \[ \lim_{\substack{\to \\ Z'}} \Hom(Li^*_{Z'}\mathbb{D}\mathcal{F}[1], Ri^!_{Z'}\mathcal{G}) \to \Hom(\mathbb{D}\mathcal{F}[1],\mathcal{G}) \to \Hom(\mathbb{D}\mathcal{F}[1]|_C,\mathcal{G}|_C), \] where $i_{Z'}: Z' \hookrightarrow X$ ranges over all closed subscheme structures on the complement of $C$. 
The last term above vanishes because $\mathbb{D}\mathcal{F}|_C \in {}^q\dgbl C0$. We also have $Li^*_{Z'}\mathbb{D}\mathcal{F} \cong \mathbb{D}(Ri^!_{Z'}\mathcal{F}) \in {}^q\dgml {Z'}0$ and $Ri^!_{Z'}\mathcal{G} \in {}^q\dgpg {Z'}0$ by Lemma~\ref{lem:stag-res} and the inductive assumption. Hence, the first term in the sequence above vanishes, so the middle term vanishes as well, and we conclude that $\mathbb{D}\mathcal{F} \in {}^q\dgbl X0$. Thus, $\mathbb{D}({}^{\bar q}\dgbg X0) \subset {}^q\dgbl X0$. Finally, we must consider a general object $\mathcal{F} \in {}^{\bar q}\dgpg X0$. Showing that $\mathbb{D}\mathcal{F} \in {}^q\dgml X0$ is equivalent to showing that $\tau^{\ge k}\mathbb{D}\mathcal{F} \in {}^q\dgbl X0$ for all $k$. If the latter condition fails for some $k$, then there exists an object $\mathcal{G} \in {}^q\dgbg X1$ such that $\Hom(\tau^{\ge k}\mathbb{D}\mathcal{F}, \mathcal{G}) \ne 0$. By replacing $k$ by a smaller integer if necessary, we may assume that $\mathcal{G} \in \dgbg Xk$. We then have \[ \Hom(\tau^{\ge k}\mathbb{D}\mathcal{F}, \mathcal{G}) \cong \Hom(\mathbb{D}\mathcal{F},\mathcal{G}) \cong \Hom(\mathbb{D}\mathcal{G},\mathcal{F}) \ne 0. \] By exchanging the roles of $q$ and $\bar q$ in the previous paragraph, we see that $\mathbb{D}\mathcal{G} \in {}^{\bar q}\dgbl X{-1}$, but this contradicts the fact that $\mathcal{F} \in {}^{\bar q}\dgpg X0$. Therefore, $\mathbb{D}\mathcal{F} \in {}^q\dgml X0$, and $\mathbb{D}({}^{\bar q}\dgpg X0) = {}^q\dgml X0$, as desired. \end{proof} The next theorem follows immediately from the last proposition. \begin{thm}\label{thm:stag-dual} The dual of the staggered $t$-structure $({}^q\dgbl X0, {}^q\dgbg X0)$ is the staggered $t$-structure $({}^{\bar q}\dgbl X0, {}^{\bar q}\dgbg X0)$. 
In particular, in the case where every orbit $C \subset X$ has even staggered codimension, and $q$ is the function $q(C) = {\textstyle\frac 12}\scod C$, the $t$-structure $({}^q\dgbl X0, {}^q\dgbg X0)$ is self-dual.\qed \end{thm} We conclude with a study of simple objects in ${}^q\mathcal{M}(X)$. The statements below and their proofs are very similar to those in~\cite[Section~3.2]{bez:pcs} or~\cite[Section~9]{a}, and most details of the proofs will be omitted. Instead, each statement is followed by brief remarks clarifying the relationship to statements in~\cite{bez:pcs} or~\cite{a}. \begin{prop}\label{prop:ic} Let $j: U \hookrightarrow X$ be a dense open subscheme. Given a function $q: X^{\mathrm{gen}} \to \mathbb{Z}$, define $\mathchoice{{}^\flat}{{}^\flat}{{}^\flat}{\flat} q, \mathchoice{{}^\sharp}{{}^\sharp}{{}^\sharp}{\sharp} q: X^{\mathrm{gen}} \to \mathbb{Z}$ as in~\eqref{eqn:qpm}, and define a full subcategory ${}^q\mathcal{M}^{!*}(X) \subset {}^q\mathcal{M}(X)$ by ${}^q\mathcal{M}^{!*}(X) = {}^{\flat q}\dgbl X0 \cap {}^{\sharp q}\dgbg X0$. The functor $j^*$ induces an equivalence of categories ${}^q\mathcal{M}^{!*}(X) \to {}^q\mathcal{M}(U)$. Moreover, objects of ${}^q\mathcal{M}^{!*}(X)$ have no subobjects or quotients in ${}^q\mathcal{M}(X)$ that are supported on $X \smallsetminus U$. \end{prop} \begin{proof}[Remarks on proof] This statement corresponds to~\cite[Theorem~2]{bez:pcs} or~\cite[Proposition~9.2]{a}, but both those statements impose a condition on the function $q$ (denoted $p$ in {\it loc.~cit.}) that is not imposed here. The reason is that the proof requires that the categories $({}^{\flat q}\dgbl X0, {}^{\flat q}\dgbg X0)$ and $({}^{\sharp q}\dgbl X0, {}^{\sharp q}\dgbg X0)$ associated to $\mathchoice{{}^\flat}{{}^\flat}{{}^\flat}{\flat} q$ and $\mathchoice{{}^\sharp}{{}^\sharp}{{}^\sharp}{\sharp} q$ (denoted $p^+$ and $p^-$ in {\it loc.~cit.}) actually constitute $t$-structures. 
In the present paper, Theorem~\ref{thm:stag-coh} tells us that this is the case with no assumptions, whereas in both~\cite{bez:pcs} and~\cite{a}, the $t$-structure is constructed only for $p$ obeying certain inequalities. \end{proof} \begin{defn} The inverse equivalence to that of the preceding proposition, denoted $j_{!*}: {}^q\mathcal{M}(U) \to {}^q\mathcal{M}^{!*}(X)$, is known as the \emph{intermediate-extension functor}. \end{defn} \begin{defn}\label{defn:ic} Let $Y$ be a locally closed subscheme of $X$. Let $h: Y \hookrightarrow \overline Y$ and $\kappa: \overline Y \hookrightarrow X$ denote the inclusion maps. For any $\mathcal{F} \in {}^q\mathcal{M}(Y)$, we define an object of ${}^q\mathcal{M}(X)$ by \[ \mathcal{IC}(\overline Y, \mathcal{F}) = \kappa_*(h_{!*}\mathcal{F}). \] This is called the \emph{(staggered) intersection cohomology complex} associated to $\mathcal{F}$. \end{defn} Recall that the \emph{step} of a coherent sheaf is defined to be the unique integer $w$ (if such an integer exists) such that the sheaf belongs to $\cgl Xw \cap \cgg Xw$. An irreducible vector bundle on an orbit always has a well-defined step. \begin{prop}\label{prop:simple} Let $\mathcal{F} \in {}^q\mathcal{M}(X)$. $\mathcal{F}$ is a simple object if and only if $\mathcal{F} \cong \mathcal{IC}(\overline C, \mathcal{L}[-q(C)+\step \mathcal{L}])$ for some orbit $C \subset X$ and some irreducible vector bundle $\mathcal{L} \in \cg C$. \end{prop} \begin{proof}[Remarks on proof] This statement is analogous to~\cite[Corollary~4]{bez:pcs} and to~\cite[Theorem~9.7]{a}. The main difference is that in~\cite{a}, $\mathcal{F}$ is assumed at the outset to be supported on (a possibly nonreduced subscheme structure on) the closure of one orbit. (The statement of~\cite[Theorem~9.7]{a} also imposes conditions on $q$, but those are unnecessary here for reasons explained in the remarks following Proposition~\ref{prop:ic}.) 
In~\cite{bez:pcs}, it is shown, using Rosenlicht's Theorem, that a simple object must be supported on an orbit closure, but that argument cannot be used here for the reasons given in~\cite[Remark~9.8]{a}. To reduce this statement to one where the proof of~\cite[Theorem~9.7]{a} can be repeated verbatim, we must show by other means that the support of a simple object is an ({\it a priori} possibly nonreduced) orbit closure. Since $X$ is assumed to consist of finitely many $G$-orbits, it suffices to show that the support of a simple object is irreducible. Let $\kappa: X' \hookrightarrow X$ be the scheme-theoretic support of $\mathcal{F}$; that is, $\mathcal{F} \cong \kappa_*\mathcal{F}'$, and the restriction of $\mathcal{F}'$ to any open subscheme of $X'$ is nonzero. Assume $X'$ is reducible; let $i:Z \hookrightarrow X'$ and $i':Z' \hookrightarrow X'$ be proper closed subschemes such that $Z \cup Z' = X'$. Let $U = Z \smallsetminus (Z \cap Z')$ and $U' = Z' \smallsetminus (Z \cap Z')$. Clearly, $U$ and $U'$ are disjoint open subschemes of $X'$. Let $V = U \cup U'$. The natural morphism \[ i_*Ri^!\mathcal{F}'|_V \to \mathcal{F}'|_V \] is the inclusion of the direct summand of $\mathcal{F}'|_V$ supported on $U$. In particular, the above morphism is neither $0$ nor an isomorphism. But it is also the restriction to $V$ of the natural morphism \[ {}^q h^0(i_*Ri^!\mathcal{F}') \to \mathcal{F}', \] so the latter is also neither $0$ nor an isomorphism. Therefore, $\mathcal{F}'$ is not simple, and hence neither is $\mathcal{F}$. \end{proof} \begin{thm}\label{thm:stag-fl} ${}^q\mathcal{M}(X)$ is a finite-length category. \end{thm} \begin{proof}[Remarks on proof] This statement and its proof are identical to those of~\cite[Corollary~5]{bez:pcs} or of~\cite[Theorem~9.9]{a}, except that here, as in Propositions~\ref{prop:ic} and~\ref{prop:simple}, we impose no restrictions on $q$. \end{proof}
\section{Introduction} Magneto-inertial fusion (MIF) is one of the main approaches in inertial-confinement fusion (ICF). Traditional ICF approaches based on laser-driven implosions require high implosion velocities to achieve plasma conditions at stagnation that can produce significant fusion yields in the laboratory. MIF introduces strong magnetic fields in the fuel in order to relax the stringent requirements on high pressures and high implosion velocities. One particular MIF concept is the Magnetized Liner Inertial Fusion (MagLIF) platform,\cite{Slutz:2010hd} which is currently being studied at the Z Pulsed Power facility at Sandia National Laboratories.\cite{Gomez:2014eta,Knapp:2019gf,Gomez:2019bg,Gomez:2020cd,YagerElorriaga:2022cp,Sinars:2020bv} The Z facility delivers a 20-MA electrical current pulse to the cylindrical MagLIF z-pinch, which then implodes under the action of the Lorentz force.\citep{Gomez:2020cd} Since MagLIF utilizes a relatively thick and heavy metallic cylindrical tamper, or liner, the achievable implosion velocities are substantially lower than those achieved in traditional ICF. Therefore, the fuel is not shock-heated; instead, a 2--4-kJ 1-TW laser is used to preheat the fuel in order to achieve an efficient adiabatic compression.\cite{HarveyThompson:2018dd,HarveyThompson:2019ff,Weis:2021id} The implosions are considerably slower (on the order of 100 ns) so the fuel must be premagnetized to reduce thermal conduction losses. This is achieved by external electromagnetic coils which provide a 10--16 T axial magnetic field. 
The combination of these key elements has led to significant thermonuclear yield production in laboratory experiments\cite{Gomez:2014eta,Knapp:2019gf,Gomez:2019bg,Gomez:2020cd,YagerElorriaga:2022cp} and plasma magnetization inferred via secondary DT neutron emission.\cite{Schmit:2014fg,Knapp:2015kc,Lewis:2021kz} Given the relative success of MagLIF and its demonstrated confinement parameter $P\tau\simeq 3.6$~Gbar-ns at $20$-MA peak current,\cite{foot:Knapp} there is high interest in scaling the platform to higher peak currents, \eg, to 45 MA or even 60 MA. However, scaling MagLIF is not straightforward. The space of experimental input parameters describing MagLIF is at least eight-dimensional. Aside from peak current, experimental parameters include the current rise time, the liner inner and outer radii, the liner material, the height of the liner, the delivered preheat energy, the imposed external magnetic field, and the initial fuel density. Given the need to explore a relatively large parameter space, scoping future MagLIF designs at higher peak currents with radiation--magneto-hydrodynamic (rad-MHD) modeling tools can become computationally expensive. Nevertheless, several numerical studies have explored the potential for MagLIF to generate high fusion yields on future, higher-energy pulsed-power drivers.\cite{Slutz:2012gp,Sefkow:2014ik,McBride:2015ga,Slutz:2016cf,Slutz:2018iq} In order to reduce the dimensionality of the design space, these studies often constrain certain design parameters such as the current rise time of the pulsed-power generator, the imploding liner height, the liner aspect ratio (AR), the liner material (usually beryllium, or gold in some cases\citep{Slutz:2018iq}), and the external magnetic field $B_0$. 
Then, with the remaining basic experimental input parameters (liner outer radius, fuel preheat, and initial fuel density), an optimized configuration is sought that maximizes the fusion yield or energy gain of the implosions at a given peak current.\cite{Slutz:2016cf,Slutz:2018iq} Results from these \emph{optimized-scaling} studies were obtained from hundreds of 1D \textsc{lasnex} simulations and subsequent more refined 2D \textsc{lasnex} simulations. Interestingly, these optimized-scaling studies predict $Y\simeq18$-MJ and $Y\simeq 440$-MJ DT yields for scaled ``gas-burning'' MagLIF platforms, \ie loads with gaseous fuel configurations, driven at $I_{\rm max} \simeq 48$-MA and $I_{\rm max} \simeq 65$-MA peak currents, respectively.\cite{Slutz:2016cf} One disadvantage of the optimized-scaling approach is that the solution to the optimization problem may have implosion dynamics and energy-transport regimes different from those presently studied on the Z facility. Such changes increase the risk that extrapolated MagLIF loads will not achieve the desired performance, even when the rad-MHD modeling tools account for these changes. Following the results of \Refa{foot:Ruiz_framework} (hereafter called Paper I), here we propose an alternative scaling approach based on \emph{similarity (or similitude) scaling}. As discussed in Paper I, similarity scaling of MagLIF loads preserves many of the physics regimes already known or being studied on today's Z pulsed-power driver. By avoiding significant deviations into unexplored and/or less well-understood regimes, the risk of unexpected outcomes on future scaled-up experiments is reduced. 
In this work, we shall derive the scaling rules for the experimental input parameters characterizing a MagLIF load, and we shall test the estimated scaling rules for various performance metrics against 2D rad-MHD \textsc{hydra} simulations.\cite{Marinak:1996fs,Koning:2009} Scaling MagLIF loads to higher currents using nondimensional analysis was first proposed in \Refa{Schmit:2020jd}. Both scaling approaches in Paper I and \Refa{Schmit:2020jd} leverage similarity scaling to derive scaling rules for the experimental input parameters of MagLIF loads. However, the scaling rules derived in Paper I and \Refa{Schmit:2020jd} differ in two ways. First, Paper I takes into account effects due to the changing liner thickness: the liner inner and outer radii do not follow the same scaling rules. Therefore, the scaling laws for other fuel parameters, \eg the density and the magnetic field, need to be modified to take this effect into account. One consequence of this is that the scaling rules for the fuel temperature and pressure are modified. Second, Paper I proposes to conserve the relative ion-conduction losses since their effects dominate over electron-conduction losses near stagnation. This is in contrast to \Refa{Schmit:2020jd}, where conserving relative electron-conduction losses leads to the scaling rule of the externally applied magnetic field. Overall, \Refa{Schmit:2020jd} laid the foundations of similarity current scaling of MagLIF loads, and Paper I presents a refined scaling model based on that work. This paper is organized as follows. In \Sec{sec:prescriptions}, we derive the scaling rules of the input parameters for MagLIF when varying the peak current. In \Sec{sec:numerical}, we introduce the numerical modeling tools used to test the similarity-scaling predictions and give the specific input parameters for the anchor load. 
In \Sec{sec:implosion}, we compare the implosion dynamics of the similarity-scaled liners. In \Sec{sec:stagnation}, we study the scaling rules for various metrics describing stagnation conditions. In \Sec{sec:loss}, we discuss the scaling of the burn width of the fusion yield and of the energy-loss mechanisms. In \Sec{sec:performance}, we test the theory predictions for various metrics measuring performance. In \Sec{sec:conclusions}, we summarize our main results. In \App{app:correction}, we discuss the origins of a correction factor introduced to the scaling rule of the liner outer radius. \section{Current-scaling prescriptions} \label{sec:prescriptions} In Paper I, we derived the general framework for similarity scaling of MagLIF loads.\cite{foot:Ruiz_framework} Here we shall focus on the scaling of MagLIF loads with respect to the characteristic current $I_\star \doteq \varphi_0 (Z_0 + L_{\rm tot}/t_\varphi)^{-1}$, where $\varphi_0$ is the characteristic voltage of the external voltage drive, $t_\varphi$ is the characteristic time of the voltage drive, $Z_0$ is the impedance of the generator, and $L_{\rm tot}$ is the total initial inductance of the circuit. We shall consider that the voltage characteristic time $t_\varphi$ is constant. Thus, all timescales are expected to be maintained, an assumption that will be tested throughout this paper. For the sake of completeness, we rewrite the scaling prescriptions of Paper I for the specific scaling scenario where only the characteristic current $I_\star$ is varied. The scaling prescriptions for the liner dimensions are derived by enforcing conservation of the dimensionless quantities characterizing the magnetic drive of the liner implosion (the $\Pi$ parameter) and the robustness of the liner towards magneto-Rayleigh--Taylor instabilities (the $\Psi$ parameter). 
From Paper I, the scaling prescription for the initial outer radius $R_{\rm out,0}$ of the liner is \begin{align} \frac{R_{\rm out,0}'}{R_{\rm out,0}} &= \left( \frac{I_\star'}{I_\star} \right)^{\frac{\gamma-1}{2\gamma-1}} \notag \\ & \simeq \left( \frac{I_\star'}{I_\star} \right)^{\frac{\gamma-1}{2\gamma-1}} \left[ 1 + \mc{C} \left( \frac{I_\star'}{I_\star} -1 \right) \right] . \label{eq:scaling:Rout} \end{align} From here on, for an arbitrary quantity $Q$ corresponding to the \emph{baseline} MagLIF load, the quantity $Q'$ denotes the value for the corresponding \emph{scaled} MagLIF load. In \Eq{eq:scaling:Rout}, $\gamma$ is a polytropic index that characterizes the adiabatic compression of the liner material. In the second line of \Eq{eq:scaling:Rout}, we have included a correction term, which takes into account the differences in the liner shock compression when changing the characteristic current driving the load. (A further discussion on this topic is given in \App{app:correction}.) Specific values for $\gamma$ and $\mc{C}$ are given in \Sec{sec:numerical}. Following the results in Paper I, the scaling prescription for the liner mass per-unit-length $\widehat{m}$ is given by \begin{equation} \frac{\widehat{m}'}{\widehat{m}} = \left( \frac{I_\star'}{I_\star} \right)^{2\gamma/(2\gamma-1)} . \label{eq:scaling:mhat} \end{equation} The scaling rules \eq{eq:scaling:Rout} and \eq{eq:scaling:mhat} guarantee that the liner will implode in a similar fashion and that its robustness towards instabilities will be maintained when increasing current. After finding the scaled liner outer radius $R_{\rm out,0}'$ and its mass per-unit-length $\widehat{m}'$, we can determine the liner inner radius $R_{\rm in,0}'$ by using the definition of $\widehat{m}'$: \begin{equation} R_{\rm in,0}' = \left[ R_{\rm out,0}'^2 - \widehat{m}'/(\pi \rho_{\rm liner,0}) \right]^{1/2}, \label{eq:scaling:Rin} \end{equation} where $\rho_{\rm liner,0}$ is the initial density of the liner. 
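For readers who wish to evaluate these prescriptions directly, the short Python sketch below implements \Eqs{eq:scaling:Rout}--\eq{eq:scaling:Rin}. It is illustrative only (the function name is ours, not part of any simulation tool); the anchor values and the parameters $\gamma = 2.25$ and $\mc{C} \simeq 0.021$ are those adopted later in \Sec{sec:numerical}.

```python
import math

# Anchor (baseline) values from the "Numerical simulations" section;
# GAMMA and C_CORR are the polytropic index and shock-compression
# correction factor adopted there.
R_OUT_0, R_IN_0 = 2.79e-3, 2.325e-3   # initial liner radii [m]
RHO_LINER = 1.858e3                   # Be liner density [kg/m^3]
GAMMA, C_CORR = 2.25, 0.021

# Anchor mass per unit length, m_hat = pi*rho*(R_out^2 - R_in^2) [kg/m]
M_HAT_0 = math.pi * RHO_LINER * (R_OUT_0**2 - R_IN_0**2)

def scaled_liner(x):
    """Return (R_out', m_hat', R_in') for a current multiplier x = I'/I."""
    r_out = R_OUT_0 * x**((GAMMA - 1) / (2*GAMMA - 1)) * (1 + C_CORR*(x - 1))
    m_hat = M_HAT_0 * x**(2*GAMMA / (2*GAMMA - 1))
    r_in = math.sqrt(r_out**2 - m_hat / (math.pi * RHO_LINER))
    return r_out, m_hat, r_in
```

For $x = 3$ (a 20-MA load scaled to 60 MA), this yields $R_{\rm out,0}' \simeq 4.30$~mm and $R_{\rm in,0}' \simeq 2.96$~mm, \ie an aspect ratio near 3.2, consistent with the values quoted in \Sec{sec:numerical}.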
Upon knowing the liner dimensions of the anchor load and the parameters $\gamma$ and $\mc{C}$, \Eqs{eq:scaling:Rout}--\eq{eq:scaling:Rin} determine the scaling prescriptions for the radial dimensions of the liner. As the liner implodes, the fuel pressure increases and eventually decelerates the liner. In Paper I, this process is characterized by the dimensionless parameter $\Phi$. Conserving $\Phi$ leads to the scaling prescription of the preheat energy per-unit-length delivered to a MagLIF load: \begin{equation} \frac{\widehat{E}_{\rm preheat}'}{\widehat{E}_{\rm preheat} } = \left( \frac{I_\star'}{I_\star} \right)^2. \label{eq:scaling:Epreheathat} \end{equation} In other words, the preheat energy per-unit-length $\smash{\widehat{E}_{\rm preheat}}$ scales as the square of the characteristic current $I_\star$ of the system. Note that the \textit{total} preheat energy delivered to the fuel will scale as \begin{equation} \frac{E_{\rm preheat}'}{E_{\rm preheat}} = \frac{\widehat{E}_{\rm preheat}'}{\widehat{E}_{\rm preheat}} \frac{h'}{h}, \label{eq:scaling:Epreheat} \end{equation} where $h$ is the imploding height of the liner. Finally, since $t_\varphi = \const$ in this study, the time $t_{\rm preheat}$ at which preheat occurs remains unchanged: \begin{equation} t_{\rm preheat}' = t_{\rm preheat} . \label{eq:scaling:tpreheat} \end{equation} Following Paper I, we scale the initial fuel density $\rho_0$, the external magnetic field $B_{z,0}$, and the liner height $h$ in order to conserve the relative radiation, ion-conduction, and end-flow energy losses, respectively. 
In the specific case of $t_\varphi=\const$, the scaling prescriptions for these quantities are given by \begin{align} \frac{\rho_0'}{\rho_0}& = \left( \frac{I_\star'}{I_\star} \right)^{2/3} \left( \frac{R_{\rm in,0}'}{R_{\rm in,0} } \right)^{-2/3} , \label{eq:scaling:rho} \\ \frac{B_{z,0}'}{B_{z,0}} & = \left( \frac{I_\star'}{I_\star} \right)^{4/3} \left( \frac{R_{\rm in,0}'}{R_{\rm in,0} } \right)^{-10/3}, \label{eq:scaling:Bz} \\ \frac{h'}{h}& = \left( \frac{I_\star'}{I_\star} \right)^{2/3} \left( \frac{R_{\rm in,0}'}{R_{\rm in,0} } \right)^{-2/3}. \label{eq:scaling:h} \end{align} Note that the initial fuel density $\rho_0$ and the imploding height $h$ follow the same scaling prescriptions. Equations \eq{eq:scaling:Rout}--\eq{eq:scaling:h} represent the scaling rules for the most important input parameters characterizing a MagLIF load. Note, however, that there are other specific features of a MagLIF load that are absent from the model introduced in Paper I. As an example, other parameters defining the platform are the laser-spot size $R_{\rm spot}$ and the inner radius of the cushions $R_{\rm cushion}$. Note that the cushions are cylindrical washers placed within the liner ends (see \Fig{fig:liners}) that help mitigate the wall instability,\cite{McBride:2013gda} which occurs where the liner meets the electrode surfaces. In Paper I, we did not invoke any models to describe the propagation of the preheat-induced blast-wave,\cite{HarveyThompson:2019ff} laser-plasma interactions,\cite{Geissel:2018ee} or the wall instability.\cite{McBride:2013gda} For simplicity, here we shall invoke geometric similarity when scaling $R_{\rm spot}$ and $R_{\rm cushion}$. In other words, we assume that these quantities scale proportionally to the initial inner radius of the liner: \begin{equation} \frac{R_{\rm spot}'}{R_{\rm spot}} = \frac{R_{\rm cushion}'}{R_{\rm cushion}} = \frac{R_{\rm in,0}'}{R_{\rm in,0}}. 
\label{eq:scaling:Rcushion} \end{equation} Likewise, other parameters such as the axial length of the cushions, the anode--cathode gap length, and the axial location of the laser-entrance-hole window are geometrically scaled according to the liner height $h$.\cite{foot:axial_scaling} We denote these axial dimensions by $H$, and the scaling rule is then \begin{equation} \frac{H'}{H} = \frac{h'}{h}. \label{eq:scaling:H} \end{equation} Equations \eq{eq:scaling:Rout}--\eq{eq:scaling:H} completely determine the similarity scaling of a MagLIF load with respect to peak current. \begin{figure} \includegraphics[scale=0.44]{fig02_radii} \caption{Scaling curves for the initial inner and outer radii of a MagLIF liner. These curves are based on a typically fielded MagLIF liner with $R_{\rm out,0} =2.79$~mm and AR=6 driven at 20-MA peak current.\cite{Gomez:2020cd} The curves are obtained from \Eqs{eq:scaling:Rout}--\eq{eq:scaling:Rin}. Shaded region denotes the area where the MagLIF liner is initially located.} \label{fig:numerical:radii} \end{figure} \begin{figure*} \includegraphics[scale=0.43]{fig07_Epreheat} \hspace{0.3cm} \includegraphics[scale=0.43]{fig05_density} \includegraphics[scale=0.43]{fig08_Bz} \hspace{0.3cm} \includegraphics[scale=0.43]{fig04_height} \caption{Scaling curves for the preheat energy, initial fuel density, initial magnetic field, and imploding height of the liner. The legends in the subfigures give the approximate power-law fits to the scaling prescriptions.} \label{fig:numerical:parameters} \end{figure*} To maintain the coupling between the circuit and the imploding MagLIF load, Paper I presented scaling rules for the input parameters characterizing the circuit model. 
When the voltage characteristic time is conserved, the scaling prescriptions for the circuit parameters are \begin{equation} \frac{Z_0'}{Z_0} = \frac{L_0'}{L_0} = \frac{L_1'}{L_1} = \frac{R_{\rm loss,i}'}{R_{\rm loss,i}} = \frac{R_{\rm loss,f}'}{R_{\rm loss,f}} = \frac{h'}{h}, \label{eq:scaling:Z} \end{equation} \begin{equation} \frac{C'}{C} = \frac{h}{h'}. \label{eq:scaling:C} \end{equation} The quantities above appear in the circuit diagram in Paper I. $Z_0$ is the effective impedance of the pulsed-power generator, $L_0$ is the inductance of the outer magnetically insulated transmission line (MITL), $L_1$ is the inductance of the inner-MITL region, $R_{\rm loss,i}$ and $R_{\rm loss,f}$ are the initial and final resistance values of the shunt resistor, and $C$ is the capacitance in the circuit. As shown, all inductances and resistances appearing in the electrical circuit scale proportionally to the liner height $h$. Only the capacitance $C$ scales inversely proportionally to $h$. Since $t_\varphi$ remains constant, all timescales appearing in the problem are maintained. Therefore, the time parameters for the shunt resistor introduced in Paper I are left unchanged: \begin{equation} t_{\rm loss}' = t_{\rm loss}, \qquad \Delta t_{\rm loss}' = \Delta t_{\rm loss}. \label{eq:scaling:tloss} \end{equation} The characteristic voltage $\varphi_0$ of the external drive scales proportionally to the characteristic current $I_\star$ and the imploding height $h$ of the liner: \begin{equation} \frac{\varphi_0'}{\varphi_0} = \frac{I_\star'}{I_\star} \frac{L_0'}{L_0} = \frac{I_\star'}{I_\star} \frac{h'}{h}. \label{eq:scaling:varphi} \end{equation} This scaling rule is only valid when the characteristic time $t_\varphi$ of the voltage drive is held constant. In order to increase the peak current by a factor $I_\star'/I_\star$, the voltage drive will have to be multiplied by the factor $\varphi_0'/\varphi_0$ given in \Eq{eq:scaling:varphi}. 
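As an illustration of \Eqs{eq:scaling:Z}--\eq{eq:scaling:varphi}, the following sketch scales the circuit parameters for given current and height ratios. The anchor circuit values are those listed in \Sec{sec:numerical}; the helper function and dictionary layout are ours.

```python
# Anchor circuit parameters (SI units) from the "Numerical simulations"
# section of this paper.
ANCHOR = {"Z0": 0.18, "L0": 9.58e-9, "L1": 5.0e-9,
          "Rloss_i": 80.0, "Rloss_f": 0.25, "C": 0.1e-9}

def scaled_circuit(current_ratio, height_ratio):
    """Apply Eqs. (Z)-(varphi): impedances, inductances, and shunt
    resistances scale with h'/h, the capacitance with h/h', and the
    drive voltage with (I'/I)(h'/h)."""
    out = {k: v * height_ratio for k, v in ANCHOR.items() if k != "C"}
    out["C"] = ANCHOR["C"] / height_ratio
    out["phi0_ratio"] = current_ratio * height_ratio
    return out
```

For example, with $I_\star'/I_\star = 3$ and $h'/h \simeq 1.75$ [the value implied by \Eq{eq:scaling:h} for that current multiplier], the required drive voltage grows by roughly a factor of $5.3$.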
From \Eq{eq:scaling:h}, we know that MagLIF liners change in axial length when scaling the current. Hence, $\varphi_0$ does not scale linearly with the characteristic current $I_\star$ (or peak current, for that matter). Instead, it shows a stronger scaling, which translates to higher voltage requirements for scaled-up MagLIF loads. As a final remark of this section, when scaling MagLIF loads and the circuit parameters according to the prescriptions above, it is expected that the normalized current delivered to the load $\bar{I}_l \doteq I_l/I_{\star}$ will remain invariant. (This will be tested in \Sec{sec:implosion}.) Therefore, the peak current delivered to the load $\I \doteq \max(I_l)$ scales linearly with $I_\star$; in other words, $\I'/\I = I_\star'/I_\star$. Since peak current $\I$ is a commonly used metric for current delivery to z-pinch devices, we shall express the scaling laws in terms of $\I$ in the rest of this paper. \section{Numerical simulations and baseline load parameters} \label{sec:numerical} We conducted 2D \textsc{hydra} simulations to test the similarity-scaling theory. \textsc{hydra} is a massively parallel arbitrary Lagrangian--Eulerian (ALE) radiation, resistive-diffusion, magneto-hydrodynamics code\cite{Marinak:1996fs,Koning:2009} and is one of the main design tools for MagLIF experiments.\cite{Sefkow:2014ik,HarveyThompson:2018dd,Weis:2021id} For the calculations presented in this paper, the simulations were performed in cylindrical geometry with azimuthal symmetry. A generalized Ohm's law was used that includes effects such as Nernst advection, which can affect the fuel magnetization. The equation of state and the transport coefficients for the nonideal thermal and magnetic conduction of the DT fuel, Be liner, Al electrodes, and stainless-steel cushions were taken from pregenerated LEOS and SESAME tables.\cite{More:1988jx,foot:SESAME} The radiation field was modeled using implicit Monte-Carlo photonics. 
The simulations are externally driven using the circuit model shown in Fig.~1 of Paper I. Specifically, the parameters for the anchor circuit model are $Z_0=0.18~\Omega$, $L_0 = 9.58$~nH, $C = 0.1$~nF, and $L_1 = 5$~nH. For the shunt resistor, we use $R_{\rm loss,i} = 80~\Omega$ and $R_{\rm loss,f} = 0.25~\Omega$.\cite{foot:circuit} The time parameters for the shunt resistor are $t_{\rm loss} = 0$~ns and $\Delta t_{\rm loss} = 5$~ns. The 20-MA baseline load is driven using the open-circuit voltage shown in Fig.~2 of Paper~I. The circuit parameters and the voltage drive were scaled according to \Eqs{eq:scaling:Z}--\eq{eq:scaling:varphi}. For the baseline MagLIF configuration, we consider an initial liner outer radius of $R_{\rm out,0}=2.79$~mm and an initial inner radius of $R_{\rm in,0}=2.325$~mm. Thus, the initial aspect ratio AR$\doteq R_{\rm out,0}/(R_{\rm out,0}-R_{\rm in,0})$ of the liner is six. In simulations, the liner is made of Be with initial density 1.858~g/cm$^3$, so the mass per-unit-length is approximately 139~mg/cm. Regarding the initial fuel parameters, we consider an equimolar DT gas fill at a density of $\rho_0 = 2.25$~mg/cm$^3$. As a reminder, the Z facility does not have the capability of fielding MagLIF loads with equimolar DT fuel. When maintaining the initial number of electrons in the fuel, $\rho_0$ corresponds to 1.8~mg/cm$^3$ of DD fuel. The preimposed initial axial magnetic field is $B_{z,0}=16$~T. For the preheat energy deposition, the fuel is heated uniformly by adding 2.1~kJ of energy into a plasma column of radius $R_{\rm spot} = 0.75$~mm coaxial to the liner. The deposition of energy begins approximately 70~ns before burn time and lasts for about 10~ns.\cite{foot:preheat} The imploding height of the liner is 10~mm. 
With the exception of the slightly higher density and preheat energy, these chosen parameters are representative of the input parameters of MagLIF loads typically fielded in present-day experiments on Z.\cite{Gomez:2020cd,YagerElorriaga:2022cp} In this paper, we consider a polytropic index for the Be liner of $\gamma = 2.25$ in \Eqs{eq:scaling:Rout} and \eq{eq:scaling:mhat}. This value is slightly larger than a fit to the cold EOS curve $\gamma_{\rm cold} \simeq 1.9$. We use this higher value for $\gamma$ in order to account for the shock heating of the liner and the resulting increase in the incompressibility of the liner shell. The correction factor $\mc{C}$ in \Eq{eq:scaling:Rout} was chosen to be $\mc{C} = 0.03/(48/20-1)\simeq0.021$ to better conserve the implosion time of the scaled MagLIF loads. For a MagLIF load driven at a peak current close to 48~MA, this correction factor means that the scaled outer radius of the liner is shifted outwards by 3$\%$ compared to its nominal scaled value without the correction. With the scaling prescriptions in \Eqs{eq:scaling:Rout}--\eq{eq:scaling:Rin} and the parameters given in the preceding paragraphs, we plot in \Fig{fig:numerical:radii} the initial inner and outer radii of the similarity-scaled MagLIF loads. We note that, when increasing the characteristic current (or equivalently, the peak current), the liner becomes larger in radius. This is mainly a consequence of constraining the liner to implode in a similar fashion by conserving the $\Pi$ parameter of Paper I. When approximately fitting the two curves in \Fig{fig:numerical:radii} using power laws with respect to peak current, we find that \begin{equation} \frac{R_{\rm out,0}'}{R_{\rm out,0}} \simeq \left( \frac{\I'}{\I} \right)^{0.39} , \qquad \frac{R_{\rm in,0}'}{R_{\rm in,0}} \simeq \left( \frac{\I'}{\I} \right)^{0.23}.
\label{eq:numerical:R} \end{equation} The liner mass per-unit-length follows a scaling $\widehat{m} \propto \I^{1.29}$, so the liner inner radius cannot increase as strongly and instead follows a weaker $R_{\rm in,0} \propto \I^{0.23}$ scaling rule. This leads to our second observation: the liner becomes significantly thicker when scaling to higher currents. To be more quantitative, the initial aspect ratio AR for the anchor liner equals 6, while the AR for the corresponding scaled 60-MA liner is close to 3.2. The increase in liner thickness is a consequence of constraining the liner to maintain its robustness towards the magneto-Rayleigh--Taylor (MRT) instability. \begin{figure} \includegraphics[scale=.45]{fig_liners} \caption{Top left: Logarithmic density plot for the anchor AR=6 MagLIF load driven at a 20-MA peak current. Top right: Similarity-scaled, AR=3.2 MagLIF load driven at 60-MA peak current. Bottom: Corresponding logarithmic density plots near stagnation calculated using \textsc{hydra}. Note that, when increasing the current drive, liners become larger in radius and thicker to maintain robustness to MRT instabilities.} \label{fig:liners} \end{figure} The scaling rules for the preheat energy $E_{\rm preheat}$, the initial fuel density $\rho_0$, the applied axial magnetic field $B_{z,0}$, and the imploding height $h$ are shown in \Fig{fig:numerical:parameters}. When fitting the scaling curves to power laws, we find \begin{gather} \frac{E_{\rm preheat}'}{E_{\rm preheat}} \simeq \left( \frac{\I'}{\I} \right)^{2.51} , \label{eq:numerical:Epreheathat} \end{gather} \begin{gather} \frac{\rho_0'}{\rho_0} = \frac{h'}{h} \simeq \left( \frac{\I'}{\I} \right)^{0.51}, \label{eq:numerical:rho}\\ \frac{B_{z,0}'}{B_{z,0}} \simeq \left( \frac{\I'}{\I} \right)^{0.57} . \label{eq:numerical:Bz} \end{gather} Therefore, a preheat energy of 2.1~kJ at 20-MA peak current scales to 34~kJ at 60~MA.
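The fitted power laws in \Eqs{eq:numerical:R}--\eq{eq:numerical:Bz} can be applied directly to the anchor parameters. The Python sketch below is illustrative only: the actual scaled loads follow the exact prescriptions of \Sec{sec:prescriptions}, so the power-law fits reproduce the quoted values only to within a few percent.

```python
# 20-MA anchor parameters and fitted power-law exponents (in peak current)
anchor = {"R_out": 2.79, "R_in": 2.325,  # liner radii [mm]
          "E_preheat": 2.1,              # preheat energy [kJ]
          "rho0": 2.25,                  # fuel density [mg/cm^3]
          "Bz0": 16.0,                   # axial magnetic field [T]
          "h": 10.0}                     # imploding height [mm]
exp = {"R_out": 0.39, "R_in": 0.23, "E_preheat": 2.51,
       "rho0": 0.51, "Bz0": 0.57, "h": 0.51}

def scale_load(current_ratio):
    """Scale the anchor load by the peak-current ratio I'/I."""
    return {k: v * current_ratio**exp[k] for k, v in anchor.items()}

load60 = scale_load(60 / 20)
AR60 = load60["R_out"] / (load60["R_out"] - load60["R_in"])
# E_preheat -> ~33 kJ, rho0 -> ~3.9 mg/cm^3, Bz0 -> ~30 T,
# h -> ~17.5 mm, AR -> ~3.3
# (vs the quoted 34 kJ, 4.0 mg/cm^3, 31 T, 17.8 mm, and 3.2)
```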
From \Eqs{eq:scaling:rho} and \eq{eq:scaling:h}, the scaling rules for the initial fuel density and the liner height are identical. When scaling between 20~MA and 60~MA, we find that the initial fuel density $\rho_0$ increases from 2.25 mg/cm$^3$ to 4.0 mg/cm$^3$. To mitigate end losses, the liner height also increases substantially from 10~mm to 17.8~mm. To maintain relative thermal ion-conduction losses, the externally preimposed magnetic field must increase from 16~T to 31~T. Figure~\ref{fig:liners} (top) presents logarithmic-density plots of the initial configurations of the baseline 20-MA MagLIF load and of a similarity-scaled 60-MA load. When increasing the current drive, MagLIF liners become larger in radius, taller, and thicker. Figure~\ref{fig:liners} (bottom) illustrates that the similarity-scaled MagLIF loads look qualitatively the same near stagnation. This is a signature of similarity scaling. In Secs.~\ref{sec:implosion}--\ref{sec:performance}, we shall present a quantitative comparison of the implosion dynamics, the stagnation conditions of the plasma fuel, and the performance of the scaled MagLIF loads. \begin{figure} \includegraphics[scale=.43]{figcont00_radius_current} \caption{Liner radii and current delivered to the load versus time for different current scales calculated using \textsc{hydra}. The 20-MA anchor load (shown in blue) stagnates at the same time as its similarity-scaled counterparts driven at 40 MA and 60 MA (shown in yellow and green, respectively). In this work, the outer boundary of the liner is tracked using a $1/e\simeq37\%$ threshold of the maximum density (similar to \Refa{Bose:2017jf}), and the inner boundary is tracked using a Lagrangian marker.} \label{fig:radius_vs_time} \end{figure} At this point, it is worth commenting on the differences between the power-law scaling rules in \Eqs{eq:numerical:R}--\eq{eq:numerical:Bz} and the scaling rules proposed in \Refa{Schmit:2020jd}. 
In this paper, we focus on current scaling and keep the characteristic time of the voltage source constant. Therefore, the scaling rules discussed here correspond to the ``implosion-time conserving, radiation-conserving'' (ITC-rad) scaling strategy of \Refa{Schmit:2020jd}. Neglecting the small correction added to account for shock-compression effects, the scaling rules for the liner radial dimensions are identical. Since this work considers the finite thickness of the liners, the scaling prescriptions for the initial gas density and the liner height increase slightly more rapidly (\eg, $\rho_0 \propto \I^{0.51}$ in this paper versus $\rho_0 \propto \I^{0.42}$ in \Refa{Schmit:2020jd}). In addition, \Refa{Schmit:2020jd} suggests keeping $B_{z,0}$ constant when increasing peak current since thermal electron-conduction losses decrease. However, Paper I argues that ion-conduction losses become more important near stagnation of the plasma fuel and therefore need to be conserved. This leads to the $B_{z,0}\propto \I^{0.57}$ scaling prescription in \Eq{eq:numerical:Bz}. When increasing the peak current, the scaling rules in this paper lead to hotter and higher-pressure stagnation columns, which then modify the scaling laws for important performance metrics, for example, the fusion yield and the ignition parameter $\chi$. \section{Liner-implosion dynamics} \label{sec:implosion} \begin{figure} \includegraphics[scale=.43]{figcont02_radius_current_normalized} \caption{Normalized implosion trajectory of the outer liner radius and normalized current traces calculated using \textsc{hydra}. When following the scaling prescriptions provided in \Sec{sec:prescriptions}, the normalized trajectories of the outer liner radius are almost identical.
Likewise, the normalized current traces are also in very close agreement, as expected from the scaling theory.} \label{fig:radius_norm_vs_time} \end{figure} Figure \ref{fig:radius_vs_time} shows the radius-versus-time and current-versus-time plots for three similarity-scaled MagLIF loads driven at 20~MA, 40~MA, and 60~MA. As shown by the shaded regions in \Fig{fig:radius_vs_time}, the liners of the scaled MagLIF loads tend to be larger in radius and thicker. When adopting the scaling prescriptions in \Sec{sec:prescriptions} for the liner radial dimensions and the electrical circuit, we find that all three loads implode at similar times according to \textsc{hydra} calculations. This is not surprising because all timescales are expected to be conserved. In \Fig{fig:radius_norm_vs_time}, we normalize the liner outer radius by its initial value so that $\bar{R}_{\rm out}\doteq R_{\rm out}/R_{\rm out}(0)$ is plotted. The currents delivered to the loads are also normalized by 20, 40, and 60~MA, which are the expected scaled peak currents. Due to similarity, $\bar{R}_{\rm out}(t)$ should remain invariant when scaling. As shown in \Fig{fig:radius_norm_vs_time}, $\bar{R}_{\rm out}$ is indeed conserved to a high degree. The normalized current delivered to the load $\bar{I}_l$ is almost perfectly scale invariant, which indicates that the scaling relations derived in \Sec{sec:prescriptions} for the electrical circuit and the liner radial dimensions hold. To provide a more quantitative comparison of similarity between liner implosions, we plot in \Fig{fig:implosion_time} the simulated implosion times\cite{foot:timplosion} for a family of similarity-scaled MagLIF loads. For the simulations without $\alpha$ heating, the implosion times are conserved within a burn-width fraction. (In no-$\alpha$ MagLIF calculations, the burn widths are roughly $\sim2$~ns in duration.)
For the calculations with $\alpha$ heating, the deviation of the calculated implosion time is about one burn-width time. Overall, the differences between the implosion times are less than 2\%, suggesting that the scaling relations for the circuit and liner parameters are indeed preserving the liner implosion dynamics. \begin{figure} \includegraphics[scale=.43]{fig09_implosion_time} \caption{Implosion times of the similarity-scaled MagLIF loads. Red and blue points denote simulation results with and without $\alpha$ heating, respectively. Both data sets have the same experimental input parameters. For reference purposes, the error bars denote the full-width, half-maximum burn time of the neutron-production event in the simulations.} \label{fig:implosion_time} \end{figure} The in-flight aspect ratio (IFAR) is often used as a measure of the robustness of ICF shell implosions towards Rayleigh--Taylor instabilities.\cite{Bose:2017jf} Higher IFAR values are usually correlated with less stable implosions. Figure \ref{fig:IFAR_vs_time} shows the IFAR trajectories plotted versus time for the MagLIF loads shown in \Fig{fig:radius_vs_time}. The IFAR increases during the early stages of the implosions due to the shock compression of the liners. After shock breakout, the liners then relax and accelerate as a whole. This occurs roughly when the outer convergence ratio $\mathrm{CR}_{\rm out}(t) \doteq R_{\rm out,0}/R_{\rm out}(t)$ has reached a value of 1.5, or close to $\sim$75~ns in simulation time. From \Fig{fig:IFAR_vs_time}, it is clear that the initial AR of the scaled-up liners must decrease in order to compensate for the stronger magnetic compression of the liner. This design feature was not taken into account in previous scaling works.\cite{Slutz:2016cf,Slutz:2018iq} Note that the peak IFAR values for the similarity-scaled liners are smaller than that of the 20-MA anchor load.
Therefore, the scaling prescriptions \eq{eq:numerical:R} for the liner dimensions obtained using $\gamma=2.25$ could be considered slightly ``over-conservative'' with respect to robustness of the liner towards instabilities. This is a favorable feature since the performance of MagLIF implosions can be significantly degraded by MRT instabilities in simulations. \begin{figure} \includegraphics[scale=.43]{figcont03_IFAR} \caption{IFAR trajectories plotted versus time for the MagLIF loads shown in \Fig{fig:radius_vs_time}. Before the inner surface of the liner has moved, the initial shock transiting the liner decreases its thickness and increases the IFAR. Once shock breakout occurs, the liner relaxes and begins to move as a whole.} \label{fig:IFAR_vs_time} \end{figure} The scaling rules in \Sec{sec:prescriptions} do not guarantee that every physical process in a MagLIF implosion will be conserved. One example of a process that is not conserved is the strength of the blast wave occurring after preheat. Figure~\ref{fig:radius_vs_time} shows that the inner radius of the 60-MA scaled liner slightly increases at around 55~ns, which is the time when the blast wave impacts the liner. This effect is not visible for the loads with lower preheat energy. The increase in the blast-wave strength may lead to unaccounted-for interface mixing between the fuel and the liner during the preheat stage. Figure~\ref{fig:radius_norm_vs_time} also shows that the normalized outer radius of the scaled 60-MA load is more strongly magnetically compressed (near 50 ns) before the liner begins to accelerate rapidly. This is understandable since the magnetic pressure driving the liners scales as \begin{equation} \frac{p_{\rm mag,ext}'}{p_{\rm mag,ext}} = \left( \frac{\I'}{\I} \frac{R_{\rm out,0}}{R_{\rm out,0}'} \right)^2 \simeq \left( \frac{\I'}{\I} \right)^{1.22}, \end{equation} so the higher-current liners are subject to stronger shock compression.
Because the outer radius of the liner moves slightly inwards, the inductance in the load region increases slightly. This reduces the current delivered to the load, which explains the small reduction in the normalized current at 50~ns shown in \Fig{fig:radius_norm_vs_time} for the MagLIF liners driven at larger currents. Overall, the similarity-scaling framework presented in Paper I will not conserve all the physics involved in a MagLIF implosion. However, this framework can preserve the \emph{leading-order} physical processes and provide reasonable estimates of the scaled performance metrics. \section{Stagnation conditions} \label{sec:stagnation} \begin{figure} \includegraphics[scale=.43]{fig11_HotSpotCReff} \caption{Effective inner convergence ratio $\mathrm{CR}_{\rm in,eff}$ evaluated as defined in \Eq{eq:CReff} for the similarity-scaled MagLIF loads. Red and blue points denote simulation results with and without $\alpha$ heating, respectively. Both data sets have the same experimental input parameters. Error bars correspond to a standard deviation when collecting statistics of the radial position of the inner liner interface along its axial length.} \label{fig:CR} \end{figure} In this section, we examine the scaling rules for the fuel thermodynamic conditions that are achieved near peak burn and compare these against simulation results. In the absence of energy-loss mechanisms, the inner-convergence ratio $\mathrm{CR}_{\rm in}(t) \doteq R_{\rm in,0} / R_{\rm in}(t)$ is the main factor determining the thermodynamic conditions of the fuel plasma. When scaling MagLIF loads to higher currents, it is desirable that $\mathrm{CR}_{\rm in}$ near peak burn be maintained since higher convergence ratios often correlate with more unstable plasma columns at stagnation [refer to Sec. IV of Paper I]. 
Since peak burn may occur before or after peak compression of the fuel column (depending on the relative importance of $\alpha$ heating), we introduce the effective inner convergence ratio: \begin{equation} \mathrm{CR}_{\rm in,eff} \doteq \left\{ \begin{array}{ll} \mathrm{CR}_{\rm in}(t_{\rm bang}),\qquad t_{\rm bang} \leq t_{\rm stag} \\ \mathrm{CR}_{\rm in}(t_{\rm stag}),\qquad ~t_{\rm bang} > t_{\rm stag} \end{array} \right. , \label{eq:CReff} \end{equation} where $t_{\rm bang}\doteq \argmax(\dot{Y})$ is the time at which peak burn occurs, $\smash{\dot{Y}(t) \doteq \mathrm{d}Y/\mathrm{d}t}$ is the neutron yield rate, and $t_{\rm stag} \doteq \argmax({\rm CR}_{\rm in})$ is the time at which peak compression of the fuel occurs. This measure of the inner convergence ratio may be more representative of the risks associated with hydrodynamic instabilities affecting the burn event. Similarity scaling aims to preserve the effective inner-convergence ratio of MagLIF implosions. As shown in \Fig{fig:CR}, $\mathrm{CR}_{\rm in,eff}$ for the no-$\alpha$ calculations is maintained within error bars, thus indicating that the scaled-up MagLIF loads are not converging more. Interestingly, simulations with $\alpha$ heating show a reduction of $\mathrm{CR}_{\rm in,eff}$ when going beyond 35-MA peak current. As we shall discuss later on, calculations of similarity-scaled MagLIF loads suggest that $\alpha$ heating becomes more important for peak currents greater than 35~MA. The additional heat source from the $\alpha$ particles leads to higher fuel pressures, causing the fuel to stagnate at lower $\mathrm{CR}_{\rm in,eff}$ values. \begin{figure} \includegraphics[scale=.43]{fig13_pressureB} \caption{Burn-history averaged plasma pressure. Red and blue points denote simulation results with and without $\alpha$ heating, respectively. Dashed lines are power-law fits to the simulation data. The legend shows the fitted scaling exponents.
Error bars denote the burn-weighted standard deviation associated with temporal variations of the plasma pressure near peak burn. The yellow curve is the theoretical scaling law in \Eq{eq:stagnation:pion2}. } \label{fig:pion} \end{figure} Figure \ref{fig:pion} shows the plasma pressure averaged over the burn history. In this paper, quantities $Q$ averaged over the burn history are calculated as follows: \begin{equation} \langle Q \rangle_{\rm b.h.} \doteq \frac{ \int \, \int_{V_{\rm fuel}} n_i^2 \langle \sigma v \rangle \, Q \, \mathrm{d}V \, \mathrm{d}t } {\int \int_{V_{\rm fuel}} n_i^2 \langle \sigma v \rangle \, \mathrm{d}V \, \mathrm{d}t}, \label{eq:stagnation:average} \end{equation} where $n_i^2 \langle \sigma v \rangle$ is proportional to the neutron yield rate per-unit-volume and $V_{\rm fuel}(t)$ is the volume of the fuel plasma. According to the discussion presented in Sec.~X~A of Paper I, all dimensionless no-$\alpha$ quantities describing the plasma conditions (for example, $\bar{\rho}$, $\bar{T}$, $\bar{p}$, and $\bar{B}_z$) are expected to remain invariant when following the similarity-scaling prescriptions. In Paper I, the fuel pressure is normalized by the preheat pressure defined by $p_{\rm preheat} \doteq (2/3)\smash{E_{\rm preheat} /(\pi R_{\rm in,0}^2 h) }$. Therefore, the no-$\alpha$ fuel pressure $p_{\rm fuel,no \, \alpha}$ satisfies the scaling relation: \begin{equation} \frac{p_{\rm fuel,no \, \alpha}'}{p_{\rm fuel,no \, \alpha}} = \frac{p_{\rm preheat}'}{p_{\rm preheat}} = \frac{\widehat{E}_{\rm preheat}'}{\widehat{E}_{\rm preheat}} \left( \frac{R_{\rm in,0}}{R_{\rm in,0}'}\right)^2 . \label{eq:stagnation:pion} \end{equation} Upon using the derived scaling rules in \Eqs{eq:scaling:Epreheat} and \eq{eq:numerical:R}, we find that the plasma pressure approximately scales as: \begin{equation} \frac{p_{\rm fuel,no \, \alpha}'}{p_{\rm fuel,no \, \alpha}} \simeq \left( \frac{\I'}{\I}\right)^{1.54}.
\label{eq:stagnation:pion2} \end{equation} In \Fig{fig:pion}, the predicted scaling law \eq{eq:stagnation:pion2} shows good agreement with the burn-history averaged, no-$\alpha$ fuel pressures of the similarity-scaled MagLIF loads. Since $\alpha$ heating is not a process that is conserved, the simulation results with $\alpha$ heating show a stronger scaling law for the fuel pressure. When the peak drive current exceeds 35~MA, the power-law fit for the simulation outputs is $p_{\rm fuel,\alpha} \propto \I^{2.34}$. This stronger scaling curve is a signature of a MagLIF operating regime where $\alpha$ heating effects become more prominent. \begin{figure} \includegraphics[scale=.43]{fig15_tionB} \caption{Burn-history averaged ion temperature. Dashed lines are power-law fits to the simulation data. Error bars denote the burn-weighted standard deviation associated with temporal variations of the ion temperature near peak burn. The yellow curve is the theoretical scaling curve \eq{eq:stagnation:tion2}.} \label{fig:tion} \end{figure} To further constrain the plasma thermodynamic conditions near stagnation, we compare the burn-history averaged fuel temperature $\langle T \rangle_{\rm b.h.}$. Similar to the fuel pressure, the no-$\alpha$ fuel temperature scales as the preheat temperature $k_B T_{\rm preheat} \doteq p_{\rm preheat}/(2 \rho_0 / m_i)$. We obtain \begin{equation} \frac{T_{\rm no \,\alpha}'}{T_{\rm no \,\alpha}} = \frac{T_{\rm preheat}'}{T_{\rm preheat}} = \frac{p_{\rm preheat}'}{p_{\rm preheat}} \frac{\rho_0}{\rho_0'} = \left( \frac{\I'}{\I}\frac{R_{\rm in,0}}{R_{\rm in,0}'}\right)^2 \frac{\rho_0}{\rho_0'} , \label{eq:stagnation:tion} \end{equation} where we used \Eq{eq:stagnation:pion}. Upon substituting the scaling prescriptions of \Sec{sec:numerical}, we obtain the approximate power-law scaling rule for the no-$\alpha$ fuel temperature: \begin{equation} \frac{T_{\rm no \,\alpha}'}{T_{\rm no \,\alpha}} \simeq \left( \frac{\I'}{\I} \right)^{1.03}.
\label{eq:stagnation:tion2} \end{equation} Therefore, the fuel temperature is expected to grow approximately linearly with peak current when following the scaling rules proposed in this paper. Figure \ref{fig:tion} compares the theoretical scaling law to results from the simulations. As shown, the scaling theory slightly overpredicts the growth of the no-$\alpha$ ion temperatures. (This discrepancy will be discussed in \Sec{sec:loss}.) For the anchor configuration driven at 20~MA, $\langle T_{\rm ion} \rangle_{\rm b.h.} = 2.8$~keV, and this grows to $\langle T_{\rm ion} \rangle_{\rm b.h.} =7.3$~keV at 60~MA, which clearly exceeds the temperature threshold needed to have $\alpha$ heating dominate radiation losses. As shown in the same figure, once the peak current exceeds 35~MA, calculations with $\alpha$ heating show that the fuel temperature markedly increases, with a fitted scaling curve of $T_{\rm ion,\alpha} \propto \I^{2.6}$, and can reach 20~keV at 60~MA. \begin{figure} \includegraphics[scale=.43]{fig31_Bz_bh} \caption{Burn-history averaged axial magnetic field. Error bars denote the burn-weighted standard deviation associated with temporal variations near peak burn.} \label{fig:Bz} \end{figure} \begin{figure} \includegraphics[scale=.43]{fig20_HotSpotIntE} \caption{Fuel internal energy averaged over the neutron yield-rate history. Error bars denote the burn-weighted standard deviation associated with temporal variations near peak burn.} \label{fig:U} \end{figure} Another quantity of interest is the magnetic field within the fuel column since it is important for limiting ion-conduction losses and trapping $\alpha$ particles.
Upon ignoring magnetic-transport effects such as Nernst advection and magnetic diffusion, we find that the burn-history averaged axial magnetic field $\langle B_{\rm z} \rangle_{\rm b.h.}$ must scale with the initial axial magnetic field: \begin{equation} \frac{B_{z,\,\mathrm{no \,\alpha}}'}{B_{z,\,\mathrm{no \,\alpha}}} = \frac{B_{z,0}'}{B_{z,0}} \simeq \left( \frac{\I'}{\I} \right)^{0.57}, \label{eq:stagnation:Bz} \end{equation} where we used \Eq{eq:numerical:Bz}. Figure~\ref{fig:Bz} compares the scaling rule \eq{eq:stagnation:Bz} to the simulation outputs for $\langle B_z \rangle_{\rm b.h.}$. The scaling law agrees with the no-$\alpha$ simulation results. Beyond 35~MA, the simulation results with $\alpha$ heating show smaller axial magnetic fields due to the weaker compression of the fuel column when $\alpha$ heating becomes more important (as shown in \Fig{fig:CR}). The present similarity-scaling theory allows us to estimate scaling laws for other volume-integrated quantities such as the fuel internal energy and the kinetic energy of the liner. Using the former as an example, the no-$\alpha$ fuel internal energy $U$ evaluated at peak burn should satisfy \begin{equation} \frac{U_{\rm no\, \alpha}'}{U_{\rm no\, \alpha}} = \frac{E_{\rm preheat}'}{E_{\rm preheat}} = \left( \frac{\I'}{\I} \right)^{2.51}. \end{equation} As shown in \Fig{fig:U}, the theoretical scaling curve agrees with the averaged no-$\alpha$ fuel internal energy near peak burn. This confirms that, when similarity scaling MagLIF loads, the no-$\alpha$ fuel internal energy scales linearly with the preheat energy. From \Fig{fig:U}, we also note that, when scaling from 20-MA to 60-MA peak current, a roughly 15-fold increase is expected in the no-$\alpha$ internal energy of the fuel. As previously shown, for peak currents exceeding 35~MA, calculations with $\alpha$ heating included show a sharp increase in the fuel internal energy.
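The stagnation exponents quoted in \Eqs{eq:stagnation:pion2}, \eq{eq:stagnation:tion2}, and \eq{eq:stagnation:Bz} all follow from the fitted input exponents by simple bookkeeping. A short sketch of the arithmetic (each variable holds a power-law exponent in peak current):

```python
# Fitted input exponents from the scaling prescriptions
R_out, R_in = 0.39, 0.23
E_preheat, rho0, h, Bz0 = 2.51, 0.51, 0.51, 0.57

# Preheat energy per unit length: E_hat = E_preheat / h
E_hat = E_preheat - h              # -> 2.00

# No-alpha fuel pressure: p ~ E_hat / R_in,0^2
p_fuel = E_hat - 2 * R_in          # -> 1.54

# No-alpha fuel temperature: T ~ p / rho0
T_fuel = p_fuel - rho0             # -> 1.03

# External magnetic drive pressure: p_mag ~ (I / R_out,0)^2
p_mag = 2 * (1 - R_out)            # -> 1.22

# Flux-frozen axial field and fuel internal energy track the inputs
Bz_stag, U = Bz0, E_preheat        # -> 0.57 and 2.51
```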
\section{Burn width and conservation of relative losses} \label{sec:loss} \begin{figure} \includegraphics[scale=.43]{fig24_tau2} \caption{Burn width $\tau_{\rm bw}$ calculated by measuring the full-width, half-maximum of the neutron yield-rate time traces. $\tau_{\rm bw}$ shows 10$\%$ variability for the simulations without $\alpha$ heating. For the calculations with $\alpha$ heating, we obtain power-law fits $\tau_{\rm bw} \propto I_{\rm max}^{0.47\pm0.05}$ for $I_{\rm max} \leq 35$~MA and $\tau_{\rm bw} \propto I_{\rm max}^{-0.48\pm0.19}$ for $I_{\rm max} \geq 35$~MA.} \label{fig:tau} \end{figure} In the scaling approach presented in this paper, all timescales in a MagLIF implosion are expected to remain constant. We further test the invariance of timescales by comparing the burn-width time $\tau_{\rm bw}$ of the simulated yield rates. As shown in \Fig{fig:tau}, $\tau_{\rm bw}\simeq 2.3$~ns is closely conserved for the simulations without $\alpha$ heating. For the calculations with $\alpha$ heating, $\tau_{\rm bw}$ increases when increasing the peak current up to 35~MA. Around 40-MA peak current, the burn width reaches a maximum value of $\tau_{\rm bw}\simeq3.9$~ns but then decreases at higher currents. The physical explanation for this non-monotonic behavior is the following. For $I_{\rm max} \leq 35$~MA, the time $t_{\rm bang}$ of peak burn occurs \emph{before} the time $t_{\rm stag}$ of peak compression of the fuel (see \Fig{fig:tdelay}). In this regime, the pdV work rate $P_{\rm pdV}$ done on the fuel [see Eq.~(53) of Paper~I] is positive since the fuel is still imploding. Hence, the burn width $\tau_{\rm bw}$ increases as $\alpha$-heating effects become more dominant since fuel pressures and temperatures are increasing with current (see \Figs{fig:pion} and \ref{fig:tion}). As shown in \Fig{fig:tdelay}, peak burn occurs \emph{after} peak compression for $I_{\rm max}>35$~MA.
Although $\alpha$ heating continues to be more prominent as current is increased, $P_{\rm pdV}$ is negative during peak burn since the fuel column is expanding. Therefore, $P_{\rm pdV}$ acts as an energy sink that causes the burn width to decrease. Interestingly, the change in regime shown in \Fig{fig:tdelay} correlates with the change in behavior of the scaling of the stagnation quantities discussed in \Sec{sec:stagnation}. \begin{figure} \includegraphics[scale=.43]{fig25_tdelay} \caption{Time delay between the peak-burn time $t_{\rm bang}$ and the peak-compression time $t_{\rm stag}$. For reference purposes, the error bars denote the full-width, half-maximum burn-width time $\tau_{\rm bw}$ of the neutron-production events.} \label{fig:tdelay} \end{figure} \begin{figure*} \includegraphics[scale=.43]{fig44_LossRadBurn} \hspace{0.5cm} \includegraphics[scale=.43]{fig50_LossCiBurn} \caption{Left: Relative radiation energy losses as characterized by the parameter $\Upsilon_{\rm rad}$ in \Eq{eq:loss:Upsilon_rad}. Right: Relative ion-conduction energy losses as characterized by the parameter $\Upsilon_{\rm ci}$ in \Eq{eq:loss:Upsilon_ci}. These quantities are evaluated by substituting burn-weighted plasma parameters using \Eq{eq:stagnation:average} representing the plasma conditions near the hot plasma column. Error bars denote the burn-weighted standard deviation associated with temporal variations near peak burn.} \label{fig:Loss} \end{figure*} There are two main factors that determine the burn-width time. The first is the implosion dynamics of the liner, which we have shown to be conserved in \Sec{sec:implosion}. The second is the energy-gain and energy-loss mechanisms. In the no-$\alpha$ heating calculations, only energy-loss mechanisms are present. As discussed in \Sec{sec:prescriptions}, the scaling prescriptions for the initial fuel density and for the axial magnetic field are designed to conserve relative radiation losses and ion-conduction losses.
Near peak burn, the relative effects of these processes can be measured by the dimensionless parameters $\Upsilon_{\rm rad}$ and $\Upsilon_{\rm ci}$ given in Sec.~V of Paper~I: \begin{align} \Upsilon_{\rm rad} & \doteq 0.27~ \frac{\left[\rho ({\rm g/cm^3})\right] \cdot [\tau_{\rm bw}({\rm ns} )] } {[T (\mathrm{keV})]^{1/2} }, \label{eq:loss:Upsilon_rad} \\ \Upsilon_{\rm ci} & \doteq 7.9\times 10^{-7} ~ \frac{ [T(\mathrm{keV})]^{5/2} \cdot [\tau_{\rm bw}({\rm ns} )] \cdot g_i(x_{i})} { \ln \Lambda \cdot \left[ \rho ({\rm g/cm^3})\right] \cdot [R_{\rm in}({\rm cm})]^2 }, \label{eq:loss:Upsilon_ci} \end{align} where $\ln \Lambda$ is the Coulomb logarithm and $g_i(x_{i})$ is a function of the ion Hall parameter that limits ion-conduction losses in the radial direction (see Paper I for further details). In \Fig{fig:Loss}, the parameters $\Upsilon_{\rm rad}$ and $\Upsilon_{\rm ci}$ are evaluated using the calculated stagnation conditions characterizing the hot fuel column. For the no-$\alpha$ calculations, both parameters $\Upsilon_{\rm rad}$ and $\Upsilon_{\rm ci}$ remain close to their baseline values. $\Upsilon_{\rm rad,no~\alpha}$ tends to deviate towards values larger than nominal, while $\Upsilon_{\rm ci,no~\alpha}$ tends to shift towards smaller values. This behavior is explained by the deviation observed in \Fig{fig:tion} for the ion temperature. Since the plasma pressures and the inner convergence ratios follow the expected scaling trends, the slightly lower power law observed in the no-$\alpha$ simulations in \Fig{fig:tion} suggests that the fuel density increases slightly faster than expected, causing radiation losses to become slightly stronger. In a similar manner, the weaker scaling in ion temperature shown in \Fig{fig:tion} explains the decrease in $\Upsilon_{\rm ci,no~\alpha}$ shown in \Fig{fig:Loss}.
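For concreteness, \Eqs{eq:loss:Upsilon_rad} and \eq{eq:loss:Upsilon_ci} are straightforward to evaluate numerically. In the Python sketch below, the plasma conditions, the Coulomb logarithm, and the magnetization factor $g_i=1$ are hypothetical placeholders rather than simulation outputs:

```python
def upsilon_rad(rho, tau_bw, T):
    """Relative radiation losses; rho [g/cm^3], tau_bw [ns], T [keV]."""
    return 0.27 * rho * tau_bw / T**0.5

def upsilon_ci(rho, tau_bw, T, R_in, ln_lambda=10.0, g_i=1.0):
    """Relative ion-conduction losses; R_in [cm].
    ln_lambda and g_i are placeholder assumptions here."""
    return 7.9e-7 * T**2.5 * tau_bw * g_i / (ln_lambda * rho * R_in**2)

# Hypothetical stagnation-like conditions (illustration only)
u_rad = upsilon_rad(rho=0.5, tau_bw=2.3, T=4.0)
u_ci = upsilon_ci(rho=0.5, tau_bw=2.3, T=4.0, R_in=0.01)
```

Conserving these two dimensionless numbers across current scales is precisely what the density and magnetic-field prescriptions are designed to do.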
\begin{figure} \includegraphics[scale=.43]{fig55_HotSpotNormMass} \caption{Burn-history averaged normalized fuel mass $\bar{m}_{\rm fuel}(t) \doteq m_{\rm fuel}(t)/m_{\rm fuel}(0)$. Error bars denote the burn-weighted standard deviation associated with temporal variations near peak burn.} \label{fig:NormMass} \end{figure} In order to conserve relative end-flow energy losses and fuel-mass losses when scaling in peak current, the axial height of MagLIF loads is varied according to the scaling prescription in \Eq{eq:numerical:rho}. To measure the robustness of this scaling rule, we tallied the total fuel mass inventory $m_{\rm fuel}(t)$ located within the imploding region of a MagLIF load. Based on similarity-scaling arguments, it is expected that the normalized fuel mass $\bar{m}_{\rm fuel}(t) \doteq m_{\rm fuel}(t)/m_{\rm fuel}(0)$ should remain invariant when scaling across currents. Figure~\ref{fig:NormMass} shows the normalized fuel mass evaluated near peak burn. Our calculations with and without $\alpha$ heating suggest that about 60\%--65\% of the initial fuel inventory remains in the imploding region up to the moment of peak burn. Figure~\ref{fig:NormMass} shows a variation below 10\% in the normalized fuel-mass inventory, which confirms that the scaling law \eq{eq:numerical:rho} for the load height overall conserves relative end losses. However, we must note that \Eq{eq:numerical:rho} dictates a relatively large increase in the axial length of a MagLIF liner (as shown in \Figs{fig:numerical:parameters} and \ref{fig:liners}). This is inconvenient due to the additional initial inductance associated with longer liners, which makes it more difficult to deliver higher peak currents with a given pulsed-power generator. It may be possible to reduce the scaling exponent of the load height by modifying the scaling prescriptions of the radial dimensions of the LEH window and the cushions.
In this work, these parameters were scaled linearly with the initial liner inner radius [see \Eq{eq:scaling:Rcushion}]. However, smaller end openings could reduce end losses and thus decrease the axial length of the similarity-scaled MagLIF loads. The investigation of this alternative and others will be left for future work. \section{Scaling of MagLIF performance} \label{sec:performance} The similarity-scaling framework in \Sec{sec:prescriptions} leads to good agreement between the theory and simulations for the estimated plasma stagnation conditions and the burn-width time $\tau_{\rm bw}$. We now compare metrics for the expected performance of the similarity-scaled MagLIF loads. The fusion yield follows the scaling of the characteristic yield number $Y_{\rm ref}$ introduced in Paper I. The no-$\alpha$ yield $Y_{\rm no \, \alpha}$ obeys the following scaling rule: \begin{equation} \frac{Y_{\rm no \, \alpha}'}{Y_{\rm no \, \alpha}} = \left( \frac{\rho_0'}{\rho_0} \right)^2 \left( \frac{T_{\rm preheat}'}{T_{\rm preheat}} \right)^{3.77} \left( \frac{R_{\rm in,0}'}{R_{\rm in,0}} \right)^2 \frac{h'}{h}. \end{equation} Here we used the power-law fit in Eq.~(109) of Paper~I for the DT fusion reactivity, so that $\langle \sigma v \rangle_{\rm DT} \propto T^{3.77}$, which is valid within the 2--8~keV range shown in \Fig{fig:tion} for the no-$\alpha$ temperatures. Substituting \Eqs{eq:numerical:R}, \eq{eq:numerical:rho}, and \eq{eq:stagnation:tion2}, we obtain the scaling rule for the no-$\alpha$ fusion yield: \begin{equation} \frac{Y_{\rm no \, \alpha}'}{Y_{\rm no \, \alpha}} = \left( \frac{\I'}{\I} \right)^{5.87}. \label{eq:performance:yield} \end{equation} \begin{figure} \includegraphics[scale=.43]{fig27_yieldB} \caption{Fusion yield of similarity-scaled MagLIF loads. Red and blue points denote simulation results with and without $\alpha$ heating, respectively. Dashed lines are power-law fits to the simulation data. The legend shows the fitted scaling exponents.
The yellow curve is the scaling law in \Eq{eq:performance:yield}.} \label{fig:yield} \end{figure} The scaling of the yield per-unit-length $\smash{\widehat{Y}_{\rm no \, \alpha}}$ is found by removing the factor related to the liner height: \begin{equation} \frac{\widehat{Y}_{\rm no \, \alpha}'}{\widehat{Y}_{\rm no \, \alpha}} = \left( \frac{\I'}{\I} \right)^{5.36}. \label{eq:performance:yieldhat} \end{equation} This scaling law is more favorable than the often quoted $\smash{\widehat{Y}_{\rm no \, \alpha} \propto \I^4}$ scaling for z-pinch devices.\cite{Velikovich:2007hq} This occurs for two reasons. First, as a consequence of the scaling constraints on the preheat energy and on the liner inner radius (which scales relatively weakly with $\I$ to mitigate MRT feedthrough), the relatively more compact scaled fuel volumes are predicted to achieve higher fuel pressures and temperatures. Second, the initial fuel density is scaled sublinearly with respect to current to maintain the relative effects of radiation losses. This leads to the almost linear increase in ion temperatures near stagnation shown in \Fig{fig:tion}, which in turn increases the DT neutron reactivity. Figure \ref{fig:yield} shows the fusion yields for the similarity-scaled MagLIF loads and compares them to the analytical estimate in \Eq{eq:performance:yield}. The theory and the simulation results without $\alpha$ heating show excellent agreement. It is worth noting that the no-$\alpha$ yield varies by nearly three orders of magnitude when varying the peak current from 15~MA to 60~MA. In terms of absolute yield numbers, the no-$\alpha$ fusion yield for the anchor load driven at 20 MA is $10$~kJ, and the theoretically expected no-$\alpha$ yield for the 60-MA load is $0.010(60/20)^{5.87} \simeq 6.3$~MJ. In contrast, the no-$\alpha$ simulation at 60 MA gives a yield of $5.4$~MJ. 
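The quoted numbers can be cross-checked against the scaling rule of \Eq{eq:performance:yield}. The short snippet below anchors the power law at the 20-MA, 10-kJ no-$\alpha$ point given in the text:

```python
def yield_no_alpha_MJ(I_MA, I_ref_MA=20.0, Y_ref_MJ=0.010, exponent=5.87):
    """No-alpha yield extrapolated with Y'/Y = (I'/I)^5.87, anchored at
    the 20-MA anchor load (10 kJ = 0.010 MJ)."""
    return Y_ref_MJ * (I_MA / I_ref_MA) ** exponent

# Theoretical extrapolation to 60 MA; the 2D no-alpha simulation at the
# same current gives 5.4 MJ.
print(round(yield_no_alpha_MJ(60.0), 1))  # 6.3 (MJ)
```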
Interestingly, the calculations with $\alpha$ heating included show that similarity-scaled MagLIF loads can self-heat at higher currents and can lead to yields of roughly $Y\simeq67$~MJ at the 60-MA level. \begin{figure} \includegraphics[scale=.43]{fig57_HeatAlphaB} \caption{Burn-history averaged Lawson ignition parameter $\chi$ of the similarity-scaled MagLIF loads. Error bars denote the burn-weighted standard deviation associated with temporal variations near peak burn of the fuel pressure and temperature entering \Eq{eq:performance:chi_convenient}.} \label{fig:chi} \end{figure} Our results for various stagnation and performance metrics suggest that scaled-up MagLIF loads can potentially reach robust $\alpha$-heating regimes. A metric often used in the literature to measure this effect is the Lawson ignition parameter $\chi$.\cite{Betti:2010fc} Following Paper I, this quantity can be written as follows: \begin{equation} \chi = 0.09 [p_{\rm fuel}({\rm Gbar})] \cdot [\tau_{\rm burn}({\rm ns})] \, \frac{[ \langle \sigma v \rangle (10^{-18} {\rm cm^3/s})] }{[T ({\rm keV}) ]^2}, \label{eq:performance:chi_convenient} \end{equation} where we have dropped the term $\eta_\alpha$ measuring the fraction of trapped $\alpha$ particles. The no-$\alpha$ Lawson parameter $\chi_{\rm no\, \alpha}$ obeys the following scaling rule: \begin{equation} \frac{\chi_{\rm no\, \alpha}'}{\chi_{\rm no\, \alpha}} =\frac{p_{\rm preheat}'}{p_{\rm preheat}} \left( \frac{T_{\rm preheat}'}{T_{\rm preheat}} \right)^{1.77} \simeq \left( \frac{\I'}{\I} \right)^{3.36}. \end{equation} Figure \ref{fig:chi} compares the theoretical scaling curve for the $\chi$ parameter and the burn-history averaged $\chi$ values obtained from simulations. When comparing to the no-$\alpha$ results, we find that the theoretical scaling rule $\chi_{\rm no\, \alpha} \propto \I^{3.36}$ over-predicts the simulated scaling curve, which follows $\chi_{\rm no\, \alpha} \propto \I^{2.95}$.
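A minimal numerical sketch of \Eq{eq:performance:chi_convenient} and of the scaling rule above; the exponents are those quoted in the text, while any stagnation inputs a user supplies to `chi` are hypothetical:

```python
def chi(p_fuel_Gbar, tau_burn_ns, sigmav_1e18, T_keV):
    """Lawson parameter of the convenient form above, with the
    alpha-trapping factor eta_alpha dropped as in the text."""
    return 0.09 * p_fuel_Gbar * tau_burn_ns * sigmav_1e18 / T_keV**2

def chi_scaled(chi_ref, I_MA, I_ref_MA=20.0, exponent=3.36):
    """Theoretical rule chi'/chi = (I'/I)^3.36; exponent=2.95 reproduces
    the fitted simulation trend instead."""
    return chi_ref * (I_MA / I_ref_MA) ** exponent

# Anchored at the experimentally demonstrated chi_no_alpha ~ 0.1 at 20 MA,
# the theoretical rule gives ~4 at 60 MA, over-predicting the simulated
# value (chi_no_alpha ~ 2.0 at 60 MA per the text).
print(round(chi_scaled(0.1, 60.0), 1))
```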
The discrepancy between the theoretical and the fitted scaling exponents is explained by the observed deviation in the scaling of the fuel temperature shown in \Fig{fig:tion}. It is worth noting that MagLIF experiments at the 20-MA scale have demonstrated $\chi_{\rm no\, \alpha}\simeq 0.1$.\cite{foot:Knapp} Therefore, MagLIF on Z is far from a robust $\alpha$-heating regime. This is expected since the Z facility does not have enough energy to reach such thermonuclear conditions. However, when similarity scaling the MagLIF platform to 60 MA, we find that $\chi_{\rm no\, \alpha}\simeq 2.0$, which surpasses the ignition threshold of unity. The calculated $\chi$ parameter for the simulations with $\alpha$ heating surpasses the $\chi=1$ threshold near 35-MA peak current. This correlates well with the regime changes observed for the burn-averaged fuel parameters, such as the fuel pressure and temperature. Although $\chi_\alpha$ increases sharply in the 35--45~MA range, the fitted scaling curve at higher currents has an exponent similar to that of the no-$\alpha$ calculations. When considering the fitted power laws for $p_{\rm fuel, \alpha}$ and $T_{\rm \alpha}$, this behavior of $\chi_\alpha$ can be partially attributed to the decrease in the burn-width time discussed in \Fig{fig:tau} and to a smaller power-law exponent for the DT reactivity in the high-temperature range of 5--20~keV $(\langle \sigma v\rangle_{\rm DT} \propto T^{2.4})$. As a cautionary note, the simulation results in this paper are obtained from 2D ``clean'' simulations that consider neither interfacial mixing nor initial seeding of the MRT instability on the outer surface of the liner. Therefore, these results are inherently optimistic and over-predict the fusion yields that would be observed in experiment.
Nevertheless, the main takeaways are (1)~we propose a new paradigm for scaling MagLIF loads to higher currents, (2)~we have tested the theory against several metrics describing the implosion dynamics, stagnation conditions, and performance, and (3)~the theory and the no-$\alpha$ simulation results show agreement. These results increase our confidence in using this scaling paradigm to explore the performance of scaled-up MagLIF configurations. For future work, we shall use this scaling framework to scale MagLIF loads while including mixing and instability effects in order to better assess the potential of MagLIF to reach high yields at higher currents. \section{Conclusions} \label{sec:conclusions} The MagLIF platform is a magneto-inertial-fusion concept studied on the Z Pulsed Power Facility.\cite{Gomez:2014eta,Knapp:2019gf,Gomez:2019bg,Gomez:2020cd,YagerElorriaga:2022cp,Sinars:2020bv} Given the relative success of this platform, we proposed a novel method to scale MagLIF to higher currents in order to reach higher yields. Our method is based on similarity scaling.\cite{Schmit:2020jd,foot:Ruiz_framework} Similarity scaling attempts to preserve many of the physics regimes already known or being studied on today's Z machine, with the goal of reducing unexpected outcomes in future scaled-up experiments. In this paper, we derived scaling rules for the experimental input parameters characterizing a MagLIF load as unique functions of the characteristic current driving the implosions. We also derived scaling rules for various no-$\alpha$ metrics describing the liner-implosion dynamics, stagnation conditions, and performance. The scaling rules were compared against 2D radiation--magneto-hydrodynamic (rad-MHD) \textsc{hydra} simulations.\cite{Marinak:1996fs,Koning:2009} Overall, agreement was found between the scaling theory and simulation results.
In particular, analytical and 2D ``clean'' numerical calculations showed that MagLIF loads have the potential to reach a 67-MJ yield in a 60-MA--class pulsed-power facility \textit{when similarity scaled} from MagLIF configurations presently studied on Z.\cite{Gomez:2020cd,YagerElorriaga:2022cp} It is worth mentioning that the projected yields in this work are lower than the $\sim440$-MJ yield at $\sim 65$~MA peak current calculated from the optimized-scaling studies in \Refa{Slutz:2016cf}. The reduction in the projected yields is likely due to three reasons. First, similarity scaling imposes strict constraints on the scaling of the MagLIF liner (specifically, the liner thickness) to maintain the implosion stability. These constraints can limit the stagnation pressures that can be achieved at higher currents. In contrast, the scaling studies in \Refa{Slutz:2016cf} assumed a constant initial AR=6 for all scaled liners, which are relatively thinner and more unstable at higher currents than those discussed in this paper. Second, at high peak currents, the initial fuel densities suggested in \Refa{Slutz:2016cf} are significantly higher than those shown in this paper ($\sim$10~mg/cc compared to $\sim$4~mg/cc). Denser fuel configurations allow for better coupling of the $\alpha$ particles with the background fuel and therefore lead to higher fusion yields. However, laser preheat becomes a challenge with such relatively high initial fuel densities. For comparison, 15\% of the critical density of a 3$\omega$ laser propagating in DT plasma corresponds to 5.55~mg/cc. Therefore, significant LPI could be expected at such high densities. Third, the circuit models used in this work and in \Refa{Slutz:2016cf} are different. In this work, the circuit models are derived from the similarity-scaling rules and use the canonical circuit model for Z as a baseline.
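As an aside to the second point above, the 5.55-mg/cc figure can be reproduced from first principles; the sketch below assumes a frequency-tripled $\sim$1.06-$\mu$m preheat laser and fully ionized 50/50 DT:

```python
import math

# Critical electron density n_c = eps0 * m_e * omega^2 / e^2 for a
# 3*omega beam; the 1.064-um fundamental wavelength is an assumption.
eps0, m_e, e, c = 8.854e-12, 9.109e-31, 1.602e-19, 2.998e8  # SI units
amu = 1.661e-27                       # kg
lam = 1.064e-6 / 3                    # m, third harmonic
omega = 2 * math.pi * c / lam
n_c = eps0 * m_e * omega**2 / e**2    # electrons per m^3

# Quasineutral, fully ionized DT (Z = 1): n_i = n_e; mean ion mass ~2.515 amu.
rho = 0.15 * n_c * 2.515 * amu        # kg/m^3, numerically equal to mg/cm^3
print(round(rho, 2))                  # ~5.55 mg/cc, matching the quoted value
```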
The scaling rules for the circuit are designed to maintain the pulse shape of the normalized current traces (see \Fig{fig:radius_norm_vs_time}). In contrast, the calculations shown in \Refa{Slutz:2016cf} are based on circuit models of two conceptual designs of future petawatt-class pulsed-power accelerators (Z300 and Z800).\cite{Stygar:2015kza} The circuit models make different assumptions on the behavior of current losses, \ie current not reaching the MagLIF load. Differences in the power delivery and current losses assumed in this work and in \Refa{Slutz:2016cf} can affect the comparisons of the extrapolated performance of MagLIF loads even when considering the same peak current. The present work can be extended in several directions. First, it is important to identify the role of unknown physical processes (or ``hidden'' variables) that can affect the scaling results presented in this paper. Interfacial instabilities and mix within the fuel are a particular concern. In this regard, it is important to assess how the seeding of the MRT instability by the electro-thermal instability\cite{Oreshkin:2008dy,Peterson:2012bu,Peterson:2013eh,Yu:2020gl,Awe:2021jp} behaves at higher current densities. Future research should also focus on improving the \textit{ab initio} modeling of the spontaneously generated, helical MRT modes observed in MagLIF-type implosions.\cite{Awe:2014gba,Awe:2013dt} Important questions to answer are: (1)~how do helical modes scale with peak current and initial axial magnetic field, (2)~do helical modes lead to strong mixing of Be liner material into the fuel, and (3)~can helical MRT modes decrease the confinement of $\alpha$ particles and consequently truncate self-heating of the fuel? These questions concerning the scaling of interfacial instabilities and mix can be addressed via dedicated experiments and 3D simulations. Preheat delivery is another area deemed of ``higher risk'' when scaling MagLIF to higher peak currents.
Regarding this topic, it will be important to experimentally demonstrate the feasibility of delivering the preheat energies required by the scaling theory, understand the effects of vortex flows seeded within the fuel by the laser preheat,\cite{Weis:2021id} and estimate the degree of laser--plasma instabilities that will occur in future, more energetic preheat configurations. In this regard, dedicated preheat experiments \textit{at scale} are currently underway at the National Ignition Facility.\cite{foot:Pollock} For the interested reader, it is worth mentioning that \Refa{YagerElorriaga:2022cp} reviews the present-day research status of the MagLIF effort, and \Refa{Ruiz:2022} summarizes the research needs and challenges for MagLIF, with a particular emphasis on theory, simulations, and scaling to higher peak currents. As a second research direction, the work presented in this paper can be extended by applying the present scaling paradigm to ``ice-burning'' MagLIF configurations. In this paper, we only considered ``gas-burning'' MagLIF loads, \ie loads with gaseous fuel configurations. Numerical simulations presented in \Refs{Slutz:2012gp,Sefkow:2014ik,Slutz:2016cf} showed that ``ice-burning'' MagLIF loads, \ie loads with DT ice layers on the liner inner wall, can perform significantly better at currents beyond 55 MA. It would be interesting to extend the present similarity-scaling theory to this second class of MagLIF loads and assess the potential of such similarity-scaled configurations. As a third research direction, the similarity-scaling framework provides a roadmap to experimentally study MagLIF scaling physics on the Z facility. This can be done by turning down the machine charge voltage, self-consistently down-scaling the MagLIF load, and estimating the current delivery. An experimental effort will soon be underway at Sandia to test the similarity-scaling theory against experiments by varying the peak current within the 14--20 MA range.
If agreement is found, results from these experiments will bolster the confidence in scaling MagLIF and will help reduce uncertainties when extrapolating MagLIF performance to higher currents. One of the authors (D.~E.~Ruiz) was supported in part by Sandia National Laboratories (SNL) Laboratory Directed Research and Development (LDRD) Program, Project 223312. Sandia National Laboratories is a multimission laboratory managed and operated by National Technology $\&$ Engineering Solutions of Sandia, LLC, a wholly owned subsidiary of Honeywell International Inc., for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-NA0003525. This paper describes objective technical results and analysis. Any subjective views or opinions that might be expressed in the paper do not necessarily represent the views of the U.S. Department of Energy or the United States Government.
\section{Introduction} Clustering algorithms are a popular choice when tackling problems requiring exploratory data analysis. In this scenario, analysts can draw conclusions about data at hand without having information regarding the class membership of the given entities. Clustering algorithms aim at partitioning a given data set $Y$ into $K$ homogeneous clusters $S=\{S_1, S_2, ..., S_K\}$ without requiring any label learning process. These algorithms summarise information about each cluster by producing $K$ centroids, often called prototypes, $C=\{c_1, c_2, ..., c_K\}$. The ability to partition data and to provide information about each part has made the application of clustering popular in many fields, including: data mining, computer vision, security, and bioinformatics \cite{jain2010data,mirkin2012clustering,leiva2013warped,steinley2006k,makarenkov2001optimal,maldonado2015kernel}. There are various approaches to data clustering, with algorithms often divided into partitional and hierarchical. Originally, partitional algorithms produced only disjoint clusters so that each entity $y_i \in Y$ was assigned to a single cluster $S_k$. This hard clustering approach has been variously extended to fuzzy sets \cite{zadeh1965fuzzy}. Fuzzy clustering allows a given entity $y_i \in Y$ to belong to each cluster $S_k \in S$ with different degrees of membership. There are indeed a number of partitional algorithms, with \textit{k}-means \cite{ball1967clustering,macqueen1967some} and fuzzy c-means \cite{bezdek1984fcm} being arguably the most popular under the hard and fuzzy approach, respectively. Hierarchical algorithms provide additional information about data. They generate a clustering $S$ and related set of centroids $C$, very much like partitional algorithms, but they also give information regarding the relationships among clusters. This information comes as a nested sequence of partitions. 
This tree-like relationship can be visualized with a dendrogram (i.e., an ultrametric tree). In this type of clustering, an entity $y_i \in Y$ may be assigned to more than one cluster as long as the clusters are related and the assignment occurs at different levels of the hierarchy. Hierarchical algorithms can be divided into agglomerative and divisive \cite{mirkin2012clustering}. Agglomerative algorithms follow a bottom-up approach. They start by setting each entity $y_i \in Y$ as the centroid of its own cluster (singleton). Pairs of clusters are then merged stepwise until all the entities have been collected in the same cluster, or until a pre-specified number of clusters is found. Divisive algorithms do the opposite by following a top-down approach. There is indeed a wide variety of algorithms to apply when using hierarchical clustering. The Ward method \cite{ward1963hierarchical} is one of the most popular hierarchical algorithms. It follows the agglomerative approach, merging at each iteration the two clusters that minimise the within-cluster variance. This variance is measured as a weighted sum of squares, taking into account the cardinality of each cluster, leading to the following cost function: \begin{equation} \label{Eq:Ward} Ward(S_a, S_b) = \frac{N_a N_b}{N_a+N_b} \sum_{v=1}^V (c_{av} - c_{bv})^2, \end{equation} where $V$ is the number of features used to describe each entity $y_i \in Y$. $N_a$ and $c_a$ represent the cardinality and centroid of cluster $S_a \in S$, respectively. Similarly, we have $N_b$ and $c_b$ for cluster $S_b \in S$. The fraction in (\ref{Eq:Ward}) ensures that if two pairs of clusters are equally far apart, those of lower cardinalities are merged first. Previously, we extended the traditional Ward algorithm by introducing Ward$_p$ \cite{de2015feature}. Our algorithm applies cluster-dependent feature weights and extends the squared Euclidean distance in (\ref{Eq:Ward}) to the $p$-th power of the weighted Minkowski distance.
With these we: (i) ensure that relevant features have a higher impact on the clustering than those that are less relevant; (ii) can set the distance bias to shapes other than that of a spherical cluster, a problem traditionally addressed by methods following model-based clustering \cite{fraley1998many}. The contribution of this paper is two-fold. First, we introduce what we believe to be the first non-trivial initialisation method for a hierarchical clustering algorithm. Our method generates an initial partition with a sufficiently large number of clusters. The merging process then starts from this partition rather than from the singletons. In this way, the running time of a given hierarchical clustering algorithm is substantially reduced. Second, we advance hierarchical clustering by introducing A-Ward$_{p\beta}$, an extension of Ward$_p$ to the situation in which our initialisation method applies and the feature weight exponent can differ from the exponent of the Minkowski distance. We give a rule for choosing these two exponents for any given data set. We run numerous computational experiments, with and without noise in the data sets. It is worth noting that the ``noise'' in this paper has nothing to do with the conventional meaning of measurement errors, which are usually modelled by an additive or multiplicative Gaussian distribution affecting every data entry. Here, the noise is modelled in either of two ways: (1) inserting additional random noise features, and (2) blurring some features within some clusters. We establish that: (i) the initial clustering generated by our method does decrease the time a hierarchical clustering algorithm takes to complete; (ii) A-Ward$_{p\beta}$ provides better cluster recovery under different noise models than either the Ward or Ward$_p$ algorithms, especially for noisy data.
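As an illustration of the kind of dissimilarity involved, the sketch below assumes the form $\sum_v w_v^{\beta}\,|y_v - c_v|^p$ for the cluster-dependent weighted dissimilarity; this is our shorthand reading, and the exact definitions of Ward$_p$ and A-Ward$_{p\beta}$ are those of \cite{de2015feature}:

```python
def weighted_minkowski_p(y, c, w, p=2.0, beta=2.0):
    """Hypothetical sketch of a cluster-dependent weighted dissimilarity:
    sum_v w_v**beta * |y_v - c_v|**p. With unit weights and p = 2 this
    reduces to the squared Euclidean distance used in the Ward criterion."""
    return sum((wv ** beta) * abs(yv - cv) ** p
               for yv, cv, wv in zip(y, c, w))

# Unit weights and p = 2: squared Euclidean distance.
print(weighted_minkowski_p([0.0, 0.0], [1.0, 2.0], [1.0, 1.0]))  # 5.0
```

Decoupling $\beta$ from $p$ (the A-Ward$_{p\beta}$ idea) lets the weights be tuned independently of the distance bias.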
We direct readers interested in learning more about feature weighting in square-error clustering to reviews such as \cite{kriegel2009clustering} and references therein. \section{Ward clustering using anomalous patterns} \subsection{Ward and anomalous pattern Ward} \textit{K}-means is arguably the most popular partitional clustering algorithm \cite{jain2010data,steinley2006k}. It can be considered an analogue of the general expectation-maximisation algorithm (EM) \cite{dempster1977maximum}. Note, however, that EM recovers a mixed distribution density function, whereas \textit{k}-means just finds a set of non-overlapping clusters and their centres. \textit{K}-means alternatingly minimises the within-cluster sum of squares: \begin{equation} \label{Eq:KMeans} W(S,C) = \sum_{k=1}^K \sum_{y_i \in S_k} \sum_{v=1}^V (y_{iv} - c_{kv})^2 \end{equation} to obtain a partition of the given set of $N$ entities into a set of non-overlapping clusters $S_k \in S$, each represented by its centroid $c_k$, $k=1,2,..., K$. This minimisation is usually done by following three straightforward steps: (i) set the coordinates of each centroid $c_k \in C$ to a randomly chosen entity $y_i \in Y$; (ii) assign each entity $y_i \in Y$ to the cluster $S_k$ whose centroid $c_k$ is the nearest to $y_i$; (iii) update each centroid $c_k \in C$ to the component-wise mean of $y_i \in S_k$. Steps (ii) and (iii) are repeated until convergence. The popular Ward agglomeration algorithm \cite{ward1963hierarchical} uses the same criterion to build a sequence of partitions, each obtained by merging the two clusters $S_a$ and $S_b$ that are the nearest according to (\ref{Eq:Ward}), sometimes referred to as the Ward distance between the clusters. The Ward algorithm can be formulated as follows:\\ \textbf{Ward agglomeration algorithm} \begin{enumerate} \setlength{\itemsep}{-1pt} \item \textit{Initial Setting}.
Set the initial number of clusters $K=N$ and the related singleton clustering $S=\{S_1, S_2, ..., S_{N}\}$, in which every cluster consists of a single element of the data set, so that its centroid is that element. \item \textit{Merge clusters}. Using (\ref{Eq:Ward}), find the two nearest clusters $\{S_a, S_b\} \subseteq S$. Merge $S_a$ and $S_b$, creating a new cluster $S_{ab}$. Remove all references to $S_a$, $S_b$, $c_a$, and $c_b$. \item \textit{Centroid update}. Set the centroid of $S_{ab}$ to the component-wise mean of $y_i \in S_{ab}$. \item \textit{Stop condition}. Reduce $K$ by 1. If $K$ is still higher than the desired number of clusters (or higher than 1, when a full hierarchy is required), go back to Step 2. \end{enumerate} Both \textit{k}-means and the Ward method minimise the sum of squared errors, but there are considerable differences in their time-complexity. \textit{K}-means has a linear time-complexity on the number of entities, of $\mathcal{O}(NIKV)$ \cite{tan2006introduction}, where $I$ is the number of iterations it takes to converge and $K$ is the given number of classes. The number of iterations, $I$, is often small and can be reduced even further if \textit{k}-means is supplied with good initial centroids. The first implementations of Ward had time complexities of $\mathcal{O}(N^3)$ and $\mathcal{O}(N^2\log^2 N)$ \cite{eppstein2000fast} when a dissimilarity matrix between entities of size $(N\times N)$ was used as input. However, the optimal implementation of Ward, which is due to the development of the nearest neighbour chain and reciprocal nearest neighbour algorithms \cite{Juan1982Programme, murtagh1983survey}, is in $\mathcal{O}(N^2)$. For instance, Murtagh \cite{murtagh1985multidimensional} and, more recently, Murtagh and Legendre \cite{murtagh2014ward} discussed in detail the nearest neighbour chain algorithm using either ``stored data'' or ``stored dissimilarities'' implementations, leading to the $\mathcal{O}(N^2)$ computational complexity of Ward.
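The agglomeration steps above can be sketched naively as follows; this version recomputes all pairwise Ward distances at every merge, so it is $\mathcal{O}(N^3)$ and purely illustrative (production implementations use the nearest neighbour chain algorithm discussed in the text):

```python
import numpy as np

def ward_distance(c_a, c_b, n_a, n_b):
    """Ward distance of the criterion above: (Na*Nb/(Na+Nb)) * ||c_a - c_b||^2."""
    return n_a * n_b / (n_a + n_b) * ((c_a - c_b) ** 2).sum()

def ward(Y, k_final=1):
    """Naive Ward agglomeration from singletons down to k_final clusters."""
    Y = np.asarray(Y, dtype=float)
    clusters = [[i] for i in range(len(Y))]          # lists of entity indices
    centroids = [Y[i].copy() for i in range(len(Y))]
    while len(clusters) > k_final:
        # Step 2: find the pair of clusters with the smallest Ward distance
        _, a, b = min(((ward_distance(centroids[a], centroids[b],
                                      len(clusters[a]), len(clusters[b])), a, b)
                       for a in range(len(clusters))
                       for b in range(a + 1, len(clusters))),
                      key=lambda t: t[0])
        merged = clusters[a] + clusters[b]
        for idx in (b, a):                           # delete b first so a stays valid
            del clusters[idx], centroids[idx]
        # Step 3: centroid of the merged cluster is the component-wise mean
        clusters.append(merged)
        centroids.append(Y[merged].mean(axis=0))
    return clusters, centroids

clusters, _ = ward([[0, 0], [0, 1], [1, 0], [10, 10], [10, 11], [11, 10]], k_final=2)
print(sorted(sorted(c) for c in clusters))  # [[0, 1, 2], [3, 4, 5]]
```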
Nowadays, optimal implementations of the Ward algorithm have become standard and are widely used in popular software packages, such as R \cite{r_stats}, Clustan \cite{Clustan} or MATLAB \cite{matlab_stats}. There are many initialisation methods for \textit{k}-means \cite{bradley1998refining,pena1999empirical,steinley2006k}. Milligan \cite{milligan1980} pointed out that the results of \textit{k}-means heavily depend on the initial partitioning. He suggested that a good final clustering can be obtained using Ward's hierarchical algorithm to initialise it, which was confirmed later in computational experiments (see, for example, \cite{steinley2006k}). There are also other examples of using hierarchical clustering to initialise \textit{k}-means \cite{su2007search,cao2009initialization,celebi2012deterministic}. Conversely, \textit{k}-means is beneficial as a device for carrying out divisive clustering; see, for example, what is referred to as the ``bisecting \textit{k}-means'' \cite{steinbach2000comparison, mirkin2012clustering}. The author of the Clustan package \cite{Clustan}, David Wishart, was the first to propose the \textit{k}-means Cluster Model Tree method, which allows one to summarise a \textit{k}-means cluster solution by a hierarchy. For instance, a mini-tree for each \textit{k}-means cluster, showing how the entities combine within this cluster, can be constructed and visualised using Clustan \cite{Clustan}. However, to the best of our knowledge, the problem of accelerating agglomerative clustering using \textit{k}-means has not been addressed so far. This problem is related to the problem of pre-selecting the right value for the number of clusters $K$ when running \textit{k}-means. Such a pre-selected number of clusters should be greater than the number of expected clusters, but not by too much. We propose using the method of intelligent \textit{k}-means (\textit{ik}-means) \cite{chiang2010intelligent, mirkin2012clustering} for this purpose.
This method finds and removes ``anomalous'' clusters, one-by-one, from the data set, so that the number of these clusters is not pre-specified but rather obtained according to the data set structure by using a threshold $\theta$ that is the minimum number of entities required to form a cluster. When this threshold is set to 1, the number of anomalous clusters has been experimentally found to be always larger than the number of generated clusters. The \textit{ik}-means algorithm finds the current anomalous cluster $S_t$ and respective centroid $c_t$ by alternatingly minimising: \begin{equation} W(S_t,c_t) = \sum_{i \in S_t} d(y_i,c_t) + \sum_{i \notin S_t} d(y_i,0), \end{equation} where $d(y_i,c_t)$ is the squared Euclidean distance between entity $y_i$ and centroid $c_t$, and $d(y_i,0)$ is the squared Euclidean distance between entity $y_i$ and the centre of the data set $Y$. The algorithm then removes $S_t$ from the data set and re-applies the process to the remaining entities as explained below. Thus, the number of anomalous clusters, $K^*$, is our criterion for a fast preliminary estimation of the true number of clusters in the data set.\\ \textbf{Anomalous cluster identification algorithm (\textit{ik}-means)} \begin{enumerate} \setlength{\itemsep}{-1pt} \item \textit{Initial setting.} Set the user-defined $\theta$. Set the centroid $c_Y$ to be the component-wise mean of $y_i \in Y$. \item \textit{Tentative centroid.} Set $S_t=\emptyset$. Set $c_t$, a tentative centroid, to coincide with the entity $y_i \in Y$ that is farthest from $c_Y$ according to the squared Euclidean distance. \item \textit{Entity assignment.} Assign each entity $y_i \in Y$ to either $c_t$ or to $c_Y$ depending on which is the nearest. Those assigned to $c_t$ form the cluster $S_t$. If there are no changes in $S_t$, go to Step 5. \item \textit{Centroid update.} Update $c_t$ to the component-wise mean of $y_i \in S_t$. Go to Step 3. 
\item \textit{Save centroid.} If $|S_t|\geq \theta$, include $c_t$ into $C$. \item \textit{Remove clusters.} Remove each $y_i \in S_t$ from $Y$. If $|Y|>0$, go to Step 2. \item \textit{Cluster.} Run \textit{k}-means on the original data set $Y$, using as initial centroids those in $C$. \end{enumerate} The above is a rather successful initialisation for \textit{k}-means \cite{chiang2010intelligent}. We use it as a base for our anomalous pattern initialisation method for agglomerative clustering algorithms described later in this section. The traditional Ward algorithm starts from a trivial clustering $S=\{S_1, S_2, ..., S_N\}$ in which every cluster is a singleton. The sole purpose of $S$ is to serve as a base for the clustering generated in the next iteration of Ward. Obviously, this trivial set is useless to any data analyst. With the above in mind, one could wonder whether the clustering generated in the next iteration of Ward, that with $N-1$ clusters, would be of any interest to a data analyst. This will be a clustering in which only one of the $N-1$ clusters is not a singleton. Of course, we cannot state whether it is of any interest because the degree of usefulness of such a clustering is problem-dependent. However, classifying $N$ entities into $N-1$ classes would be trivial in most practical situations. If neither $N$ nor $N-1$ clusters constitute a useful clustering, we could challenge the usefulness of the solution with $N-2$ clusters and so on. Clearly, at some stage we will reach a number of clusters, $K^*$, that leads to a useful clustering in terms of partitions. $K^*$ is not a reference to the true number of clusters in $Y$, even if such a number is known. Instead, $K^*$ represents the number of clusters at which the data begin to manifest some form of cluster structure. Since in this paper we follow the agglomerative approach, $K^*$ can also be viewed as the maximum number of anomalous patterns in $Y$.
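The extraction loop of Steps 1--6 above can be sketched as follows; this is an illustrative reading of the algorithm (with the final \textit{k}-means refinement of Step 7 omitted), not the reference implementation of \cite{chiang2010intelligent}:

```python
import numpy as np

def anomalous_centroids(Y, theta=1):
    """Steps 1-6 of ik-means: peel anomalous clusters off one by one.
    Returns the centroids C; Step 7 would seed k-means with them."""
    Y = np.asarray(Y, dtype=float)
    c_Y = Y.mean(axis=0)                        # grand mean, kept fixed
    remaining = Y.copy()
    centroids = []
    while len(remaining) > 0:
        # Step 2: tentative centroid is the entity farthest from the centre
        d_centre = ((remaining - c_Y) ** 2).sum(axis=1)
        c_t = remaining[d_centre.argmax()].copy()
        while True:
            # Step 3: assign each entity to c_t or to c_Y, whichever is nearer
            in_t = (((remaining - c_t) ** 2).sum(axis=1)
                    < ((remaining - c_Y) ** 2).sum(axis=1))
            if not in_t.any():                  # degenerate case: keep only
                in_t[d_centre.argmax()] = True  # the farthest entity
            new_c = remaining[in_t].mean(axis=0)
            if np.allclose(new_c, c_t):
                break
            c_t = new_c                         # Step 4: centroid update
        if in_t.sum() >= theta:                 # Step 5: save the centroid
            centroids.append(c_t)
        remaining = remaining[~in_t]            # Step 6: remove the cluster
    return np.array(centroids)

C = anomalous_centroids([[0, 0], [0, 1], [1, 0], [10, 10], [10, 11], [11, 10]])
print(len(C))  # 2 anomalous clusters found for these two well-separated groups
```

With $\theta=1$, the number of returned centroids is the $K^*$ used below to initialise A-Ward.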
Above, we described the \textit{ik}-means algorithm. This is an algorithm able to find anomalous patterns in a data set, as well as the related partitions. The maximum number of anomalous patterns $K^*$ in $Y$ is given by \textit{ik}-means when the parameter $\theta$ is set to 1. This setting leads to two important points: (i) it allows for the possibility of singletons; (ii) $K^*$ is greater than the true number of clusters in $Y$. Ideally, Ward should be initialised with $K^*$ and the related clustering generated by \textit{ik}-means. Point (i) is important because $Y$ may be a sample of a larger real-world data set. It is possible that the larger data set contains a cluster $|S_k|>1$ for which the sample $Y$ contains a single entity of $S_k$. Moreover, since $K^*$ is an overestimation of the true number of clusters in $Y$ (point ii), our version of Ward will generate a tree structure from $K^*$ until the true number of clusters is found. If the latter is unknown, we can generate a binary hierarchy beginning with $K=K^*$ and finishing with $K=2$. The main objective of our method is to reduce the number of steps Ward takes to complete, and by consequence, the time required for its execution. The results we present later in this section show that the gain in running time provided by this strategy can be very significant (see Figures \ref{Fig:Ward_Time} and \ref{Fig:TTWard_Time}). Now we can formalise Ward with anomalous pattern initialisation, further on referred to as A-Ward, as follows:\\ \textbf{A-Ward algorithm} \begin{enumerate} \setlength{\itemsep}{-1pt} \item \textit{Initial Setting}. Set $\theta = 1$. Obtain the initial number of clusters $K=K^*=|C|$ and the related clustering $S=\{S_1, S_2, ..., S_{K}\}$ by running \textit{ik}-means on $Y$. \item \textit{Merge clusters}. Using (\ref{Eq:Ward}), find the two closest clusters $\{S_a, S_b\} \subseteq S$. Merge $S_a$ and $S_b$, creating a new cluster $S_{ab}$. Remove all references to $S_a$, $S_b$, $c_a$, and $c_b$.
\item \textit{Centroid update}. Set the centroid of $S_{ab}$ to the component-wise mean of $y_i \in S_{ab}$. \item \textit{Stop condition}. Reduce $K$ by 1. If $K>2$ or if $K$ is still higher than the desired number of clusters, go back to Step 2. \end{enumerate} \subsection{Comparing Ward and A-Ward} \label{Sec:ExperimentsSetting} When defining the A-Ward algorithm, we intended to define a method that has a cluster recovery capability similar to that of Ward, while being somewhat faster. To test a new clustering method, it is quite natural to define a collection of data sets with a predefined cluster structure, which is easiest to achieve by generating synthetic data sets. Using synthetic data with and without noise, we can apply both the Ward and A-Ward clustering algorithms and assess both the speed and the level of cluster recovery. To measure the level of cluster recovery, we compare the partition found by the algorithm with the generated reference partition by using the conventional Adjusted Rand Index \cite{hubertarabie1985rand}. This popular index is the corrected-for-chance version of the Rand index: \begin{equation} \label{Eq:ARI} ARI = \frac{ \sum_{ij} \binom{n_{ij}}{2} - [\sum_i \binom{a_i}{2} \sum_j \binom{b_j}{2}] / \binom{n}{2} }{ \frac{1}{2} [\sum_i \binom{a_i}{2} + \sum_j \binom{b_j}{2}] - [\sum_i \binom{a_i}{2} \sum_j \binom{b_j}{2}] / \binom{n}{2} }, \end{equation} where $n_{ij}=|S_i \cap S_j|$, $a_i = \sum_{j=1}^K |S_i \cap S_j|$ and $b_j = \sum_{i=1}^K |S_i \cap S_j|$. The range of (\ref{Eq:ARI}) is within the interval from $-1$ to 1. ARI reaches 1 if and only if the two compared partitions coincide, i.e., $S_p=S_q$. We begin by generating 20 synthetic data sets for each of the configurations 1000x6-3, 1000x12-6 and 1000x20-10 (for details see Table \ref{Tab:GMMs}). In these data sets, all clusters are spherical.
That is, each cluster is generated from a Gaussian distribution whose covariance matrix is diagonal with the same diagonal value $\sigma^2$ generated randomly between $0.5$ and $1.5$. Each of the centroid components was generated independently using the standard normal distribution $N(0,1)$. The cardinality of each cluster was selected from a uniform distribution, with the constraint that it should have at least 20 entities. Then we introduced noise in these data sets according to either of the two following noise generation models: \begin{enumerate} \item {\bf Noise model 1: Random feature to be inserted.} A noise feature is generated according to a uniform distribution in the range between the minimum and maximum values in the data set. \item {\bf Noise model 2: Blurring a cluster over a feature.} Any feature in a generated data set contains $K$ cluster-specific fragments. By randomly selecting a feature and cluster, such a fragment is substituted by uniform random noise. \end{enumerate} The noise model 1 addresses the issue of generic clustering methods based on the least-squares criterion (\ref{Eq:KMeans}): they cannot distinguish between useful and inadequate features. It has been used in \cite{cordeiro2011minkowski, de2015feature,de2015recovering,de2016applying} to test the feature-weighted versions of the \textit{k}-means and Ward algorithms; those showed good cluster recovery properties against such noise features. The noise model 2 is novel. It is intended for testing the ability of clustering algorithms to perform under cluster-specific noise. In practice this type of noise can be found in various fields, including computer vision \cite{freytag2012efficient}, financial economics \cite{wilcox2014hierarchical} and genomics \cite{monni2009stochastic}. We added 50\% of noise data to each of the original data sets according to each of the above-defined noise models.
For example, each of the 20 data sets generated according to the configuration 1000x12-6 contains 12 original features; six noise features have been inserted into each of them (leading to a total of 18 features). We refer to this new configuration as 1000x12-6+6NF, where NF stands for ``noise feature''. Similarly, 50\% of all the $KV$ cluster-specific fragments have been blurred according to the noise model 2, which is denoted here as 1000x12-6 50\%N. Our simulations were carried out using a 64-bit computer equipped with an Intel i5-4690T CPU, running at 2.5\,GHz, and 8\,GB of RAM. Our algorithms were implemented using MATLAB R2013 running on Linux (Ubuntu). We did not use MATLAB's partially pre-compiled \texttt{linkage} function, as it would introduce bias into our timing experiments. \begin{table}\small \caption{The nine cluster structure configurations used in simulations.} \begin{center} \tabcolsep=0.11cm \begin{tabular}{lccccc} &Entities&Features&Clusters&Noise&Cluster-specific\\ &&&&features&noise (\%)\\ \cline{2-6} 1000x6-3&1000&6&3&0&0\\ 1000x6-3 +3NF&1000&6&3&3&0\\ 1000x6-3 50\%N&1000&6&3&0&50\\ \hline 1000x12-6&1000&12&6&0&0\\ 1000x12-6 +6NF&1000&12&6&6&0\\ 1000x12-6 50\%N&1000&12&6&0&50\\ \hline 1000x20-10&1000&20&10&0&0\\ 1000x20-10 +10NF&1000&20&10&10&0\\ 1000x20-10 50\%N&1000&20&10&0&50\\ \end{tabular} \end{center} \label{Tab:GMMs} \end{table} The results of running Ward and A-Ward over the $180=9\times 20$ generated data sets confirm our assumptions: \begin{enumerate} \item A-Ward is significantly faster than Ward (see Figures \ref{Fig:Ward_Time} and \ref{Fig:TTWard_Time} demonstrating time box-plots for each of the data configurations); \item A-Ward and Ward have similar cluster recovery capabilities over each of the data set configurations (see Table \ref{Tab:GMM_Baseline_Ward_TTWard}). \end{enumerate} Table \ref{Tab:GMM_Baseline_Ward_TTWard} reports the number of anomalous clusters $K^*$ found by \textit{ik}-means.
The presented results suggest that this number is indeed greater than the number of generated clusters. We also computed the average ARI values between the solutions provided by Ward and A-Ward (see the last two columns in Table \ref{Tab:GMM_Baseline_Ward_TTWard}). For data sets without noise, this additional ARI is close to the ARI between each algorithm's solution and the known truth. Still for data not affected by noise, the ARI values increase with the number of features. For data sets including noise, the trend is quite the opposite. In these cases, we can conclude that the solutions yielded by Ward and A-Ward diverge, and this divergence can be very significant as the quantity of noise increases. Clearly, both Ward and A-Ward appear to be impractical in the presence of noise. The optimal time complexity of the Ward algorithm is $\mathcal{O}(N^2V)$, given that an object-to-feature $(N\times V)$ data matrix is used as input \cite{murtagh2014ward}. Our anomalous pattern method initialises Ward with $K^*$ clusters instead of $N$, leading to a time complexity of $\mathcal{O}(K^{*2}V)$ for the remaining Ward operations, i.e., those after the \textit{ik}-means initialisation. The average values of $K^*$ over the processed data sets (see Table \ref{Tab:GMM_Baseline_Ward_TTWard}) vary from 19.90 to 49.95. Obviously, the initialisation stage of A-Ward also has a computational cost, expressed via the time complexity of \textit{ik}-means, which is $\mathcal{O}(NK^*IV)$, where $I$ is the number of iterations \textit{ik}-means takes to converge. Thus, after dividing the involved time complexities by $V$, we can claim that our A-Ward algorithm decreases the amount of time traditional Ward takes to complete as long as $\mathcal{O}_k(NIK^*)<\mathcal{O}_w(N^2-K^{*2})$, where $\mathcal{O}_k$ is the upper bound of \textit{ik}-means and $\mathcal{O}_w$ is the upper bound of Ward.
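To get a feel for this condition, the sketch below plugs in representative numbers: $N$ and $K^*$ match the experimental setting, while the iteration count $I$ is an assumed value for illustration only, not a measured one.

```python
# Illustrative check of the speedup condition N*I*K* < N^2 - K*^2
# (constant factors ignored; both sides already divided by V).
N = 1000           # entities per data set, as in the experiments
K_star = 50        # near the upper end of the observed K* range (19.90-49.95)
I = 10             # assumed number of ik-means iterations (not measured)

init_cost = N * I * K_star          # ik-means initialisation: O(N K* I)
saved_cost = N ** 2 - K_star ** 2   # Ward merges avoided: O(N^2 - K*^2)

assert init_cost < saved_cost       # 500000 < 997500: the condition holds
```

With these assumed numbers the initialisation costs roughly half of what it saves; for smaller $K^*$ or fewer iterations the margin widens further.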
\begin{figure}[t] \caption{Time in seconds the conventional Ward algorithm takes to complete.} \centering \includegraphics[width=1\textwidth]{Times_WardFast} \label{Fig:Ward_Time} \end{figure} \begin{figure}[b] \caption{Time in seconds the A-Ward algorithm takes to complete.} \centering \includegraphics[width=1\textwidth]{Times_WardFast_ikm} \label{Fig:TTWard_Time} \end{figure} \begin{table} \caption{The average ARI, its standard deviation and the number of pre-selected clusters $K^*$ for the Ward and A-Ward algorithms obtained over 20 synthetic data sets for each of the nine parameter configurations.} \begin{center} \tabcolsep=0.11cm \begin{tabular}{lccccccccccc} &\multicolumn{2}{c}{Ward}&&\multicolumn{5}{c}{A-Ward}&&\multicolumn{2}{c}{Ward/A-Ward}\\ \cline{5-9} &&&&\multicolumn{2}{c}{ARI}&&\multicolumn{2}{c}{$K^*$}&&\multicolumn{2}{c}{ARI}\\ \cline{2-3} \cline{5-6} \cline{8-9} \cline{11-12} &avg&sd&&avg&sd&&avg&sd&&avg&sd\\ 1000x6-3&0.5448&0.231&&0.5285&0.197&&19.90&2.245&&0.5217&0.204\\ 1000x6-3 +3NF&0.0400&0.109&&0.0501&0.132&&22.70&2.934&&0.3046&0.153\\ 1000x6-3 50\%N&0.0545&0.090&&0.0877&0.108&&20.20&2.262&&0.2910&0.157\\ \hline 1000x12-6&0.6929&0.166&&0.7102&0.188&&33.55&6.082&&0.6669&0.185\\ 1000x12-6 +6NF&0.1375&0.130&&0.1267&0.123&&26.20&4.937&&0.2093&0.079\\ 1000x12-6 50\%N&0.1276&0.089&&0.1208&0.078&&28.65&4.221&&0.2096&0.057\\ \hline 1000x20-10&0.8998&0.060&&0.9058&0.061&&36.40&7.229&&0.8704&0.078\\ 1000x20-10 +10NF&0.2418&0.084&&0.2326&0.096&&49.75&8.226&&0.1871&0.055\\ 1000x20-10 50\%N&0.1360&0.048&&0.1283&0.043&&49.95&8.636&&0.1617&0.035\\ \end{tabular} \end{center} \label{Tab:GMM_Baseline_Ward_TTWard} \end{table} Usually, hierarchical algorithms are sensitive to perturbations that affect all entities in data sets. Thus, we carried out experiments to determine the impact of our initialisation method in such a case. To do so we substituted 20\% of the entities, rather than features, of each data set by uniformly random noise. 
We then calculated the ARI between the clusterings obtained with Ward and A-Ward and the known truth, without taking the substituted entities into account. We performed this set of experiments on data sets without any additional noise. The obtained results are presented in Figure \ref{Fig:ARIWard_AWard}. We can observe that A-Ward produces the largest ARI range for the 1000x6-3 and 1000x12-6 data set configurations. However, A-Ward provides the highest first and third quartiles, as well as the median, in all cases. \begin{figure}[ht] \caption{ARI of Ward (left of each pair of boxes) and A-Ward (right of each pair of boxes) for data sets in which 20\% of entities were substituted by within-domain uniformly random noise. The ARI was calculated without taking the substituted entities into account.} \centering \includegraphics[width=1.2\textwidth]{ARI_20PNoiseEntity} \label{Fig:ARIWard_AWard} \end{figure} \subsection{Case study} In this subsection we present an example application of our A-Ward algorithm. Our main objective is to demonstrate that the \textit{ik}-means initialisation used by A-Ward does not negatively impact its ability to recover clusters. To do so, we considered the popular Zoo data set, which can be found in the UCI machine learning repository \cite{Lichman:2013}. Species hierarchies are usually relatively easy to understand and interpret. The Zoo data set contains 101 entities, described over 16 features, and partitioned into seven clusters. We have treated all features as numeric and standardised them as follows: \begin{equation} \label{Eq:Stand} y^{\prime}_{iv} = \frac{y_{iv}-\overline{y_v}}{\max(y_v)-\min(y_v)}, \end{equation} where $\overline{y_v}$ is the average value of $v$ over all entities in $Y$, and $y^{\prime}_{iv}$ is the standardised value of $y_{iv}$. Our choice of standardisation method has two important implications. First, unlike the \textit{z}-score, it does not favour unimodal distributions.
This is easier to explain with an example. Consider a unimodal feature $v_1$ and a bimodal feature $v_2$. The standard deviation of $v_2$ is likely to be higher than that of $v_1$, so \textit{z}-scoring would shrink the values of $v_2$ more than those of $v_1$, leading to $y^{\prime}_{iv_{2}} < y^{\prime}_{iv_{1}}$. This is particularly problematic because clustering would usually target the groups associated with the modes in $v_2$. Second, if $v$ is a binary feature, its range will be one. This means that the standardised value of $y_{iv}$ is simply $y_{iv}-\overline{y_v}$. With this, features with a higher frequency lead to lower standardised values than features with lower frequencies. For example, binary features with many zero values will have a very significant impact on the clustering process. Since the complete Zoo data set is too large to be shown as a single tree, we randomly selected four entities from each of its seven clusters; 28 entities in total. The only misclassified species in the A-Ward hierarchy presented in Figure \ref{Fig:Ward_AWard} is \textit{tortoise} (from Class 3), which is clustered with the species of Class 2. It is worth noting that a misclassification of \textit{tortoise} is also characteristic of the traditional Ward algorithm. However, A-Ward produces the top part of the hierarchy without the computational cost of Ward. \begin{figure}[t] \caption{Zoo hierarchy found by our A-Ward algorithm for 28 species of the Zoo dataset (4 species from each of the 7 original Zoo classes were selected randomly). The species content by class is as follows: Class 1: porpoise, platypus, reindeer, fruitbat; Class 2: dove, gull, swan, rhea; Class 3: pitviper, slowworm, tortoise, tuatara; Class 4: herring, sole, carp, stingray; Class 5: frog1, frog2, newt, toad; Class 6: wasp, honeybee, housefly, gnat; Class 7: crayfish, seawasp, crab, clam. Red circles in the tree represent 11 clusters found by \textit{ik}-means during the initialisation step of A-Ward.
Red edges of the hierarchy represent the tree found by A-Ward during its tree building step. Green edges of the hierarchy represent mini-trees found by the conventional Ward algorithm (this step is optional) for the 11 clusters provided by \textit{ik}-means.} \centering \includegraphics[width=0.9\textwidth]{Ward.jpg} \label{Fig:Ward_AWard} \end{figure} \section{Using the weighted Minkowski distance} \subsection{Weighted Minkowski \textit{k}-means and Ward algorithms} We previously dealt with noisy data sets by introducing the intelligent Minkowski weighted \textit{k}-means algorithm (\textit{imwk}-means) \cite{cordeiro2011minkowski}. This algorithm minimises the following objective function: \begin{equation} \label{Eq:MWKMeans} W(S,C,w) = \sum_{k=1}^K \sum_{y_i \in S_k} \sum_{v=1}^V w_{kv}^{p}|y_{iv} - c_{kv}|^p, \end{equation} where $p$ is a user-defined exponent related to what can be called the curvature bias. Assuming a two-dimensional space (for easier visualisation), the bias at $p=1$, $p=2$, and $p\rightarrow\infty$ is towards diamonds, circles and squares, respectively. The \textit{imwk}-means criterion clearly sets the exponent of the distance and the feature weight to the same value, $p$. Thus, the feature weights can be seen as rescaling factors for any value of $p$. These rescaling factors can be used in the data pre-processing stage of a wide variety of tasks in machine learning. For instance, rescaling a data set with these factors increases the likelihood of recovering the correct number of clusters contained in the data \cite{de2015recovering}. The weight of feature $v$ at cluster $S_k$ is inversely proportional to the dispersion of $v$ at $S_k$, since the first-order necessary minimum conditions of (\ref{Eq:MWKMeans}) imply that: \begin{equation} \label{Eq:MWK_Weight} w_{kv} = \frac{1}{\sum_{u=1}^V[D_{kv}/D_{ku}]^{1/(p-1)}}, \end{equation} where $D_{kv}=\sum_{y_i \in S_k} |y_{iv} -c_{kv}|^p$ is the dispersion of $v$ at $S_k$.
The above is true for crisp clustering, where each entity $y_i \in Y$ is assigned to one and only one cluster $S_k$, leading to $\sum_{v=1}^V w_{kv}=1$ for $k=1, 2, ..., K$. At $p=1$ the minimum of (\ref{Eq:MWKMeans}) is reached at the median. Moreover, because this criterion has a linear shape at $p=1$, the first-order minimum conditions are not applicable here and, therefore, formula (\ref{Eq:MWK_Weight}) is not applicable either. Thus, we run experiments only at $p>1$. Given the success of the above-discussed \textit{imwk}-means algorithm, the agglomerative Ward$_p$ was introduced in \cite{de2015feature}, using a hierarchical clustering heuristic in which cluster-dependent feature weights are determined according to (\ref{Eq:MWK_Weight}). At each iteration, Ward$_p$ merges the two clusters that minimise the following dissimilarity function: \begin{equation} \label{Eq:MW_Ward} Ward_p(S_a, S_b) = \frac{N_a N_b}{N_a + N_b}\sum_{v=1}^V \left(\frac{w_{av}+w_{bv}}{2}\right)^p |c_{av} - c_{bv}|^p. \end{equation} Unlike the distance calculations in \textit{imwk}-means, those of Ward$_p$ are only between centroids $\{c_a, c_b\} \subseteq C$. Thus, the weight in (\ref{Eq:MW_Ward}) is the average of $w_{av}$ and $w_{bv}$, each calculated using (\ref{Eq:MWK_Weight}). Ward$_p$ minimises (\ref{Eq:MW_Ward}) following the steps below:\\ \textbf{Ward$_p$ agglomerative clustering algorithm} \begin{enumerate} \setlength{\itemsep}{-1pt} \item \textit{Initial setting}. Select the value of $p$ and start from a partition consisting of $N$ singleton clusters. Each centroid $c_k \in C$ is set to the corresponding entity $y_i \in Y$. Set $w_{kv} = 1/V$ for $k=1, 2, ..., K$ and $v=1, 2, ..., V$. \item \textit{Merge clusters}. Find the two nearest clusters $\{S_a, S_b\} \subseteq S$ with respect to (\ref{Eq:MW_Ward}). Merge $S_a$ and $S_b$, thus creating a new cluster $S_{ab}$. Remove all references to $S_a$, $S_b$, $c_a$, and $c_b$.
\item \textit{Centroid update}. Set the centroid of $S_{ab}$ to the component-wise Minkowski centre of $y_i \in S_{ab}$. \item \textit{Weight update}. Using (\ref{Eq:MWK_Weight}), compute weights $w_{kv}$ for $k=1, 2, ..., K$ and $v=1, 2, ..., V$. \item \textit{Stop condition}. Reduce $K$ by 1. If $K>1$ or if $K$ is still greater than the desired number of clusters, go back to Step 2. \end{enumerate} The algorithm Ward$_p$ requires the computation of the Minkowski centre at different values of $p$. This centre can be calculated using a steepest descent method \cite{cordeiro2011minkowski,de2015recovering}. \subsection{Ward$_{p\beta}$ algorithm initialised with anomalous patterns} Both \textit{imwk}-means and Ward$_p$ apply the same exponent $p$ to the feature weights and the distance in their respective criteria. There are two major reasons to apply the same exponent. First, by doing so there is a single problem-specific parameter to be defined by the user. Since the optimal value of this parameter is usually unknown to the user, it can be estimated by analysing the clusterings produced at different values of $p$. For instance, the user can carry out Ward$_p$ at $p=1.1, 1.2, ..., 5.0$ and choose as optimal the value of $p$ that optimises a given cluster validity index. In our previous experiments, we successfully applied the Silhouette width \cite{de2015feature}. Obviously, there are many other cluster validity indices that could be used instead (see a recent survey \cite{arbelaitz2012extensive}). The second reason is that if the same exponent is employed with the feature weights and the distance, then the weights can be seen as feature rescaling factors. These factors can be used in the data pre-processing stage as an instrument to standardise a data set.
For instance, rescaling data sets with these factors improves the likelihood that cluster validity indices return the true number of clusters in data sets, particularly in those comprising noise features \cite{de2015recovering}. The above is helpful when the number of clusters in a data set is unknown. Still, in this paper we deal solely with cluster recovery where the number of clusters is known. Clearly, estimating a single parameter is easier than estimating two. However, by using two exponents we detach the cluster shape from the weight exponent, considerably increasing the variety of clustering possibilities. Taking all of the above into account, here we extend Ward$_p$ to allow the use of different exponents for the distance and the feature weights. During the initialisation step, our new algorithm, A-Ward$_{p\beta}$, measures the distance between an entity $y_i \in Y$ and the centroid $c_k \in C$ of cluster $S_k$ by: \begin{equation} \label{Eq:TwoExponentMinkDist} d_{p\beta}(y_i, c_k) = \sum_{v=1}^V w_{kv}^{\beta} |y_{iv}-c_{kv}|^p, \end{equation} where $p$ and $\beta$ are user-defined parameters. In Section \ref{Sec:Estimating_pb} we introduce a method to estimate good values for these parameters. Our new algorithm makes use of our anomalous pattern initialisation, this time also applying the weighted Minkowski distance, as presented below:\\ \textbf{Anomalous pattern initialisation for A-Ward$_{p\beta}$ and \textit{imwk}-means$_{p\beta}$} \begin{enumerate} \setlength{\itemsep}{-1pt} \item \textit{Initial setting}. Select the values of $p$ and $\beta$. Set the data centre $c_Y$ to be the component-wise Minkowski centre of $y_i \in Y$. \item \textit{Tentative centroid}. Set $S_t = \emptyset$. Set $w_{kv}=1/V$ for $k=1, 2$ and $v=1, 2, ..., V$. Set $c_t$, a tentative centroid, to the values of the furthest entity $y_i \in Y$ from $c_Y$ as per (\ref{Eq:TwoExponentMinkDist}). \item \textit{Entity assignment}.
Assign each entity $y_i \in Y$ that is closer to $c_t$ than to $c_Y$ as per (\ref{Eq:TwoExponentMinkDist}) to the cluster $S_t$. If this step produces no changes in $S_t$, go to Step 6. \item \textit{Centroid update}. Update $c_t$ to the component-wise Minkowski centre of $y_i \in S_t$. \item \textit{Weight update}. Update the feature weights as per (\ref{Eq:MWK_Weight}). Go to Step 3. \item \textit{Save parameters}. Include $c_t$ into $C$, and $w$ into $W$. \item \textit{Remove cluster}. Remove each $y_i \in S_t$ from $Y$. If there are still entities in $Y$, go to Step 2. \end{enumerate} We can further minimise the distance between entities and centroids by using centroids $C$ and weights $W$ generated above as starting points for the version of our \textit{imwk}-means$_{p\beta}$ algorithm below: \\ \textbf{\textit{imwk}-means$_{p\beta}$ algorithm} \begin{enumerate} \setlength{\itemsep}{0pt} \item \textit{Initial setting}. Set $K=|C|=K^*$, and $S=\emptyset$. \item \textit{Entity assignment}. Assign each entity $y_i \in Y$ to the cluster $S_k \in S$ that is represented by the centroid $c_k \in C$ that is the closest to $y_i$ as per (\ref{Eq:TwoExponentMinkDist}). If there are no changes in $S$, go to Step 5. \item \textit{Centroid update}. Update each centroid $c_k \in C$ to the component-wise Minkowski centre of $y_i \in S_k$. \item \textit{Weight update}. Update each weight $w_{kv}$ for $k=1, 2, ..., K$ and $v=1, 2, ..., V$ as per (\ref{Eq:MWK_Weight}). Go to Step 2. \item \textit{Output}. Output the clustering $S$, centroids $C$ and weights $W$. \end{enumerate} Upon completion of the algorithm above we obtain a clustering $S$, centroids $C$ and weights $w_{kv}$ for $k=1, 2, ..., K$ and $v=1, 2, ..., V$. As we will show in the following sections, these parameters represent good initial settings for our A-Ward$_{p\beta}$. 
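To make the two-exponent machinery concrete, the following NumPy sketch implements the dissimilarity $d_{p\beta}$ of (\ref{Eq:TwoExponentMinkDist}) and the weight update of (\ref{Eq:MWK_Weight}). The function names and the small epsilon guarding zero dispersions are our assumptions, not part of the original algorithms, and the centroids are taken as given (the algorithms themselves use the Minkowski centre).

```python
import numpy as np

def d_pbeta(y, c, w, p, beta):
    """Two-exponent weighted Minkowski dissimilarity between entity y
    and centroid c, with cluster-specific feature weights w."""
    return float(np.sum(w ** beta * np.abs(y - c) ** p))

def update_weights(Y, labels, centroids, p):
    """Feature weights w_kv = 1 / sum_u (D_kv / D_ku)^(1/(p-1)),
    where D_kv is the Minkowski-p dispersion of feature v in cluster k.
    Requires p > 1; a tiny epsilon (our assumption) guards D_kv = 0."""
    K, V = centroids.shape
    w = np.zeros((K, V))
    exponent = 1.0 / (p - 1.0)
    for k in range(K):
        # Dispersion D_kv = sum over entities in cluster k of |y_iv - c_kv|^p
        D_k = (np.abs(Y[labels == k] - centroids[k]) ** p).sum(axis=0) + 1e-9
        for v in range(V):
            w[k, v] = 1.0 / ((D_k[v] / D_k) ** exponent).sum()
    return w
```

Note that, by construction, the weights of each cluster sum to one, and the least dispersed feature in a cluster receives the largest weight, matching the intuition stated earlier.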
We build the cluster hierarchy using the following cluster-to-cluster dissimilarity measure: \begin{equation} \label{Eq:MW_Wardpb} Ward_{p\beta}(S_a, S_b) = \frac{N_a N_b}{N_a + N_b}\sum_{v=1}^V \left(\frac{w_{av}+w_{bv}}{2}\right)^{\beta} |c_{av} - c_{bv}|^p. \end{equation} Now we can run the agglomerative A-Ward$_{p\beta}$ algorithm as follows:\\ \textbf{A-Ward$_{p\beta}$ agglomerative algorithm} \begin{enumerate} \setlength{\itemsep}{-1pt} \item \textit{Initial setting}. Take the values of $p$ and $\beta$ used in the \textit{imwk}-means$_{p\beta}$ algorithm and start from the clustering $S$, centroids $C$ and weights $w_{kv}$ provided by \textit{imwk}-means$_{p\beta}$. \item \textit{Merge clusters}. Find the two nearest clusters $\{S_a, S_b\} \subseteq S$ with respect to (\ref{Eq:MW_Wardpb}). Merge $S_a$ and $S_b$, thus creating a new cluster $S_{ab}$. Remove all references to $S_a$, $S_b$, $c_a$, and $c_b$. \item \textit{Centroid update}. Set the centroid of $S_{ab}$ to the component-wise Minkowski centre of $y_i \in S_{ab}$. \item \textit{Weight update}. Using (\ref{Eq:MWK_Weight}), compute weights $w_{kv}$ for $k=1, 2, ..., K$ and $v=1, 2, ..., V$. \item \textit{Stop condition}. Reduce $K$ by 1. If $K>1$ or if $K$ is still greater than the desired number of clusters, go back to Step 2. \end{enumerate} \subsection{Validation of the A-Ward$_{p\beta}$ algorithm} Analogously to our previous simulation studies \cite{de2016applying,de2015feature}, we first found a set of partitions, each corresponding to a different combination of values of $p$ and $\beta$. The set of all possible values of $p$ and $\beta$ was modelled using a grid of $p$ and $\beta$ values varying from $1.1$ to $5.0$ with a step of $0.1$, as in \cite{cordeiro2011minkowski}. We obtained the results for Ward$_p$ by running it with $p=1.1, 1.2, ..., 5.0$ and selecting the clustering with the highest ARI in relation to the known truth.
Similarly, the results under Ward$_{p\beta}$ are given with respect to the clusterings with the highest ARI in relation to the known truth. These settings give us an indication of the best possible results we could obtain if we were able to estimate the best possible values of the exponents. \begin{table} \caption{The best possible average cluster recovery, in terms of ARI, provided by Ward$_p$ and A-Ward$_{p\beta}$. The ARI's standard deviation and the pre-selected number of clusters, $K^*$, found at the anomalous pattern initialisation step of A-Ward$_{p\beta}$ are also indicated.} \begin{center} \tabcolsep=0.11cm \begin{tabular}{lcccccccccccccccccc} &\multicolumn{2}{c}{Ward$_p$}&&\multicolumn{5}{c}{A-Ward$_{p \beta}$}\\ \cline{5-9} &&&&\multicolumn{2}{c}{ARI}&&\multicolumn{2}{c}{$K^*$}\\ \cline{2-3} \cline{5-6} \cline{8-9} &avg&sd&&avg&sd&&avg&sd\\ 1000x6-3&0.6568&0.154&&0.7314&0.135&&18.45&3.220\\ 1000x6-3 +3NF&0.3193&0.249&&0.6348&0.195&&16.20&3.650\\ 1000x6-3 50\%N&0.2831&0.163&&0.4851&0.190&&16.50&3.502\\ \hline 1000x12-6&0.7412&0.148&&0.8066&0.121&&21.25&4.253\\ 1000x12-6 +6NF&0.3440&0.212&&0.7467&0.161&&15.90&2.532\\ 1000x12-6 50\%N&0.2535&0.191&&0.6138&0.147&&17.10&3.655\\ \hline 1000x20-10&0.9119&0.035&&0.9564&0.021&&22.20&5.988\\ 1000x20-10 +10NF&0.4638&0.098&&0.9258&0.025&&27.20&6.118\\ 1000x20-10 50\%N&0.2021&0.096&&0.8440&0.042&&23.25&4.833\\ \end{tabular} \end{center} \label{Tab:GMM_Baseline} \end{table} Table \ref{Tab:GMM_Baseline} shows that the best possible average ARI of Ward$_p$ and A-Ward$_{p\beta}$ decreases when noise is added to the data sets, but not as much as it decreases in the case of traditional Ward (see Table \ref{Tab:GMM_Baseline_Ward_TTWard}). A-Ward$_{p\beta}$ is particularly impressive at the largest structure configuration, 1000x20-10. When 10 noise features are added to data sets (configuration 1000x20-10 +10NF), the average ARI obtained by Ward falls from $0.8998$ to $0.2418$. 
If instead of adding 10 noise features we substitute 50\% of the cluster-specific data with noise (configuration 1000x20-10 50\%N), the ARI falls even further, to $0.1360$. That is a decrease by a factor of more than six. Ward$_p$ also presents a considerable fall in ARI in this scenario. In contrast, the accuracy decrease of A-Ward$_{p\beta}$ is only about $0.03$ when 10 noise features are added to the data. Furthermore, the average ARI obtained with A-Ward$_{p\beta}$ over the data sets 1000x20-10 +10NF is nearly twice that of Ward$_p$, and nearly four times that of Ward. The experiments with the data sets 1000x20-10 50\%N show a very similar trend. The average ARI obtained by A-Ward$_{p\beta}$ is about four times higher than that of Ward$_p$, and about six times higher than that of Ward. Thus, in an ideal situation of known best $p$ and $\beta$, A-Ward$_{p\beta}$ is capable of obtaining clusterings that are much superior to those yielded by Ward and Ward$_p$. \subsection{Estimating the optimal values of the exponents $p$ and $\beta$} \label{Sec:Estimating_pb} To find good values for $p$ and $\beta$ in an unsupervised situation, we opted for the Silhouette width cluster validity index \cite{rousseeuw1987silhouettes}, which proved successful in the literature \cite{arbelaitz2012extensive} as well as in our previous experiments \cite{de2016applying,de2015feature,de2015recovering}. The Silhouette width of a partition $S$ is the average Silhouette width of the entities $y_i \in Y$, defined as follows: \begin{equation} \label{Eq:Silhouette} Sil(y_i) = \frac{b(y_i)-a(y_i)}{\max\{a(y_i), b(y_i)\}}, \end{equation} where $a(y_i)$ is the average dissimilarity of $y_i \in S_k$ to all other $y_j \in S_k$, and $b(y_i)$ is the minimum, over all clusters $S_q \in S$ to which $y_i$ is not assigned, of the average dissimilarity of $y_i$ to $y_j \in S_q$, $q\not=k$. Therefore, $-1\leq Sil(y_i) \leq 1$.
A $Sil(y_i)$ value near zero indicates that $y_i$ could be assigned to another cluster without much damage to either cluster cohesion or separation. A negative $Sil(y_i)$ suggests that $y_i$'s cluster assignment is damaging to the cluster cohesion and separation, whereas a $Sil(y_i)$ closer to one means the opposite. We can then quantify the validity of the whole clustering $S$ by the Silhouette index, defined as $Sil=\frac{1}{N}\sum_{y_i \in Y} Sil(y_i)$. Table \ref{Tab:GMM_Wardpb} reports the average ARI and standard deviations of A-Ward$_{p\beta}$, obtained with the estimated values of $p$ and $\beta$, for each of the nine parameter configurations. The exponents $p$ and $\beta$ have been estimated as those corresponding to the highest values of the average Silhouette width (\ref{Eq:Silhouette}). We have experimented with the Silhouette width validity index measured using the squared Euclidean, Manhattan and Minkowski distances. The exponent of the latter was set to the same value of $p$ that was used in A-Ward$_{p\beta}$. \begin{table}\small \caption{Average ARI and its standard deviations for clustering solutions found using A-Ward$_{p\beta}$. The best possible results for this algorithm are presented under the column \textit{Best}. Under \textit{Silhouette}, we present the results for $p$ and $\beta$ estimated using this cluster validity index, with either the squared Euclidean, Manhattan, or Minkowski distance. In the latter case, the Minkowski exponent was set to the same value of $p$ that was used in A-Ward$_{p\beta}$.} \begin{center} \tabcolsep=0.11cm \begin{tabular}{lccccccccccc} &&&&\multicolumn{8}{c}{Silhouette}\\ \cline{5-12} &\multicolumn{2}{c}{Best}&&\multicolumn{2}{c}{sq. Euclidean}&&\multicolumn{2}{c}{Manhattan}&&\multicolumn{2}{c}{Minkowski}\\ \cline{2-3} \cline{5-6} \cline{8-9} \cline{11-12} &avg&sd&&avg&sd&&avg&sd&&avg&sd\\ 1000x6-3&0.7314&0.135&&0.6476&0.189&&0.6351&0.193&&0.6706&0.170\\ 1000x6-3 +3NF&0.6348&0.195&&0.1785&0.269&&0.3475&0.299&&0.1838&0.289\\ 1000x6-3 50\%N&0.4851&0.190&&0.1285&0.219&&0.1715&0.243&&0.1026&0.199\\ \hline 1000x12-6&0.8066&0.121&&0.7109&0.178&&0.7035&0.183&&0.7200&0.185\\ 1000x12-6 +6NF&0.7467&0.161&&0.4693&0.237&&0.6279&0.236&&0.5818&0.232\\ 1000x12-6 50\%N&0.6138&0.147&&0.2596&0.213&&0.2937&0.237&&0.2592&0.237\\ \hline 1000x20-10&0.9564&0.021&&0.9254&0.035&&0.9216&0.037&&0.9185&0.036\\ 1000x20-10 +10NF&0.9258&0.025&&0.8585&0.076&&0.8849&0.052&&0.8732&0.044\\ 1000x20-10 50\%N&0.8440&0.042&&0.5122&0.211&&0.7271&0.096&&0.6363&0.195\\ \end{tabular} \end{center} \label{Tab:GMM_Wardpb} \end{table} Table \ref{Tab:GMM_Wardpb} replicates the best possible average ARI values of A-Ward$_{p\beta}$ from Table \ref{Tab:GMM_Baseline}. The results reported in Table \ref{Tab:GMM_Wardpb} show some interesting patterns. Probably the most striking of them is that all the average ARI values obtained by A-Ward$_{p\beta}$ using the estimated values of $p$ and $\beta$ are much better than the average ARI values of the conventional Ward shown in Table \ref{Tab:GMM_Baseline_Ward_TTWard}. The results obtained by A-Ward$_{p\beta}$ are also superior to the best possible results of Ward$_p$ on a number of occasions. This is particularly true for the experiments carried out at greater numbers of clusters: 1000x12-6 +6NF, 1000x12-6 50\%N, 1000x20-10 +10NF, and 1000x20-10 50\%N. It should be pointed out that, in these experiments, using the Manhattan distance to calculate the Silhouette width index leads to better cluster recovery results overall. It would be fair to say that the results provided by A-Ward$_{p\beta}$, with the exponents $p$ and $\beta$ estimated using the Silhouette cluster validity index, are promising indeed.
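As an illustration of how the Silhouette-based selection works, the sketch below implements $Sil(y_i)$ and its average for a given partition under a Minkowski-$p$ dissimilarity ($p=1$ gives the Manhattan case). In the grid search one would run A-Ward$_{p\beta}$ for each $(p, \beta)$ pair and keep the pair whose partition maximises this score. The function name, the singleton convention $Sil(y_i)=0$, and the toy data are our assumptions.

```python
import numpy as np

def silhouette_width(Y, labels, p=1.0):
    """Average Silhouette width, Sil(y_i) = (b - a) / max(a, b),
    computed with the Minkowski-p distance (p=1 is Manhattan).
    Singletons get Sil = 0 here, a common convention."""
    Y = np.asarray(Y, dtype=float)
    clusters = np.unique(labels)
    scores = []
    for i in range(len(Y)):
        d = (np.abs(Y - Y[i]) ** p).sum(axis=1) ** (1.0 / p)
        same = labels == labels[i]
        if same.sum() < 2:
            scores.append(0.0)
            continue
        a = d[same].sum() / (same.sum() - 1)   # mean within-cluster dissimilarity
        b = min(d[labels == k].mean()          # closest of the other clusters
                for k in clusters if k != labels[i])
        scores.append((b - a) / max(a, b))
    return float(np.mean(scores))
```

For two well-separated groups, the correct two-cluster labelling scores close to one, while a random labelling scores near zero, which is what makes the index usable for picking $p$ and $\beta$ without the known truth.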
\section{Conclusion} This paper makes two novel contributions to hierarchical clustering. First, we introduced an initialisation method, A-Ward, for hierarchical clustering algorithms. This method generates initial partitions with a sufficiently large number of clusters. Thus, the cluster merging process begins from this partition rather than from a trivial partition composed solely of singletons. The anomalous pattern initialisation method can substantially reduce the time a hierarchical clustering algorithm takes to complete without negatively impacting its cluster recovery ability. Second, we introduced A-Ward$_{p\beta}$, a novel hierarchical clustering algorithm which can be viewed as an extension of the popular Ward algorithm. Ward$_{p\beta}$ applies a feature-weighted version of the Minkowski distance, making it able to detect clusters with shapes other than spherical. The feature weights are cluster-specific. They follow the intuitive idea that the relevance of a feature at a particular cluster is inversely proportional to its dispersion within that cluster. Thus, a feature with a low dispersion within a certain cluster has a higher degree of relevance than a feature with a high dispersion. The computation process of A-Ward$_{p\beta}$ incorporates this concept via the use of cluster-specific feature weights. The new algorithm is initialised with our anomalous pattern identification method. We empirically validated the anomalous pattern initialisation method in the framework of both Ward and Ward$_{p\beta}$ by running a number of simulations with synthetic data sets. We experimented with numerous data sets containing Gaussian clusters, with and without noise added to them. 
In contrast to our previous experiments, here noise has been added in two different ways: (i) each data set was supplemented with features composed entirely of uniform random values, the number of features added being equal to half the number of original features; (ii) cluster-specific noise was generated by substituting 50\% of the cluster-specific data fragments with uniform random values. In our experiments we compared the Ward, A-Ward, Ward$_p$ and A-Ward$_{p\beta}$ algorithms in terms of cluster recovery. To do so, we measured the average Adjusted Rand Index of the clustering solutions found by these algorithms in relation to the known ground truth. Our main conclusion is that A-Ward$_{p\beta}$ is capable of good cluster recovery in difficult practical situations. It produces results superior to those of Ward and Ward$_p$, especially when data sets are affected by the presence of noise features. This is in fact the case for most real-world data. Our future research will investigate other methods for the estimation of $p$ and $\beta$, as well as further advances on the problem of estimating the true number of clusters using both divisive and agglomerative hierarchical clustering algorithms.
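The noise scheme (i) described above and the ARI-based evaluation can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the toy data set, cluster sizes and the use of scikit-learn's Ward-linkage clustering are assumptions made for the example.

```python
# Illustrative sketch of noise scheme (i): append uniformly random noise
# features numbering half of the original features, then score cluster
# recovery against the ground truth with the Adjusted Rand Index.
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(1)
# Two Gaussian clusters of 60 points each in 6 dimensions, with known labels.
X = np.vstack([rng.normal(0, 1, (60, 6)), rng.normal(5, 1, (60, 6))])
y_true = np.repeat([0, 1], 60)

# Scheme (i): add 6 // 2 = 3 features of uniform random values over X's range.
noise = rng.uniform(X.min(), X.max(), size=(X.shape[0], X.shape[1] // 2))
X_noisy = np.hstack([X, noise])

labels = AgglomerativeClustering(n_clusters=2, linkage="ward").fit_predict(X_noisy)
ari = adjusted_rand_score(y_true, labels)  # 1.0 would mean perfect recovery
```

Averaging such ARI values over many generated data sets gives the figures reported in the tables above.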
\section{Introduction} In a series of papers \cite{vitt56, vitt57, vitt62, vitt65} William Leavitt studied algebras that are now denoted by $L_K(n,n+k)$ and have been coined Leavitt algebras. Let $X=(x_{ij})$ and $Y=(y_{ji})$ be $(n+k)\times n$ and $n\times (n+k)$ matrices consisting of symbols $x_{ij}$ and $y_{ji}$, respectively. Then for a field $K$, $L_K(n,n+k)$ is the unital $K$-algebra generated by all $x_{ij}$ and $y_{ji}$ subject to the relations $XY=I_{n+k}$ and $YX=I_n$. The algebra $L_K(n,n+k)$ can be described as the $K$-algebra $A$ with a universal left $A$-module isomorphism $A^n\rightarrow A^{n+k}$, cf. \cite[second paragraph on p. 35]{bergman74}. (Unweighted) Leavitt path algebras are algebras associated to directed graphs. They were introduced by G. Abrams and G. Aranda Pino in 2005 \cite{aap05} and independently by P. Ara, M. Moreno and E. Pardo in 2007 \cite{Ara_Moreno_Pardo}. For the directed graph \[ \xymatrix{ & \bullet\ar@{.}@(l,d) \ar@(ur,dr)^{e^{(1)}} \ar@(r,d)^{e^{(2)}} \ar@(dr,dl)^{e^{(3)}} \ar@(l,u)^{e^{(k+1)}}& } \] with one vertex and $k+1$ loops one recovers the Leavitt algebra $L_K(1,k+1)$. The definition and the development of the theory were inspired on the one hand by Leavitt's construction of $L_K(1,k+1)$ and on the other hand by the Cuntz algebras $\mathcal{O}_n$ \cite{cuntz1} and the Cuntz-Krieger algebras in $C^*$-algebra theory \cite{raeburn}. The Cuntz algebras and later Cuntz-Krieger type $C^*$-algebras revolutionised $C^*$-theory, leading ultimately to the astounding Kirchberg-Phillips classification theorem~\cite{phillips}. The Leavitt path algebras have created the same type of stir in the algebraic community. There have been several attempts to introduce a generalisation of the Leavitt path algebras which would cover the algebras $L_K(n,n+k),~n\geq 2$ as well. In 2013, R. Hazrat \cite{hazrat13} introduced weighted Leavitt path algebras. These are algebras associated to weighted graphs. 
For the weighted graph \[ \xymatrix{ & \bullet \ar@{.}@(l,d) \ar@(ur,dr)^{e^{(1)},n} \ar@(r,d)^{e^{(2)},n} \ar@(dr,dl)^{e^{(3)},n} \ar@(l,u)^{e^{(n+k)},n}& } \] with one vertex and $n+k$ loops of weight $n$ one recovers the Leavitt algebra $L_K(n,n+k)$. If the weights of all the edges are $1$, then the weighted Leavitt path algebras reduce to the unweighted Leavitt path algebras. What are the new examples in the class of weighted Leavitt path algebras? In \cite{hazrat-preusser} it was shown that any simple or graded simple weighted Leavitt path algebra is isomorphic to an unweighted Leavitt path algebra. In \cite{preusser} and \cite{preusser1} it was shown that any finite-dimensional or Noetherian weighted Leavitt path algebra is isomorphic to an unweighted Leavitt path algebra. Furthermore, graph-theoretic criteria that are necessary and sufficient for $L_K(E,w)$ to be finite-dimensional or Noetherian were found (see \cite[Theorems 25 and 52]{preusser1}). On the other hand, it was shown in \cite[Corollary 16]{preusser-1} that the class of weighted Leavitt path algebras contains infinitely many domains which are neither isomorphic to an unweighted Leavitt path algebra nor to a Leavitt algebra $L_K(n,n+k)$. As examples consider the weighted graphs \[ (E,w):~\xymatrix@C+15pt{ \bullet& \bullet\ar[l]_{1}\ar[r]^{2}& \bullet},\hspace{0.7cm}(E',w'):~\xymatrix@C+15pt{ \bullet& \bullet\ar@/_1.7pc/[l]_{1}\ar@/^1.7pc/[l]^{2}} \hspace{0.5cm}\text{ and }\hspace{0.5cm} (E'',w''):\quad\xymatrix{ \bullet\ar@(ur,ul)_{1}\ar@(dr,dl)^{2}}~~ \] where a number above or below an edge indicates the weight of that edge. 
In \cite[Example 40]{preusser} it was shown that $L_K(E,w)\cong L_K(F)$ where $F$ is the directed graph \[ F: ~\vcenter{\vbox{ \xymatrix@C+15pt{ &\bullet\ar[d]&\\\bullet& \bullet\ar[l]\ar[r]& \bullet.} }} \] In \cite[Example 21]{preusser-1} it was shown that $L_K(E',w')\cong L_K(F')$ where $F'$ is the directed graph \vspace{0.7cm} \[ F':~ \xymatrix@C+15pt{\bullet& \bullet\ar@/^1.7pc/[l]\ar@/_1.7pc/[l]\ar@/^1.7pc/[r]& \bullet.\ar@/^1.7pc/[l]} \] $~$\\ But it remained unclear whether $L_K(E'',w'')$ is isomorphic to an unweighted Leavitt path algebra. It will follow from the results of this paper that $L_K(E'',w'')$ cannot be isomorphic to an unweighted Leavitt path algebra. In this paper we obtain a graph-theoretic criterion that is necessary and sufficient for $L_K(E,w)$ to be isomorphic to an unweighted Leavitt path algebra (Condition (LPA), cf. Definition \ref{defLPA}). Moreover, we prove that if $L_K(E,w)$ is Artinian, or von Neumann regular, or has finite Gelfand-Kirillov dimension, then $L_K(E,w)$ is isomorphic to an unweighted Leavitt path algebra. The rest of the paper is organised as follows. In Section 2 we recall some standard notation which is used throughout the paper. In Section 3 we recall the definitions of the unweighted and weighted Leavitt path algebras. In Section 4 we introduce Condition (LPA). In Section 5 we prove that if $(E,w)$ is a row-finite weighted graph that satisfies Condition (LPA), then $L_K(E,w)$ is isomorphic to an unweighted Leavitt path algebra. In Section 6 we prove that if $(E,w)$ is a row-finite weighted graph that does not satisfy Condition (LPA), then $L_K(E,w)$ is not isomorphic to an unweighted Leavitt path algebra. Moreover, we prove that if $L_K(E,w)$ is Artinian, or von Neumann regular, or has finite Gelfand-Kirillov dimension, then $L_K(E,w)$ is isomorphic to an unweighted Leavitt path algebra. 
We also prove again that if $L_K(E,w)$ is locally finite or Noetherian, then $L_K(E,w)$ is isomorphic to an unweighted Leavitt path algebra (this was already shown in \cite{preusser1}, but that paper was never published in a journal). In Section 7 we summarise the main results of this paper. \section{Notation} Throughout the paper $K$ denotes a field. By a $K$-algebra we mean an associative (but not necessarily commutative or unital) $K$-algebra. By an ideal we mean a two-sided ideal. $\mathbb{N}$ denotes the set of positive integers, $\mathbb{N}_0$ the set of nonnegative integers, $\mathbb{Z}$ the set of integers and $\mathbb{R}_+$ the set of positive real numbers. \section{Unweighted and weighted Leavitt path algebras} \begin{definition}\label{defdg} A {\it (directed) graph} is a quadruple $E=(E^0,E^1,s,r)$ where $E^0$ and $E^1$ are sets and $s,r:E^1\rightarrow E^0$ are maps. The elements of $E^0$ are called {\it vertices} and the elements of $E^1$ {\it edges}. If $e$ is an edge, then $s(e)$ is called its {\it source} and $r(e)$ its {\it range}. \end{definition} \begin{remark}$~$\\ \vspace{-0.6cm} \begin{enumerate}[(a)] \item Let $E$ be a graph, $v\in E^0$ a vertex and $e\in E^1$ an edge. Then we say that $v$ {\it emits} $e$ if $s(e)=v$ and $v$ {\it receives} $e$ if $r(e)=v$. \item In this article all graphs are assumed to be row-finite. Recall that a graph $E=(E^0,E^1,s,r)$ is called {\it row-finite} if $s^{-1}(v)$ is a finite set for any vertex $v$. \end{enumerate} \end{remark} \begin{definition}\label{deflpa} Let $E$ be a graph. 
The $K$-algebra $L_K(E)$ presented by the generating set $\{v,e,e^*\mid v\in E^0,e\in E^1\}$ and the relations \begin{enumerate}[(i)] \item $uv=\delta_{uv}u\quad(u,v\in E^0)$, \medskip \item $s(e)e=e=er(e),~r(e)e^*=e^*=e^*s(e)\quad(e\in E^1)$, \medskip \item $e^*f= \delta_{ef}r(e)\quad(v\in E^0,e,f\in s^{-1}(v))$ and \medskip \item $\sum\limits_{e\in s^{-1}(v)}ee^*= v\quad(v\in E^0,s^{-1}(v)\neq\emptyset)$ \end{enumerate} is called the {\it (unweighted) Leavitt path algebra} of $E$. \end{definition} \begin{remark}\label{remunproplpa} Let $E$ be a graph and $A$ a $K$-algebra that contains a set $X=\{\alpha_v, \beta_{e}, \gamma_{e}\mid v\in E^0, e\in E^1\}$ such that\\ \vspace{-0.1cm} \begin{enumerate}[(i)] \item the $\alpha_v$'s are pairwise orthogonal idempotents, \medskip \item $\alpha_{s(e)}\beta_{e}=\beta_{e}=\beta_{e}\alpha_{r(e)},~\alpha_{r(e)}\gamma_{e}=\gamma_{e}=\gamma_{e}\alpha_{s(e)}\quad(e\in E^1)$, \medskip \item $\gamma_{e}\beta_{f}= \delta_{ef}\alpha_{r(e)}\quad(v\in E^0,e,f\in s^{-1}(v))$ and \medskip \item $\sum\limits_{e\in s^{-1}(v)}\beta_{e}\gamma_{e}= \alpha_{v}\quad(v\in E^0,s^{-1}(v)\neq \emptyset)$. \end{enumerate} We call $X$ an {\it $E$-family} in $A$. By the relations defining $L_K(E)$, there exists a unique $K$-algebra homomorphism $\phi: L_K(E)\rightarrow A$ such that $\phi(v)=\alpha_v$, $\phi(e)=\beta_{e}$ and $\phi(e^*)=\gamma_{e}$ for all $v\in E^0$ and $e\in E^1$. We will refer to this as the {\it Universal Property of $L_K(E)$}. \end{remark} \begin{definition} A {\it weighted graph} is a pair $(E,w)$ where $E$ is a graph and $w:E^1\rightarrow \mathbb{N}$ is a map. If $e\in E^1$, then $w(e)$ is called the {\it weight} of $e$. For a vertex $v\in E^0$ we set $w(v):=\max\{w(e)\mid e\in s^{-1}(v)\}$ with the convention $\max \emptyset=0$. \end{definition} \begin{definition}\label{def3} Let $(E,w)$ be a weighted graph. 
The $K$-algebra $L_K(E,w)$ presented by the generating set $\{v,e_i,e_i^*\mid v\in E^0, e\in E^1, 1\leq i\leq w(e)\}$ and the relations \begin{enumerate}[(i)] \item $uv=\delta_{uv}u\quad(u,v\in E^0)$, \medskip \item $s(e)e_i=e_i=e_ir(e),~r(e)e_i^*=e_i^*=e_i^*s(e)\quad(e\in E^1, 1\leq i\leq w(e))$, \medskip \item $\sum\limits_{1\leq i\leq w(v)}e_i^*f_i= \delta_{ef}r(e)\quad(v\in E^0,e,f\in s^{-1}(v))$ and \medskip \item $\sum\limits_{e\in s^{-1}(v)}e_ie_j^*= \delta_{ij}v\quad(v\in E^0,1\leq i, j\leq w(v))$ \end{enumerate} is called the {\it weighted Leavitt path algebra} of $(E,w)$. In relations (iii) and (iv) we set $e_i$ and $e_i^*$ to zero whenever $i > w(e)$. \end{definition} \begin{example}\label{exex1} If $(E,w)$ is a weighted graph such that $w(e)=1$ for all $e \in E^{1}$, then $L_K(E,w)$ is isomorphic to the unweighted Leavitt path algebra $L_K(E)$. \end{example} \begin{example}\label{wlpapp} Let $n\geq 1$ and $k\geq 0$. Let $(E,w)$ be the weighted graph \begin{equation*} \xymatrix{ & v \ar@{.}@(l,d) \ar@(ur,dr)^{e^{(1)},n} \ar@(r,d)^{e^{(2)},n} \ar@(dr,dl)^{e^{(3)},n} \ar@(l,u)^{e^{(n+k)},n}& } \end{equation*} with one vertex $v$ and $n+k$ edges $e^{(1)},\dots,e^{(n+k)}$, each of which has weight $n$. Then $L_K(E,w)$ is isomorphic to the Leavitt algebra $L_K(n,n+k)$; for details see \cite[Example 5.5]{hazrat13} or \cite[Example 4]{hazrat-preusser}. 
\end{example} \begin{remark}\label{remunpropwlpa} Let $(E,w)$ be a weighted graph and $A$ a $K$-algebra that contains a set $X=\{\alpha_v, \beta_{e,i}, \gamma_{e,i}\mid v\in E^0, e\in E^1, 1\leq i\leq w(e)\}$ such that\\ \vspace{-0.1cm} \begin{enumerate}[(i)] \item the $\alpha_v$'s are pairwise orthogonal idempotents, \medskip \item $\alpha_{s(e)}\beta_{e,i}=\beta_{e,i}=\beta_{e,i}\alpha_{r(e)},~\alpha_{r(e)}\gamma_{e,i}=\gamma_{e,i}=\gamma_{e,i}\alpha_{s(e)}\quad(e\in E^1, 1\leq i\leq w(e))$, \medskip \item $\sum\limits_{1\leq i\leq w(v)}\gamma_{e,i}\beta_{f,i}= \delta_{ef}\alpha_{r(e)}\quad(v\in E^0,e,f\in s^{-1}(v))$ and \medskip \item $\sum\limits_{e\in s^{-1}(v)}\beta_{e,i}\gamma_{e,j}= \delta_{ij}\alpha_{v}\quad(v\in E^0,1\leq i,j\leq w(v))$. \end{enumerate} In relations (iii) and (iv) we set $\beta_{e,i}$ and $\gamma_{e,i}$ to zero whenever $i > w(e)$. We call $X$ an {\it $(E,w)$-family} in $A$. By the relations defining $L_K(E,w)$, there exists a unique $K$-algebra homomorphism $\phi: L_K(E,w)\rightarrow A$ such that $\phi(v)=\alpha_v$, $\phi(e_{i})=\beta_{e,i}$ and $\phi(e^*_{i})=\gamma_{e,i}$ for all $v\in E^0$, $e\in E^1$ and $1\leq i\leq w(e)$. We will refer to this as the {\it Universal Property of $L_K(E,w)$}. \end{remark} \begin{remark} Let $(E,w)$ be a weighted graph. Then $L_K(E,w)$ has the properties (a)-(d) below; for details see \cite[Proposition 5.7]{hazrat13}. \begin{enumerate}[(a)] \item If $E^0$ is a finite set, then $L_K(E,w)$ is a unital ring (with $\sum\limits_{v\in E^0} v$ as multiplicative identity). \item $L_K(E,w)$ has a set of local units, namely the set of all finite sums of distinct elements of $E^0$. Recall that an associative ring $R$ is said to have a {\it set of local units} $X$ in case $X$ is a set of idempotents in $R$ having the property that for each finite subset $S\subseteq R$ there exists an $x\in X$ such that $xsx=s$ for any $s\in S$. 
\item There is an involution $*$ on $L_K(E,w)$ mapping $k\mapsto k$, $v\mapsto v$, $e_i\mapsto e_i^*$ and $e_i^*\mapsto e_i$ for any $k\in K$, $v\in E^0$, $e\in E^1$ and $1\leq i\leq w(e)$. \item Set $n:=\sup\{w(e) \mid e \in E^{1}\}$. One can define a $\mathbb Z^n$-grading on $L_K(E,w)$ by setting $\deg(v):=0$, $\deg(e_i):=\epsilon_i$ and $\deg(e_i^*):=-\epsilon_i$ for any $v\in E^0$, $e \in E^{1}$ and $1\leq i\leq w(e)$. Here $\epsilon_i$ denotes the element of $\mathbb Z^n$ whose $i$-th component is $1$ and whose other components are $0$. \end{enumerate} \end{remark} \section{The Condition (LPA)} We start with a couple of definitions. \begin{definition} Let $E$ be a graph. A {\it path} is a nonempty word $p=x_1\dots x_n$ over the alphabet $E^0\cup E^1$ such that either $x_i\in E^1~(i=1,\dots,n)$ and $r(x_i)=s(x_{i+1})~(i=1,\dots,n-1)$, or $n=1$ and $x_1\in E^0$. By definition, the {\it length} $|p|$ of $p$ is $n$ in the first case and $0$ in the latter case. We set $s(p):=s(x_1)$ and $r(p):=r(x_n)$ (here we use the convention $s(v)=v=r(v)$ for any $v\in E^0$). \end{definition} \begin{definition} Let $E$ be a graph and $v\in E^0$. A {\it closed path (based at $v$)} is a path $p$ such that $|p|>0$ and $s(p)=r(p)=v$. A {\it cycle (based at $v$)} is a closed path $p=x_1\dots x_n$ based at $v$ such that $s(x_i)\neq s(x_j)$ for any $i\neq j$. \end{definition} \begin{definition}\label{deftre} Let $E$ be a graph. If $u,v\in E^0$ and there is a path $p$ in $E$ such that $s(p)=u$ and $r(p)=v$, then we write $u\geq v$. If $u\in E^0$, then $T(u):=\{v\in E^0 \mid u\geq v\}$ is called the {\it tree} of $u$. If $X\subseteq E^0$, we define $T(X):=\bigcup\limits_{v\in X}T(v)$. Two edges $e,f\in E^1$ are called {\it in line} if $e=f$, or $r(e)\geq s(f)$, or $r(f)\geq s(e)$. \end{definition} \begin{definition} Let $(E,w)$ be a weighted graph. An edge $e\in E^{1}$ is called {\it unweighted} if $w(e)=1$ and {\it weighted} if $w(e)>1$. 
The subset of $E^1$ consisting of all unweighted edges is denoted by $E_{uw}^1$ and the subset consisting of all weighted edges by $E_{w}^1$. \end{definition} Now we can introduce Condition (LPA). \begin{definition}\label{defLPA} We say that a weighted graph $(E,w)$ {\it satisfies Condition (LPA)} if the following conditions hold: \begin{enumerate}[({LPA}1)] \item Any vertex $v\in E^0$ emits at most one weighted edge. \item Any vertex $v\in T(r(E^1_w))$ emits at most one edge. \item If two weighted edges $e,f\in E^1_w$ are not in line, then $T(r(e))\cap T(r(f))=\emptyset$. \item If $e\in E^1_w$ and $c$ is a cycle based at some vertex $v\in T(r(e))$, then $e$ belongs to $c$. \end{enumerate} \end{definition} Each of the Conditions (LPA1)-(LPA4) in Definition \ref{defLPA} above ``forbids'' a certain constellation in the weighted graph $(E,w)$. The pictures below illustrate these forbidden constellations. Symbols above or below edges indicate the weight. A dotted arrow stands for a path. \begin{enumerate}[({LPA}1)] \item \[\xymatrix@R-1.5pc{& \bullet\\\bullet \ar_{>1}[dr] \ar^{>1}[ur]& \\& \bullet.}\] \item \[\xymatrix@R-1.5pc{& & & \bullet\\\bullet \ar^{>1}[r] & \bullet \ar@{.>}[r] & \bullet \ar[dr] \ar[ur]& \\& & & \bullet.}\] \item \[\xymatrix{\bullet \ar^{>1}[r] & \bullet \ar@{.>}[r] & \bullet &\bullet \ar@{.>}[l] & \bullet. \ar_{>1}[l]}\] \item \[\xymatrix@R-1pc@C-0.5pc{ &&&\bullet \ar[r]&\bullet \ar[rd]&\\ \bullet \ar^{>1}[r] & \bullet \ar@{.>}[r] &\bullet \ar[ru]&&&\bullet. \ar[ld]\\ &&&\bullet \ar[lu]&\bullet \ar@{.}[l]} \] \end{enumerate} \begin{remark} Conditions (LPA1), (LPA2) and (LPA3) already appeared in \cite{preusser} and \cite{preusser1}. They were independently found by N. T. Phuc. Condition (LPA4) is new; it is slightly weaker than Condition (iv) in \cite[Definition 19]{preusser1}. \end{remark} \section{Presence of Condition (LPA)} \begin{lemma}\label{lemLPA} Let $(E,w)$ be a weighted graph that satisfies Condition (LPA). 
If $e$ and $f$ are distinct edges such that $s(e), s(f)\in T(r(E^1_w))$, then $r(e)\neq r(f)$. \end{lemma} \begin{proof} Let $e,f\in E^1$ such that $s(e), s(f)\in T(r(E^1_w))$ and $r(e)=r(f)$. We will show that $e=f$. Since $s(e), s(f)\in T(r(E^1_w))$, there are $g,h\in E^1_w$ such that $s(e)\in T(r(g))$ and $s(f)\in T(r(h))$. It follows that $r(e)=r(f)\in T(r(g))\cap T(r(h))$. Since $(E,w)$ satisfies Condition (LPA3), $g$ and $h$ are in line. It follows that $s(e), s(f)\in T(r(g))$ or $s(e), s(f)\in T(r(h))$. W.l.o.g. assume that $s(e), s(f)\in T(r(g))$. \begin{enumerate} \item[Case 1] Assume that there is a cycle $c$ based at some vertex $v\in T(r(g))$. Since $(E,w)$ satisfies (LPA4), $g$ belongs to $c$. Write $c=\alpha^{(1)}\dots\alpha^{(n)}$ where $\alpha^{(1)},\dots,\alpha^{(n)}\in E^1$. Set $x_i:=s(\alpha^{(i)})~(1\leq i\leq n)$. Then, in view of (LPA2), we have $T(r(g))=\{x_1,\dots,x_n\}$. Moreover, each $x_i$ emits precisely one edge, namely $\alpha^{(i)}$. Since $s(e),s(f)\in T(r(g))$, we get that $s(e)=x_i$ and $s(f)=x_j$ for some $1\leq i,j\leq n$. Hence $e=\alpha^{(i)}$ and $f=\alpha^{(j)}$. Since $r(e)=r(f)$, it follows that $i=j$ and hence $e=f$. \medskip \item[Case 2] Assume that no cycle is based at a vertex in $T(r(g))$. Since $s(e),s(f)\in T(r(g))$, there are paths $p$ and $q$ such that $s(p)=r(g)=s(q)$, $r(p)=s(e)$ and $r(q)=s(f)$. Clearly $pe$ and $qf$ are paths starting at $r(g)$ and ending at $r(e)=r(f)$. It follows from (LPA2) and the assumption that no cycle is based at a vertex in $T(r(g))$, that $pe=qf$. Hence $e=f$. \end{enumerate} \end{proof} Recall that if $E$ is a graph, then a vertex $v$ that does not emit any edges is called a {\it sink}. \begin{lemma}\label{lemkey1} Let $(E,w)$ be a weighted graph that satisfies Condition (LPA). 
Then there is a weighted graph $(\tilde E,\tilde w)$ such that the ranges of the weighted edges in $(\tilde E,\tilde w)$ are sinks, no vertex in $(\tilde E,\tilde w)$ emits or receives two distinct weighted edges, and $L_K(\tilde E,\tilde w)\cong L_K(E,w)$. \end{lemma} \begin{proof} Set $Z:=T(r(E^1_w))$. Define a weighted graph $(\tilde E,\tilde w)$ by $\tilde E^0=E^0$, $\tilde E^1=\tilde E^1_Z\sqcup \tilde E^1_{Z^c}$ where \[\tilde E^1_Z=\{e^{(1)},\dots,e^{(w(e))}\mid e\in E^1, s(e)\in Z\}\text{ and }\tilde E^1_{Z^c}=\{e\mid e\in E^1,s(e)\not\in Z\},\] $\tilde s(e^{(i)})=r(e)$, $\tilde r(e^{(i)})=s(e)$ and $\tilde w(e^{(i)})=1$ for any $e^{(i)}\in \tilde E^1_Z$ and $\tilde s(e)=s(e)$, $\tilde r(e)=r(e)$ and $\tilde w(e)=w(e)$ for any $e\in \tilde E^1_{Z^c}$. We have divided the rest of the proof into three parts. In Part I we show that the ranges of the weighted edges in $(\tilde E,\tilde w)$ are sinks, in Part II we show that no vertex in $(\tilde E,\tilde w)$ emits or receives two distinct weighted edges, and in Part III we show that $L_K(\tilde E,\tilde w)\cong L_K(E,w)$.\\ \\ {\bf Part I} Let $\tilde e\in \tilde E^1_w$. We will show that $\tilde r(\tilde e)$ is a sink in $(\tilde E,\tilde w)$. Clearly $\tilde e\in \tilde E^1_{Z^c}$ since all the edges in $\tilde E^1_{Z}$ have weight one in $(\tilde E,\tilde w)$. Hence there is an $e\in E^1, s(e)\not\in Z$ such that $\tilde e=e$. Clearly $w(e)=\tilde w(e)=\tilde w(\tilde e)>1$. Now suppose that there is an $\tilde f\in \tilde E^1$ such that $\tilde s(\tilde f)=\tilde r(\tilde e)$. \begin{enumerate} \item[Case 1] Assume that $\tilde f\in \tilde E^1_Z$. Then there is an $f\in E^1, s(f)\in Z$ and an $i\in\{1,\dots,w(f)\}$ such that $\tilde f=f^{(i)}$ (note that $e\neq f$, since $s(e)\not\in Z$). It follows that $r(e)=\tilde r(e)=\tilde r(\tilde e)=\tilde s(\tilde f)=\tilde s(f^{(i)})=r(f)$. Since $s(f)\in Z=T(r(E^1_w))$, there is a $g\in E^1_w$ such that $s(f)\in T(r(g))$. 
It follows that $r(f)\in T(r(e))\cap T(r(g))$. Since $(E,w)$ satisfies Condition (LPA3), we get that $e$ and $g$ are in line and hence $e=g$ or $r(e)\geq s(g)$ or $r(g)\geq s(e)$. \medskip \begin{enumerate} \item[Case 1.1] Assume that $e=g$. Since $s(f)\in T(r(g))=T(r(e))$, there is a path $p$ such that $s(p)=r(e)$ and $r(p)=s(f)$. Since $r(f)=r(e)$, we have a closed path $pf$ based at $r(e)$. That implies the existence of a cycle $c$ based at $r(e)$. Since $(E,w)$ satisfies (LPA4), $e$ belongs to $c$ and therefore $s(e)\in T(r(e))$. Now we get the contradiction $s(e)\in T(r(e))\subseteq T(r(E^1_w))=Z$. \medskip \item[Case 1.2] Assume that $r(e)\geq s(g)$. Then there is a path $p$ such that $s(p)=r(e)$ and $r(p)=s(g)$. Since $s(f)\in T(r(g))$, there is a path $q$ such that $s(q)=r(g)$ and $r(q)=s(f)$. Since $r(f)=r(e)$, we have a closed path $pgqf$ based at $r(e)$. Now we can proceed as in Case 1.1 to get a contradiction. \medskip \item[Case 1.3] Assume that $r(g)\geq s(e)$. Then we get the contradiction $s(e)\in T(r(g))\subseteq T(r(E^1_w))=Z$. \end{enumerate} \medskip \item[Case 2] Assume that $\tilde f\in \tilde E^1_{Z^c}$. Then there is an $f\in E^1, s(f)\not\in Z$ such that $\tilde f=f$. It follows that $r(e)=\tilde r(e)=\tilde r(\tilde e)=\tilde s(\tilde f)=\tilde s(f)=s(f)$. Hence we get the contradiction $s(f)=r(e)\in T(r(E^1_w))\subseteq Z$. \end{enumerate} Thus the ranges of the weighted edges in $(\tilde E,\tilde w)$ are sinks. \\ \\ {\bf Part II} Assume that there are distinct $\tilde e,\tilde f\in \tilde E_w^1$ such that $\tilde s(\tilde e)=\tilde s(\tilde f)$. Clearly $\tilde e,\tilde f\in \tilde E^1_{Z^c}$ since all the edges in $\tilde E^1_{Z}$ have weight one in $(\tilde E,\tilde w)$. Hence there are distinct $e,f\in E^1, s(e),s(f)\not\in Z$ such that $\tilde e=e$ and $\tilde f=f$. 
It follows that $s(e)=\tilde s(e)=\tilde s(\tilde e)=\tilde s(\tilde f)=\tilde s(f)=s(f)$ which contradicts the assumption that $(E,w)$ satisfies Condition (LPA1) (note that $w(e)=\tilde w(\tilde e)>1$ and $w(f)=\tilde w(\tilde f)>1$). Thus no vertex emits two distinct weighted edges in $(\tilde E,\tilde w)$.\\ Now assume that there are distinct $\tilde e,\tilde f\in \tilde E_w^1$ such that $\tilde r(\tilde e)=\tilde r(\tilde f)$. Clearly $\tilde e,\tilde f\in \tilde E^1_{Z^c}$ since all the edges in $\tilde E^1_{Z}$ have weight one in $(\tilde E,\tilde w)$. Hence there are distinct $e,f\in E^1, s(e),s(f)\not\in Z$ such that $\tilde e=e$ and $\tilde f=f$. It follows that $r(e)=\tilde r(e)=\tilde r(\tilde e)=\tilde r(\tilde f)=\tilde r(f)=r(f)$. Since $(E,w)$ satisfies Condition (LPA3), we get that $e$ and $f$ are in line. Since $e$ and $f$ are distinct, it follows that $r(e)\geq s(f)$ or $r(f)\geq s(e)$. But in the first case we get the contradiction $s(f)\in Z$ and in the second case the contradiction $s(e)\in Z$. Thus no vertex receives two distinct weighted edges in $(\tilde E,\tilde w)$.\\ \\ {\bf Part III} It remains to show that $L_K(\tilde E,\tilde w)\cong L_K(E,w)$. Set $X:=\{v, e_i, e_i^*\mid v\in E^0, e\in E^1, 1\leq i\leq w(e)\}$ and $\tilde X:=\{\tilde v, \tilde e_i, \tilde e_i^*\mid \tilde v\in \tilde E^0, \tilde e\in \tilde E^1, 1\leq i\leq \tilde w(\tilde e)\}$. Let $K\langle X \rangle$ and $K\langle \tilde X\rangle$ be the free $K$-algebras generated by $X$ and $\tilde X$, respectively. 
Then the bijection $X\rightarrow \tilde X$ mapping \begin{alignat*}{2} v&\mapsto v&\hspace{1cm}&(v\in E^0),\\ e_i&\mapsto (e^{(i)}_{1})^*&&(e\in E^1, s(e)\in Z,1\leq i\leq w(e)),\\ e_i^*&\mapsto e^{(i)}_{1}&&(e\in E^1, s(e)\in Z,1\leq i\leq w(e)),\\ e_i&\mapsto e_i&&(e\in E^1, s(e)\not\in Z,1\leq i\leq w(e)),\\ e_i^*&\mapsto e^*_{i}&&(e\in E^1, s(e)\not\in Z,1\leq i\leq w(e)) \end{alignat*} induces an isomorphism $\phi:K\langle X \rangle\rightarrow K\langle \tilde X\rangle$. Let $I$ and $\tilde I$ be the ideals of $K\langle X \rangle$ and $K\langle \tilde X\rangle$ generated by the relations (i)-(iv) in Definition~\ref{def3}, respectively (hence $L_K(E,w)\cong K\langle X \rangle/I$ and $L_K(\tilde E, \tilde w)\cong K\langle \tilde X\rangle/\tilde I$). In order to show that $L_K(E,w)\cong L_K(\tilde E,\tilde w)$ it suffices to show that $\phi(I)=\tilde I$. Set \[A^{(i)}:=\big \{uv-\delta_{uv}u \ | \ u,v\in E^0 \big \},\] \[A^{(ii)}:=\big \{s(e)e_i-e_i,~ e_ir(e)-e_i, ~r(e)e_i^*-e_i^*,~ e_i^*s(e)-e^*_i \ | \ e\in E^{1}, 1\leq i\leq w(e)\big \},\] and for any $v\in E^0$ \[A^{(iii)}_v:=\Big\{\sum\limits_{1\leq i \leq w(v)}e_i^{*}f_i-\delta_{ef}r(e)\ | \ e,f\in s^{-1}(v)\Big\}\] and \[A^{(iv)}_v:=\Big\{\sum\limits_{e\in s^{-1}(v)}e_ie_j^*-\delta_{ij}v \ | \ 1\leq i,j\leq w(v)\Big\}.\] Then $I$ is generated by $A^{(i)}$, $A^{(ii)}$, the $A^{(iii)}_v$'s and the $A^{(iv)}_v$'s. Analogously define subsets $B^{(i)},B^{(ii)},B^{(iii)}_v~(v\in \tilde E^0),B^{(iv)}_v~(v\in \tilde E^0)$ of $K\langle \tilde X\rangle$. Then $\tilde I$ is generated by $B^{(i)}$, $B^{(ii)}$, the $B^{(iii)}_v$'s and the $B^{(iv)}_v$'s. Clearly $\phi(A^{(i)})=B^{(i)}$ and $\phi(A^{(ii)})=B^{(ii)}$. One checks easily that $\phi(A_v^{(iii)})=B_v^{(iii)}$ and $\phi(A_v^{(iv)})=B_v^{(iv)}$ if $v\not\in Z$. \\ Let now $v\in Z$ be not a sink in $(E,w)$ (if $v\in Z$ is a sink in $(E,w)$, then $A_v^{(iii)}=A_v^{(iv)}=\emptyset$). 
Then we have $s^{-1}(v)=\{e\}$ for some $e\in E^1$ since $(E,w)$ satisfies Condition (LPA2). Set $\bar v:=r(e)$. Clearly \[A^{(iii)}_v=\Big\{\sum\limits_{1\leq i\leq w(e)}e_i^{*}e_i-\bar v\Big\}\] and \[A^{(iv)}_v=\big \{e_ie_j^*-\delta_{ij}v \ | \ 1\leq i,j\leq w(e) \big \}.\] It follows from Lemma \ref{lemLPA} that $\tilde s^{-1}(\bar v)=\{e^{(1)},\dots,e^{(w(e))}\}$. Hence \[B^{(iii)}_{\bar v}=\big \{(e^{(i)}_1)^{*}e_1^{(j)}-\delta_{ij}v\ | \ 1\leq i,j\leq w(e) \big \}\] and \[B^{(iv)}_{\bar v}=\Big\{\sum\limits_{1\leq i\leq w(e)}e^{(i)}_1(e^{(i)}_1)^*-\bar v\Big\}\] Clearly $\phi(A^{(iii)}_v)=B^{(iv)}_{\bar v}$ and $\phi(A^{(iv)}_v)=B^{(iii)}_{\bar v}$. It follows from Lemma \ref{lemLPA} that the map $~\bar{}~: v\mapsto \bar v$ defines a bijection between the elements of $Z$ that are not a sink in $(E,w)$ and the elements of $Z$ that are not a sink in $(\tilde E,\tilde w)$. Hence $\phi(I)=\tilde I$ and thus $L_K(\tilde E,\tilde w)\cong L_K(E,w)$. \end{proof} \begin{example}\label{exlpa1} Consider the weighted graph \[ (E,w):\quad\xymatrix@C+15pt{ t\ar@/^1.7pc/[r]^{a,2}&u\ar@/^1.7pc/[l]^{b,1}& v\ar[l]_{c,1}\ar@(ul,ur)^{d,1}\ar@/^1.9pc/[rr]^{e,1}\ar[r]^{f,2}\ar@/_1.7pc/[r]_{g,1}&x\ar[r]^{h,1}&y\ar[r]^{k,2}&z}. \] One checks easily that $(E,w)$ satisfies Condition (LPA) (note that $T(r(E^1_w))=\{t,u,x,y,z\}$). Let $(\tilde E,\tilde w)$ be defined as in the proof of Lemma \ref{lemkey1}. Then $(\tilde E,\tilde w)$ is the weighted graph \[ (\tilde E,\tilde w):\quad\xymatrix@C+15pt{ t\ar@/_1.7pc/[r]_{b^{(1)},1}&u\ar@/_1.7pc/[l]_{a^{(1)},1}\ar@/_1.0pc/[l]^{a^{(2)},1}& v\ar[l]_{c,1}\ar@(ul,ur)^{d,1}\ar@/^1.9pc/[rr]^{e,1}\ar[r]^{f,2}\ar@/_1.7pc/[r]_{g,1}&x&y\ar[l]_{h^{(1)},1}&z\ar@/_1.7pc/[l]_{k^{(1)},1}\ar@/^1.7pc/[l]^{k^{(2)},1} }. \] There is only one weighted edge in $(\tilde E,\tilde w)$, namely $f$, and its range is a sink. The proof of Lemma \ref{lemkey1} shows that $L_K(E,w)\cong L_K(\tilde E,\tilde w)$. 
\end{example} \begin{lemma}\label{lemkey2} Let $(E,w)$ be a weighted graph such that the ranges of the weighted edges are sinks and no vertex emits or receives two distinct weighted edges. Then there is a graph $\tilde E$ such that $L_K(E,w)\cong L_K(\tilde E)$. \end{lemma} \begin{proof} If $v\in r(E^1_w)$, then there is a unique edge $g^v\in E^1_w$ such that $r(g^v)=v$ (since no vertex in $(E,w)$ receives two distinct weighted edges). Define a graph $\tilde E$ by \begin{align*} \tilde E^0&=M\sqcup N \text{ where }\\ M&=E^0\setminus r(E^1_w),\\ N&=\{v^{(1)},\dots,v^{(w(g^v))}\mid v\in r(E^1_w)\},\\ \tilde E^1&=A\sqcup B\sqcup C \sqcup D\text{ where }\\ A&=\{e\mid e\in E^1_{uw},r(e)\not\in r(E^1_w)\},\\ B&=\{e^{(1)},\dots,e^{(w(g^{r(e)}))}\mid e\in E^1_{uw}, r(e)\in r(E^1_w)\},\\ C&=\{e^{(1)}\mid e\in E^1_{w}\},\\ D&=\{e^{(2)},\dots,e^{(w(e))}\mid e\in E^1_{w}\},\\ \tilde s(e)&=s(e),~\tilde r(e)=r(e)\quad (e\in A),\\ \tilde s(e^{(i)})&=s(e),~\tilde r(e^{(i)})=r(e)^{(i)}\quad (e^{(i)}\in B),\\ \tilde s(e^{(1)})&=s(e),~\tilde r(e^{(1)})=r(e)^{(1)}\quad (e^{(1)}\in C),\\ \tilde s(e^{(i)})&=r(e)^{(i)},~\tilde r(e^{(i)})=s(e)\quad (e^{(i)}\in D), \end{align*} (note that if $e\in E^1$, then $s(e)\in E^0\setminus r(E^1_w)$ since the elements of $r(E^1_w)$ are sinks). We have divided the rest of the proof into three parts. 
In Part I we define a homomorphism $\phi:L_K(E,w)\rightarrow L_K(\tilde E)$, in Part II we define a homomorphism $\tilde\phi:L_K(\tilde E)\rightarrow L_K(E,w)$, and in Part III we show that $\phi$ and $\tilde\phi$ are inverse to each other.\\ \\ {\bf Part I} Set \begin{align*} \alpha_v&:=\begin{cases}v,&\text{ if } v\not\in r(E^1_w),\\\sum\limits_{i=1}^{w(g^v)}v^{(i)},&\text{ if }v\in r(E^1_w),\end{cases}\\ \beta_{e,i}&:=\begin{cases}e,&\text{ if } e\in E_{uw}^1,r(e)\not\in r(E^1_w),i=1,\\\sum\limits_{j=1}^{w(g^{r(e)})}e^{(j)},&\text{ if }e\in E_{uw}^1,r(e)\in r(E^1_w),i=1,\\ e^{(1)},&\text{ if }e\in E_{w}^1,i=1,\\ (e^{(i)})^*,&\text{ if }e\in E_{w}^1,i>1, \end{cases}\\ \gamma_{e,i}&:=\begin{cases}e^*,&\text{ if } e\in E_{uw}^1,r(e)\not\in r(E^1_w),i=1,\\\sum\limits_{j=1}^{w(g^{r(e)})}(e^{(j)})^*,&\text{ if }e\in E_{uw}^1,r(e)\in r(E^1_w),i=1,\\ (e^{(1)})^*,&\text{ if }e\in E_{w}^1,i=1,\\ e^{(i)},&\text{ if }e\in E_{w}^1,i>1. \end{cases} \end{align*} In order to show that $X:=\{\alpha_v,\beta_{e,i},\gamma_{e,i}\mid v\in E^0,e\in E^1, 1\leq i\leq w(e)\}$ is an $(E,w)$-family in $L_K(\tilde E)$, one has to show that the relations (i)-(iv) in Remark \ref{remunpropwlpa} are satisfied. We leave (i) and (ii) to the reader and show (iii) and (iv).\\ First we check (iii). Let $v\in E^0$ and $e,f\in s^{-1}(v)$. We have to show that $\sum\limits_{1\leq i\leq w(v)}\gamma_{e,i}\beta_{f,i}=\delta_{ef}\alpha_{r(e)}$. \begin{enumerate} \item[Case 1] Assume that $e,f\in E^1_{uw}$. \medskip \begin{enumerate} \item[Case 1.1] Assume that $r(e),r(f)\not\in r(E^1_w)$. Then \[\sum\limits_{1\leq i\leq w(v)}\gamma_{e,i}\beta_{f,i}=e^*f=\delta_{ef}\tilde r(e)=\delta_{ef}r(e)=\delta_{ef}\alpha_{r(e)}.\] \medskip \item[Case 1.2] Assume that $r(e)\not\in r(E^1_w)$ and $r(f)\in r(E^1_w)$. 
Then \[\sum\limits_{1\leq i\leq w(v)}\gamma_{e,i}\beta_{f,i}=e^*\sum\limits_{j=1}^{w(g^{r(f)})}f^{(j)}=\sum\limits_{j=1}^{w(g^{r(f)})}e^*f^{(j)}=0=\delta_{ef}\alpha_{r(e)}.\] \medskip \item[Case 1.3] Assume that $r(e)\in r(E^1_w)$ and $r(f)\not\in r(E^1_w)$. Then \[\sum\limits_{1\leq i\leq w(v)}\gamma_{e,i}\beta_{f,i}=\sum\limits_{j=1}^{w(g^{r(e)})}(e^{(j)})^*f=0=\delta_{ef}\alpha_{r(e)}.\] \medskip \item[Case 1.4] Assume that $r(e),r(f)\in r(E^1_w)$. Then \[\sum\limits_{1\leq i\leq w(v)}\gamma_{e,i}\beta_{f,i}=\sum\limits_{j=1}^{w(g^{r(e)})}(e^{(j)})^*\sum\limits_{k=1}^{w(g^{r(f)})}f^{(k)}=\sum\limits_{j=1}^{w(g^{r(e)})}\sum\limits_{k=1}^{w(g^{r(f)})}(e^{(j)})^*f^{(k)}=\delta_{ef}\sum\limits_{j=1}^{w(g^{r(e)})}r(e)^{(j)}=\delta_{ef}\alpha_{r(e)}.\] \end{enumerate} \medskip \item[Case 2] Assume that $e\in E^1_{uw}$ and $f\in E^1_w$. \medskip \begin{enumerate} \item[Case 2.1] Assume that $r(e)\not\in r(E^1_w)$. Then \[\sum\limits_{1\leq i\leq w(v)}\gamma_{e,i}\beta_{f,i}=e^*f^{(1)}=0=\delta_{ef}\alpha_{r(e)}.\] \medskip \item[Case 2.2] Assume that $r(e)\in r(E^1_w)$. Then \[\sum\limits_{1\leq i\leq w(v)}\gamma_{e,i}\beta_{f,i}=\sum\limits_{j=1}^{w(g^{r(e)})}(e^{(j)})^*f^{(1)}=0=\delta_{ef}\alpha_{r(e)}.\] \end{enumerate} \medskip \item[Case 3] Assume that $e\in E^1_{w}$ and $f\in E^1_{uw}$. This case is similar to Case 2 and therefore is omitted. \medskip \item[Case 4] Assume that $e,f\in E^1_w$. Since no vertex emits two distinct weighted edges in $(E,w)$, it follows that $e=f$ and $w(v)=w(e)$. Clearly \[\sum\limits_{1\leq i\leq w(v)}\gamma_{e,i}\beta_{f,i}=(e^{(1)})^*e^{(1)}+\sum\limits_{j=2}^{w(e)}e^{(j)}(e^{(j)})^*=r(e)^{(1)}+\sum\limits_{j=2}^{w(e)} r(e)^{(j)}=\delta_{ef}\alpha_{r(e)}\] (note that $e^{(j)}$ is the only edge emitted by $r(e)^{(j)}$ in $\tilde E$). \end{enumerate}\medskip Thus (iii) holds. \\ Next we check (iv). Let $v\in E^0$ and $1\leq i,j\leq w(v)$.
Note that the existence of $i,j$ with the property $1\leq i,j\leq w(v)$ implies that $w(v)\geq 1$, i.e. that $v$ is not a sink in $(E,w)$. It follows that $v\in E^0\setminus r(E^1_w)$. We have to show that $\sum\limits_{e\in s^{-1}(v)}\beta_{e,i}\gamma_{e,j}= \delta_{ij}\alpha_{v}$. \begin{enumerate}[C{a}se (a)] \item Assume that $i=j=1$. Clearly \begin{align*} &\sum\limits_{e\in s^{-1}(v)}\beta_{e,1}\gamma_{e,1}\\ =&\sum\limits_{\substack{e\in s^{-1}(v)\cap E^1_{uw},\\r(e)\not\in r(E^1_w)}}\beta_{e,1}\gamma_{e,1}+\sum\limits_{\substack{e\in s^{-1}(v)\cap E^1_{uw},\\r(e)\in r(E^1_w)}}\beta_{e,1}\gamma_{e,1}+\sum\limits_{e\in s^{-1}(v)\cap E^1_{w}}\beta_{e,1}\gamma_{e,1}\\ =&\sum\limits_{\substack{e\in s^{-1}(v)\cap E^1_{uw},\\r(e)\not\in r(E^1_w)}}ee^*+\sum\limits_{\substack{e\in s^{-1}(v)\cap E^1_{uw},\\r(e)\in r(E^1_w)}}\sum\limits_{j=1}^{w(g^{r(e)})}e^{(j)}\sum\limits_{k=1}^{w(g^{r(e)})}(e^{(k)})^*+\sum\limits_{e\in s^{-1}(v)\cap E^1_{w}}e^{(1)}(e^{(1)})^*\\ =&\underbrace{\sum\limits_{\substack{e\in s^{-1}(v)\cap E^1_{uw},\\r(e)\not\in r(E^1_w)}}ee^*+\sum\limits_{\substack{e\in s^{-1}(v)\cap E^1_{uw},\\r(e)\in r(E^1_w)}}\sum\limits_{j,k=1}^{w(g^{r(e)})}e^{(j)}(e^{(k)})^*+\sum\limits_{e\in s^{-1}(v)\cap E^1_{w}}e^{(1)}(e^{(1)})^*}_{T_1:=}. \end{align*} Since $\tilde r(e^{(j)})=r(e)^{(j)}$ for any $e\in E^1_{uw},r(e)\in r(E^1_w)$, we have $e^{(j)}(e^{(k)})^*=0$ in $L_K(\tilde E)$ whenever $j\neq k$. Hence \[T_1=\underbrace{\sum\limits_{\substack{e\in s^{-1}(v)\cap E^1_{uw},\\r(e)\not\in r(E^1_w)}}ee^*+\sum\limits_{\substack{e\in s^{-1}(v)\cap E^1_{uw},\\r(e)\in r(E^1_w)}}\sum\limits_{j=1}^{w(g^{r(e)})}e^{(j)}(e^{(j)})^*+\sum\limits_{e\in s^{-1}(v)\cap E^1_{w}}e^{(1)}(e^{(1)})^*}_{T_2:=}.\] One checks easily that \begin{align*} \tilde s^{-1}(v)=&\{e\mid e\in s^{-1}(v)\cap E^1_{uw},r(e)\not\in r(E^1_w)\}\\ &\sqcup\{e^{(j)}\mid e\in s^{-1}(v)\cap E^1_{uw}, r(e)\in r(E^1_w),1\leq j\leq w(g^{r(e)})\}\\ &\sqcup\{e^{(1)}\mid e\in s^{-1}(v)\cap E^1_{w}\}. 
\end{align*} Hence $T_2=v=\delta_{11}\alpha_{v}$. \medskip \item Assume that $i=1$ and $j>1$. Then $w(v)\geq j>1$ and hence $v$ emits precisely one weighted edge $f$. Since $\gamma_{e,j}=0$ whenever $j> w(e)$, we have \[\sum\limits_{e\in s^{-1}(v)}\beta_{e,1}\gamma_{e,j}=\beta_{f,1}\gamma_{f,j}=f^{(1)}f^{(j)}=0=\delta_{1j}\alpha_{v}\] (note that $\tilde r(f^{(1)})=r(f)^{(1)}\neq r(f)^{(j)}=\tilde s(f^{(j)})$). \medskip \item Assume that $i>1$ and $j=1$. Then $v$ emits precisely one weighted edge $f$. Clearly \[\sum\limits_{e\in s^{-1}(v)}\beta_{e,i}\gamma_{e,1}=\beta_{f,i}\gamma_{f,1}=(f^{(i)})^*(f^{(1)})^*=0=\delta_{i1}\alpha_{v}\] (note that $\tilde s(f^{(i)})=r(f)^{(i)}\neq r(f)^{(1)}=\tilde r(f^{(1)})$). \medskip \item Assume that $i,j>1$. Then $v$ emits precisely one weighted edge $f$. Clearly \[\sum\limits_{e\in s^{-1}(v)}\beta_{e,i}\gamma_{e,j}=\beta_{f,i}\gamma_{f,j}=(f^{(i)})^*f^{(j)}=\delta_{ij}\tilde r(f^{(i)})=\delta_{ij}v=\delta_{ij}\alpha_{v}.\] \end{enumerate}\medskip Thus (iv) holds too and hence $X$ is an $(E,w)$-family in $L_K(\tilde E)$. By the Universal Property of $L_K(E,w)$ there is a unique $K$-algebra homomorphism $\phi: L_K(E,w)\rightarrow L_K(\tilde E)$ such that $\phi(v)=\alpha_v$, $\phi(e_{i})=\beta_{e,i}$ and $\phi(e^*_{i})=\gamma_{e,i}$ for all $v\in E^0$, $e\in E^1$ and $1\leq i\leq w(e)$.\\ \\ {\bf Part II} Set \begin{align*} \tilde \alpha_{\tilde v}&:=\begin{cases}v,&\text{ if } \tilde v=v\in M,\\(g^v_{i})^*g^v_i,&\text{ if }\tilde v=v^{(i)}\in N,\end{cases}\\ \tilde\beta_{\tilde e}&:=\begin{cases}e_1,&\text{ if } \tilde e=e\in A,\\e_1(g^{r(e)}_{i})^*g^{r(e)}_i,&\text{ if }\tilde e=e^{(i)}\in B,\\ e_1,&\text{ if }\tilde e=e^{(1)}\in C,\\ e_i^*,&\text{ if }\tilde e=e^{(i)}\in D, \end{cases}\\ \tilde\gamma_{\tilde e}&:=\begin{cases}e_1^*,&\text{ if } \tilde e=e\in A,\\ (g^{r(e)}_{i})^*g^{r(e)}_ie_1^*,&\text{ if }\tilde e=e^{(i)}\in B,\\ e_1^*,&\text{ if }\tilde e=e^{(1)}\in C,\\ e_i,&\text{ if }\tilde e=e^{(i)}\in D.
\end{cases} \end{align*} In order to show that $\tilde X:=\{\tilde\alpha_{\tilde v},\tilde\beta_{\tilde e},\tilde\gamma_{\tilde e}\mid \tilde v\in \tilde E^0,\tilde e\in \tilde E^1\}$ is an $\tilde E$-family in $L_K(E,w)$, one has to show that the relations (i)-(iv) in Remark \ref{remunproplpa} are satisfied. We leave (i) and (ii) to the reader and show (iii) and (iv).\\ First we check (iii). Let $\tilde v\in \tilde E^0$ and $\tilde e, \tilde f\in \tilde s^{-1}(\tilde v)$. We have to show that $\tilde\gamma_{\tilde e}\tilde\beta_{\tilde f}=\delta_{\tilde e \tilde f}\tilde\alpha_{\tilde r(\tilde e)}$. \begin{enumerate} \item[Case 1] Assume that $\tilde v\in M$. Then $\tilde e,\tilde f\in A\cup B\cup C$ since $\tilde s(D)\subseteq N$. \medskip \begin{enumerate} \item[Case 1.1] Assume that $\tilde e,\tilde f\in A$. Then there are $e,f\in E^1_{uw}, r(e),r(f)\not\in r(E^1_w)$ such that $\tilde e=e$ and $\tilde f=f$. Clearly $\tilde\gamma_{\tilde e}\tilde\beta_{\tilde f}=e_1^*f_1=\delta_{ef}r(e)=\delta_{\tilde e \tilde f}\tilde\alpha_{\tilde r(\tilde e)}$.\medskip \item[Case 1.2] Assume that $\tilde e\in A$ and $\tilde f\in B$. Then there is an $e\in E^1_{uw}, r(e)\not\in r(E^1_w)$ such that $\tilde e=e$. Moreover, there is an $f\in E^1_{uw}, r(f)\in r(E^1_w)$ and a $1\leq i \leq w(g^{r(f)})$ such that $\tilde f=f^{(i)}$. Clearly $e\neq f$ and $\tilde e\neq\tilde f$. Hence $\tilde\gamma_{\tilde e}\tilde\beta_{\tilde f}=e_1^*f_1(g_i^{r(f)})^*g_i^{r(f)}=\delta_{ef}(g_i^{r(f)})^*g_i^{r(f)}=0=\delta_{\tilde e \tilde f}\tilde\alpha_{\tilde r(\tilde e)}$.\medskip \item[Case 1.3] Assume that $\tilde e\in A$ and $\tilde f\in C$. Then there is an $e\in E^1_{uw}, r(e)\not\in r(E^1_w)$ such that $\tilde e=e$. Moreover, there is an $f\in E^1_{w}$ such that $\tilde f=f^{(1)}$. Clearly $e\neq f$ and $\tilde e\neq\tilde f$.
Hence $\tilde\gamma_{\tilde e}\tilde\beta_{\tilde f}=e_1^*f_1=\delta_{ef}r(e)=0=\delta_{\tilde e \tilde f}\tilde\alpha_{\tilde r(\tilde e)}$.\medskip \item[Case 1.4] Assume that $\tilde e\in B$ and $\tilde f\in A$. Then there is an $e\in E^1_{uw}, r(e)\in r(E^1_w)$ and a $1\leq i \leq w(g^{r(e)})$ such that $\tilde e=e^{(i)}$. Moreover, there is an $f\in E^1_{uw}, r(f)\not\in r(E^1_w)$ such that $\tilde f=f$. Clearly $e\neq f$ and $\tilde e\neq \tilde f$. Hence $\tilde\gamma_{\tilde e}\tilde\beta_{\tilde f}=(g^{r(e)}_{i})^*g^{r(e)}_ie_1^*f_1=\delta_{ef}(g^{r(e)}_{i})^*g^{r(e)}_i=0=\delta_{\tilde e \tilde f}\tilde\alpha_{\tilde r(\tilde e)}$.\medskip \item[Case 1.5] Assume that $\tilde e,\tilde f\in B$. Then there are $e,f\in E^1_{uw}, r(e),r(f)\in r(E^1_w)$ and $1\leq i\leq w(g^{r(e)}), 1\leq j\leq w(g^{r(f)})$ such that $\tilde e=e^{(i)}$ and $\tilde f=f^{(j)}$. Clearly $\tilde\gamma_{\tilde e}\tilde\beta_{\tilde f}=(g^{r(e)}_{i})^*g^{r(e)}_ie_1^*f_1(g_j^{r(f)})^*g_j^{r(f)}=\delta_{ef}(g^{r(e)}_{i})^*g^{r(e)}_i(g_j^{r(f)})^*g_j^{r(f)}=\delta_{ef}\delta_{ij}(g^{r(e)}_{i})^*g^{r(e)}_i=\delta_{\tilde e \tilde f}\tilde\alpha_{\tilde r(\tilde e)}$. \medskip \item[Case 1.6] Assume that $\tilde e\in B$ and $\tilde f\in C$. Then there is an $e\in E^1_{uw}, r(e)\in r(E^1_w)$ and a $1\leq i\leq w(g^{r(e)})$ such that $\tilde e=e^{(i)}$. Moreover, there is an $f\in E^1_{w}$ such that $\tilde f=f^{(1)}$. Clearly $e\neq f$ and $\tilde e\neq\tilde f$. Hence $\tilde\gamma_{\tilde e}\tilde\beta_{\tilde f}=(g^{r(e)}_{i})^*g^{r(e)}_ie_1^*f_1=\delta_{ef}(g^{r(e)}_{i})^*g^{r(e)}_i=0=\delta_{\tilde e \tilde f}\tilde\alpha_{\tilde r(\tilde e)}$.\medskip \item[Case 1.7] Assume that $\tilde e\in C$ and $\tilde f\in A$. Then there is an $e\in E^1_w$ such that $\tilde e=e^{(1)}$. Moreover, there is an $f\in E^1_{uw}, r(f)\not\in r(E^1_w)$ such that $\tilde f=f$. Clearly $e\neq f$ and $\tilde e\neq\tilde f$.
Hence $\tilde\gamma_{\tilde e}\tilde\beta_{\tilde f}=e_1^*f_1=\delta_{ef}r(e)=0=\delta_{\tilde e \tilde f}\tilde\alpha_{\tilde r(\tilde e)}$.\medskip \item[Case 1.8] Assume that $\tilde e\in C$ and $\tilde f\in B$. Then there is an $e\in E^1_w$ such that $\tilde e =e^{(1)}$. Moreover, there is an $f\in E^1_{uw}, r(f)\in r(E^1_w)$ and a $1\leq i \leq w(g^{r(f)})$ such that $\tilde f=f^{(i)}$. Clearly $e\neq f$ and $\tilde e\neq\tilde f$. Hence $\tilde\gamma_{\tilde e}\tilde\beta_{\tilde f}=e_1^*f_1(g_i^{r(f)})^*g_i^{r(f)}=\delta_{ef}(g_i^{r(f)})^*g_i^{r(f)}=0=\delta_{\tilde e \tilde f}\tilde\alpha_{\tilde r(\tilde e)}$.\medskip \item[Case 1.9] Assume that $\tilde e,\tilde f\in C$. Then there are $e,f\in E^1_{w}$ such that $\tilde e=e^{(1)}$ and $\tilde f=f^{(1)}$. Since $s(e)=\tilde s(e^{(1)})=\tilde s(\tilde e)=\tilde s(\tilde f)=\tilde s(f^{(1)})=s(f)$, we have $e=f$ (because no vertex in $(E,w)$ emits two distinct weighted edges). It follows that $\tilde e=\tilde f$. Clearly $\tilde\gamma_{\tilde e}\tilde\beta_{\tilde f}=e_1^*e_1=\delta_{\tilde e \tilde f}\tilde\alpha_{\tilde r(\tilde e)}$. \end{enumerate} \medskip \item[Case 2] Assume that $\tilde v\in N$. Then $\tilde v=v^{(i)}$ for some $v\in r(E^1_w)$ and $1\leq i\leq w(g^v)$. One checks easily that $\tilde s^{-1}(\tilde v)=\emptyset$ if $i=1$ and $\tilde s^{-1}(\tilde v)=\{(g^v)^{(i)}\}$ if $i>1$. It follows that $i>1$ and $\tilde e=\tilde f=(g^v)^{(i)}$. Hence $\tilde\gamma_{\tilde e}\tilde\beta_{\tilde f}=g^v_i(g^v_i)^*=s(g^v)=\delta_{\tilde e \tilde f}\tilde\alpha_{\tilde r(\tilde e)}$. \end{enumerate} Thus (iii) holds. \\ Next we check (iv). Let $\tilde v\in \tilde E^0$ such that $\tilde s^{-1}(\tilde v)\neq\emptyset$. We have to show that $\sum\limits_{\tilde e\in \tilde s^{-1}(\tilde v)}\tilde\beta_{\tilde e}\tilde\gamma_{\tilde e}= \tilde\alpha_{\tilde v}$. \begin{enumerate}[C{a}se (a)] \item Assume that $\tilde v\in M$. Then $\tilde v=v$ for some $v\in E^0\setminus r(E^1_w)$.
One checks easily that \begin{align*} \tilde s^{-1}(\tilde v)=&\{e\mid e\in s^{-1}(v)\cap E^1_{uw},r(e)\not\in r(E^1_w)\}\\ &\sqcup\{e^{(i)}\mid e\in s^{-1}(v)\cap E^1_{uw}, r(e)\in r(E^1_w),1\leq i\leq w(g^{r(e)})\}\\ &\sqcup\{e^{(1)}\mid e\in s^{-1}(v)\cap E^1_{w}\}. \end{align*} Hence \begin{align*} &\sum\limits_{\tilde e\in \tilde s^{-1}(\tilde v)}\tilde \beta_{\tilde e}\tilde \gamma_{\tilde e}\\ =&\sum\limits_{\substack{e\in s^{-1}(v)\cap E^1_{uw},\\r(e)\not\in r(E^1_w)}}\tilde\beta_{e}\tilde\gamma_{e}+\sum\limits_{\substack{e\in s^{-1}(v)\cap E^1_{uw},\\r(e)\in r(E^1_w)}}\sum\limits_{i=1}^{w(g^{r(e)})}\tilde\beta_{e^{(i)}}\tilde\gamma_{e^{(i)}}+\sum\limits_{e\in s^{-1}(v)\cap E^1_{w}}\tilde\beta_{e^{(1)}}\tilde\gamma_{e^{(1)}}\\ =&\sum\limits_{\substack{e\in s^{-1}(v)\cap E^1_{uw},\\r(e)\not\in r(E^1_w)}}e_1e_1^*+\sum\limits_{\substack{e\in s^{-1}(v)\cap E^1_{uw},\\r(e)\in r(E^1_w)}}\sum\limits_{i=1}^{w(g^{r(e)})}e_1(g^{r(e)}_{i})^*g^{r(e)}_i(g^{r(e)}_{i})^*g^{r(e)}_ie_1^*+\sum\limits_{e\in s^{-1}(v)\cap E^1_{w}}e_1e_1^*\\ =&\sum\limits_{\substack{e\in s^{-1}(v)\cap E^1_{uw},\\r(e)\not\in r(E^1_w)}}e_1e_1^*+\sum\limits_{\substack{e\in s^{-1}(v)\cap E^1_{uw},\\r(e)\in r(E^1_w)}}\sum\limits_{i=1}^{w(g^{r(e)})}e_1(g^{r(e)}_{i})^*g^{r(e)}_ie_1^*+\sum\limits_{e\in s^{-1}(v)\cap E^1_{w}}e_1e_1^*\\ =&\sum\limits_{\substack{e\in s^{-1}(v)\cap E^1_{uw},\\r(e)\not\in r(E^1_w)}}e_1e_1^*+\sum\limits_{\substack{e\in s^{-1}(v)\cap E^1_{uw},\\r(e)\in r(E^1_w)}}e_1\big(\sum\limits_{i=1}^{w(g^{r(e)})}(g^{r(e)}_{i})^*g^{r(e)}_i\big)e_1^*+\sum\limits_{e\in s^{-1}(v)\cap E^1_{w}}e_1e_1^*\\ =&\sum\limits_{\substack{e\in s^{-1}(v)\cap E^1_{uw},\\r(e)\not\in r(E^1_w)}}e_1e_1^*+\sum\limits_{\substack{e\in s^{-1}(v)\cap E^1_{uw},\\r(e)\in r(E^1_w)}}e_1e_1^*+\sum\limits_{e\in s^{-1}(v)\cap E^1_{w}}e_1e_1^*\\ =&v=\tilde\alpha_{\tilde v}. \end{align*} \item Assume that $\tilde v\in N$. Then $\tilde v=v^{(i)}$ for some $v\in r(E^1_w)$ and $1\leq i\leq w(g^v)$. 
As mentioned above we have $\tilde s^{-1}(\tilde v)=\emptyset$ if $i=1$ and $\tilde s^{-1}(\tilde v)=\{(g^v)^{(i)}\}$ if $i>1$. Since by assumption $\tilde s^{-1}(\tilde v)\neq\emptyset$, it follows that $i>1$ and $\sum\limits_{\tilde e\in \tilde s^{-1}(\tilde v)}\tilde \beta_{\tilde e}\tilde \gamma_{\tilde e}=\tilde\beta_{(g^v)^{(i)}}\tilde\gamma_{(g^v)^{(i)}}=(g^v_i)^*g^v_i= \tilde\alpha_{\tilde v}$. \end{enumerate} Thus (iv) holds too and hence $\tilde X$ is an $\tilde E$-family in $L_K(E,w)$. By the Universal Property of $L_K(\tilde E)$ there is a unique $K$-algebra homomorphism $\tilde \phi: L_K(\tilde E)\rightarrow L_K(E,w)$ such that $\tilde\phi(\tilde v)=\tilde\alpha_{\tilde v}$, $\tilde\phi(\tilde e)=\tilde\beta_{\tilde e}$ and $\tilde\phi(\tilde e^*)=\tilde\gamma_{\tilde e}$ for all $\tilde v\in \tilde E^0$ and $\tilde e\in \tilde E^1$. \\ \\ {\bf Part III} First we show that $\tilde \phi\circ\phi=\operatorname{id}_{L_K(E,w)}$. Clearly it suffices to show that $\tilde \phi\circ\phi$ fixes all elements of $\{v,e_i,e_i^*\mid v\in E^0, e\in E^1, 1\leq i\leq w(e)\}$ since these elements generate $L_K(E,w)$ as a $K$-algebra. One checks easily that $\tilde \phi\circ\phi$ fixes all elements $v,e_i,e_i^*$ where $v\in E^0$ and $e\in E_w^1$ or $e\in E_{uw}^1, r(e)\not\in r(E^1_w)$. Let now $e\in E_{uw}^1, r(e)\in r(E^1_w)$. Then \[\tilde \phi(\phi(e_1))=\tilde \phi(\sum\limits_{j=1}^{w(g^{r(e)})}e^{(j)})=\sum\limits_{j=1}^{w(g^{r(e)})}e_1(g_j^{r(e)})^*g_j^{r(e)}=e_1\sum\limits_{j=1}^{w(g^{r(e)})}(g_j^{r(e)})^*g_j^{r(e)}=e_1r(e)=e_1.\] Similarly one can show that $\tilde \phi(\phi(e_1^*))=e_1^*$ in this case. Hence $\tilde \phi\circ\phi=\operatorname{id}_{L_K(E,w)}$.\\ Now we show that $\phi\circ\tilde\phi=\operatorname{id}_{L_K(\tilde E)}$. Clearly it suffices to show that $\phi\circ\tilde\phi$ fixes all elements of $\{\tilde v,\tilde e,\tilde e^*\mid \tilde v\in \tilde E^0, \tilde e\in \tilde E^1\}$ since these elements generate $L_K(\tilde E)$ as a $K$-algebra.
One checks easily that $\phi\circ\tilde\phi$ fixes all elements $\tilde v,\tilde e,\tilde e^*$ where $\tilde v \in \tilde E^0$ and $\tilde e\in \tilde E^1\setminus B$. Let now $\tilde e\in B$. Then $\tilde e=e^{(i)}$ for some $e\in E^1_{uw}, r(e)\in r(E^1_w)$ and $1\leq i \leq w(g^{r(e)})$. Clearly \[\phi(\tilde\phi(\tilde e))=\phi(e_1(g_i^{r(e)})^*g_i^{r(e)})=\begin{cases}\sum\limits_{j=1}^{w(g^{r(e)})}e^{(j)}((g^{r(e)})^{(1)})^*(g^{r(e)})^{(1)},&\text{ if }i=1,\\\sum\limits_{j=1}^{w(g^{r(e)})}e^{(j)}(g^{r(e)})^{(i)}((g^{r(e)})^{(i)})^*,&\text{ if }i>1.\end{cases}\] But $((g^{r(e)})^{(1)})^*(g^{r(e)})^{(1)}=\tilde r((g^{r(e)})^{(1)})=r(g^{r(e)})^{(1)}=r(e)^{(1)}$ in $L_K(\tilde E)$. Since $\tilde r(e^{(j)})=r(e)^{(j)}$, it follows that $\sum\limits_{j=1}^{w(g^{r(e)})}e^{(j)}((g^{r(e)})^{(1)})^*(g^{r(e)})^{(1)}=e^{(1)}=\tilde e$ if $i=1$. Now assume that $i>1$. One checks easily that $\tilde s^{-1}(r(e)^{(i)})=\{(g^{r(e)})^{(i)}\}$. Hence $(g^{r(e)})^{(i)}((g^{r(e)})^{(i)})^*=r(e)^{(i)}$ in $L_K(\tilde E)$. Since $\tilde r(e^{(j)})=r(e)^{(j)}$, it follows that $\sum\limits_{j=1}^{w(g^{r(e)})}e^{(j)}(g^{r(e)})^{(i)}((g^{r(e)})^{(i)})^*=e^{(i)}=\tilde e$. Hence we have shown that $\phi(\tilde\phi(\tilde e))=\tilde e$ if $\tilde e\in B$. Similarly one can show that $\phi(\tilde \phi(\tilde e^*))=\tilde e^*$ in this case. Hence $\phi\circ\tilde \phi=\operatorname{id}_{L_K(\tilde E)}$ and thus $L_K(E,w)\cong L_K(\tilde E)$. \end{proof} \begin{example}\label{exlpa2} Consider the weighted graph \[ (E,w):\quad\xymatrix@C+15pt{ t\ar@/_1.7pc/[r]_{b^{(1)},1}&u\ar@/_1.7pc/[l]_{a^{(1)},1}\ar@/_1.0pc/[l]^{a^{(2)},1}& v\ar[l]_{c,1}\ar@(ul,ur)^{d,1}\ar@/^1.9pc/[rr]^{e,1}\ar[r]^{f,2}\ar@/_1.7pc/[r]_{g,1}&x&y\ar[l]_{h^{(1)},1}&z\ar@/_1.7pc/[l]_{k^{(1)},1}\ar@/^1.7pc/[l]^{k^{(2)},1} }. \] Let $\tilde E$ be defined as in the proof of Lemma \ref{lemkey2}.
Then $\tilde E$ is the graph \[ \tilde E:\quad\xymatrix@C+15pt{ t\ar@/_1.7pc/[r]_{b^{(1)}}&u\ar@/_1.7pc/[l]_{a^{(1)}}\ar@/_1pc/[l]^{a^{(2)}}& v\ar[l]_{c}\ar@(ul,ur)^{d}\ar@/^2.8pc/[rr]^{e}\ar[r]^{f^{(1)}}\ar@/_0.7pc/[r]_{g^{(1)}}\ar@/_2.5pc/[dr]_{g^{(2)}}&x^{(1)}&y\ar[l]_{(h^{(1)})^{(1)}}\ar[dl]^{(h^{(1)})^{(2)}}&z\ar@/_1.7pc/[l]_{k^{(1)}}\ar@/^1.7pc/[l]^{k^{(2)}}\\ &&&x^{(2)}\ar@/^0.7pc/[ul]^{f^{(2)}}&& }. \] The proof of Lemma \ref{lemkey2} shows that $L_K(E,w)\cong L_K(\tilde E)$. \end{example} Lemmas \ref{lemkey1} and \ref{lemkey2} directly imply the theorem below. \begin{theorem}\label{thmm} Let $(E,w)$ be a weighted graph that satisfies Condition (LPA). Then the weighted Leavitt path algebra $L_K(E,w)$ is isomorphic to an unweighted Leavitt path algebra. \end{theorem} \begin{example}\label{exlpa3} Consider the weighted graph \[ (E,w):\quad\xymatrix@C+15pt{ t\ar@/^1.7pc/[r]^{a,2}&u\ar@/^1.7pc/[l]^{b,1}& v\ar[l]_{c,1}\ar@(ul,ur)^{d,1}\ar@/^1.9pc/[rr]^{e,1}\ar[r]^{f,2}\ar@/_1.7pc/[r]_{g,1}&x\ar[r]^{h,1}&y\ar[r]^{k,2}&z}, \] which satisfies Condition (LPA), and the graph \[ \tilde E:\quad\xymatrix@C+15pt{ t\ar@/_1.7pc/[r]_{b^{(1)}}&u\ar@/_1.7pc/[l]_{a^{(1)}}\ar@/_1pc/[l]^{a^{(2)}}& v\ar[l]_{c}\ar@(ul,ur)^{d}\ar@/^2.8pc/[rr]^{e}\ar[r]^{f^{(1)}}\ar@/_0.7pc/[r]_{g^{(1)}}\ar@/_2.5pc/[dr]_{g^{(2)}}&x^{(1)}&y\ar[l]_{(h^{(1)})^{(1)}}\ar[dl]^{(h^{(1)})^{(2)}}&z\ar@/_1.7pc/[l]_{k^{(1)}}\ar@/^1.7pc/[l]^{k^{(2)}}\\ &&&x^{(2)}\ar@/^0.7pc/[ul]^{f^{(2)}}&& }. \] By Examples \ref{exlpa1} and \ref{exlpa2} we have $L_K(E,w)\cong L_K(\tilde E)$. \end{example} \section{Absence of Condition (LPA)} Throughout this section $(E,w)$ denotes a weighted graph. We start by recalling the basis result of \cite{hazrat-preusser}. Set $X:=\{v,e_i,e_i^*\mid v\in E^0,e\in E^1,1\leq i\leq w(e)\}$, let $\langle X \rangle$ be the set of all nonempty words over $X$ and set $\overline{\langle X \rangle}:=\langle X \rangle\cup\{\text{empty word}\}$.
Together with juxtaposition of words $\langle X \rangle$ becomes a semigroup and $\overline{\langle X \rangle}$ a monoid. If $A,B\in \overline{\langle X \rangle}$, then $B$ is called a {\it subword of $A$} if there are $C,D \in\overline{\langle X \rangle}$ such that $A=CBD$ and a {\it suffix of $A$} if there is a $C \in \overline{\langle X \rangle}$ such that $A=CB$. \begin{definition} Let $p=x_1\dots x_n\in\langle X \rangle$. Then $p$ is called {\it a d-path} if either $x_1,\dots,x_n\in X\setminus E^0$ and $r(x_i)=s(x_{i+1})~(1\leq i \leq n-1)$ or $x_1\in E^0$ and $n=1$. Here we use the convention $s(v):=v$, $r(v):=v$, $s(e_i):=s(e)$, $r(e_i):=r(e)$, $s(e^*_i):=r(e)$ and $r(e^*_i):=s(e)$ for any $v\in E^0$, $e \in E^1$ and $1\leq i \leq w(e)$. \end{definition} \begin{remark} Let $\hat E$ be the directed graph associated to $(E,w)$ and $\hat E_d$ the double graph of $\hat E$ (see \cite[Definitions 2 and 8]{preusser}). The d-paths are precisely the paths in the double graph $\hat E_d$. \end{remark} Fix for any $v\in E^0$ such that $s^{-1}(v)\neq\emptyset$ an edge $e^v\in s^{-1}(v)$ such that $w(e^v)=w(v)$. The $e^v$'s are called {\it special edges}. \begin{definition} The words $e^v_i(e^v_j)^*~(v\in E^0,1\leq i,j\leq w(v))$ and $e^*_1f_1~(e,f\in E^1)$ in $\langle X \rangle$ are called {\it forbidden}. A {\it normal d-path} or {\it nod-path} is a d-path $p$ such that none of its subwords is forbidden. \end{definition} Let $K\langle X \rangle$ be the free $K$-algebra generated by $X$ (i.e. the $K$-vector space with basis $\langle X \rangle$ which becomes a $K$-algebra by linearly extending the juxtaposition of words). Then $L_K(E,w)$ is the quotient of $K\langle X \rangle$ by the ideal generated by the relations (i)-(iv) in Definition \ref{def3}. Let $K\langle X \rangle_{\operatorname{nod}}$ be the linear subspace of $K\langle X \rangle$ spanned by the nod-paths.
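The following small example, chosen ad hoc for illustration, shows the notions of special edges, forbidden words and nod-paths in the simplest nontrivial case. \begin{example} Let $(E,w)$ be the weighted graph with $E^0=\{v\}$ and $E^1=\{e\}$, where $s(e)=r(e)=v$ and $w(e)=2$. Then $X=\{v,e_1,e_2,e_1^*,e_2^*\}$ and $w(v)=2$. Since $e$ is the only edge emitted by $v$, necessarily $e^v=e$, and the forbidden words are \[e_1e_1^*,\quad e_1e_2^*,\quad e_2e_1^*,\quad e_2e_2^*\quad\text{and}\quad e_1^*e_1.\] Every nonempty word over $\{e_1,e_2,e_1^*,e_2^*\}$ is a d-path. For instance, $e_2e_1e_2$, $e_1^*e_2^*$ and $e_2^*e_1$ are nod-paths, while $e_2e_1e_1^*$ and $e_2e_1^*e_2$ are not, since they contain the forbidden subwords $e_1e_1^*$ and $e_2e_1^*$, respectively. Observe that in this example a letter $e_i$ is never followed by a letter $e_j^*$ in a nod-path, since all four words $e_ie_j^*$ are forbidden. \end{example}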
\begin{theorem}[Hazrat, Preusser, 2017] \label{thmbasis} The canonical map $K\langle X \rangle_{\operatorname{nod}}\rightarrow L_K(E,w)$ is an isomorphism of $K$-vector spaces. In particular the images of the nod-paths under this map form a linear basis for $L_K(E,w)$. \end{theorem} \begin{proof} See \cite[Theorem 16]{hazrat-preusser} and its proof. \end{proof} The following lemma will be used in the proofs of Theorems \ref{thm1}, \ref{thm2}, \ref{thm3}, \ref{thm4}, \ref{thm5}, \ref{thm6} and \ref{thmm2}. \begin{keylemma}\label{lemimp} Suppose that $(E,w)$ does not satisfy Condition (LPA). Then there is a nod-path whose first letter is $e_2$ and whose last letter is $e_2^*$ for some $e\in E^1_w$. \end{keylemma} \begin{proof} \cite[Proof of Lemma 35]{preusser} shows that if one of the Conditions (LPA1), (LPA2) and (LPA3) is not satisfied, then there is a nod-path whose first letter is $e_2$ and whose last letter is $e_2^*$ for some $e\in E^1_w$. Assume now that $(E,w)$ does not satisfy Condition (LPA4). Then there is an $e\in E^1_w$, a path $p$ and a cycle $c$ such that $s(p)=r(e)$, $r(p)=s(c)$ and $e$ does not belong to $c$. Write $c=f^{(1)}\dots f^{(m)}$ where $f^{(1)},\dots,f^{(m)}\in E^1$. If $p=r(e)$, then $e_2f^{(1)}_1\dots f^{(m)}_1e_2^*$ is a nod-path (since $f^{(m)}\neq e$). Now assume that $p=g^{(1)}\dots g^{(n)}$ where $g^{(1)},\dots,g^{(n)}\in E^1$. Clearly we may assume that no letter of $p$ is a letter of $c$. One checks easily that $e_2g^{(1)}_1\dots g^{(n)}_1f^{(1)}_1\dots f^{(m)}_1(g^{(n)}_1)^*\dots(g^{(1)}_1)^*e_2^*$ is a nod-path (note that $f^{(m)}\neq g^{(n)}$). \end{proof} \begin{theorem}\label{thm1} Suppose that $(E,w)$ does not satisfy Condition (LPA). Then $L_K(E,w)$ is neither simple nor graded simple. \end{theorem} \begin{proof} By Lemma \ref{lemimp}, there is a nod-path $p$ whose first letter is $e_2$ and whose last letter is $e_2^*$ for some $e\in E^1_w$.
One checks easily that the ideal $I$ generated by $p$ equals the linear span of all nod-paths that contain $p$ as a subword (note that $e_2$ is not the second letter of a forbidden word and $e_2^*$ not the first letter of a forbidden word). It follows that $I$ is a proper ideal of $L_K(E,w)$ (it is not the zero ideal since it contains the basis element $p$ and it is not equal to $L_K(E,w)$ since it does not contain any vertex). Since $I$ is generated by a homogeneous element, it is a graded ideal. \end{proof} Recall that a group graded $K$-algebra $A=\bigoplus\limits_{g\in G} A_g$ is called {\it locally finite} if $\dim_K A_g < \infty$ for every $g\in G$. \begin{theorem}\label{thm2} Suppose that $(E,w)$ does not satisfy Condition (LPA). Then $L_K(E,w)$ is not locally finite. \end{theorem} \begin{proof} By Lemma \ref{lemimp}, there is a nod-path $p=x_1\dots x_n$ such that $x_1=e_2$ and $x_n=e_2^*$ for some $e\in E^1_w$. Set $p^*:=x_n^*\dots x_1^*$ (where $(f_i^*)^*=f_i$ for any $f\in E^1$ and $1\leq i\leq w(f)$). One checks easily that for any $n\in\mathbb{N}$, $(pp^*)^n$ is a nod-path that lies in the homogeneous $0$-component $L_K(E,w)_0$. It follows from Theorem \ref{thmbasis} that $\dim_K(L_K(E,w)_0)=\infty$. \end{proof} \begin{theorem}\label{thm3} Suppose that $(E,w)$ does not satisfy Condition (LPA). Then $L_K(E,w)$ is not Noetherian. \end{theorem} \begin{proof} By Lemma \ref{lemimp}, there is a nod-path $p$ whose first letter is $e_2$ and whose last letter is $e_2^*$ for some $e\in E^1_w$. Let $q$ be the nod-path one gets by replacing the first letter of $p$ by $e_1$. For any $n\in\mathbb{N}$ let $I_n$ be the left ideal generated by the nod-paths $p,pq,\dots,pq^n$. One checks easily that $I_n$ equals the linear span of all nod-paths $o$ such that one of the words $p,pq,\dots,pq^n$ is a suffix of $o$.
It follows that $I_n\subsetneq I_{n+1}$ (clearly none of the words $p,pq,\dots,pq^n$ is a suffix of $pq^{n+1}$ since $p$ and $q$ have the same length but are distinct; hence $pq^{n+1}\not\in I_n$). \end{proof} \begin{theorem}\label{thm4} Suppose that $(E,w)$ does not satisfy Condition (LPA). Then $L_K(E,w)$ is not Artinian. \end{theorem} \begin{proof} By Lemma \ref{lemimp}, there is a nod-path $p$ whose first letter is $e_2$ and whose last letter is $e_2^*$ for some $e\in E^1_w$. For any $n\in\mathbb{N}$ let $I_n$ be the left ideal generated by $p^n$. One checks easily that $I_n$ equals the linear span of all nod-paths $o$ such that $p^n$ is a suffix of $o$. Hence $I_n\supsetneq I_{n+1}$ (clearly $p^{n+1}$ is not a suffix of $p^n$ and hence $p^n\not\in I_{n+1}$). \end{proof} \begin{theorem}\label{thm5} Suppose that $(E,w)$ does not satisfy Condition (LPA). Then $L_K(E,w)$ is not von Neumann regular. \end{theorem} \begin{proof} By Lemma \ref{lemimp}, there is a nod-path $p$ whose first letter is $e_2$ and whose last letter is $e_2^*$ for some $e\in E^1_w$. One checks easily that for any $x\in L_K(E,w)$, $pxp$ is a linear combination of nod-paths of length $\geq 2|p|$. Hence the equation $pxp=p$ has no solution $x\in L_K(E,w)$. \end{proof} We recall some general facts on the growth of algebras. Let $A\neq\{0\}$ be a finitely generated $K$-algebra. Let $V$ be a {\it finite-dimensional generating subspace} of $A$, i.e. a finite-dimensional subspace of $A$ that generates $A$ as a $K$-algebra. For $n\geq 1$ let $V^n$ denote the linear span of the set $\{v_1\dots v_k\mid k\leq n, v_1,\dots,v_k\in V\}$. Then \[V =V^1\subseteq V^2\subseteq V^3\subseteq \dots, \quad A =\bigcup\limits_{n\in \mathbb{N}}V^n\text{ and }d_V(n):=\dim V^n<\infty.\] Given functions $f, g:\mathbb{N}\rightarrow \mathbb{R}^+$, we write $f\preccurlyeq g$ if there is a $c\in\mathbb{N}$ such that $f(n)\leq cg(cn)$ for all $n$. 
If $f\preccurlyeq g$ and $g\preccurlyeq f$, then the functions $f, g$ are called {\it asymptotically equivalent} and we write $f\sim g$. If $W$ is another finite-dimensional generating subspace of $A$, then $d_V\sim d_W$. The {\it Gelfand-Kirillov dimension} or {\it GK dimension} of $A$ is defined as \[\operatorname{GKdim} A := \limsup\limits_{n\rightarrow \infty}\log_nd_V(n).\] The definition of the GK dimension does not depend on the choice of the finite-dimensional generating subspace $V$. If $d_V\preccurlyeq n^m$ for some $m\in \mathbb{N}$, then $A$ is said to have {\it polynomial growth} and we have $\operatorname{GKdim} A \leq m$. If $d_V\sim a^n$ for some real number $a>1$, then $A$ is said to have {\it exponential growth} and we have $\operatorname{GKdim} A =\infty$. For example, $K[x]$ has polynomial growth and $\operatorname{GKdim} K[x]=1$, while a free associative algebra on two generators has exponential growth and hence infinite GK dimension. If $A$ is not finitely generated over $K$, then the GK dimension of $A$ is defined as \[\operatorname{GKdim}(A) := \sup\{\operatorname{GKdim}(B)\mid B \text{ is a finitely generated subalgebra of }A\}.\] For the algebra $A=\{0\}$ we set $\operatorname{GKdim} A:=0$. \begin{theorem}\label{thm6} Suppose that $(E,w)$ does not satisfy Condition (LPA). Then $\operatorname{GKdim}(L_K(E,w))=\infty$. \end{theorem} \begin{proof} Suppose first that $(E,w)$ is finite (in our setting that means that $E^0$ is a finite set). By Lemma \ref{lemimp}, there is a nod-path $p$ in $(E,w)$ whose first letter is $e_2$ and whose last letter is $e_2^*$ for some $e\in E^1_w$. Let $q$ be the nod-path one gets by replacing the first letter of $p$ by $e_1$. Let $n\in \mathbb{N}$. Consider the nod-paths \begin{equation} p^{i_1}q^{i_2}\dots p^{i_{k-1}}q^{i_{k}}~ (k\text{ even}), \text{ and }p^{i_1}q^{i_2}\dots p^{i_{k-2}}q^{i_{k-1}}p^{i_{k}}~(k\text{ odd}) \end{equation} where $k,i_1,\dots,i_k\in \mathbb{N}$ satisfy \begin{equation} (i_1+\dots+i_k)|p|\leq n.
\end{equation} Clearly different solutions $(k,i_1,\dots,i_k)$ and $(k',i'_1,\dots,i'_{k'})$ of inequality (2) correspond to different nod-paths in (1) since $|p|=|q|$. Let $V$ denote the finite-dimensional subspace of $L_K(E,w)$ spanned by $\{v,f_i,f_i^*\mid v\in E^0, f\in E^1, 1\leq i\leq w(f)\}$. By Theorem \ref{thmbasis} the nod-paths in (1) are linearly independent in $V^n$. The number of solutions of (2) is $\sim 2^n$ and hence $L_K(E,w)$ has exponential growth.\\ Now suppose that $(E,w)$ is not finite. One checks easily that there is a finite complete weighted subgraph $(\tilde E, \tilde w)$ of $(E,w)$ that does not satisfy Condition (LPA) (see \cite[p. 884 and Proof of Lemma 5.19]{hazrat13}). By the previous paragraph $L_K(\tilde E, \tilde w)$ has exponential growth. Clearly the inclusion $(\tilde E,\tilde w)\hookrightarrow (E,w)$ induces an algebra monomorphism $L_K(\tilde E,\tilde w)\rightarrow L_K(E,w)$ since one can choose the special edges such that distinct nod-paths are mapped to distinct nod-paths. Hence $L_K(E,w)$ has a finitely generated subalgebra with exponential growth. It follows from the definition of the GK dimension that $\operatorname{GKdim} L_K(E,w)=\infty$. \end{proof} The main result of this section is Theorem \ref{thmm2}. In order to prove it we need two lemmas. \begin{lemma}\label{lemidem} Let $p$ be a nod-path starting with $e_2$ and ending with $e_2^*$ for some $e\in E^1_w$. Then the ideal $I$ of $L_K(E,w)$ generated by $p$ contains no nonzero idempotent. \end{lemma} \begin{proof} For a nod-path $q=x_1\dots x_n$ define $m(q)$ as the largest nonnegative integer $m$ such that there are indices $i_1,\dots,i_m\in \{1,\dots,n\}$ such that $i_j+|p|-1<i_{j+1}~(1\leq j \leq m-1)$, $i_m+|p|-1\leq n$ and $x_{i_j}\dots x_{i_j+|p|-1}=p~(1\leq j \leq m)$. Hence $m(q)$ is maximal with the property that $q$ contains $m(q)$ non-overlapping copies of $p$.\\ Now let $a\in I\setminus \{0\}$.
By Theorem \ref{thmbasis} we can write $a=\sum \limits_{r=1}^{t}k_rq_r$ where $k_1,\dots,k_t\in K\setminus\{0\}$ and $q_1,\dots,q_t$ are pairwise distinct nod-paths. Clearly $m(q_r)\geq 1$ for any $1\leq r\leq t$, since $I$ consists of all linear combinations of nod-paths containing $p$ as a subword. It is easy to show, using the fact that $e_2$ is not the second letter of a forbidden word and $e_2^*$ not the first letter of a forbidden word, that for any $1\leq r,s\leq t$ the product $q_rq_s$ is a linear combination of nod-paths $o$ such that $m(o)\geq m(q_r)+m(q_s)$ (cf. \cite[Proof of Proposition 40]{hazrat-preusser}). It follows that $a^2=\sum \limits_{r,s=1}^{t} k_rk_sq_rq_s$ is a linear combination of nod-paths $o$ such that $m(o)\geq 2m(q_{r_{\min}})>m(q_{r_{\min}})$ where $1\leq r_{\min}\leq t$ is chosen such that $m(q_{r_{\min}})$ is minimal. Hence $a^2$ is a linear combination of nod-paths none of which equals $q_{r_{\min}}$. Thus $a^2$ cannot be equal to $a$. \end{proof} If $\Lambda$ is an infinite set and $S$ is a unital ring, then we denote by $M_{\Lambda}(S)$ the ring consisting of all square matrices $M$, with rows and columns indexed by $\Lambda$, with entries from $S$, for which there are at most finitely many nonzero entries in $M$ (cf. \cite[Notation 2.6.3]{abrams-ara-molina}). \begin{lemma}\label{lemmorita} Let $\Lambda$ be an infinite set and $S$ a left Noetherian, unital ring. Let $I_1\subseteq I_2\subseteq\dots$ be an ascending chain of left ideals of $M_{\Lambda}(S)$. Suppose there is a finite subset $\Lambda^{\operatorname{fin}}$ of $\Lambda$ such that $\sigma_{\lambda\mu}=0$ for any $n\in\mathbb{N}$, $\sigma\in I_n$, $\lambda\in\Lambda$ and $\mu\in \Lambda\setminus \Lambda^{\operatorname{fin}}$. Then the chain $I_1\subseteq I_2\subseteq\dots$ eventually stabilises. \end{lemma} \begin{proof} Write $\Lambda^{\operatorname{fin}}=\{\lambda_1,\dots,\lambda_m\}$. Fix a $\tau\in \Lambda$.
For any $n\in\mathbb{N}$, let $N_n$ be the left $S$-submodule of $S^m$ consisting of all row vectors $(\sigma_{\tau\lambda_1},\dots,\sigma_{\tau\lambda_m})$ where $\sigma$ varies over all matrices in $I_{n}$. Then $I_n$ equals the set of all matrices $\sigma\in M_{\Lambda}(S)$ such that $\sigma_{\lambda\mu}=0$ for any $\lambda\in \Lambda,\mu\in \Lambda\setminus \Lambda^{\operatorname{fin}}$ and $(\sigma_{\lambda\lambda_1},\dots,\sigma_{\lambda\lambda_m})\in N_n$ for any $\lambda\in \Lambda$. Since $S$ is a left Noetherian ring, $S^m$ is a Noetherian module. It follows that the chain $N_1\subseteq N_2\subseteq \dots$ eventually stabilises and thus the chain $I_1\subseteq I_2\subseteq \dots$ eventually stabilises. \end{proof} \begin{theorem}\label{thmm2} Suppose that $(E,w)$ does not satisfy Condition (LPA). Then $L_K(E,w)$ is not isomorphic to an unweighted Leavitt path algebra. \end{theorem} \begin{proof} Assume there is a graph $F$ and an isomorphism $\phi:L_K(E,w)\rightarrow L_K(F)$. By Lemma \ref{lemimp}, there is a nod-path $p$ whose first letter is $e_2$ and whose last letter is $e_2^*$ for some $e\in E^1_w$. Let $q$ be the nod-path one gets by replacing the last letter of $p$ by $e_1^*$. By Lemma \ref{lemidem}, the ideal $I$ of $L_K(E,w)$ generated by $p$ contains no nonzero idempotent. Similarly, for any $n\in \mathbb{N}$, the ideal $I_n$ of $L_K(E,w)$ generated by $qp^n$ contains no nonzero idempotent. It follows from \cite[Proposition 2.7.9]{abrams-ara-molina}, that $\phi(I),\phi(I_n)\subseteq I(P_c(F))~(n\in\mathbb{N})$ where $I(P_c(F))$ is the ideal of $L_K(F)$ generated by all vertices in $F^0$ which belong to a cycle without an exit. It follows that $\phi(p),\phi(qp^n)\in I(P_c(F))~(n\in\mathbb{N})$. By \cite[Theorem 2.7.3]{abrams-ara-molina} we have \begin{equation} I(P_c(F))\cong \bigoplus\limits_{i\in \Gamma}M_{\Lambda_i}(K[x,x^{-1}]) \end{equation} as a $K$-algebra. 
The sets $\Gamma$ and $\Lambda_i~(i\in \Gamma)$ in (3) might be infinite if $F$ is not finite. \\ It follows from the previous paragraph that there is a subalgebra $A$ of $L_K(E,w)$ such that $p,qp^n\in A~(n\in \mathbb{N})$ and $A\cong \bigoplus\limits_{i\in \Gamma}M_{\Lambda_i}(K[x,x^{-1}])$. For any $n\in \mathbb{N}$ let $J_n$ be the left ideal of $A$ generated by $qp^2,\dots,qp^{n+1}$. Then $J_n$ is contained in the linear span of all nod-paths $o$ such that one of the words $qp^2,\dots,qp^{n+1}$ is a suffix of $o$. It follows that $J_n\subsetneq J_{n+1}$ (clearly none of the words $qp^2,\dots,qp^{n+1}$ is a suffix of $qp^{n+2}$ since $p$ and $q$ have the same length but are distinct). If the sets $\Gamma$ and $\Lambda_i~(i\in \Gamma)$ are finite, then we already have a contradiction since it is well-known that $\bigoplus\limits_{i\in \Gamma}M_{\Lambda_i}(K[x,x^{-1}])$ is Noetherian in this case. Hence the next two paragraphs are only needed if one of the sets $\Gamma$ and $\Lambda_i~(i\in \Gamma)$ is infinite.\\ If $a\in A$, then we identify $a$ with its image in $\bigoplus\limits_{i\in \Gamma}M_{\Lambda_i}(K[x,x^{-1}])$ and write $a_i$ for the $i$-th component of $a$. Set $\Gamma^{\operatorname{fin}}:=\{i\in \Gamma\mid p_i\neq 0\}$. Then $\Gamma^{\operatorname{fin}}$ is a finite subset of $\Gamma$. Clearly $(qp^n)_i=0$ for any $i\in \Gamma\setminus\Gamma^{\operatorname{fin}}$ and $n\geq 2$ (since $(qp^n)_i=(qp^{n-1}p)_i=(qp^{n-1})_ip_i$ for any $n\geq 2$). Hence we can reduce to the case that $\Gamma$ is finite.\\ For any $n\in \mathbb{N}$ and $i\in \Gamma$, let $J_{n,i}$ be the left ideal of $M_{\Lambda_i}(K[x,x^{-1}])$ generated by $(qp^2)_i,\dots,(qp^{n+1})_i$. Then $J_n=\bigtimes\limits_{i\in\Gamma}J_{n,i}$ since each $M_{\Lambda_i}(K[x,x^{-1}])$ has local units. Now fix an $i\in \Gamma$. 
Let $\Lambda_i^{\operatorname{fin}}$ be the finite subset of $\Lambda_i$ consisting of all $\lambda\in \Lambda_i$ such that the $\lambda$-th column of $p_i$ has a nonzero entry. Then clearly $\sigma_{\lambda\mu}=0$ for any $n\in \mathbb{N}$, $\sigma\in J_{n,i}$, $\lambda\in\Lambda_i$ and $\mu\in\Lambda_i\setminus \Lambda_i^{\operatorname{fin}}$ (since any element of $J_{n,i}$ is a left multiple of $p_i$). Hence, by Lemma \ref{lemmorita}, the chain $J_{1,i}\subseteq J_{2,i}\subseteq \dots$ eventually stabilises. Since this holds for any $i\in \Gamma$, we get the contradiction that the chain $J_{1}\subseteq J_{2}\subseteq \dots$ eventually stabilises. \end{proof} \section{Summary} \begin{theorem} Let $(E,w)$ be a row-finite weighted graph and $K$ a field. Then $L_K(E,w)$ is isomorphic to an unweighted Leavitt path algebra iff $(E,w)$ satisfies Condition (LPA) (see Definition \ref{defLPA}). \end{theorem} \begin{proof} Follows from the Theorems \ref{thmm} and \ref{thmm2}. \end{proof} \begin{theorem} Let $(E,w)$ be a row-finite weighted graph and $K$ a field. If $L_K(E,w)$ is simple, or graded simple, or locally finite, or Noetherian, or Artinian, or von Neumann regular, or has finite GK dimension, then $L_K(E,w)$ is isomorphic to an unweighted Leavitt path algebra. \end{theorem} \begin{proof} Follows from the Theorems \ref{thm1},\ref{thm2},\ref{thm3},\ref{thm4},\ref{thm5},\ref{thm6} and \ref{thmm}. \end{proof}
\section*{I. Introduction} \hspace{5mm}It is well known that the individual lepton numbers $L_{e}, L_{\mu}$, and $L_{\tau}$ are automatically conserved and the tree level lepton flavor violating $(LFV)$ processes are absent in the standard model $(SM)$. However, the neutrino oscillation experiments have established that neutrinos are massive and oscillate in flavor, which presently provides the only experimental hint of new physics and implies that the individual lepton numbers are not conserved[1]. Thus, the $SM$ requires some modification to account for the pattern of neutrino mixing, in which the $LFV$ processes are allowed. The observation of $LFV$ signals in present or future high energy experiments would be a clear signature of new physics beyond the $SM$. Some popular models beyond the $SM$ predict the presence of new particles, such as new gauge bosons and new scalars, which can naturally lead to tree level $LFV$ couplings. In general, these new particles could enhance the branching ratios of some $LFV$ processes and perhaps bring them within the observable reach of the present and next generations of collider experiments. Furthermore, nonobservability of these $LFV$ processes can lead to strong constraints on the free parameters of new physics. Thus, studying the possible $LFV$ signals of new physics in various high energy collider experiments is very interesting and needed. Little Higgs models[2] employ an extended set of global and gauge symmetries in order to avoid the one-loop quadratic divergences and thus provide a new method to solve the hierarchy between the $TeV$ scale of possible new physics and the electroweak scale $\nu=246 GeV=(\sqrt{2} G_{F})^{-\frac{1}{2}}$. In this kind of models, the Higgs boson is a pseudo-Goldstone boson of a global symmetry which is spontaneously broken at some high scale.
Electroweak symmetry breaking $(EWSB)$ is induced by radiative corrections leading to a Coleman-Weinberg type of potential. The cancellation of quadratic divergences in the radiative corrections to the Higgs boson mass is due to contributions from new particles with the same spin as the $SM$ particles. This type of models can be regarded as one of the important candidates of the new physics beyond the $SM$. The littlest Higgs model $(LH)$[3] is one of the simplest and phenomenologically viable models, which realizes the little Higgs idea. Recently, using the fact that the $LH$ model contains a complex triplet Higgs boson $\Phi$, Refs.[4,5,6] have discussed the possibility of introducing lepton number violating interactions and the generation of neutrino mass in the little Higgs scenario. Ref.[5] has shown that the most satisfactory way of incorporating neutrino masses is to include a lepton number violating interaction between the triplet scalars and lepton doublets. The tree level neutrino masses are mainly generated by the vacuum expectation value $(VEV)$ $\nu'$ of the complex triplet $\Phi$, which does not affect the cancellation of quadratic divergences in the Higgs mass. The neutrino masses can be given by the term $Y_{ij}\nu'$, in which $Y_{ij}$ ($i, j$ are generation indices) is the Yukawa coupling constant. As long as the triplet $VEV$ $\nu'$ is restricted to be extremely small, the value of $Y_{ij}$ is of natural order one, i.e. $Y_{ij}$ $\approx$ 1, which might produce large contributions to some $LFV$ processes[6,7]. The aim of this paper is to study the contributions of the $LFV$ couplings predicted by the $LH$ model to the $LFV$ processes $l_{i} \rightarrow l_{j} \gamma$ and $l_{i} \rightarrow l_{j}l_{k}l_{k}$, compare our numerical results with the present experimental bounds on these $LFV$ processes, and see whether constraints on the free parameters $Y_{ij}$ can be obtained.
We further calculate the contributions of the $LH$ model to the $LFV$ processes $e^{\pm}e^{\pm}\rightarrow l_{i}^{\pm}l_{j}^{\pm}$ and $e^{+}e^{-} \rightarrow l_{i}^{\pm}l_{j}^{\pm}$ ($l_{i}$ or $l_{j}\neq e$), and discuss the possibility of detecting the $LFV$ signals of the $LH$ model via these processes in the future high energy linear $e^{+}e^{-}$ collider $(ILC)$ experiments. This paper is organized as follows. Section II contains a short summary of the relevant $LFV$ couplings of the scalars (doubly charged scalar $\Phi^{\pm\pm}$, charged scalars $\Phi^{\pm}$, and neutral scalar $\Phi^{0}$) to lepton doublets. The contributions of these $LFV$ couplings to the $LFV$ processes $l_{i} \rightarrow l_{j}\gamma$ and $l_{i} \rightarrow l_{j}l_{k}l_{k}$ are calculated in section III. Using the current experimental upper limits on these $LFV$ processes, we try to give the constraints on the coupling constant $Y_{ij}$ in this section. Section IV is devoted to the computation of the production cross sections of the $LFV$ processes $e^{\pm}e^{\pm}\rightarrow l_{i}^{\pm}l_{j}^{\pm}$ and $e^{+}e^{-} \rightarrow l_{i}^{\pm}l_{j}^{\pm}$ induced by the doubly charged scalars $\Phi^{\pm\pm}$. Some phenomenological analyses are also included in this section. Our conclusions are given in section V. \section*{II. The $LFV$ couplings of the triplet scalars } \hspace{5mm} The $LH$ model[3] consists of a nonlinear $\sigma$ model with a global $SU(5)$ symmetry and a locally gauged symmetry $[SU(2) \times U(1)]^{2}$. The global $SU(5)$ symmetry is broken down to its subgroup $SO(5)$ at a scale $f \sim TeV$, which results in 14 Goldstone bosons $(GB's)$. Four of these $GB's$ are eaten by the gauge bosons $(W^{\pm}_{H}, Z_{H}, B_{H})$, resulting from the breaking of $[SU(2) \times U(1)]^{2}$, giving them masses. The Higgs boson remains a light pseudo Goldstone boson, and the other $GB's$ give masses to the $SM$ gauge bosons and form a scalar triplet $\Phi$.
The complex triplet $\Phi$ offers a chance to introduce lepton number violating interactions in the theory. In the context of the $LH$ model, the lepton number violating interaction which is invariant under the full gauge group, can be written as[5,7]: \begin{equation} \mathscr{L}=-\frac{1}{2}Y_{ij}(L_{i}^{T})_{\alpha}\Sigma ^{\ast}_{\alpha\beta}C^{-1}(L_{j}^{T})_{\beta}+h.c. \end{equation} where $i$ and $j$ are generation indices, $\alpha$ and $\beta$ (= 1, 2) are $SU(5)$ indices, and $L^{T}=(l_{L},\nu_{L})$ is a left handed lepton doublet. $Y_{ij}$ is the Yukawa coupling constant and $C$ is the charge-conjugation operator. Because of the non-linear nature of $\Sigma_{\alpha\beta}^{\ast}$, this interaction can give rise to a mass matrix for the neutrinos as: \begin{equation} M_{ij}=Y_{ij}(\nu'+\frac{\nu^{2}}{4f}). \end{equation} One can see from Eq.(2) that, if we would like to stabilize the Higgs mass and at the same time ensure neutrino masses consistent with experimental data[8], the coupling constant $Y_{ij}$ must be of order $10^{-11}$, which is unnaturally small. However, it has been shown[4,5] that the lepton number violating interaction only involving the complex scalar triplet $\Phi$ can give a neutrino mass matrix $M_{ij}=Y_{ij}\nu'$. Considering the current bounds on the neutrino mass[8], there should be: \begin{equation} Y_{ij}\nu'\sim10^{-10}GeV. \end{equation} Thus, the coupling constant $Y_{ij}$ can naturally be of order one or at least not be unnaturally small provided the $VEV$ $\nu'$ of the triplet scalar $\Phi$ is restricted to be extremely small. In this scenario, the triplet scalar $\Phi$ has the $LFV$ couplings to the left handed lepton pairs, which can be written as[5]: \begin{equation} \mathscr{L}_{LFV}=Y_{ij}[l_{Li}^{T}C^{-1}l_{Lj}\Phi^{++}+\frac{1}{\sqrt{2}} (\nu_{Li}^{T}C^{-1}l_{Lj}+l_{Li}^{T}C^{-1}\nu_{Lj})\Phi^{+} +\nu_{Li}^{T}C^{-1}\nu_{Lj}\Phi^{0}]+h.c.
\end{equation} Considering these $LFV$ couplings, Ref.[5] has investigated the decays of the scalars $\Phi^{\pm\pm}$ and $\Phi^{\pm}$, and found that the most striking signature comes from the doubly charged scalars $\Phi^{\pm\pm}$. The constraints on the coupling constant $Y_{ij}$ and the triplet scalar mass parameter $M_{\Phi}$ coming from the muon anomalous magnetic moment $a_{\mu}$ and the $LFV$ process $\mu^{-} \rightarrow e^{+}e^{-}e^{-}$ are studied in Ref.[7]. In the next section, we will calculate the contributions of the charged scalars $\Phi^{\pm\pm}$ and $\Phi^{\pm}$ to the $LFV$ processes $l_{i} \rightarrow l_{j}\gamma$ and $l_{i} \rightarrow l_{j}l_{k}l_{k}$. \section*{III. The charged scalars and the $LFV$ processes $l_{i} \rightarrow l_{j}\gamma$ \hspace*{1.0cm} and $l_{i} \rightarrow l_{j}l_{k}l_{k}$ } \begin{center} { \begin{small} \begin{tabular}{|c|c|c|} \hline Decay\ Process&Current\ limit&Bound$(GeV^{-4})$ \\ \hline $\mu\rightarrow e\gamma$&$1.2\times 10^{-11}$ [10]&----- \\ $\tau\rightarrow e\gamma$&$1.1\times 10^{-7}$ [12]&-----\\ $\tau\rightarrow \mu\gamma$&$6.8\times 10^{-8}$ [13]&-----\\ $\mu\rightarrow 3e$&$1.0\times10^{-12}$ [11]&$\mid Y_{\mu e}Y_{ee}^{\ast}\mid^{2}/M_{\Phi}^{4} \leq 2.2\times 10^{-19}$\\ $\tau\rightarrow 3e$&$2.0\times 10^{-7}$ [14]&$\mid Y_{\tau e}Y_{ee}^{\ast}\mid^{2}/M_{\Phi}^{4} \leq 2.4\times 10^{-13}$\\ $\tau\rightarrow 2\mu e$&$3.3\times 10^{-7}$ [14]&$\mid Y_{\tau e}Y_{\mu\mu}^{\ast}\mid^{2}/M_{\Phi}^{4} \leq 8.1\times 10^{-13}$\\ $\tau\rightarrow 2e\mu$&$2.7\times 10^{-7}$ [14]&$\mid Y_{\tau\mu}Y_{ee}^{\ast}\mid^{2}/M_{\Phi}^{4} \leq 6.6\times 10^{-13}$\\ $\tau\rightarrow 3\mu$&$1.9\times 10^{-7} [14]$&$\mid Y_{\tau\mu}Y_{\mu\mu}^{\ast}\mid^{2}/M_{\Phi}^{4} \leq 2.3\times 10^{-13}$\\ \hline\end{tabular} \end{small} }\end{center} \hspace{0.3cm} Table 1: The current experimental upper limits on the branching ratios of some $LFV$ \hspace*{2cm} processes and the corresponding upper constraints on the free 
parameters. \hspace*{1.9cm} \vspace*{0.0cm} The observation of neutrino oscillations[1] implies that the individual lepton numbers $L_{e,\mu,\tau}$ are violated, suggesting the appearance of the $LFV$ processes, such as $l_{i} \rightarrow l_{j}\gamma$ and $l_{i} \rightarrow l_{j}l_{k}l_{k}$. The branching ratios of these $LFV$ processes are extremely small in the $SM$ with right handed neutrinos. For example, Ref.[9] has shown $Br(\mu \rightarrow e\gamma) <10^{-47}$. Such a small branching ratio is unobservable. The present experimental upper limits on the branching ratios $Br(\mu \rightarrow e\gamma)$[10], $Br(\mu \rightarrow 3e)$[11], $Br(\tau \rightarrow e\gamma)$[12], $Br(\tau \rightarrow \mu\gamma)$[13], and $Br(\tau \rightarrow l_{i}l_{k}l_{k})$[14] are given in Table 1. Future experiments with increased sensitivity can reduce these current limits by a few orders of magnitude (see, e.g., [15]). In this section, we will use these data to give the constraints on the free parameters $Y_{ij}$ and $M_{\Phi}$. \begin{figure}[htb] \vspace{-7.0cm} \begin{center} \epsfig{file=fig1.ps,width=800pt,height=1000pt} \vspace{-25.0cm} \caption{Feynman diagrams contributing to the radiative decay $l_{i}^{-}\rightarrow l_{j}^{-}\gamma$ due to the \hspace*{1.8cm}charged scalars $\Phi^{--}(\Phi^{-})$.} \label{ee} \vspace{-0.5cm} \end{center} \end{figure} The $LFV$ couplings of the charged scalars $\Phi^{--}$ and $\Phi^{-}$ given in Eq.(4) can lead to the $LFV$ radiative decays $l_{i}^{-}\rightarrow l_{j}^{-}\gamma$ at the one loop level mediated by the exchange of the charged scalars $\Phi^{--}$ and $\Phi^{-}$, as shown in Fig.1. For the doubly charged scalar $\Phi^{--}$, the photon can be attached either to the internal lepton line or to the scalar line. For the charged scalar $\Phi^{-}$, the photon can only be attached to the scalar line[16].
Using Eq.(4), the expression of the branching ratio $Br(l_{i}^{-}\rightarrow l_{j}^{-}\gamma)$ can be written at leading order as: \begin{equation} Br(l_{i}^{-}\rightarrow l_{j}^{-}\gamma)=\frac{\alpha_{e}}{96\pi G_{F}^{2}}\sum_{k=\tau,\mu,e}(Y_{ik}Y_{kj}^{\ast})^{2}[\frac{3\delta_{ki(j)}+1}{M_{\Phi^{--}}^{2}} +\frac{1}{M_{\Phi^{-}}^{2}}]^{2}Br(l_{i}\rightarrow e\nu_{e}\overline{\nu}_{i}). \end{equation} where $\alpha_{e}$ is the fine structure constant and $G_{F}$ is the Fermi constant. The factor $3\delta_{ki(j)}$ means that, when the internal lepton is the same as one of the leptons $l_{i}$ and $l_{j}$, the contributions of $\Phi^{--}$ to $Br(l_{i}^{-}\rightarrow l_{j}^{-}\gamma)$ are four times those for $k \neq i$ and $j$. $M_{\Phi^{--}}$ and $M_{\Phi^{-}}$ are the masses of the scalars $\Phi^{--}$ and $\Phi^{-}$, respectively. In the $LH$ model, the scalar mass is generated through the Coleman-Weinberg mechanism and the scalars $\Phi^{--}$, $\Phi^{-}$ and $\Phi^{0}$ are degenerate at lowest order[5]. Thus, we can assume $M_{\Phi^{--}}= M_{\Phi^{-}}$ and write the branching ratio as: \begin{equation} Br(l_{i}^{-}\rightarrow l_{j}^{-}\gamma)=\frac{\alpha_{e}}{96\pi G_{F}^{2}M_{\Phi}^{4}}\sum_{k=\tau,\mu,e}(Y_{ik}Y_{kj}^{\ast})^{2} [3\delta_{ki(j)}+2]^{2}Br(l_{i}\rightarrow e\nu_{e}\overline{\nu}_{i}). \end{equation} In particular, for the decay process $\mu^{-}\rightarrow e^{-}\gamma$, we obtain the following expression for the branching ratio $Br(\mu^{-}\rightarrow e^{-}\gamma)$: \begin{equation} Br(\mu^{-}\rightarrow e^{-}\gamma)=\frac{\alpha_{e}}{96\pi G_{F}^{2}M_{\Phi}^{4}}[25(Y_{\mu e}Y_{ee}^{\ast})^{2}+25(Y_{\mu\mu}Y_{\mu e}^{\ast})^{2}+4(Y_{\mu \tau}Y_{\tau e}^{\ast})^{2}].
\end{equation} \begin{figure}[htb] \vspace{0cm} \begin{center} \epsfig{file=fig2.eps,width=270pt,height=250pt} \vspace{-1.0cm} \caption{The $FD$ coupling constant $Y$ as a function of the scalar mass $M_{\Phi}$ for different \hspace*{1.8cm}values of the $FX$ coupling constant $Y'$.} \label{ee} \end{center} \end{figure} From the above equations, we can see that the $LFV$ process $l_{i}\rightarrow l_{j}\gamma$ cannot constrain $Y_{ij}$ independently. However, if we assume $Y_{ik}=Y$ for $i=k$ ($Y$ is the flavor-diagonal $(FD)$ coupling constant) and $Y_{ik}=Y'$ for $i\neq k$ ($Y'$ is the flavor-mixing $(FX)$ coupling constant), then we can obtain constraints on the combination of the free parameters $Y$, $Y'$ and $M_{\Phi}$. Obviously, the most stringent constraint should come from the current experimental upper limit on the branching ratio $Br(\mu\rightarrow e\gamma)$. Thus, in Fig.2, we have shown the $FD$ coupling constant $Y$ as a function of the mass parameter $M_{\Phi}$ for $Y'=1\times 10^{-2}$, $1\times 10^{-3}$ and $1\times 10^{-4}$. From Fig.2, one can see that the upper limit on $Y$ depends strongly on the values of $M_{\Phi}$ and $Y'$. For $M_{\Phi}\leq 2000 GeV$ and $Y'\geq 1\times 10^{-4}$, there must be $Y\leq 64$. In the $LH$ model, the $LFV$ processes $l_{i}\rightarrow l_{j}l_{k}l_{k}$ can be generated at tree level through the exchange of the doubly charged scalar $\Phi^{\pm\pm}$, as depicted in Fig.3.
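To make the size of these constraints concrete, Eq.(7) is easy to evaluate numerically. The Python sketch below is our own illustration: the parameter point $Y=0.5$, $Y'=1\times 10^{-4}$, $M_{\Phi}=1 TeV$ is an assumption chosen for the example (with $Y_{ee}=Y_{\mu\mu}=Y$ and all $FX$ couplings set to $Y'$), not a fit, and the point is checked against the experimental limit $Br(\mu\rightarrow e\gamma)<1.2\times 10^{-11}$:

```python
import math

ALPHA_E = 1.0 / 137.036   # fine structure constant
G_F = 1.1664e-5           # Fermi constant in GeV^-2

def br_mu_e_gamma(Y, Yp, M_Phi):
    """Br(mu -> e gamma) from Eq.(7), assuming Y_ee = Y_mumu = Y (FD)
    and Y_mue = Y_mutau = Y_taue = Yp (FX); M_Phi in GeV."""
    pref = ALPHA_E / (96.0 * math.pi * G_F**2 * M_Phi**4)
    # 25(Y_mue Y_ee)^2 + 25(Y_mumu Y_mue)^2 + 4(Y_mutau Y_taue)^2
    return pref * (25.0 * (Yp * Y)**2 + 25.0 * (Y * Yp)**2 + 4.0 * (Yp * Yp)**2)
```

For this point the predicted branching ratio lies below the experimental limit, consistent with the allowed region shown in Fig.2; note the explicit $1/M_{\Phi}^{4}$ scaling of the prediction.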
\begin{figure}[htb] \vspace{-8.0cm} \begin{center} \epsfig{file=fig3.ps,width=800pt,height=1000pt} \vspace{-25.0cm} \caption{Tree level Feynman diagram for the $LFV$ processes $l_{i}^{-}\rightarrow l_{j}^{+}l_{k}^{-}l_{k}^{-}$ mediated by \hspace*{1.8cm}the doubly charged scalar $\Phi^{--}$.} \label{ee} \vspace{-0.5cm} \end{center} \end{figure} The expressions of the branching ratios for the processes $l_{i}^{-}\rightarrow l_{j}^{+}l_{k}^{-}l_{k}^{-}$ are given by[16,17] \begin{eqnarray} Br(\mu^{-}\rightarrow e^{+}e^{-}e^{-})&=&\frac{\mid Y_{\mu e}Y_{ee}^{\ast}\mid^{2}}{16G_{F}^{2}M_{\Phi}^{4}},\\ Br(\tau^{-}\rightarrow e^{+}e^{-}e^{-})&=&\frac{\mid Y_{\tau e}Y_{ee}^{\ast}\mid^{2}}{16G_{F}^{2}M_{\Phi}^{4}}Br(\tau\rightarrow e\nu_{e}\overline{\nu}_{\tau}),\\ Br(\tau^{-}\rightarrow \mu^{+}e^{-}e^{-})&=&\frac{\mid Y_{\tau\mu}Y_{ee}^{\ast}\mid^{2}}{32G_{F}^{2}M_{\Phi}^{4}}Br(\tau\rightarrow e\nu_{e}\overline{\nu}_{\tau}),\\ Br(\tau^{-}\rightarrow e^{+}\mu^{-}\mu^{-})&=&\frac{\mid Y_{\tau e}Y_{\mu\mu}^{\ast}\mid^{2}}{32G_{F}^{2}M_{\Phi}^{4}}Br(\tau\rightarrow e\nu_{e}\overline{\nu}_{\tau}),\\ Br(\tau^{-}\rightarrow \mu^{+}\mu^{-}\mu^{-})&=&\frac{\mid Y_{\tau\mu}Y_{\mu\mu}^{\ast}\mid^{2}}{16G_{F}^{2}M_{\Phi}^{4}}Br(\tau\rightarrow e\nu_{e}\overline{\nu}_{\tau}). \end{eqnarray} Certainly, up to one loop, the $LFV$ processes $l_{i}\rightarrow l_{j}l_{k}l_{k}$ get additional contributions from the processes $l_{i}\rightarrow l_{j}\gamma^{\ast}\rightarrow l_{j}l_{k}l_{k}$. Thus, the charged scalars $\Phi^{\pm\pm}$ and $\Phi^{\pm}$ contribute to the $LFV$ processes $l_{i}\rightarrow l_{j}l_{k}l_{k}$ at one loop. However, compared with the tree level contributions, these are very small and can be safely neglected. The $LFV$ processes $l_{i}\rightarrow l_{j}l_{k}l_{k}$ also cannot constrain the coupling constants $Y_{ij}$ independently, but they can constrain the combination $\mid Y_{ij}Y_{kk}^{\dag}\mid^{2}/M_{\Phi}^{4}$.
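As a numerical illustration of these tree level formulas, the Python sketch below evaluates Eq.(12) for $\tau^{-}\rightarrow \mu^{+}\mu^{-}\mu^{-}$. The coupling values are illustrative assumptions of our own, and $Br(\tau\rightarrow e\nu_{e}\overline{\nu}_{\tau})\approx 0.178$ is the measured value:

```python
G_F = 1.1664e-5      # Fermi constant in GeV^-2
BR_TAU_ENN = 0.178   # measured Br(tau -> e nu nubar)

def br_tau_3mu(Y_tau_mu, Y_mu_mu, M_Phi):
    """Tree-level Br(tau- -> mu+ mu- mu-) from Eq.(12); M_Phi in GeV."""
    return abs(Y_tau_mu * Y_mu_mu)**2 / (16.0 * G_F**2 * M_Phi**4) * BR_TAU_ENN
```

Because the rate depends only on $\mid Y_{\tau\mu}Y_{\mu\mu}^{\ast}\mid^{2}/M_{\Phi}^{4}$, this combination is the natural quantity on which the experimental limits are quoted in Table 1.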
Our numerical results are given in Table 1. In the following section, we will take into account these constraints coming from the $LFV$ processes $l_{i}\rightarrow l_{j}\gamma$ and $l_{i}\rightarrow l_{j}l_{k}l_{k}$, estimate the contributions of the doubly charged scalars $\Phi^{\pm\pm}$ to the processes $e^{\pm}e^{\pm}\rightarrow l_{i}^{\pm}l_{j}^{\pm}$ and $e^{+}e^{-}\rightarrow l_{i}^{\pm}l_{j}^{\pm}$, and discuss the possibility of detecting the signals of the doubly charged scalars $\Phi^{\pm\pm}$ at the $ILC$ experiments. \section*{IV. The doubly charged scalars $\Phi^{\pm\pm}$ and the $LFV$ \\ \hspace*{1.2cm}processes $e^{\pm}e^{\pm}\rightarrow l_{i}^{\pm}l_{j}^{\pm}$ and $e^{+}e^{-}\rightarrow l_{i}^{\pm}l_{j}^{\pm}$} \begin{figure}[htb] \vspace{-8.0cm} \begin{center} \hspace*{-2.5cm} \epsfig{file=fig4.ps,width=800pt,height=1000pt} \vspace{-24.2cm} \caption{Main Feynman diagram for the processes $e^{+}e^{-}\rightarrow l_{i}^{-}l_{j}^{-}$ predicted by $\Phi^{--}$. } \label{ee} \vspace{-0.5cm} \end{center} \end{figure} In general, the doubly charged scalars cannot couple to quarks and their couplings to leptons break the lepton number by two units, leading to a distinct signature, namely a pair of same sign leptons. The discovery of a doubly charged scalar would have important implications for our understanding of the Higgs sector and, more importantly, for what lies beyond the $SM$. This fact has motivated elaborate theoretical calculations in the framework of specific models beyond the $SM$, aimed at determining whether the signatures of this kind of new particle can be detected in future high energy experiments. For example, the production and decay of the doubly charged scalars and their possible signals at the $ILC$ have been extensively studied in Refs.[18,19].
In this section, we will consider the contributions of the doubly charged scalars $\Phi^{\pm\pm}$ predicted by the $LH$ model to the processes $e^{\pm}e^{\pm}\rightarrow l_{i}^{\pm}l_{j}^{\pm}$ and $e^{+}e^{-}\rightarrow l_{i}^{\pm}l_{j}^{\pm}$ ($l_{i}$ or $l_{j}\neq e$). The processes $e^{\pm}e^{\pm}\rightarrow l_{i}^{\pm}l_{j}^{\pm}$ can be seen as the subprocesses of the processes $e^{+}e^{-}\rightarrow l_{i}^{\pm}l_{j}^{\pm}$. For example, the doubly charged scalar $\Phi^{--}$ contributes to the process $e^{+}e^{-}\rightarrow l_{i}^{-}l_{j}^{-}$ through the subprocess $e^{-}e^{-}\rightarrow l_{i}^{-}l_{j}^{-}$, as shown in Fig.4. Using Eq.(4), the expression of the cross section for the subprocess $e^{-}e^{-}\rightarrow l_{i}^{-}l_{j}^{-}$ can be easily written as: \begin{equation} \widehat{\sigma}(\widehat{s})=\frac{Y_{ee}^{2}Y_{ij}^{2}}{8\pi}\frac{\widehat{s}}{(\widehat{s}-M_{\Phi}^{2})^{2} +M_{\Phi}^{2}\Gamma_{\Phi}^{2}}. \end{equation} where $\sqrt{\widehat{s}}$ is the center-of-mass $(C.M.)$ energy of the subprocess $e^{-}e^{-}\rightarrow l_{i}^{-}l_{j}^{-}$. $\Gamma_{\Phi}$ is the total decay width of the doubly charged scalar $\Phi^{--}$, which has been given by Ref.[5] in the case where the triplet scalars $(\Phi^{\pm\pm}, \Phi^{\pm},$ and $\Phi^{0})$ are degenerate at lowest order with a common mass $M_{\Phi}$: \begin{eqnarray} \Gamma_{\Phi}&=&\sum_{ij}\Gamma(\Phi^{--}\rightarrow l_{i}^{-}l_{j}^{-})+\Gamma(\Phi^{--}\rightarrow W_{L}^{-}W_{L}^{-})+\Gamma(\Phi^{--}\rightarrow W_{T}^{-}W_{T}^{-})\nonumber\\ &\approx&\frac{M_{\Phi}}{8\pi}[3Y^{2}+6Y'^{2}]+\frac{\nu'^{2}M_{\Phi}^{3}}{2\pi\nu^{4}} +\frac{g^{4}\nu'^{2}}{4\pi M_{\Phi}}. \end{eqnarray} where $Y=Y_{ij}$ $(i=j)$ is the $FD$ coupling constant and $Y'=Y_{ij}$ $(i\neq j)$ is the $FX$ coupling constant. In the above equation, the final-state masses have been neglected compared to the mass parameter $M_{\Phi}$.
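The relative size of the three terms in Eq.(14) can be checked numerically. In the Python sketch below (our own illustration; the values of $g$, $Y$, $Y'$ and $\nu'$ are assumptions chosen for the example), the lepton pair modes dominate the width for a tiny triplet $VEV$, which is what justifies the approximation of Eq.(15):

```python
import math

NU = 246.0    # electroweak scale nu in GeV
G_W = 0.653   # SU(2) gauge coupling g (illustrative value)

def gamma_phi(Y, Yp, nu_p, M_Phi):
    """Total width of Phi-- from Eq.(14): lepton pairs plus the
    W_L W_L and W_T W_T modes. Y is the FD coupling, Yp the FX
    coupling, nu_p the triplet VEV and M_Phi the mass, both in GeV."""
    leptons = M_Phi / (8.0 * math.pi) * (3.0 * Y**2 + 6.0 * Yp**2)
    w_long = nu_p**2 * M_Phi**3 / (2.0 * math.pi * NU**4)
    w_trans = G_W**4 * nu_p**2 / (4.0 * math.pi * M_Phi)
    return leptons + w_long + w_trans
```

For $\nu'=10^{-6} GeV$ the gauge boson terms are negligible and the full width agrees with $3M_{\Phi}Y^{2}/8\pi$ to high accuracy.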
It has been shown that, for $\nu'< 1\times 10^{-5}$, the main decay modes of $\Phi^{--}$ are $l_{i}^{-}l_{j}^{-}$. Furthermore, the FX coupling constant $Y'$ is subject to very stringent bounds from the $LFV$ process $\mu\rightarrow eee$. In this case, the decay width $\Gamma_{\Phi}$ can be approximately written as: \begin{equation} \Gamma_{\Phi}\approx\frac{3M_{\Phi}Y^{2}}{8\pi}. \end{equation} Considering the current bounds on the neutrino mass[8], there should be: \begin{equation} Y_{ij}\nu'\sim10^{-10}GeV, \end{equation} so $\nu'< 1\times 10^{-5}$ leads to $Y_{ij}> 1\times 10^{-5}$, which does not conflict with the most stringent constraint from the $LFV$ process $\mu\rightarrow eee$. Thus, in our numerical calculation, we will take Eq.(15) as the total decay width of $\Phi^{--}$. Using the equivalent particle approximation method[20], the effective cross section for the process $e^{+}e^{-}\rightarrow l_{i}^{-}l_{j}^{-}$ can be approximately written as[19]: \begin{equation} \sigma(E_{e^{+}},s)=\int_{x_{min}}^{1}dx F_{e^{+}}^{e^{-}}(x,E_{e^{+}}) \widehat{\sigma}(\widehat{s}). \end{equation} where $\widehat{s}=xs$ and $x_{min}=(m_{l_{i}}+m_{l_{j}})^{2}/s$. $F_{e^{+}}^{e^{-}}(x,E_{e^{+}})$ is the equivalent electron distribution function of the initial positron, which gives the probability that an electron with energy $E_{e^{-}}=xE_{e^{+}}$ is emitted from a positron beam with energy $E_{e^{+}}$. The relevant expression can be written as[21]: \begin{equation} F_{e^{+}}^{e^{-}}(x,E_{e^{+}})=\frac{\alpha_{e}^{2}}{8\pi^{2}x}[\ln(\frac{E_{e^{+}}}{m_{e}})^{2}-1]^{2} [\frac{4}{3}+x-x^{2}-\frac{4}{3}x^{3}+2x(1+x)\ln x].
\end{equation} \begin{figure}[htb] \begin{center} \hspace{10cm}\vspace{0.5cm} \epsfig{file=fig5.eps,width=220pt,height=200pt}\hspace{-0.5cm}\hspace{0cm} \vspace{-0.25cm} \put(-230,-8){Figure 5: The cross section $\widehat{\sigma}(\widehat{s})$ as a function \put(-190,-20){of $Y$ for three values of the mass $M_{\Phi}$.}} \epsfig{file=fig6.eps,width=220pt,height=200pt} \put(-190,-8){Figure 6: Same as Fig.5 but for $\sigma(s)$. }\hspace{-0.5cm} \hspace{10cm}\vspace{-1cm} \label{ee} \end{center} \end{figure} \vspace{5cm} \vspace{-4.5cm}In Fig.5 and Fig.6, we plot the production cross sections $\widehat{\sigma}(\widehat{s})$ and $\sigma(s)$ for the processes $e^{-}e^{-}\rightarrow \mu^{-}\mu^{-}$ and $e^{+}e^{-}\rightarrow \mu^{-}\mu^{-}$ as functions of the FD coupling constant $Y$, respectively. In these figures, we have assumed $0.15\leq Y\leq 0.9$ and taken $\sqrt{s}=500 GeV$ and $M_{\Phi}=1.0 TeV, 1.5 TeV, 2.0 TeV$. From Fig.5 and Fig.6 one can see that the values of $\widehat{\sigma}(\widehat{s})$ and $\sigma(s)$ strongly depend on the value of the $FD$ coupling constant $Y(Y_{ee})$. For $Y\geq0.7$ and $M_{\Phi}\leq1.5 TeV$, the values of the subprocess cross section $\widehat{\sigma}(\widehat{s})$ and the effective cross section $\sigma(s)$ are larger than $1.1\times 10^{2}$ fb and $4.3\times 10^{-2}$ fb, respectively. The signal of the doubly charged scalar $\Phi^{--}$ given by the process $e^{+}e^{-}\rightarrow \mu^{-}\mu^{-}$ is so distinctive and so free of $SM$ background that discovery would be signalled by even a few events. In Fig.7, we plot the discovery region in the $Y-M_{\Phi}$ plane at 95\% confidence level $(C.L.)$ for seeing 5 $\mu^{-}\mu^{-}$ events, in which we have assumed the future $ILC$ with the $C.M.$ energy $\sqrt{s}=500 GeV$ and the yearly integrated luminosity of $\mathscr{L}=500fb^{-1}$[22]. From this figure, one can see that, in a wide range of the parameter space, the signals of $\Phi^{--}$ should be detectable in the future ILC experiments.
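The convolution of Eqs.(13), (17) and (18) is straightforward to reproduce numerically. The Python sketch below is our own illustration (the midpoint integration grid, the cutoff $x_{min}=10^{-4}$ and the parameter point are assumptions) and estimates the effective cross section for $e^{+}e^{-}\rightarrow \mu^{-}\mu^{-}$ at $\sqrt{s}=500 GeV$:

```python
import math

ALPHA_E = 1.0 / 137.036  # fine structure constant
M_E = 0.511e-3           # electron mass in GeV
GEV2_TO_FB = 3.894e11    # conversion: 1 GeV^-2 = 3.894e11 fb

def f_electron(x, E):
    """Equivalent electron distribution of the positron, Eq.(18)."""
    big_log = (math.log((E / M_E)**2) - 1.0)**2
    bracket = (4.0/3.0 + x - x**2 - 4.0/3.0 * x**3
               + 2.0 * x * (1.0 + x) * math.log(x))
    return ALPHA_E**2 / (8.0 * math.pi**2 * x) * big_log * bracket

def sigma_hat(s_hat, Y_ee, Y_ij, M_Phi, Gamma_Phi):
    """Subprocess cross section e-e- -> li- lj- of Eq.(13), in GeV^-2."""
    return (Y_ee**2 * Y_ij**2 / (8.0 * math.pi) * s_hat
            / ((s_hat - M_Phi**2)**2 + M_Phi**2 * Gamma_Phi**2))

def sigma_eff(s, Y_ee, Y_ij, M_Phi, Gamma_Phi, x_min=1e-4, n=2000):
    """Effective cross section of Eq.(17) by midpoint integration over x."""
    dx = (1.0 - x_min) / n
    E_beam = math.sqrt(s) / 2.0
    total = 0.0
    for k in range(n):
        x = x_min + (k + 0.5) * dx
        total += f_electron(x, E_beam) * sigma_hat(x * s, Y_ee, Y_ij,
                                                   M_Phi, Gamma_Phi) * dx
    return total
```

For $Y_{ee}=Y_{\mu\mu}=0.7$, $M_{\Phi}=1 TeV$ and the width of Eq.(15), this sketch gives an effective cross section of order $10^{-1}$ fb, illustrating the $O(\alpha_{e}^{2})$ suppression of $\sigma(s)$ relative to the subprocess cross section $\widehat{\sigma}(\widehat{s})$.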
\begin{figure}[htb] \vspace{0cm} \begin{center} \epsfig{file=fig7.eps,width=270pt,height=250pt} \vspace{-1.0cm} \put(-360,-5){Figure 7: Discovery region in the $Y-M_{\Phi}$ plane at 95\% $C.L.$ for seeing 5 $\mu^{-}\mu^{-}$ events.} \label{ee} \end{center} \end{figure} \vspace{0.5cm}The doubly charged scalar $\Phi^{--}$ can also contribute to the $LFV$ processes $e^{+}e^{-}\rightarrow\tau^{-}\mu^{-}, \tau^{-}e^{-},$ and $\mu^{-}e^{-}$. However, the experimental upper limits on the $LFV$ processes $\tau\rightarrow \mu ee$, $\tau\rightarrow eee$, and $\mu\rightarrow eee$ give severe constraints on the combination $\mid Y_{ij}Y_{kk}^{\dag}\mid^{2}/M_{\Phi}^{4}$, which makes the production cross sections of these processes very small. For example, even if we take $Y=1$ and $M_{\Phi}\leq 2 TeV$, the production cross sections $\sigma(\tau\mu),$ $\sigma(\tau e)$, and $\sigma(\mu e)$ are smaller than $6.9\times 10^{-3}$fb, $2.1\times 10^{-3}$fb, and $1.9\times 10^{-9}$fb, respectively. Thus, it is very difficult to detect the signals of $\Phi^{--}$ via the processes $e^{+}e^{-}\rightarrow l_{i}^{-}l_{j}^{-}(i\neq j)$ in the future $ILC$ experiments. Certainly, the doubly charged scalar $\Phi^{++}$ contributes to the processes $e^{+}e^{+}\rightarrow l_{i}^{+}l_{j}^{+}$ and $e^{+}e^{-}\rightarrow l_{i}^{+}l_{j}^{+}$. Similarly to the above calculation, we can obtain the production cross sections for these processes. We find that the cross section $\sigma(l_{i}^{+}l_{j}^{+})$ is equal to the cross section $\sigma(l_{i}^{-}l_{j}^{-})$. Thus, the conclusions for the doubly charged scalar $\Phi^{--}$ also apply to the doubly charged scalar $\Phi^{++}$. \section*{V. Conclusions } \hspace{5mm}To solve the so-called hierarchy or fine tuning problem of the $SM$, the little Higgs theory was proposed as a class of models in which $EWSB$ is accomplished by a naturally light Higgs boson. The $LH$ model is one of the simplest and phenomenologically viable models.
In the $LH$ model, neutrino masses and mixings can be generated by coupling the scalar triplet $\Phi$ to the leptons in a $\bigtriangleup L=2$ interaction, whose magnitude is proportional to the triplet $VEV$ $\nu'$ multiplied by the Yukawa coupling constant $Y_{ij}$, without invoking a right handed neutrino. This scenario predicts the existence of doubly charged scalars $\Phi^{\pm\pm}$. For smaller values of $\nu'$, i.e. $\nu'\leq 1\times 10^{-5}$, the doubly charged scalars $\Phi^{\pm\pm}$ have large flavor changing couplings to leptons, which can generate significant contributions to some $LFV$ processes and give characteristic signatures in the future high energy experiments. In this paper, we first consider the $LFV$ processes $l_{i}\rightarrow l_{j}\gamma$ and $l_{i}\rightarrow l_{j}l_{k}l_{k}$ in the context of the $LH$ model. Since the $LFV$ process $l_{i}\rightarrow l_{j}\gamma$ involves all of the FX coupling constants $Y_{ij} (i\neq j)$, we cannot give simple constraints on the free parameters $Y_{ij}$ and $M_{\Phi}$. Thus, for fixed values of the FX coupling constant $Y'=Y_{ij}(i\neq j)$, we take into account the current experimental upper limit on the $LFV$ process $\mu\rightarrow e\gamma$ and plot the FD coupling constant $Y=Y_{ij}(i=j)$ as a function of the mass parameter $M_{\Phi}$. Our numerical results show that the upper limit on $Y$ depends strongly on the free parameters $M_{\Phi}$ and $Y'$. Using the present experimental upper limits on the branching ratios $Br(l_{i}\rightarrow l_{j}l_{k}l_{k})$, we obtain the constraints on the combination $\mid Y_{ij}Y^{*}_{kk}\mid^{2}/M_{\Phi}^{4}$. We find that the most stringent constraint comes from the $LFV$ process $\mu\rightarrow eee$. In all of the parameter space, there must be $\mid Y_{\mu e}Y_{ee}^{\ast}\mid^{2}/M_{\Phi}^{4}\leq 2.2\times 10^{-19} GeV^{-4}$.
The characteristic signal of the processes $e^{+}e^{-}\rightarrow l_{i}^{\pm}l_{j}^{\pm}$ is a pair of same-sign same-flavor or same-sign different-flavor leptons, which is free of $SM$ background and offers excellent potential for doubly charged scalar discovery. To see whether the doubly charged scalar $\Phi^{--}$ can be detected in the future $ILC$ experiments, we discuss the contributions of $\Phi^{--}$ to the processes $e^{-}e^{-}\rightarrow l_{i}^{-}l_{j}^{-}$ and $e^{+}e^{-}\rightarrow l_{i}^{-}l_{j}^{-}$. We find that the triplet scalar $\Phi^{--}$ can give significant contributions to the processes $e^{+}e^{-}\rightarrow l_{i}^{-}l_{i}^{-}$. In a wide range of the parameter space of the $LH$ model, the possible signals of $\Phi^{--}$ might be observed in the future $ILC$ experiments. However, the production cross sections of the $LFV$ processes $e^{+}e^{-}\rightarrow l_{i}^{-}l_{j}^{-}(i\neq j)$ mediated by $\Phi^{--}$ are very small. The contributions of the triplet scalar $\Phi^{++}$ to the processes $e^{+}e^{-}\rightarrow l_{i}^{+}l_{j}^{+}$ are equal to those of $\Phi^{--}$ to the processes $e^{+}e^{-}\rightarrow l_{i}^{-}l_{j}^{-}$; thus, our conclusions also apply to the doubly charged scalar $\Phi^{++}$. Some popular models beyond the $SM$ predict the existence of doubly charged scalars, which generally have lepton number and lepton flavor changing couplings to leptons and might produce distinct experimental signatures in current or future high energy experiments. Their observation would signal physics outside the current paradigm and further test the new physics models. The search for this kind of new particle has been one of the important goals of high energy experiments[23]. Thus, the possible signals of the doubly charged scalars $\Phi^{\pm\pm}$ predicted by the little Higgs models deserve further study in the future.
\vspace{0.5cm} \noindent{\bf Acknowledgments} This work was supported in part by the Program for New Century Excellent Talents in University (NCET-04-0290) and the National Natural Science Foundation of China under Grants No. 10475037 and No. 10675057. \newpage
\section{Introduction} \label{section1} \IEEEPARstart{M}{otion} planning refers to the problem of finding a collision-free and dynamically-feasible path between a pre-specified starting configuration and goal configuration in obstacle-cluttered environments \cite{ref1}, \cite{ref2}. Since it is often hard to achieve a collision-free and smooth path simultaneously, we usually decompose the problem into two stages: first finding a collision-free path and then smoothing it \cite{ref3}. Various roadmap algorithms \cite{ref4}-\cite{ref9} have been proposed to compute collision-free paths. These algorithms first determine the free space and then find collision-free paths characterized by a few waypoints within the free space. However, it is difficult for roadmap algorithms to satisfy dynamic constraints while exploring the configuration space. The returned paths tend to be non-smooth and may have sharp turns. To make the path more suitable for mobile robots, researchers have proposed different path smoothing algorithms \cite{ref10}-\cite{ref11}, which can be divided into three kinds: interpolation-based, shortcut, and optimization-based algorithms \cite{ref3}, \cite{ref12}, \cite{ref13}. Interpolation-based algorithms attempt to relocate waypoints to fit certain types of curves (e.g., polynomial curves \cite{ref14}, splines \cite{ref14}, or NURBS curves \cite{ref15}) that have good smoothness. Shortcut algorithms \cite{ref16} check and replace the jerky portions of a path with transition curves (e.g., Dubins curves \cite{ref17}, clothoids \cite{ref18}, or hypocycloids \cite{ref19}). However, both kinds of algorithms restrict the moving scopes of the waypoints and cannot significantly change the shape of the whole path. In contrast, optimization-based algorithms allow waypoints to move significantly away from their original locations.
For example, the Convex Elastic Smoothing (CES) algorithm \cite{ref3} uses a set of ``bubbles'' along the initial path to approximate the free space and lets the waypoints move within these ``bubbles''. Then, the CES algorithm alternately solves two convex optimization problems for path smoothing and speed optimization. Furthermore, the Convex Feasible Set (CFS) algorithm \cite{ref20}-\cite{ref23} uses the intersection of convex cones to define a larger free space and leads to a faster convergence speed. To distinguish optimization-based algorithms from interpolation-based and shortcut algorithms, we refer to their iteration process as ``path reshaping'' rather than ``path smoothing''. However, many optimization-based algorithms neglect the selection of the initial path and simply start from a random one. As a result, these algorithms often fail to return a valid path: although optimization-based algorithms give the path more deformation capability, the properties of the final path still depend heavily on the initial path. To solve this problem, we propose a new algorithm that generates collision-free and smooth paths by combining the merits of roadmap algorithms and path reshaping algorithms. Specifically, we first grid the whole configuration space and dilate obstacles to find an appropriate initial path by Dijkstra’s algorithm \cite{ref24}. Then, we apply the CFS algorithm to reshape it. Gridding the configuration space and dilating obstacles guarantee the existence of a feasible final path and also help reduce the time cost of reshaping. To further accelerate our algorithm, we propose a modified algorithm using the divide-and-conquer strategy. We adopt the Beamlet algorithm \cite{ref25} to select an initial path that is more suitable for the given curvature constraints. We also design an iterative optimization algorithm to adjust the path to rigorously meet the curvature constraints.
Numerical testing results show that our proposed algorithm can almost surely find a feasible path and requires less computation time than the CFS algorithm. To give a detailed explanation of our findings, the rest of this paper is organized as follows. Section \ref{section2} formulates the motion planning problem as an optimization problem and then introduces the CFS algorithm as a basis for further discussions. Section \ref{section3} presents our new algorithm, proves its feasibility, and analyzes its time complexity. Section \ref{section4} further discusses how to handle curvature constraints for the initial and final paths. Section \ref{section5} presents numerical testing results to validate our new algorithm. Finally, Section \ref{section6} concludes the paper. \section{Problem Formulation} \label{section2} \subsection{The Optimization Problem} \label{subsection2_1} Our goal is to find a path $\pmb{x}$ that is characterized by a series of $(n+1)$ waypoints, i.e., $\pmb{x}=[\pmb{x}_0^T,\pmb{x}_1^T,\dots,\pmb{x}_n^T]^T$, where $\pmb{x}_i\in\mathbb{R}^2$ represents the position of the robot at the $i$th time step in the configuration space. In this paper, the robotic motion planning problem is formulated as the optimization problem \eqref{equ1}-\eqref{equ4} below. \begin{align} \label{equ1}\min_{\pmb{x}}\quad &J(\pmb{x})=\pmb{x}^T(\pmb{V}^T\pmb{V}+\lambda\pmb{A}^T\pmb{A})\pmb{x}\\ \label{equ2}s.t.\quad & \pmb{x}_0=\pmb{x}_{start},~\pmb{x}_n=\pmb{x}_{end}\\ \label{equ3}&d(\pmb{x}_i,O_j)\ge d_{min},~i=0,\dots,n,~j=1,\dots,q\\ \label{equ4}&f_k(\pmb{x})\leq 0,~k=1,2,\dots,s \end{align} where $\pmb{x}_{start},\pmb{x}_{end}\in\mathbb{R}^2$ are the starting waypoint and the ending waypoint, respectively, and $O_1,O_2,\dots,O_q\in\mathbb{R}^2$ denote $q$ obstacles in the configuration space. The distance between the robot and the $j$th obstacle at the $i$th time step is denoted by $d(\pmb{x}_i,O_j)$, and $d_{min}$ is the minimum clearance between the robot and obstacles.
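To make the collision-avoidance constraint \eqref{equ3} concrete, a minimal feasibility check can be sketched as follows; we assume axis-aligned rectangular obstacles (the shape used in the experiments of Section \ref{section5}), and the function names and example geometry are ours for illustration only.

```python
import numpy as np

def point_rect_distance(p, rect):
    """Euclidean distance from point p = (x, y) to an axis-aligned
    rectangle rect = (xmin, ymin, xmax, ymax); zero if p is inside."""
    xmin, ymin, xmax, ymax = rect
    dx = max(xmin - p[0], 0.0, p[0] - xmax)
    dy = max(ymin - p[1], 0.0, p[1] - ymax)
    return float(np.hypot(dx, dy))

def is_feasible(path, obstacles, d_min):
    """Check constraint (3): every waypoint keeps clearance d_min
    from every obstacle."""
    return all(point_rect_distance(p, o) >= d_min
               for p in path for o in obstacles)

path = [(0.0, 0.0), (1.0, 1.0), (2.0, 0.0)]
obstacles = [(0.8, 0.4, 1.2, 0.6)]               # one rectangular obstacle
print(is_feasible(path, obstacles, d_min=0.1))   # True: all clearances >= 0.1
```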
The objective function \eqref{equ1} is typically designed as a quadratic function, where $\pmb{V}\in\mathbb{R}^{2n\times 2(n+1)}$ and $\pmb{A}\in\mathbb{R}^{2(n-1)\times 2(n+1)}$ are pre-defined weighting matrices \cite{ref20} that penalize the path length and the acceleration/deceleration values of the path, respectively, and $\lambda\in\mathbb{R}^+$ is a weighting coefficient. The major difficulty of the above optimization problem lies in its nonconvex constraints \eqref{equ3}, which are designed for obstacle avoidance, and its intricate constraints \eqref{equ4} on the robot's dynamics. In this paper, we will first present how to handle the problem when constraints \eqref{equ3} alone are considered and then discuss how to solve the problem when both constraints \eqref{equ3}-\eqref{equ4} are considered. \subsection{The Convex Feasible Set Algorithm} \label{subsection2_2} The CFS algorithm proposed in \cite{ref20}-\cite{ref23} assumes that every two consecutive waypoints are linked with line segments to form a piecewise linear path that connects the starting waypoint and the ending waypoint. Given the initial path, these waypoints are adjusted in an iterative way so that the final path is feasible for the above constraints \eqref{equ2}-\eqref{equ3} and meanwhile minimizes objective \eqref{equ1}. This setting is similar to the Convex Elastic Smoothing (CES) algorithm. The difference lies in the setting of the local collision-free feasible regions in which the waypoints can be adjusted. The CES algorithm assumes that a sequence of ``bubbles'' is placed along the reference path to identify the local feasible regions. In contrast, the CFS algorithm assumes the local feasible regions are the intersections of convex cones that can be found online. This new setting is less conservative and adds only a small computational overhead. The complete CFS algorithm can be summarized as Algorithm \ref{algorithm1} below. More details can be found in \cite{ref20}.
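As an illustration of the quadratic objective \eqref{equ1}, the sketch below builds $\pmb{V}$ and $\pmb{A}$ as first- and second-order finite-difference operators with the dimensions stated above; the exact weighting matrices used in \cite{ref20} may differ, so this is only an assumed, representative choice.

```python
import numpy as np

def difference_matrices(n, dim=2):
    """First-difference (velocity) and second-difference (acceleration)
    operators acting on a stacked vector of (n+1) dim-D waypoints.
    Shapes match the paper: V is 2n x 2(n+1), A is 2(n-1) x 2(n+1)."""
    D1 = np.zeros((n, n + 1))
    for i in range(n):
        D1[i, i], D1[i, i + 1] = -1.0, 1.0
    D2 = D1[:-1, :-1] @ D1          # second difference: rows [1, -2, 1]
    I = np.eye(dim)
    return np.kron(D1, I), np.kron(D2, I)

def smoothness_cost(x, lam=1.0):
    """Quadratic objective J(x) = x^T (V^T V + lam * A^T A) x."""
    n = len(x) // 2 - 1
    V, A = difference_matrices(n)
    x = np.asarray(x, dtype=float)
    return float(x @ (V.T @ V + lam * A.T @ A) @ x)

# A straight line has zero acceleration cost, only length cost.
straight = [0, 0, 1, 0, 2, 0]       # three collinear 2D waypoints
print(smoothness_cost(straight))    # 2.0: two unit steps, no bending
```

A bent path with the same endpoints, e.g. `[0, 0, 1, 1, 2, 0]`, gives a strictly larger cost, which is what drives the reshaping toward short, smooth paths.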
\begin{algorithm} \caption{The Convex Feasible Set Algorithm} \label{algorithm1} \begin{algorithmic}[1] \STATE\pmb{Initialize}\\ \STATE\quad Set an initial path $\pmb{x}^{(0)}=[\pmb{x}_0^{(0)T},\pmb{x}_1^{(0)T},\dots,\pmb{x}_n^{(0)T}]^T$.\\ \STATE\quad Set the stopping threshold $\epsilon>0$.\\ \STATE\quad Set $k=0$.\\ \STATE\pmb{Loop}\\ \STATE\quad Compute the convex feasible sets $F(\pmb{x}^{(k)})$.\\ \STATE\quad\pmb{If} $F(\pmb{x}^{(k)})=\emptyset$\\ \STATE\qquad Stop.\\ \STATE\quad\pmb{Else}\\ \STATE\qquad $\pmb{x}^{(k+1)}=\textrm{arg}\min\limits_{\pmb{x}\in F(\pmb{x}^{(k)})}J(\pmb{x})$.\\ \STATE\quad\pmb{End If}\\ \STATE\quad\pmb{If} $\lVert J(\pmb{x}^{(k+1)})-J(\pmb{x}^{(k)})\rVert<\epsilon$ or $\lVert\pmb{x}^{(k+1)}-\pmb{x}^{(k)}\rVert<\epsilon$\\ \STATE\qquad Stop.\\ \STATE\quad\pmb{End If}\\ \STATE\quad $k=k+1$\\ \STATE\pmb{End Loop}\\ \STATE\pmb{Return} path $\pmb{x}^{(k+1)}=[\pmb{x}_0^{(k+1)T},\pmb{x}_1^{(k+1)T},\dots,\pmb{x}_n^{(k+1)T}]^T$. \end{algorithmic} \end{algorithm} However, testing results indicate that the CFS algorithm may fail to provide sufficient free space to reshape the path to avoid obstacles if the initial path is inappropriately chosen, as shown in the examples of Section \ref{subsection5_1}. The existing literature \cite{ref20}-\cite{ref23} neither discussed how many waypoints are needed for a particular problem, nor provided a way to set the initial path $\pmb{x}^{(0)}=[\pmb{x}_0^{(0)T},\pmb{x}_1^{(0)T},\dots,\pmb{x}_n^{(0)T}]^T$ so as to guarantee that a feasible path can eventually be found. \section{The Roadmap-Path Reshaping Algorithm} \label{section3} \subsection{The Basic Idea} \label{subsection3_1} Inspired by the roadmap algorithms for path planning, we propose a new algorithm that can be roughly described as follows: \underline{First}, uniformly grid the whole configuration space with a resolution level $\delta$ that is sufficiently small to guarantee the existence of a feasible path.
Each intersection point of two gridlines is taken as a node of the roadmap graph. Each line segment between two neighboring nodes is taken as an arc of the roadmap graph if neither of its two nodes is within the obstacles; see Fig.\ref{fig1} for an illustration. \underline{Second}, pick the shortest path in the roadmap graph as the initial path, using Dijkstra’s algorithm. \underline{Third}, apply the CFS algorithm to reshape the initial path until we reach the final path. \begin{figure}[htp] \centering \includegraphics[width=2.5in]{chaoy1.pdf} \caption{An illustration of the gridded configuration space and the dilation of obstacles.} \label{fig1} \end{figure} In this paper, each obstacle is approximated by square grids to reduce the cost of calculating the convex feasible sets in the CFS algorithm. Specifically, each obstacle is first dilated to fill the grids (dark blue in Fig.\ref{fig1}) that overlap with it and is further dilated by at least $\lceil d_{min}/\delta\rceil\delta$ (the light blue region in Fig.\ref{fig1}) to meet the minimum safety distance $d_{min}$, where $\lceil d_{min}/\delta\rceil$ denotes the smallest integer not less than $d_{min}/\delta$. Because the size of each grid is small enough, the size of each obstacle is not overestimated by much. In the rest of this paper, we refer to the dilated obstacles (the light blue region in Fig.\ref{fig1}) as the ones that we need to detour around. It is worth noting that the distance from each initial waypoint (black dots in Fig.\ref{fig1}) to the dilated obstacles is at least $\delta$, since every node of an arc in the roadmap graph is outside the dilated obstacles. This setting guarantees the existence of nonempty convex feasible sets; see the discussion below. To prove that this new algorithm is guaranteed to find a feasible path, we establish the following theorem based on Lemma \ref{lemma1} below.
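The dilation step above can be sketched on a boolean occupancy grid; growing the obstacle cells in the 4-neighborhood by $\lceil d_{min}/\delta\rceil$ cells is our simplifying assumption of how the margin is realized.

```python
import numpy as np

def dilate_obstacles(occ, d_min, delta):
    """Dilate an occupancy grid (True = obstacle cell) by
    ceil(d_min / delta) cells in the 4-neighborhood, mirroring the
    light-blue safety margin in Fig. 1."""
    steps = int(np.ceil(d_min / delta))
    out = occ.copy()
    for _ in range(steps):
        grown = out.copy()
        grown[1:, :] |= out[:-1, :]   # spread one cell down
        grown[:-1, :] |= out[1:, :]   # spread one cell up
        grown[:, 1:] |= out[:, :-1]   # spread one cell right
        grown[:, :-1] |= out[:, 1:]   # spread one cell left
        out = grown
    return out

occ = np.zeros((7, 7), dtype=bool)
occ[3, 3] = True                      # a single obstacle cell
dilated = dilate_obstacles(occ, d_min=0.1, delta=0.1)
print(int(dilated.sum()))             # 5: the cell plus its 4 neighbors
```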
\newtheorem{theorem}{Theorem} \newtheorem{lemma}[theorem]{Lemma} \begin{lemma}[Theorem 1 in \cite{ref20}] \label{lemma1} For every initial path whose convex feasible sets have nonempty interiors, the CFS algorithm will converge to a strong or weak local optimum of the optimization problem \eqref{equ1}-\eqref{equ3}. \end{lemma} \begin{theorem} If the whole configuration space is gridded with a sufficiently small resolution level and the obstacles are dilated as mentioned above, there must exist nonempty convex feasible sets for each waypoint of the initial path $\pmb{x}^{(0)}=[\pmb{x}_0^{(0)T},\pmb{x}_1^{(0)T},\dots,\pmb{x}_n^{(0)T}]^T$ found by Dijkstra’s algorithm. \end{theorem} \begin{IEEEproof} We prove this by contradiction. Suppose the convex feasible sets $F(\pmb{x}^{(0)})=\emptyset$; then there exists at least one waypoint $\pmb{x}_i^{(0)}$ with $F(\pmb{x}_i^{(0)})=\emptyset$. Without loss of generality, we assume that $\pmb{x}_i^{(0)}$ does not lie on the boundaries of the configuration space; the situation where $\pmb{x}_i^{(0)}$ is located on the boundaries can be proved similarly. As the intersection point of two gridlines that are not boundaries, $\pmb{x}_i^{(0)}$ is the common vertex of four grids, as shown in Fig.\ref{fig2}. Since $F(\pmb{x}_i^{(0)})=\emptyset$, there must exist dilated obstacles in an arbitrarily small neighborhood $\Big\{\pmb{x}\Big\lvert\big\lVert\pmb{x}-\pmb{x}_i^{(0)}\big\rVert\le\epsilon\Big\}$ of $\pmb{x}_i^{(0)}$. On the other hand, since $\pmb{x}_i^{(0)}$ is a node of some arc in the roadmap graph, whose distance to the closest dilated obstacle is at least $\delta$, we can assert that the four neighboring grids sharing $\pmb{x}_i^{(0)}$ do not contain dilated obstacles at all. This contradicts the aforementioned result that at least one of the four grids contains dilated obstacles. Therefore, we have $F(\pmb{x}^{(0)})\neq\emptyset$.
\end{IEEEproof} \begin{figure}[htp] \centering \includegraphics[width=2.5in]{chaoy2.pdf} \caption{An illustration that the four grids sharing $\pmb{x}_i^{(0)}$ do not contain dilated obstacles (light blue regions).} \label{fig2} \end{figure} \subsection{Time Complexity Analysis and a Modified Algorithm} \label{subsection3_2} The time cost of this new algorithm consists of two parts. The first part comes from the obstacle dilation and the shortest-path search. The second part comes from the iterations of the CFS algorithm. Suppose we obtain $k\times l$ grids for the configuration space (that is, a roadmap graph with $k\times l$ nodes and $4k\times l$ arcs) and the found shortest path contains $(n+1)$ waypoints. Generally, for a rectangular configuration space, $(n+1)$ should be on the same order as $\sqrt{k\times l}$. The average time complexity for Dijkstra’s algorithm to find the shortest path in this roadmap graph is $O\big((k\times l)\textrm{log}(k\times l)+4k\times l\big)$. The dilation time cost is relatively small and is thus omitted here. With $(n+1)$ waypoints, we have $2(n+1)$ decision variables for the optimization problem \eqref{equ1}-\eqref{equ3}. If we use the interior-point method \cite{ref25} (e.g., the Mehrotra-type predictor-corrector algorithm \cite{ref26} used in MATLAB), the average time complexity of solving the optimization problem \eqref{equ1}-\eqref{equ3} is $O\big(16(n+1)^4\lvert \textrm{log}(\epsilon)\rvert\big)$ \cite{ref26}, with the precision requirement $\epsilon$. It is hard to predict how many iterations the CFS algorithm needs to converge. However, we can see that the time complexity of the whole algorithm is dominated by that of the CFS algorithm. A further look shows that the CFS algorithm may work slowly if the number of waypoints in the initial path is too large. To solve this problem, we propose a modified algorithm that uses the divide-and-conquer method.
Beginning from the first waypoint, we divide the whole initial path into segments of $m$ waypoints each. With $(n+1)$ initial waypoints, we will have $\lceil (n+1)/m\rceil$ segments in the path, and we reshape each segment using the CFS algorithm. \begin{figure*}[htp] \centering \subfloat[]{\includegraphics[width=2.5in]{chaoy3.pdf}% \label{fig3_a}} \hfil \subfloat[]{\includegraphics[width=2.5in]{chaoy4.pdf}% \label{fig3_b}} \caption{(a) The boundary point $\pmb{x}_j$ is so close to obstacles that the reshaped segment (red dots) is inside the obstacle; (b) the boundary point $\pmb{x}_j$ is replaced by two new boundary points $\pmb{x}_{j-\lceil m_1/2\rceil}$ and $\pmb{x}_{j+\lceil m_2/2\rceil}$.} \label{fig3} \end{figure*} This simple trick may fail, since there might exist sharp turns around the boundary points of these segments. To avoid the abrupt speed changes caused by these sharp turns, we append an extra constraint when using the CFS algorithm to reshape each segment: the velocity at each boundary point must equal the velocity at its predecessor waypoint, so that the velocity is continuous across two consecutive segments. However, this additional constraint sometimes makes the CFS algorithm unable to obtain a feasible segment. Fig.\ref{fig3_a} provides an intuitive illustration. In Fig.\ref{fig3_a}, $\pmb{x}_j$ is a boundary point, and $\pmb{x}_{j-1}$ and $\pmb{x}_{j+1}$ are its predecessor and successor waypoints, respectively. Since the speed of $\pmb{x}_{j-1}$ is $[\delta,0]^T$ in Fig.\ref{fig3_a}, the speed of $\pmb{x}_j$ will also be set to $[\delta,0]^T$ according to the extra constraint. Moreover, because the distance from $\pmb{x}_j$ to the obstacle along the speed direction of $\pmb{x}_j$ is exactly $\delta$, the speed-continuity requirement will drive $\pmb{x}_{j+1}$ to lie on the boundary of the obstacle, and the following waypoints will be inside the obstacle.
To solve this problem, we can select two waypoints $\pmb{x}_{j-\lceil m_1/2\rceil}$ and $\pmb{x}_{j+\lceil m_2/2\rceil}$ as new boundary points to replace $\pmb{x}_j$, where $m_1$ and $m_2$ are the numbers of waypoints in the two segments that connect at $\pmb{x}_j$, respectively, as shown in Fig.\ref{fig3_b}. If the new boundary points still do not work, we can repeat such boundary point alternation until the CFS algorithm obtains a feasible segment. With the aid of the boundary point alternation, the CFS algorithm has a larger probability (though not always 100\%) of obtaining a feasible segment. \begin{algorithm} \caption{Roadmap Path Reshaping-m (RPR-m) Algorithm} \label{algorithm2} \begin{algorithmic}[1] \STATE\pmb{Initialize}\\ \STATE\quad Set the number of grids: $k\times l$.\\ \STATE\quad Set the number of waypoints in each segment: $m$.\\ \STATE\quad Grid the whole configuration space. \STATE\quad Dilate obstacles by at least $\big(\lceil d_{min}/\delta\rceil+1\big)\delta$.\\ {\color{green} \textit{//divide the initial path into several segments}} \STATE Apply Dijkstra’s algorithm to find a collision-free initial path.\\ \STATE Divide the initial path into $d=\lceil(n+1)/m\rceil$ segments and record all boundary points.\\ {\color{green} \textit{//apply the CFS algorithm to modify every segment}} \STATE flag=0.\\ \STATE\pmb{Loop}\\ \STATE\quad Apply the CFS algorithm to modify the $i$th segment.\\ {\color{green} \textit{//if CFS fails, then alter boundary points}} \STATE\quad\pmb{If} CFS fails to modify it\\ \STATE\qquad\pmb{If} the number of waypoints of this segment is over 3\\ \STATE\qquad\quad Alter the $(i-1)$th boundary point.\\ \STATE\qquad\quad $d=d+1,i=i-1$.\\ \STATE\qquad\quad continue.\\ \STATE\qquad\pmb{Else} \STATE\qquad\quad flag=1.\\ \STATE\qquad\quad break.\\ \STATE\qquad\pmb{End If} \STATE\quad\pmb{End If}\\ \STATE\quad $i=i+1$.\\ \STATE\pmb{Until} $i\ge d$\\ {\color{green} \textit{//return the final path}} \STATE\pmb{If} flag=0\\ \STATE\quad Connect all
segments to generate the complete path.\\ \STATE\quad\pmb{Return} the complete path.\\ \STATE\pmb{Else}\\ \STATE\quad\pmb{Return} the initial path.\\ \STATE\pmb{End If} \end{algorithmic} \end{algorithm} The modified algorithm can be summarized as Algorithm \ref{algorithm2} below. Testing results show that the modified algorithm usually works in practice. In the following, we use the abbreviation RPR-m to refer to this algorithm with at most $m$ waypoints in a segment. We generally neglect the time cost of the boundary point alternation due to its low probability of occurrence. Thus, the modified algorithm usually solves $(n+1)/m$ optimization subproblems \eqref{equ1}-\eqref{equ3} with $2m$ decision variables for each subproblem (assuming that $(n+1)/m$ is an integer). The time complexity of solving each subproblem is $O\big(16m^4\lvert \textrm{log}(\epsilon)\rvert\big)$. Therefore, the total time complexity is $O\big(16m^3(n+1)\lvert \textrm{log}(\epsilon)\rvert\big)$, which is much less than the complexity of directly solving the original optimization problem with $2(n+1)$ decision variables. \section{Further Discussions On Curvature Constraints} \label{section4} Different from the CFS algorithm, our new algorithm can be further extended to explicitly consider the curvature constraints between two consecutive waypoints \begin{equation} \theta(\pmb{x}_i-\pmb{x}_{i-1},\pmb{x}_{i-1}-\pmb{x}_{i-2})\le\theta_{max},~i=2,3,\dots,n-2, \label{equ5} \end{equation} where $\theta(\pmb{x}_i-\pmb{x}_{i-1},\pmb{x}_{i-1}-\pmb{x}_{i-2})$ represents the intersection angle between the vectors $(\pmb{x}_i-\pmb{x}_{i-1})$ and $(\pmb{x}_{i-1}-\pmb{x}_{i-2})$, and $\theta_{max}$ is the maximum permissible intersection angle.
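The curvature constraint \eqref{equ5} can be checked numerically as sketched below; the helper names and the example path are ours.

```python
import numpy as np

def turning_angle(p0, p1, p2):
    """Intersection angle between consecutive segments (p1 - p0)
    and (p2 - p1), as used in the curvature constraint (5)."""
    u = np.asarray(p1, float) - np.asarray(p0, float)
    v = np.asarray(p2, float) - np.asarray(p1, float)
    cross = u[0] * v[1] - u[1] * v[0]
    return float(np.arctan2(abs(cross), u @ v))

def satisfies_curvature(path, theta_max):
    """Check the bound along every consecutive triple of waypoints."""
    return all(turning_angle(path[i - 2], path[i - 1], path[i]) <= theta_max
               for i in range(2, len(path)))

path = [(0, 0), (1, 0), (2, 1), (3, 1)]      # two 45-degree turns
print(satisfies_curvature(path, np.pi / 3))  # True: 45 deg <= 60 deg
print(satisfies_curvature(path, np.pi / 6))  # False: 45 deg > 30 deg
```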
\subsection{Considering Curvature Constraints at the Roadmap Step} \label{subsection4_1} The first extension is to handle the curvature constraints in the roadmap step by applying the Beamlet-based Dijkstra’s algorithm \cite{ref27}, named the Beamlet algorithm, instead of Dijkstra’s algorithm to find the initial path. Beamlets refer to the line segments (e.g., the line segment AB in Fig.\ref{fig4}) connecting certain points located at the boundaries of dyadic squares. A dyadic square \cite{ref28} (d-square, e.g., the red squares in Fig.\ref{fig4}) is a set of points in a square region with a pre-specified size. A d-square may be further partitioned into four sub-d-squares of the same size to guarantee that beamlets constructed on these d-squares do not pass through obstacles. In this paper, we select the gridlines of the roadmap constructed in Algorithm \ref{algorithm2} as the boundaries of dyadic squares. Furthermore, the nodes of the roadmap that are located at the boundaries of dyadic squares are taken as endpoints of beamlets. \begin{figure}[htp] \centering \includegraphics[width=2.5in]{chaoy5.pdf} \caption{An illustration of beamlets and dyadic squares (red squares), where grey polygons represent obstacles.} \label{fig4} \end{figure} We can further construct a beamlet graph by linking a series of beamlets with respect to the given curvature constraints. The intersection angle between beamlet AB and beamlet BC is defined as the intersection angle between the vectors $\overrightarrow{AB}$ and $\overrightarrow{BC}$ ($\theta(\overrightarrow{AB},\overrightarrow{BC})$), i.e., $\theta$ in Fig.\ref{fig4}. In a beamlet graph \cite{ref28}, an arc $e(AB,BC)$ exists from beamlet AB to beamlet BC if and only if they are connected and $\theta(\overrightarrow{AB},\overrightarrow{BC})\le\theta_{max}$. Notably, the length of $e(AB,BC)$ is set as the length of beamlet BC.
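The arc rule above can be sketched on a toy beamlet set; we assume beamlets are stored as directed endpoint pairs, and the data below is illustrative only.

```python
import numpy as np

def beamlet_arcs(beamlets, theta_max):
    """Build arcs of a beamlet graph: an arc e(AB, BC) exists iff the
    beamlets share endpoint B and their turning angle is within
    theta_max; the arc weight is the length of BC, as described above.
    Beamlets are given as directed endpoint pairs ((A, B), (B, C), ...)."""
    arcs = []
    for i, (a, b1) in enumerate(beamlets):
        for j, (b2, c) in enumerate(beamlets):
            if i == j or b1 != b2:
                continue                      # must connect head-to-tail at B
            u = np.subtract(b1, a)
            v = np.subtract(c, b2)
            angle = np.arctan2(abs(u[0] * v[1] - u[1] * v[0]), u @ v)
            if angle <= theta_max:
                arcs.append((i, j, float(np.hypot(*v))))  # weight = |BC|
    return arcs

beams = [((0, 0), (2, 0)), ((2, 0), (3, 1)), ((2, 0), (0, 2))]
print(beamlet_arcs(beams, theta_max=np.pi / 3))
```

Here only the 45-degree continuation survives the filter; the 135-degree turn toward (0, 2) is pruned, which is exactly how the beamlet graph encodes the curvature bound before Dijkstra's algorithm runs on it.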
The Beamlet algorithm first constructs a beamlet graph based on the original roadmap in Algorithm \ref{algorithm2} and then uses Dijkstra’s algorithm on the beamlet graph to find an initial path. More details can be found in \cite{ref27}-\cite{ref28}. Notably, if some beamlet is too long, the distance between the corresponding waypoints (the endpoints of the beamlet) may also be too large. To balance the distances between consecutive waypoints, we suggest inserting extra waypoints along the initial path found by the Beamlet algorithm. \subsection{Considering Curvature Constraints at the Reshaping Step} \label{subsection4_2} The second extension is to use an iterative optimization algorithm to make a given path rigorously meet the curvature constraints. Specifically, in this paper, we only consider the situation where $\theta_{max}<\frac{\pi}{2}$. Denoting $\pmb{x}_i=[a_i,b_i]^T$, we iteratively solve the following optimization problem \eqref{equ6}-\eqref{equ9} to sequentially check and modify every waypoint $\pmb{x}_i$ while fixing the other waypoints $\pmb{x}_j, j\neq i$, until the whole path meets these constraints or the number of modifications exceeds a pre-specified value; see Fig.\ref{fig5} for an illustration. \begin{figure}[htp] \centering \includegraphics[width=2.5in]{chaoy6.pdf} \caption{An illustration of an iteration, where the light red region is the convex feasible set and the dark red region is the feasible region of the optimization problem \eqref{equ6}-\eqref{equ9}. The grey polygons represent obstacles and the blue contour represents the cost function \eqref{equ6}.} \label{fig5} \end{figure} Since only $\pmb{x}_i$ is modified and the other waypoints are unchanged, the constraint \eqref{equ9} becomes convex. Consequently, the resulting optimization problem \eqref{equ6}-\eqref{equ9} is convex.
\begin{align} \label{equ6}\min_{\pmb{x}_i}\quad &J(\pmb{x})=\pmb{x}^T(\pmb{V}^T\pmb{V}+\lambda\pmb{A}^T\pmb{A})\pmb{x}\\ \label{equ7}s.t.\quad & \pmb{x}_0=\pmb{x}_{start},~\pmb{x}_n=\pmb{x}_{end}\\ \label{equ8}&\pmb{x}\in F(\pmb{x})\\ \label{equ9}\begin{split}&c_1\big[(a_{i-2}-a_{i-1})a_i+(b_{i-2}-b_{i-1})b_i+a_{i-1}^2+\\ &b_{i-1}^2-a_{i-1}a_{i-2}-b_{i-1}b_{i-2}\big]+\big\lvert(a_{i-1}-a_{i-2})b_i\\ &-(b_{i-1}-b_{i-2})a_i+b_{i-1}a_{i-2}-b_{i-2}a_{i-1}\big\rvert\le 0\end{split} \end{align} where $F(\pmb{x})$ is the convex feasible set and $c_1=\textrm{tan}~\theta_{max}$. The constraint \eqref{equ9} is equivalent to $\textrm{tan}~\theta_i\le c_1,~i=2,3,\dots,n-2$. Our extended algorithm can be summarized as Algorithm \ref{algorithm3} below. In the following, we use the abbreviation ERPR-m to refer to this extended algorithm with at most $m$ waypoints in a segment. \begin{algorithm} \caption{Extended Roadmap Path Reshaping-m (ERPR-m) Algorithm} \label{algorithm3} \begin{algorithmic}[1] \STATE\pmb{Initialize}\\ \STATE\quad Set the number of grids: $k\times l$.\\ \STATE\quad Set the number of waypoints in each segment: $m$.\\ \STATE\quad Grid the whole configuration space.
\STATE\quad Dilate obstacles by at least $\big(\lceil d_{min}/\delta\rceil+1\big)\delta$.\\ \STATE\quad Set the maximum number of iterations: maxiter.\\ {\color{green} \textit{//construct the initial path that meets curvature constraints}} \STATE Apply the Beamlet algorithm to find an initial path.\\ \STATE Insert extra waypoints if necessary.\\ \STATE Divide the initial path into $d=\lceil(n+1)/m\rceil$ segments and record all boundary points.\\ \STATE Apply the CFS algorithm to reshape every segment.\\ {\color{green} \textit{//iteratively adjust the curvature of the path}} \STATE $i=0$.\\ \STATE\pmb{Loop}\\ \STATE\quad Solve the optimization problem \eqref{equ6}-\eqref{equ9} to modify every waypoint sequentially.\\ \STATE\quad $i=i+1$.\\ \STATE\pmb{Until} the whole path meets the curvature constraints \pmb{or} $i>$maxiter.\\ {\color{green} \textit{//return the final path}} \STATE\pmb{If} the whole path is successfully modified\\ \STATE\quad\pmb{Return} the complete path.\\ \STATE\pmb{Else}\\ \STATE\quad\pmb{Return} the initial path.\\ \STATE\pmb{End If} \end{algorithmic} \end{algorithm} \subsection{Time Complexity Analysis} \label{subsection4_3} Algorithm \ref{algorithm3} has two additional time costs compared with Algorithm \ref{algorithm2}. First, Algorithm \ref{algorithm3} applies Dijkstra’s algorithm on the beamlet graph to find the initial path. According to \cite{ref27}, with $k\times l$ nodes in the roadmap, there will be $O\big(\frac{1}{2}(k\times l)\textrm{log}_2(k\times l)\big)$ nodes and $O\Big(\frac{1}{4}(k\times l)^2\big(\textrm{log}_2(k\times l)\big)^3\Big)$ arcs in the beamlet graph built on the roadmap.
Therefore, the time complexity for Dijkstra’s algorithm to find initial paths on the beamlet graph is $O\Big(\frac{1}{4}(k\times l)^2\big(\textrm{log}_2(k\times l)\big)^3\Big)$, which can be rewritten as $O\Big(2(n+1)^4\big(\textrm{log}_2(n+1)\big)^3\Big)$, since $k\times l$ is on the same order as $(n+1)^2$, the square of the number of waypoints, for rectangular configuration spaces. Here we ignore the complexity of constructing the beamlet graph, since it is much smaller. Second, Algorithm \ref{algorithm3} needs to check and modify each waypoint, which corresponds to solving the additional optimization problem \eqref{equ6}-\eqref{equ9}. We still assume that the Mehrotra-type predictor-corrector algorithm is used to solve it, whose average time complexity is $O\big(n^4\lvert \textrm{log}(\epsilon)\rvert\big)$ if there are $n$ decision variables and the precision requirement is $\epsilon$. Since the optimization problem \eqref{equ6}-\eqref{equ9} has only 2 decision variables, the complexity of modifying a waypoint is $O\big(16\lvert \textrm{log}(\epsilon)\rvert\big)$. If the path has $(n+1)$ waypoints in total, then the complexity of modifying the whole path is $O\big(16(n+1)\lvert \textrm{log}(\epsilon)\rvert\big)$. \section{Numerical Testing Results} \label{section5} \subsection{Comparison between the CFS Algorithm and Algorithm \ref{algorithm2}} \label{subsection5_1} In this subsection, the performance of Algorithm \ref{algorithm2} (the RPR-m algorithm) is compared with the original CFS algorithm on random configuration spaces. We restrict the whole configuration space to a rectangular region whose length and width are 9 and 6, respectively, and set up a coordinate system as shown in Fig.6. In addition, we set $d_{min}=0.1$ and discretize the configuration space into $60\times 90$ grids with the resolution level $\delta=0.1$.
We randomly place non-overlapping rectangular obstacles in the configuration space \cite{ref29}, whose positions follow a 2D uniform distribution over the configuration space and whose aspect ratios follow a 1D uniform distribution on the interval [1/2.5,2.5]. The area of the $i$th rectangle is set as $A_0/i^{1.1}$ \cite{ref29}, where $A_0$ is pre-specified to guarantee that these rectangles would fill the whole configuration space if their number were infinite. For the motion planning problem \eqref{equ1}-\eqref{equ3}, we set $\lambda=1$, $\pmb{x}_{start}=[0,0]^T$, and $\pmb{x}_{end}=[9,0]^T$. The other parameters of the RPR-m algorithm and the original CFS algorithm are set as follows: \begin{itemize} \item The RPR-m algorithm. We set $m=60$, considering the capability of our computer and the fact that a small $m$ might make the CFS algorithm inefficient because there would be too many segments to reshape. We refer to this setting as RPR-60. In addition, we use the abbreviation RPR-ALL to refer to the variant of our algorithm that uses the CFS algorithm to directly reshape the whole path without any segmentation. \item The original CFS algorithm. We set its initial path as the line segment between the starting point and the ending point, and set $(n+1)$, the number of waypoints, equal to the number of waypoints of the initial path in the RPR algorithm. \end{itemize} We compare the performance of the RPR algorithm (including both RPR-ALL and RPR-60) and the original CFS algorithm using the following two criteria. \begin{enumerate} \item The probability of finding a feasible path. \item The computation time of the algorithm. \end{enumerate} To avoid bias caused by random chance, we randomly generated 1000 configuration spaces, divided into 5 groups of 200 configuration spaces each. The configuration spaces in the five groups contain 5, 10, 15, 20, and 30 obstacles, respectively. All numerical experiments were performed on a computer with an Intel(R) Core (TM) i7-7700U CPU and 8GB RAM.
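The obstacle-size sampling described above can be sketched as follows; the normalization of $A_0$ via $\sum_{i\ge 1} i^{-1.1}=\zeta(1.1)\approx 10.5844$ is our reading of the filling condition, and placement with non-overlap rejection is omitted.

```python
import numpy as np

def obstacle_dimensions(num, width=9.0, height=6.0, seed=0):
    """Sample obstacle sizes per the setup above: the i-th rectangle has
    area A0 / i**1.1 and an aspect ratio uniform on [1/2.5, 2.5].
    A0 is fixed so the infinite series of areas sums to the total area:
    A0 * zeta(1.1) = width * height, with zeta(1.1) ~= 10.5844 (assumed
    normalization). Placement and overlap checks are omitted here."""
    rng = np.random.default_rng(seed)
    a0 = width * height / 10.5844
    dims = []
    for i in range(1, num + 1):
        area = a0 / i**1.1
        ratio = rng.uniform(1 / 2.5, 2.5)   # rectangle width / height
        w = np.sqrt(area * ratio)
        dims.append((w, area / w))          # (width, height) of rectangle
    return dims

dims = obstacle_dimensions(5)
areas = [w * h for w, h in dims]
print([round(a, 3) for a in areas])         # areas decay as 1 / i^1.1
```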
The RPR algorithm and the original CFS algorithm were implemented in MATLAB2016b. We implemented the CFS algorithm according to Liu \cite{ref30}. The main differences between our code and Liu’s code are that we add additional constraints to restrict paths to the configuration space, and that the related optimization problems were solved by CVX \cite{ref31}. Table~\ref{table1} shows the detailed results of these algorithms. The third column of Table~\ref{table1} shows the number and the percentage of problems that were solved successfully by each algorithm. Clearly, both RPR-ALL and RPR-60 can find a feasible path with 100\% probability, while the original CFS algorithm fails to do so, especially when the number of obstacles is large. The fourth column of Table~\ref{table1} shows the average computation time of each algorithm. In particular, the numbers in parentheses represent the average computation time of finding initial paths. Since the initial path of the original CFS algorithm is the line segment between the starting point and the ending point, we assume that its computation time is zero. For every group, the computation time for the RPR-ALL algorithm and the RPR-60 algorithm to find initial paths is nearly zero. Moreover, the RPR-ALL algorithm converges slightly faster than the original CFS algorithm but much slower than the RPR-60 algorithm. This is mainly because properly partitioning the whole path leads to a noticeable reduction in computation time. The fifth column of Table~\ref{table1} shows the average number of waypoints in each group. The number of waypoints tends to increase as the number of obstacles increases, which is consistent with the intuition that more complicated environments usually lead to longer trajectories.
To provide an intuitive illustration, Fig.\ref{fig6_a} and Fig.\ref{fig6_b} show an example of the final paths computed by the original CFS algorithm and the RPR-60 algorithm, respectively. We can see that the RPR-60 algorithm successfully found a feasible and smooth final path, while the original CFS algorithm failed. In summary, compared to the original CFS algorithm, the RPR algorithm requires less computation time and can almost surely find a feasible and smooth path. \begin{table}[htbp] \caption{Performance of RPR-ALL, RPR-60, and CFS in 1000 Random Configuration Spaces} \label{table1} \setlength{\tabcolsep}{3pt} \begin{tabular}{|p{28pt}|p{35pt}|p{65pt}|p{57pt}|p{30pt}|} \hline Group & Method&Solved configuration spaces (count, percentage)&Average total time (average time of finding initial paths)&Average number of waypoints\\ \hline \multirow{3}{*}{Group1}&CFS&150/200(75\%)&2.17s(0s)&108\\ \cline{2-5} &RPR-ALL&\pmb{200/200(100\%)}&1.75s(0.001s)&108\\ \cline{2-5} &RPR-60&\pmb{200/200(100\%)}&1.46s(0.001s)&108\\ \hline \multirow{3}{*}{Group2}&CFS&121/200(60.5\%)&4.75s(0s)&115\\ \cline{2-5} &RPR-ALL&\pmb{200/200(100\%)}&4.27s(0.001s)&115\\ \cline{2-5} &RPR-60&\pmb{200/200(100\%)}&1.98s(0.001s)&115\\ \hline \multirow{3}{*}{Group3}&CFS&89/200(44.5\%)&13.71s(0s)&125\\ \cline{2-5} &RPR-ALL&\pmb{200/200(100\%)}&10.37s(0.001s)&125\\ \cline{2-5} &RPR-60&\pmb{200/200(100\%)}&2.35s(0.001s)&125\\\hline \multirow{3}{*}{Group4}&CFS&79/200(39.5\%)&24.64s(0s)&129\\ \cline{2-5} &RPR-ALL&\pmb{200/200(100\%)}&18.29s(0.001s)&129\\ \cline{2-5} &RPR-60&\pmb{200/200(100\%)}&3.11s(0.001s)&129\\\hline \multirow{3}{*}{Group5}&CFS&33/200(16.5\%)&64.87s(0s)&141\\ \cline{2-5} &RPR-ALL&\pmb{200/200(100\%)}&56.8s(0.001s)&141\\ \cline{2-5} &RPR-60&\pmb{200/200(100\%)}&5.93s(0.001s)&141\\ \hline \end{tabular} \end{table} \begin{figure*}[!t] \centering \subfloat[]{\includegraphics[width=2.5in]{chaoy7.pdf}% \label{fig6_a}} \hfil \subfloat[]{\includegraphics[width=2.5in]{chaoy8.pdf}%
\label{fig6_b}} \caption{(a) The randomly picked initial path (green dashed curve) and the final path (red solid curve) computed by the CFS algorithm; (b) the initial path (green dashed curve) and the final path (blue solid curve) computed by the RPR-60 algorithm.} \label{fig6} \end{figure*} \subsection{Comparison between the Algorithm \ref{algorithm2} and the Algorithm \ref{algorithm3}} \label{subsection5_2} In this subsection, we compare Algorithm \ref{algorithm3} (ERPR-m algorithm) with Algorithm \ref{algorithm2} (RPR-m algorithm) in terms of the probability of finding feasible paths that meet the curvature constraints \eqref{equ5}. We still adopt rectangular configuration spaces of the same size as in the last subsection and set up the same coordinate system, as shown in Fig.\ref{fig7}. The configuration space is also discretized into $60\times 90$ grids with the resolution level $\delta=0.1$ and $d_{min}=0.1$. Here we randomly generate non-overlapping circular obstacles, whose positions follow the 2D uniform distribution over the configuration space. The area of the $i$th circle is also set as $A_0/i^{1.1}$. For the RPR-m algorithm, we set $m=60$; for the ERPR-m algorithm, we set $\theta_{max}=30^\circ$ and $m=60$. We set the “maxiter” parameter of the ERPR-m algorithm equal to the number of waypoints, so that every waypoint in the path can be modified. The other parameters are the same as in the last subsection. We randomly generated 800 configuration spaces, divided into 4 groups of 200 configuration spaces each; the configuration spaces in the four groups contain 5, 10, 15, and 20 obstacles, respectively. Table~\ref{table2} shows the detailed results of the two algorithms. The third column of Table~\ref{table2} shows the number and the percentage of problems that were solved successfully by the two algorithms. We can see that the ERPR-60 algorithm always has a larger probability of finding feasible paths that meet the curvature constraints, compared to the RPR-60 algorithm.
Of course, the probability for both the RPR-60 algorithm and the ERPR-60 algorithm to find feasible paths decreases as the number of obstacles increases. The fourth column of Table~\ref{table2} shows the average computation time of each algorithm, where the numbers in parentheses represent the average computation time of finding initial paths. We can see that the average time for the ERPR-60 algorithm to reshape initial paths is similar to the corresponding time needed by the RPR-60 algorithm, while the ERPR-60 algorithm requires more time to find initial paths. This is consistent with our previous time complexity analysis, since the complexity of the Beamlet algorithm is $O\Big(2(n+1)^4\big(\textrm{log}_2(n+1)\big)^3\Big)$, while the complexity of Dijkstra’s algorithm used in the RPR-60 algorithm is $O\big((k\times l)\textrm{log}(k\times l)+4k\times l\big)$ (i.e., $O\big(2(n+1)^2\textrm{log}(n+1)+4(n+1)^2\big)$). The increase in computation time is acceptable, especially for cluttered configuration spaces such as Group4, if we would like to increase the probability of finding feasible paths. Besides, we can see that the computation time for the Beamlet algorithm to find initial paths decreases as the number of obstacles increases. As the number of obstacles increases, the total area of d-squares that do not contain obstacles decreases, which leads to a reduction in the number of nodes and arcs in the beamlet graph. Therefore, the time of finding initial paths on the beamlet graph decreases. The fifth column of Table~\ref{table2} shows the average number of waypoints in each group. Obviously, the number of waypoints increases as the number of obstacles increases.
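Whether a candidate path meets the curvature constraints can be checked waypoint by waypoint. The following is an illustrative Python sketch, assuming (as in our setting with $\theta_{max}=30^\circ$) that the constraint is expressed as a bound on the turning angle between consecutive path segments:

```python
import math

def turning_angles(path):
    """Turning angle (radians) at each interior waypoint of a 2-D path."""
    angles = []
    for (x0, y0), (x1, y1), (x2, y2) in zip(path, path[1:], path[2:]):
        h1 = math.atan2(y1 - y0, x1 - x0)   # heading of the incoming segment
        h2 = math.atan2(y2 - y1, x2 - x1)   # heading of the outgoing segment
        d = abs(h2 - h1)
        angles.append(min(d, 2 * math.pi - d))  # wrap to [0, pi]
    return angles

def satisfies_curvature(path, theta_max_deg=30.0):
    """True when every turn is at most theta_max_deg degrees."""
    limit = math.radians(theta_max_deg)
    return all(a <= limit + 1e-9 for a in turning_angles(path))
```

A waypoint flagged by such a check is exactly the kind of sharp turn that the additional optimization problem \eqref{equ6}-\eqref{equ9} is invoked to smooth out.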
\begin{table}[htp] \caption{Performance of RPR-60 and ERPR-60 in 800 Random Configuration Spaces} \label{table2} \setlength{\tabcolsep}{4pt} \begin{tabular}{|p{28pt}|p{35pt}|p{65pt}|p{57pt}|p{30pt}|} \hline Group & Method&Solved configuration spaces (count, percentage)&Average total time (average time of finding initial paths)&Average number of waypoints\\ \hline \multirow{2}{*}{Group1}&RPR-60&193/200(96.5\%)&1.34s(0.001s)&115\\ \cline{2-5} &ERPR-60&\pmb{199/200(99.5\%)}&28.17s(27.09s)&88\\ \hline \multirow{2}{*}{Group2}&RPR-60&177/200(88.5\%)&1.79s(0.001s)&129\\ \cline{2-5} &ERPR-60&\pmb{191/200(95.5\%)}&13.99s(12.59s)&93\\ \hline \multirow{2}{*}{Group3}&RPR-60&165/200(82.5\%)&2.06s(0.001s)&137\\ \cline{2-5} &ERPR-60&\pmb{181/200(90.5\%)}&8.49s(6.50s)&98\\ \hline \multirow{2}{*}{Group4}&RPR-60&139/200(69.5\%)&2.70s(0.001s)&142\\ \cline{2-5} &ERPR-60&\pmb{154/200(77\%)}&6.77s(3.70s)&102\\ \hline \end{tabular} \end{table} To provide an intuitive illustration, Fig.\ref{fig7_a} and Fig.\ref{fig7_b} show an example of the final paths found by the RPR-60 algorithm and the ERPR-60 algorithm, respectively. We can see that the ERPR-60 algorithm successfully found a feasible and smooth path that met the curvature constraints, while the final path found by the RPR-60 algorithm, shown in Fig.\ref{fig7_a}, is infeasible because it contains a sharp turn with a large angle.
\begin{figure*}[!t] \centering \subfloat[]{\includegraphics[width=2.5in]{chaoy9.pdf}% \label{fig7_a}} \hfil \subfloat[]{\includegraphics[width=2.5in]{chaoy10.pdf}% \label{fig7_b}} \caption{(a) The initial path (green dashed curve) and the final path (red solid curve) computed by the RPR-60 algorithm; (b) the initial path (green dashed curve) and the final path (blue solid curve) computed by the ERPR-60 algorithm.} \label{fig7} \end{figure*} \section{Conclusion} \label{section6} Compared to the recently proposed CFS algorithm, our algorithm has a larger probability of finding feasible paths that meet specific requirements, such as the curvature constraints, and achieves a great reduction in computation time. The key element that contributes to the success of our algorithm is the combination of roadmap algorithms and optimization-based path reshaping algorithms. Roadmap algorithms serve to find paths for discrete problems via rough but fast exploration, while optimization-based path reshaping algorithms compute paths for continuous problems via refined, local adjustment. Therefore, combining the two kinds of algorithms effectively balances the breadth and the depth of the search. With the curvature constraint considered, the computation time increases. This is reasonable and acceptable; as the old saying goes, “A beard well lathered is half shaved”. In this paper, we simplify the robot to a single point, which is reasonable if the robot is much smaller than the obstacles. However, in many applications, such as autonomous driving, the size of the robot is close to, or even larger than, the size of the obstacles. In these situations, the shape of the robot should be considered, which will be handled in our future research.
\section{Introduction} \label{sec-intro} In sentiment analysis (SA) tasks~\cite{chen2019transfer,johnson2015effective,zhang2018deep}, the overall sentiment always depends to a large extent on a few key elements of the input. For example, given the short movie review ``\textit{deflated ending aside, there's much to recommend the film}'' from the {\sc SST-5}\xspace dataset (detailed in a later section), the three words \textit{deflated}, \textit{much}, and \textit{recommend} have the largest impact on the overall sentiment polarity of the review. For this type of task, a lesson can be drawn from the attention mechanism~\cite{bahdanau14neural,vaswani2017attention,velickovic18gat}, where a weighted sum over all input items is computed. Despite its effectiveness, this strategy remains simple and cannot fully reveal or exploit the unique input structure, \emph{i.e.,}\xspace the existence of a few key elements. To be specific, because the input structure is \textbf{implicitly} modeled, it is unclear whether the structure could enhance model performance in terms of prediction effectiveness and, better yet, other promising properties such as evaluation and robustness. Moreover, the importance weights of attention models are \textbf{dense}, as a result of which the key elements are not directly revealed. \begin{figure*}[tbh!] \centering \includegraphics[width=.95\textwidth]{framework.pdf} \caption{General architecture of \modelname (hidden size and the number of input items are 2 and 3).} \label{fig:framework} \end{figure*} To alleviate the above issues, we take a step towards explicitly and separately modeling the input structure. By ``explicitly'' we mean that we associate each input item with a weight and update the weight during training. By ``separately'' we mean that the input items and item weights are processed separately.
Our work is motivated by the two-streams hypothesis~\cite{goodale1992separate}, which argues that the neural processing of vision and hearing follows two distinct streams. The ventral stream (a.k.a. the ``what pathway'') is involved with object and visual identification and recognition, while the dorsal stream (the ``where pathway'') is involved with processing the spatial location relative to the viewer and with speech repetition. Such what-and-where decomposition has already shown its usefulness in computer vision~\cite{jacobs1991task, simonyan2014two, 8630333, zhang2021robust} and natural language processing~\cite{zhang2019sentiment} tasks. We assume that the input structure, i.e., the input items and item importance, can be processed by different pathways and then be mutually reinforced. To implement this, we explore a neural architecture, \modelname. What distinguishes \modelname from previous architectures is that it not only learns discriminative representations but also traces the key input elements at the same time. Central to \modelname is a set of \underline{E}ncoder-\underline{L}ocator \underline{C}ombination\underline{s} (ELCs) such that encoders and locators are responsible for the ``what'' and ``where'' pathways, respectively. \modelname adopts a layer-wise architecture to organize ELCs, which enables encoders and locators to collaborate for mutual reinforcement between the two sub-tasks, \emph{i.e.,}\xspace representation learning and structure revealing. More specifically, locators utilize the hidden states of encoders to estimate item weights more accurately, and encoders are in turn guided by the item weights of locators to obtain more discriminative hidden states. Also, there is a smoothness regularization between the input item embeddings of adjacent ELCs, which prevents the hidden states from changing significantly and stabilizes learning across layers.
For the purpose of tracing, \modelname further enforces sparsity constraints with increasing strength on locators. As a result, locators are taught to eventually identify a small subset of key elements. In addition, \modelname employs a proactive masking strategy, \emph{i.e.,}\xspace proactively masking key elements as indicated by item weights during training. The strategy prevents \modelname from simply learning feature co-adaptation and helps it resist attacks on key elements. We evaluate \modelname on SA. Experimental results on both sentence- and document-level sentiment classification demonstrate the effectiveness of \modelname. Notably, despite the large-scale training corpora and many engineering efforts behind the state-of-the-art pre-trained language models, \modelname built upon \xlnet and \roberta further increases the classification accuracy over the two. Then, we provide a case study considering a total of eight types of attacks, and show that \modelname is more robust to attacks than \xlnet, especially to hard attacks such as changing word orders and dropping information. Moreover, our qualitative analysis verifies that the revealed item weights make the outcomes of \modelname easier to understand. Finally, we conduct several experiments to analyze parameter sensitivity, e.g., the masking probability, the number of stacked ELCs, and the hidden state aggregation in each ELC. \section{Related Work} \label{sec-related} \textbf{Word embedding methods.} GloVe~\cite{pennington2014glove} is a publicly available unsupervised learning algorithm that obtains vector representations for words by aggregating global word-word co-occurrence statistics from a corpus. Deep learning models, \emph{e.g.,}\xspace convolutional neural networks (CNNs) and recurrent neural networks (RNNs), have already demonstrated their superiority for the task~\cite{cho2014learning,choi2018learning,kim2014convolutional}.
Distinct from exploiting the spatial and temporal patterns in texts as done by CNNs and RNNs, \modelname tackles the problem by considering the special input structure in which the outcome is mainly contributed by a few key elements. Recently, large-scale pre-trained language models~\cite{devlin2019bert,liu2019roberta,Yang2019xlnet} have further led to significant performance gains on a broad range of NLP tasks. \modelname is capable of integrating any such effort through its embedding layer, and its contribution is to further enhance model performance by tracing key input elements. While we have also observed a growing trend in aspect-level sentiment analysis~\cite{chen2019transfer,tang2019progressive}, in this work, we only consider the problem at the sentence and document levels. \textbf{Two-stream hypothesis.} \cite{zhang2019sentiment} also borrows the notion of the two-stream hypothesis, where the segmentation tagging task is considered the ``where''-task (i.e., the location of entities) and sentiment recognition the ``what''-task. The difference between \modelname and \cite{zhang2019sentiment} is that we separately treat the input items and item weights as ``what'' and ``where'', while the latter considers segmentation tagging and sentiment classification as ``where'' and ``what''. Since very different settings and evaluation datasets are adopted, we do not include it as a baseline. \section{Proposed Model} \label{sec-method} \subsection{General Architecture} \label{subsec-general} \begin{figure*}[h] \centering \includegraphics[width=.95\textwidth]{encoder-decoder-1.pdf} \caption{Implementation of a single ELC of \modelname ($n=3$, $d'=d=2$; we omit the bias vectors for computing $\textbf{C}_{k-1}$ and $\textbf{C}_{k}$).} \label{fig:impl} \end{figure*} As mentioned earlier, we consider the SA task, whose input can be represented as a set of items and whose outcome is mainly contributed by a few key items.
The proposed model is illustrated in Fig.~\ref{fig:framework}. \modelname first transforms the item-based input into continuous vector representations in its \textbf{embedding layer}. The core of \modelname is a set of \textbf{encoder-locator combinations} (ELCs) organized layer by layer, as shown in the vertical-middle part of Fig.~\ref{fig:framework}. Each ELC behaves as a basic functional unit of \modelname, which jointly learns task-specific representations and reveals the input structure. There is a \textbf{smoothness regularization} between the input item embeddings of adjacent ELCs, which prevents the hidden states from changing significantly and stabilizes learning across layers. \modelname further places a \textbf{sparsity constraint} on the item weight vectors to derive sparse item weights. More specifically, it increases the strength of the sparsity constraints on locators layer by layer, as shown by the varying colors of the sparsity components in Fig.~\ref{fig:framework}. Since it is generally more challenging to identify key elements at the very beginning, the weaker sparsity constraint allows locators to select more key items for better error tolerance. Then the \textbf{proactive masking} strategy masks some input items (\emph{i.e.,}\xspace setting the corresponding embeddings to zero) during training to boost model performance. We describe the masking process as ``proactive'' because, unlike traditional random masking as in BERT~\cite{devlin2019bert}, the probability of each item being masked is given by its item weight. At the top of \modelname is a \textbf{discriminator} ${D}$ built to derive the outcome of every given input with respect to the task. \subsection{Input \& Embedding Layer} For sentiment analysis, the input can be unified as a sequence of words $S = [w_1, w_2, \dots, w_n]$.
The embedding layer could be any pre-trained language model, among which BERT~\cite{devlin2019bert}, XLNet~\cite{Yang2019xlnet}, and RoBERTa~\cite{liu2019roberta} are the most effective and popular. As such, each word $w_i\in S$ is transformed into a continuous vector representation $\bm{x}_i \in \mathbb{R}^{d'}$, where $d'$ denotes the dimension of the embeddings. By stacking these word vectors, we also have the corresponding word embedding matrix $\textbf{X}\in\mathbb{R}^{n\times d'}$. \subsection{ELC \& Sparsity Constraint} For the $k$-th ELC ($k \ge 1$), given the masked $\textbf{C}_{k-1}$ and $\bm{l}_{k-1}$, the encoder essentially derives the hidden state $\bm{h}_{k}\in\mathbb{R}^{d}$ by summing over the rows/words in $\textbf{C}_{k-1}$ such that the more important ones are given higher weights, where $d$ is the dimension of the vector representations. To achieve this, it first computes a query vector $\bm{q}_{k} = \bm{l}_{k-1}^\intercal \textbf{C}_{k-1}$, which encodes the key items in the current ELC based on the (sparse) item weights in $\bm{l}_{k-1}$. Thus, the query vector $\bm{q}_{k}$ determines which words the encoder should pay more attention to. The hidden state $\bm{h}_{k}$ is then output by an attention layer, given $\bm{q}_{k}$ as the query and the rows in $\textbf{C}_{k-1}$ as keys/values. Formally, the unnormalized attention weights are given by: \begin{small} \begin{equation} \label{eq:attention-weight} a(\bm{q}_{k}, \bm{c}_i^{k-1}) = \bm{v}_k^\intercal \tanh(\textbf{W}_k^{att,q} \bm{q}_{k} + \textbf{W}_k^{att,c} \bm{c}_i^{k-1} + \bm{b}_k^{att}), \end{equation} \end{small} where $\bm{c}_i^{k-1}$ is the $i$-th row of $\textbf{C}_{k-1}$. Again, $\textbf{W}_k^{att,q} \in \mathbb{R}^{d\times d}$, $\textbf{W}_k^{att,c} \in \mathbb{R}^{d\times d}$, $\bm{v}_k \in \mathbb{R}^{d}$ and $\bm{b}_k^{att} \in \mathbb{R}^d$ are learnable parameters in the $k$-th ELC.
Finally, the hidden state $\bm{h}_k$ is computed by: \begin{equation} \label{eq:hidden-state} \bm{h}_k = \sum_i \frac{\exp(a(\bm{q}_{k}, \bm{c}_i^{k-1}))}{\sum_j \exp(a(\bm{q}_{k}, \bm{c}_j^{k-1}))} \bm{c}_i^{k-1}. \end{equation} As for the locator that updates the item weights, it first obtains the \textit{dense} item weight vector $\bm{l}'_k = \textbf{C}_{k} \bm{h}_k \in \mathbb{R}^n$ based on the masked $\textbf{C}_{k}$ and the new hidden state $\bm{h}_k$. We adopt the \kw{sparsemax}\ activation~\cite{martins2016sparsemax} to provide sparsity for $\bm{l}'_k$. More specifically, $\kw{sparsemax}(\bm{l}'_k)$ returns the Euclidean projection of $\bm{l}'_k$ onto the probability simplex of the $n$-dimensional space. By this definition, the sparsity strength of \kw{sparsemax}\ is not controllable. On the other hand, the activation of \kw{sparsemax}\ ultimately depends on the absolute differences between the values in $\bm{l}'_k$: intuitively, the lower the absolute differences are, the less sparse the activation is. We thus turn to linearly scaling $\bm{l}'_k$ before computing \kw{sparsemax}: \begin{equation} \label{eq:sparsemax-rev} \bm{l}_k = \kw{sparsemax}(\sigma(-\sum_{j=k}^{L-1} w_j^2 + w_L^2) \cdot \bm{l}'_k), \end{equation} where $L$ is the number of layers in \modelname, $\sigma(x) = 1/(1+\exp(-x)) \in (0,1)$ is the sigmoid function, and $w_j\in\mathbb{R}$ $(1\le j \le L)$ are learnable parameters. As can be easily verified, the linear scaling weights increase with $k$, resulting in increasing sparsity strength. \subsection{Smoothness Regularization} After performing the proper transformation, the word embedding matrix $\textbf{X}$ is fed into the encoders and locators repetitively for further learning. To obtain layer-wise smoothness, we adopt the adjacent weight tying approach~\cite{Fung2018mem2seq,sukhbaatar2015end}. Recall that each ELC requires two distinct transformed word embedding matrices, used by the inside encoder and locator, respectively.
The main idea of adjacent weight tying is to let every two adjacent ELCs share one transformed word embedding matrix. Formally, the $k$-th ELC ($k > 1$) only requires a newly-transformed matrix $\textbf{C}_k=\textbf{X}\textbf{W}_k + \bm{b}_k \in \mathbb{R}^{n \times d}$ (the solid arrow from $\textbf{X}$ to $\textbf{C}_k$ in Fig.~\ref{fig:impl}) and re-uses $\textbf{C}_{k-1}=\textbf{X}\textbf{W}_{k-1}+\bm{b}_{k-1} \in \mathbb{R}^{n \times d}$ from the previous ELC (the dashed arrow from $\textbf{X}$ to $\textbf{C}_{k-1}$ in Fig.~\ref{fig:impl}). Here $\textbf{W}_k \in \mathbb{R}^{d' \times d}$ and $\bm{b}_{k} \in \mathbb{R}^{d}$ are learnable parameters in the $k$-th ELC. As for the first ELC, two transformed word embedding matrices are still required. \subsection{Proactive Masking} Before the core computation in the $k$-th ELC, $\textbf{C}_{k-1}$ and $\textbf{C}_{k}$ are further pre-processed by masking with a fixed probability. Take $\textbf{C}_{k-1}$ as an example. With a pre-defined probability $P_{msk}$, $\textbf{C}_{k-1}$ will be masked. We perform independent Bernoulli experiments for each row of $\textbf{C}_{k-1}$ and the success rate of each experiment is equal to the corresponding item weight in $\bm{l}_{k-1} \in \mathbb{R}^{n}$ ($\bm{l}_{k-1}$ is an input to the $k$-th ELC). Afterward, all rows that pass the Bernoulli experiments will be replaced with zero. Note that this step is only turned on during training. Figure~\ref{fig:impl} also illustrates an example of proactive masking. Assume vector $\bm{l}_{k-1} = [0.5, 0, 0.5]^\intercal$ and $P_{msk}=1$. Thus, both $\textbf{C}_{k-1}$ and $\textbf{C}_{k}$ are to be masked. For $\textbf{C}_{k-1}$, it turns out only the first row passes the experiment, resulting in the first row being replaced with zero. Similarly, the last row of $\textbf{C}_{k}$ passes the experiment and we show the masked $\textbf{C}_{k}$ in Fig.~\ref{fig:impl}. 
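To make one ELC round trip concrete, the following is a minimal NumPy sketch (an illustration, not our actual implementation): the additive attention pooling of Eqs.~\eqref{eq:attention-weight}-\eqref{eq:hidden-state}, the sparsemax locator, and the proactive masking step. The scalar `scale` stands in for the learned layer-wise scaling factor inside Eq.~\eqref{eq:sparsemax-rev}:

```python
import numpy as np

def sparsemax(z):
    """Euclidean projection of z onto the probability simplex
    (Martins & Astudillo, 2016); yields exactly-zero weights."""
    z = np.asarray(z, dtype=float)
    z_sorted = np.sort(z)[::-1]
    cumsum = np.cumsum(z_sorted)
    k = np.arange(1, len(z) + 1)
    k_z = k[1 + k * z_sorted > cumsum][-1]   # size of the support
    tau = (cumsum[k_z - 1] - 1.0) / k_z
    return np.maximum(z - tau, 0.0)

def elc_step(C_prev, C_curr, l_prev, Wq, Wc, v, b, scale,
             p_msk=0.0, rng=None):
    """One encoder-locator round trip.

    C_prev, C_curr: (n, d) transformed word embedding matrices.
    l_prev: (n,) item weights from the previous ELC.
    Returns the new hidden state h (d,) and item weights l (n,).
    """
    # Proactive masking: row i is zeroed with probability l_prev[i]
    # (training only; skipped when rng is None).
    if rng is not None and rng.random() < p_msk:
        masked = rng.random(len(l_prev)) < l_prev
        C_prev = np.where(masked[:, None], 0.0, C_prev)

    # Encoder, Eqs. (1)-(2): additive attention, query q = l_prev^T C_prev.
    q = l_prev @ C_prev
    scores = np.tanh(C_prev @ Wc.T + q @ Wq.T + b) @ v
    att = np.exp(scores - scores.max())
    att /= att.sum()
    h = att @ C_prev

    # Locator, Eq. (3): scaled sparsemax of the dense weights C_curr h.
    l = sparsemax(scale * (C_curr @ h))
    return h, l
```

Scaling up `scale` makes the sparsemax output sparser, which is how the layer-wise increasing strength in Eq.~\eqref{eq:sparsemax-rev} gradually concentrates the item weights onto a few key items.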
\subsection{Discriminator} We simply adopt a single-layer feedforward neural network over the mean of all hidden states to build the discriminator: \begin{small} \begin{equation} \label{eq:discriminator} D([w_1,w_2,\dots,w_n]) = \kw{softmax}\Big(\big(\frac{1}{L} \sum_{k=1}^{L} \bm{h}_k\big) \textbf{W}^{dis} + \bm{b}^{dis}\Big). \end{equation} \end{small} Here, $D([w_1,w_2,\dots,w_n])$ is the predicted sentiment class of the input. Assuming the number of classes is $C$, we have learnable parameters $\textbf{W}^{dis} \in \mathbb{R}^{d\times C}$ and $\bm{b}^{dis} \in \mathbb{R}^{C}$. \section{Experiments} \label{sec-exp} \begin{table*}[ht!] \centering \begin{small} \begin{tabular}{c | c c c c | c c c c} \hline \multirow{4}{*}{{\sc SST-5}\xspace} & \cnnrandom & 39.46 & \lstm & 45.04 & \kw{BERT}\xspace & 51.99 & \modelminus-X & 54.86 \\ & \cnnstatic & 44.32 & \bilstm & 45.18 & \xlnet & 55.20 & \modelname-X & 55.55 \\ & \cnnnons & 44.62 & \treelstm & 40.70 & \roberta & 56.49 & \modelminus-R & 56.59\\ & \cnnmulch & 43.54 & \modelname-G & \bf{46.33} & & & \modelname-R & \bf{57.34} \\ \hline \multirow{4}{*}{{\sc Yelp-5}\xspace} & \cnnrandom & 56.38 & \lstm & 57.14 & \kw{BERT}\xspace & 63.42 & \modelminus-X & 66.89 \\ & \cnnstatic & 56.30 & \bilstm & 55.32 & \xlnet & 66.75 & \modelname-X & 67.23 \\ & \cnnnons & 57.24 & \treelstm & 53.38 & \roberta & 67.66 & \modelminus-R & 66.92\\ & \cnnmulch & 57.14 & \modelname-G & \bf{58.68} & & & \modelname-R & \bf{67.70} \\ \hline \end{tabular} \caption{Overall accuracy (\%) of sentiment classification.} \label{table:overall} \end{small} \end{table*} \subsection{Experimental Setting} \label{subsec-setting} \textbf{Datasets}. We chose two datasets ({\sc SST-5}\xspace and {\sc Yelp-5}\xspace) to evaluate our \modelname.
\begin{itemize} \item {\sc SST-5}\xspace (Stanford Sentiment Treebank)~\cite{socher2013sst5} is a sentence-level sentiment classification dataset with five sentiment classes (\emph{i.e.,}\xspace very negative, negative, neutral, positive, very positive). We adopted the provided data split, resulting in 8,544, 1,101, and 2,210 sentences in the training, validation, and test sets, respectively. The average length of the sentences is 18 words. \item {\sc Yelp-5}\xspace is a document-level review corpus released in the Yelp Dataset Challenge 2015. It has five sentiment classes, and the full dataset contains approximately 700,000 documents with an average length of 155 tokens. Due to GPU resource limitations, we only tested on a random 5\% sample of the data, resulting in 32,500, 2,500, and 2,500 documents for training, validation, and test, respectively. \end{itemize} \textbf{Metric}. We adopted the classification accuracy ({\sc Acc}\xspace) to evaluate performance, which is the fraction of accurately classified test instances over all test instances. \textbf{Baselines}. We compared \modelname with three types of baselines and one simplified variant. \begin{itemize} \item \cnnrandom, \cnnstatic, \cnnnons, and \cnnmulch were originally proposed in~\cite{kim2014convolutional}. They differ only in their word vectors. \item \lstm, \bilstm, and \treelstm are RNN-based baselines. We followed the implementation in~\cite{cho2014learning} for Long Short-Term Memory (\lstm) and bidirectional LSTM (\bilstm). Gumbel Tree LSTM~\cite{choi2018learning} (\treelstm) is a tree-structured LSTM that further composes task-specific tree structures. \item \kw{BERT}\xspace~\cite{devlin2019bert}, \xlnet~\cite{Yang2019xlnet}, and \roberta~\cite{liu2019roberta} are the state-of-the-art pre-trained language models. \modelname-G, \modelname-X, and \modelname-R denote \modelname variants whose inputs are the outputs of \glove, \xlnet, and \roberta, respectively. \end{itemize} \textbf{Implementation details}.
We used the official implementations of all baselines provided by the authors. Pre-trained word vectors for the CNN and RNN baselines were obtained from \glove~\cite{pennington2014glove}. We started with the hyper-parameters recommended in the original papers and fine-tuned them on the validation set. Since \kw{BERT}\xspace, \xlnet, and \roberta were sensitive to the batch size, learning rate, and maximum sequence length on the small {\sc SST-5}\xspace data, we performed a grid search over $\{16, 32, 64\}$, $\{2\mbox{e}{-5}, 3\mbox{e}{-5}, 5\mbox{e}{-5}\}$, and $\{64, 128, 256\}$ for the three parameters, respectively. Please refer to the supplementary material for the concrete parameters. Code will be publicly available when the paper is accepted. \subsection{Main Results} \label{subsec:results} In the first set of tests, we evaluate the overall performance of all approaches for sentiment classification. All tests were repeated five times. The average results are reported in Table~\ref{table:overall}, where the letters after \modelname and \modelminus indicate the different embedding methods, \emph{i.e.,}\xspace \glove (G), \xlnet (X), and \roberta (R).
\begin{table*}[ht] \centering \begin{small} \begin{tabular}{c | c c c | c c} \hline \bf{Attack} & (a) \xlnet & (b) \modelminus-X & (c) \modelname-X & (c)-(a) & (c)-(b) \\ \hline None & 55.20 & 54.86 & \bf{55.55} & 0.35 & 0.69 \\ \hline Replacement (cosine) & 52.01 & 51.83 & \bf{52.82}* & \bf{0.81} & 0.99 \\ Replacement (SWN) & 51.11 & 51.46 & \bf{52.34}** & \bf{1.23} & 0.88 \\ Insertion & 47.69 & \bf{48.30} & 48.13 & \bf{0.44} & -0.17 \\ Shuffle & 41.69 & 43.61 & \bf{43.95}** & \bf{2.25} & 0.33 \\ Deletion & 41.89 & 43.19 & \bf{43.73}** & \bf{1.85} & 0.54 \\ Reversing & 41.67 & 42.99 & \bf{43.39} & \bf{1.72} & 0.40 \\ Replacement (random) & 37.94 & \bf{39.28} & 39.06* & \bf{1.12} & -0.22 \\ Concatenation & 36.56 & 35.93 & \bf{38.96} & \bf{2.40} & 3.03 \\ \hline \end{tabular} \\ */**: significantly outperforms \xlnet at the 0.05/0.01 level, t-test \caption{Accuracy (\%) of sentiment classification under attacks on {\sc SST-5}\xspace.} \label{table:robustness} \end{small} \end{table*} We first compare \modelname-G with the other CNN and LSTM baselines. Except for \cnnrandom, these approaches all exploit \glove for initializing word embeddings and therefore allow a fair comparison. According to our tests, CNNs and LSTMs are generally comparable in terms of sentiment classification. By explicitly revealing the input structure, \modelname-G obtains more promising results, outperforming all these approaches on the sentence-level {\sc SST-5}\xspace data. On the document-level {\sc Yelp-5}\xspace dataset, we find that LSTMs are better than CNNs and \modelname-G is the best among its counterparts. The recent large-scale pre-trained language models significantly increase {\sc Acc}\xspace compared with the aforementioned approaches. We also observe a consistent trend in their performance: \roberta is the best, followed by \xlnet and \kw{BERT}\xspace. Built upon these efforts, \modelname is able to further enhance the performance.
Notably, it refines the results of \xlnet on both datasets. Finally, by comparing \modelname with \modelminus, we find that the proactive masking strategy consistently has a positive impact. All the above results verify the effectiveness of \modelname. \subsection{Analysis Under Attacks} In the second set of tests, we evaluate the robustness of \modelname under attacks. Here we experiment only on {\sc SST-5}\xspace, as the sentiment polarities of its sentences are easier to influence given their shorter average length. We also consider only \xlnet as the embedding method for \modelname, since \roberta (named after \underline{R}obustly \underline{o}ptimized \underline{BERT} \underline{a}pproach) already incorporates many robustness-oriented designs, including longer training with bigger batches over more data and training on longer sequences.\footnote{As such, we admit that \modelname does not exhibit obviously better robustness compared with \roberta.} We consider eight types of attacks. More specifically, \textbf{Reversing} and \textbf{Concatenation} are deterministic attacks: the former reverses the word order and the latter concatenates all words in a sentence into one (which will be re-sliced by \xlnet later). The rest are stochastic attacks. The manipulation of \textbf{Shuffle} is self-explanatory. For \textbf{Insertion}, \textbf{Deletion}, and \textbf{Replacement (random)}, we correspondingly modify one-third of the words in a sentence, and the new words (if needed) are uniformly sampled following the negative sampling method in word2vec~\cite{mikolov2013word2vec}. Finally, for (a) \textbf{Replacement (cosine)} and (b) \textbf{Replacement (SWN)}, we replace one-third of the words in a sentence with (a) their closest terms evaluated by cosine similarity between \glove vectors and (b) alternative terms within the same sentiment groups in SentiWordNet~\cite{baccianella2010swn}.
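As a rough illustration of the attacks above (helper names are hypothetical, and the sampling distribution for new words is simplified here to uniform over a given vocabulary instead of negative sampling):

```python
import random

def reversing(words):
    # Deterministic: reverse the word order.
    return words[::-1]

def concatenation(words):
    # Deterministic: fuse the sentence into a single token;
    # a subword tokenizer (e.g., XLNet's) re-slices it later.
    return ["".join(words)]

def shuffle(words, rng=random):
    # Stochastic: randomly permute all words.
    out = words[:]
    rng.shuffle(out)
    return out

def deletion(words, fraction=1 / 3, rng=random):
    # Stochastic: delete about one-third of the words.
    n_drop = max(1, int(len(words) * fraction))
    drop = set(rng.sample(range(len(words)), n_drop))
    return [w for i, w in enumerate(words) if i not in drop]

def replacement(words, vocab, fraction=1 / 3, rng=random):
    # Stochastic: replace about one-third of the words with sampled terms
    # (uniform here; the paper uses word2vec-style negative sampling).
    out = words[:]
    n_rep = max(1, int(len(out) * fraction))
    for i in rng.sample(range(len(out)), n_rep):
        out[i] = rng.choice(vocab)
    return out
```

Each stochastic attack is re-sampled per run, which is why the attacked results are averaged over ten independently attacked test sets.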
We trained models on the original training data and computed {\sc Acc}\xspace on the attacked test data. The results are reported in Table~2, where the numbers for stochastic attacks are the average results of ten independent runs on different attacked test sets. The results are arranged in ascending order of attack strength, as evaluated by the {\sc Acc}\xspace of \modelname. \textbf{Replacement (cosine)} and \textbf{Replacement (SWN)} are weaker than the other attacks since the semantics or sentiment polarities of terms are not substantially changed. Next comes \textbf{Insertion}, which only introduces noise. Changing word orders (\textbf{Shuffle} and \textbf{Reversing}) and dropping information (\textbf{Deletion}) almost tie in terms of attack strength. Finally, the hardest attacks are \textbf{Replacement (random)} and \textbf{Concatenation}, which both remove original information and introduce noise. Note that the above conclusions hold under our attack setting. Under all attacks, \modelname is consistently better than \xlnet, further verifying the effectiveness of explicitly revealing the input structure.
More importantly, the absolute improvement of \modelname over \xlnet under attacks is higher than on the original data (\emph{i.e.,}\xspace 0.35\%), which indicates that \modelname is generally more robust than \xlnet under attacks. Since {\sc Acc}\xspace decreases under attacks, the relative improvement is even more prominent. Notably, \modelname is good at dealing with harder attacks such as changing word orders and dropping information. Finally, comparing \modelname with \modelminus, we can conclude that proactive masking generally boosts model performance under attacks. It is especially effective for \textbf{Concatenation}, which drops much information after re-slicing by \xlnet. However, proactive masking could also have negative impacts under \textbf{Insertion} and \textbf{Replacement (random)}, since it is not optimized for dealing with inserted noise. \begin{figure*}[ht] \centering \subfigure{\label{exp:itemweight-1} \includegraphics[trim=80 50 80 40,clip,scale=.35]{0.pdf}} \subfigure{\label{exp:itemweight-2} \hspace{3ex} \includegraphics[trim=40 50 30 50,clip,scale=.34]{1.pdf}} \caption{Illustration of item weights identified by \modelname.} \label{exp:itemweight} \end{figure*} \begin{figure*}[htb!]
\centering \subfigure[{\sc SST-5}\xspace]{\label{exp:pmask-sst} \includegraphics[width=0.85\columnwidth]{probability_sst.pdf}} \subfigure[{\sc Yelp-5}\xspace]{\label{exp:pmask-yelp} \includegraphics[width=0.85\columnwidth]{probability_yelp.pdf}} \caption{Impacts of masking probability $P_{msk}$.} \label{exp:sens-pmask} \end{figure*} \begin{figure*}[h] \centering \subfigure[{\sc SST-5}\xspace]{\label{exp:num_layer-sst} \includegraphics[width=0.85\columnwidth]{N_layer_sst.pdf}} \subfigure[{\sc Yelp-5}\xspace]{\label{exp:num_layer-yelp} \includegraphics[width=0.85\columnwidth]{N_layer_yelp.pdf}} \caption{Impacts of the number $L$ of layers (\emph{i.e.,}\xspace ELCs).} \label{exp:sens-ELCs} \end{figure*} \subsection{Qualitative Analysis of Item Weights} We present a qualitative study of the item weights estimated in different ELC layers, shown in Fig.~\ref{exp:itemweight}. The two displayed movie reviews are retrieved from the training set of {\sc SST-5}\xspace, and their ground-truth sentiment labels are \textit{positive} and \textit{very-positive}, respectively. After training, \modelname produces correct labels for both. In the left case, the key elements identified are \textit{deflated}, \textit{there's much}, and \textit{recommend}, which are sensible given the prediction result. Also note that it is difficult to identify sentiment words in the early layers. However, the multi-layer architecture enables \modelname to gradually refine the key elements, \emph{e.g.,}\xspace \textit{deflated} is identified at the second layer and \textit{recommend} is emphasized in the final layer. Similarly, \modelname successfully finds the two key words \textit{dark} and \textit{funny} in the right example after learning layer-by-layer. To conclude, these item weights generally make the outcomes of \modelname easier to understand.
\subsection{Analysis on Parameter Sensitivity} \subsubsection{Impacts of masking probability $P_{msk}$} To evaluate the impacts of $P_{msk}$, we varied $P_{msk}$ from 0 to 1 and computed the classification accuracy of both \modelname-X and \modelname-R. We omitted \modelname-G since its effectiveness is not comparable to \modelname-X and \modelname-R. Each $P_{msk}$ was tested three times with different seeds, and the averaged value is reported in Fig.~\ref{exp:sens-pmask}. It turns out that \modelname is quite sensitive to $P_{msk}$, possibly due to the randomness in choosing which sentences to mask and which key items to mask. However, compared with turning off proactive masking (\emph{i.e.,}\xspace $P_{msk}=0$), our training strategy remains effective within a certain range of $P_{msk}$, \emph{e.g.,}\xspace $[0.3, 0.5]$ on {\sc SST-5}\xspace and $[0.05, 0.4]$ on {\sc Yelp-5}\xspace. \subsubsection{Impacts of the number $L$ of layers (\emph{i.e.,}\xspace ELCs)} To evaluate the impacts of $L$, we varied $L$ from 1 to 6 and computed the {\sc Acc}\xspace of both \modelname-X and \modelname-R on the two datasets. Note that the discriminator combines all the hidden states to derive the final classification results. The results are reported in Fig.~\ref{exp:sens-ELCs}. On the {\sc Yelp-5}\xspace data, using more layers is generally more effective, while the impacts of $L$ are quite mild. On the other hand, the impacts of $L$ are more complex on the {\sc SST-5}\xspace data. When $L \le 3$, the {\sc Acc}\xspace of \modelname generally increases with $L$, indicating that \modelname benefits from its multi-layer organization, which enables it to learn the input structure multiple times. Further increasing $L$ leads to a decrease in {\sc Acc}\xspace due to over-fitting. Overall, $L=3$ is a good choice for \modelname, and this conclusion holds for both variants of \modelname.
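As a hedged sketch of how the masking probability analyzed above might act during training (the exact selection rule is our assumption; the function name and the choice of masking the top-weighted items are hypothetical):

```python
import random

def proactive_mask(tokens, item_weights, p_msk=0.3, top_k=1,
                   mask_token="<mask>", rng=random):
    # With probability p_msk, replace the top_k highest-weighted
    # items of a training sentence with a mask token; otherwise
    # leave the sentence unchanged.
    if rng.random() >= p_msk:
        return tokens
    ranked = sorted(range(len(tokens)),
                    key=lambda i: item_weights[i], reverse=True)
    masked = set(ranked[:top_k])
    return [mask_token if i in masked else t
            for i, t in enumerate(tokens)]
```

Setting $P_{msk}=0$ disables the strategy entirely, which is the baseline used in Fig.~\ref{exp:sens-pmask}.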
\subsubsection{Impacts of hidden state aggregation} To evaluate the impacts of hidden state aggregation, we computed the {\sc Acc}\xspace of both \modelname-X and \modelname-R on the two datasets, using either a single hidden state or all three hidden states. The results are reported in Fig.~\ref{exp:sens-layer}. \begin{figure}[htb!] \centering \subfigure[{\sc SST-5}\xspace]{\label{exp:layer-sst} \includegraphics[width=0.45\columnwidth]{eachLayer_sst.pdf}} \subfigure[{\sc Yelp-5}\xspace]{\label{exp:layer-yelp} \includegraphics[width=0.45\columnwidth]{eachLayer_yelp.pdf}} \caption{Impacts of hidden state aggregation.} \label{exp:sens-layer} \end{figure} When a single hidden state is used, the best {\sc Acc}\xspace is obtained by the third hidden state on the {\sc SST-5}\xspace data and by the second on the {\sc Yelp-5}\xspace data. This reflects the different characteristics of short and long text: the input structure of short sentences is harder to reveal than that of long documents, given the limited information. Moreover, combining hidden states from all layers is consistently better than using any single hidden state alone. We conjecture that combining hidden states enables the discriminator to directly supervise each layer in terms of revealing the input structure, which enhances the effectiveness. \section{Conclusion} \label{sec-conc} In this paper, we proposed \modelname to tackle the sentiment analysis task such that the outcome is mainly contributed by a few key elements of the input. The idea behind \modelname, which originates from the two-streams hypothesis, is to learn discriminative representations and reveal the input structure simultaneously. To this end, \modelname stacks several encoders and locators layer-by-layer, with increasing-strength sparsity constraints on the locators for tracing key elements. Smoothness regularization is enforced on adjacent encoder-locator layers to stabilize learning across layers.
In addition, a proactive masking strategy is further incorporated into \modelname for robustness. We applied \modelname to sentence- and document-level sentiment analysis. The experiments demonstrated the effectiveness of \modelname. Moreover, considering a total of eight types of attacks, we verified the generally better robustness of \modelname. Finally, our qualitative analysis of item weights showed the advantage of \modelname in terms of interpretability.
\section*{ 1.\ Introduction} In the present paper, we consider the large time behavior of the Cauchy problem for a model system of radiating gas, taking the form of $$ \left\{\begin{array}{l} u_{t}+\sum\limits_{j=1}^nf_j(u)_{x_j}+{\rm div}q=0,\ \ \ \ x\in \mathbb{R}^n,\ t>0,\\[3mm] -\nabla{\rm div}q+q+\nabla u=0,\ \ \ \ \ \ \ \ \ \ x\in \mathbb{R}^n,\ t>0, \end{array} \right. \eqno(1.1) $$ with initial data $$ u(x,0)=u_0(x),\ \ \ \ x\in \mathbb{R}^n. \eqno(1.2) $$ Here the unknown functions $u=u(x,t):\mathbb{R}^n\times[0,\infty)\rightarrow \mathbb{R}$ and $q=q(x,t):\mathbb{R}^n\times[0,\infty)\rightarrow\mathbb{R}^n$ represent the velocity and radiating heat flux of the gas, respectively. The notations $\nabla$ and ${\rm div}$ are the $n$-dimensional gradient and divergence. $f(u)=(f_1(u),\cdots,f_n(u))\in \mathbb{R}^n$ is a given smooth function of $u$ satisfying $f_j(u)=O(|u|^2)\ (j=1,\cdots,n)$ for $u\rightarrow0$. System (1.1) is a simplified model for the motion of a radiating gas in several space variables. Indeed, in a certain physical situation, (1.1) well approximates the fundamental system describing the motion of a radiating gas: $$ \left\{\begin{array}{l} \rho_{t}+{\rm div}(\rho u)=0,\\[3mm] (\rho u)_t+{\rm div}(\rho u\otimes u+pI)=0,\\[3mm] [\rho(e+\frac{|u|^2}{2})]_t+{\rm div}[\rho u(e+\frac{|u|^2}{2})+pu+q]=0,\\[3mm] -\nabla{\rm div}q+a_1q+a_2\nabla\theta^4=0, \end{array} \right. \eqno(1.3) $$ where $\rho,u,p,e,\theta$ are respectively the mass density, velocity, pressure, internal energy and absolute temperature of the gas, while $q$ is the radiative heat flux. $a_1$ and $a_2$ are given positive constants depending on the gas itself. The simplified model (1.1) was first investigated by Hamer \cite{Hamer}, and the reduction of the full system (1.3) to (1.1) was given in \cite{Gao2, Hamer, Kawashima7}. Many important works have been devoted to system (1.1).
For the one-dimensional case, we refer to \cite{Francesco2,Serre1,Serre2} for $L^1$ stability results, \cite{Kawashima6,Lattanzio1,Laurenot1} for a singular limit and relaxation limit, \cite{Kawashima3,Kawashima4,Kawashima5,Lattanzio2, Lattanzio3,Lin1,Lin2,LiuH,Nishibata,Nguyen} for shock waves, \cite{Iguchi,Kawashima1,Kawashima2} for diffusion waves and \cite{Kawashima7} for rarefaction waves. However, there are fewer studies of system (1.1) in the multi-dimensional case. Recently, Francesco in \cite{Francesco1} obtained the global existence of weak entropy solutions of system (1.1) and the relaxation limits. Later, Wang and Wang in \cite{Wang} investigated pointwise estimates of solutions to (1.1)-(1.2) by means of a detailed analysis of Green's function. More recently, asymptotic decay rates toward planar rarefaction waves based on the $L^2$-energy method were obtained in \cite{Gao1} in two dimensions and in \cite{Gao2} in $n$ dimensions ($n=3,4,5$), respectively. The asymptotic behavior of solutions toward the diffusion waves was studied in \cite{Liu1,Ruan}. For related studies of decay rates for the problem (1.1)-(1.2), we only mention \cite{Liu1} and \cite{Duan1}. \cite{Liu1} studied the large time behavior of solutions to the problem (1.1)-(1.2) with initial perturbations that are small in the $L^1$-norm, by using a time-weighted energy method. In \cite{Duan1}, under the assumption that $\|u_0\|_{L^1}$ is bounded, Duan \emph{et al.} showed that the optimal $L^2$-norm time-decay rate of solutions is $(1+t)^{-\frac{n}{4}}$, whereas time-decay estimates of the derivatives of the solutions were not considered. The purpose of this paper is to establish the optimal time decay rates of the solutions and of their derivatives for the Cauchy problem (1.1)-(1.2) without smallness assumptions on the initial data in the $L^1$-norm. It seems that the usual energy method based on linearization analysis does not work.
Our first decay result is inspired by Schonbek \cite{Schonbek2,Schonbek3}, where the well-known Fourier splitting method was established to obtain the optimal decay rate of solutions to the incompressible Navier-Stokes equations in the $L^2$-norm or $H^s$-norm. In the present paper, we generalize the Fourier splitting method with a slight modification to deal with the problem (1.1)-(1.2) in $\mathbb{R}^n$ with $1\leq n\leq 4$. Our second decay result is motivated by a recent work of Guo and Wang \cite{Guo}, where they developed a new method to establish optimal time decay rates of solutions to the Cauchy problem for the compressible Navier-Stokes equations and the Boltzmann equation. The main idea was to combine scaled energy estimates with an interpolation between negative and positive Sobolev norms to get the time decay rate for dissipative equations. By employing this new method, we obtain the optimal $L^p$-$L^2(\mathbb{R}^3)$ decay estimates of solutions to the problem (1.1)-(1.2). \bigbreak First, we recall the global-in-time existence result established in \cite{Liu1,Wang}. \vspace{2mm} \noindent\textbf{Proposition 1.1.} (\cite{Liu1}, Theorem 2.1; \cite{Wang}, Theorem 1.1) Assume that $u_0(x)\in H^N(\mathbb{R}^n)\ (n\geq1)$ for an integer $N\geq[\frac{n}{2}]+2$. There exists a small positive constant $\delta_0$ such that if $E_0=\|u_0\|_{H^N(\mathbb{R}^n)}\leq\delta_0$, then the problem (1.1)-(1.2) has a unique global solution $(u,q)(x,t)$ satisfying $$\arraycolsep=1.5pt \begin{array}[b]{rl} &\D u\in C([0,\infty);H^N(\mathbb{R}^n)),\ \nabla u\in L^2([0,\infty),H^{N-1}(\mathbb{R}^n)),\\[2mm] &\D q\in C([0,\infty);H^N(\mathbb{R}^n))\cap L^2([0,\infty),H^{N+1}(\mathbb{R}^n)).
\end{array} $$ Moreover, the solution verifies the following uniform energy estimate $$ \|u(t)\|_{H^N(\mathbb{R}^n)}^2+\|q(t)\|_{H^{N+1}(\mathbb{R}^n)}^2+\int_0^t(\|\nabla u(\tau)\|_{H^{N-1}(\mathbb{R}^n)}^2 +\|q(\tau)\|_{H^{N+1}(\mathbb{R}^n)}^2){\rm d}\tau\leq CE_0^2.\eqno(1.4) $$ \bigbreak Our main results in this paper can be stated as follows: \vspace{2mm} \noindent\textbf{Theorem 1.1.} Let $1\leq n\leq4$. Suppose $\|u_0\|_{H^N(\mathbb{R}^n)}\leq\delta_0\ll1$. If further, $u_0\in L^1(\mathbb{R}^n)$ (whose $L^1$-norm need not be small), we have $$ \|D^lu(t)\|_{L^2(\mathbb{R}^n)}\leq C(1+t)^{-\frac{n}{4}-\frac{l}{2}},\ \ \ \ l=0,1,\cdots,N; \eqno(1.5) $$ $$ \|D^lq(t)\|_{L^2(\mathbb{R}^n)}\leq C(1+t)^{-\frac{n}{4}-\frac{l+1}{2}},\ \ \ \ l=1,2,\cdots,N-1.\eqno(1.6) $$ \bigbreak \noindent\textbf{Remark 1.1.} One can rewrite the system (1.1) as a decoupled system for $(u,q)(x,t)$: $$ \left\{\begin{array}{l} \D u_t+\sum_{j=1}^nf_j(u)_{x_j}=-u+(I-\Delta)^{-1}u, \\[3mm] q=-(I-\Delta)^{-1}\nabla u. \end{array} \right. \eqno(1.7) $$ To obtain the time-decay rates of $q(x,t)$, from $(1.7)_2$, it suffices to prove the estimates on $u(x,t)$, i.e., (1.5). When $l=0$, (1.5) can be obtained directly by the usual Fourier splitting method; in this case, (1.5) in fact holds for any integer $n\geq 1$, see Proposition 3.1 in Section 3. This is consistent with the result in \cite{Duan1}. When $l\geq1$, to prove (1.5), the estimates in Proposition 1.1 and the smallness of $E_0$ should be employed; see details in the proof of Proposition 3.2. \bigbreak \noindent\textbf{Theorem 1.2.} Suppose $\|u_0\|_{H^N(\mathbb{R}^3)}\leq\delta_0\ll1$.
If further, $u_0\in \dot{H}^{-s}(\mathbb{R}^3)$ (whose $\dot{H}^{-s}$-norm need not be small) for some $s\in[0,\frac{3}{2})$, there exists a positive constant $C_0$ such that $$ \|u(t)\|_{\dot{H}^{-s}(\mathbb{R}^3)}^2\leq C_0,\ \ \ \ \|q(t)\|_{\dot{H}^{-s}(\mathbb{R}^3)}^2\leq C_0, \eqno(1.8) $$ and the following decay estimates hold: $$ \|D^lu(t)\|_{H^{N-l}(\mathbb{R}^3)}\leq C_0(1+t)^{-\frac{l+s}{2}}, \ l=0,\cdots,N, \eqno(1.9) $$ $$ \|D^lq(t)\|_{H^{N-1-l}(\mathbb{R}^3)}\leq C_0(1+t)^{-\frac{l+s+1}{2}},\ l=0,\cdots,N-1. \eqno(1.10) $$ By employing the Hardy-Littlewood-Sobolev theorem, for $p\in(1,2]$, we have $L^p(\mathbb{R}^3)\subset\dot{H}^{-s}(\mathbb{R}^3)$ with $s=3(\frac{1}{p}-\frac{1}{2})\in[0,\frac{3}{2})$. Then, from Theorem 1.2, the following optimal $L^p$-$L^2$ type decay results are obtained. \bigbreak \noindent\textbf{Corollary 1.1 ($L^p$-$L^2$ time-decay estimates).} Assume that $\|u_0\|_{H^N(\mathbb{R}^3)}\leq\delta_0\ll1$. If further, $u_0(x)\in L^p(\mathbb{R}^3)$ with some $p\in(1,2]$, then the following decay results hold for any integer $l$ with $0\leq l\leq N$: $$ \|D^lu(t)\|_{H^{N-l}(\mathbb{R}^3)}\leq C_0(1+t)^{-\frac{3}{2}(\frac{1}{p}-\frac{1}{2})-\frac{l}{2}}.\eqno(1.11) $$ $$ \|D^lq(t)\|_{H^{N+1-l}(\mathbb{R}^3)}\leq C_0(1+t)^{-\frac{3}{2}(\frac{1}{p}-\frac{1}{2})-\frac{l+1}{2}}.\eqno(1.12) $$ \bigbreak \noindent\textbf{Remark 1.2.} All the decay results above are obtained without smallness assumptions on the initial perturbation in $L^p(\mathbb{R}^3), p\in(1,2]$ or $\dot{H}^{-s}(\mathbb{R}^3)$. The results generalize those in \cite{Liu1,Wang} for the case of three-dimensional space. The corresponding problem in $\mathbb{R}^n$ with $n=1,2$ will be investigated in the future. \bigbreak \noindent\textbf{Notations.} In this paper, $D^l$ with an integer $l\geq 0$ stands for any spatial derivative of order $l$.
For $1\leq p\leq \infty$ and an integer $m\geq 0$, we use $L^p$ and $W^{m,p}$ to denote the usual Lebesgue space $L^p(\mathbb{R}^n)$ and Sobolev spaces $W^{m,p}(\mathbb{R}^n)$ with norms $\|\cdot\|_{L^p}$ and $\|\cdot\|_{W^{m,p}}$, respectively, and set $H^m=W^{m,2}$ with norm $\|\cdot\|_{H^m}$ when $p=2$. In addition, for $s\in\mathbb{R}$, we define a pseudo-differential operator $\Lambda^s$ by $$ \Lambda^sg(x)=\int_{\mathbb{R}^n}|\xi|^s\hat{g}(\xi){\rm e}^{2\pi\sqrt{-1}x\cdot\xi} {\rm d}\xi, $$ where $\hat{g}$ denotes the Fourier transform of $g$. We define the homogeneous Sobolev space $\dot{H}^s$ of all $g$ for which $\|g\|_{\dot{H}^s}$ is finite, where $$ \|g\|_{\dot{H}^s}:=\|\Lambda^sg\|_{L^2}=\||\xi|^s\hat{g}\|_{L^2}. $$ Throughout this paper, we will use a non-positive index $s$. For convenience, we will change the index to be ``$-s$'' with $s\geq0$. $C$ denotes a positive generic (generally large) constant that may vary at different places. The integration domain $\mathbb{R}^3$ will always be omitted when there is no ambiguity. The rest of this paper is arranged as follows. In the next section, some Sobolev type inequalities and some preliminaries are given for later use. Section 3 shows the proof of Theorem 1.1 by using the Fourier splitting method. In the last section, we obtain the time-decay estimates stated in Theorem 1.2. \section*{ 2.\ Preliminaries} Firstly, we give some Sobolev inequalities which will be used in the next two sections. \bigbreak \noindent\textbf{Lemma 2.1.} (Gagliardo-Nirenberg's inequality). Let $0\leq m,k\leq l$, then we have $$ \|D^k g\|_{L^p}\leq C \|D^m g\|_{L^q}^{1-\theta}\|D^lg\|_{L^r}^\theta $$ where $\theta\in[0,1]$ satisfies $$ \frac{1}{p}-\frac{k}{n}=(1-\theta)\left(\frac{1}{q}-\frac{m}{n}\right)+\theta\left(\frac{1}{r}-\frac{l}{n}\right).
$$ \bigbreak \noindent\textbf{Lemma 2.2.} (\cite{Guo}, Lemma A.5) Let $s\geq0$ and $l\geq0$, then we have $$ \|D^lg\|_{L^2}\leq C\|D^{l+1}g\|_{L^2}^{1-\theta}\|g\|_{\dot{H}^{-s}}^\theta,\ {\rm where}\ \theta=\frac{1}{l+s+1}. $$ \bigbreak \noindent\textbf{Lemma 2.3.} (\cite{Stein}, Chapter V, Theorem 1) Let $0<s<n, 1<p<q<\infty, \frac{1}{q}+\frac{s}{n}=\frac{1}{p}$, then $$ \|\Lambda^{-s}g\|_{L^q}\leq C\|g\|_{L^p}. $$ \bigbreak Now, when $n=3$, we derive an estimate of Lyapunov-type which plays an important role in closing the energy estimates at each $l$-th level in Section 4. \vspace{2mm} \noindent\textbf{Proposition 2.1.} Let $(u,q)(x,t)$ be a solution to the Cauchy problem (1.1)-(1.2) in $\mathbb{R}^3$. If the assumptions in Proposition 1.1 hold, we have $$ \frac{\rm d}{{\rm d}t}\|D^lu(t)\|_{H^1}^2+\|D^{l+1} u(t)\|_{L^2}^2\leq 0,\ \ \ \ 0\leq l\leq N-1.\eqno(2.1) $$ \noindent\textbf{Proof.} First, we transform system (1.1) into the following equivalent decoupled system $$ \left\{\begin{array}{l} \D u_t-\Delta u_t-\Delta u=-(1-\Delta)\sum_{j=1}^nf_j(u)_{x_j}, \ \ \ \ x\in\mathbb{R}^n,\ t>0,\\ q=-(1-\Delta)^{-1}\nabla u, \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \qquad\qquad x\in\mathbb{R}^n,\ t>0. \end{array}\right.\eqno(2.2) $$ Multiplying $(2.2)_1$ by $u$, a direct calculation gives $$\arraycolsep=1.5pt \begin{array}[b]{rl} &\D(u^2+|\nabla u|^2)_t+2|\nabla u|^2-2{\rm div}\left[u\nabla(u_t+u+\sum_{j=1}^3f_j(u)_{x_j})\right] +\sum_{j=1}^3\left[2\int_0^uf_j'(\eta)\eta d\eta\right]_{x_j}\\[3mm] =&\D-\sum_{j=1}^3f_j''(u)u_{x_j}|\nabla u|^2. \end{array} $$ Integrating the above equality with respect to $x$ on $\mathbb{R}^3$ and using Lemma 2.1, we get $$ \frac{\rm d}{{\rm d}t}\|u(t)\|_{H^1}^2+\|D u(t)\|_{L^2}^2\leq C\|\nabla u(t)\|_{L^\infty}\|D u(t)\|_{L^2}^2. $$ Since $\|\nabla u(t)\|_{L^\infty}\leq C\|u(t)\|_{H^N}\leq CE_0\ll1$ by the Sobolev embedding and Proposition 1.1, we find $$ \frac{\rm d}{{\rm d}t}\|u(t)\|_{H^1}^2+\|D u(t)\|_{L^2}^2\leq 0.\eqno(2.3) $$ Since $f_{j}(u)=O(u^2)$ as $u\rightarrow0$, without loss of generality, we take $f_{j}(u)=u^2$ in what follows.
To estimate the derivatives of the solution $u$, one can apply $D^l$ to $(2.2)_1$, multiply the resulting equality by $D^lu$, and integrate with respect to $x$ over $\mathbb{R}^3$, obtaining $$\arraycolsep=1.5pt \begin{array}[b]{rl} &\D\frac{\rm d}{{\rm d}t}(\|D^lu(t)\|_{L^2}^2+\|\nabla D^lu(t)\|_{L^2}^2)+2\|\nabla D^lu(t)\|_{L^2}^2\\[3mm] =&\D-\sum_{j=1}^3\int D^l(uu_{x_j})D^lu{\rm d}x-\sum_{j=1}^3\int D^l\nabla(uu_{x_j})D^l\nabla u{\rm d}x =:I_1+I_2. \end{array}\eqno(2.4) $$ For $I_1$, by using H\"{o}lder's inequality and Lemma 2.1, we have $$\arraycolsep=1.5pt \begin{array}[b]{rl} I_1=&\D-\sum_{j=1}^3\int_{\mathbb{R}^3}D^l(uu_{x_j})D^lu{\rm d}x=-\sum_{j=1}^3\sum_{0\leq k\leq l}\int_{\mathbb{R}^3}(D^kuD^{l-k}u_{x_j})D^lu{\rm d}x\\[3mm] \leq &\D C \sum_{0\leq k\leq l}\|(D^kuD^{l-k+1}u)(t)\|_{L^{\frac{6}{5}}}\|D^lu(t)\|_{L^6}\\[3mm] \leq &\D C \sum_{0\leq k\leq l}\|(D^kuD^{l-k+1}u)(t)\|_{L^{\frac{6}{5}}}\|D^{l+1}u(t)\|_{L^2}. \end{array} \eqno(2.5) $$ When $k\leq[\frac{l}{2}]$, by using H\"{o}lder's inequality and Lemma 2.1, we get $$\arraycolsep=1.5pt \begin{array}[b]{rl} \|(D^kuD^{l-k+1}u)(t)\|_{L^{\frac{6}{5}}}\leq &\D C\|D^ku(t)\|_{L^3}\|D^{l-k+1}u(t)\|_{L^2}\\[3mm] \leq &\D C\|D^mu(t)\|_{L^2}^{1-\frac{k}{l+1}}\|D^{l+1}u(t)\|_{L^2}^{\frac{k}{l+1}}\|u(t)\|_{L^2}^{\frac{k}{l+1}}\|D^{l+1}u(t)\|_{L^2}^{1-\frac{k}{l+1}}\\[3mm] \leq &\D C(\|D^mu(t)\|_{L^2}+\|u(t)\|_{L^2})\|D^{l+1}u(t)\|_{L^2}\leq \frac{1}{12}\|D^{l+1}u(t)\|_{L^2},\end{array} \eqno(2.6) $$ where we have used the fact $\|u(t)\|_{H^N}\leq CE_0\ll1$ from Proposition 1.1, and $m$ satisfies $$ \frac{k}{3}-\frac{1}{3}=(\frac{m}{3}-\frac{1}{2})\times(1-\frac{k}{l+1})+(\frac{l+1}{3}-\frac{1}{2})\times\frac{k}{l+1}.\eqno(2.7) $$ As a result, since $k\leq[\frac{l}{2}]$, we have $$ m=\frac{l+1}{2(l+1-k)}\in[\frac{1}{2},1).
$$ When $k\geq[\frac{l}{2}]+1$, from H\"{o}lder's inequality and Lemma 2.1 again, we find $$\arraycolsep=1.5pt \begin{array}[b]{rl} \|(D^kuD^{l+1-k}u)(t)\|_{L^{\frac{6}{5}}}\leq &\D C\|D^ku(t)\|_{L^2}\|D^{l+1-k}u(t)\|_{L^3}\\[3mm] \leq &\D C\|u(t)\|_{L^2}^{1-\frac{k}{l+1}}\|D^{l+1}u(t)\|_{L^2}^{\frac{k}{l+1}}\|D^mu(t)\|_{L^2}^{\frac{k}{l+1}}\|D^{l+1}u(t)\|_{L^2}^{1-\frac{k}{l+1}}\\[3mm] \leq &\D C(\|D^mu(t)\|_{L^2}+\|u(t)\|_{L^2})\|D^{l+1}u(t)\|_{L^2}\leq \frac{1}{4}\|D^{l+1}u(t)\|_{L^2}, \end{array} \eqno(2.8) $$ where $m$ is defined by $$ \frac{l+1-k}{3}-\frac{1}{3}=(\frac{m}{3}-\frac{1}{2})\times\frac{k}{l+1}+(\frac{l+1}{3}-\frac{1}{2})\times(1-\frac{k}{l+1}), $$ that is, $m=\frac{l+1}{2k}\in(\frac{1}{2},1]$ since $k\geq\frac{l+1}{2}$. \\ From (2.5), (2.6) and (2.8), we have $$ I_1=-\sum_{j=1}^3\int D^l(uu_{x_j})D^lu{\rm d}x\leq \frac{1}{4}\|D^{l+1}u(t)\|_{L^2}^2.\eqno(2.9) $$ The estimate of $I_2$ is obtained in exactly the same way as that of $I_1$, with $l$ replaced by $l+1$. In fact, we have $$ I_2=-\sum_{j=1}^3\int D^l\nabla(uu_{x_j})D^l\nabla u{\rm d}x\leq \frac{1}{4}\|D^{l+1}u(t)\|_{L^2}^2.\eqno(2.10) $$ Thus, from (2.4), (2.9) and (2.10), we get the Lyapunov-type estimate (2.1). This completes the proof of Proposition 2.1. \qed \section*{ 3.\ Decay results with initial perturbation in $L^1(\mathbb{R}^n)$} In this section, we will give optimal decay results by the Fourier splitting method introduced in \cite{Schonbek1,Schonbek2} together with energy estimates. Theorem 1.1 will be proved via the following propositions. First, a straightforward application of the Fourier splitting method yields the optimal $L^2$-norm time-decay rate of solutions as follows. \bigbreak \noindent\textbf{Proposition 3.1.} If the initial data $u_0(x)\in L^1(\mathbb{R}^n)\cap L^2(\mathbb{R}^n)$ with $n\geq1$, one has $$ \|u(t)\|_{L^2(\mathbb{R}^n)}^2\leq C (\|u_0\|_{L^1(\mathbb{R}^n)}^2+\|u_0\|_{L^2(\mathbb{R}^n)}^2)(1+t)^{-\frac{n}{2}}.
$$ \noindent\textbf{Proof.} First, multiplying $(1.7)_1$ by $u$ and integrating over $\mathbb{R}^n$, we obtain $$\arraycolsep=1.5pt \begin{array}[b]{rl} \D\frac{1}{2}\frac{\rm d}{{\rm d}t}\int_{\mathbb{R}^n} u^2{\rm d}x =&\D\int_{\mathbb{R}^n}\sum_{j=1}^nf_j(u)\partial_{x_j}u{\rm d}x-\int_{\mathbb{R}^n} u^2{\rm d}x+\int_{\mathbb{R}^n} u(1-\Delta)^{-1}u{\rm d}x\\[3mm] =&\D-\int_{\mathbb{R}^n} u^2{\rm d}x+\int_{\mathbb{R}^n} u(1-\Delta)^{-1}u{\rm d}x. \end{array} \eqno(3.1) $$ By the Plancherel theorem, we get $$\arraycolsep=1.5pt \begin{array}[b]{rl} &\D\frac{\rm d}{{\rm d}t}\int_{\mathbb{R}^n}|\hat{u}|^2{\rm d}\xi=-\int_{\mathbb{R}^n}|\hat{u}|^2{\rm d}\xi +\int_{\mathbb{R}^n}\frac{1}{1+|\xi|^2}|\hat{u}|^2{\rm d}\xi\\[3mm] = &\D-\int_{\mathbb{R}^n}|\hat{u}|^2{\rm d}\xi+\int_{|\xi|\leq\sqrt{\frac{n}{t}}}\frac{1}{1+|\xi|^2}|\hat{u}|^2{\rm d}\xi +\int_{|\xi|>\sqrt{\frac{n}{t}}}\frac{1}{1+|\xi|^2}|\hat{u}|^2{\rm d}\xi\\[3mm] \leq &\D-\int_{\mathbb{R}^n}|\hat{u}|^2{\rm d}\xi+\int_{|\xi|>\sqrt{\frac{n}{t}}}\frac{t}{n+t}|\hat{u}|^2{\rm d}\xi +\int_{|\xi| \leq\sqrt{\frac{n}{t}}}\frac{1}{1+|\xi|^2}|\hat{u}|^2{\rm d}\xi\\[3mm] = &\D-\frac{n}{n+t}\int_{\mathbb{R}^n}|\hat{u}|^2{\rm d}\xi+\int_{|\xi|\leq\sqrt{\frac{n}{t}}}\frac{1}{1+|\xi|^2}|\hat{u}|^2{\rm d}\xi -\frac{t}{n+t}\int_{|\xi|\leq\sqrt{\frac{n}{t}}}|\hat{u}|^2{\rm d}\xi. \end{array} \eqno(3.2) $$ We rewrite (3.2) as follows.
$$ \frac{\rm d}{{\rm d}t}\int_{\mathbb{R}^n}|\hat{u}|^2{\rm d}\xi+\frac{n}{n+t}\int_{\mathbb{R}^n}|\hat{u}|^2{\rm d}\xi\leq\int_{|\xi|\leq\sqrt{\frac{n}{t}}}\left(\frac{1}{1+|\xi|^2} -\frac{t}{n+t}\right)|\hat{u}|^2{\rm d}\xi, $$ which, multiplied by $(n+t)^n$, yields $$\arraycolsep=1.5pt \begin{array}[b]{rl} \dis \frac{\rm d}{{\rm d}t}\left[(n+t)^n\int_{\mathbb{R}^n}|\hat{u}|^2{\rm d}\xi\right] \leq &\D (n+t)^n\int_{|\xi|\leq\sqrt{\frac{n}{t}}}\left(\frac{1}{1+|\xi|^2}-\frac{t}{n+t}\right)|\hat{u}|^2{\rm d}\xi\\[3mm] \leq &\D(n+t)^n\|\hat{u}(t)\|_{L^\infty(\mathbb{R}_\xi^n)}^2\int_{|\xi|\leq\sqrt{\frac{n}{t}}}\left(1-\frac{t}{n+t}\right){\rm d}\xi\\[3mm] \leq &\D C\|u(t)\|_{L^1(\mathbb{R}^n)}^2(n+t)^{n-1}(n+t)^{-\frac{n}{2}}\\[3mm] \leq &\D C\|u_0\|_{L^1(\mathbb{R}^n)}^2(n+t)^{\frac{n}{2}-1}, \end{array} \eqno(3.3) $$ where we have used $\|\hat{u}(t)\|_{L^\infty(\mathbb{R}_\xi^n)}\leq\|u(t)\|_{L^1(\mathbb{R}^n)}$ and the fact $\|u(t)\|_{L^1(\mathbb{R}^n)}\leq \|u_0\|_{L^1(\mathbb{R}^n)}$ in \cite{Francesco1,Gao1,Gao2}.\\ Integrating (3.3) with respect to $t$, we have $$ \|u(t)\|_{L^2(\mathbb{R}^n)}^2\leq C(\|u_0\|_{L^1(\mathbb{R}^n)}^2+\|u_0\|_{L^2(\mathbb{R}^n)}^2)(n+t)^{-\frac{n}{2}}.\eqno(3.4) $$ This proves Proposition 3.1. \qed \bigbreak In the proof of Proposition 3.1, we used the essential fact that $\D \int_{\mathbb{R}^n}\sum_{j=1}^nf_j(u)u_{x_j}{\rm d}x$ in (3.1) is equal to 0. However, for the derivatives of $u$, the Fourier splitting method above cannot be applied directly, for the following reason. Similarly to (3.1), we have $$\arraycolsep=1.5pt \begin{array}[b]{rl} \D \frac{1}{2}\frac{\rm d}{{\rm d}t}\int_{\mathbb{R}^n} \!|D^lu|^2{\rm d}x\! =\!&\D\!\int_{\mathbb{R}^n}\!\sum_{j\!=\!1}^nD^lf_j(u)_{x_j}D^lu{\rm d}x\!-\!\int_{\mathbb{R}^n} |D^lu|^2{\rm d}x\!+\!\int_{\mathbb{R}^n} \!D^lu(1\!-\!\Delta)^{\!-\!1}D^lu{\rm d}x.
\end{array} \eqno(3.5) $$ However, the term $\D\int_{\mathbb{R}^n}\sum_{j=1}^nD^lf_j(u)_{x_j}D^lu{\rm d}x$ on the RHS of (3.5) does not vanish. To control it, we use the existence result and the smallness of solutions in the $H^N$ space stated in Proposition 1.1. \bigbreak \noindent\textbf{Proposition 3.2.} Let $1\leq n\leq 4$. Suppose $\|u_0\|_{H^N(\mathbb{R}^n)}\ll1$. If $u_0(x)\in L^1(\mathbb{R}^n)$ and $D^lu_0(x)\in L^2(\mathbb{R}^n)$, one has $$ \|D^lu(t)\|_{L^2(\mathbb{R}^n)}^2\leq C (\|u_0\|_{L^1(\mathbb{R}^n)}^2+\|D^lu_0\|_{L^2(\mathbb{R}^n)}^2)(1+t)^{-\frac{n}{2}-l}, \ l=1,2,\cdots,N. \eqno(3.6) $$ \noindent\textbf{Proof.} Applying $D^l\ (1\leq l\leq N)$ to $(1.7)_1$, and multiplying the resulting equality by $D^lu$, then integrating it with respect to $x$ over $\mathbb{R}^n$, one has $$\arraycolsep=1.5pt \begin{array}[b]{rl} \D\frac{1}{2}\frac{\rm d}{{\rm d}t}\int_{\mathbb{R}^n} |D^lu|^2{\rm d}x \!=\!-\!\int_{\mathbb{R}^n}\sum_{j\!=\!1}^nD^lf_j(u)_{x_j}D^lu{\rm d}x\!-\!\int_{\mathbb{R}^n}|D^lu|^2{\rm d}x\!+\!\int_{\mathbb{R}^n} D^lu(1\!-\!\Delta)^{-\!1}D^lu{\rm d}x. \end{array} \eqno(3.7) $$ As mentioned above, to use the Fourier splitting method as in Proposition 3.1, we first have to show $$ \left|\int_{\mathbb{R}^n}\sum_{j=1}^nD^lf_j(u)_{x_j}D^lu{\rm d}x\right|\leq \delta_0\int_{\mathbb{R}^n} |D^lu|^2{\rm d}x,\eqno(3.8) $$ where $0\leq\delta_0\ll1$. In the following, we only prove (3.8) for $3\leq l\leq N$; for $l=1,2$, the proof of (3.8) is similar and omitted.
Without loss of generality, let $f_j(u)=u^2$, one has $$\arraycolsep=1.5pt \begin{array}[b]{rl} &\D\int_{\mathbb{R}^n} D^lf_j(u)_{x_j}D^lu{\rm d}x\\[3mm] =&\D\int_{\mathbb{R}^n} uD^lu_{x_j}D^lu{\rm d}x +\int_{\mathbb{R}^n} DuD^{l-1}u_{x_j}D^lu{\rm d}x +\sum_{2\leq k\leq(l-1)}\int_{\mathbb{R}^n} C_k^lD^k uD^{l+1-k}uD^lu{\rm d}x\\[3mm] \leq &\D 2\|Du(t)\|_{L^\infty(\mathbb{R}^n)}\|D^lu(t)\|_{L^2(\mathbb{R}^n)}^2 +C\sum_{2\leq k\leq(l-1)}\int_{\mathbb{R}^n} D^k uD^{l+1-k}uD^lu{\rm d}x\\[3mm] \leq &\D2\|Du(t)\|_{L^\infty(\mathbb{R}^n)}\|D^lu(t)\|_{L^2(\mathbb{R}^n)}^2 +C\|D^k u(t)\|_{L^4(\mathbb{R}^n)}\|D^{l-k}u_{x_j}(t)\|_{L^4(\mathbb{R}^n)}\|D^lu(t)\|_{L^2(\mathbb{R}^n)}\\[3mm] \leq &\D 2\|Du(t)\|_{L^\infty(\mathbb{R}^n)}\|D^lu(t)\|_{L^2(\mathbb{R}^n)}^2\\[3mm] & \D +C\|u(t)\|_{L^2(\mathbb{R}^n)}^{1-\theta_1}\|D^lu(t)\|_{L^2(\mathbb{R}^n)}^{\theta_1}\|u(t)\|_{L^2(\mathbb{R}^n)}^{1-\theta_2}\|D^lu(t)\|_{L^2(\mathbb{R}^n)}^{\theta_2}\|D^lu(t)\|_{L^2(\mathbb{R}^n)}\\[3mm] \leq &\D 2\|Du(t)\|_{L^\infty(\mathbb{R}^n)}\|D^lu(t)\|_{L^2(\mathbb{R}^n)}^2+C\|u(t)\|_{L^2(\mathbb{R}^n)}^{2-(\theta_1+\theta_2)}\|D^lu(t)\|_{L^2(\mathbb{R}^n)}^{\theta_1+\theta_2+1}\\[3mm] =&\D 2\|Du(t)\|_{L^\infty(\mathbb{R}^n)}\|D^lu(t)\|_{L^2(\mathbb{R}^n)}^2+C\|u(t)\|_{L^2(\mathbb{R}^n)}^{2-(\theta_1+\theta_2)}\|D^lu(t)\|_{L^2(\mathbb{R}^n)}^{\theta_1+\theta_2-1}\|D^lu(t)\|_{L^2(\mathbb{R}^n)}^2, \end{array} \eqno(3.9) $$ where $C_k^l=\left( \begin{array}{c} l \\ k \\ \end{array} \right)$ and $$ \theta_1=\frac{k+\frac{n}{4}}{l}\ {\rm and}\ \theta_2=\frac{l-k+\frac{n}{4}+1}{l}.\eqno(3.10) $$ By noticing $1\leq n\leq 4$ and $2\leq k\leq l-1$, from (3.10), we know $0<\theta_1\leq1$ and $0<\theta_2\leq1$. 
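For the reader's convenience, these bounds follow directly from (3.10): since $2\leq k\leq l-1$ and $1\leq n\leq4$,
$$
0<\theta_1=\frac{k+\frac{n}{4}}{l}\leq\frac{(l-1)+1}{l}=1,\qquad
0<\theta_2=\frac{l-k+\frac{n}{4}+1}{l}\leq\frac{(l-2)+1+1}{l}=1.
$$
Moreover, $\theta_1+\theta_2=\frac{l+\frac{n}{2}+1}{l}$, so that in the last line of (3.9) the exponents satisfy $\theta_1+\theta_2-1=\frac{\frac{n}{2}+1}{l}>0$ and $2-(\theta_1+\theta_2)=\frac{l-\frac{n}{2}-1}{l}\geq0$ for $3\leq l\leq N$ and $1\leq n\leq4$.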
Then, it follows from (3.9) and the fact $\|u(t)\|_{H^N}\ll1$ that $$ \int_{\mathbb{R}^n} D^lf_j(u)_{x_j}D^lu{\rm d}x\leq \frac{1}{2}\|D^lu(t)\|_{L^2(\mathbb{R}^n)}^2.\eqno(3.11) $$ Combining (3.11) and (3.7), we have $$ \frac{\rm d}{{\rm d}t}\int_{\mathbb{R}^n} (D^lu)^2{\rm d}x \leq -\int_{\mathbb{R}^n} (D^lu)^2{\rm d}x+\int_{\mathbb{R}^n} D^lu(1-\Delta)^{-1}D^lu{\rm d}x. \eqno(3.12) $$ From the Plancherel theorem, one has $$\arraycolsep=1.5pt \begin{array}[b]{rl} &\D\frac{\rm d}{{\rm d}t}\int_{\mathbb{R}^n}|\xi|^{2l}|\hat{u}|^2{\rm d}\xi \leq-\int_{\mathbb{R}^n}|\xi|^{2l}|\hat{u}|^2{\rm d}\xi +\int_{\mathbb{R}^n}\frac{1}{1+|\xi|^2}|\xi|^{2l}|\hat{u}|^2{\rm d}\xi\\[3mm] \leq &\D\!-\!\int_{\mathbb{R}^n}|\xi|^{2l}|\hat{u}|^2{\rm d}\xi+\int_{|\xi|>\sqrt{\frac{n+2l}{t}}}\frac{t}{n+2l+t}|\xi|^{2l}|\hat{u}|^2{\rm d}\xi +\int_{|\xi|\leq\sqrt{\frac{n+2l}{t}}}\frac{1}{1+|\xi|^2}|\xi|^{2l}|\hat{u}|^2{\rm d}\xi\\[3mm] \leq &\D\!-\frac{n\!+\!2l}{n\!+\!2l\!+\!t}\int_{\mathbb{R}^n}|\xi|^{2l}|\hat{u}|^2{\rm d}\xi\!+\!\int_{|\xi|\!\leq\!\sqrt{\frac{n\!+\!2l}{t}}}\frac{1}{1\!+\!|\xi|^2}|\xi|^{2l}|\hat{u}|^2{\rm d}\xi \!-\frac{t}{n\!+\!2l\!+\!t}\int_{|\xi|\!\leq\!\sqrt{\frac{n\!+\!2l}{t}}}|\xi|^{2l}|\hat{u}|^2{\rm d}\xi. \end{array} \eqno(3.13) $$ We rewrite (3.13) as $$\arraycolsep=1.5pt \begin{array}[b]{rl} &\D\frac{\rm d}{{\rm d}t}\int_{\mathbb{R}^n}|\xi|^{2l}|\hat{u}|^2{\rm d}\xi+\frac{n+2l}{n+2l+t}\int_{\mathbb{R}^n}|\xi|^{2l}|\hat{u}|^2{\rm d}\xi\\[3mm] \leq &\D\int_{|\xi| \leq\sqrt{\frac{n+2l}{t}}}\left(\frac{1}{1+|\xi|^2} -\frac{t}{n+2l+t}\right)|\xi|^{2l}|\hat{u}|^2{\rm d}\xi. 
\end{array} \eqno(3.14) $$ Consequently, multiplying (3.14) by $(n+2l+t)^{n+2l}$, one has $$\arraycolsep=1.5pt \begin{array}[b]{rl} &\D\frac{\rm d}{{\rm d}t}\left[(n+2l+t)^{n+2l}\int_{\mathbb{R}^n}|\xi|^{2l}|\hat{u}|^2{\rm d}\xi\right]\\[3mm] \leq &\D(n+2l+t)^{n+2l}\int_{|\xi|\leq\sqrt{\frac{n+2l}{t}}}\left(\frac{1}{1+|\xi|^2}-\frac{t}{n+2l+t}\right)|\xi|^{2l}|\hat{u}|^2{\rm d}\xi\\[3mm] \leq &\D(n+2l+t)^{n+2l}\|\hat{u}(t)\|_{L^\infty(\mathbb{R}_\xi^n)}^2\int_{|\xi|\leq\sqrt{\frac{n+2l}{t}}}\left(1-\frac{t}{n+2l+t}\right)|\xi|^{2l}{\rm d}\xi\\[3mm] \leq &\D C\|u(t)\|_{L^1(\mathbb{R}^n)}^2(n+2l+t)^{n+2l-1}(n+2l+t)^{-\frac{n+2l}{2}}\\[3mm] \leq &\D C\|u_0\|_{L^1(\mathbb{R}^n)}^2(n+2l+t)^{\frac{n+2l}{2}-1}. \end{array} \eqno(3.15) $$ Integrating (3.15) with respect to $t$ over $(0,t)$, we find $$ \|D^lu(t)\|_{L^2(\mathbb{R}^n)}^2=\||\xi|^l\hat{u}(t)\|_{L^2(\mathbb{R}_\xi^n)}^2\leq C(\|u_0\|_{L^1(\mathbb{R}^n)}+\|D^lu_0\|_{L^2(\mathbb{R}^n)})(n+2l+t)^{-\frac{n+2l}{2}}.\eqno(3.16) $$ Then, we get (3.6). \qed \section*{ 4.\ Decay results with initial perturbation in $\dot{H}^{-s}(\mathbb{R}^3)$} This section is devoted to the optimal $L^p$-$L^2(\mathbb{R}^3)$ decay rates of solutions to (1.1)-(1.2) when the initial data is in the negative Sobolev space $\dot{H}^{-s}(\mathbb{R}^3)$ with $s\in[0,\frac{3}{2})$. The following lemma plays a key role in the proof of Theorem 1.2. It shows an energy estimate of the solutions in the negative Sobolev space $\dot{H}^{-s}(\mathbb{R}^3)$. 
Namely, we have \noindent\textbf{Lemma 4.1.} If $\mathcal{E}_0:=\|u_0\|_{H^N}\ll1$, for $s\in(0,\frac{1}{2}]$, we have $$\arraycolsep=1.5pt \begin{array}[b]{rl} &\D \frac{\rm d}{{\rm d}t}\int\left(|\Lambda^{-s}u|^2+|\Lambda^{-s}\nabla u|^2\right){\rm d}x+\int|\nabla\Lambda^{-s}u|^2{\rm d}x\\[3mm] \leq &\D C(\|Du(t)\|_{H^1}^2+\|D^2u(t)\|_{H^1}^2)(\|\Lambda^{-s}u(t)\|_{L^2}+\|\Lambda^{-s}\nabla u(t)\|_{L^2}); \end{array} \eqno(4.1) $$ and for $s\in(\frac{1}{2},\frac{3}{2})$, we have $$ \arraycolsep=1.5pt \begin{array}[b]{rl} &\D \frac{\rm d}{{\rm d}t}\int\left(|\Lambda^{-s}u|^2+|\Lambda^{-s}\nabla u|^2\right){\rm d}x+\int|\nabla\Lambda^{-s}u|^2{\rm d}x\\[3mm] \leq &\D C(\|u(t)\|_{L^2}^{s-\frac{1}{2}}\|Du(t)\|_{L^2}^{\frac{5}{2}-s}+\|Du(t)\|_{L^2}^{s-\frac{1}{2}}\|D^2u(t)\|_{L^2}^{\frac{5}{2}-s}) (\|\Lambda^{-s}u(t)\|_{L^2}+\|\Lambda^{-s}\nabla u(t)\|_{L^2}). \end{array} \eqno(4.2) $$ \noindent\textbf{Proof.} Applying $\Lambda^{-s}$ to $(2.2)_1$ and multiplying the resulting identity by $\Lambda^{-s}u$, and integrating over $\mathbb{R}^3$ by parts, we get $$\arraycolsep=1.5pt \begin{array}[b]{rl} &\D\frac{1}{2}\frac{\rm d}{{\rm d}t}\int\left(|\Lambda^{-s}u|^2+|\Lambda^{-s}\nabla u|^2\right){\rm d}x+\int|\nabla\Lambda^{-s}u|^2{\rm d}x\\[3mm] =&\D-\sum_{j=1}^3\left\{\int\Lambda^{-s}u\Lambda^{-s}f_j(u)_{x_j}{\rm d}x-\int\Lambda^{-s}u\Delta\Lambda^{-s}f_j(u)_{x_j}{\rm d}x\right\}\\[3mm] :=&\D J_1+J_2. 
\end{array} \eqno(4.3) $$ For $J_1$, using the H\"{o}lder inequality, Lemma 2.1, Lemma 2.3 and Young's inequality, we have $$\arraycolsep=1.5pt \begin{array}[b]{rl} J_1\leq &\D C\|\Lambda^{-s}u(t)\|_{L^2}\sum_{j=1}^3\|\Lambda^{-s}f_j(u)_{x_j}(t)\|_{L^2}\\[3mm] \leq &\D C\|\Lambda^{-s}u(t)\|_{L^2}\sum_{j=1}^3\|f_j(u)_{x_j}(t)\|_{L^{\frac{1}{\frac{1}{2}+\frac{s}{3}}}} \leq C\|\Lambda^{-s}u(t)\|_{L^2}\|\nabla u(t)\|_{L^2}\|u(t)\|_{L^{\frac{3}{s}}}\\[3mm] \leq &\D C\|\Lambda^{-s}u(t)\|_{L^2}\|\nabla u(t)\|_{L^2}\|\nabla u(t)\|_{L^2}^{\frac{1}{2}+s}\|D^2u(t)\|_{L^2}^{\frac{1}{2}-s}\\[3mm] \leq &\D C(\|Du(t)\|_{L^2}^2+\|D^2u(t)\|_{L^2}^2)\|\Lambda^{-s} u(t)\|_{L^2}. \end{array} \eqno(4.4) $$ Here we have used the facts $\frac{1}{2}+\frac{s}{3}<1$ and $\frac{3}{s}\geq6$. \\ Similarly, it holds that $$ J_2\leq C(\|D^2u(t)\|_{L^2}^2+\|D^3u(t)\|_{L^2}^2)\|\Lambda^{-s}\nabla u(t)\|_{L^2}.\eqno(4.5) $$ Combining (4.4) and (4.5), we get (4.1).\\ Next, we want to prove (4.2). A direct calculation similar to (4.4) gives $$ \arraycolsep=1.5pt \begin{array}[b]{rl} J_1 \leq &\D C\|\Lambda^{-s}u(t)\|_{L^2}\sum_{j=1}^3\|\Lambda^{-s}f_j(u)_{x_j}(t)\|_{L^2}\\[3mm] \leq &\D C\|\Lambda^{-s}u(t)\|_{L^2}\sum_{j=1}^3\|f_j(u)_{x_j}(t)\|_{L^{\frac{1}{\frac{1}{2}+\frac{s}{3}}}}\\[3mm] \leq &\D C\|\Lambda^{-s}u(t)\|_{L^2}\|D u(t)\|_{L^2}\|u(t)\|_{L^{\frac{3}{s}}} \leq C\|\Lambda^{-s}u(t)\|_{L^2}\|Du(t)\|_{L^2} \|u(t)\|_{L^2}^{s-\frac{1}{2}}\|Du(t)\|_{L^2}^{\frac{3}{2}-s}\\[3mm] \leq &\D C\|u(t)\|_{L^2}^{s-\frac{1}{2}}\|Du(t)\|_{L^2}^{\frac{5}{2}-s}\|\Lambda^{-s} u(t)\|_{L^2}. \end{array} \eqno(4.6) $$ In the same way, one has $$ J_2\leq C\|Du(t)\|_{L^2}^{s-\frac{1}{2}}\|D^2u(t)\|_{L^2}^{\frac{5}{2}-s}\|\Lambda^{-s}\nabla u(t)\|_{L^2}. $$ Thus we complete the proof of Lemma 4.1. \qed With Proposition 1.1, Proposition 2.1 and Lemma 4.1 in hand, we are now ready to prove Theorem 1.2. \noindent\emph{Proof of Theorem 1.2 for the case of $s\in(0,\frac{1}{2}]$}. 
Define $\mathcal{E}_{-s}(t)=\|\Lambda^{-s}u(t)\|_{L^2}^2+\|\Lambda^{-s}\nabla u(t)\|_{L^2}^2$. Integrating (4.1) with respect to $t$, we find for $s\in(0,\frac{1}{2}]$, $$ \mathcal{E}_{-s}(t)\leq \mathcal{E}_{-s}(0)+C\int_0^t\|Du(\tau)\|_{H^2}^2\sqrt{\mathcal{E}_{-s}(\tau)}{\rm d}\tau.\eqno(4.7) $$ From (2.1), we have the integrability of $\|Du\|_{H^2}^2$ with respect to $t$. As a result, we have $$ \mathcal{E}_{-s}(t)\leq \mathcal{E}_{-s}(0)+C\sup\limits_{0\leq\tau\leq t}\sqrt{\mathcal{E}_{-s}(\tau)}\leq C(1+\sup\limits_{0\leq\tau\leq t}\sqrt{\mathcal{E}_{-s}(\tau)}). $$ This yields $\mathcal{E}_{-s}(t)\leq C_1$ with a positive constant $C_1$, that is $$ \|\Lambda^{-s}u(t)\|_{L^2}^2+\|\Lambda^{-s}q(t)\|_{L^2}^2\leq\|\Lambda^{-s}u(t)\|_{L^2}^2+\|\Lambda^{-s}\nabla u(t)\|_{L^2}^2\leq C_1.\eqno(4.8) $$ This proves (1.8) for $s\in(0,\frac{1}{2}]$. \\ Next, we recall the Lyapunov-type estimate (2.1): $$ \frac{\rm d}{{\rm d}t}\|D^lu(t)\|_{H^1}^2+\|D^{l+1} u(t)\|_{L^2}^2\leq 0,\ \ \ \ 0\leq l\leq N-1. $$ We may use Lemma 2.2 to obtain $$ \|D^{l+1}u(t)\|_{L^2}\geq C\|\Lambda^{-s}u(t)\|_{L^2}^{-\frac{1}{l+s}}\|D^lu(t)\|_{L^2}^{1+\frac{1}{l+s}}. $$ From the above inequality and (4.8) we get for each $l$ with $0\leq l\leq N-1$, $$\arraycolsep=1.5pt \begin{array}[b]{rl} \|D^{l+1}u(t)\|_{L^2}^2\geq &\D\frac{1}{2}\|D^{l+1}u(t)\|_{L^2}^2+\frac{1}{2}CC_1^{-\frac{1}{l+s}}\|D^lu(t)\|_{L^2}^{2(1+\frac{1}{l+s})}\\[3mm] \geq &\D C_2(\|D^{l+1}u(t)\|_{L^2}^2+\|D^lu(t)\|_{L^2}^2)^{1+\frac{1}{l+s}}, \end{array} \eqno(4.9) $$ where $C_2>0$ is a constant. Thus we deduce the following time differential inequality $$ \frac{\rm d}{{\rm d}t}\|D^lu(t)\|_{H^{N-l}}^2+C_2(\|D^lu(t)\|_{H^{N-l}}^2)^{1+\frac{1}{l+s}}\leq0,\ {\rm for}\ l=0,1,\cdots,N. 
$$ Integrating this inequality, one gets for some constant $C_3>0$ $$ \|D^lu(t)\|_{H^{N-l}}^2\leq C_3(1+t)^{-(l+s)},\ {\rm for}\ l=0,1,\cdots,N.\eqno(4.10) $$ $(2.2)_2$ and (4.10) yield $$ \|D^lq(t)\|_{H^{N-1-l}}\leq C_4(1+t)^{-\frac{l+s+1}{2}},\ {\rm for}\ l=0,\cdots,N-1. \eqno(4.11) $$ \noindent\emph{Proof of Theorem 1.2 for the case of $s\in(\frac{1}{2},\frac{3}{2})$}. First, from what we have proved above for (1.8)-(1.9) with $s=\frac{1}{2}$, the following decay result holds: $$ \|D^lu(t)\|_{H^{N-l}}^2\leq C_3(1+t)^{-l-\frac{1}{2}},\ {\rm for}\ l=0,1,\cdots,N.\eqno(4.12) $$ As a result, using (4.2), we have for $s\in(\frac{1}{2},\frac{3}{2})$, $$\arraycolsep=1.5pt \begin{array}[b]{rl} \mathcal{E}_{-s}(t)\leq &\D\mathcal{E}_{-s}(0)+C\int_0^t\|u(\tau)\|_{L^2}^{s-\frac{1}{2}}\|Du(\tau)\|_{L^2}^{\frac{5}{2}-s}\sqrt{\mathcal{E}_{-s}(\tau)}{\rm d}\tau\\[3mm] \leq &\D C_1+CC_3\int_0^t(1+\tau)^{-\frac{7}{4}+\frac{s}{2}}{\rm d}\tau\cdot\sup\limits_{0\leq\tau\leq t}\sqrt{\mathcal{E}_{-s}(\tau)}\\[3mm] \leq &\D C_5(1+\sup\limits_{0\leq\tau\leq t}\sqrt{\mathcal{E}_{-s}(\tau)}), \end{array} \eqno(4.13) $$ which yields (1.8) with $s\in(\frac{1}{2},\frac{3}{2})$. Finally, the proof of (1.9) with $s\in(\frac{1}{2},\frac{3}{2})$ can be treated in the same way as the case of $s\in(0,\frac{1}{2}]$ above. Thus, by taking $C_0=\max\limits_{1\leq i\leq 5}\{C_i\}$, we complete the proof of Theorem 1.2. \\ \\ {\bf Acknowledgement:} \ \ The research of Zhigang Wu was supported by NSFC (No. 11101112) and in part by NSFC (No. 11071162). The research of Wenjun Wang was supported by the Tian Yuan Fund of Mathematics in China (No.11126096), NSFC (No. 11201300), Shanghai university young teacher training program (No. slg11032) and in part by NSFC (No.11171220, No.11171212). \bibliographystyle{plain}
\section{INTRODUCTION}\label{sec:intro} Pulsars in binary systems with neutron star companions provide the best available laboratories for testing theories of gravity. To date, two such systems have been used for such tests: PSR~B1913+16 \citep{tw82, tw89}, and PSR~B1534+12 \citep{sac+98, sttw02}. Both are consistent with Einstein's general relativity (GR). The globular cluster M15 (NGC~7078) contains 8 known radio pulsars, the brightest of which are PSR~B2127+11A (hereafter M15A), a solitary pulsar with a 110.6\,ms spin period; PSR~B2127+11B (M15B), a solitary 56.1\,ms pulsar; and PSR~B2127+11C (M15C), a 30.5\,ms pulsar in a relativistic 8-hour orbit with another neutron star \citep{wkm+89, and92}. The Keplerian orbital parameters of M15C are nearly identical to those of PSR~B1913+16, though the former did not follow the standard high mass binary evolution \citep{agk+90}. With our data set spanning 12 years, M15C now provides a similar test of GR. \section{OBSERVATIONS AND ANALYSIS}\label{sec:obs} We observed M15 with the 305\,m Arecibo radio telescope from April 1989 to February 2001, with a gap in observations between February 1994 and December 1998 roughly corresponding to a major upgrade of the telescope. All observations used the 430\,MHz line feed, with 10\,MHz of bandwidth centered on 430\,MHz. Observations up to January 1994 were made with the Arecibo 3-level autocorrelation spectrometer (XCOR), which provided 128 lags in each of two circular polarizations and 506.625\,$\mu$s time resolution. The autocorrelation functions were transformed to provide 128 frequency channels across the band, and these data were dedispersed at the appropriate dispersion measure ($DM$) \citep{and92} and folded synchronously with the pulse period for each pulsar. Observations were broken into sub-integrations of 10 minutes for M15A and M15C, and 20 minutes for the fainter M15B. Beginning in January 1999, we used the Caltech Baseband Recorder (CBR) for data acquisition. 
This backend sampled the telescope signal in quadrature with 2-bit resolution and wrote the raw voltage data to tape for off-line analysis with the Hewlett-Packard Exemplar machine at the Caltech Center for Advanced Computing Research (CACR). After unpacking the data and correcting for quantization effects \citep{ja98}, we formed a virtual 32-channel filterbank in the topocentric reference frame with each channel coherently dedispersed at the dispersion measure ($DM$) of M15C \citep{hr75, jcpu97}. The coherent filterbank data were then dedispersed and folded for each pulsar as for the XCOR data. The folded pulse profiles were cross-correlated against a high signal-to-noise standard profile appropriate to the pulsar and backend (Fig. \ref{fig:m15_profiles}) to obtain an average pulse time of arrival (TOA) for each sub-integration, corrected to UTC(NIST). The standard pulsar timing package {\sc tempo}\footnote{http://pulsar.princeton.edu/tempo}, along with the Jet Propulsion Laboratory's DE405 ephemeris, was used for all timing analysis. TOA uncertainties estimated from the cross-correlation process were multiplied by a constant determined for each pulsar-instrument pair in order to obtain reduced $\chi^2 \simeq 1$. An arbitrary offset was allowed between the XCOR and CBR data sets to account for differences in instrumental delays and standard profiles. The timing models resulting from our analysis are presented in Table \ref{tab:par}, and post-fit TOA residuals relative to these models are shown in Figure \ref{fig:m15_residuals}. All stated and depicted uncertainties correspond to 68\% confidence. \section{DISCUSSION}\label{sec:discussion} \subsection{Post-Keplerian Observables for M15C}\label{sec:pk} In addition to the five usual Keplerian orbital parameters, in the case of M15C we have measured three post-Keplerian (PK) parameters: advance of periastron ($\dot\omega$), time dilation and gravitational redshift ($\gamma$), and orbital period derivative ($\dot P_b$). 
The dependence of these PK parameters on the Keplerian parameters and component masses depends on the theory of gravity; in GR, these relations are (see Taylor \& Weisberg, 1982; Damour \& Deruelle, 1986; and Damour \& Taylor, 1992):\nocite{tw82,dd86,dt92} \begin{eqnarray} \dot\omega &=& 3G^{2/3}c^{-2}\left(\frac{P_{b}}{2 \pi}\right)^{-5/3} (1-e^2)^{-1} M^{2/3}\,, \label{eq:omdot} \\ \gamma &=& G^{2/3}c^{-2}e\left(\frac{P_{b}}{2 \pi}\right)^{1/3} m_c(m_p + 2m_c) M^{-4/3}\,, \label{eq:gamma} \\ \dot P_{b} &=& -\,\frac{192\pi}{5}G^{5/3}c^{-5}\left(\frac{P_b}{2\pi}\right)^{-5/3}(1-e^2)^{-7/2} \left(1 + \frac{73}{24}e^2 + \frac{37}{96}e^4\right) m_p m_c M^{-1/3} \,, \label{eq:pbdot} \end{eqnarray} where $G$ is the gravitational constant and $c$ is the speed of light. The measurement of any two PK observables determines the component masses under the assumption that GR is the correct description of gravity; measuring the third parameter overdetermines the system and allows a consistency test of GR. \subsection{Kinematic Effects on Pulse Timing}\label{sec:kin} The rate of change of orbital period that we observe in the M15C system, $(\dot{P}_b)^{\rm obs}$, is corrupted by kinematic effects that must be removed to determine the intrinsic rate, $(\dot{P}_b)^{\rm int}$. Following the discussion of \citet{phi92b, phi93} regarding the parallel case of kinematic contributions to $\dot{P}$, we have \begin{equation}\label{eq:kin} \left(\frac{\dot{P_b}}{P_b}\right)^{\rm kin} = -\,\frac{v_0^2}{c R_0} \left(\cos b~\cos l + \frac{\delta - \cos b~\cos l}{1 + \delta^2 - 2\delta \cos b~\cos l}\right) + \frac{\mu^2 d}{c} - \frac{a_l}{c}, \end{equation} where $v_0 = 220 \pm 20$\,km\,s$^{-1}$ is the Sun's galactic rotation velocity, $R_0 = 7.7 \pm 0.7$\,kpc is the Sun's galactocentric distance, $\delta \equiv d/R_0$, $\mu$ is the proper motion, $d = 9.98 \pm 0.47$\,kpc is the distance to the pulsar \citep{mhb04}, and $a_l$ is the pulsar's line-of-sight acceleration within the cluster. 
The first term in equation (\ref{eq:kin}) is due to the pulsar's galactic orbital motion, the second to the secular acceleration resulting from the pulsar's transverse velocity \citep{shk70}, and the third to the cluster's gravitational field. Acceleration within the cluster may well dominate the kinematic contribution to $\dot{P}_b$, but $a_l$ is an odd function of the distance from the plane of the sky containing the cluster center to the pulsar, and since we do not know if M15C is in the nearer or further half of the cluster, we must use its expectation value, $\bar{a_l} = 0$. \citet{phi93} calculates a maximum value of $\left| a_l \right|_{\rm max} / c = 6 \times 10^{-18}$\,s$^{-1}$, too small for the observed $\dot{P}$ to provide a useful constraint. However, the unknown $a_l$ still dominates the uncertainty of $(\dot{P}_b)^{\rm kin}$; we take the median value of $0.71 \left| a_l \right| _{\rm max}$ as the uncertainty in $a_l$ \citep{phi92b}. Evaluating equation (\ref{eq:kin}), the total kinematic contribution is \begin{equation} \left(\dot P_b\right)^{\rm kin} = (-0.0095\pm0.12)\times10^{-12}\,, \end{equation} and subtracting this contamination from $(\dot P_b)^{\rm obs}$ yields the intrinsic value \begin{equation} \left(\dot P_b\right)^{\rm int} = (-3.95\pm0.13)\times10^{-12}\,. \end{equation} \subsection{Component Masses of M15C and a Test of General Relativity}\label{sec:mass} Solving equations (\ref{eq:omdot}) and (\ref{eq:gamma}) given the measured values of $\dot{\omega}$, $\gamma$, $P_b$, and $e$ gives $m_p = (1.358 \pm 0.010) M_\odot$, $m_c = (1.354 \pm 0.010) M_\odot$, and $M \equiv m_p + m_c = (2.71279 \pm 0.00013) M_\odot$ in the framework of GR (Fig. \ref{fig:m1m2GR}). This result is consistent with, and more precise than, previous mass measurements for the neutron stars in the M15C system \citep{pakw91, and92, dk96}. We note that these masses are consistent with the masses of double neutron star binaries observed in the field \citep{tc99, sta04}. 
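As an illustrative numerical cross-check of equations (\ref{eq:omdot})--(\ref{eq:pbdot}), the short Python sketch below evaluates the GR predictions for the component masses derived in this work. It is not part of the analysis pipeline; the Keplerian parameters $P_b\approx0.335282$\,d and $e\approx0.6814$ are assumed values taken from the published M15C timing solution, since Table \ref{tab:par} is not reproduced in this excerpt.

```python
import math

# Physical constants (SI units)
G = 6.674e-11          # gravitational constant [m^3 kg^-1 s^-2]
C = 2.998e8            # speed of light [m/s]
M_SUN = 1.989e30       # solar mass [kg]

def pk_parameters(pb, e, mp_sun, mc_sun):
    """GR predictions of eqs. (1)-(3): periastron advance [rad/s],
    redshift/time-dilation gamma [s], and orbital period derivative
    [dimensionless]."""
    mp, mc = mp_sun * M_SUN, mc_sun * M_SUN
    M = mp + mc
    x = pb / (2.0 * math.pi)                      # (P_b / 2 pi) [s]
    omdot = 3.0 * G**(2/3) * C**-2 * x**(-5/3) * M**(2/3) / (1.0 - e**2)
    gamma = G**(2/3) * C**-2 * e * x**(1/3) * mc * (mp + 2.0*mc) * M**(-4/3)
    f_e = (1.0 + (73/24)*e**2 + (37/96)*e**4) * (1.0 - e**2)**-3.5
    pbdot = (-192.0 * math.pi / 5.0) * G**(5/3) * C**-5 * x**(-5/3) \
            * f_e * mp * mc * M**(-1/3)
    return omdot, gamma, pbdot

# Assumed Keplerian parameters for M15C, masses from the GR mass solution
pb = 0.335282 * 86400.0        # orbital period [s]
e = 0.6814                     # orbital eccentricity
omdot, gamma, pbdot = pk_parameters(pb, e, 1.358, 1.354)
omdot_deg_yr = math.degrees(omdot) * 3.156e7   # rad/s -> deg/yr
```

With these inputs the predicted $\dot\omega$, $\gamma$, and $\dot P_b$ come out near $4.5^{\circ}\,{\rm yr}^{-1}$, $4.8$\,ms, and $-3.9\times10^{-12}$, respectively, the last of which is consistent with the intrinsic orbital period derivative quoted in \S\ref{sec:kin}.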
M15 is a metal-poor cluster with a mean metallicity [Fe/H] = -2.3 \citep{skp+91}, suggesting that the mass of neutron stars is not a strong function of the metallicity of their progenitors. Our determination of a third PK parameter gives a test of GR; $(\dot{P_b})^{\rm int}$ is $1.003 \pm 0.033$ times the predicted value. While M15C provides an impressive test of GR, it is less stringent than the $1\%$ $\dot{\omega}$-$\gamma$-$\dot{P_b}$ test provided by PSR~B1913+16 \citep{tw89} and the $0.5\%$ $\dot{\omega}$-$\gamma$-$s$ test provided by PSR~B1534+12 \citep{sttw02}, where $s \equiv \sin i$ is the shape parameter determined through measurement of Shapiro delay. We note that the uncertainty in the intrinsic orbital period decay is due almost entirely to the kinematic contribution, so further observations will not significantly improve our determination of $(\dot{P}_b)^{\rm int}$ or the quality of the test of GR it allows. \subsection{Proper Motion of M15}\label{sec:pm} The proper motions resulting from our timing analysis give absolute transverse velocities for M15A and M15C several times greater than the cluster escape velocity. The measured proper motions for these two pulsars and M15B are shown in Figure \ref{fig:m15_pm}, along with four published proper motion measurements for M15 based on optical astrometry. The pulsar proper motions are all consistent with each other; their average is $\mu_{\alpha} = (-1.0 \pm 0.4)\,{\rm mas\,yr}^{-1}$, $\mu_{\delta} = (-3.6 \pm 0.8)\,{\rm mas\,yr}^{-1}$. This result is in good agreement with the cluster measurement of \citet{ch93}. \subsection{Intrinsic Spin Period Derivatives} If we assume that GR provides the correct description of gravity, we can use $(\dot{P_b})^{\rm obs}$ to determine the total kinematic correction to $\dot{P_b}$ and hence, to $\dot{P}$ for M15C. 
We find \begin{equation}\label{eq:pbdotkin} \left(\frac{\dot{P_b}}{P_b}\right)^{\rm kin}_{\rm GR} = \left(\frac{\dot{P}}{P}\right)^{\rm kin}_{\rm GR} = \left( -8 \pm 17 \right) \times 10 ^{-19} \,{\rm s}^{-1}\,, \end{equation} which corresponds to $a_l / c = \left( 4 \pm 17 \right) \times 10 ^{-19} \, {\rm s}^{-1}$. We now apply this correction to the observed value of $\dot{P}$ and find the intrinsic value assuming GR, $(\dot{P})^{\rm int}_{\rm GR} = (0.00501 \pm 0.00005) \times 10^{-15}$. This intrinsic spindown rate allows us to improve upon the previous estimate of the pulsar's characteristic age and magnetic field strength \citep{and92}; we find $\tau_c = (0.097 \pm 0.001)$\,Gyr and $B_{\rm surf} = (1.237 \pm 0.006) \times 10^{10}$\,G. Our timing models for M15A and M15C include $\ddot{P}$ (Tab. \ref{tab:par}) which is unlikely to be intrinsic to the pulsars. For M15A in the cluster core, \citet{phi93} estimates the kinematic contribution to be $\left| \dot{a_l} / c \right| \equiv \left| \ddot{P} / P \right| \le 10^{-26}$\,s$^{-1}$ (80\% confidence). This is significantly larger than the observed $\left| \ddot{P} / P \right| \sim 3 \times 10^{-28}$\,s$^{-1}$, so the observed $\ddot{P}$ is consistent with the expected jerk resulting from the cluster's mean field and nearby stars. For M15C, far from the cluster core, we measure $\left| \ddot{P} / P \right| \sim 10^{-28}$\,s$^{-1}$. We note that our measurement of $\ddot{P}$ in M15C is not of high significance ($\sim\!2 \sigma$), and may be an artifact of the systematic trends apparent in our timing data (Fig. \ref{fig:m15_residuals}). \acknowledgments The Arecibo Observatory, a facility of the National Astronomy and Ionosphere Center, is operated by Cornell University under cooperative agreement with the National Science Foundation. We thank W. Deich for providing the pulsar data analysis package, {\sc psrpack}. BAJ and SRK thank NSF and NASA for supporting this research. 
Part of this research was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration and funded through the internal Research and Technology Development program. BAJ holds a National Research Council Research Associateship Award at the Naval Research Laboratory (NRL). Basic research in radio astronomy at NRL is supported by the Office of Naval Research.
\section{Introduction} Network-based technologies have permeated every aspect of our daily lives. However, the great benefits of network-based technologies come at a cost of increased vulnerability to malicious attacks. The consequences of a malicious attack on a network have become as devastating as ever. A well-executed attack on a power utility's network can leave millions of people without electricity. As a result, it has become vital to develop effective intrusion detection systems (IDS) that can ensure the safety of a network. Recent advances in machine learning algorithms have made it possible to develop IDS using artificial intelligence. Machine learning algorithms have the capability to independently build detection systems using only the raw traffic data. A number of studies have already shown the effectiveness of machine learning in detecting malicious attacks on a network. One of the key aspects of developing a detection system using machine learning is the selection of appropriate features of the data to use for model building. Given a dataset with a large number of features, identifying the right features for model building can dramatically improve the outcome. Machine learning is increasingly being used to automate many IDS processes that were previously done by hand, including discovering and adding suspicious URLs to the blocked list, adding new rules, creating connection exceptions, and others. If done manually, such activities represent a huge burden on the system administrator. One of the main advantages of machine learning is its scalability. The continued growth of network size and traffic, together with the increasing number of zero-day attacks and other online threats, has made automated machine learning IDS an indispensable tool in defending against malicious attacks. 
ML-based IDS has the capability to learn from previous intrusion attempts to identify the malicious traffic patterns associated with them; any future occurrence of these attacks will be identified and classified much faster than with human-based systems, regardless of the size of the network. In this paper, our goal is to study feature selection in network traffic data with the aim of detecting potential attacks. We consider various existing feature selection methods as well as propose a new feature selection algorithm to identify the most potent features in network traffic data. Our results have a dual application. First, we shed light on the key features in network traffic data that help detect a malicious signal. The properties of malicious signals identified by the feature selection methods can help IT professionals better protect their networks. Second, feature selection is used to develop effective machine learning-based IDS. We present several detection algorithms built with the help of feature selection methods that achieve remarkable performance with respect to detection and false alarm rates. To address the difference between continuous and discrete features, we propose a novel forward search algorithm that combines correlation and mutual information to select the optimal features. Concretely, we use the linear correlation coefficient to measure the redundancy between a new feature and the existing feature subset, and we use mutual information to measure the relevance of the new feature to the target variable. The final feature score is calculated as a weighted difference between the relevance and redundancy of the feature. Our paper is structured as follows. In Section 2, we present an overview of existing literature related to intrusion detection and machine learning. In Section 3, we provide the description of the proposed algorithm. 
In Section 4, we carry out a numerical analysis of the existing and the proposed feature selection techniques in the context of network security. Section 5 concludes the paper with a brief summary and discussion of future work. \section{Literature review} IDS use one of three detection methods to identify malicious activities. Signature-based detection compares a possible attack signature with the stored signatures of previous attacks \cite{Santoso}. This method has a low false positive rate and is very effective. However, if the attack signature does not exist in the signature database, the attack is unlikely to be detected. In other words, signature-based detection is not efficient against zero-day attacks. The second detection method is anomaly detection, in which any abnormal behavior that deviates from the regular baseline operation is detected \cite{Tama}. This method outperforms the signature-based approach in detecting zero-day attacks due to the uniqueness of the operational baseline for individual networks. Despite their advantages, anomaly detection methods suffer from high false positive rates because some of the detected baseline deviations are legitimate network behavior. The third detection method is hybrid detection. It is a combination of the above two methods used to alleviate the weaknesses of the signature-based and anomaly techniques. To reduce zero-day attacks and high false positive rates, multiple algorithms must be run concurrently to decide whether an event is anomalous while simultaneously matching its signature against previously recorded attacks \cite{Garg}. Machine learning methods are classified into two types based on the training method. First, supervised learning algorithms are designed to be trained on labeled data. 
There exists a variety of supervised algorithms that have been used in IDS, including Logistic Regression \cite{Shah}, Gaussian Naive Bayes \cite{Singh}, Support Vector Machine (SVM), Random Forest \cite{Ahmad} and others. The second type of ML algorithms are the unsupervised learning algorithms, which do not rely on labeled data for training and classification. Unsupervised algorithms have the ability to discover patterns and classify traffic automatically. Unsupervised algorithms are particularly useful when labeled training data is unavailable. A variety of unsupervised algorithms have been used in IDS, including K-Means Clustering, Single Linkage Clustering \cite{Su}, Quarter-sphere Support Vector Machine \cite{Rajasegarar} and others. Another important aspect of ML-based IDS is the ability to process a large amount of data quickly in order to detect attacks in real time. The classification process consumes a large amount of computational time due to the high number of features that must be classified. In fact, not all features are useful to the classification process: some features are noisy and degrade performance, while others are highly correlated with each other and could be omitted \cite{Zuech}. Feature selection techniques are used to improve classification performance; many researchers have tackled feature selection algorithms in order to minimize the number of selected features by choosing only the most significant ones \cite{Ambusaidi}. As a result, ML-based IDS can perform predictions more efficiently. Feature selection is an important preprocessing step in many machine learning tasks. One of the most popular selection techniques is the filter approach. It is a simple and efficient method that has been shown to perform effectively in a number of situations. In the filter approach, a univariate measure of feature importance is used to rank and select features. 
In the case of continuous features, commonly used metrics include correlation and $F$-statistics. For discrete features, researchers use the $\chi^2$-statistic and mutual information. In general, there exists a number of options for feature evaluation metrics. As shown in \cite{Kamalov4}, any measure that satisfies generalized similarity criteria can be employed in feature evaluation. In \cite{Amiri}, the authors consider the issue of feature selection in the context of IDS. They propose two separate forward search selection algorithms based on the linear correlation coefficient and mutual information. Our method differs crucially in that we propose to combine the two metrics into one. Recently, more advanced approaches to the basic univariate selection method have been developed. In \cite{Kamalov1}, the authors combine mutual information and the $\chi^2$-statistic into a vectorized feature score. In \cite{Kamalov2}, the authors employ Sobolev variance decomposition to evaluate features. The proposed method allows one to account for feature interactions under certain conditions. The authors in \cite{Kamalov3} reveal a monotonicity property of the $\chi^2$ statistic and use it to improve the feature selection process. They show that a new feature can be added to an existing feature subset only if the $\chi^2$ value increases beyond a certain threshold. \section{MICorr selection method} One of the main challenges in the filter approach is dealing with different types of variables. In particular, there is no consensus in the literature on how to deal with continuous input and discrete output variables. The relationship between continuous variables is usually measured based on correlation. On the other hand, discrete variables employ entropy-based metrics. Existing feature selection methods use a uniform approach to all variables, ignoring the data type. For instance, continuous variables are binned and treated as discrete. 
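To make the mixed-variable issue concrete, here is a minimal, self-contained sketch (the data and variable names are hypothetical) that measures a continuous feature against a binary benign/malicious label in both ways: Pearson correlation treats the label as a number, while mutual information requires the feature to be binned first.

```python
import math
import random
from collections import Counter

random.seed(0)

# Hypothetical data: a continuous feature (e.g. flow duration) shifted
# upward for malicious traffic, and a binary benign/malicious label.
labels = [random.randint(0, 1) for _ in range(2000)]
feature = [random.gauss(2.0 if t else 0.0, 1.0) for t in labels]

def pearson(x, y):
    """Sample Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = math.sqrt(sum((a - mx) ** 2 for a in x)) or 1.0
    vy = math.sqrt(sum((b - my) ** 2 for b in y)) or 1.0
    return cov / (vx * vy)

def mutual_information(x, y, bins=10):
    """MI (in nats) between a continuous feature and a discrete target,
    after equal-width binning of the feature."""
    lo, hi = min(x), max(x)
    width = (hi - lo) / bins or 1.0
    bx = [min(int((a - lo) / width), bins - 1) for a in x]
    n = len(x)
    pxy, px, py = Counter(zip(bx, y)), Counter(bx), Counter(y)
    return sum((c / n) * math.log((c / n) / ((px[a] / n) * (py[b] / n)))
               for (a, b), c in pxy.items())

r = pearson(feature, labels)
mi = mutual_information(feature, labels)
```

Both metrics flag the synthetic feature as informative, but they live on different scales and react differently to the binning step, which is precisely why combining them requires care.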
The problem of mixed variables arises prominently in the context of intrusion detection. Most of the variables inside network traffic data are continuous (flow duration, number of packets, length of packets, etc.), whereas the target variable is discrete (benign/malicious). Therefore, an effective solution of the problem has significant implications in intrusion detection. We propose a new method (MICorr) to address the above issue by combining the Pearson correlation and mutual information. Given a feature variable $x$ and target variable $y$, we calculate the feature score by taking the difference between the feature relevance and feature redundancy \begin{equation} \mbox{MICorr}(x) = \mbox{Rel}(x, y) - \mbox{Red}(x, S), \end{equation} where $\mbox{Rel}(x,y)$ is the mutual information between $x$ and the target $y$, and $\mbox{Red}(x,S)$ measures the correlation between $x$ and the subset $S$ of already selected features. The MICorr score attempts to simultaneously maximize the information shared between the feature and the target variable while minimizing the correlation with the already selected features. As a result, the selected feature will contribute the maximum amount of new information to the optimal feature subset. Let $F$ denote the full set of features. The algorithm for executing the proposed selection method is provided below: \\ \\ \textbf{Algorithm} \\ \line(1,0){150} \begin{enumerate} \item Initialize $S=\emptyset$. \item Choose $x_0$ with maximum Rel and add it to $S$. Remove feature $x_0$ from $F$. \item For each $x$ in $F$, calculate MICorr($x$) and select the feature $x_1$ with the highest score. Add $x_1$ to $S$ and remove $x_1$ from $F$. \item Repeat Step 3 until the desired number of features is selected. \end{enumerate} \section{Results and analysis} Our goal in this section is to analyze various feature selection methods in the context of intrusion detection. In addition, we demonstrate the effectiveness of the newly proposed feature selection method in identifying the characteristics of network traffic data that are relevant to intrusion detection.
We run our numerical experiments on the CSE-CIC-IDS2018 on AWS dataset, using a sample that consists of 10,000 benign and DDoS instances. The experiments reveal that feature selection helps build a potent IDS using machine learning techniques. \subsection{Data} The dataset used in our experiment is obtained from the collaborative project between the Communications Security Establishment and the Canadian Institute for Cybersecurity \cite{Sharafaldin}. The data is based on the creation of user profiles which contain abstract representations of events and behaviors seen on the network. It is based on a simulated LAN network topology that is common on the AWS computing platform. We extract a random sample of size 10,000 from the original data. The data is evenly distributed between benign and DDoS instances. Each instance of the dataset consists of 63 continuous features and a label. As shown in Table \ref{TP}, the features consist of various packet and IAT characteristics. Our goal is to apply machine learning techniques to identify the most relevant features for intrusion detection. \subsection{Feature selection methods} In our study of relevant features, we employ three standard selection methods: correlation-based univariate, MI-based univariate, and correlation-based forward search algorithms. The correlation (MI) based univariate methods simply rank features by their correlation (MI) with the target variable; the desired number of features is then selected based on the ranking. The forward search correlation-based method iteratively selects features based on maximum relevance and minimum redundancy. The relevance is calculated from the correlation between the feature and the target variable, while the redundancy is calculated from the correlation between the feature and the previously selected subset of features. The above algorithms are quick tools to process the data prior to classification. In addition, we apply the MICorr selection method to the IDS data.
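For reference, the MICorr forward search described above can be sketched as follows. The sketch is illustrative: it repeats small correlation and mutual information helpers so as to be self-contained, uses equal-width binning with an arbitrary bin count, and takes the redundancy term to be the mean absolute correlation with the already selected features.

```python
import math

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy) if sx > 0 and sy > 0 else 0.0

def mutual_info(x, y, n_bins=4):
    # bin the continuous feature, then compute MI with the discrete target
    lo, hi = min(x), max(x)
    width = (hi - lo) / n_bins or 1.0
    xb = [min(int((v - lo) / width), n_bins - 1) for v in x]
    n = len(y)
    px, py, pxy = {}, {}, {}
    for a, b in zip(xb, y):
        px[a] = px.get(a, 0) + 1
        py[b] = py.get(b, 0) + 1
        pxy[(a, b)] = pxy.get((a, b), 0) + 1
    return sum(c / n * math.log(c * n / (px[a] * py[b]))
               for (a, b), c in pxy.items())

def micorr_select(features, target, k):
    """Greedy forward search with score = Rel(x, y) - Red(x, S)."""
    remaining = list(range(len(features)))
    rel = {i: mutual_info(features[i], target) for i in remaining}
    # step 2: start from the most relevant feature
    selected = [max(remaining, key=lambda i: rel[i])]
    remaining.remove(selected[0])
    while len(selected) < k and remaining:
        def score(i):
            red = sum(abs(pearson(features[i], features[j]))
                      for j in selected) / len(selected)
            return rel[i] - red
        best = max(remaining, key=score)  # step 3
        selected.append(best)
        remaining.remove(best)
    return selected

# toy data: f0 separates the classes, f1 duplicates f0, f2 is uninformative
y  = [0] * 6 + [1] * 6
f0 = [0.0, 0.1, 0.2, 0.1, 0.0, 0.2, 0.8, 0.9, 1.0, 0.9, 0.8, 1.0]
f1 = list(f0)
f2 = [0.1, 0.9] * 6
print(micorr_select([f0, f1, f2], y, 2))  # → [0, 2]
```

In the toy data of the final lines, the redundant duplicate f1 is skipped in favor of the weakly correlated f2, which is exactly the intended effect of the redundancy term.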
\subsection{Results} The feature rankings based on various selection criteria are presented in Table \ref{tab:top}. The columns U-MI and U-Corr represent the univariate selection methods and FS-Corr represents the forward search correlation-based method. Note that MICorr and U-MI share more than half of the selected features. On the other hand, FS-Corr and U-Corr have only a few features in common. We also note that feature 6 (Fwd Packet Length Max) and feature 44 (Average Packet Size) are shared by three selection methods, indicating their potential significance in IDS. The top two features shared by MICorr and U-MI are 44 and 49 (Subflow Fwd Bytes). The analysis of the results in Table \ref{tab:top} reveals that features 6, 44, and 49 are the most important in helping detect DDoS attacks on a network. In networks, a flow is a series of packets from a source IP and port to a destination IP and port. A forward flow is when a packet moves from node A to node B, and a backward flow is when the reply for the packet moves from node B to node A. It is essential to notice that the Packet Length specifies the whole packet's size, including the header, trailer, and data, whereas the Packet Size determines only the size of the header on the packet. The results show that the difference in packet size and length between the attack packets and the legitimate packets can be used to distinguish attack traffic from legitimate traffic. Indeed, attackers usually generate small packets or even empty packets to reduce the resources required. Although the packets of communication protocols are also small, packets filled with user data, which are usually large, make up a considerable percentage of the legitimate traffic. \begin{table}[h!]
\centering \caption{The top 10 features selected by each feature selection method.} \label{tab:top} \begin{tabular}{lrrrrr} \toprule {Rank} & MICorr & U-MI & FS-Corr & U-Corr & Tree importance \\ \midrule 1 & 44 & 44 & 13 & 13 & 45 \\ 2 & 49 & 49 & 41 & 36 & 6 \\ 3 & 18 & 4 & 55 & 12 & 37 \\ 4 & 4 & 8 & 23 & 46 & 36 \\ 5 & 5 & 6 & 12 & 10 & 8 \\ 6 & 52 & 45 & 11 & 34 & 4 \\ 7 & 21 & 18 & 6 & 44 & 10 \\ 8 & 6 & 35 & 36 & 37 & 54 \\ 9 & 35 & 21 & 7 & 35 & 13 \\ 10 & 8 & 34 & 32 & 55 & 20 \\ \bottomrule \end{tabular} \end{table} Machine learning has been widely used in a number of applications, including the design of detection systems. Although there exists a number of powerful ML techniques, including Deep Learning and SVM, they are black-box models that do not reveal the internal decision-making process of the model. In contrast, Decision Trees and Random Forest offer competitive alternatives that allow users to analyze the internal structure of the model. Since understanding the decision-making process of an algorithm is critical in a security-related environment, we employ Random Forest as the ML model in our experiments. Random Forest is an ML algorithm based on a collection of Decision Trees. It is trained on multiple subsets of the original data, and the final prediction is made based on a voting criterion among all weak classifiers. As a result, we obtain a classifier with low bias and variance. For each feature selection algorithm, we used different subsets of selected features to train an RF model. The models were trained and tested on the IDS2018 dataset according to an 80/20 split. The performance of each subset was measured using model accuracy, precision, and recall. The results of the experiment are presented in Figure \ref{fig:results}. As shown in Figure \ref{fig:forward_MI}, an IDS model built with the top 6 MICorr-selected features is capable of reaching a $0.999$ accuracy rate.
Furthermore, the top 6 MICorr-selected features produce a $1.00$ precision and a $0.998$ recall rate. The results show that the proposed selection method is capable of identifying relevant features in network traffic data and training a powerful ML-based IDS. The other feature selection methods tested in the experiment produce similarly good results, albeit with a slightly higher number of features than MICorr. We note that the top 7 U-Corr selected features achieve a $1.00$ accuracy rate, but the top 6 features achieve only $0.96$ accuracy. \begin{figure}[h!] \begin{subfigure}{1\textwidth} \centering \includegraphics[width=1\linewidth]{forward_MI} \caption{Performance results using MICorr feature selection. } \label{fig:forward_MI} \end{subfigure} \newline \begin{subfigure}{1\textwidth} \centering \includegraphics[width=1\linewidth]{forward_correlation} \caption{Performance results using FS-Corr feature selection. The accuracy of the classifier increases with the size of the selected feature subset.} \label{fig:forward_correlation} \end{subfigure} \newline \begin{subfigure}{1\textwidth} \centering \includegraphics[width=1\linewidth]{univariate_correlation} \caption{Performance results using U-Corr feature selection. } \label{fig:univariate_correlation} \end{subfigure} \newline \begin{subfigure}{1\textwidth} \centering \includegraphics[width=1\linewidth]{univariate_MI} \caption{Performance results using U-MI feature selection.} \label{fig:univariate_MI} \end{subfigure} \caption{Performance comparison of various feature selection methods.} \label{fig:results} \end{figure} We apply the Decision Tree algorithm to construct an IDS model based on the features selected with the MICorr method. As shown in Figure \ref{mi_tree}, the model accurately classifies 99.9\% of all instances. We can also see that the most relevant features in the tree are Fwd Packet Length Max, Subflow Fwd Bytes, Average Packet Size, and Fwd IAT Max.
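For clarity, the accuracy, precision and recall figures reported above follow their standard definitions, with DDoS treated as the positive class. The following minimal sketch (with illustrative labels) computes them from predicted and true labels.

```python
def evaluate(y_true, y_pred, positive=1):
    """Accuracy, precision and recall, treating `positive` as the attack class."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    accuracy = correct / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return accuracy, precision, recall

# one attack instance missed and no false alarms: precision stays at 1.0
# while recall drops below 1.0
y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 0]
print(evaluate(y_true, y_pred))  # → (0.875, 1.0, 0.75)
```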
Besides the importance of the length and size of packets and bytes in distinguishing benign from malignant packets, the Decision Tree algorithm with MICorr shows that time (IAT Max) is also essential. Time-dependent sequential data retains the knowledge of the previous packet's effect on the current packet. For a particular time $t$, consecutive network packets are captured to form an input window. An algorithm trained on legitimate and malignant packets from historical data can then differentiate between benign and malignant incoming packets \cite{Diro}. \begin{figure} \centering \includegraphics[width=1\textwidth]{tree_plot_forward_mi} \caption{Decision Tree based on the MICorr-selected features. Blue and orange nodes indicate DDoS and benign instances respectively. The model achieves 99.9\% classification accuracy.} \label{mi_tree} \end{figure} \section{Conclusion} The advent of ubiquitous network-based technologies has increased the associated vulnerabilities. As a result, it has become paramount to design and implement effective IDS. In this paper, we apply feature selection methods to improve our understanding of the relevant features inside network traffic data and to construct potent detection systems. We examine three standard filter selection methods and introduce a new selection method. The proposed selection method aims to address the discrepancy between continuous input features (Packet Length, Subflow Bytes, etc.) and a discrete target variable (benign/malicious). The results of numerical experiments show that our method achieves a high degree of accuracy (99.9\%) in distinguishing between benign and malicious signals. We believe that feature selection can improve our understanding of traffic data and its characteristics in the context of intrusion detection. It also helps to design better IDS using ML techniques. In the future, it would be important to expand the scope of our study to other types of attacks.
\begin{table} \centering \caption{The full list of features in the dataset.} \label{TP} \small \begin{tabular}{llll} \toprule Index & Name & Index & Name \\ \midrule 1 & Flow Duration & 33 & Min Packet Len \\ 2 & Total Fwd Packets & 34 & Max Packet Len \\ 3 & Total Bwd Packets & 35 & Packet Len Mean \\ 4 & Total Len Fwd Packets & 36 & Packet Len Std \\ 5 & Total Len of Bwd Packets & 37 & Packet Len Variance \\ 6 & Fwd Packet Len Max & 38 & FIN Flag Count \\ 7 & Fwd Packet Len Min & 39 & SYN Flag Count \\ 8 & Fwd Packet Len Mean & 40 & PSH Flag Count \\ 9 & Fwd Packet Len Std & 41 & ACK Flag Count \\ 10 & Bwd Packet Len Max & 42 & URG Flag Count \\ 11 & Bwd Packet Len Min & 43 & Down/Up Ratio \\ 12 & Bwd Packet Len Mean & 44 & Average Packet Size \\ 13 & Bwd Packet Len Std & 45 & Avg Fwd Seg Size \\ 14 & Flow IAT Mean & 46 & Avg Bwd Seg Size \\ 15 & Flow IAT Std & 47 & Fwd Header Length \\ 16 & Flow IAT Max & 48 & Subflow Fwd Packets \\ 17 & Flow IAT Min & 49 & Subflow Fwd Bytes \\ 18 & Fwd IAT Total & 50 & Subflow Bwd Packets \\ 19 & Fwd IAT Mean & 51 & Subflow Bwd Bytes \\ 20 & Fwd IAT Std & 52 & Init\_Win\_bytes\_fwd \\ 21 & Fwd IAT Max & 53 & Init\_Win\_bytes\_bwd \\ 22 & Fwd IAT Min & 54 & act\_data\_pkt\_fwd \\ 23 & Bwd IAT Total & 55 & min\_seg\_size\_fwd \\ 24 & Bwd IAT Mean & 56 & Active Mean \\ 25 & Bwd IAT Std & 57 & Active Std \\ 26 & Bwd IAT Max & 58 & Active Max \\ 27 & Bwd IAT Min & 59 & Active Min \\ 28 & Fwd PSH Flags & 60 & Idle Mean \\ 29 & Fwd Header Len & 61 & Idle Std \\ 30 & Bwd Header Len & 62 & Idle Max \\ 31 & Fwd Packets/s & 63 & Idle Min \\ 32 & Bwd Packets/s & {} &{}\\ \bottomrule \end{tabular} \end{table}
\section{\textbf{Introduction}} The aim of this paper is to give a comprehensive description, from both measure-theoretic and topological viewpoints, of the dynamics of skew product systems with monotone interval fiber maps. A skew product system is a dynamical system $(\Theta \times \mathbb{X} , F )$ of the form \begin{equation}\label{skew} F: \Theta \times \mathbb{X} \to \Theta \times \mathbb{X}, \quad (\theta,x)\mapsto (S(\theta), f_\theta(x)), \end{equation} that is driven by a base map $S$ (which can be a solenoid map or a baker map). In this context, describing the asymptotic behavior of the orbits and understanding how this behavior changes when the system is modified are two main targets. In the study of skew products, invariant graphs, particularly attracting invariant graphs, play an essential role. In fact, they are the natural substitutes for fixed points and considerably simplify the dynamics of the forced systems. When skew product systems have uniformly contracting fiber maps (the hyperbolic setting), there exist invariant attracting sets for the overall dynamics which are graphs of continuous functions (see \cite{HPS, HPS1}). In the case that the fiber maps fail to be hyperbolic, we need to impose specific conditions which guarantee the existence of an attracting invariant graph that attracts orbits almost surely. This includes requiring that the skew product map possess a negative Lyapunov exponent in the fibre direction or that it satisfy the contraction-on-average condition \cite{Stark1, Stark2, ZG}. Some related results on the ergodic properties and stability of attracting graphs under deterministic perturbations have been proposed by Campbell and Davies \cite{CD, C2}. Also, sufficient conditions for the existence and regularity of invariant graphs have appeared in \cite{Stark1, Stark2, SS}. Stark discussed \cite{Stark1, Stark2} a number of applications to the filtering of time series, synchronization and quasiperiodically forced systems.
In general, synchronization is the phenomenon that different oscillations in coupled systems converge to oscillations that move with identical frequency. However, external forcing, rather than coupling, can also synchronize dynamics. This phenomenon has been widely studied in theoretical physics \cite{PR, RS} and also recently in mathematics \cite{H}. Other applications of invariant graphs in many branches of nonlinear dynamics have also been proposed (e.g. \cite{BHN, CD, DK, HOY, PC, SD} etc.). For skew-products with monotone fiber maps, there is a close relation between (the existence of) invariant graphs, (fiber) Lyapunov exponents and ergodic measures of $F$ (e.g. \cite{JJ, J, ZG, FKG} etc.). For instance, the stability of an invariant graph is determined by its Lyapunov exponent: if it is negative, then the graph is attracting. The literature also contains attracting invariant graphs with more complicated dynamics. This includes bony graph attractors, which are currently an object of intense study \cite{KV, Kud, ZG}. A bony graph attractor is an attracting invariant graph which intersects almost every fiber at a single point, and any other fiber at an interval which is called a bone. Here, a more general version of an invariant graph, the so-called invariant multi-graph, is considered. An invariant multi-graph is an invariant compact set which is a finite union of invariant graphs, and thus consists of a finite number of points on each fibre. In \cite{JK}, J\"{a}ger and Keller provided a criterion, in terms of Lyapunov exponents, for the existence of attracting invariant multi-graphs. Gelfert and Oliveira \cite{GO} studied step skew-products over a finite-state shift (base) space whose fiber maps are $C^1$ injective maps on the unit interval. They provided certain invariant sets which have a multi-graph structure and can be written as graphs of one, two or more functions defined on the base.
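To illustrate the relation between a negative fiber Lyapunov exponent and an attracting invariant graph, we recall a standard model case; the affine fiber maps below are chosen purely for illustration and are not part of the constructions discussed in this paper.

```latex
Let $S:\Theta\to\Theta$ be an invertible base map, let
$g:\Theta\to(0,\tfrac14)$ be measurable, and consider the affine fiber maps
$f_\theta(x)=\tfrac12\, x+g(\theta)$, which map $[0,1]$ into $(0,\tfrac34)$.
The measurable function
\[
  \gamma(\theta)\;=\;\sum_{k\geq 1} 2^{\,1-k}\, g(S^{-k}\theta)
\]
is well defined, since the series is dominated by $2\sup g<\tfrac12$, and it
is an invariant graph:
\[
  f_\theta(\gamma(\theta))
  \;=\;\tfrac12\,\gamma(\theta)+g(\theta)
  \;=\;\sum_{k\geq 0} 2^{-k}\, g(S^{-k}\theta)
  \;=\;\gamma(S\theta).
\]
Since $Df_\theta\equiv\tfrac12$, the Lyapunov exponent of $\gamma$ equals
$\log\tfrac12<0$ for every $S$-invariant measure, and indeed
$|f_\theta^n(x)-f_\theta^n(x')|=2^{-n}|x-x'|$, so the graph attracts every
orbit in the fibre direction.
```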
Our first objective is to describe the geometrical structure of attracting invariant multi-graphs for a certain class of skew product systems as defined by (\ref{skew}). In particular, we construct robust invariant bony multi-graphs. Another goal is the existence of SRB measures whose supports lie on invariant graphs. SRB measures, named after Sinai, Ruelle and Bowen, are the invariant measures most compatible with volume when volume is not preserved. References \cite{KV, VY, ZG} contain further results about the existence and finiteness of SRB measures supported on attracting invariant graphs. Another main objective is to show that the attracting multi-graphs, in our setting, carry finitely many ergodic SRB measures. We also investigate some thermodynamic properties of these systems. Thermodynamic formalism, that is, the formalism of equilibrium statistical physics, was adapted to the theory of dynamical systems in the works of Sinai, Ruelle and Bowen \cite{B,R2,S}. Topological pressure, topological entropy and equilibrium states are the fundamental tools in thermodynamic formalism. The existence and uniqueness of equilibrium states are currently an object of intense study. Here, we will provide some sufficient conditions ensuring the existence of equilibrium states supported on invariant multi-graphs. Our approach to the existence of equilibrium states is based on the technique applied in \cite{RV}. In this scenario, the regularity of the graph functions plays a role: in our setting, since the contraction on the fibers is non-uniform, the graph functions are only upper semicontinuous. For the existence of equilibrium states we apply a version of the variational principle provided by Rauch \cite{R} for discontinuous potentials. \textbf{This work is organized as follows:} First, in Sect. 2, we state some terminology concerning invariant multi-graphs and recall conditions ensuring the existence of multi-graphs from \cite{J,JJ,JK}. In Sect.
3, we construct (Theorem \ref{t144}, Theorem \ref{thm88} and Corollary \ref{cor33}) an open set of skew products given by (\ref{skew}) over an invertible base map (a solenoid map) having attracting invariant multi-graphs or bony multi-graphs that carry finitely many ergodic SRB measures. Then we investigate some thermodynamic properties in Sect. 4. Sufficient conditions ensuring the existence of equilibrium states supported on invariant multi-graphs are presented (Proposition \ref{p00}). Finally, in Sect. 5, Theorem \ref{thm777}, we provide a measure-theoretic isomorphism between the skew products over the solenoid map and skew products over a generalized baker map, and deduce the existence of invariant bony multi-graphs for these systems. \section{\textbf{Preliminaries}} In this section, we introduce the concepts and the notations which are basic for the study in the following sections. \begin{definition}[SRB measure]\label{def:skew} Let $X$ be a manifold and $f:X\longrightarrow X$ a continuous map. An $f$-invariant measure $\mu$ is called a $Sinai$-$Ruelle$-$Bowen$ (SRB) measure (or a $physical$ measure) if there exists a measurable set $E\subset X$, of positive Lebesgue measure, such that for every continuous function $\varphi:X\longrightarrow \mathbb{R}$ and every $x\in E$ we have: \begin{align} \lim_{n\rightarrow\infty}\frac{1}{n}\sum\limits_{j=0}^{n-1}\varphi(f^{j}(x))=\int\limits_{X}\varphi d\mu. \end{align} The set $E$ is called the basin of $\mu$.
In other words, time averages of all continuous functions are given by the corresponding space averages computed with respect to $\mu$, at least for a large set of initial states $x\in X.$ \end{definition} \subsection{Skew products} Consider a skew product system $(\Theta \times \mathbb{X} , F )$ of the form (\ref{skew}), where the dynamics on the fibre space $\mathbb{X}$ may be interpreted as being driven by another system $(\Theta, S)$, since the transformations $f_\theta : \mathbb{X} \to \mathbb{X}$ depend on $\theta$. On the other hand, the space $\mathbb{X}$ is principally considered as the fibre space over the base dynamics $(\Theta , S)$, i.e. the fibre map $f_\theta$ can be considered as a map from $\{\theta\} \times \mathbb{X}$ to $\{S\theta\} \times \mathbb{X}$, where $\{\theta\} \times \mathbb{X}$ is the fibre over $\theta \in \Theta$. We adopt the usual notation $ F^{n}(\theta,x ) = (S^{n}\theta , f_{\theta}^{n}(x))$ for the iterates $F^{n}$ of $F$, where $f^{n}_{\theta} = f_{S^{n-1}(\theta)} \circ \cdots \circ f_{\theta}$. Hence, $f_\theta^{n+k}(x) = f_{S^k \theta}^n (f^k_\theta(x))$. For $n = 1$ and $k =-1$ this includes the identity and $f_\theta^{-1}(x) = (f_{S^{-1}\theta})^{-1}(x)$. In this article, we are interested in skew product dynamical systems $F$ defined as follows: \begin{definition}\label{generalskew} $\mathcal{F}$ denotes the family of all skew product transformations $F$ given by (\ref{skew}) on $\Theta \times \mathbb{I}$ having the following properties: \begin{enumerate} \item [(1)] The base $\Theta$ is equipped with a $\sigma$-algebra $\mathcal{B}$ and a probability measure $m$, such that $(\Theta, \mathcal{B}, m)$ becomes a probability space, and the base transformation $S: \Theta \to \Theta$ is a bi-measurable and ergodic measure-preserving bijection.
In most situations we will assume that $\Theta$ is a compact metric space, in which case $\mathcal{B}$ is always the (completed) Borel $\sigma$-algebra and $S$ is a homeomorphism. \item [(2)] Let $\mathbb{I} = [0,1]$ and $\text{int}(\mathbb{I})=(0,1)$. The fibre maps $ f_{\theta} : \mathbb{I} \to \text{int}(\mathbb{I}) $ are given by $f_{\theta}(x) = \pi_{2} \circ F(\theta,x) $, with $\pi_{2}$ the natural projection from $\Theta \times \mathbb{I}$ to $\mathbb{I}$. We will assume that the fiber maps $f_{\theta} $ are $C^2$ increasing interval maps. \item [(3)] For each $x \in \mathbb{I}$ the map $\theta \mapsto F(\theta,x)$ is measurable. If $\Theta$ is a compact metric space and $S$ is a homeomorphism, then the map $\theta \mapsto F(\theta,x)$ is continuous. \end{enumerate} \end{definition} \begin{remark}\label{rem:com} By definition, $F$ maps the whole space into the interior of the fibres, i.e. $F(\Theta \times \mathbb{I}) \subset \Theta \times \text{int}(\mathbb{I})$. \end{remark} We equip $\mathcal{F}$ with the following metric: \begin{align} \label{metric} \text{dist}_{\mathcal{F}}(F,G):=\sup\limits_{\theta\in \Theta}\text{dist}_{C^2}(f^{\pm 1}_\theta,g^{\pm 1}_\theta), \ \ \text{for each}\ \ F,G \in \mathcal{F}, \end{align} where $f_{\theta}$ and $g_{\theta}$ are the fiber maps of $F$ and $G$, respectively. \subsection{Invariant graphs and Lyapunov exponents} Invariant graphs are fundamental objects in the study of skew product systems, and they are of major interest. \begin{definition}[Invariant graph]\label{def:graph} Let $F \in \mathcal{F}$ be a skew product as in Definition \ref{generalskew}. A measurable function $\gamma : \Theta \to \mathbb{I}$ is called an invariant graph (with respect to $F$) if for all $\theta \in \Theta$: \begin{equation}\label{in} F(\theta , \gamma(\theta)) = (S \theta , \gamma(S \theta )) , \text{ equivalently} \quad f_{\theta}(\gamma(\theta)) = \gamma(S\theta).
\end{equation} The point set $\Gamma := \{ (\theta , \gamma(\theta)) : \theta \in \Theta \}$ will also be called an invariant graph, labeled with the corresponding capital letter. Denote by $\text{Cl}(\Gamma)$ the closure of $\Gamma$ in $\Theta \times \mathbb{I}$. Let $m$ be an invariant measure for the base map $S$. We say that $\gamma: \Theta \to \mathbb{I}$ is an $(F,m)$-invariant graph if \begin{equation}\label{1graph} f_\theta(\gamma(\theta))=\gamma(S(\theta)), \ \text{for} \ m-a.e. \ \theta \in \Theta. \end{equation} \end{definition} In the sequel, we will denote by $\pi_1 : \Theta \times \mathbb{I} \to \Theta$ and $\pi_2 : \Theta \times \mathbb{I} \to \mathbb{I}$ the canonical projections onto the first and second coordinates, respectively. To any $(F, m)$-invariant graph $\gamma$, an $F$-invariant measure $m_\gamma$ can be assigned by defining \begin{equation}\label{m} m_\gamma(A) = m (\pi_1(A \cap \Gamma)) \end{equation} for any measurable set $A \subseteq \Theta \times \mathbb{I}$. Note that the measure $m_\gamma$ is ergodic if and only if $m$ is ergodic. Throughout the paper, the set of all $F$-invariant probability measures on $\Theta \times \mathbb{I}$ is denoted by $\mathcal{M}(F)$, and the set of $\mu \in \mathcal{M}(F)$ which project to $m$ by $\mathcal{M}_m(F)$. The next theorem is a counterpart of Theorem 1.8.4 in \cite{AL} in our setting; see also \cite[Theorem 1.1]{JJ} and the Furstenberg Theorem \cite{F}. It shows that there is a one-to-one correspondence between invariant graphs and invariant ergodic measures of skew products with monotone interval fiber maps. \begin{theorem}\label{thm00} Suppose $F \in \mathcal{F}$ is a skew product as in Definition \ref{generalskew}, $m \in \mathcal{M}(S)$ and $\mu \in \mathcal{M}_m(F)$ is ergodic. Then there exists an $(F, m)$-invariant graph $\gamma$ such that $\mu = m_\gamma$.
\end{theorem} In the investigation of skew product systems, attracting invariant graphs are often useful characteristics, which we simply call attractors. Formally, we introduce the following definition. Note that the effect of the attraction is observed only in the fibre space $\mathbb{I}$. \begin{definition}[Attracting invariant graphs] A point $(\theta,x) \in \Theta \times \mathbb{I}$ is attracting if there is a constant $\delta >0$ such that $$\lim_{n\to \infty}|{f}_{\theta}^n(x)-{f}_{\theta}^n(z)| = 0,$$ for all $z \in (x-\delta,x+\delta)$. Furthermore, an invariant graph $\gamma$ is called an attracting invariant graph, or attractor, with respect to the invariant probability $\mu$ if $\mu$-almost every point is attracting. \end{definition} In a weaker form, we say that $A$ is an \emph{attractor} for $F$ (in the sense of Milnor \cite{Mil}) if there is a set of points in the phase space with positive probability whose future orbits tend to $A$ as the number of iterates tends to infinity. The set of orbits attracted to $A$ in the future is called its \emph{basin} and denoted by $B(A)$. \begin{definition}[Maximal attractor]\label{def:att} Given a continuous skew product $F$, a compact set $D \subset \Theta \times \mathbb{I} $ is trapping if $F(D)\subset \text{int}(D)$. Then the closed $F$-invariant set $$A_{max}:=\bigcap \limits _{n\geq 0}F^{n}(D)$$ is said to be a maximal attractor for $F$. \end{definition} \begin{definition}[Maximal Lyapunov exponent]\label{max} The maximal Lyapunov exponent of an $F$-invariant probability measure $\mu$ is defined by \begin{equation}\label{lya1} \lambda(\mu,F)=\lim_{n \to \infty}\frac{1}{n}\int_{\Theta \times \mathbb{I}} \log | Df_\theta^{n}(x)| d\mu(\theta,x).
\end{equation} \end{definition} The maximal Lyapunov exponent of an invariant graph $\gamma$ with respect to an $S$-invariant probability measure $m$ is defined as \begin{equation}\label{lya2} \lambda(m,\gamma)=\lim_{n \to \infty}\frac{1}{n}\int_{\Theta} \log |Df_\theta^{n}(\gamma(\theta))|dm(\theta). \end{equation} Note that (\ref{lya2}) is a special case of (\ref{lya1}), with $\mu$ given by $\mu(A) = m(\{\theta \in \Theta: (\theta,\gamma(\theta))\in A \})$. \begin{definition}[Upper Lyapunov exponent] Following \cite{FKG, JJ}, for each skew product $F$ given by Definition \ref{generalskew}, the \emph{upper Lyapunov exponent of} $(\theta, x) \in \Theta \times \mathbb{I}$ is \begin{equation}\label{lya3} \lambda_{max}(\theta, x)=\lim_{n \to \infty}\frac{1}{n} \log |Df_\theta^n(x)|, \end{equation} when the limit exists, where $Df_\theta(x)$ is the derivative of $f_\theta$ at $x$. Given any $F$-invariant probability measure $\mu$, we define the \emph{upper Lyapunov exponent of} $\mu$ by \begin{equation}\label{lya4} \lambda_{max}(\mu)=\int \lambda_{max}(\theta, x) d\mu (\theta, x). \end{equation} \end{definition} Note that since the maps $f_\theta^n$ are increasing interval maps, $|Df_\theta^n(x)|=Df_\theta^n(x)$. \begin{definition}[Fiber Lyapunov exponent]\label{ver} The (fiber) Lyapunov exponent of an $(F ,m)$-invariant graph $\gamma$ is given by \begin{equation}\label{lya} \lambda_m(\gamma)=\int_\Theta \log Df_\theta(\gamma(\theta))dm(\theta). \end{equation} \end{definition} \begin{remark}\label{att} Let $\gamma$ be an $(F ,m)$-invariant graph with $ \lambda_m(\gamma)<0$. Then, by \cite[Lemma 1.10]{JJ}, $\gamma$ is an attracting graph with respect to the invariant probability $m_\gamma$.
\end{remark} Note that by the Birkhoff ergodic theorem \begin{equation*} \begin{aligned} \lambda_{max}(\theta,\gamma(\theta)) &= \lim_{n\to \infty} \frac{1}{n} \log Df_{\theta}^{n}(\gamma(\theta)) = \lim_{n\to \infty} \frac{1}{n} \sum_{k=0}^{n-1}\log Df_{S^{k}\theta}(f_{\theta}^{k}(\gamma(\theta)))\\ &= \lim_{n \to \infty}\frac{1}{n} \sum_{k=0}^{n-1}\log Df_{S^{k}\theta}(\gamma(S^k\theta)) = \int_{\Theta} \log Df_{\theta}(\gamma(\theta))dm (\theta) \\ &=\lambda_{m}(\gamma) \end{aligned} \end{equation*} for $m$-a.e.~$\theta\in\Theta$. So the average Lyapunov exponent of an invariant graph equals its point-wise Lyapunov exponent for $m$-a.e. $\theta\in \Theta$. Here, we focus on skew products having non-uniform contraction rates along the fibers. We address the situation where we only have information about average rates of contraction. This is a weaker form of contraction which is a necessary condition for synchronization; see \cite{Stark2}. When we talk about average rates of contraction, we need an invariant measure. In our setting, take the $S$-invariant measure $m$, as in Definition \ref{generalskew}. \begin{definition}[Non-uniform contraction condition]\label{av} Suppose that the limit \begin{equation}\label{lya5} \lambda(\theta)=\lim_{n \to \infty}\frac{1}{n}\sup_{x \neq x^{\prime}\in \mathbb{I}}\log \frac{d(f_\theta^n(x), f_\theta^n(x^{\prime}))}{d(x,x^{\prime})} \end{equation} exists for almost every $\theta$ and is a measurable function of $\theta$ (this is the maximal Lyapunov exponent in the fibre direction of the skew product $F$). We say that the skew product $F$ contracts non-uniformly if there exists $\lambda < 0$ such that \begin{equation}\label{neg} \lambda(\theta)\leq \lambda < 0 \end{equation} for a.e. $\theta \in \Theta$.
\end{definition} \begin{remark}\label{r11} In the special case when $\mu$ is given by $\mu(A) = m(\{\theta \in \Theta: (\theta,\gamma(\theta))\in A \})$, if the fiber Lyapunov exponent of $\gamma$ has a negative upper bound, then the skew product $F$ satisfies (\ref{neg}) and hence contracts non-uniformly. \end{remark} As a consequence of the non-uniform contraction condition we have the following result of Stark \cite{Stark2}, which applies in a more general setting. \begin{theorem}(\cite[Theorem 1.4]{Stark2})\label{thmg} Suppose $\Theta$ is a compact metric space, $S:\Theta \to \Theta$ a homeomorphism, $m$ an invariant measure, $X$ a complete metric space and the fiber map $f: \Theta \times X \to X$ is a continuous map satisfying (\ref{neg}). Then there exists an $S$-invariant set $\Lambda \subset \Theta$ such that $m(\Lambda)= 1$ and a function $\gamma: \Lambda \to X$ such that the graph of $\gamma$ is invariant and attracting under $F$. \end{theorem} \subsection{Multi-graphs} We concentrate on the case where a compact invariant set is just a finite union of invariant graphs, and thus consists of a finite number of points on each fibre. Let $F \in \mathcal{F}$ be a skew product over a base map $S$ as in Definition \ref{generalskew}. \begin{definition}[Multi-graph] Following \cite{GO}, given $F \in \mathcal{F}$, a multi-function $\psi: D \subset \Theta \to \mathbb{I}$ is a relation that associates to every point $\theta \in D$ a nonempty subset $\psi(\theta) \subset \mathbb{I}$. A multi-function $\psi: D \subset \Theta \to \mathbb{I}$ is uniformly finite if there exists $k \geq 1$ such that $\# \psi(\theta) \leq k$ for all $\theta \in D$. Given a uniformly finite multi-function $\psi$, we call the set $\{(\theta, \psi(\theta)): \theta \in D\}$ a multi-graph in $\Theta \times \mathbb{I}$.
\end{definition} J\"{a}ger proved \cite[Theorem 1.14]{JJ} that, for skew products with $C^1$ interval fiber maps having a minimal homeomorphism as a base, strict negativity of the Lyapunov exponents on a compact invariant set implies that this set is a multi-graph. This was later extended to a more general setting \cite[Theorem 1.2]{JK}. Let $F \in \mathcal{F}$ be a skew product over a base map $S$ as in Definition \ref{generalskew}. Then the tuple $(\Theta,\mathcal{B},m,S)$ is a measure-preserving dynamical system. Following \cite{JK}, we say that $K\subset \Theta \times \mathbb{I}$ is a \emph{random compact set} if \begin{enumerate} \item $K_\theta=\{x \in \mathbb{I}: (\theta,x)\in K\}$ is compact for all $\theta \in \Theta$; \item the functions $\theta \mapsto d(x,K_\theta)$ are measurable for all $x \in \mathbb{I}$. \end{enumerate} The set of all $F$-invariant probability measures $\mu$ which project to $m$ and are supported on a compact $F$-invariant set $K$ is denoted by $\mathcal{M}_m^K(F)$. \begin{theorem}\cite[Theorem 1.2]{JK} Let $F \in \mathcal{F}$ be a skew product with base $(\Theta,\mathcal{B},m,S)$ such that the family $(x \mapsto \log \|D f_\theta^k(x)\|)_{\theta \in \Theta}$ is equicontinuous for every $k \in \mathbb{N}$, and let $K \subset \Theta \times \mathbb{I}$ be a random compact set such that the maximal Lyapunov exponent $\lambda(\mu,F)$ given by (\ref{lya1}) is negative for all $\mu \in \mathcal{M}_m^K(F)$. Then there exists an integer $n$ such that $\# K_\theta=n$ for $m$-a.e. $\theta \in \Theta$. \end{theorem} Since the fiber maps are $C^2$ and $F$ is continuous, the next corollary follows immediately when the base map $S$ is a homeomorphism of a compact metric space $\Theta$. \begin{corollary}\label{thmm} Suppose $\Theta$ is a compact metric space and $S: \Theta \to \Theta$ is a homeomorphism preserving an ergodic measure $m$.
Assume that $F \in \mathcal{F}$ is a skew product as in Definition \ref{generalskew} over the base map $S$ and that $K$ is a compact $F$-invariant set. Further, assume that $\lambda(\mu,F) < 0$ for all $\mu \in \mathcal{M}_m^K(F)$. Then there exists an integer $n$ such that $\# K_\theta=n$ for $m$-a.e. $\theta \in \Theta$. In particular, if $n > 1$ then $K$ is a multi-graph. \end{corollary} \section{\textbf{Bony multi-graphs for skew products with interval fiber maps}} Our major goal here is to describe the structure of invariant graphs and to study the geometry of attractors, mainly in the case where the base dynamical system is a solenoid map. In particular, we construct robust bony multi-graphs. In our construction, we first provide a single skew product $\widetilde{F}$ over an expanding circle map. Then we consider its extension $F$, a skew product over the solenoid map, and show that every skew product $G$ close enough to $F$ admits a bony multi-graph which carries finitely many ergodic SRB measures. We recall the family $\mathcal{F}$ of skew products given by Definition \ref{generalskew}. The next theorem is the main result of this section. \begin{theorem}\label{t144} Given any $n \in \mathbb{N}$, there exists an open set $\mathcal{U}\subset \mathcal{F}$ of skew products of the form (\ref{skew}), with interval fiber maps and a solenoid map as the base map, such that each skew product $G$ belonging to $\mathcal{U}$ admits an attracting invariant multi-graph or bony multi-graph carrying exactly $n$ SRB measures supported in $K(G)$. \end{theorem} The rest of this section is devoted to proving this theorem. \subsection{Skew products forced by expanding circle maps} If $X$ is a metric space and $f : X \longrightarrow X$ is a continuous map, then we say that $f$ is \emph{weakly contractive} whenever $d(f(x),f(y)) < d(x,y)$ for each $x,y \in X$ with $x\neq y$.
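As a quick numerical illustration of this definition (with a toy map of our own choosing, not one appearing in the construction below), consider $f(x)=x/(1+x)$ on $X=[0,1]$: since $|f(x)-f(y)|=|x-y|/((1+x)(1+y))$, the map is weakly contractive but not a uniform contraction, and its iterates $f^n(x)=x/(1+nx)$ converge to the unique fixed point $0$ only at a polynomial rate.

```python
# A minimal sketch (assumption: the toy map f(x) = x/(1+x) on [0, 1] is
# our own illustration, not taken from the text). f is weakly contractive,
# |f(x) - f(y)| = |x - y| / ((1+x)(1+y)) < |x - y| for x != y, but not a
# uniform contraction: Df(x) = 1/(1+x)^2 satisfies Df(0) = 1 and Df(x) < 1
# for x != 0, so the Lipschitz constant tends to 1 near the fixed point 0.
def f(x):
    return x / (1.0 + x)

# Strict (but non-uniform) contraction on a few sample pairs.
for a, b in [(0.0, 1.0), (0.01, 0.02), (0.5, 0.9)]:
    assert abs(f(a) - f(b)) < abs(a - b)

# Every orbit converges to the unique fixed point 0, here only at the
# polynomial rate f^n(x) = x / (1 + n x), not geometrically.
x, n = 1.0, 1000
for _ in range(n):
    x = f(x)
print(x)  # ~ 1/1001
```

This is exactly the behaviour the definition allows: contraction of every pair of orbits without any uniform contraction rate.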
\begin{remark}\label{fweak} It is a well-known fact \cite[Coro.~3]{J} that if $f$ is weakly contractive and $X$ is compact then the map $f$ has a unique fixed point $x \in X$. Furthermore, $\lim_{k \to \infty}f^k(y)=x$ uniformly for every $y \in X$. We then say that $x$ is a \emph{weak attracting fixed point}. Clearly, if $f$ is a weakly contractive map then $$d(f^{n}(y), f^{n}(z)) \to 0 \ \text{as} \ n \to \infty,$$ for each $y, z \in X$. \end{remark} \begin{definition} We say that the interval map $f$ is $s$-weakly contractive if it satisfies the following conditions: \begin{enumerate} \item [$(1)$] $f$ is a $C^1$ weakly contractive map. \item [$(2)$] Let $p$ be the unique fixed point of $f$. Then $Df(p)=1$ and, for all $x\neq p$, one has $Df(x)<1$. \end{enumerate} \end{definition} The following example shows that the set of $s$-weakly contractive maps is nonempty. \begin{example} Define $f:[0.1,0.7] \to [0.1,0.7]$ by $f(x)=3.098 x^{1.83}-2.5 x^{2.4} +0.1$. Then $f$ is a weakly contractive map with the unique fixed point $0.466$. In particular, $Df(0.466)=1$ and for all $x\neq 0.466$, we have $Df(x)<1$. \end{example} In \cite{E}, the author gave further examples of $s$-weakly contractive maps. \begin{definition}[Weak-pair]\label{k} Take two $C^2$ increasing interval maps $f_i$, $i=0, 1,$ defined on an interval $[a,b]$ and $C^2$-close to the identity, so that they fulfill the following conditions: \begin{enumerate} \item [$(1)$] Each $f_i$, $i=0, 1,$ is an $s$-weakly contractive map having a (unique) weak attracting fixed point $p_i$. \item [$(2)$] For every $x \in [a,b]$, the following ``contraction on average property" holds: \begin{equation}\label{caverage} \sum_{i=0}^1 \log Df_i(x)<0. \end{equation} \item [$(3)$] The weak attracting fixed points $p_i$, $i=0, 1$, are distinct. Moreover, $p_i\neq a,b$ and $f_i(p_j)\neq p_i$ for each $j \neq i$. \end{enumerate} Let $p_0< p_1$ and $J = [p_0 , p_1]\subset [a,b]$.
Then we say that $(f_0|_{J},f_1|_{J})$ is a weak-pair for $J$. \end{definition} \begin{remark}\label{cov} By conditions (1) and (3) the following ``covering property" holds: there exist points $x_0$ and $x_1$ with $p_0 < x_0 < x_1 < p_1$ such that for the interval $B = (x_0, x_1) \subset \text{int}(J)$ one has: \begin{equation*} \forall x\in [x_0,x_1], \ Df_i(x)<1, \ i=0,1, \ \text{and} \ \text{Cl}(B) \subset f_0(B) \cup f_1(B). \end{equation*} \end{remark} Note that the covering property was introduced in \cite{NP}. In Figure \ref{fig:1} below, the pairs $(f_0|_{I_1},f_1|_{I_1})$ and $(f_0|_{I_2},f_1|_{I_2})$ are weak-pairs. \begin{figure}[h] \begin{center} \includegraphics[scale=0.35]{555.png} \caption{Two increasing $C^2$ maps $f_0$ and $f_1$ with two weak-pairs} \label{fig:1} \end{center} \end{figure} Fix $n \geq 1$ and consider $n$ disjoint subintervals of $\mathbb{I}=[0, 1]$, say $I_1 = [a_1, b_1], \ldots , I_n = [a_n, b_n]$, with \begin{equation*} 0 <a_1 < b_1 < a_2 < \ldots < a_n < b_n <1 \end{equation*} and two increasing $C^2$ maps $f_0, f_1: \mathbb{I} \to \mathbb{I}$ such that \begin{enumerate} \item [($i$)] $f_0(I_i)\subset I_i, f_1(I_i)\subset I_i \ \text{and} \ (f_0|_{I_i},f_1|_{I_i}) \ \text{is \ a \ weak-pair \ for \ each} \ I_i, \ i=1, \ldots, n$, hence there exist weak attracting fixed points $p^0_i$ and $p^1_i$ of $f_0$ and $f_1$, respectively; \item [($ii$)] $f_0$ and $f_1$ have repelling fixed points $c_i$ and $d_i$, $i=1, \ldots, n-1$, respectively, such that \begin{equation*} 0 <b_1 < c_1 < d_1 < a_2 <\ldots < b_{n-1}<c_{n-1}< d_{n-1}<a_n < b_n <1. \end{equation*} \end{enumerate} We assume that $a_i<p^0_i<p^1_i<b_i$. By Remark \ref{cov}, there exist points $x_i$ and $y_i$, $i=1, \ldots, n$, such that $a_i < x_i < y_i < b_i$ and the intervals $B_i = (x_i , y_i ) \subset \text{int}(I_i )$ satisfy the following covering property: \begin{equation}\label{f1} \forall x \in [x_i,y_i], \ Df_j(x)<1, \ j=0,1, \ \text{and} \ \text{Cl}(B_i)\subset f_0(B_i) \cup f_1(B_i).
\end{equation} Consider the expanding circle map $\omega :\mathbb{T}^1 \to \mathbb{T}^1$, $\omega(t)=4t$ $(\mathrm{mod} \ 1)$. Let \begin{equation*} L_i \subset \mathbb{T}^1, \ i=0,1, \end{equation*} be disjoint closed arcs, each of length $1/4$. Then for $i=0,1$ we have $\omega(L_i) = \mathbb{T}^1$. Take a smooth map $\ell : \mathbb{T}^1 \to [0, 1]$ such that $\ell |L_0\equiv0$, $\ell |L_1\equiv 1$ and, outside the $\delta$-neighborhood of $L_0 \cup L_1$ for sufficiently small $\delta>0$, $\ell(t) \in (0,1)$. Now, consider the isotopy \begin{align}\label{is} f_{t}(x)=(1-(\ell(t)^2))f_{0}(x)+(\ell(t)^2)f_{1}(x) \end{align} between $f_0$ and $f_1$. Then $f_t=f_0$ for each $t \in L_0$, and $f_t=f_1$ for each $t \in L_1$. It is easy to see that \begin{align}\label{de} \forall t\in \mathbb{T}^1\setminus B_\delta(L_0 \cup L_1) \ \text{and}\ \forall x\in I_i, \ i=1, \cdots, n, \ \text{one has} \ Df_{t}(x) <1. \end{align} We now define a skew product over the expanding circle map $\omega$, corresponding to the fiber maps $f_0$, $f_1$, by \begin{align}\label{be} \widetilde{F}:\mathbb{T}^1\times \mathbb{I}\to \mathbb{T}^1\times \mathbb{I},\quad \widetilde{F}(t,x)=(\omega(t),f(t,x)), \end{align} where $f(t,x):=f_{t}(x)$ is given by (\ref{is}). In the rest of this article we fix the skew product $\widetilde{F}$ given by (\ref{be}). Clearly, by construction, for each $1\leq i\leq n$, the compact set $D_i:=\mathbb{T}^1 \times I_i $ is a trapping region for $\widetilde{F}$. Let us take \begin{equation}\label{max} A_{max}(\widetilde{F}):=\bigcap_{n\geq 0}\widetilde{F}^n(\mathbb{T}^1 \times \mathbb{I}), \ \text{and} \ \Lambda_i:=\bigcap_{n\geq 0}\widetilde{F}^n(\mathbb{T}^1 \times I_i), \ 1\leq i\leq n. \end{equation} Then $\Lambda_i$, $1\leq i\leq n$, are the maximal attractors of $\widetilde{F}$ corresponding to the trapping regions $D_i$, and the union $\bigcup_{i=1}^n \Lambda_i$ is a compact invariant set contained in the global attractor $A_{max}(\widetilde{F})$.
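The contraction-on-average mechanism behind this construction can be sketched numerically. The snippet below is only an illustration under simplifying assumptions of our own: we replace the weak-pair $(f_0,f_1)$ by toy affine contracting fiber maps, use $\sin^2(\pi t)$ as a stand-in for the bump function $\ell$, and generate an orbit of $\omega(t)=4t$ $(\mathrm{mod}\ 1)$ through its base-$4$ digit expansion, so that floating-point round-off does not collapse the orbit. The Birkhoff average of $\log Df_t$ along the fiber orbit then estimates the fiber Lyapunov exponent, which comes out strictly negative.

```python
import math
import random

# Toy affine fibre maps (an assumption made only for this sketch; the
# construction in the text uses s-weakly contractive maps forming weak-pairs).
def f0(x): return 0.5 * x + 0.10
def f1(x): return 0.9 * x + 0.05
def Df0(x): return 0.5
def Df1(x): return 0.9

def ell(t):
    # Stand-in for the smooth bump l : T^1 -> [0,1] (l = 0 on L_0, l = 1 on L_1).
    return math.sin(math.pi * t) ** 2

def f_t(t, x):
    # The isotopy f_t = (1 - l(t)^2) f_0 + l(t)^2 f_1.
    w = ell(t) ** 2
    return (1 - w) * f0(x) + w * f1(x)

def Df_t(t, x):
    w = ell(t) ** 2
    return (1 - w) * Df0(x) + w * Df1(x)

def omega_orbit(n, seed=0):
    # Orbit of w(t) = 4t (mod 1), realised as the shift on base-4 digits;
    # 60 digits exceed double precision, so each point of the orbit is accurate.
    digits = random.Random(seed).choices(range(4), k=n + 60)
    for k in range(n):
        yield sum(d / 4 ** (j + 1) for j, d in enumerate(digits[k:k + 60]))

# Birkhoff average of log Df along the fibre orbit: the fibre Lyapunov exponent.
n, x = 20000, 0.3
lyap = 0.0
for t in omega_orbit(n):
    lyap += math.log(Df_t(t, x))
    x = f_t(t, x)
lyap /= n
print(lyap)  # strictly negative: contraction on average along the fibre
```

Since $\omega$ preserves Lebesgue measure, this orbit average approximates $\int_0^1 \log Df_t \, dt < 0$, which is the non-uniform contraction that the invariant-graph results above exploit.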
\subsection{Extension skew products and invariant graphs} Since the base map $\omega:\mathbb{T}^1 \to \mathbb{T}^1$ is non-invertible, we pass to an extension $S: \Omega\to \Omega$ that is invertible. By extension we mean that there exists a surjective map $\textbf{p}: \Omega\to \mathbb{T}^1$ such that $\textbf{p}\circ S=\omega\circ \textbf{p}$. To begin with, take $\Omega$ to be the set of all pre-orbits of $\omega$, that is, all sequences $(t_{-n})_{n\geq 0}$ satisfying $\omega (t_{-n})=t_{-n+1}$ for every $n>0$. Consider the map $\textbf{p}: \Omega\to \mathbb{T}^1$ sending each sequence $(t_{-n})_{n\geq 0}$ to its term $t_0$ of order zero. Observe that $\mathbf{p}(\Omega)=\mathbb{T}^1$. Finally, we define $S: \Omega\to \Omega$ by $$ S(\dots, t_{-n}, \dots, t_0)= (\dots, t_{-n}, \dots, t_0, \omega(t_0)).$$ It is clear that $S$ is well defined and satisfies $\mathbf{p}\circ S=\omega\circ\mathbf{p}$. The inverse limit space $\Omega$ is endowed with the product topology, and it is easy to see that $S$ is a homeomorphism of $\Omega$. \begin{remark}\label{e} From the ergodic point of view, given an ergodic measure $\mu^+$ defined on Borel subsets of $\mathbb{T}^1$ there exists a unique measure $\mu$ defined on Borel subsets of $\Omega$ such that $\mathbf{p}_\ast \mu=\mu^+$. We mention that since $\omega:\mathbb{T}^1\longrightarrow \mathbb{T}^1$ is a $C^2$-expanding endomorphism, it possesses an absolutely continuous invariant ergodic measure $\nu^{+}$ whose density is bounded and bounded away from zero \cite{M}. Moreover, $\nu^{+}$ is equivalent to Lebesgue. Thus, $(\Omega,S)$ has an invariant ergodic measure $\nu$ inherited from the invariant measure $\nu^+$ for $\omega$ on $\mathbb{T}^1$, i.e. $\mathbf{p}_\ast \nu=\nu^+$.
\end{remark} For the skew product $\widetilde{F}$ given by (\ref{be}), we define an extension skew product map $F=F(\widetilde{F})$ on $\Omega \times \mathbb{I}$ by \begin{equation}\label{sss} F(\textbf{t},x)=(S(\textbf{t}),f(\textbf{t},x))=(S(\textbf{t}),f_{t_0}(x)), \end{equation} for each $\textbf{t}=(\dots, t_{-n}, \dots, t_0)\in \Omega$, where $f_{t_0}$ is given by (\ref{is}). Note that the inverse map $F^{-1}$ is given by $$F^{-1}(\mathbf{t},x)=((\dots, t_{-2}, t_{-1}), (f_{t_{-1}})^{-1}(x)).$$ Then $\mathbf{p}\times id$ is a semi-conjugacy between $\widetilde{F}$ and $F$. \begin{remark}\label{ef} By construction, the fiber maps of $\widetilde{F}$ and its extension $F$ are the same. \end{remark} In the rest of this paper, we fix the skew product $F$ with the fiber maps $f_{\textbf{t}}$, $\textbf{t}\in \Omega$. By construction, for each $1\leq i\leq n$, the compact set $\Omega \times I_i $ is a trapping region for $F$. Let us take \begin{equation}\label{maxx} A_{max}(F):=\bigcap_{n\geq 0}F^n(\Omega \times \mathbb{I}), \ \text{and} \ \Delta_i:=\bigcap_{n\geq 0}F^n(\Omega \times I_i), \ 1\leq i\leq n. \end{equation} Then $\Delta_i$, $1\leq i\leq n$, are the maximal attractors of $F$ corresponding to the trapping regions $\Omega \times I_i $, and the union $K:=\bigcup_{i=1}^n \Delta_i$ is a compact invariant set contained in the global attractor $A_{max}(F)$. Let us take \begin{equation}\label{rest} F_i:=F|_{\Omega \times I_i}, \ i=1, \dots, n. \end{equation} \begin{lemma}\label{lemma0} The skew product $F_i$ satisfies the non-uniform contraction condition given by Definition \ref{av} for each $i=1, \dots,n$. \end{lemma} \begin{proof} By construction, for each $\textbf{t} \in \Omega$, the fiber map $f_\textbf{t}$ of $F$ is a $C^2$ diffeomorphism. Hence, there exists $\varsigma \in L^1(\Omega,\nu)$ such that $d(f_\textbf{t}(x),f_\textbf{t}(x^{\prime}))<\varsigma(\textbf{t})d(x,x^{\prime})$ for $\nu$-almost every $\textbf{t}$.
Then, by \cite[Lem.~5.1]{Stark2}, for all $m\geq 0$ the quantity \begin{equation}\label{n-lya} \varsigma_{m,i}(\textbf{t})=\sup_{x,x^{\prime}\in I_i}\frac{d(f_\textbf{t}^m(x), f_\textbf{t}^m(x^{\prime}))}{d(x,x^{\prime})}, \ i=1, \cdots,n, \end{equation} exists for almost every $\textbf{t}$ and is measurable, where $f^m_{\textbf{t}}=f_{S^{m-1}(\textbf{t})}\circ \ldots \circ f_{S(\textbf{t})}\circ f_\textbf{t}$. Also, by \cite[Lem.~5.3]{Stark2}, the sequence $\varsigma_{m,i}$ is submultiplicative. It immediately follows that $\log \varsigma_{m,i}$ is subadditive, that is, $\log \varsigma_{m+k,i}(\textbf{t})\leq \log \varsigma_{m,i}(\textbf{t})+ \log \varsigma_{k,i}(S^m(\textbf{t}))$. We can thus apply Kingman's subadditive ergodic theorem \cite[Thm.~5.4]{Kr} to $\log \varsigma_{m,i}$ to deduce that the limit \begin{equation}\label{lyai} \lambda_i(\textbf{t})=\lim_{m \to +\infty}\frac{1}{m}\log \varsigma_{m,i}(\textbf{t}) \end{equation} exists for $\nu$-almost every $\textbf{t}$ and is $S$-invariant. Moreover, \begin{equation}\label{44} \lim_{m \to \infty}\frac{1}{m}\log \varsigma_{m,i}(\textbf{t})= \inf_{m}\frac{1}{m}\log \varsigma_{m,i}(\textbf{t}) \end{equation} and, by the ergodicity of $\nu$, this limit is constant; we denote it by $\lambda_i(\nu)$. Note that, by definition, for each $\textbf{t}=(\dots, t_{-1}, t_0)\in \Omega$, $$f^m_{\textbf{t}}=f_{S^{m-1}(\textbf{t})}\circ \dots \circ f_{S(\textbf{t})}\circ f_\textbf{t}=f_{\omega^{m-1}(t_0)}\circ \dots \circ f_{\omega(t_0)}\circ f_{t_0}.$$ On the other hand, by (\ref{de}), for each $\textbf{t}=(\ldots, t_{-1}, t_0) \in \Omega$ with $t_0 \in \mathbb{T}^1 \setminus B_\delta(L_0 \cup L_1)$ and every $x \in I_i$, we have $\|Df_\textbf{t}(x)\|<1$. It is known \cite{BY} that for Lebesgue almost all $t_0$ the set $\{\omega^n(t_0): n \in \mathbb{N}\}$ is dense in $\mathbb{T}^1$. Since $\nu^+$ is equivalent to Lebesgue, the same holds for $\nu^+$-almost all $t_0$.
Hence, for $\nu^+$-almost all $t_0$, there exists $n \in \mathbb{N}$ such that $\omega^n(t_0) \in \mathbb{T}^1 \setminus B_\delta(L_0 \cup L_1)$. This implies that $f_{\omega^n(t_0)}$ is a uniformly contracting map. By these facts, since $\nu$ is inherited from $\nu^+$, there exists an upper bound $\lambda <0$ such that $\lambda_i(\nu)\leq \lambda <0$ (note that, for each $t$, the fiber map $f_t$ is by definition either weakly contractive or uniformly contracting). Hence, $F_i$ satisfies the non-uniform contraction condition for each $i=1, \dots, n$. \end{proof} Given a small $\varepsilon>0$, take an $\varepsilon$-neighborhood $\mathcal{U}$ of $F$ with respect to the metric defined by (\ref{m}). We take $\varepsilon$ small enough that for each $G \in \mathcal{U}$ and for all $i=1, \dots, n$, the region $\Omega \times I_i$ is a trapping region for $G|_{\Omega \times I_i}$. For each $G\in \mathcal{U}$, we take \begin{equation}\label{rest1} G_i:=G|_{\Omega \times I_i}, \ i=1, \dots, n. \end{equation} \begin{lemma}\label{cor0} Let the $\varepsilon$-neighborhood $\mathcal{U}$ of $F$ defined above be small enough. Then for every perturbed skew product $G \in \mathcal{U}$, the restricted skew products $G_i$, $i=1, \dots,n$, given by (\ref{rest1}) satisfy the non-uniform contraction condition given by Definition \ref{av}. \end{lemma} \begin{proof} By Lemma \ref{lemma0}, for each $i=1, \dots, n$, the skew product $F_i$ satisfies the non-uniform contraction condition. So, by (\ref{44}), for $\nu$-almost every $\textbf{t}$, the limit $\lim_{m \to \infty}\frac{1}{m}\log \varsigma_{m,i}(\textbf{t})= \inf_{m}\frac{1}{m}\log \varsigma_{m,i}(\textbf{t})$ exists and, by the ergodicity of $\nu$, is constant, denoted by $\lambda_i(\nu)$; in particular, $\lambda_i(\nu)<0$. Take \begin{equation}\label{la} \lambda=\max\{\lambda_i(\nu): i=1, \dots, n\}. \end{equation} Clearly, $\lambda <0$. Take a skew product $G \in \mathcal{U}$.
Let $G$ be defined by $G(\textbf{t},x)=(S(\textbf{t}),\textbf{g}(\textbf{t},x))$. Then, by the definition of the metric given by $(\ref{m})$, $\sup_{\textbf{t}} dist_{C^2}(f_\textbf{t}^{\pm}, g_\textbf{t}^{\pm})< \varepsilon$. Take \begin{equation}\label{per} \zeta_{m,i}(\textbf{t}):=\sup_{x,x^{\prime}\in I_i}\frac{d(g_\textbf{t}^m(x), g_\textbf{t}^m(x^{\prime}))}{d(x,x^{\prime})}, \end{equation} for each $i=1, \dots, n$. Since, for each $\textbf{t}$, the fiber map $g_\textbf{t}$ is a $C^2$ diffeomorphism, there exists $\varsigma \in L^1(\Omega,\nu)$ such that $d(g_\textbf{t}(x),g_\textbf{t}(x^{\prime}))<\varsigma(\textbf{t})d(x,x^{\prime})$ for $\nu$-almost every $\textbf{t}$. Then, by \cite[Lem.~5.1]{Stark2}, for all $m\geq 0$, $\zeta_{m,i}(\textbf{t})$ exists for almost every $\textbf{t}$ and is measurable. Also, by \cite[Lem.~5.3]{Stark2}, the sequence $\zeta_{m,i}$ is submultiplicative and $\log \zeta_{m,i}$ is subadditive. By Kingman's subadditive ergodic theorem \cite[Thm.~5.4]{Kr}, the limit \begin{equation}\label{lyaig} \lambda_i(\textbf{t}, G)=\lim_{m \to +\infty}\frac{1}{m}\log \zeta_{m,i}(\textbf{t}) \end{equation} exists for $\nu$-almost every $\textbf{t}$ and is $S$-invariant. Moreover, \begin{equation}\label{t44} \lim_{m \to \infty}\frac{1}{m}\log \zeta_{m,i}(\textbf{t})= \inf_{m}\frac{1}{m}\log \zeta_{m,i}(\textbf{t}) \end{equation} and, by the ergodicity of $\nu$, this limit is constant, denoted by $\lambda_i(\nu, G)$. We need to see that this number is negative. Fix a small $\delta$ with $\lambda+\delta <0$. By (\ref{44}) and (\ref{t44}), $\lambda_i(\nu)=\inf_{m}\frac{1}{m}\log \varsigma_{m,i}(\textbf{t})$ and $\lambda_i(\nu, G)=\inf_{m}\frac{1}{m}\log \zeta_{m,i}(\textbf{t})$ for $\nu$-a.e. $\textbf{t}$.
Thus, for a typical point $\textbf{t}$ for both $\lambda_i(\nu)$ and $\lambda_i(\nu, G)$, there exists $m_0=m_0(\textbf{t}) \in \mathbb{N}$ such that $\frac{1}{m_0}\log \varsigma_{m_0,i}(\textbf{t})< \lambda+\delta$, where $\lambda$ is given by (\ref{la}). Taking $\varepsilon$ small enough, we get $\frac{1}{m_0}\log \zeta_{m_0,i}(\textbf{t})< \lambda+\delta <0$. Consequently, $$ \lambda_i(\nu, G)=\inf_{m}\frac{1}{m}\log \zeta_{m,i}(\textbf{t})\leq \frac{1}{m_0}\log \zeta_{m_0,i}(\textbf{t})< \lambda+\delta <0. $$ Thus $G_i$ satisfies the non-uniform contraction condition. \end{proof} Note that, for each $1\leq i\leq n$, the compact set $\Omega \times I_i $ is a trapping region for each $G \in \mathcal{U}$. Let us take \begin{equation}\label{maxi} A_{max}(G):=\bigcap_{n\geq 0}G^n(\Omega \times \mathbb{I}), \ \text{and} \ \Delta_i(G):=\bigcap_{n\geq 0}G^n(\Omega \times I_i), \ 1\leq i\leq n. \end{equation} Then $\Delta_i(G)$, $1\leq i\leq n$, are the maximal attractors corresponding to the trapping regions $\Omega \times I_i $, and the union $K(G):=\bigcup_{i=1}^n \Delta_i(G)$ is a compact invariant set contained in the global attractor $A_{max}(G)$. Applying Theorem \ref{thmg} we get the next result (see also \cite[Thm.~5]{C2}, \cite[Thm.~1.4]{Stark2} or \cite[Pro.~2.3]{BHN}). \begin{theorem}\label{thm000} Let $G\in \mathcal{U}$, where the open set $\mathcal{U}$ is given by Lemma \ref{cor0}. Then there exist $S$-invariant sets $\Omega_i(G)\subseteq \Omega$, $i=1, \dots,n$, such that $\nu(\Omega_i(G))=1$, and measurable functions $\gamma_{G,i}:\Omega_i(G)\to I_i$ such that $\Gamma_{G,i}$, the graphs of $\gamma_{G,i}$, are invariant under $G$. Furthermore, $\Gamma_{G,i}$, $i=1, \dots,n$, are attracting $(G,\nu)$-invariant graphs.
\end{theorem} For the skew product $F$ and any small perturbation $G \in \mathcal{U}$, we define measures $\nu_{\gamma_{F,i}}$ and $\nu_{\gamma_{G,i}}$ on $\Omega \times I_i$ by \begin{equation}\label{meas} \nu_{\gamma_{F,i}}:=\nu\circ(id\times \gamma_{F,i})^{-1}|_{\Omega \times I_i}, \ \ \nu_{\gamma_{G,i}}:=\nu\circ(id\times \gamma_{G,i})^{-1}|_{\Omega \times I_i}, \end{equation} for each $i=1, \dots,n$. Since $\nu$ is ergodic, they are also ergodic. In particular, they are supported on the maximal attractors $\Delta_i(F)$ and $\Delta_i(G)$, respectively. Note that in the case $n=1$, i.e. when there is only one invariant graph, more details of the construction and the proofs can be found in \cite{ZG}. \begin{definition}[Bony attracting graph and bony multi-graph] Let $G$ be a skew product with interval fiber maps and an invertible base map $S$. \begin{enumerate} \item [$(1)$] An attracting bony graph is an attracting invariant graph that is the union of the graph of a continuous function $\gamma$, defined on some set of full measure, and a set of vertical closed intervals (bones) contained in the closure of the graph; see \cite{KV,ZG}. If the graph function $\gamma$ is defined on the whole space $\Omega$, then we call the graph of $\gamma$ a continuous attracting invariant graph. \item [$(2)$] A compact invariant set $K$ is a bony multi-graph for $G$ if $K$ is a multi-graph composed of a finite union of attracting invariant graphs and at least one of the invariant graphs contained in $K$ is an attracting bony graph. \end{enumerate} \end{definition} In \cite[Theorem~1]{ZG}, we investigated a certain class of skew products whose fiber maps are increasing $C^2$-interval maps over the base map $(\Omega,S)$. We proved the existence of an open set of skew products such that any skew product $G$ belonging to this set admits a unique attracting invariant graph for which the following dichotomy holds.
This invariant graph is either a continuous attracting graph or an attracting bony graph. In either case it carries an SRB measure. Here, we extend the result to the case where $G$ admits more than one attracting invariant graph. The next result follows by applying \cite[Theorem 1]{ZG} to our setting with straightforward modifications (see also \cite[Lemma~2.6]{ZG}). \begin{theorem}\label{thm88} Given a skew product system $G$ belonging to $\mathcal{U}$, the maximal attractors $\Delta_i(G)$, $i=1, \dots, n$, defined by (\ref{maxi}), satisfy the following properties: \begin{enumerate} \item [$(1)$] The maximal attractor $\Delta_i(G)$ is either a continuous attracting invariant graph or an attracting bony graph. In the case where $\Delta_i(G)$ is an attracting bony graph, the graph function $\gamma_{G,i}$ is defined on a subset $\Omega_i(G)\subseteq \Omega$ of full measure and there exists a family of vertical closed intervals (bones), one bone in each fiber $\{\textbf{t}\} \times I_i$ with $\textbf{t} \in \Omega \setminus \Omega_i(G)$; in particular, the bones are contained in the closure of the graph $\Gamma_{G,i}$ and $\text{Cl}(\Gamma_{G,i})=\Delta_i(G)$. \item [$(2)$] The invariant ergodic measures $\nu_{\gamma_{G,i}}$, $i=1, \dots,n$, given by (\ref{meas}) and supported on the closure of $\Gamma_{G,i}$, are SRB measures. \end{enumerate} \end{theorem} Weakly contractive fiber maps prevent the existence of bones in the original skew product; bones appear, however, in small perturbations of it. By \cite[Lemma~2.5]{ZG}, we get the next lemma. It shows that the set of $G \in \mathcal{U}$ having an attracting bony graph is nonempty. \begin{lemma}\label{G} There exists a skew product $G \in \mathcal{U}$ and an index $1\leq i\leq n$ such that $\Delta_i(G)$ is the closure of an attracting bony graph. In particular, the subset of bones has the cardinality of the continuum and is dense in the attractor.
\end{lemma} \begin{lemma}\label{lem99} There exists an upper semicontinuous extension of the graph function $\gamma_{G,i}: \Omega_i(G) \to I_i$ to the whole space $\Omega$. \end{lemma} \begin{proof} By Theorem \ref{thm88}, there exist $S$-invariant sets $\Omega_i(G)$ such that $\nu(\Omega_i(G))=1$ and a measurable function $\gamma_{G,i} : \Omega_i(G) \to I_i$ such that $\Gamma_{G,i}$, the graph of $\gamma_{G,i}$, is invariant under $G$. Let $\textbf{t} \in \Omega$ and $\varepsilon > 0$ be given. By Theorem \ref{thm88}, $\text{Cl}(\Gamma_{G,i})=\Delta_i(G)$. Take $I_{i,\textbf{t}}:=\{\textbf{t}\} \times I_i$ and $\Delta_{G,i,\textbf{t}}:=\Delta_i(G)\cap I_{i,\textbf{t}}$. Then \begin{equation*} \Delta_{G,i,\textbf{t}}=\bigcap_{n \geq 0}g_{\textbf{t}}\circ g_{S^{-1}(\textbf{t})}\circ \dots \circ g_{S^{-n}(\textbf{t})}(I_i)=\lim_{n \to \infty}g_{\textbf{t}}\circ g_{S^{-1}(\textbf{t})}\circ \dots \circ g_{S^{-n}(\textbf{t})}(I_i), \ i=1, \dots,n. \end{equation*} Note that $g_{\textbf{t}}\circ g_{S^{-1}(\textbf{t})}\circ \dots \circ g_{S^{-n}(\textbf{t})}(I_i)$ is a sequence of nested intervals, and thus $\Delta_{G,i,\textbf{t}}$ is either an interval or a single point. In particular, if $\textbf{t} \in \Omega_i(G)$ then $\Delta_{G,i,\textbf{t}}$ is a single point. If $n$ is big enough then \begin{equation*} g_{\textbf{t}}\circ g_{S^{-1}(\textbf{t})}\circ \dots \circ g_{S^{-n}(\textbf{t})}(I_i) \subset U_{\frac{\varepsilon}{2}}(\Delta_{G,i,\textbf{t}}), \end{equation*} where $U_{\frac{\varepsilon}{2}}(\Delta_{G,i,\textbf{t}})$ is the $\frac{\varepsilon}{2}$-neighborhood of $\Delta_{G,i,\textbf{t}}$. Let $\textbf{t}^{\prime}$ be sufficiently close to $\textbf{t}$.
Then $g_{\textbf{t}^{\prime}} \circ g_{S^{-1}(\textbf{t}^{\prime})} \circ \dots \circ g_{S^{-n}(\textbf{t}^{\prime})}$ is $C^2$-close to $g_{\textbf{t}}\circ g_{S^{-1}(\textbf{t})}\circ \dots \circ g_{S^{-n}(\textbf{t})}$ and hence \begin{equation*} g_{\textbf{t}^{\prime}}\circ g_{S^{-1}(\textbf{t}^{\prime})}\circ \dots \circ g_{S^{-n}(\textbf{t}^{\prime})}(I_i) \subset U_{\frac{\varepsilon}{2}}(\Delta_{G,i,\textbf{t}}). \end{equation*} Then \begin{equation}\label{1} \Delta_{G,i,\textbf{t}^{\prime}}\subset g_{\textbf{t}^{\prime}} \circ g_{S^{-1}(\textbf{t}^{\prime})} \circ \dots \circ g_{S^{-n}(\textbf{t}^{\prime})}(I_i)\subset U_{\frac{\varepsilon}{2}}(g_{\textbf{t}}\circ g_{S^{-1}(\textbf{t})}\circ \dots \circ g_{S^{-n}(\textbf{t})}(I_i))\subset U_{\varepsilon}(\Delta_{G,i,\textbf{t}}). \end{equation} This implies the upper semicontinuity of the map $\textbf{t} \mapsto \Delta_{G,i,\textbf{t}}$. This semicontinuity immediately implies the continuity of its graph part. Indeed, by (\ref{1}), we obtain that $$\text{diam}(\Delta_{G,i,\textbf{t}^{\prime}})\leq \text{diam}(\Delta_{G,i,\textbf{t}})+2\varepsilon. $$ If $\Delta_{G,i,\textbf{t}}$ is a single point, then $\text{diam}(\Delta_{G,i,\textbf{t}})=0$. Then, by this fact and (\ref{1}), $$|\text{diam}(\Delta_{G,i,\textbf{t}})-\text{diam}(\Delta_{G,i,\textbf{t}^{\prime}})|= \text{diam}(\Delta_{G,i,\textbf{t}^{\prime}})\leq \text{diam}(\Delta_{G,i,\textbf{t}})+2\varepsilon \leq 2\varepsilon.$$ This implies continuity at $\textbf{t}$. Consider the extension of $\gamma_{G,i} : \Omega_i(G) \to I_i$ to the whole space $\Omega$ defined as follows: for each $\textbf{t} \in \Omega$, take \begin{equation}\label{2} \gamma_{i}(\textbf{t})=\lim_{n \to \infty}g_{\textbf{t}}\circ g_{S^{-1}(\textbf{t})}\circ \dots \circ g_{S^{-n}(\textbf{t})}(a_i), \end{equation} where $a_i$ is the left endpoint of $I_i$.
By (\ref{1}), if $\textbf{t} \in \Omega_i(G)$, then $\lim_{n \to \infty}g_{\textbf{t}}\circ g_{S^{-1}(\textbf{t})}\circ \dots \circ g_{S^{-n}(\textbf{t})}(I_i)=\lim_{n \to \infty}g_{\textbf{t}}\circ g_{S^{-1}(\textbf{t})}\circ \dots \circ g_{S^{-n}(\textbf{t})}(a_i)$. This fact implies that the map $\gamma_i$ defined in (\ref{2}) is an extension of $\gamma_{G,i}$. We claim that $\gamma_i$ is upper semicontinuous. Indeed, let $\gamma_i(\textbf{t})<c$ for some $c \in (0,1)$. Then, by definition, $\lim_{n \to \infty}g_{\textbf{t}}\circ g_{S^{-1}(\textbf{t})}\circ \dots \circ g_{S^{-n}(\textbf{t})}(a_i)<c$. Hence, there exists $n_0 \in \mathbb{N}$ such that $g_{\textbf{t}}\circ g_{S^{-1}(\textbf{t})}\circ \dots \circ g_{S^{-n}(\textbf{t})}(a_i)<c$ for each $n \geq n_0$. Note that the fiber maps are increasing interval maps. Take $\varepsilon >0$ small enough that if an interval map $f$ is $C^2$-$\varepsilon$-close to $g_{\textbf{t}}\circ g_{S^{-1}(\textbf{t})}\circ \dots \circ g_{S^{-n}(\textbf{t})}$ then $f(a_i)<c$. Now take $\textbf{t}^{\prime}$ sufficiently close to $\textbf{t}$ such that $g_{\textbf{t}^{\prime}} \circ g_{S^{-1}(\textbf{t}^{\prime})} \circ \dots \circ g_{S^{-n}(\textbf{t}^{\prime})}$ is $C^2$-$\varepsilon$-close to $g_{\textbf{t}}\circ g_{S^{-1}(\textbf{t})}\circ \dots \circ g_{S^{-n}(\textbf{t})}$. Then $g_{\textbf{t}^{\prime}} \circ g_{S^{-1}(\textbf{t}^{\prime})} \circ \dots \circ g_{S^{-n}(\textbf{t}^{\prime})}(a_i)<c$. This fact implies the upper semicontinuity of $\gamma_i$. \end{proof} By Corollary \ref{thmm}, Lemma \ref{cor0} and Theorem \ref{thm88}, we get the next result. \begin{corollary}\label{cor33} Let $G \in \mathcal{U}$ and consider the compact invariant set $K(G):=\bigcup_{i=1}^n \Delta_i(G)$, where $\Delta_i(G)$, $i=1, \dots,n$, are given by (\ref{maxi}). Then $K(G)$ is an invariant multi-graph or a bony multi-graph.
In particular, $K(G)$ carries $n$ ergodic SRB measures $\nu_{\gamma_{G,i}}$, $i=1, \dots,n$, given by (\ref{meas}). \end{corollary} \section{Thermodynamic properties of invariant graphs} Take a skew product $G \in \mathcal{U}$ given by Theorem \ref{thm000}. Here, we discuss thermodynamic properties of the invariant graphs $\Gamma_{G,i}$, $i=1, \dots , n$, given by Theorem \ref{thm88}, and hence of the bony multi-graph $K(G)$. Let $(X, d)$ be a non-empty, compact metric space and $T : X \to X$ be a continuous transformation. For each $n \in \mathbb{N}$ define for $x, y \in X$ the metric \begin{equation*} d_n(x, y) := \max \{d(T^i(x), T^i(y)) : 0 \leq i < n \} . \end{equation*} Given some $\epsilon > 0$, a subset $\emptyset \neq E \subseteq K \subseteq X$ is called $(\epsilon, n)$-\emph{separated} in $K$ if \begin{equation*} \inf \{ d_n(x, y) : x \neq y \in E \} \geq \epsilon. \end{equation*} In addition, $E \subseteq K$ is called \emph{maximally} $(\epsilon, n)$-\emph{separated} in $K$ if for all $z \in K$ the set $E \cup \{ z \}$ is no longer $(\epsilon, n)$-separated. By Zorn's lemma, for every non-empty subset $K \subset X$ there exists a maximally $(\epsilon, n)$-separated set $E \subset K$. In what follows, the set of all $T$-invariant probability measures is denoted by $\mathcal{M}_T(X)$. Moreover, we denote by $\mathcal{E}_T (X) \subseteq \mathcal{M}_T(X)$ the set of all $T$-invariant, ergodic probability measures on $X$. Recall that for $\mu \in \mathcal{M}_T(X)$ we denote by $h_\mu(T)$ the measure-theoretic entropy of $T$ with respect to $\mu$. For a dynamical system $(X, T)$ the quantity $h_{top}(T)$ denotes the topological entropy of $(X, T)$, and one has \begin{equation*} h_{top}(T)=\sup \{h_\mu(T): \mu \in \mathcal{M}_T(X)\}. \end{equation*} Let $(X, T)$ be a dynamical system and $\phi : X \to \mathbb{R}$ be an arbitrary function.
For every subset $K \subseteq X$, define \begin{equation*} P_K(T, \phi) := \lim_{\epsilon \to 0}\limsup_{n \to \infty}\frac{1}{n}\log \sup_{E}\sum_{x \in E}\exp \sum_{i=0}^{n-1}\phi (T^i(x)), \end{equation*} where the supremum is taken over all $(\epsilon, n)$-separated sets in $K$. The variational principle for the topological pressure of continuous functions was proven in \cite{W}: one has \begin{equation*} P_X(T, \phi) = \sup \{h_\mu(T) + \int_X \phi d\mu\}, \end{equation*} where the supremum is taken over all ergodic $T$-invariant Borel probability measures $\mu$ on $X$. Here $h_\mu(T)$ denotes the measure-theoretic entropy of $T$ with respect to $\mu$. Take a skew product $G\in \mathcal{U}$ given by Theorem \ref{thm000}. We recall that the base map $S$ is an extension of the expanding circle map $\omega: \mathbb{T}^1 \to \mathbb{T}^1$, $\omega(t)=4t \ (\mathrm{mod}\ 1)$. In the setting of uniformly expanding maps, equilibrium states always exist, and they are unique SRB measures if the potential is H\"{o}lder continuous and the dynamics is topologically exact, see \cite[Theorem 12.1]{VO}. Clearly $\omega$ is a topologically exact uniformly expanding map, and hence it admits a unique SRB equilibrium measure $\nu^+_\phi$ for each H\"{o}lder continuous potential $\phi$. Moreover, this measure is supported on the whole $\mathbb{T}^1$. As we have seen in Subsection 4.2, there exists a semiconjugacy $\textbf{p} : \Omega \to \mathbb{T}^1$ sending each sequence $\textbf{t}=(\ldots, t_{-n}, \ldots, t_{-1}, t_0)\in \Omega$ to its term $t_0$ of order zero. Given an ergodic measure $\mu^+$ defined on Borel subsets of $\mathbb{T}^1$, there exists a unique measure $\mu$ defined on Borel subsets of $\Omega$ such that $\mu^+=\textbf{p}_\ast \mu$. Here, we investigate the relation between the entropy and topological pressure of the expanding circle map $\omega$ and those of its extension $S$. First, we need the following results.
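Before turning to these results, the separated-set quantity in the definition of $P_K(T,\phi)$ can be illustrated concretely for the expanding circle map $\omega(t)=4t \ (\mathrm{mod}\ 1)$: for $\phi\equiv 0$, the grid $\{k/4^n : 0\leq k<4^n\}$ is $(1/4,n)$-separated in the metric $d_n$, so the count of separated points grows at rate $\log 4=h_{top}(\omega)$, consistent with the variational principle. The following numerical sketch is purely illustrative and is not part of the argument:

```python
import math

def circle_dist(x, y):
    # distance on the circle T^1 = R/Z
    d = abs(x - y) % 1.0
    return min(d, 1.0 - d)

def d_n(T, x, y, n):
    # Bowen metric d_n(x, y) = max_{0 <= i < n} d(T^i x, T^i y)
    best = 0.0
    for _ in range(n):
        best = max(best, circle_dist(x, y))
        x, y = T(x), T(y)
    return best

omega = lambda t: (4.0 * t) % 1.0

# The grid {k/4^n} is (1/4, n)-separated: brute-force check for n = 3.
# Dyadic points are exact in floating point, so the check is exact.
n = 3
grid = [k / 4**n for k in range(4**n)]
sep = min(d_n(omega, x, y, n) for i, x in enumerate(grid)
          for y in grid[i + 1:])
assert sep >= 0.25 - 1e-12

# Entropy estimate (1/n) log #E for E = {k/4^n}: approximately log 4
estimate = math.log(len(grid)) / n
print(estimate, math.log(4))
```

For $\phi\equiv 0$ the pressure reduces to the topological entropy, so this growth rate is exactly $h_{top}(\omega)=\log 4$.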
Assume $f:M\longrightarrow M$ and $\tilde{f}:\tilde{M}\longrightarrow \tilde{M}$ are two continuous transformations of compact topological spaces $M$ and $\tilde{M}$, respectively. If $f$ is a topological factor of $\tilde{f}$, then $h_{top}(f)\leq h_{top}(\tilde{f})$, see \cite{VO}. Also, the Ledrappier--Walters formula states the relation between the metric entropy of $\tilde{f}:\tilde{M}\to \tilde{M}$ and that of its topological factor $f:M\to M$: \begin{theorem} (\textbf{Ledrappier--Walters formula}) \cite{LW} Let $\tilde{M}$ and $M$ be compact metric spaces and let $\tilde{f}:\tilde{M}\longrightarrow \tilde{M}$, $f:M\longrightarrow M$ and $\pi:\tilde{M}\longrightarrow M$ be continuous maps such that $\pi$ is surjective and $\pi\circ\tilde{f}=f\circ\pi$. Then $$\sup_{\tilde{\mu};\pi_{*}\tilde{\mu}=\mu}h_{\tilde{\mu}}(\tilde{f})=h_{\mu}(f)+\int h_{top}(\tilde{f},\pi^{-1}(y))d\mu(y).$$ \end{theorem} \begin{lemma}\label{l} The following two facts hold: \begin{enumerate} \item [$(1)$] For any probability measures $\mu$ and $\mu^+$ invariant under $S$ and $\omega$, respectively, with $\mu^+=\textbf{p}_\ast \mu$, one has that $h_{\mu^+}(\omega)=h_\mu(S)$. \item [$(2)$] Let $\phi$ be a H\"{o}lder continuous potential for the expanding circle map $\omega$. Then $P_\omega(\phi)=P_S(\psi)$, where $\psi= \phi \circ \textbf{p}$. \end{enumerate} \end{lemma} \begin{proof} (1) Since \begin{equation*} \textbf{p}^{-1}(t)=\{(\ldots, t_{-n}, \ldots, t_{-1}, t_0)\in \Omega: t_0=t\}, \end{equation*} we observe that $h_{top}(S, \textbf{p}^{-1}(t))=0$ for every $t \in \mathbb{T}^1$, because we can choose a subset of $\textbf{p}^{-1}(t)$ with finite cardinality as an $n$-generator, for every $n \in \mathbb{N}$. Thus, we apply the Ledrappier--Walters formula to conclude that $h_{\mu^+}(\omega)=h_\mu(S)$, see Section 3.3 of \cite{RV} for more details. (2) First we observe that if $\phi$ is H\"{o}lder continuous then $\phi \circ \textbf{p}$ is also H\"{o}lder continuous.
Since $h_{top}(S, \textbf{p}^{-1}(t))=0$ for every $t \in \mathbb{T}^1$, by part (1) and the Ledrappier--Walters formula, we have \begin{eqnarray*} P_\omega(\phi)\leq P_S(\psi) &=& \sup\{h_\mu(S)+\int \psi d\mu\} \\ &=& \sup\{h_{\mu^+}(\omega)+\int h_{top}(S, \textbf{p}^{-1}(t))d\mu^+(t)+\int \phi d\mu^+\} \\ &=& P_\omega(\phi), \end{eqnarray*} where the supremum is taken over all probability measures $\mu$ invariant under $S$. Note that, given an invariant measure $\mu^+$ defined on Borel subsets of $\mathbb{T}^1$, there exists a unique invariant measure $\mu$ defined on Borel subsets of $\Omega$ such that $\mu^+=\textbf{p}_\ast \mu$. \end{proof} By Theorem \ref{thm000}, there exist $S$-invariant sets $\Omega_i(G)\subseteq \Omega$, $i=1, \dots,n$, such that $\nu(\Omega_i(G))=1$, and measurable functions $\gamma_{G,i}:\Omega_i(G)\to I_i$ such that $\Gamma_{G,i}$, the graphs of $\gamma_{G,i}$, are invariant under $G$. Furthermore, $\Gamma_{G,i}$, $i=1, \dots,n$, are attracting $(G,\nu)$-invariant graphs. Let us take the skew products \begin{equation}\label{ss} G_i:= G|_{\Omega \times I_i}, \ G_i(\textbf{t}, x)= (S(\textbf{t}), g(\textbf{t}, x)), \ 1\leq i\leq n. \end{equation} By Lemma \ref{lem99}, we extend the measurable functions $\gamma_{G,i}$ to the whole space $\Omega$ so that the extended functions are upper semicontinuous. We denote the extended functions by the same notation $\gamma_{G,i}$. Recently, much attention has been devoted to extending the definition of pressure to not necessarily continuous functions $\phi$, and to proving a corresponding variational principle. A variational principle for sub-additive, upper semi-continuous sequences of functions was established in \cite{CFH} and \cite{BF}. This result was recently generalized in \cite{FH} for weighted topological pressure on systems with upper semi-continuous entropy mapping.
Then, Rauch \cite{R} extended the original definitions of pressure to discontinuous $\phi$ and compared them to the classical ones. Furthermore, he determined several classes of functions which admit variational inequalities and principles. Following \cite{R}, we say that $\phi$ is \emph{quasi-integrable with respect to} $\mu$ if either $\int_X \phi^+ d\mu <\infty$ or $\int_X \phi^- d\mu <\infty$, where $\phi^+:=\max(\phi,0)$ and $\phi^-:=\max(- \phi,0)$. The set of all measurable $\phi : X \to \mathbb{R}$ which are quasi-integrable for all $\mu \in \mathcal{M}_T (X)$ is denoted by $Q_T (X)$. We call $\phi \in Q_T (X)$ \emph{quasi-integrable with respect to} $T$. Let $\phi \in Q_T (X)$. The function $\phi$ is \emph{upper semi-continuous with respect to} $T$ if the following holds: if $\{ \mu_n \}_{n\in \mathbb{N}}$ is a sequence of atomic probability measures $\mu_n$ such that there exists a $\mu \in \mathcal{M}_T (X)$ satisfying $\mu_n \to \mu$ in the weak$^\ast$-topology, then \begin{equation*} \limsup_{n \to \infty}\int_X \phi d\mu_n \leq \int_X \phi d\mu. \end{equation*} The set of all upper semi-continuous functions with respect to $T$ is denoted by $U_T (X) \subseteq Q_T (X)$, see \cite{R} for more details. \begin{theorem}\cite[Theorem C]{R} Let $(X, T)$ be a dynamical system satisfying $h_{top}(T) < \infty$. If $\phi : X \to \mathbb{R}$ is upper semi-continuous with respect to $T$ (see the above definition), then one has \begin{equation*} P_T(X,\phi)=\sup\{h_\mu(T)+ \int_X \phi d\mu \}, \end{equation*} where the supremum is taken over all $T$-invariant Borel probability measures $\mu$ on $X$. \end{theorem} A function $\phi : X \to \mathbb{R}$ is called \emph{upper semi-continuous} if $\{ x \in X : \phi(x) < c\}$ is an open set for every $c \in \mathbb{R}$. By definition, every upper semi-continuous function is also Borel measurable. We denote the set of all upper semi-continuous functions $\phi : X \to \mathbb{R}$ by $U(X)$.
As $X$ is compact, every $\phi \in U(X)$ is bounded from above (see, for example, \cite[Theorem 2.43]{AB}). This immediately yields $U(X) \subseteq Q_T (X)$. In addition, by Proposition 6 of \cite{R}, one has $U(X) \subseteq U_T (X)$ for every continuous mapping $T : X \to X$. \begin{corollary}\cite[Corollary~6]{R}\label{upper} Let $h_{top}(T) < \infty$ and $\phi \in U(X)$. Then one has \begin{equation*} P_T(X,\phi)=\sup\{h_\mu(T)+ \int_X \phi d\mu \}, \end{equation*} where the supremum is taken over all $T$-invariant Borel probability measures $\mu$ on $X$. \end{corollary} Also, we recall the following proposition from \cite{R}. \begin{proposition}\label{p1} Let $\{\mu_n\}_{n \in \mathbb{N}}$ be a sequence of Borel probability measures with limit measure $\mu$ in the weak$^*$ topology. Then one has for $\varphi \in U(X)$ \begin{equation*} \limsup_{n \to \infty}\int_X \varphi d\mu_n \leq \int_X \varphi d\mu. \end{equation*} \end{proposition} \begin{proposition}\label{eq} Consider the skew products $G_i= G|_{\Omega \times I_i}$, $1\leq i\leq n$, given by (\ref{ss}) and let $\phi_i: \Omega \times I_i \to \mathbb{R}$ be a H\"{o}lder continuous potential. Take $\psi_i : \Omega \to \mathbb{R}$ defined by $\psi_i(\textbf{t}) = \phi_i(\textbf{t}, \gamma_{G,i}(\textbf{t}))$. Then there exists an equilibrium state $\mu_{\psi_i}$ for $(S, \psi_i)$. \end{proposition} \begin{proof} By Lemma \ref{lem99}, $\psi_i$ is upper semicontinuous. By Corollary \ref{upper}, one has that \begin{equation*} P_S(\Omega,\psi_i)=\sup\{h_\mu(S)+ \int_\Omega \psi_i d\mu \}, \end{equation*} where the supremum is taken over all $S$-invariant Borel probability measures $\mu$ on $\Omega$.
Since the expanding circle map $\omega$ is a $C^2$ local diffeomorphism, the natural extension $S$ is locally Lipschitz continuous, i.e., given $\textbf{t} \in \Omega$ there exists a neighborhood $V_\textbf{t}$ such that for every $\textbf{t}_1, \textbf{t}_2 \in S(V_{\textbf{t}})$ we have \begin{equation*} d_\Omega(S^{-1}(\textbf{t}_1),S^{-1}(\textbf{t}_2))\leq \sigma(\textbf{t})d_{\Omega}(\textbf{t}_1, \textbf{t}_2), \end{equation*} where $\sigma(\textbf{t})=\|D\omega^{-1}\| \circ \textbf{p}(\textbf{t})$. In particular, $S$ is a Ruelle expanding map and hence it is expansive. By Corollary 9.2.17 of \cite{VO}, the entropy function of an expansive transformation of a compact metric space is upper semi-continuous. By Proposition \ref{p1} and the upper semicontinuity of $\psi_i$, the function $\mu \mapsto \int_\Omega \psi_i d\mu$ is also upper semicontinuous. By these facts, the map $\mu \mapsto h_\mu(S)+ \int_\Omega \psi_i d\mu$ is upper semicontinuous and hence there exists an equilibrium state $\mu_{\psi_i}$. Indeed, let $\{\mu_n\}_{n \in \mathbb{N}}$ be a sequence of invariant Borel probability measures such that \begin{equation*} h_{\mu_n}(S)+ \int_{\Omega} \psi_i d\mu_n \quad \text{converges \ to} \quad P_S(\Omega,\psi_i). \end{equation*} Since the space of invariant Borel probability measures is compact, there exists some accumulation point $\mu_{\psi_i}$. By this fact and the upper semicontinuity of $\mu \mapsto h_\mu(S)+ \int_\Omega \psi_i d\mu$, we get \begin{equation*} h_{\mu_{\psi_i}}(S)+ \int_{\Omega} \psi_i d\mu_{\psi_i} \geq \limsup_{n \to \infty}\Big(h_{\mu_n}(S)+ \int_{\Omega} \psi_i d\mu_n\Big)=P_S(\Omega,\psi_i), \end{equation*} so $\mu_{\psi_i}$ is an equilibrium state, as stated. \end{proof} Let $\phi: \Omega \times I \to \mathbb{R}$ be a H\"{o}lder continuous potential that does not depend on the fiber. This means that the function $\phi(\textbf{t}, \cdot) : I \to \mathbb{R}$ is constant, for each fixed $\textbf{t} \in \Omega$.
Take the restriction $\phi_i:=\phi |_{\Omega \times I_i}$, $i=1, \dots, n$. Clearly, $\phi_i$ is also a H\"{o}lder continuous potential. Hence, as above, the potential $\phi_i$ induces an upper semicontinuous potential $\psi_i : \Omega \to \mathbb{R}$ defined by $\psi_i(\textbf{t}) = \phi_i(\textbf{t}, \gamma_{G,i}(\textbf{t}))$. We recall that the expanding circle map $\omega$ possesses an absolutely continuous invariant ergodic measure $\nu^+$ which is equivalent to Lebesgue measure. Then, $(\Omega, S)$ has an invariant ergodic measure $\nu$ inherited from the invariant measure $\nu^+$, i.e. $\nu^+=\textbf{p}_\ast \nu$. \begin{proposition}\label{p00} Let $\phi: \Omega \times I \to \mathbb{R}$ be a H\"{o}lder continuous potential and take $\phi_i=\phi |_{\Omega \times I_i}$, $i=1, \dots, n$. Consider the skew product $G\in \mathcal{U}$ and $G_i= G|_{\Omega \times I_i}$, $1\leq i\leq n$, given by (\ref{ss}). Assume the measure $\mu_{\psi_i}$, given by Proposition \ref{eq}, is ergodic and not singular with respect to the invariant ergodic measure $\nu$. Then $\mu_{\phi_i}=\mu_{\psi_i} \circ (id \times \gamma_{G,i})^{-1}$ is an equilibrium state associated to $(G_i, \phi_i)$ which is supported on the maximal attractor $\Delta_i(G)$. \end{proposition} \begin{proof} Take the measure $\mu_{\psi_i}$, given by Proposition \ref{eq}, which is ergodic and not singular with respect to the invariant ergodic measure $\nu$. As both measures are $S$-invariant, ergodic, and not mutually singular, they coincide. Since $\mu_{\psi_i}$ is ergodic, $\mu_{\phi_i}$ is also ergodic. In particular, since $\mu_{\psi_i}$ is equivalent to $\nu$, the measure $\mu_{\phi_i}$ is supported on the maximal attractor $\Delta_i(G)$. By Lemma \ref{cor0}, $\lambda_i(\nu, G_i)$ is negative and $G_i$ satisfies the non-uniform contraction condition; hence, for $\nu$-almost every $\textbf{t} \in \Omega$, $h_{top}(G_i, \pi^{-1}(\textbf{t}))=0$.
Since $\mu_{\psi_i}$ is equivalent to $\nu$, for $\mu_{\psi_i}$-almost every $\textbf{t} \in \Omega$, $h_{top}(G_i, \pi^{-1}(\textbf{t}))=0$. Applying the Ledrappier--Walters formula, we have \begin{eqnarray*} P_S(\psi_i)\leq P_{G_i}(\phi_i) &=& \sup\{h_{\hat{\mu}}(G_i)+\int \phi_i d\hat{\mu}\} \\ &=& \sup\{h_{\mu}(S)+\int h_{top}(G_i, \pi^{-1}(\textbf{t}))d\mu(\textbf{t})+\int \psi_i d\mu\} \\ &=& P_S(\psi_i). \end{eqnarray*} Hence, $P_S(\psi_i)= P_{G_i}(\phi_i)$ and $\mu_{\phi_i}$ is an equilibrium state for $(G_i, \phi_i)$. \end{proof} \section{Skew products over the toral baker map} Here, a toral baker map on $\mathbb{T}^2=\mathbb{T}^1 \times \mathbb{T}^1$ is defined by \begin{align}\label{17} H:\mathbb{T}^2\to \mathbb{T}^2, \ H(t,s)=\Big(bt \ (\mathrm{mod}\ 1),\frac{s+[bt]}{b}\Big), \end{align} for some positive integer $b$, where $[\cdot]$ denotes the integer part and $\theta=(t,s)\in \mathbb{T}^2$. This map is bijective but not continuous. Moreover, it preserves the Lebesgue measure $m$. Hence, $(\mathbb{T}^2, \mathcal{B}, m, H)$ is a measure-preserving dynamical system, in the sense of Arnold \cite{AL}, where $\mathcal{B}$ is the Borel $\sigma$-algebra on $\mathbb{T}^2$. Assume $\mathcal{F}_H \subset\mathcal{F}$ denotes the family of all skew product transformations $\mathbb{G}$ with the base map $H$ defined on $\mathbb{T}^2 \times \mathbb{I}$ of the form \begin{equation}\label{skew00} \mathbb{G}: \mathbb{T}^2 \times \mathbb{I} \to \mathbb{T}^2 \times \mathbb{I}, \ \mathbb{G}(\theta,x)=(H(\theta),g_{\theta}(x)), \end{equation} where $(\theta,x)\in \mathbb{T}^2 \times \mathbb{I}$, $\theta=(t,s)\in \mathbb{T}^2$, the fiber maps $g_{\theta}$ depend on $\theta=(t,s)$ only through $t$, so that we can write $g_{\theta}=g_{t}$, and $\mathcal{F}$ is given by Definition \ref{generalskew}. By definition, $\mathbb{G}(\mathbb{T}^2 \times \mathbb{I}) \subset \mathbb{T}^2 \times \text{int}(\mathbb{I})$. Note that the baker map $H$ is bijective but not continuous.
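The bijectivity of $H$, and the role of the digit $[bt]$ of the expanding coordinate in it, can be checked concretely. The following sketch (with $b=4$, purely illustrative and not part of the argument) implements $H$ together with its inverse $H^{-1}(t,s)=\big((t+[bs])/b, \ bs \ (\mathrm{mod}\ 1)\big)$ and verifies the round trip on a dyadic grid, where floating-point arithmetic is exact:

```python
B = 4  # the integer b in the definition of H; b = 4 is the case used here

def baker(t, s):
    # H(t, s) = (b t mod 1, (s + [b t]) / b)
    return (B * t) % 1.0, (s + int(B * t)) / B

def baker_inv(t, s):
    # H^{-1}(t, s) = ((t + [b s]) / b, b s mod 1);
    # the digit [b s] recovers the digit [b t] of the preimage
    k = int(B * s)
    return (t + k) / B, (B * s) % 1.0

# Round-trip check on a dyadic grid: H^{-1} o H = H o H^{-1} = id
for i in range(16):
    for j in range(16):
        t, s = i / 16, j / 16
        assert baker_inv(*baker(t, s)) == (t, s)
        assert baker(*baker_inv(t, s)) == (t, s)

# The first coordinate of H is the expanding circle map omega(t) = 4t mod 1
assert baker(0.3, 0.7)[0] == (4 * 0.3) % 1.0
```

The last check makes explicit that the first coordinate of $H$ is $\omega(t)=4t \ (\mathrm{mod}\ 1)$, consistent with $H$ being an extension of $\omega$.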
Moreover, it is a canonical extension of the corresponding expanding circle map $\omega(t)=bt \ (\mathrm{mod}\ 1)$, where $b$ is a positive integer. Here, we assume that $b=4$. \begin{lemma} There exists a measure-theoretical isomorphism $h: (\mathbb{T}^2, H, m) \to(\Omega, S, \nu)$, where $m$ is the Lebesgue measure on $\mathbb{T}^2$. \end{lemma} \begin{proof} We define $h:\mathbb{T}^2 \to \Omega$ by $h(\theta)=(\ldots,t_{-1},t_{0})$, for $\theta=(t,s)\in \mathbb{T}^2$, so that $(t_0,s_0)=(t,s)$ and, for each $j \in \mathbb{N}$, $(t_{-j},s_{-j})=H^{-j}(t_0,s_0)$. Then, it is easy to see that $h$ is a bijective map. Indeed, the injectivity of the baker map $H$ implies the injectivity of $h$. Assume $\textbf{t}=(\ldots,t_{-1},t_{0})\in \Omega$. Take \begin{equation*} t=t_0, \ s=\sum_{i=0}^\infty \frac{\lfloor 4 t_{-i-1} \rfloor}{4^{i+1}}, \ \text{and} \ \theta = (t,s). \end{equation*} Then $h(\theta)=\textbf{t}$ and hence $h$ maps the two-dimensional torus $\mathbb{T}^2$ onto the inverse limit space $\Omega$. Also, $h \circ H=S \circ h$. It is easy to see that $h$ is a measure-theoretical isomorphism between the two measure-preserving dynamical systems $(\mathbb{T}^2, H, m)$ and $(\Omega, S, \nu)$, where $m$ is the Lebesgue measure on $\mathbb{T}^2$. \end{proof} Let us take the invariant graph $\widetilde{F}$ given by (\ref{be}). It admits an extension $\mathbb{F}\in \mathcal{F}_H$ of the form (\ref{skew00}) and an extension $F$ given by (\ref{sss}) over the solenoid map $S$. \begin{theorem}\label{thm777} There exists an open set $\mathcal{U}_H \subset \mathcal{F}_H$ of skew products over the baker map $H$ such that each skew product $\mathbb{G} \in \mathcal{U}_H$ admits an attracting multi-graph or an attracting bony multi-graph. \end{theorem} \begin{proof} Take any skew product $\mathbb{G} \in \mathcal{F}_H$ over the baker map $H$ of the form (\ref{skew00}) sufficiently close to $\mathbb{F}$.
Note that for iterates of $\mathbb{G}$ we write \begin{equation*} \mathbb{G}^n(\theta,x)=(H^n(\theta),g_{H^{n-1}(\theta)}\circ \cdots \circ g_{\theta}(x))= (H^n(\theta),g_\theta^{n}(x)). \end{equation*} Since the fiber maps $g_{\theta}$ depend on $\theta=(t,s)$ only through $t$, we can write $g^n_{\theta}(x)=g_{\omega^{n-1}(t)}\circ \dots \circ g_{t}(x)$. Thus $g^n_{\theta}=g^n_{\textbf{t}}$, for each $\textbf{t}=(\dots,t_{-1},t_0)\in \Omega$ with $t_0=t$. By this fact, to each skew product $G\in \mathcal{U}$ over the base map $S$ satisfying the conclusion of Lemma \ref{cor0}, we associate a skew product $\mathbb{G}$ over the baker map $H$ defined by $\mathbb{G}(\theta,x)=(H(\theta),g(\theta,x))=(H(\theta),g_t(x))$, $\theta=(t,s)\in \mathbb{T}^2$, where $g_t$, $t\in \mathbb{T}^1$, are the fiber maps of $G$. Therefore, there exist an open set $\mathcal{U}_H\subset \mathcal{F}_H$ and a one-to-one correspondence between the skew products $G\in \mathcal{U}$ over the base map $S$ and the skew products $\mathbb{G}\in \mathcal{U}_H$ over the baker map $H$. Moreover, $(h \times id)\circ \mathbb{G}=G\circ (h \times id)$. Given a skew product transformation $G\in \mathcal{U}$, take the invariant graphs $\gamma_{G,i}$ given by Theorem \ref{thm000}, defined on the subsets $\Omega_i(G)\subseteq \Omega$ of full measure. By Theorem \ref{thm88}, $\text{Cl}(\Gamma_{G,i})=\Delta_i(G)$, where $\Delta_i(G)$ is the maximal attractor given by (\ref{maxi}), $i=1, \dots,n$. Now, for each $\theta=(t,s)\in \mathbb{T}^2$, define $\gamma_{\mathbb{G},i}(t,s) =\gamma_{G,i}(h(t,s))$. We denote the graph of $\gamma_{\mathbb{G},i}$ by $\Gamma_{\mathbb{G},i}$. Notice that the fiber map of $\mathbb{G}$ is constant along the stable leaves of the baker map $H$, given by the vertical fibers $\{t\} \times \mathbb{T}^1$; hence $\gamma_{\mathbb{G},i}(t,s)$ is constant along the stable leaves of $H$.
Take $I_i(\textbf{t})=\{\textbf{t}\}\times I_i$ for $\textbf{t}=(\cdots,t_{-1},t_0)\in \Omega$, and \begin{equation*} I_i(\textbf{t},m,G)=g_{S^{-1}(\textbf{t})}\circ \cdots \circ g_{S^{-m}(\textbf{t})}(I_i)=g_{t_{-1}}\circ \cdots \circ g_{t_{-m}}(I_i), \end{equation*} for $i=1, \dots,n$. Then \begin{equation*} \Delta_i(G)\cap I_i(\textbf{t})=\bigcap_{m\geq 0}I_i(\textbf{t},m,G). \end{equation*} Note that, by construction, the fiber maps of the skew products $G$ and $\mathbb{G}$ are the same. Hence, by these observations and the previous lemma, $\text{Cl}(\Gamma_{\mathbb{G},i})$ is an attracting invariant graph or an attracting bony graph for $\mathbb{G}$. Let us take \begin{equation}\label{maxi1} A_{max}(\mathbb{G}):=\bigcap_{n\geq 0}\mathbb{G}^n(\mathbb{T}^2 \times I), \ \text{and} \ \Delta_i(\mathbb{G}):=\bigcap_{n\geq 0}\mathbb{G}^n(\mathbb{T}^2 \times I_i), \ 1\leq i\leq n. \end{equation} Then $\Delta_i(\mathbb{G})=\text{Cl}(\Gamma_{\mathbb{G},i})$, $1\leq i\leq n$, and thus the union $K(\mathbb{G}):=\bigcup_{i=1}^n \Delta_i(\mathbb{G})$ is an attracting multi-graph or bony multi-graph. \end{proof} \section*{Data Availability} Data sharing not applicable to this article as no datasets were generated or analysed during the current study. \section*{Conflict of interest} The authors declare that they have no conflict of interest.
\section{Introduction} Mathematical billiards serve as relevant models of various phenomena in mechanics, geometric optics and acoustics, statistical physics, and quantum physics. Such billiards also constitute one of the most popular and arguably the most visual class of dynamical systems in mathematical studies. In mathematical billiards, a point particle moves by inertia in a domain with boundary. When the point particle reaches the boundary, it gets elastically reflected.\par Recently, physical billiards, where the moving particle is a hard ball, were introduced \cite{Bun19}. It was shown there that, in the transition from a mathematical to a physical billiard in the same billiard table, any type of transition from chaotic to regular dynamics, and vice versa, may occur. Moreover, such a transition from a point particle to a particle of finite size can completely change the dynamics of some classical and well-studied models, e.g. the Ehrenfests' Wind-Tree model \cite{ABB}. In quantum systems, a ``particle'' naturally has a finite size due to the uncertainty principle, which leads to some new findings in quantum chaos theory \cite{PCB,RBG}.\par Interesting changes in dynamics occur when the boundary of a billiard table has a visible singularity, i.e. a point in the intersection of two or more smooth components of the boundary such that a small enough physical particle can hit that point of the boundary. If a billiard table is two-dimensional, then such singularities are internal corners where two smooth components of the boundary intersect and make an angle greater than $\pi$ inside the billiard table. In all the papers cited above, it was assumed that reflection of the ball off such a visible singularity occurs in a natural manner corresponding to the simplest elastic collision. In the present note, we justify this assumption for a smooth hard ball.
It is worthwhile to mention that there are other types of reflection of a ball off a visible singular point; they correspond to a rough ball, which may acquire rotation after such a collision \cite{Gar69}, even under the assumption that the collision is a no-slip one \cite{BG93,CF17}. \section{Different types of boundary singularities in billiard tables} Let $Q$ be a domain in $d$-dimensional Euclidean space $\mathbf{R}^d$ such that its boundary $\partial Q$ is the union of a finite number of $C^1$-smooth $(d-1)$-dimensional manifolds. A point $q$ of the boundary $\partial Q$ is called singular if the boundary is not differentiable at that point. This means that a singular point belongs to the intersection of some (at least two) differentiable (i.e., regular) components of the boundary. Note that we also call a singular point in dimension two (i.e. $\dim Q=2$) a corner. All non-singular points of the boundary $\partial Q$ are called regular points.\par Consider a free motion of a hard ball (a disk in dimension two) of radius $r>0$ in the domain $Q$ with elastic reflections off the boundary $\partial Q$. The resulting dynamical system is called a physical billiard \cite{Bun19}, and the domain $Q$ is a billiard table. To describe the dynamics of such a ball, it is enough to follow the motion of its center. It is easy to see that the center of the ball moves in a smaller billiard table, which one gets by moving each point $q$ of the boundary by the distance $r$ into the interior of the billiard table along the internal unit normal vector $n(q)$ \cite{Bun19}.\par We will call a singular point $q$ of the boundary $\partial Q$ an invisible singular point if for any $r>0$ the hard ball of radius $r$ cannot hit that point. Otherwise, a singular point is called a visible singular point. Therefore, $q$ is a visible singular point of a billiard table if a ball with a sufficiently small radius can hit $q$. A formal mathematical definition of a visible singular point (in any dimension) is the following one.
A singular point $A$ is a visible singular point if for any neighborhood $N$ of $A$ the convex hull of $Q\cap N$ contains a neighborhood of $A$.\par We also call a visible singular point in dimension two an internal corner. For example, Fig. \ref{Corners} shows visible and invisible singular points in dimension two.\par \begin{figure}[ht] \begin{center} \includegraphics[width=7cm]{Corners.pdf} \caption{Corners (singular points) B, C, and E are invisible to any disk. The point D is not singular, since the boundary is differentiable at D. The corner A is an internal corner (a visible singular point).}~\label{Corners} \end{center} \end{figure} Note that being a visible singular point (an internal corner) does not mean that a hard ball of any radius $r>0$ can reach (hit) that point. Namely, if the radius of the particle is larger than some constant (which depends on the shape of the billiard table), then some visible singular points become invisible (see Fig. \ref{invis}).\par \begin{figure}[ht] \begin{center} \includegraphics[width=4cm]{invis.pdf} \caption{An internal corner becomes invisible when the radius of the disk is larger than some constant.}~\label{invis} \end{center} \end{figure} Observe that at the moment of collision with a visible singular point, the center of the hard ball can be at different positions, and these possible positions depend on the shape of the boundary $\partial Q$ (see Fig. \ref{3dim}). This should be contrasted with a collision of the ball with the boundary at a regular point, when the center of the ball always has one position, namely at the distance $r$ on the internal normal line to the boundary of the billiard table. In Fig. \ref{3dim}, two situations are depicted which may happen in three-dimensional billiard tables.\par \begin{figure}[ht] \begin{center} \includegraphics[width=7cm]{3dim.pdf} \caption{(a) There are two lines of visible singular points.
When a hard ball hits a point on those lines, its center is on an arc of a circle centered at that point and orthogonal to the corresponding line. However, at the moment of collision with the intersection point of those two lines, the center of the hard ball can be at only one position. (b) Here there is one isolated visible singular point. At the moment of collision with such a singularity, the center of a hard ball is on a piece of a $2$-sphere centered at that singular point.} \label{3dim} \end{center} \end{figure} Since the particle is a hard ball, it keeps its shape at the moment of collision. Hence the center of the hard ball is at the distance $r$ from a collision point (regardless of whether this point is a regular or a singular point of the boundary). Therefore, the boundary of the reduced billiard table of the mathematical billiard, which has the same dynamics as the considered physical billiard \cite{Bun19}, acquires a piece of a sphere (or an arc of a circle if the dimension of the billiard table is two) of radius $r$ with the center at the visible singular point. Hence the reduced billiard table of the equivalent mathematical billiard has a dispersing component in the boundary, which generates chaotic (hyperbolic) dynamics in case the moving particle is a smooth hard ball.\par However, for any type of a hard ball, the reduced billiard table of the equivalent mathematical billiard acquires a dispersing (or semi-dispersing) component. This fact holds true for any type of collision of the physical ($r>0$) particle with the boundary at a visible singular point. However, such collisions can generally be elastic or inelastic and with or without slip \cite{CF17,Gar69}. The dynamics of a rough ball, even in the case of no-slip collisions, is much more complicated than the dynamics of a smooth ball.\par In Fig.
\ref{internalcorner}, which depicts the two-dimensional case, it is easy to see that the boundary of the reduced billiard table of the mathematical billiard acquires a dispersing component. Here, the center of the disk can be located at any point of an arc of the circle centered at the singular point and with radius equal to the radius of the disk. \begin{figure}[ht] \begin{center} \includegraphics[width=7cm]{onecorner.pdf} \caption{A collision between a disk and a visible singular point (here, an internal corner) is shown in the left picture. On the right, one can see its equivalent: a (virtual) collision between the disk's center and an arc of a circle centered at the internal corner, with the same radius as the disk.}~\label{internalcorner} \end{center} \end{figure} \section{No-slip collisions of a hard ball with a visible singular point} In the case of no-slip collisions, each reflection of the moving particle (hard ball) off the boundary occurs at a single point. Hence a collision at any point of the boundary does not depend on the shape of the boundary elsewhere. Therefore, the collision problem can actually be considered as a reflection of a hard ball off a point.\par At the moment of the collision, the impulse $\Delta P$ decomposes into two components: the normal impulse $\Delta P_N$, acting towards the center of the hard ball, and the tangential impulse $\Delta P_T$, caused by friction, which is tangent to the hard ball at the collision point. The tangential impulse can result either in a loss of kinetic energy or in an exchange between linear and angular momentum in which the total kinetic energy is preserved. We will consider the friction-free (elastic) collision and the case when the impulse $\Delta P_T$ results in an exchange between linear and angular momentum without loss of energy.
In other words, we consider only conservative (Hamiltonian) dynamics.\par Let a hard ball of radius $r>0$ with center at a point $O$ and mass $m=1$ hit a visible singular point $A$ of the boundary of a billiard table $Q$. Denote the linear velocity of the hard ball's center just before (after) the collision by $V^b$ ($V^a$). Consider now a decomposition of $V^b$ into two components $V_N^b$ and $V_T^b$, where $V_N^b=Proj_{\overrightarrow{OA}}V^b$ and $V_T^b=V^b-V_N^b$. Note that we will use the superscript $a$ instead of $b$ to denote velocity components right after the reflection. Denote also the angular velocity (in vector form) about the point $O$ just before (after) the collision by $\omega^b$ ($\omega^a$).\par The collision map $S$ at the point $A$ maps the linear components and the angular component of the velocity just before the collision, $(V_N^b,V_T^b,\omega^b)$, to those right after the collision, $(V_N^a,V_T^a,\omega^a)$. The map $S$ has the following properties: \begin{enumerate} \item The map $S$ is an orthogonal map because of the assumption that the system in question is Hamiltonian. \item Because of the time reversibility of the dynamics, $S^2$ is the identity map. \item The normal component of the linear velocity with respect to the boundary of the hard ball at the contact point $A$ (i.e. $V_N^b$) always reverses under the map $S$. \end{enumerate} Conditions 1 and 2 imply that the eigenvalues of the map $S$ are $1$ or $-1$. In view of 3, one gets $S(V_N^b,V_T^b,\omega^b)=(-V_N^b,V_T^a,\omega^a)$, or equivalently, $V=(V_N^b,\vec{0},\vec{0})$ is an eigenvector of $S$ corresponding to the eigenvalue $-1$. It also implies that $\Delta P_N=-2V_N^b$. Without any loss of generality, we assume that the mass of the hard ball is $1$.\par The Hamiltonian system under consideration satisfies the three conservation laws of the kinetic energy $K$, the linear momentum $P$, and the angular momentum $L$ about the point $O$.
These conservation laws in dimension $3$ are given by the relations \begin{equation}\label{eq main} \left\{ \begin{array}{l} K^b=\frac{1}{2}\left(|V_N^b|^2+|V_T^b|^2+I|\omega^b|^2\right) \\ \hspace{0.5cm}=\frac{1}{2}\left(|V_N^a|^2+|V_T^a|^2+I|\omega^a|^2\right)=K^a,\\ P^b+\Delta P=V_N^b+V_T^b+\Delta P_N+\Delta P_T=V_N^a+V_T^a=P^a \\ L^b+\Delta P_T\times\overrightarrow{AO}=I\omega^b+\Delta P_T\times\overrightarrow{AO}=I\omega^a=L^a, \end{array}\right. \end{equation} where $I$ is the moment of inertia of the hard ball.\par Using that $V_N^a=-V_N^b$ and $\Delta P_N=-2V_N^b$, one can simplify (\ref{eq main}) as \begin{equation}\label{main} \left\{ \begin{array}{l} |V_T^b|^2+I|\omega^b|^2=|V_T^a|^2+I|\omega^a|^2,\\ V_T^b+\Delta P_T=V_T^a \\ I\omega^b+\Delta P_T\times\overrightarrow{AO}=I\omega^a. \end{array}\right. \end{equation} By solving (\ref{main}) for $\Delta P_T$, we get \begin{equation}\label{delta} \langle\Delta P_T,\frac{r^2+I}{I}\Delta P_T+ 2V_T^b+2\overrightarrow{AO}\times\omega^b\rangle=0, \end{equation} where $\langle.,.\rangle$ is the inner product in $\mathbf{R}^3$ and $r$ is the radius of the hard ball. It is easy to see that $\Delta P_T=\vec{0}$ is a solution of (\ref{delta}); it corresponds to the case when there is no friction.\par Observe that the conservation laws in dimension $2$ are the same as in (\ref{eq main}) under the assumption that the billiard table $Q$ is a subset of the $xy$-plane in $\mathbf{R}^3$. \subsection{Friction-free collision (a smooth ball)} In this section, we study a friction-free (i.e. $\Delta P_T=\vec{0}$) Hamiltonian system. In this case, (\ref{main}) implies $$V_T^a=V_T^b,\hspace{1cm} \omega^a=\omega^b.$$ Here, the solution $(V_N^a,V_T^a,\omega^a)=(-V_N^b,V_T^b,\omega^b)$ of (\ref{eq main}) corresponds to the case of a smooth hard ball \cite{Gar69}, when the ball does not acquire rotation upon collision.
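Both conservative solutions of (\ref{delta}), the trivial one $\Delta P_T=\vec{0}$ and the nontrivial one $\Delta P_T=-\frac{2I}{r^2+I}(V_T^b+\overrightarrow{AO}\times\omega^b)$ studied below, can be checked numerically against the conservation laws (\ref{eq main}) and against the involution property $S^2=id$. The following sketch (unit mass, as in the text; the numerical data are arbitrary sample values) is only a verification aid, not part of the argument:

```python
import random

def collide(V, w, AO, I, no_slip):
    # One reflection off a visible singular point A; AO is the vector
    # from A to the center O, with |AO| = r; unit mass as in the text.
    r2 = sum(a * a for a in AO)
    nhat = [a / r2 ** 0.5 for a in AO]
    dot = lambda x, y: sum(p * q for p, q in zip(x, y))
    cross = lambda x, y: [x[1]*y[2] - x[2]*y[1],
                          x[2]*y[0] - x[0]*y[2],
                          x[0]*y[1] - x[1]*y[0]]
    VN = [dot(V, nhat) * c for c in nhat]       # normal component of V
    VT = [v - vn for v, vn in zip(V, VN)]       # tangential component of V
    if no_slip:  # nontrivial root of the orthogonality relation for dPT
        u = [vt + c for vt, c in zip(VT, cross(AO, w))]
        dPT = [-2 * I / (r2 + I) * c for c in u]
    else:        # friction-free collision: Delta P_T = 0
        dPT = [0.0, 0.0, 0.0]
    Va = [-vn + vt + d for vn, vt, d in zip(VN, VT, dPT)]  # V_N^a = -V_N^b
    wa = [wi + c / I for wi, c in zip(w, cross(dPT, AO))]  # I w^a = I w^b + dPT x AO
    return Va, wa

random.seed(1)
K = lambda V, w, I: 0.5 * (sum(v * v for v in V) + I * sum(x * x for x in w))
V  = [random.uniform(-1, 1) for _ in range(3)]
w  = [random.uniform(-1, 1) for _ in range(3)]
AO = [0.0, 0.0, 0.3]      # r = 0.3
I  = 0.4 * 0.3 ** 2       # sample value: 2/5 m r^2, a homogeneous ball

for no_slip in (False, True):
    Va, wa = collide(V, w, AO, I, no_slip)
    assert abs(K(V, w, I) - K(Va, wa, I)) < 1e-12          # energy preserved
    Vb, wb = collide(Va, wa, AO, I, no_slip)               # S^2 = id
    assert all(abs(x - y) < 1e-12 for x, y in zip(Vb, V))
    assert all(abs(x - y) < 1e-12 for x, y in zip(wb, w))
```

The moment of inertia $I=\frac{2}{5}r^2$ of a homogeneous ball is used only as a sample value; the checked identities hold for any $I>0$.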
Thus, in this case, we have an elastic reflection where the angle of incidence is equal to the angle of reflection.\par Also, this friction-free collision is equivalent to the elastic reflection of the hard ball's center $O$ off a piece of a $2$-sphere (or an arc of a circle in dimension $2$) centered at the visible singular point $A$ with the same radius as the hard ball \cite{ABB,Bun19}.\par In dimension $3$, the collision map $S$ is a linear map from a $6$-dimensional subspace of $\mathbf{R}^9$ to itself with eigenvalues $1$ and $-1$. When $\Delta P_T=\vec{0}$, the eigenvectors which correspond to these eigenvalues have the forms $(\vec{0},V_T^b,\omega^b)$ and $(c\overrightarrow{AO},\vec{0},\vec{0})$, respectively, where $c$ is a constant. Also, the eigenspaces corresponding to the eigenvalues $1$ and $-1$ have dimensions five and one, respectively. \subsection{Collisions with friction (a rough ball)} For the Hamiltonian system under consideration, the presence of a frictional force means that $|\Delta P_T|\neq0$. The corresponding solution of (\ref{eq main}) when $|\Delta P_T|\neq0$ describes the dynamics of a rough ball \cite{Gar69}, which has ultra-elastic (no-slip) reflections off the boundary. In this case, the tangential component of the linear velocity partially transfers to the angular velocity and vice versa.\par A nontrivial solution for $\Delta P_T$ in (\ref{delta}) is given by \begin{equation}\label{rough} \Delta P_T=-\frac{2I}{r^2+I}(V_T^b+\overrightarrow{AO}\times\omega^b). \end{equation} Let $S$ be the collision map in dimension $3$ when the tangent impulse $\Delta P_T$ is given by (\ref{rough}). Then $(\vec{0},V_T^b,\omega^b)$ is an eigenvector of the collision map $S$ corresponding to the eigenvalue $1$ if $V_T^b+\overrightarrow{AO}\times\omega^b=\vec{0}$. The solution set of the vector equation $V_T^b+\overrightarrow{AO}\times\omega^b=\vec{0}$ is a three-dimensional space.
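As a numerical sanity check (an illustration with arbitrarily chosen pre-collision velocities, not part of the argument), one can verify that the no-slip impulse (\ref{rough}) satisfies the quadratic relation (\ref{delta}), is tangential, and conserves the quantity $|V_T|^2+I|\omega|^2$ of (\ref{main}):

```python
import numpy as np

r = 1.0
I = 2.0/5.0 * r**2                       # uniform ball, mass 1
u = np.array([0.0, 0.0, 1.0])            # unit vector from the contact point A to the center O
AO = r*u                                 # vector AO
Vb = np.array([0.3, -0.7, -1.2])         # arbitrary pre-collision linear velocity
wb = np.array([0.5, 0.1, -0.4])          # arbitrary pre-collision angular velocity

VN_b = np.dot(Vb, u)*u                   # normal component
VT_b = Vb - VN_b                         # tangential component

# no-slip (rough-ball) tangential impulse
dPT = -2.0*I/(r**2 + I) * (VT_b + np.cross(AO, wb))

VT_a = VT_b + dPT                        # tangential velocity after the collision
wa = wb + np.cross(dPT, AO)/I            # angular velocity after the collision

# residual of the quadratic relation <dPT, (r^2+I)/I dPT + 2 V_T^b + 2 AO x w^b> = 0
residual = np.dot(dPT, (r**2 + I)/I*dPT + 2*VT_b + 2*np.cross(AO, wb))
Eb = np.dot(VT_b, VT_b) + I*np.dot(wb, wb)   # tangential-plus-rotational energy before
Ea = np.dot(VT_a, VT_a) + I*np.dot(wa, wa)   # and after
```

Since $\Delta P_T\perp\overrightarrow{AO}$, the energy balance follows exactly from the vanishing of the residual.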
Hence, the eigenspace corresponding to the eigenvalue $1$ of the collision map $S$ is a $3$-dimensional space.\par Moreover, $(\vec{0},V_T^b,\omega^b)$ is an eigenvector of the collision map $S$ which corresponds to the eigenvalue $-1$ if $V_T^b\times\overrightarrow{AO}-I\omega^b=\vec{0}$. In this case, the solution set of the vector equation $V_T^b\times\overrightarrow{AO}-I\omega^b=\vec{0}$ is a two-dimensional space. This implies that the eigenspace corresponding to the eigenvalue $-1$ of the collision map $S$ is a $3$-dimensional space (recall that $(\overrightarrow{AO},\vec{0},\vec{0})$ is another eigenvector for the eigenvalue $-1$). \section*{Acknowledgements} The authors are grateful to R. Feres for helpful discussions.
\section{Introduction} There is currently great interest in new measures of quantum correlations for mixed states, different from the entanglement measures \cite{BDSW.96}. Quantum entanglement is essential for quantum teleportation \cite{Be.93,NC.00} and also for pure state quantum computation, where its increase with system size is necessary to achieve an exponential speedup over classical computation \cite{JL.03,Vi.03}. However, the computation model proposed by Knill and Laflamme \cite{KL.98} has shown that for mixed states, such a speedup can in principle be achieved without entanglement \cite{DFC.05}. This suggests the persistence of useful quantum correlations in some separable mixed states, which, we recall, are defined as convex mixtures of product states \cite{RF.89}. While a separable pure state is a product state, separable mixed states comprise product states, mixtures of commuting products and also mixtures of non-commuting product states. The latter can possess entangled eigenstates and give rise to non-classical capabilities. Consequently, measures such as the quantum discord \cite{OZ.01,HV.01,Ve.03,ZZ.03} have recently received much attention. While coinciding with the entanglement entropy for pure states, the quantum discord is non-zero for separable mixed states of the last type, vanishing just for classically or one-way classically correlated states, i.e., states diagonal in a standard or conditional product basis. The circuit of \cite{KL.98} was in fact shown in \cite{DSC.08} to exhibit a non-negligible discord. Other measures with similar properties include the one-way information deficit \cite{Ho.05,SKD.11}, the geometric discord \cite{DVB.10}, based on the standard Hilbert-Schmidt norm, and the general entropic measures which we defined in \cite{RCC.10}, based on generalized entropic forms. The latter contain the two previous measures as particular cases, embedding them in a unified picture.
Since they are applicable with any entropic form complying with minimum requirements, they offer, like the geometric discord, the possibility of easier evaluations, while at the same time allowing one to identify some universal features exhibited by all these measures \cite{RCC.10}. Related generalized measures vanishing just for fully classically correlated states, like those of \cite{MV.10} and \cite{SL.08}, were also considered \cite{RCC.10}. Let us remark that important quantum capabilities of separable states with non-zero discord, and hence non-zero values of the previous measures, were recently unveiled \cite{MD.11,CAB.11,SKD.11,PGA.11,RRV.11}. Other relevant properties of the quantum discord and its evaluation in specific states and scenarios were discussed in \cite{ShL.09,FA.10,FCOC.11,LBAW.08,DG.09,SL2.08,AR.10,SA.09, WSF.09,CRC.10,AD.11,YL.11}. The aim of this work is to analyze the explicit evaluation of the generalized measures of \cite{RCC.10} in some important general cases. We first provide in Sec.\ \ref{II} the general stationary condition that the least disturbing local measurement should satisfy, including conditions for its independence from the entropy employed (universality), together with its explicit form for general two-qubit states. Here we show that in addition to the quadratic case (geometric discord), the measure based on a cubic function of the density matrix (``cubic'' discord) can also be exactly evaluated for any state of two qubits. Moreover, for two-qubit states this measure has the same pure-state limit as the geometric discord, both being proportional to the square of the concurrence \cite{WW.98,Ca.03}. As specific applications, we provide in Sec.\ \ref{III} the general expression for two-qubit states with maximally mixed reduced states, valid for any entropic form, analyzing its main features.
We also examine their evaluation in the so-called $X$ states \cite{AR.10}, where explicit expressions for the quadratic and cubic cases are provided, and for the important case of a mixture of two aligned states \cite{CRC.10}, which represents in particular the exact state of a pair in the ground state of a finite $XY$ ferromagnetic spin $1/2$ chain in the vicinity of the factorizing field \cite{RCM.08}. Differences with the quantum discord, related in particular to the minimizing measurement, are also discussed. Conclusions are finally drawn in Sec.\ \ref{IV}. \section{Formalism\label{II}} \subsection{Information loss by unread local measurement\label{I}} Let us consider a bipartite system $A+B$ initially in a state $\rho_{AB}$. After an unread local von Neumann measurement in system $B$, defined by orthogonal one-dimensional projectors $P_j^B=I\otimes P_j$, with $P_j=|j_B\rangle\langle j_B|$ ($\sum_j P_j=I$, $P_j P_{j'}=\delta_{jj'}P_j$), the joint state becomes \begin{equation} \rho'_{AB}=\sum_j P_j^B\rho_{AB}P_j^B=\sum_j p_j \rho_{AB/j}\,, \label{rp}\end{equation} where $p_j={\rm Tr}\,\rho_{AB} P_j^B$ is the probability of outcome $j$ and $\rho_{AB/j}=P_j^B\rho_{AB}P_j^B/p_j$ the state after such outcome. The state (\ref{rp}) is just the diagonal of $\rho_{AB}$ in a conditional product basis formed by the states $|i_j j\rangle\equiv|i_{jA}\rangle|j_B\rangle$, with $|i_{jA}\rangle$ the eigenstates of $\rho_{A/j}={\rm Tr}_B\rho_{AB/j}$.
The loss of information due to such measurement, i.e., the information contained in the off-diagonal elements of the original $\rho_{AB}$ in the previous basis, can be quantified by the quantity \cite{RCC.10} \begin{equation} I_f^{M_B}(\rho_{AB})=S_f(\rho'_{AB})-S_f(\rho_{AB})\,, \label{IfM}\end{equation} where $S_f(\rho)$ denotes a generalized entropy of the form \begin{equation}S_f(\rho)={\rm Tr}\,f(\rho)\,,\label{Sf}\end{equation} with $f:[0,1]\rightarrow\Re$ a smooth strictly concave function ($f''(p)<0$ for $p\in(0,1)$) satisfying $f(0)=f(1)=0$ \cite{CR.02,WW.78}. This ensures $S_f(\rho)\geq 0$ for any state $\rho$, with $S_f(\rho)=0$ if and only if $\rho$ is a pure state ($\rho^2=\rho$), and $S_f(\rho)$ maximum, at fixed dimension $n$, for the maximally mixed state $\rho=I/n$. Eq.\ (\ref{IfM}) is then {\it non-negative} for any $S_f$ of the previous form, vanishing only if the original $\rho_{AB}$ remains unchanged by such measurement. This positivity follows from the majorization relation \cite{NC.00,WW.78,Bha.97} $\rho'_{AB}\prec \rho_{AB}$ ($\rho'_{AB}$ more mixed than $\rho_{AB}$) satisfied by the post-measurement state, which implies $S_f(\rho'_{AB})\geq S_f(\rho_{AB})$ for all such $S_f$ \cite{RCC.10}. Moreover, the previous entropic inequality implies in fact majorization when valid for {\it all} $S_f$ of the previous form \cite{RC.03}. The minimum of $I_f^{\rm M_B}$ among all local measurements, \begin{equation} I_f^B(\rho_{AB})=\mathop{\rm Min}_{M_B}I_f^{M_B}(\rho_{AB})\,, \label{If}\end{equation} provides a measure of the quantum correlations between $A$ and $B$ present in the original state and destroyed by local measurement \cite{RCC.10}. It vanishes only if $\rho_{AB}$ is already of the ``classical'' (with respect to $B$) form (\ref{rp}). For such states there is an unread local measurement in $B$ ($M_B$) which leaves the state invariant. Eq.\ (\ref{If}) is obviously not affected by local unitary transformations. 
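As an illustration of Eqs.\ (\ref{rp})--(\ref{If}) (a numerical sketch added here, using an arbitrarily chosen Werner-type state and the von Neumann entropy; not part of the original text), the following Python/NumPy code computes the information loss of an unread local measurement and checks that it is non-negative and vanishes for a state already of the classical form (\ref{rp}):

```python
import numpy as np

def vn_entropy(rho):
    # S(rho) = -Tr rho log2 rho
    p = np.linalg.eigvalsh(rho)
    p = p[p > 1e-12]
    return -np.sum(p*np.log2(p))

def measure_B(rho, basis):
    # unread projective measurement on qubit B; columns of `basis` are the states |j_B>
    out = np.zeros_like(rho)
    for j in range(basis.shape[1]):
        v = basis[:, j]
        P = np.kron(np.eye(2), np.outer(v, v.conj()))
        out = out + P @ rho @ P
    return out

# Werner-type mixture of a Bell state with the maximally mixed state
bell = np.zeros(4)
bell[0] = bell[3] = 1/np.sqrt(2)
x = 0.6
rho = x*np.outer(bell, bell) + (1 - x)*np.eye(4)/4

basis_z = np.eye(2)                      # measure B in the computational basis
rho_p = measure_B(rho, basis_z)
I_loss = vn_entropy(rho_p) - vn_entropy(rho)     # information loss, >= 0

# rho_p is already classically correlated: a second measurement costs nothing
I_again = vn_entropy(measure_B(rho_p, basis_z)) - vn_entropy(rho_p)
```

The non-negativity of `I_loss` reflects the majorization relation $\rho'_{AB}\prec\rho_{AB}$ quoted above; it holds for any entropy $S_f$, not only the von Neumann one used here.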
In the case of pure states ($\rho_{AB}^2=\rho_{AB}$), it can be shown that (\ref{If}) becomes the generalized entanglement entropy \begin{equation}I_f^{B}(\rho_{AB})=E_f(A,B)\equiv S_f(\rho_A)=S_f(\rho_B), \end{equation} where $\rho_A={\rm Tr}_B\,\rho_{AB}$ and $\rho_B$ are the reduced states of each subsystem \cite{RCC.10}. Hence, pure state entanglement can be seen as the minimum information loss due to a local measurement. In this case $I_f^B(\rho_{AB})=I_f^A(\rho_{AB})$, an identity which does not hold in general for mixed states. In the von Neumann case $S_f(\rho)=S(\rho)\equiv-{\rm Tr}\rho\log\rho$, Eq.\ (\ref{IfM}) can also be written as \cite{RCC.10} \begin{equation} I^{M_B}(\rho_{AB})=S(\rho'_{AB})-S(\rho_{AB})=S(\rho_{AB}||\rho'_{AB}) \label{IS}\,,\end{equation} where $S(\rho||\rho')=-{\rm Tr}\,\rho(\log\rho'-\log\rho)$ is the {\it relative} entropy \cite{NC.00,WW.78,Ve.02} (a non-negative quantity), since $\rho'_{AB}$ is the diagonal of $\rho_{AB}$ in a certain basis. The minimum $I^B$ of Eq.\ (\ref{IS}) coincides with the one-way information deficit \cite{Ho.05,SKD.11} and also with one of the measures discussed in \cite{MV.10}. In the case of pure states, $I^B$ reduces to the standard entanglement entropy $E(A,B)=S(\rho_A)=S(\rho_B)$. In the case of the so-called linear entropy \begin{equation}S_2(\rho)=2(1-{\rm Tr}\,\rho^2)\,,\label{S2}\end{equation} which is a quadratic function of $\rho$ and corresponds to $f(\rho)=2\rho(1-\rho)$ in (\ref{Sf}), Eq.\ (\ref{IfM}) can be written as \cite{RCC.10} \begin{equation} I_2^{M_B}(\rho_{AB})=S_2(\rho'_{AB})-S_2(\rho_{AB}) =2||\rho'_{AB}-\rho_{AB}||^2\,,\label{I2}\end{equation} where $||O||^2={\rm Tr}\,O^\dagger O$ is the squared Hilbert-Schmidt norm.
The ensuing minimum (\ref{If}), to be denoted here as $I_2^B$, then becomes equivalent \cite{RCC.10} to the geometric discord of ref.\ \cite{DVB.10}, defined as the minimum Hilbert-Schmidt distance between $\rho_{AB}$ and any classically correlated state of the form (\ref{rp}). In the case of pure states, $I_2^B$ reduces to the square of the pure state concurrence (i.e., the tangle), defined as $C_{AB}=\sqrt{2(1-{\rm Tr}\,\rho_A^2)}$ \cite{Ca.03}. In the same way we may define the $q$ information loss \begin{eqnarray} I_q^{M_B}(\rho_{AB})&=&S_q(\rho'_{AB})-S_q(\rho_{AB})\,,\label{Iq}\\ S_q(\rho)&=&(1-{\rm Tr}\rho^q)/(1-2^{1-q})\,,\;q>0\,,\label{Sq} \end{eqnarray} where $S_q(\rho)$ is the so-called Tsallis entropy \cite{TS.88}, which corresponds to $f(\rho)=(\rho-\rho^q)/(1-2^{1-q})$ in (\ref{Sf}) and is a function of the Renyi entropy. Eq.\ (\ref{Sq}) reduces to the linear entropy (\ref{S2}) for $q=2$ and to the von Neumann entropy for $q\rightarrow 1$, with $\log=\log_2$ for the present normalization (chosen such that $S_q(\rho)=1$ for a maximally mixed single qubit state, i.e., $2f(1/2)=1$). Eq.\ (\ref{Iq}) in particular allows one to switch continuously from the von Neumann case (\ref{IS}) to the quadratic case (\ref{S2}). On the other hand, the original quantum discord \cite{OZ.01, HV.01,Ve.03,ZZ.03} is based on the von Neumann entropy and can be written (considering von Neumann measurements) as \begin{equation} D^B(\rho_{AB})=\mathop{\rm Min}_{M_B}[I^{M_B}(\rho_{AB})-I^{M_B}(\rho_B)]\,. \label{D}\end{equation} It contains an additional term $I^{M_B}(\rho_B)=S(\rho'_B)-S(\rho_B)$ related to the local information loss and was actually defined in \cite{OZ.01} as the minimum difference between the initial mutual information \begin{equation} I(A:B)=S(\rho_{A}\otimes\rho_B)-S(\rho_{AB})\,, \label{Smut}\end{equation} where $S(\rho_A\otimes\rho_B)=S(\rho_A)+S(\rho_B)$, and that after the local measurement, $I^{M_B}(A:B)=S(\rho'_A)+S(\rho'_B)-S(\rho'_{AB})$.
Since $\rho'_A=\rho_A$, such difference reduces to Eq.\ (\ref{D}). The information loss (\ref{IfM}) can be regarded in fact as a type of generalized mutual information. Eq.\ (\ref{Smut}) is a measure of the total correlations between $A$ and $B$ in the original state, absent in the product state $\rho_A\otimes\rho_B$. The latter is the state which {\it maximizes} the von Neumann entropy subject to the constraint of providing just all {\it local} averages $\langle O\otimes I\rangle$ and $\langle I\otimes O\rangle$, i.e., the correct reduced states $\rho_{A}$ and $\rho_{B}$. This is in fact what is expressed by the positivity of Eq.\ (\ref{Smut}): Any other state $\rho_{AB}$ with the same local reduced states has a smaller entropy. On the other hand, the post-measurement state (\ref{rp}) can be seen as the {\it most mixed} state providing the same averages as $\rho_{AB}$ of all observables of the form $\sum_j \alpha_j O_j\otimes P_j$, diagonal in the local basis defined by $M_B$ (as ${\rm Tr}\,\rho_{AB}\, O\otimes P_j={\rm Tr}\,\rho'_{AB}\,O\otimes P_j$), such that $S_f(\rho'_{AB})\geq S_f(\rho_{AB})$ $\forall$ $S_f$. The difference $I_f^{M_B}$ is then a measure of the correlations $\langle O\otimes |j_B\rangle\langle k_B|\rangle$, $k\neq j$, contained in the original state $\rho_{AB}$ and absent in $\rho'_{AB}$. In particular, if $M_B$ is a measurement in a basis where $\rho_B$ is diagonal, $\rho'_{AB}$ reproduces not only $\rho_A$ ($\rho'_A={\rm Tr}_B\,\rho'_{AB}=\rho_A$ $\forall$ $M_B$) but also $\rho_B$ ($ \rho'_B={\rm Tr}_A\,\rho'_{AB}=\rho_B$ for this measurement), as well as all averages $\langle O\otimes P_j\rangle$, being the most mixed state with such a property. Notice that in contrast with $\rho'_{AB}$, the state $\rho_{A}\otimes \rho_B$ is in general not more mixed than the original state ($\rho_A\otimes\rho_B\prec\!\!\!\!\!/\;\rho_{AB}$), so that the positivity of Eq.\ (\ref{Smut}) cannot be extended to a general entropy.
\subsection{General stationary condition} Let us now derive the equations determining the least disturbing local measurement defined by Eq.\ (\ref{If}). \\ {\it Theorem 1.} For a given entropic function $f$, the least disturbing local measurement satisfies the equation \begin{equation} {\rm Tr}_A[f'(\rho'_{AB}),\rho_{AB}]=0\label{st}\,,\end{equation} where $f'$ is the derivative of $f$ and $\rho'_{AB}$ the post-measurement state (\ref{rp}). \\ {\it Proof}: The generalized entropy of the state (\ref{rp}) is \begin{equation} S_f(\rho'_{AB})=\sum_{i,j}f({p}^i_j)\,,\;\; {p}^i_j= \langle i_j j|\rho_{AB}|i_j j\rangle\,,\end{equation} where $\langle i_j j|\rho_{AB}|k_j j\rangle=\delta_{ik} p^i_j$. Considering a small unitary variation of the local measurement basis, such that $\delta|j_B\rangle=(e^{i\delta h}-1)|j_B\rangle\approx i\delta h|j_B\rangle$, with $\delta h$ a small local hermitian operator, we have $\delta p^i_j\approx i\langle i_{j}j|[\rho_{AB},\delta h_B]|i_{j}j\rangle$ up to first order in $\delta h$, with $\delta h_B=I\otimes \delta h$. Hence, \begin{eqnarray}\delta I_f^{M_B}&=& \sum_{i,j}f'(p^i_j)\delta p^i_j=i{\rm Tr}\,[f'(\rho'_{AB}),\rho_{AB}]\delta h_B\nonumber\\ &=&i{\rm Tr}_B\,({\rm Tr}_A[f'(\rho'_{AB}),\rho_{AB}])\delta h\,. \nonumber\end{eqnarray} The condition $\delta I_f^{M_B}=0$ $\forall$ $\delta h$ then leads to Eq.\ (\ref{st}). Eq.\ (\ref{st}) implies explicitly $\sum_{i} f'(p^i_j)\langle i_j j|\rho_{AB}|i_j k\rangle=\sum_i f'(p^i_k)\langle i_k j|\rho_{AB}|i_k k\rangle$ $\forall$ $k,j$, and determines a certain set of feasible local bases $\{|j_B\rangle\}$. Note that the states $|i_j\rangle$ of $A$ depend in general on $j$. The minimizing local basis $\{|j_B\rangle\}$ will not diagonalize, in general, the reduced state $\rho_B$.
Nonetheless, Eq.\ (\ref{st}) entails that the local eigenstates can be optimum in some important situations: If in a standard product basis $\{|ij\rangle=|i_A\rangle|j_B\rangle\}$ formed by eigenstates of $\rho_A$ and $\rho_B$ the only off-diagonal elements of $\rho_{AB}$ are $\langle ij|\rho_{AB}|kl\rangle$ with $i\neq k$ {\it and} $j\neq l$, such that \begin{equation} \langle ij|\rho_{AB}|ik\rangle=\delta_{jk}{p}^i_j\,,\;\; \langle ij|\rho_{AB}|lj\rangle=\delta_{il}{p}^i_j \,, \label{d}\end{equation} Eq.\ (\ref{st}) is trivially satisfied {\it $\forall$ $S_f$} for a measurement in the basis $\{|j_B\rangle\}$. Such a basis would then provide {\it a universal stationary point of $I_f^B$}. This is precisely the case of a pure state, written in the Schmidt basis as $|\Psi_{AB}\rangle=\sum_k\sqrt{p_k}|k_A k_B\rangle$, and also of a mixture of $|\Psi_{AB}\rangle$ with the maximally mixed state, \[\rho_{AB}=x|\Psi_{AB}\rangle\langle\Psi_{AB}|+ {\textstyle\frac{1-x}{n}}I\,,\;x\in[0,1]\,,\] where Eqs.\ (\ref{d}) and hence (\ref{st}) will be satisfied $\forall$ $f$ for a measurement in the basis $\{|k_B\rangle\}$. It was shown in \cite{RCC.10} that such a basis provides {\it the universal least disturbing local measurement} for these states, minimizing $I_f^{M_B}$ $\forall$ $S_f$. In the case of the linear entropy, $f'(\rho'_{AB})\propto I-2\rho'_{AB}$ and Eq.\ (\ref{st}) becomes just ${\rm Tr}_A[\rho'_{AB},\rho_{AB}]=0$, indicating that the post-measurement state $\rho'_{AB}$ should locally (in $B$) commute with the original state. In the case of the original discord (\ref{D}), the additional local term leads, upon variation, to the modified equation \begin{equation} {\rm Tr}_A[f'(\rho'_{AB}),\rho_{AB}] -[f'(\rho'_B),\rho_B]=0\label{stD}\,,\end{equation} where here $f'(\rho)$ can be replaced by $-\log\rho$. \subsection{The two-qubit case} Let us now examine in detail the case of two qubits.
Any state of a two qubit system can be written as \begin{eqnarray}\rho_{AB}&=&\frac{1}{4}(I+\bm{r}_A\cdot \bm{\sigma}_A+ \bm{r}_B\cdot \bm{\sigma}_B+\bm{\sigma}_A^t J\bm{\sigma}_B)\,, \label{1}\end{eqnarray} where $\bm{\sigma}_A\equiv \bm{\sigma}\otimes I$, $\bm{\sigma}_B\equiv I\otimes \bm{\sigma}$, with $\bm{\sigma}^t=(\sigma_x,\sigma_y,\sigma_z)$ the Pauli operators and $I$ the identity (in the corresponding space). The basic traces ${\rm tr}\,\sigma_\mu=0$, ${\rm tr}\,\sigma_\mu\sigma_\nu=2\delta_{\mu\nu}$ for $\mu,\nu=x,y,z$, ensure that \[\bm{r}_A=\langle \bm{\sigma}_A\rangle\,,\;\bm{r}_B= \langle \bm{\sigma}_B\rangle\,,\;J=\langle \bm{\sigma}_A\bm{\sigma}_B^t\rangle\,,\] i.e., $J_{\mu\nu}=\langle \sigma_{A\mu}\,\sigma_{B\nu}\rangle$, where $\langle O\rangle={\rm Tr}\,\rho_{AB}\,O$. Any complete local projective measurement in $B$ can be considered as a spin measurement along the direction of a unit vector $\bm{k}$, represented by the orthogonal projectors $P_{\pm\bm{k}}=\half(I\pm\bm{k}\cdot\bm{\sigma})$. This leaves just those elements of $\rho_{AB}$ proportional to $\bm{k}\cdot\bm{\sigma}$, leading to the post-measurement state \begin{equation} \rho'_{AB} =\frac{1}{4}[I+\bm{r}_A\cdot \bm{\sigma}_A+ (\bm{r}_B\cdot \bm{k})\bm{k}\cdot\bm{\sigma}_B+(\bm{\sigma}_A^t J\bm{k})(\bm{k}\cdot\bm{\sigma}_B)]\,,\label{rhok} \end{equation} which corresponds to $\bm{r}_B\rightarrow \bm{k}\bm{k}^t\bm{r}_B$ and $J\rightarrow J\bm{k}\bm{k}^t$ in (\ref{1}). The information loss due to this measurement will be denoted as $I_f^{\bm k}\equiv S_f(\rho'_{AB})-S_f(\rho_{AB})$. 
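The map $\bm{r}_B\rightarrow \bm{k}\bm{k}^t\bm{r}_B$, $J\rightarrow J\bm{k}\bm{k}^t$ can be verified numerically. The Python/NumPy sketch below (with illustrative parameters only, added here as a check rather than as part of the derivation) builds $\rho_{AB}$ from (\ref{1}), applies the projectors $P_{\pm\bm{k}}$, and compares the result with the predicted Bloch form (\ref{rhok}):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sig = [sx, sy, sz]
I2 = np.eye(2, dtype=complex)

def bloch_state(rA, rB, J):
    # rho = (I + rA.sigma_A + rB.sigma_B + sigma_A^t J sigma_B)/4
    rho = np.kron(I2, I2)
    for m in range(3):
        rho = rho + rA[m]*np.kron(sig[m], I2) + rB[m]*np.kron(I2, sig[m])
        for n in range(3):
            rho = rho + J[m, n]*np.kron(sig[m], sig[n])
    return rho/4

rA = np.array([0.1, 0.0, 0.2])       # illustrative Bloch vectors
rB = np.array([0.0, 0.1, 0.1])
J = np.diag([0.2, -0.15, 0.15])      # illustrative correlation matrix
rho = bloch_state(rA, rB, J)

k = np.array([0.0, 0.6, 0.8])        # measurement direction, |k| = 1
ks = k[0]*sx + k[1]*sy + k[2]*sz
projs = [(I2 + ks)/2, (I2 - ks)/2]   # projectors onto spin +/- k of qubit B
rho_meas = sum(np.kron(I2, P) @ rho @ np.kron(I2, P) for P in projs)

# predicted post-measurement state: r_B -> k k^t r_B, J -> J k k^t
rho_pred = bloch_state(rA, np.outer(k, k) @ rB, J @ np.outer(k, k))
match = np.allclose(rho_meas, rho_pred)
```

The agreement is exact (up to rounding), since the identity $P_{\bm{k}}(\bm{s}\cdot\bm{\sigma})P_{\bm{k}}+P_{-\bm{k}}(\bm{s}\cdot\bm{\sigma})P_{-\bm{k}}=(\bm{s}\cdot\bm{k})\,\bm{k}\cdot\bm{\sigma}$ holds for any vector $\bm{s}$.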
We now show that {\it the general stationary condition for the measurement direction $\bm{k}$ in $B$ reads} \begin{equation} \alpha_1\bm{r}_B+\alpha_2 J^t\bm{r}_A+\alpha_3 J^tJ\bm{k}=\lambda\bm{k}\,, \label{lam}\end{equation} i.e., $\bm{k}\times(\alpha_1 \bm{r}_B+\alpha_2 J^t\bm{r}_A+\alpha_3 J^tJ\bm{k})=\bm{0}$, where $\lambda$ is a proportionality factor and the coefficients $\alpha_i$ are given by \begin{equation}(\alpha_1,\alpha_2,\alpha_3)={\textstyle\frac{1}{4} \sum\limits_{\mu,\nu=\pm 1} f'({p}^\mu_\nu)(\nu,\frac{\nu\mu} {|\bm{r}_A+\nu J\bm{k}|},\frac{\mu}{|\bm{r}_A+\nu J\bm{k}|})}\,, \label{al}\end{equation} with $p^\mu_\nu$ ($\mu,\nu=\pm 1$) the eigenvalues of (\ref{rhok}): \begin{equation} {p}^\mu_\nu={\textstyle\frac{1}{4}} (1+\nu \bm{r}_B\cdot\bm{k}+\mu|\bm{r}_A+\nu J\bm{k}|)\,.\label{la} \end{equation} {\it Proof:} The state (\ref{rhok}) is diagonal in the conditional product basis formed by the eigenstates of $\bm{k}\cdot\bm{\sigma}_B$ and $(\bm{r}_A+\nu J\bm{k})\cdot\bm{\sigma}_A$, with $\nu=\pm 1$ the eigenvalues of $\bm{k}\cdot\bm{\sigma}_B$, which leads to the eigenvalues (\ref{la}). We can then write \[f'(\rho'_{AB})={\textstyle\frac{1}{4}\sum\limits_{\nu,\mu}f'({p}^\mu_\nu) (I+\mu\frac{\bm{r}_A+\nu J\bm{k}}{|\bm{r}_A+\nu J\bm{k}|}\cdot\bm{\sigma}_A)(I+\nu\bm{k}\cdot\bm{\sigma}_B)}\,.\] Using now the basic trace relations and $[\bm{r}\cdot\bm{\sigma},\bm{s}\cdot\bm{\sigma}]= 2i(\bm{r}\times\bm{s})\cdot\bm{\sigma}$, we obtain ${\rm Tr}_A\,[(\bm{r}\cdot\bm{\sigma}_A) (\bm{s}\cdot\bm{\sigma}_B),\bm{\sigma}_A^tJ\bm{\sigma}_B]= 4i(\bm{s}\times J^t\bm{r})\cdot\bm{\sigma}_B$ and hence \[{\rm Tr}_A\,[f'(\rho'_{AB}),\rho_{AB}]={\textstyle}i[\bm{k}\times(\alpha_1\bm{r}_B +\alpha_2 J^t\bm{r}_A+\alpha_3 J^tJ\bm{k})]\cdot\bm{\sigma}_B\,,\] with $\alpha_i$ given by (\ref{al}). Eq.\ (\ref{st}) leads then to Eq.\ (\ref{lam}). We can also check Eq.\ (\ref{lam}) directly. 
From (\ref{la}), we have $\delta {p}^\mu_\nu= \frac{\nu}{4}(\bm{r}_B+\mu\frac{J^t(\bm{r}_A+\nu J\bm{k})}{|\bm{r}_A+\nu J\bm{k}|})\cdot\delta\bm{k}$ for changes $\delta\bm{k}$ in the direction of the local measurement apparatus, with $\bm{k}\cdot\delta\bm{k}=0$ since $\bm{k}$ is a unit vector. The condition $\delta I_f^{\bm k}=\sum_{\nu,\mu}f'({p}^\mu_\nu)\delta {p}^\mu_\nu=0$ then implies $(\alpha_1\bm{r}_B+\alpha_2 J^t\bm{r}_A+\alpha_3 J^tJ\bm{k})\cdot\delta\bm{k}=0$, which leads to Eq.\ (\ref{lam}) since $\delta\bm{k}$ is orthogonal to $\bm{k}$. Writing $\bm{k}=(\sin\gamma\cos\phi,\sin\gamma\sin\phi,\cos\gamma)$, Eq.\ (\ref{lam}) leads to a transcendental system for $\gamma,\phi$ ($\tan\gamma=d_z/\sqrt{d_x^2+d_y^2}$, $\tan\phi=d_y/d_x$, with $\bm{d}$ the l.h.s.\ of (\ref{lam})). Eq.\ (\ref{lam}) can also be seen as a self-consistent eigenvalue equation for the matrix $(\alpha_1\bm{r}_B+\alpha_2 J^t\bm{r}_A)\bm{k}^t+\alpha_3 J^tJ$. Let us remark that the initial reduced local state $\rho_B={\rm Tr}_A\, \rho_{AB}={\textstyle\frac{1}{2}} (I+\bm{r}_B\cdot\bm{\sigma})$ becomes \begin{equation} \rho'_B={\textstyle\frac{1}{2}}[I+(\bm{r}_B\cdot\bm{k}) (\bm{k}\cdot\bm{\sigma})]\,, \label{rbk}\end{equation} after the local measurement. The minimizing direction $\bm{k}$ will depend on the matrix $J$ and may obviously deviate from $\bm{r}_B$, changing the local state. A ``transition'' in the direction of the least disturbing $\bm{k}$, from $\bm{r}_B$ to the direction of the main eigenvector of $J^tJ$, can then be expected from (\ref{lam}) as $J$ increases from $0$, whose details will in general depend on the choice of entropy (see sec.\ \ref{III}).
In the case of the original quantum discord (\ref{D}), the extra local contribution in (\ref{stD}) leads to the modified stationary condition (see also \cite{AD.11}) \begin{eqnarray} (\alpha_1-\eta)\bm{r}_B+\alpha_2 J^t\bm{r}_A+\alpha_3 J^tJ\bm{k} =\lambda\bm{k}\,,\label{eqmod} \end{eqnarray} where $\eta=\frac{1}{2}\sum_{\nu=\pm}\nu f'(p_\nu)=\half\log(p_-/p_+)$, with $p_\nu=\sum_\mu {p}^\mu_\nu=\frac{1}{2}(1+\nu\bm r_B\cdot\bm k)$ the eigenvalues of $\rho'_B$. The extra term $-\eta\bm{r}_B$ will tend to diminish the effect of $\bm{r}_B$, favoring the direction determined by $J^tJ$. \subsection{The quadratic and cubic information measures} While the evaluation of a general entropy $S_f(\rho)$ requires the determination of the eigenvalues of $\rho$, for those choices of $f$ involving just low integer powers of $\rho$, $S_f(\rho)$ can be determined without their explicit knowledge. For instance, using just the basic trace relations ${\rm tr}\,\sigma_\mu=0$, ${\rm tr}\,\sigma_\mu\sigma_\nu=2 \delta_{\mu\nu}$, the linear entropy (\ref{S2}) of any two-qubit state can be evaluated as \begin{equation} S_2(\rho_{AB})= {\textstyle\frac{3}{2}}-{\textstyle\frac{1}{2}}(|\bm{r}_A|^2+|\bm{r}_B|^2+||J||^2)\,, \label{S22}\end{equation} where $||J||^2={\rm tr}\,J^tJ$ and $|\bm{r}|^2=\bm{r}\cdot\bm{r}=\bm{r}^t\bm{r}$. For the post-measurement state (\ref{rhok}), Eq.\ (\ref{S22}) becomes \begin{eqnarray}S_2(\rho_{AB}')&=& {\textstyle\frac{3}{2}}- {\textstyle\frac{1}{2}}|\bm{r}_A|^2 -{\textstyle\frac{1}{2}}\bm{k}^tM_2\bm{k},\label{S22p}\\ M_2&=&\bm{r}_B\bm{r}_B^t+J^tJ\,, \label{M2}\end{eqnarray} where $M_2$ is a positive semidefinite symmetric matrix. The information loss therefore becomes \begin{equation} I_2^{\bm{k}}={\textstyle\frac{1}{2}}(|\bm{r}_B|^2+||J||^2 -\bm{k}^tM_2\bm{k})= {\textstyle\frac{1}{2}}({\rm tr}\,M_2-\bm{k}^tM_2\bm{k})\,. \end{equation} The minimum $I_2^{\bm{k}}$ is just twice the {\it geometric discord}, defined and evaluated for two qubits in \cite{DVB.10}.
It corresponds then to $\bm{k}$ directed along the eigenvector with the {\it largest} eigenvalue of the matrix $M_2$: \begin{eqnarray} I_2^B(\rho_{AB})&=&\mathop{\rm Min}_{\bm k}\, I_2^{\bm k}={\textstyle\frac{1}{2}}({\rm tr}\,M_2-\lambda_1) ={\textstyle\frac{1}{2}}(\lambda_2+\lambda_3) \label{I22}\end{eqnarray} where $(\lambda_1,\lambda_2,\lambda_3)$ are the eigenvalues of $M_2$ sorted in {\it decreasing} order. A state $\rho_{AB}$ which is already of the form (\ref{rhok}) leads to $I_f^B(\rho_{AB})=0$ $\forall$ $S_f$ and is then characterized by a matrix $M_2$ of rank 1 (such that $\lambda_2=\lambda_3=0$). It is verified that for $f'({p}^\mu_\nu)\propto 1-2{p}^\mu_\nu$, Eq.\ (\ref{lam}) reduces to the present eigenvalue equation $M_2\bm{k}=\lambda \bm{k}$, since $(\alpha_1,\alpha_2,\alpha_3)\propto (\bm{r}_B\cdot\bm{k},0,1)$. Another entropy which can be easily evaluated for any state of two qubits is the $q=3$ case in (\ref{Sq}), \begin{equation} S_3(\rho)={\textstyle\frac{4}{3}}(1-{\rm Tr}\,\rho^3)\,. \label{S3}\end{equation} {\it Theorem 2}. The entropy (\ref{S3}) of the general two qubit state (\ref{1}), and the ensuing minimum information loss $I_3^B(\rho_{AB})$ due to a local measurement in $B$, are given by \begin{eqnarray} S_3(\rho_{AB}) &=&{\textstyle\frac{1}{2}}[S_2(\rho_{AB})+1-(\bm{r}_A^t\,J\,\bm{r}_B-{\rm det}\,J)] \,,\label{S33}\\ I_3^B(\rho_{AB})&=&\mathop{\rm Min}_{\bm k}\,I_3^{\bm k} ={\textstyle\frac{1}{4}}({\rm tr}\,M_3-2\,{\rm det}\,J-\lambda_1)\nonumber\\ &=&{\textstyle\frac{1}{4}}(\lambda_2+\lambda_3)- {\textstyle\frac{1}{2}}\, {\rm det}\,J \label{I33}\,, \end{eqnarray} where $S_2(\rho_{AB})$ is the entropy (\ref{S22}) and $(\lambda_1,\lambda_2,\lambda_3)$ are the eigenvalues, sorted in decreasing order, of the matrix \begin{equation} M_3=\bm{r}_B\bm{r}_B^t+J^tJ+\bm{r}_B\bm{r}_A^tJ+J^t\bm{r}_A\bm{r}_B^t\,, \label{M3}\end{equation} which is positive semidefinite. 
\\ {\it Proof}: Applying the basic trace relations together with ${\rm tr}\,\sigma_\mu\sigma_\nu \sigma_\tau=2i\epsilon_{\mu\nu\tau}$, with $\epsilon$ the full antisymmetric tensor ($\mu,\nu,\tau\in\{x,y,z\}$), the only terms with non-zero trace in $\rho^3$ are ${\rm Tr}(\bm{r}_A^t{\sigma}_A)(\bm{\sigma}_A^tJ\bm{\sigma_B}) (\bm{r}_B^t\bm{\sigma}_B)=4\bm{r}_A^tJ\bm{r}_B$ (and the same for its 3! permutations), ${\rm Tr}(\bm{\sigma}_A^t J \bm{\sigma}_B)^3=3!(2i)^2 {\rm det}\,J$ and the quadratic terms appearing already in ${\rm Tr}\,\rho^2$. This leads to Eq.\ (\ref{S33}). Using Eq.\ (\ref{S33}), the cubic entropy of the post-measurement state (\ref{rhok}) can be expressed as \begin{equation} S_3(\rho'_{AB})={\textstyle\frac{5}{4}}-{\textstyle\frac{1}{4}}(|\bm{r}_A|^2 +\bm{k}^tM_3\bm{k})\,,\label{S33p}\end{equation} where $M_3$ is the matrix (\ref{M3}), since $\bm{r}_A^tJ\bm{r}_B={\rm tr}\,\bm{r}_B\bm{r}_A^tJ={\rm tr}\,J^t\bm{r}_A\bm{r}_B^t$ and ${\rm det} (J\bm{k}\bm{k}^t)=0$. The matrix $M_3$ is clearly symmetric and also positive semidefinite, as $\bm{k}^t M_3\bm{k}\geq (|\bm{k}\cdot\bm{r}_B|-|J\bm{k}|)^2\geq 0$ $\forall$ $\bm{k}$ if $|\bm{r}_A|\leq 1$. The information loss $I_3^{\bm k}=S_3(\rho'_{AB})-S_3(\rho_{AB})$ is therefore \begin{equation} I_3^{\bm{k}}={\textstyle\frac{1}{4}}({\rm tr}\,M_3-2\, {\rm det}\,J-\bm{k}^t M_3\bm{k})\,,\end{equation} where ${\rm tr}\,M_3=|\bm r_B|^2+||J||^2+2\bm{r}_A^tJ\bm{r}_B$. Its minimum then corresponds to $\bm{k}$ along the eigenvector with the {\it largest eigenvalue of $M_3$}, which leads to Eq.\ (\ref{I33}). It is also verified that Eq.\ (\ref{lam}) leads in the present case to the same eigenvalue equation $M_3\bm{k}=\lambda\bm{k}$, since $(\alpha_1,\alpha_2,\alpha_3) \propto(\bm{r}_B^t\bm{k}+\bm{r}_A^tJ\bm{k},\bm{r}_B^t\bm{k},1)$ for $f'(p^\mu_\nu)\propto 1-3(p^\mu_\nu)^2$. As opposed to $I_2^{\bm{k}}$, the minimizing measurement can now depend also on $\bm{r}_A$ through the last terms of $M_3$.
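The closed forms for $S_2$, $S_3$ and the ensuing minima $I_2^B$, $I_3^B$ can be confronted with a direct numerical minimization over measurement directions. The following sketch (with illustrative parameters; the brute-force search over random unit vectors is only an approximate check, not part of the proof) implements Eqs.\ (\ref{S22}), (\ref{S33}), (\ref{I22}) and (\ref{I33}):

```python
import numpy as np

sig = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]]),
       np.array([[1, 0], [0, -1]], dtype=complex)]
I2 = np.eye(2, dtype=complex)

def bloch_state(rA, rB, J):
    rho = np.kron(I2, I2)
    for m in range(3):
        rho = rho + rA[m]*np.kron(sig[m], I2) + rB[m]*np.kron(I2, sig[m])
        for n in range(3):
            rho = rho + J[m, n]*np.kron(sig[m], sig[n])
    return rho/4

def measured(rho, k):
    # unread measurement of qubit B along the unit vector k
    kdot = k[0]*sig[0] + k[1]*sig[1] + k[2]*sig[2]
    return sum(np.kron(I2, (I2 + s*kdot)/2) @ rho @ np.kron(I2, (I2 + s*kdot)/2)
               for s in (1, -1))

rA = np.array([0.1, 0.0, 0.2])       # illustrative parameters
rB = np.array([0.0, 0.1, 0.1])
J = np.diag([0.2, -0.15, 0.15])
rho = bloch_state(rA, rB, J)

S2 = lambda r: 2*(1 - np.trace(r @ r).real)        # linear entropy
S3 = lambda r: 4/3*(1 - np.trace(r @ r @ r).real)  # q = 3 Tsallis entropy

S2_formula = 1.5 - 0.5*(rA@rA + rB@rB + np.sum(J*J))
S3_formula = 0.5*(S2_formula + 1 - (rA @ J @ rB - np.linalg.det(J)))

M2 = np.outer(rB, rB) + J.T @ J
M3 = M2 + np.outer(rB, rA) @ J + J.T @ np.outer(rA, rB)
l2 = np.sort(np.linalg.eigvalsh(M2))[::-1]
l3 = np.sort(np.linalg.eigvalsh(M3))[::-1]
I2B = 0.5*(l2[1] + l2[2])                          # twice the geometric discord
I3B = 0.25*(l3[1] + l3[2]) - 0.5*np.linalg.det(J)  # cubic discord

# approximate brute-force minimization over random measurement directions
rng = np.random.default_rng(0)
dirs = rng.normal(size=(4000, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
I2_grid = min(S2(measured(rho, k)) - S2(rho) for k in dirs)
I3_grid = min(S3(measured(rho, k)) - S3(rho) for k in dirs)
```

The grid search is only a consistency check; the eigenvalue formulas give the exact minima.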
A state of the form (\ref{rhok}) is then characterized by matrices $M_3$ and $J$ of rank $1$, such that Eq.\ (\ref{I33}) vanishes. Let us notice that under arbitrary local rotations $\bm{\sigma}_\alpha\rightarrow R_\alpha\bm{\sigma}_\alpha$ for $\alpha=A,B$ ($R_\alpha R_\alpha^t=I$, ${\rm det}\,R_\alpha=+1$), we have $\bm{r}_\alpha\rightarrow R_\alpha^t\bm{r}_\alpha$ and $J\rightarrow R_A^t J R_B$ in (\ref{1}), such that $M_2\rightarrow R_B^t M_2R_B$ and $M_3\rightarrow R_B^t M_3R_B$. Their eigenvalues remain therefore invariant. Of course, ${\rm det}\,J$ and all other terms in Eqs.\ (\ref{S22}) and (\ref{S33}) also remain unaltered. Eqs.\ (\ref{S22}) and (\ref{S33}) provide in fact strict bounds on these invariants. As $S_2(\rho_{AB})\geq 0$ $\forall$ $\rho_{AB}$, Eq.\ (\ref{S22}) implies \begin{equation}|\bm{r}_A|^2+|\bm{r}_B|^2+||J||^2\leq 3\,,\label{b1}\end{equation} with $|\bm{r}_A|^2+|\bm{r}_B|^2+||J||^2=3$ if and only if $\rho_{AB}$ is pure ($\rho_{AB}^2=\rho_{AB}$, $S_2(\rho_{AB})=0$). Moreover, as ${\rm Tr}\, \rho^{q'}\leq {\rm Tr}\,\rho^q$ if $q'>q>0$, for the present normalization we have $S_3(\rho)\geq \frac{2}{3}S_2(\rho)$, which for a two qubit state implies \begin{equation} \bm{r}_A^tJ\bm{r}_B-{\rm det}\,J\leq 1-{\textstyle\frac{1}{3}}S_2(\rho_{AB})\,, \label{b2}\end{equation} with $\bm{r}_A^tJ\bm{r}_B-{\rm det}\,J=1$ if and only if $\rho_{AB}$ is pure. We can verify these results by writing a pure state of two qubits in the Schmidt basis, $|\Psi_{AB}\rangle=\sqrt{p}\,|00\rangle+\sqrt{1-p}\,|11\rangle$, with $p\in[0,1]$, which leads to $|\bm{r}_A|=|\bm{r}_B|=|2p-1|$, $||J||^2=1+8p(1-p)$, $\bm{r}_A^tJ\bm{r}_B=(2p-1)^2$ and ${\rm det}\,J=-4p(1-p)$, and hence to equality in (\ref{b1})--(\ref{b2}).
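These pure-state identities can also be checked numerically. The sketch below (an added illustration, with an arbitrary value of $p$) reconstructs $\bm{r}_A$, $\bm{r}_B$ and $J$ for the Schmidt state $|\Psi_{AB}\rangle=\sqrt{p}\,|00\rangle+\sqrt{1-p}\,|11\rangle$ and verifies the saturation of the bounds as well as $I_2^B=I_3^B=C_{AB}^2$:

```python
import numpy as np

# pure two-qubit state in its Schmidt basis
p = 0.3
psi = np.zeros(4)
psi[0], psi[3] = np.sqrt(p), np.sqrt(1 - p)
rho = np.outer(psi, psi)

sig = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]]),
       np.array([[1, 0], [0, -1]], dtype=complex)]
I2 = np.eye(2)

# Bloch vectors and correlation matrix from the averages <sigma_A>, <sigma_B>, <sigma_A sigma_B^t>
rA = np.array([np.trace(rho @ np.kron(s, I2)).real for s in sig])
rB = np.array([np.trace(rho @ np.kron(I2, s)).real for s in sig])
J = np.array([[np.trace(rho @ np.kron(sm, sn)).real for sn in sig] for sm in sig])

M2 = np.outer(rB, rB) + J.T @ J
M3 = M2 + np.outer(rB, rA) @ J + J.T @ np.outer(rA, rB)
l2 = np.sort(np.linalg.eigvalsh(M2))[::-1]
l3 = np.sort(np.linalg.eigvalsh(M3))[::-1]
I2B = 0.5*(l2[1] + l2[2])
I3B = 0.25*(l3[1] + l3[2]) - 0.5*np.linalg.det(J)

C2 = 4*p*(1 - p)                              # squared concurrence of |Psi_AB>
purity_sum = rA@rA + rB@rB + np.sum(J*J)      # equals 3 for a pure state
```

Both eigenvalue formulas collapse to the squared concurrence, as stated in the text.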
An important final remark concerning the quadratic and cubic entropies is that for an arbitrary single qubit state $\rho_A=\frac{1}{2}(I_2+\bm{r}_A\cdot\bm{\sigma})$ they are {\it identical}, since ${\rm tr}\sigma_\mu^m=0$ for $m$ odd: \begin{equation}S_2(\rho_A)=S_3(\rho_A)=1-|\bm{r}_A|^2\,.\label{23}\end{equation} This entails that the corresponding entanglement monotones \cite{Vi.00} for a two-qubit state are also {\it identical} \cite{RCC.10}, coinciding with the square of the concurrence $C_{AB}$ \cite{WW.98,Ca.03}. {\it Both } quantities $I_2^B$ and $I_3^B$ reduce then to the squared concurrence $C^2_{AB}$ in the case of a pure two-qubit state. This last result can be directly verified using the previous Schmidt decomposition: Both matrices $M_2$ and $M_3$ become diagonal in the ensuing $z$ basis, their two lowest eigenvalues being identical: $\lambda_2=\lambda_3=4p(1-p)=-{\rm det}\,J$. Eqs.\ (\ref{I22}) and (\ref{I33}) lead then to $I_2^B=I_3^B=4p(1-p)$, which is just the square of $C_{AB}=2\sqrt{p(1-p)}$. \section{Application\label{III}} \subsection{States with maximally mixed reduced states} As a first example, let us consider the case $\bm{r}_A=\bm{r}_B=\bm{0}$ in (\ref{1}), such that $\rho_A=\rho_B=\frac{1}{2}I$ and \begin{equation} \rho_{AB}={\textstyle\frac{1}{4}} (I+\bm{\sigma}_A^tJ\bm{\sigma}_B)\,. \label{rhox}\end{equation} We will show that for the state (\ref{rhox}): \\ a) The measurement direction $\bm{k}$ in system $B$ minimizing $I_f^B$ is {\it universal}, i.e., the same for any entropy $S_f$, and given by that of the eigenvector with the largest eigenvalue of the matrix $J^tJ$. \\ b) The ensuing minimum information loss is given by \begin{eqnarray} I_f^B(\rho_{AB})&=&{\textstyle 2f(\frac{p_1+p_2}{2})+2f(\frac{p_3+p_4}{2})}\nonumber\\ &&-f(p_1)-f(p_2)-f(p_3)-f(p_4)\,, \label{ff}\end{eqnarray} where $(p_1,p_2,p_3,p_4)$ are the eigenvalues of (\ref{rhox}) {\it sorted in decreasing order}. 
\\ c) $I_f^A=I_f^B$ $\forall$ $f$, the minimizing direction in $A$ being that of the eigenvector with the largest eigenvalue of $JJ^t$. \\ {\it Proof of a):} For $\bm{r}_A=\bm{r}_B=\bm{0}$, the eigenvalues (\ref{la}) of $\rho'_{AB}$ become ${p}^\mu_\nu(\bm{k})=\frac{1}{4}(1+\nu\mu |J\bm{k}|)$, being two-fold degenerate. If $\bm{k}_m$ is the normalized eigenvector with the largest eigenvalue ($J_m^2$) of $J^tJ$, we have $|J\bm{k}|=\sqrt{\bm{k}^tJ^tJ\bm{k}}\leq \sqrt{\bm{k}^t_m J^tJ\bm{k}_m}=|J_m|$ for any unit vector $\bm{k}$, and hence $p^\mu_\mu(\bm{k})\leq p^\mu_\mu(\bm{k}_m)$. This implies that the distribution $\{{p}^\nu_\mu(\bm{k})\}$ is {\it majorized} \cite{Bha.97} by $\{{p}^\nu_\mu(\bm{k}_m)\}$, i.e., \begin{equation} \rho'_{AB}(\bm{k})\prec \rho' _{AB}(\bm{k}_m)= {\textstyle\frac{1}{4}}[I+J_m(\tilde{\bm{k}}_m\cdot \sigma_{A})(\bm{k}_m\cdot\sigma_{B})] \,,\label{rkm}\end{equation} where $\tilde{\bm{k}}_m=J\bm{k}_m/J_m$ is the corresponding eigenvector of $JJ^t$, entailing $S_f(\rho'_{AB}(\bm{k}))\geq S_f(\rho'_{AB}(\bm{k}_m))$ and hence $I^{\bm {k}}_f\geq I_f^{{\bm k}_m}$ $\forall$ $\bm{k}$ and $S_f$. The state $\rho' _{AB}(\bm{k}_m)$ is thus the {\it least mixed} classical state associated with $\rho_{AB}$, and measurement along $\bm{k}_m$ the {\it least disturbing local measurement} (in $B$) for {\it any} $S_f$. Accordingly, the general stationary condition (\ref{lam}) leads in this case to the eigenvalue equation $J^tJ\bm{k}=\lambda\bm{k}$ $\forall$ $f$, with both matrices $M_2$ and $M_3$ of Eqs.\ (\ref{M2}), (\ref{M3}) reducing to $J^tJ$. This result is apparent. The local axes can be always chosen such that the matrix $J$ is {\it diagonal}. This can be achieved through its singular value decomposition $J=U_AJ^dU_B^t$, where $J^d_{\mu\nu}=J_\mu\delta_{\mu\nu}$, with $J_\mu^2$ the eigenvalues of $J^tJ$ (the same as those of $JJ^t$) and $U_A$, $U_B$ orthonormal matrices ($U_\alpha U^t_\alpha=I$). 
The signs of the $J_\mu$ should be chosen such that $U_\alpha$ are rotation matrices (${\rm det}\,U_\alpha=+1$). Replacing $\bm{\sigma}_\alpha\rightarrow U_\alpha\bm{\sigma}_\alpha$ in (\ref{rhox}), we then obtain \begin{equation}\rho_{AB}= {\textstyle\frac{1}{4}}(I+\sum_{\mu=x,y,z}J_\mu \sigma_{A\mu}\sigma_{B\mu}) \,.\label{rhoxd}\end{equation} Since $|J_m|={\rm Max}\{|J_\mu|\}$, the universal least disturbing measurement is, therefore, {\it along the maximally correlated direction}, leaving the largest term of (\ref{rhoxd}) in the post-measurement state (\ref{rkm}). Note that Eq.\ (\ref{rhoxd}) satisfies Eqs.\ (\ref{d}) in a product basis formed by the eigenstates of $\sigma_{A\mu}\sigma_{B\mu}$, for any $\mu=x,y,z$. \\ {\it Proof of b):} Eq.\ (\ref{rhoxd}) is diagonal in the Bell basis $\{|\Psi_{1,2}\rangle=\frac{|00\rangle\pm|11\rangle}{\sqrt{2}}$, $|\Psi_{3,4}\rangle=\frac{|01\rangle\pm|10\rangle}{\sqrt{2}}\}$, i.e., $\rho_{AB}=\sum_{i}p_i|\Psi_i\rangle\langle \Psi_i|$, with eigenvalues \[{\textstyle p_{1,2}=\frac{1+J_z\pm (J_x-J_y)}{4},\; p_{3,4}=\frac{1-J_z\pm (J_x+J_y)}{4}}\,.\] Without loss of generality we may always choose the local axes $x,y,z$ such that $|J_m|=|J_z|\geq |J_x|\geq |J_y|$, with $J_z\geq 0$, $J_x\geq 0$ (rotations of angle $\pi$ around one of the axes in $A$ or $B$ lead to $J_\mu\rightarrow-J_\mu$ for the other axes). In such a case $p_1\geq p_2\geq p_3\geq p_4$, and the least disturbing measurement is along $z$, such that Eq.\ (\ref{rkm}) becomes \begin{equation} \rho'_{AB}(\bm{k}_m)={\textstyle\frac{1}{4}}(I+J_z\sigma_{A z}\sigma_{B z}) \,,\end{equation} having degenerate eigenvalues \[{\textstyle \frac{1+J_z}{4}=\frac{p_1+p_2}{2}\,,\;\; \frac{1-J_z}{4}=\frac{p_3+p_4}{2}}\,.\] The minimum information loss $I^B_f(\rho_{AB})= S_f(\rho'_{AB}(\bm{k}_m))-S_f(\rho_{AB})$ becomes therefore Eq.\ (\ref{ff}), where $(p_1,p_2,p_3,p_4)$ are in general the eigenvalues of $\rho_{AB}$ sorted in decreasing order. 
\\ {\it Proof of c):} Since Eq.\ (\ref{ff}) is fully determined by the sorted eigenvalues of $\rho_{AB}$, we have obviously $I_f^A=I_f^B$, a result which is apparent from the symmetric representation (\ref{rhoxd}). From (\ref{rkm}) it is seen that the minimizing measurement in $A$ is along $\tilde{\bm{k}}_m$. Let us now discuss the main features of Eq.\ (\ref{ff}). It is verified that the strict concavity of $f$ ensures $I_f^B(\rho_{AB})\geq 0$ $\forall$ $S_f$, with $I_f^B(\rho_{AB})=0$ {\it only if} $p_1=p_2$ {\it and} $p_3=p_4$, in which case $\rho_{AB}=\rho'_{AB}=p_1(|00\rangle\langle 00|+|11\rangle\langle 11|)+p_3(|01\rangle\langle 01|+|10\rangle\langle 10|)$ {\it is a classically correlated state}. In the von Neumann case $f(p)=-p\log p$, Eq.\ (\ref{ff}) is just the quantum discord $D^A=D^B$ of the state, coinciding with the result of ref.\ \cite{SL2.08}. For the states (\ref{rhox}), $\rho'_B=\rho_B=\half I$ for any $M_B$, entailing that the quantum discord (\ref{D}) reduces to the information deficit, i.e., to the present quantity $I_f^B$ for the von Neumann choice of $f$. In the quadratic case (\ref{S2}), Eq.\ (\ref{I22}) or (\ref{ff}) leads to \begin{equation} I_2^B(\rho_{AB})={\textstyle\frac{1}{2}}(J_x^2+J_y^2)=(p_1-p_2)^2+(p_3-p_4)^2\,, \label{I2s} \end{equation} which is just twice the geometric discord of the state, whereas in the cubic case (\ref{S33}), Eq.\ (\ref{I33}) or (\ref{ff}) leads to \begin{eqnarray} I_3^B(\rho_{AB})&=&{\textstyle\frac{1}{4} (J_x^2+J_y^2)-\frac{1}{2}J_xJ_yJ_z}\label{I3sa}\\ &=&(p_1-p_2)^2(p_1+p_2)+(p_3-p_4)^2(p_3+p_4)\label{I3s} \end{eqnarray} which is just a weighted average of the terms in (\ref{I2s}), with weights $p_1+p_2$ and $p_3+p_4$, and implies $I^B_3(\rho_{AB})\leq I^B_2(\rho_{AB})$. 
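Eq.\ (\ref{ff}) can be checked against the closed forms (\ref{I2s}) and (\ref{I3s}) for arbitrary spectra. In the sketch below (plain Python; names are ours) the entropic functions are normalized as $f_2(p)=2p(1-p)$ and $f_3(p)=\frac{4}{3}(p-p^3)$, our reading of the normalization fixed by Eq.\ (\ref{23}).

```python
def I_f(ps, f):
    """Minimum information loss of Eq. (ff); ps = (p1,p2,p3,p4), sorted decreasing."""
    p1, p2, p3, p4 = ps
    return (2 * f((p1 + p2) / 2) + 2 * f((p3 + p4) / 2)
            - sum(f(p) for p in ps))

def f2(p):
    """Quadratic entropic function, normalized so S_2(rho_A) = 1 - |r_A|^2."""
    return 2 * p * (1 - p)

def f3(p):
    """Cubic entropic function, same single-qubit normalization."""
    return (4.0 / 3) * (p - p ** 3)

def closed_forms(ps):
    """I_2^B and I_3^B from Eqs. (I2s) and (I3s)."""
    p1, p2, p3, p4 = ps
    I2 = (p1 - p2) ** 2 + (p3 - p4) ** 2
    I3 = (p1 - p2) ** 2 * (p1 + p2) + (p3 - p4) ** 2 * (p3 + p4)
    return I2, I3
```

The agreement is exact for any probability quadruple, and the inequality $I_3^B\leq I_2^B$ follows since both weights are at most $1$.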
Let us notice that for small $J_\mu$, Eq.\ (\ref{ff}) becomes in fact proportional to (\ref{I2s}) {\it for any} $S_f$: Setting $J_m=J_z$, \begin{equation} I_f^B(\rho_{AB})\approx {\textstyle\frac{1}{2}} c_f (J_x^2+J_y^2)+O(J^3)=c_f I_2^B(\rho_{AB})+O(J^3) \end{equation} with $c_f=-\frac{1}{4}f''(\frac{1}{4})>0$. This implies {\it a universal behavior} in the vicinity of the maximally mixed state $I/4$, in agreement with the general results of \cite{RCC.10}. {\it Relation with entanglement}. It is well known that the state (\ref{rhox}) is entangled if and only if its largest eigenvalue $p_1$ satisfies $p_1>1/2$. Its concurrence \cite{WW.98} is given by \begin{equation} C_{AB}={\rm Max}[2p_1-1,0]\,,\label{C} \end{equation} with $2p_1-1=p_1-p_2-p_3-p_4$. This implies \begin{equation} I_2^B\geq C^2_{AB},\;\;I_3^B\geq C^2_{AB}\,,\label{bound} \end{equation} with equality for $C_{AB}>0$ valid in both cases only if $p_3=p_4=0$ ($C^2_{AB}\leq (p_1-p_2)^2-(p_1-p_2)(p_3+p_4)\leq (p_1-p_2)^2(p_1+p_2)$ if $p_3+p_4\leq p_1-p_2$). Eq.\ (\ref{bound}) means that for the states (\ref{rhox}), $I_2^B$ and $I_3^B$ are both {\it upper bounds} to their corresponding entanglement monotone. This is not a general property. For instance, it is not valid in the von Neumann case $f(\rho)=-\rho\log\rho$, where Eq.\ (\ref{ff}) can be lower than the entanglement of formation $E_{AB}=\sum_{\nu=\pm}f(\frac{1+\nu\sqrt{1-C_{AB}^2}}{2})$ \cite{WW.98} for the present states. \begin{figure} \vspace*{-0.cm} \centerline{\scalebox{.7}{\includegraphics{fig1.ps}}} \vspace*{-0.5cm} \caption{The maximum and minimum values reached by the quantum correlation measures $I_2^B(\rho_{AB})$ and $I_3^B(\rho_{AB})$ in the state (\ref{rhox}), Eqs.\ (\ref{I2s})--(\ref{I3s}), as a function of its maximum eigenvalue $p_1$. The common minimum is just the squared concurrence $C^2_{AB}$, whereas the respective maximum is indicated by the dashed and dashed-dotted lines. 
The inset depicts the maximum and minimum values reached in this state by $I_f^B$ in the von Neumann case ($q=1$, $\log=\log_2$), where it coincides with the quantum discord, with the solid line depicting the entanglement of formation. The least disturbing measurement is here the same for all entropies, and along the direction of the main principal axis of $J^tJ$ (see text). Quantities plotted are dimensionless in all figures.} \label{f1} \end{figure} \vspace*{0.cm} Fig.\ \ref{f1} depicts the maximum and minimum values reached by $I_2^B$ and $I_3^B$ in the states (\ref{rhox}) for fixed values of the maximum eigenvalue $p_1$. The common minimum is just the squared concurrence $C^2_{AB}$, reached for $p_3=p_4=0$ if $p_1\geq 1/2$ (and $p_2=p_1$, $p_3=p_4$ if $p_1\leq 1/2$). The maximum is reached for $p_2=p_3=p_4$ if $p_1\geq 7/13\approx 0.54$ for $I_2$ and $p_1\agt 0.44$ for $I_3$, and for $p_2=p_3$, $p_4=0$, if $p_1$ lies below the previous values and above $1/3$. As a result, the maximum values for zero concurrence of $I_2$ and $I_3$ within these states are $1/8$ and $2/27$ respectively, obtained at $p_1=1/2$. In contrast, in the von Neumann case the minimum (again obtained for $p_3=p_4=0$ if $p_1\geq 1/2$) lies clearly below $E_{AB}$ $\forall$ $p_1\in(1/2,1)$, and even the maximum (attained at $p_2=p_3=p_4$ if $p_1\agt 0.86$ and $p_2=p_3$, $p_4=0$ if $1/3\leq p_1\alt 0.86$) lies {\it below} $E_{AB}$ if $p_1\agt 0.91$. If $p_1\leq 1/3$ the maximum in these three measures is reached for $p_2=p_3=p_1$. \subsection{States with parity symmetry} Let us now consider the case where both $\bm{r}_A$ and $\bm{r}_B$ are directed along the same principal axis, i.e., $\bm{r}_B$ along $\bm{k}$ and $\bm{r}_A$ along $J\bm{k}$, with $\bm{k}$ an eigenvector of $J^tJ$ (and hence, $J\bm{k}$ an eigenvector of $JJ^t$). 
Choosing these axes as the local $z$ axes, such that $\bm{r}_A=r_A\bm{k}_z$, $\bm{r}_B=r_B\bm{k}_z$ and $J_{\mu\nu}=J_\mu\delta_{\mu\nu}$, such a state can be written as \begin{eqnarray}\rho_{AB}&=&{\textstyle\frac{1}{4}}(I+r_A\sigma_{Az}+r_B\sigma_{Bz}+ \sum_{\mu=x,y,z}J_\mu \sigma_{A\mu}\sigma_{B\mu})\label{X}\\ &=&\frac{1}{4}\left(\begin{array}{cccc}a_+&0&0&\alpha_+\\0&c_+&\alpha_-&0\\ 0&\alpha_-& c_-&0\\ \alpha_+&0&0&a_- \end{array}\right), \begin{array}{rcl} a_{\pm}&=&1+J_z\pm (r_A+r_B)\\ c_{\pm}&=&1-J_z\pm(r_A-r_B)\\ \alpha_{\pm}&=&J_x\mp J_y\end{array}.\nonumber\end{eqnarray} where the matrix is the representation in the standard basis of $\sigma_{Az}\sigma_{Bz}$ eigenstates. This state commutes with the spin parity \cite{RCM.08} $P_z=-\exp[i\pi(\sigma_{Az}+\sigma_{Bz})/2]$. It is also denoted as an $X$ state \cite{AR.10}. We will now show that {\it a measurement of $\bm{\sigma}_{B}$ along any of the principal axes $x,y,z$ will provide a stationary point of $I_f^{\bm{k}}$ $\forall$ $S_f$}. \\ {\it Proof:} For a measurement along the $z$ axis ($\bm{k}=\bm{k}_z$), i.e., along the axis where $\rho_B$ is diagonal, $J^tJ\bm{k}_z=J_z^2\bm{k}_z$, $\bm{r}_A$ and $\bm{r}_B$ are all along this axis and Eq.\ (\ref{lam}) is then trivially satisfied $\forall$ $\alpha_i$. It is a particular case of Eq.\ (\ref{d}), which here holds in the standard basis. For a measurement along the $x$ axis ($\bm{k}=\bm{k}_x$), $J^tJ\bm{k}_x=J_x^2\bm{k}_x$ while $\bm{r}_B\cdot \bm{k}_x=0$ and $|\bm{r}_A+\nu J\bm{k}_x|=\sqrt{r_A^2+J_x^2}$. Hence ${p}^\mu_\nu=\frac{1}{4}(1+\mu|\bm{r}_A+\nu J\bm{k}_x|)$ is independent of $\nu$. This leads to $\alpha_1=\alpha_2=0$ in (\ref{al}), in which case Eq.\ (\ref{lam}) is again satisfied. For $\bm{k}=\bm{k}_y$ the argument is similar. We also remark that these arguments also apply to the quantum discord (\ref{D}), as $\eta=0$ in (\ref{eqmod}) for $\bm{k}=\bm{k}_x$ or $\bm{k}_y$. 
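The stationarity of the principal axes can also be checked numerically. The sketch below (plain Python; helper names are ours) assumes the general post-measurement spectrum $p^\mu_\nu=\frac{1}{4}(1+\nu\,\bm{r}_B\cdot\bm{k}+\mu|\bm{r}_A+\nu J\bm{k}|)$, our reconstruction of the eigenvalues (\ref{la}) consistent with the special cases used above, and evaluates $S_2(\rho'_{AB})$ for measurement directions in the $x$-$z$ plane.

```python
import math

def post_spectrum(k, rA, rB, J):
    """Eigenvalues p^mu_nu of rho'_AB after a measurement along unit vector k
    (our reconstruction of Eq. (la) for a general two-qubit state)."""
    Jk = [sum(J[i][m] * k[m] for m in range(3)) for i in range(3)]
    b = sum(rB[i] * k[i] for i in range(3))
    ps = []
    for nu in (1, -1):
        n = math.sqrt(sum((rA[i] + nu * Jk[i]) ** 2 for i in range(3)))
        ps += [0.25 * (1 + nu * b + n), 0.25 * (1 + nu * b - n)]
    return ps

def S2(ps):
    """Quadratic entropy, normalized as S_2 = 2(1 - sum p^2)."""
    return 2 * (1 - sum(p * p for p in ps))

def S2_post(gamma, rA, rB, J):
    """S_2(rho'_AB) for k = (sin gamma, 0, cos gamma), i.e. phi = 0."""
    return S2(post_spectrum([math.sin(gamma), 0.0, math.cos(gamma)], rA, rB, J))

def rho_spectrum(rA, rB, J):
    """Eigenvalues of the X state itself, Eq. (pex)."""
    Jx, Jy, Jz = J[0][0], J[1][1], J[2][2]
    lam = []
    for nu in (1, -1):
        s = math.sqrt((rA[2] + nu * rB[2]) ** 2 + (Jx - nu * Jy) ** 2)
        lam += [0.25 * (1 + nu * Jz + s), 0.25 * (1 + nu * Jz - s)]
    return lam
```

A finite-difference derivative of $S_2(\rho'_{AB})$ vanishes at $\gamma=0$ and $\gamma=\pi/2$ but not at generic intermediate angles, and scanning $\gamma\in[0,\pi/2]$ and subtracting $S_2(\rho_{AB})$ reproduces the quadratic measure $I_2^B$ for the parameters tried.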
While other stationary directions may also exist, the principal axes are strong candidates for minimizing $I_f^{\bm k}$. Typically, the minimum will be attained for measurements along $z$ if ${\rm Max}[|J_x|,|J_y|]$ is sufficiently small, while otherwise measurements along $x$ or $y$ will be preferred. A transition between these two regimes will arise as $J_x$ or $J_y$ increases, whose details will depend on the entropic function and may involve intermediate directions $\bm{k}$. Writing $\bm{k}=(\sin\gamma\cos\phi,\sin\gamma\sin\phi,\cos\gamma)$, these intermediate solutions can be found from Eq.\ (\ref{lam}), which leads here to $\phi=0$ or $\phi=\pi/2$ (if $|J_x|>|J_y|$ the minimum corresponds to $\phi=0$ for {\it any} $S_f$, as the ensuing distribution majorizes that for $\phi=\pi/2$) and to $\gamma=0$ or \begin{equation} \cos\gamma=\frac{\alpha_1 r_{B}+\alpha_2 J_z r_A} {\alpha_3(J_x^2-J_z^2)}\,,\label{eqf}\end{equation} where we have assumed $|J_x|>|J_y|$ such that $\phi=0$. The intermediate solutions $|\gamma|\in(0,\pi/2)$ of (\ref{eqf}), if existent, are {\it degenerate}, as both choices $\pm\gamma$ lead to the same $I_f^{\bm{k}}$. Just the principal axes solutions are non-degenerate. The final expression for $I_f^B$ is formally \begin{equation}I_f^B(\rho_{AB})=\sum_{\mu,\nu=\pm}f({p}^\mu_\nu)-f(\lambda^\mu_\nu)\,, \label{Ifx}\end{equation} where ${p}^\mu_\nu=\frac{1}{4}(1+\nu r_Bk_z+\mu\sqrt{(r_A+\nu J_zk_z)^2+J^2_xk^2_x})$ are the eigenvalues (\ref{la}) of $\rho'_{AB}$ and $\lambda^\mu_\nu$ those of $\rho_{AB}$: \begin{equation} \lambda^\mu_\nu={\textstyle\frac{1}{4}}[1+\nu J_z+ \mu\sqrt{(r_A+\nu r_B)^2+(J_x-\nu J_y)^2}]\,. \label{pex}\end{equation} We can verify the previous results in the quadratic and cubic cases. 
For an $X$ state both matrices $M_2$ and $M_3$ (Eqs.\ (\ref{M2}), (\ref{M3})) are diagonal in the principal axes basis: \begin{eqnarray} M_{2_{\mu\nu}}&=&\delta_{\mu\nu}(J_\mu^2+\delta_{\mu z}r_B^2)\,,\nonumber\\ M_{3_{\mu\nu}}&=&\delta_{\mu\nu}[J_\mu^2+\delta_{\mu z}(r_B^2+2 r_B r_A J_z)]\,.\nonumber \end{eqnarray} Hence, the optimum measurement will be along the axis with the maximum diagonal value and {\it no intermediate solutions will arise} (for non-degenerate eigenvalues), as opposed to the general case. Assuming $|J_y|<|J_x|$, a ``sharp'' $z\rightarrow x$ transition for the least disturbing measurement will then take place, with the $x$ axis preferred for \begin{eqnarray} J_x^2&>&J_z^2+r_B^2\,,\;\;q=2\,,\label{c2}\\ J_x^2&>&J_z^2+r_B^2+2r_Br_A J_z\,,\;\;q=3\label{c3}\end{eqnarray} in the quadratic and cubic cases respectively, such that \begin{eqnarray} I_2^B(\rho_{AB})&=&\half\{J_y^2+{\rm Min}[J_x^2,r_B^2+J_z^2]\}\,,\label{c4}\\ I_3^B(\rho_{AB})&=&{\textstyle\frac{1}{4}} \{J_y^2-2J_xJ_yJ_z+{\rm Min}[J_x^2,r_B^2+J_z^2+2r_Ar_BJ_z]\}\,.\nonumber\\ &&\label{c5} \end{eqnarray} These expressions are in general no longer upper bounds to the squared concurrence, which for these states is $C_{AB}=\frac{1}{2}{\rm Max}[|\alpha_+|-\sqrt{c_+c_-},|\alpha_-|-\sqrt{a_+a_-},0]$. Nonetheless, $I_2^B$ remains an upper bound to $C_{AB}^2$ in the ``$z$ phase'', as $C_{AB}^2\leq \frac{1}{4}(J_x\pm J_y)^2\leq \frac{1}{2}(J_x^2+J_y^2)$. \subsection{Mixture of aligned states} As a particularly relevant example of Eq.\ (\ref{X}), we will consider the mixture of two states with spins aligned along different directions. 
Choosing the $z$ axis as the bisector, such a state can be written as \begin{eqnarray} \rho_{AB}&=& \half(|\theta\theta\rangle\langle\theta\theta| +|-\theta-\theta\rangle\langle-\theta-\theta|)\label{sth}\\ &=& \frac{1}{4}\left(\begin{array}{cccc}a_+&0&0&c\\0&c&c&0\\ 0&c&c&0\\c&0&0&a_- \end{array}\right)\,, \begin{array}{rcl}a_{\pm}&=&(1\pm\cos\theta)^2\\ c&=&\sin^2\theta \end{array}\,, \end{eqnarray} which corresponds to $(J_x,J_y,J_z)=(\sin^2\theta,0,\cos^2\theta)$ and $r_A=r_B=\cos\theta$ in (\ref{X}). Here \begin{equation}{\textstyle|\theta\rangle=\exp[-i\frac{\theta}{2}\sigma_y]|0\rangle= \cos\frac{\theta}{2}|0\rangle+\sin\frac{\theta}{2}|1\rangle} \end{equation} is the state with the spin forming an angle $\theta$ with the $z$ axis in the $x,z$ plane. The relevance of this state was discussed in \cite{CRC.10}. It represents, roughly, the state of a spin pair in the definite parity ground state of a finite $n$-spin ferromagnetic-type $XY$ spin chain in a transverse field for $|B|<B_c$, and the {\it exact} state of any pair at the immediate vicinity of the factorizing field \cite{RCM.08} (neglecting small coherence terms $\propto \cos^{n-2}\theta$). This state is {\it separable}, i.e., it is a convex mixture of product states \cite{RF.89}, and the concurrence $C_{AB}$ accordingly vanishes $\forall$ $\theta$. Nonetheless, it has non-zero discord \cite{CRC.10} if $\theta\in(0,\pi/2)$. It will then have non-zero values of {\it any} $I_f^B$ in this interval, with $I_f^B=I_f^A$ $\forall$ $S_f$ due to the symmetry of the state. For $\theta=0$ it is obviously a pure product state, while for $\theta=\pi/2$ it is a {\it classically correlated state}, i.e., diagonal in a {\it standard} product basis, implying $I_f^B(\theta)\equiv I_f^B(\rho_{AB}(\theta))=0$ for $\theta=0$ or $\theta=\pi/2$ $\forall$ $S_f$. 
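These properties are simple to verify numerically. The sketch below (plain Python; helper names are ours) constructs the mixture (\ref{sth}) explicitly and checks the matrix entries and Bloch parameters quoted above; a small routine also solves the equality cases of the transition conditions (\ref{c2})--(\ref{c3}) for this parametrization, i.e.\ the angles at which the preferred measurement axis switches.

```python
import math

SX = [[0, 1], [1, 0]]
SZ = [[1, 0], [0, -1]]
ID = [[1, 0], [0, 1]]

def kron(a, b):
    """Kronecker product of two 2x2 matrices -> 4x4."""
    return [[a[i // 2][j // 2] * b[i % 2][j % 2] for j in range(4)]
            for i in range(4)]

def expect(rho, op):
    """Tr(rho op) for real 4x4 matrices."""
    return sum(rho[i][j] * op[j][i] for i in range(4) for j in range(4))

def aligned_mixture(theta):
    """rho_AB = (|tt><tt| + |-t-t><-t-t|)/2, |t> = cos(t/2)|0> + sin(t/2)|1>."""
    rho = [[0.0] * 4 for _ in range(4)]
    for sgn in (1, -1):
        q = (math.cos(theta / 2), sgn * math.sin(theta / 2))
        psi = [q[0] * q[0], q[0] * q[1], q[1] * q[0], q[1] * q[1]]
        for i in range(4):
            for j in range(4):
                rho[i][j] += 0.5 * psi[i] * psi[j]
    return rho

def critical_cos2(k):
    """Solve sin^4 t = cos^2 t + k cos^4 t for c = cos^2 t, i.e.
    (k - 1) c^2 + 3 c - 1 = 0; k = 1 and k = 3 are the equality cases of
    conditions (c2) and (c3) for (Jx,Jz) = (sin^2,cos^2), rA = rB = cos t."""
    if k == 1:
        return 1.0 / 3
    return (-3 + math.sqrt(9 + 4 * (k - 1))) / (2 * (k - 1))
```

The routine gives $\cos^2\theta_{c2}=1/3$ for the quadratic case and $\cos^2\theta_{c3}=(\sqrt{17}-3)/4$ for the cubic one.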
\begin{figure}[t] \vspace*{-0.5cm} \centerline{\scalebox{.6}{\includegraphics{fig2.ps}}} \vspace*{-0.5cm} \caption{Top: The quantum correlation measures $I_q^B(\rho_{AB})$ in the state (\ref{sth}), as a function of the angle $\theta$ for $q=1$ (von Neumann case), $2$ and $3$. $D^B$ denotes the quantum discord. Bottom: The least disturbing measurement angle $\gamma$ vs.\ $\theta$ for the same cases depicted above. It is seen that $\gamma$ exhibits a sharp transition from $0$ to $\pi/2$ (i.e., from $z$ to the $x$ axis) in the quadratic $(q=2)$ and cubic $(q=3)$ cases, whereas in the von Neumann case ($q=1$) the transition is smooth. No transition arises in the case of the quantum discord.} \label{f2} \end{figure} \vspace*{0.cm} It can be expected that as $\theta$ increases, the least disturbing measurement will change from $z$ to $x$. In the quadratic and cubic cases, the transition is {\it sharp}. We obtain, according to Eqs.\ (\ref{c2})--(\ref{c5}), \begin{eqnarray} I_2&=&\half\left\{\begin{array}{lr} \sin^4\theta&\theta<\theta_{c2}\\ \cos^2\theta+\cos^4\theta &\theta>\theta_{c2}\end{array},\right. \\ I_3&=&{\textstyle\frac{1}{4}} \left\{\begin{array}{lr}\sin^4\theta&\theta<\theta_{c3}\\ \cos^2\theta+3\cos^4\theta &\theta>\theta_{c3}\end{array},\right. \end{eqnarray} where $\cos^2\theta_{c2}=1/3$ ($\theta_{c2}\approx 0.61\pi/2$) and $\cos^2\theta_{c3}=(\sqrt{17}-3)/4$ ($\theta_{c3}\approx 0.64\pi/2$), with the minimizing measurement changing from $z$ to $x$ for $\theta>\theta_{ci}$. These two quantities exhibit then a cusp-like maximum at $\theta=\theta_{ci}$, i.e. slightly above $\pi/4$, as seen in Fig.\ \ref{f2}. On the other hand, for other entropies a smooth transition from $z$ to the $x$ direction can arise. For instance, in the von Neumann case $z$ is preferred exactly for $\theta\leq \pi/4$, but $x$ is minimum only for $\theta\agt 0.64\pi/2$. 
In between, the optimum measurement is obtained for an intermediate angle $\gamma$, as determined by Eq.\ (\ref{eqf}), which varies continuously from $0$ to $\pi/2$, as seen in Fig.\ \ref{f2}. This leads to a smooth maximum, located closer to $\pi/4$. In the case of the quantum discord, the minimizing angle is $\gamma=\pi/2$ $\forall$ $\theta$, exhibiting then {\it a different behavior} due to the effect of the local term. In this case a conditional entropy, rather than a total entropy, is minimized. \begin{figure}[t] \vspace*{-0.5cm} \centerline{\scalebox{.6}{\includegraphics{fig3.ps}}} \vspace*{-0.5cm} \caption{The least disturbing measurement angle $\gamma$ vs.\ $\theta$ determined by $I_q^B(\theta)$, for different values of $q$.} \label{f3} \end{figure} \vspace*{0.cm} For the present state there is no universally least mixed post-measurement state $\rho'_{AB}$, and the least disturbing measurement therefore depends on the entropic function. In order to appreciate the previous results from a more general perspective, the behavior of the minimizing angle for different $q$ in the entropies (\ref{Sq}) is depicted in Fig.\ \ref{f3}. The sharp transition $z\rightarrow x$ (i.e., $0\rightarrow\pi/2$) occurs for $2\leq q\leq 3$, indicating a special critical behavior at these two values. A smoothed transition like that encountered in the von Neumann case arises for $1/2<q<2$ and also $q>3$, where $\gamma$ varies continuously from $0$ to $\pi/2$ within some window of $\theta$ values, which narrows for $q$ close to $2$ or $3$. For $0<q\leq 1/2$, the minimizing angle changes sharply from $0$ to an intermediate value $\gamma\approx \theta$, increasing then almost linearly with $\theta$. This is due to the fact that for low $q$, $S_q(\rho'_{AB})$ is minimized when the lowest eigenvalue of $\rho'_{AB}$ vanishes, and this occurs precisely for $\gamma=\theta$. 
On the other hand, for high $q$, $S_q(\rho'_{AB})$ is minimized when the largest eigenvalue of $\rho'_{AB}$ is maximum, and the latter is maximized for $\gamma=0$ if $\theta\leq\theta_c\approx 0.66\pi/2$, and for an intermediate $\gamma$ if $\theta>\theta_c$, which varies continuously from $0$ to $\pi/2$ for $\theta_c<\theta<\pi/2$. Accordingly, for high but finite $q$ values, $\gamma=0$ for $\theta\alt \theta_c$, increasing then with $\theta$ and reaching $\pi/2$ at increasingly higher $\theta$. Different disorder criteria lead then to different least disturbing measurements in this case, in contrast with the state (\ref{rhox}). \section{Conclusions\label{IV}} We have analyzed the determination of the minimum information loss $I_f^B$ associated with an unread local measurement in a bipartite system, for a general entropy $S_f$. This quantity is a measure of the quantum correlations lost in the local measurement, and reduces to the information deficit and the geometric discord when $S_f$ is chosen as the von Neumann and linear entropy respectively. A general stationary condition was derived, together with its explicit form for an arbitrary mixed state of two qubits. Explicit expressions for the cubic entropy and the associated measure $I_3^B$ were obtained for this case, which require, as in the quadratic case (geometric discord), just the eigenvalues of a $3\times 3$ matrix. As an application, we have first examined two-qubit mixed states with maximally mixed marginals, where the minimum information loss $I_f^B$ for any entropy was shown to be a simple function of the eigenvalues of $\rho_{AB}$. The minimizing measurement is in this case universal. Moreover, in this case $I_2^B$ and $I_3^B$ were shown to be strict upper bounds of the squared concurrence, which is the associated entanglement monotone for both entropies. 
We have also analyzed the case of $X$ states, providing explicit expressions for $I_2^B$ and $I_3^B$ and showing that spin measurements along the principal axes of the matrix $J^tJ$ are {\it universal} stationary points of $I_f^B$ for {\it any} $S_f$. Finally, the special case of a mixture of aligned states was examined in detail. Here the least disturbing local measurement changes, for all measures $S_f$, from $z$ (bisector axis) to the $x$ axis as the angle $2\theta$ between both directions changes from $0$ to $\pi$, being then different from that optimizing the original quantum discord (which stays constant), although the type of transition depends on the information measure employed. The least disturbing measurement according to $I_f^B$ is thus more sensitive to the strength of the correlation, and reflects the ``transition'' experienced by the state. Application of the present formalism to more complex systems is currently under investigation. The authors acknowledge support of CIC (RR) and CONICET (NC,LC) of Argentina.
\section{Introduction} Solar flares are the result of sudden energy release, often exceeding $10^{32}$ ergs, from magnetic reconnection \citep{Hudson2011}. The main manifestation of flares is extensive emission over much of the electromagnetic spectrum. Intriguingly, under some special circumstances, negative flare contrasts (decrease of intensity) are occasionally reported in stellar observations in visible continuum wavelengths \citep{Flesch1974}. In those cases, a 20\% negative contrast in the optical continuum is typically detected prior to the initial brightening \citep{Hawley1995}. The widely accepted postulate for generating negative flares is the H$^{-}$ absorption model \citep{Grinin1973,Grinin1983}, in which a bombarding beam of electrons penetrates to the lower and cooler atmosphere and causes enhanced collisional ionization of hydrogen, leading to an increase of electron density and eventually an increase of H$^-$ opacity. As a result, the formation layer of the visible continuum, though still in the photosphere, is shifted slightly upward and becomes cooler. Analogously, intensity drops associated with flares can occur on the Sun, but involving a variety of features and formation mechanisms. The continuum dimming caused by the enhanced H$^-$ opacity was rarely observed and the diminution effects are weaker on the Sun than those of the stellar events \citep{Henoux1990}. So far only two events have been reported, with negative contrasts of 5\% at 5500~\AA\ \citep{Henoux1990} and 1\% $\sim$ 2\% at 8500~\AA\ \citep{Ding2003b}. However, due to the measurement uncertainty and low resolution, both events are regarded as non-compelling detections \citep{Ding2003b, Henoux1990}. In microwave observations, negative contrasts of solar radiation during flares have also been reported \citep{Sawyer1977, Maksimov1991}. 
The term ``negative flare'' was used but the dimming in radio flux was caused by absorption of intervening filament material, or a similar mechanism, rather than a change in the local emission process itself. Besides the visible continuum and microwave observations, negative flares can also be found in helium-line observations. Solar observations by \citet{Zirin1980} attracted attention to the \ion{He}{1} D3 line at 5876~\AA, by which the helium element was first discovered in 1868. \citet{Liu2013} confirmed the occurrence of enhanced D3 darkening and discussed fine structures of the dark-bright pattern of flare ribbons. Owing to its two electrons, a \ion{He}{1} atom comprises two different kinds of energy states: the singlet (para-helium) and the triplet (ortho-helium) states. Since the ground state is a singlet one ($1s^2$ $^1S$), radiative transitions to the higher triplet states ($2p$~$^{3}P$ and $3d$~$^{3}D$) responsible for the D3 line are forbidden. These states are then populated through collisional transitions or photoionization from the ground state followed by recombinations, which require energy from thermal (or nonthermal) electrons and EUV irradiation, respectively. Whether the D3 line appears in emission or absorption depends on whether the line source function in the line forming region is larger or smaller than the nearby continuum intensity. The line source function is in turn determined by the ratio of the populations at the two energy states \citep{Rutten2003}. In the case that these energy states are populated mainly through thermal collisional transitions, the plasma temperature is the key factor in determining whether the D3 line appears in absorption or emission \citep{Centeno2008, Mauas2005, Zirin1988}. Similar to the D3 line, another helium line, at 10830~\AA, is formed by the transition between $2s$~$^{3}S$ and $2p$~$^{3}P$ of the helium triplet. 
A negative flare was reported by \citet{Harvey1984} in \ion{He}{1} 10830~\AA\ using spectrographic data. However, due to the lack of spatial information, the possibility of enhanced absorption of plage or dark filament material moving into the spectral slit cannot be ruled out. Two mechanisms, namely photoionization-recombination and collisional ionization-recombination, that can populate the helium triplet and thus enable the transitions between $2s$~$^{3}S$ and $2p$~$^{3}P$ states, were studied by \citet{Ding2005}. As expected, the EUV radiation turns the \ion{He}{1} 10830 line to emission via photoionization-recombination, which has been confirmed by \citet{Zeng2014}. Notably, the collisional ionization-recombination can enhance the absorption at the beginning of the flare and generate stronger emission at flare maximum than the photoionization-recombination effect \citep{Ding2005}. As such, under collisional bombardment, the intensity in a flaring region could first decrease and then increase. In this study, we present the very first, unambiguous evidence of negative contrast observed during solar flares and discuss the morphological and spectral properties of the negative flares. \section{Observation} The key observations in \ion{He}{1} 10830~\AA\ presented in this study were obtained using the newly commissioned 1.6-m New Solar Telescope \citep[NST;][]{Goode2012} at Big Bear Solar Observatory. Equipped with a high-order adaptive optics system using 308 sub-apertures \citep{Cao2010}, NST provides unprecedented high spatio-temporal resolution. For instance, the image scale at 10830~\AA\ achieves 0\arcsec.085 (61.6 km) per pixel. The near infrared (10830~\AA) imaging system is based on a liquid-nitrogen-cooled Hg\,Cd\,Te focal plane CMOS array and a Lyot-filter \citep{Cao2010}, which was tuned to the blue wing of the \ion{He}{1} line at 10830.05~\AA\ with a passband of 0.5~\AA. 
The typical field-of-view (FOV) is about 85\arcsec\ and the effective cadence is 15 s after performing the speckle reconstruction \citep{Woger2007}. Supportive H$\alpha$ imaging spectroscopic data were obtained simultaneously by the Fast Imaging Solar Spectrograph \citep[FISS;][]{Chae2013}, which has a smaller FOV of 40\arcsec\ by 60\arcsec\ with a scanning cadence as fast as 20 s and a spectral resolving power ($\Delta\lambda$/$\lambda$) of $1.4 \times 10^5$. Complementary full-disk intensity maps are obtained from two major instruments onboard the Solar Dynamics Observatory \citep[SDO;][]{SDO}: the Helioseismic and Magnetic Imager \citep[HMI;][]{HMI} provides the visible continuum maps and the Atmospheric Imaging Assembly \citep[AIA;][]{AIA} provides UV/EUV maps. The spatial resolution of SDO/HMI and SDO/AIA is lower than that of the NST, but they provide context information for locating the NST target on the solar disk and for comparing flare emissions in different wavelengths. In the past several decades, taking advantage of rapid developments of focal plane instrumentation, substantial flare observations have been carried out by 2-D imaging to provide spatially resolved information. However, there was no high-resolution imaging spectroscopy for solar flares in the visible or UV wavelengths from space \citep{Fletcher2011}. The newly launched NASA mission, IRIS \citep{IRIS}, fills this gap by providing imaging spectroscopy in both near ultraviolet (NUV) and far ultraviolet (FUV) bands. The NUV window of IRIS has a main passband at 2800~\AA, and the FUV window covers the spectra near 1400~\AA. The IRIS raster size is chosen depending on the program. The spectrograph may scan a FOV of 130\arcsec\ by 175\arcsec\ using a slit with dimension of 0.\arcsec33 by 175\arcsec\ for slow observations \citep{IRIS} and may speed up with 1-16 step rasters for flares or faster programs. Pre-flare spectra served as a reference and were used to study pre-flare conditions. 
The slit position is visible in the Slit-Jaw images as a dark vertical line. \section{Data Analysis and Result} It is notable that previous \ion{He}{1} D3 observations of solar flares are rare and the data were recorded on film \citep{Liu2013, Zirin1980}. To our knowledge, no negative flare has ever before been clearly recorded with an imaging system in 10830~\AA, probably due to the low sensitivity of detectors and coarse observing resolutions (spatial and temporal). Using the unprecedented 100-km resolution in \ion{He}{1} 10830~\AA, we report the first imaging observation of negative flares. As listed in Table~\ref{nf_list}, two events, on 2013 August 17 (M1.4) and 2014 August 1 (M1.5), were observed and are presented in this study. \begin{table}[pht] \caption{Negative flares in \ion{He}{1} 10830~\AA, observed by NST. \label{nf_list}} \begin{tabular}{lccccr} \tableline\tableline Date & GOES SXR & NST 10830~\AA & FISS/NST H$\alpha$ & IRIS & RHESSI \\ & magnitude & & & & \\ \tableline 2013 Aug. 17 & M1.4 & Yes & Yes & No & No \\ 2014 Aug. 01 & M1.5 & Yes & No & Yes & No \\ \tableline \end{tabular} \end{table} \subsection{Morphological Analysis of the event on 2013 August 17} On 2013 August 17, we observed a solar flare of GOES soft X-ray class M1.4 at about 18:43 UT in active region NOAA 11818. A filament eruption, or possibly the eruption of a magnetic flux rope, was associated with this flare and led to a halo coronal mass ejection (CME). However, the filament disappeared early in the flare period and therefore has no effect on the \ion{He}{1} 10830 dark flare ribbon found much later at the flare footpoints. Figure~\ref{fulldisk} presents the HMI and NST images taken around 19:12 UT showing a negative flare ribbon in \ion{He}{1} 10830~\AA, indicated by the black arrow. In the left panel, a full-disk HMI continuum map is displayed, on which a small area is highlighted by a red box corresponding to the NST's FOV. The right panel shows the NST \ion{He}{1} 10830~\AA\ map.
This flare has a typical two-ribbon morphology. One ribbon is located inside the sunspot penumbra-umbra region with a stronger magnetic field and the other is located in the surrounding granular area with a weaker magnetic field. We focus on the ribbon outside the sunspot, as it is the one most clearly associated with negative contrasts. It is notable that all the NST images are speckle reconstructed to achieve diffraction-limited resolution. The speckle reconstruction could in principle add artificial features to a region with a strong gradient in brightness. In order to confirm that the negative flare signature is real, we verified that the dark ribbon is also clearly identified in the raw data prior to speckle reconstruction. Therefore, we are confident that the observed flare ribbon with a negative contrast is not an artifact introduced by the speckle reconstruction. One example of a \ion{He}{1} 10830~\AA\ image, taken at 18:56:14 UT, is shown in Figure~\ref{spacetime}a. The bright flare source is clearly seen. Owing to the NST's unprecedentedly high resolution, a dark edge is clearly visible on the upper boundary of this flare ribbon. Note that the ribbon is moving upward in the FOV, and thus the dark edge is actually the moving front that represents the footpoints of newly reconnected flare loops. In other words, the ribbon with a negative contrast corresponds to the latest electron beams penetrating from the solar corona, where the magnetic reconnection occurred. The bright part of the ribbon behind the dark frontier is the signal of the flare heating immediately following the enhanced line absorption. Such a leading front due to electron penetration has been reported in H$\alpha$ observations as narrow bright features with red-shifted spectra, for instance by \citet{Svestka1980} using the multi-slit spectrograph developed at the Lockheed Solar Observatory \citep{Martin1974}.
During our observing run, using the imaging spectroscopy of FISS, we were able to identify the red shift in the H$\alpha$ spectra at the same location where the enhanced 10830 absorption occurs. For illustration, as shown in Figure~\ref{ha}, a cross is placed at a representative location of the ribbon front, which shows emission in H$\alpha$ and EUV 304~\AA, but absorption in \ion{He}{1} 10830~\AA. In the upper right panel, the H$\alpha$ spectra at the cross and the light curves of \ion{He}{1} 10830~\AA\ and EUV 304~\AA\ are plotted. At an early time, 19:04:00 UT, the flare emission had not yet propagated to the cross and a typical broad H$\alpha$ absorption line is seen (black dotted curve in the lower right panel). Later on, at 19:06:28 UT, the ribbon front had arrived at the cross location, indicating the start of precipitation characterized by a strong red-wing enhancement in the H$\alpha$ line profile (blue curve in the lower right panel). After 19:07:28 UT, the emission dominates in all wavelengths and we see an emission profile in H$\alpha$ with a typical central reversal, resulting from self-absorption \citep{Ding1997a}. This strong red-wing emission in H$\alpha$, appearing in the early phase of the flare, indicates the existence of large-scale mass motions caused by impulsive heating, consistent with the most recent radiative hydrodynamic simulation of an electron-beam-heated atmosphere \citep{dacosta2015}. Therefore, a close relationship is established between the negative flare ribbon and the precipitation of energetic particles. The above description of the event agrees with the scenario predicted by \citet{Ding2005}, in which the electron beams bombarding the cooler plasma over-populate the $2s$~$^{3}S$ state via collisional ionization-recombination, resulting in the absorption in \ion{He}{1} 10830.
Later on, when this layer of the solar atmosphere is efficiently heated to a higher temperature, the $2p$~$^{3}P$ state is also sufficiently populated and emission is produced by both photoionization-recombination and collisional ionization-recombination, due to enhanced EUV irradiation and nonthermal electrons, respectively. The transition from absorption to emission is likely a result of the line source function varying from a lower value (in a cooler atmosphere) to a higher one (in a hotter atmosphere). We select a slit cutting through the dark ribbon (see Figure~\ref{spacetime}a) and plot its time-space diagram in Figure~\ref{spacetime}b. In this way, the motion of the dark ribbon and the expansion of the following bright flare ribbon are clearly identified. By measuring the distance traveled within the time period from 18:56:14 UT to 19:14:44 UT, the average speed of the ribbon motion is found to be 3.7 km s$^{-1}$, which is significantly slower than in some strong flares \citep[e.g.,][]{Xu2006} but within the typical range for moderate flares. In the bottom panels of Figure~\ref{spacetime}, we see an emission area in UV 1700~\AA\ that is almost identical to that in \ion{He}{1} 10830~\AA\ but obviously without the dark fringe. The temporal variation of intensity is illustrated in Figure~\ref{lc}. In the left panel, a small slit on the path of the dark ribbon is chosen, and a square region is selected as a reference area. The light curves of the averaged intensity in the slit and in the reference box are plotted in the right panel in red and green, respectively. As expected, when the ribbon sweeps through the slit, we see an obvious drop of the intensity compared to the initial stage. Afterwards, the slit region is dominated by the subsequent flare heating and thus a steady increase of the 10830~\AA\ intensity is observed. The error bars represent the 1$\sigma$ deviation of the measurements of the contrasts in both the reference area and the slit.
Therefore, we are confident that the intensity variation, especially the decrease, is due to neither the variation of the background (green curve) nor the seeing fluctuation. The brightness of the slit area, relative to the reference area, drops from about 5.7\% to $-8.0\%$. This translates to a negative contrast of $-13.7\%$ relative to the pre-flare condition. The slit light curve also provides the time scale of the dark flare ribbon: a Gaussian fit of the dimming period gives a full-width-at-half-maximum (FWHM) of about 91 s. Another important parameter is the characteristic size (width of the ribbon), which is crucial for the estimation of the electron flux used in numerical simulations of flare absorption and emission. We measure the size following the method of \citet{Xu2012a}. As shown in Figure~\ref{width}, four slits are selected at different positions and different times across the dark flare ribbon. In the bottom panels, the spatial intensity profile of each slit is fitted with a Gaussian function plus a constant at the pre-flare level. The FWHM measurements give a width of the dark ribbon in the range from 340 km to 510 km. A wider range of widths is possible by selecting more slit positions; however, the four measurements shown were made under the best seeing conditions and are therefore the most representative. Note that the dark flare ribbon is fully resolved during the entire period of observation, and thus the width is at least 120 km (two pixels). \subsection{IRIS imaging spectroscopy of the event on 2014 August 1} In the second event, on 2014 August 1, the Interface Region Imaging Spectrograph (IRIS) \citep{IRIS} observed the same region as the NST. The high-resolution UV spectra provide supportive information with respect to the negative flare ribbon observed by the NST. While this event is not as striking as the earlier one, the main features of the negative-contrast front are the same in the two events.
At about 18:15 UT, the IRIS spectral slit was placed at a position that cuts through the flare ribbon perpendicularly, which is ideal for studying the different components of the flare ribbon (see Figure~\ref{2ndsji}). In Figure~\ref{2ndspe}, the \ion{Mg}{2} and \ion{C}{2} spectra at this slit position are plotted. Recall that the negative flare edge shows negative contrast in \ion{He}{1} 10830~\AA\ while the following part is in emission. Therefore, we anticipate seeing different spectral characteristics at the front edge of the ribbon compared to the areas behind it. Two line groups, the \ion{Mg}{2} lines at 2796.33~\AA\ and 2803.51~\AA\ and the \ion{C}{2} lines at 1334.53~\AA\ and 1335.71~\AA, are studied. The \ion{Mg}{2} and \ion{C}{2} line profiles in representative pixels of the front (corresponding to the dark edge in \ion{He}{1} 10830~\AA) and the middle (emission in all UV lines and in \ion{He}{1} 10830~\AA) of the northern flare ribbon are plotted in Figure~\ref{spemg} and Figure~\ref{spec}. As expected from the negative contrast observed in \ion{He}{1} 10830~\AA, the front edge (blue) of the northern ribbon shows significantly abnormal features compared to the rest of the ribbon body (orange). Several important characteristics are inferred: 1) In general, the spectra of the ordinary flare ribbon behind the front are brighter than those on the negative front. 2) Strong, mainly red-shifted emission and line broadening (with a FWHM near or greater than 0.8~\AA) are found in the negative front relative to the ribbon behind it. 3) Both \ion{Mg}{2} and \ion{C}{2} show a central reversal pattern in the ribbon front. This is a common feature of quiet-Sun spectra \citep{Leenaarts2013} but unusual for flare regions \citep{Kerr2015}. Furthermore, we classified the \ion{Mg}{2} profile shapes into groups of self-similar profiles using a machine learning algorithm \citep{MacQueen1967}.
A total of 30 groups are classified according to their characteristics, for instance the line broadening and the central reversal. The northern ribbon, with a dark front observed in \ion{He}{1} 10830~\AA, appeared with a special type of \ion{Mg}{2} profile, including the broadest profiles of the whole FOV. A selection of these profiles is shown in Figure~\ref{ml}. The two images on the left show the IRIS raster in two different wavelengths, at 2792.37~\AA\ and 2796.34~\AA. Overplotted are the color-coded locations of special \ion{Mg}{2} profiles, whose shapes are shown in the panels on the right. The average profile of each colored group is shown in black and is used to determine the FWHM. The widest profiles of the FOV, with a FWHM of $\sim$1~\AA, are shown in green, and from the magnified inset it can be seen that these profiles lie just north of the bright flare ribbon, corresponding to the dark ribbon observed in \ion{He}{1} 10830~\AA. The southern ribbon did not show any especially broad \ion{Mg}{2} profiles, and it also did not show any dark ribbon front in \ion{He}{1} 10830~\AA. \section{Discussion} The presence of the negative flare ribbons observed in \ion{He}{1} 10830~\AA\ provides important and unique constraints for modeling solar flares. The negative contrasts are detected on the advancing edge of the flare ribbon, representing the footpoints of newly reconnected flare loops according to the `standard' flare model \citep{Cargill1983,Hudson2011}. In other words, the ribbon with a negative contrast corresponds to the latest electron beams penetrating from the solar corona, where the magnetic reconnection occurred. The bright part of the ribbon behind the dark frontier is the signal of the flare heating immediately following the enhanced line absorption.
Such a leading front due to electron penetration has been reported in H$\alpha$ observations as narrow bright features with red-shifted spectra, for instance by \citet{Svestka1980} using the multi-slit spectrograph developed at the Lockheed Solar Observatory \citep{Martin1974}. Spectroscopic observations in H$\alpha$ from NST/FISS and in UV lines from IRIS reveal features on the ribbon front distinct from those behind it, namely a strongly enhanced red wing and line broadening, respectively. In newly formed flare ribbons, the red shift is usually the dominant velocity pattern, which is considered an unambiguous signature of mass motion, possibly due to chromospheric evaporation resulting from the electron precipitation. According to previous modeling, two mechanisms are plausible in explaining the enhanced absorption in \ion{He}{1} 10830~\AA. In the collisional ionization-recombination model, the helium atom is excited through collisional ionization from the ground state by energetic electrons, followed by recombination into the triplet states. Our observation can be explained by this model, since we are able to correlate the electron-precipitation site with the dark edge of the flare ribbon in 10830~\AA. On the other hand, the photoionization-recombination model assumes that the helium atom is excited by the EUV radiation \citep{Ding2005}. Any EUV radiation with a wavelength shorter than the \ion{He}{1} ionization edge at 504~\AA\ can generate excitation of \ion{He}{1} atoms via the photoionization-recombination process. One recent simulation reproduces a much deeper absorption profile in \ion{He}{1} 10830~\AA\ caused by X-ray and EUV radiation \citep{Allred2015}. In such a scenario, the flare ribbon front is heated by the X-ray and EUV radiation and enhanced absorption is seen. As the heating accumulates, emission becomes dominant in the region behind the front. Therefore, either model can explain the mixture of negative and positive contrasts observed in one flare ribbon.
However, as the soft X-ray and EUV radiation is not so strong at the beginning of the flare, and rises less impulsively than the non-thermal electron flux, it is hard to explain the narrowness of the dark ribbon with this mechanism. It also does not explain the associated strong red shifts and broadening. Therefore, we think that the first scenario, collisional ionization-recombination, is more promising in explaining the observed narrow negative flare front. Consequently, observations of darkening in \ion{He}{1} 10830~\AA\ can be used to provide strong constraints in modeling the nonthermal effects of solar flares. In contrast, both the collisional ionization-recombination and the photoionization-recombination mechanisms can contribute to the emission that appears in the following bright ribbon. The decisive physical parameter in determining whether collisional ionization-recombination leads to absorption or emission is the local plasma temperature. During a strong flare, the heating is more efficient and the lower layers are heated to a higher temperature very quickly. Under these conditions, it is less likely that a negative flare in \ion{He}{1} 10830~\AA\ will be observed. However, weak flares, like the low M-class flares studied here, have a more moderate heating pattern. Therefore, we are able to see the dark flare source before the lower atmosphere is heated to a high temperature. Although the exact value of the critical temperature is unknown at present, a hint is provided by the D3 observations, in which the threshold is about 25000 K \citep{Zirin1980}. A more detailed study, especially a correlative numerical simulation, is hampered in these events by the lack of RHESSI hard X-ray coverage, which is needed to provide quantitative measurements of the electron energy distribution. Nevertheless, this is a significant and pioneering observation providing a rare but promising opportunity to understand the non-thermal effects in solar flares.
Furthermore, this unique NST observation provides a solid scientific cornerstone for future observations with the 4-meter Daniel K. Inouye Solar Telescope, which will have even higher resolution. We would like to thank the referee for valuable comments. The data used in this paper were obtained with the NST at Big Bear Solar Observatory, which is operated by the New Jersey Institute of Technology. Obtaining the excellent data would not have been possible without the help of the BBSO team. BBSO operation is supported by NJIT, US NSF AGS-1250818 and NASA NNX13AG14G, and NST operation is partly supported by the Korea Astronomy and Space Science Institute and Seoul National University, and by the strategic priority research program of CAS with Grant No. XDB09000000. IRIS is a NASA Small Explorer mission developed and operated by LMSAL with mission operations executed at NASA Ames Research Center and major contributions to downlink communications funded by the Norwegian Space Center (NSC, Norway) through an ESA PRODEX contract. The supportive white-light data were taken by SDO/HMI. This work is supported by NSF grants AGS-1153424, AGS-1250374, AGS-1408703, AGS-1348513, and NASA grants NNX-11AQ55G, NNX-13AF76G, NNX-13AG13G and NNX-11AO70G to NJIT. W. Cao was supported by NSF AGS-0847126 and NSFC-11428309. H. Ji was supported by CAS Xiandao-B with grant XDB09000000, NSFC-11333009, NSFC-11173062, NSFC-11221063, and the 973 program with grant 2011CB811402. The work of K.-H. Cho and J. Chae was supported by the National Research Foundation of Korea (NRF-2012 R1A2A1A 03670387). \bibliographystyle{apj}
\section{I Introduction} The World Wide Web (WWW) continues its striking expansion, going beyond $10^{11}$ web pages. Information retrieval from such an enormous database becomes the main challenge for WWW users. An efficient solution, known as the PageRank Algorithm (PRA), proposed by Brin and Page in 1998 \cite{brin}, forms the basis of the Google search engine used by the majority of Internet users in everyday life. The PRA is based on the construction of the Google matrix, which can be written as (see e.g. \cite{googlebook} for details): \begin{equation} {\bf G}=\alpha {\bf S}+(1-\alpha) {\bf E}/N \; . \label{eq1} \end{equation} Here the matrix ${\bf S}$ is constructed from the adjacency matrix ${\bf A}$ of directed network links between $N$ nodes so that $S_{ij}=A_{ij}/\sum_k A_{kj}$, and the elements of columns containing only zeros (dangling nodes) are replaced by $1/N$. The second term on the r.h.s. of (\ref{eq1}) describes a finite probability $1-\alpha$ for a WWW surfer to jump at random to any node, with the matrix elements $E_{ij}=1$. This term allows one to stabilize the convergence of the PRA by introducing a gap between the maximal eigenvalue $\lambda=1$ and the other eigenvalues $\lambda_i$. Usually the Google search uses the value $\alpha=0.85$ \cite{googlebook}. By construction $\sum_i G_{ij}=1$, so that the asymmetric matrix ${\bf G}$ has a homogeneous constant left eigenvector at $\lambda=1$. The right eigenvector at $\lambda=1$ is the PageRank vector, with positive elements $p_j$ and $\sum_j p_j=1$. All WWW nodes can be ordered by decreasing $p_j$, so that the PageRank plays a primary role in the ordering of websites and information retrieval. This classification of nodes in decreasing order of $p_j$ is used by the Google search to rank the importance of web nodes, and we also use it in the following.
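As a minimal illustration of the construction (\ref{eq1}) and of the extraction of the PageRank by repeated application of ${\bf G}$, consider the following sketch. The toy 4-node network here is arbitrary, chosen only for illustration; it is not the computation used in this work:

```python
# Minimal sketch (not the code used in this work): build the Google matrix of
# Eq. (1) for an arbitrary toy 4-node directed network and find the PageRank
# by power iteration.

def google_matrix(A, alpha=0.85):
    """G = alpha*S + (1-alpha)*E/N; dangling columns of S are set to 1/N."""
    N = len(A)
    G = [[0.0] * N for _ in range(N)]
    for j in range(N):                  # column j holds the links leaving node j
        col_sum = sum(A[i][j] for i in range(N))
        for i in range(N):
            S_ij = A[i][j] / col_sum if col_sum else 1.0 / N
            G[i][j] = alpha * S_ij + (1.0 - alpha) / N
    return G

def pagerank(G, iters=200):
    """Power iteration: repeated application of G to a uniform start vector."""
    N = len(G)
    p = [1.0 / N] * N
    for _ in range(iters):
        p = [sum(G[i][j] * p[j] for j in range(N)) for i in range(N)]
    return p

A = [[0, 1, 1, 0],   # A[i][j] = 1 if node j links to node i
     [1, 0, 0, 0],
     [1, 1, 0, 0],
     [0, 0, 0, 0]]   # column 3 is empty: node 3 has no outgoing links
G = google_matrix(A, alpha=0.85)
p = pagerank(G)      # right eigenvector of G at lambda = 1
```

Since each column of ${\bf G}$ sums to unity, the iteration conserves $\sum_j p_j=1$, and the gap introduced by $\alpha$ sets the convergence rate.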
It is interesting and important to note that by construction the operator ${\bf G}$ belongs to the class of Perron-Frobenius operators \cite{googlebook}. Such operators naturally appear in ergodic theory \cite{sinai} and in the description of dynamical systems with Hamiltonian or dissipative dynamics \cite{mbrin,osipenko}. Studies of the properties of ${\bf G}$ are usually done only for the PageRank vector, which can be found efficiently by the PRA due to the relatively small average number of links in the WWW. At present Google succeeds in operating with PageRank vectors of the size of the whole WWW, of the order of $10^{11}$. It is established that for large WWW subsets $p_j$ is satisfactorily described by a scale-free algebraic decay $p_j \sim 1/j^{\beta}$, where $j$ is the PageRank ordering index and $\beta \approx 0.9$ \cite{googlebook,donato}. The studies of PageRank properties are now very active in the computer science community, being presented in a number of interesting publications (see e.g. \cite{boldi,avrach1,avrach2} and an overview of the field in \cite{avrach3}). While the properties of the PageRank are of primary importance, it is also interesting to analyze the properties of the Google matrix ${\bf G}$ as a whole large matrix. Such an analysis can help to establish links between the Google matrix and other fields of physics where large matrices play an important role. Among such fields we can mention random matrix theory \cite{mehta}, which finds applications in the description of spectra of complex many-body quantum systems, and Anderson localization, which is an important physical phenomenon for electron transport in disordered systems (see e.g. \cite{anderson}). A transition from localized to delocalized eigenstates can also take place in networks of small-world type (see \cite{giraud2005,berkovits}).
However, in the physical systems considered in \cite{mehta,anderson,giraud2005,berkovits} all matrices are Hermitian with real eigenvalues, while Perron-Frobenius matrices generally have complex eigenvalues. A first attempt to analyze the properties of the right eigenvectors $\psi_i$ (${\bf G} \psi_i=\lambda_i \psi_i$) and the complex eigenvalues $\lambda_i$ was made recently in \cite{ggs}. There the Google matrix was constructed from a directed network generated by the Albert-Barabasi model and from WWW University networks with randomization of links, and was considered mainly for the value $\alpha=0.85$. It was shown that under certain conditions a delocalization phase emerges for the PageRank and for states with complex $\lambda$. In spite of a number of interesting results found in \cite{ggs}, a weak feature of the models used there is a significant gap between the $\lambda=1$ of the PageRank vector and the $|\lambda_i| \leq 0.4$ of the other vectors. We note that according to \cite{ggs} the University networks have $|\lambda_i|$ close to $1$, but after randomization of links a large gap emerges in the spectrum of $\lambda$. This gap in $|\lambda|$ was rather large and was not sensitive to a variation of $\alpha$ in the interval $0.85 \leq \alpha \leq 1$. Hence, the PageRank vector was also not very sensitive to $\alpha$, while for the real WWW it is known that $p_j$ is rather sensitive to $\alpha$ due to the existence of $|\lambda_i|$ close to $1$ \cite{googlebook,ggs}. Thus the results obtained in \cite{ggs} show that even if the Google matrix is constructed on the basis of typical models of scale-free networks, it is quite possible that its spectrum has a large gap for $0.85 \leq \alpha \leq 1$, being thus rather far from the spectral properties of the Google matrices of the WWW. Therefore it is rather desirable to have other simple models which generate a directed network with Google matrix properties close to those of the WWW.
\begin{figure} \centerline{\epsfxsize=2.7cm\epsffile{fig1a.eps} \hfill\epsfxsize=2.7cm\epsffile{fig1b.eps} \hfill\epsfxsize=2.7cm\epsffile{fig1c.eps}} \vglue 0.2cm \centerline{\epsfxsize=2.7cm\epsffile{fig1d.eps} \hfill\epsfxsize=2.7cm\epsffile{fig1e.eps} \hfill\epsfxsize=2.7cm\epsffile{fig1f.eps}} \vglue -0.2cm \caption{(Color online) PageRank $p_j$ for the Google matrix generated by the Chirikov typical map (\ref{eq2}) at $T=10$, $k=0.22$, $\eta=0.99$ (set $T10$, top row) and $T=20$, $k=0.3$, $\eta=0.97$ (set $T20$, bottom row) with $\alpha=1, 0.95, 0.85$ (left to right). The phase space region $0 \leq x < 2\pi; -\pi \leq y < \pi $ is divided into $N=3.6 \cdot 10^5$ cells; $p_j$ is zero for blue and maximal for red. } \label{fig1} \end{figure} With the aim of having more realistic models, we develop in this work another approach and construct the Google matrix from the Perron-Frobenius operator generated by a certain dynamical system. The probability flow in these models has rich and nontrivial features of general importance, like simple and strange attractors with localized and delocalized dynamics governed by simple dynamical rules. Such objects are generic for nonlinear dissipative dynamics and hence can have relevance for the actual WWW structure. Thus these objects can find some reflection in the PageRank properties. The dynamical system is described by the Chirikov typical map \cite{chirikov} with dissipation; the properties of this simple model have been analyzed in detail in a recent work \cite{frahm}. We find that the Google matrix generated by this dynamical model has many $\lambda_i$ close to $1$ and the PageRank becomes sensitive to $\alpha$ (see Fig.~\ref{fig1}). This model also captures other specific properties of WWW Google matrices. To construct a network of nodes from a continuous two-dimensional phase space we divide the space of dynamical variables $(x,y)$ into $N=N_x \times N_y$ cells (we use $N_x=N_y$).
Then $N_c$ trajectories are propagated from a cell $j$ over the whole period of the dynamical map and the elements $S_{ij}$ are taken to be equal to the relative number $N_i$ of trajectories arriving in a cell $i$ ($S_{ij}=N_i/N_c$ and $\sum_i S_{ij}=1$). Thus ${\bf S}$ gives a coarse-grained approximation of the Perron-Frobenius operator for the dynamical map. The Google matrix ${\bf G}$ of size $N$ is constructed from ${\bf S}$ according to Eq.~(\ref{eq1}). We use sufficiently large values of $N_c$ so that the properties of ${\bf G}$ become insensitive to $N_c$. Such a discrete approximation of the Perron-Frobenius operator is known in dynamical systems as the Ulam method \cite{ulam}. Indeed, Ulam conjectured that such a matrix approximant correctly describes the Perron-Frobenius operator of the continuous phase space. For hyperbolic maps the Ulam conjecture was proven in \cite{li}. Various types of more generic one-dimensional maps have been studied in \cite{tel,kaufmann,froyland2007}. Further mathematical results have been reported in \cite{ding,liverani,froyland2008a,froyland2008b}, with extensions and proofs of convergence for hyperbolic maps in higher dimensions. However, studies of more generic two-dimensional maps remain rather restricted (see e.g. \cite{froyland1998}) and non-systematic. In principle the construction of directed graphs on the basis of dynamical systems is a known mathematical approach (see e.g. \cite{osipenko}), but the spectral properties of the Google matrix built on such graphs have not been studied until now. In this paper we show that the Ulam method applied to two-dimensional dissipative dynamical maps generates a new type of directed networks, which we call Ulam networks. We present here numerical and analytical studies of certain properties of the Google matrix of such networks.
The paper is organized as follows: in Section II we give the description of the Chirikov typical map and the way the Ulam network, with the corresponding Google matrix, is constructed on the basis of this map; the properties of the map and the network are described there. In Section III the properties of the eigenvalues and eigenstates of the Google matrix are analyzed in detail, including the delocalization transition for the PageRank, the fractal Weyl law and the global contraction properties. A summary of the results is presented in Section IV. \section{II Ulam networks of dynamical maps} \subsection{Chirikov typical map} To construct an Ulam network and the Google matrix generated by it, we use a dynamical two-dimensional dissipative map. The dynamical system is described by the Chirikov typical map, introduced in 1969 for a description of continuous chaotic flows \cite{chirikov}: \begin{equation} y_{t+1} =\eta y_{t}+k \sin (x_t+\theta_t) \;, \;\; x_{t+1} = x_t+y_{t+1} \; . \label{eq2} \end{equation} Here the dynamical variables $x,y$ are taken at integer moments of time $t$. Also, $x$ has the meaning of a phase variable and $y$ is a conjugate momentum or action. The phases $\theta_t=\theta_{t+T}$ are $T$ random phases repeated periodically in time $t$. We stress that their $T$ values are chosen and fixed once; they are not changed during the dynamical evolution of $x,y$. We consider the map in the region of Fig.~1 ($0 \leq x < 2\pi, -\pi \leq y <\pi$) with $2\pi$-periodic boundary conditions. The parameter $0< \eta \leq 1$ gives the global dissipation. The properties of the symplectic map at $\eta=1$ have recently been studied in detail \cite{frahm}. The dynamics is globally chaotic for $k > k_c \approx 2.5/T^{3/2}$ and the Kolmogorov-Sinai entropy is $h \approx 0.29 k^{2/3}$ (more details about chaotic dynamics and the Kolmogorov-Sinai entropy can be found in \cite{sinai,mbrin,chirikov79,ott}).
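One period of the map (\ref{eq2}) can be sketched as follows. The $T=10$ phases used here are an arbitrary random realization chosen for illustration, not the values listed in the Appendix:

```python
# Illustrative sketch of one period of the dissipative Chirikov typical map,
# Eq. (2).  The phases theta_t below are our own random draw (T = 10), not
# the Appendix values used in the paper.
import math
import random

def typical_map_period(x, y, thetas, k, eta):
    """Advance (x, y) over one map period of T iterations of Eq. (2)."""
    for theta in thetas:                 # the T phases are fixed once for all
        y = eta * y + k * math.sin(x + theta)
        x = (x + y) % (2.0 * math.pi)    # 2*pi-periodic boundary in x
    # fold y back into [-pi, pi) to stay in the region of Fig. 1
    y = (y + math.pi) % (2.0 * math.pi) - math.pi
    return x, y

random.seed(1)                           # any fixed realization of the phases
thetas = [random.uniform(0.0, 2.0 * math.pi) for _ in range(10)]  # T = 10
x, y = typical_map_period(1.0, 0.0, thetas, k=0.22, eta=0.99)     # set T10
```

Iterating this function repeatedly reproduces the long-time behavior shown in the bifurcation diagrams below.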
In this study we use two random sets of phases $\theta_t$, with $T=10$ and $T=20$; their values are given in the Appendix. We also fix the dissipation parameter to $\eta=0.99$ for $T=10$ and $\eta=0.97$ for $T=20$. We call these two parameter sets $T10$ and $T20$, respectively. The majority of data are obtained at $k=0.22$ for the set $T10$ and at $k=0.3$ for the set $T20$ (see Fig.~\ref{fig1}); these are the two main working points of this work. For the set $T10$ ($k=0.22$, $\eta=0.99$) we have the theoretical value of the Kolmogorov-Sinai entropy $h=0.29k^{2/3}=0.105$ for the symplectic map at $\eta=1$ \cite{frahm}. The actual value at $\eta=1$, determined numerically by computation of the Lyapunov exponent, is $h=0.0851$. For $\eta=0.99$ we also have the global dissipation rate $\gamma_c=-T\ln \eta=0.1005$ per map period (which is equal to $T$ iterations). The global contraction factor is $\Gamma_c=\eta^T=\exp(-\gamma_c)=0.9043$. For a weak dissipation the fractal dimension $d$ of the limiting set can be approximately estimated in the usual way (see e.g. \cite{ott}) as $d=2-\gamma_c/(Th)=1.882$. \begin{figure} \centerline{\epsfxsize=8.5cm\epsffile{fig2.eps}} \vglue -0.2cm \caption{Bifurcation diagram showing values of $y$ vs. the map parameter $k$ for the set $T10$ of the Chirikov typical map (\ref{eq2}). The values of $y$, obtained from 10 trajectories with random initial positions in the phase space region, are shown for integer moments of time $100 < t/T \leq 110$ (left panel) and $10^4 < t/T \leq 10^4+100$ (right panel). } \label{fig2} \end{figure} In a similar way, for the set $T20$ ($k=0.3$, $\eta=0.97$) we have the theoretical value $h=0.29k^{2/3}=0.1299$, while the actual numerical value is $h=0.1081$. Also here $\gamma_c=-T\ln \eta=0.609$, $\Gamma_c=0.5437$, and the estimated fractal dimension of the limiting set is $d=2-\gamma_c/(Th)=1.718$.
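The dissipation quantities quoted above for the two parameter sets can be cross-checked directly from the definitions $\gamma_c=-T\ln\eta$, $\Gamma_c=\eta^T=\exp(-\gamma_c)$ and $d=2-\gamma_c/(Th)$:

```python
# Cross-check (pure arithmetic) of the dissipation quantities quoted for the
# two parameter sets: gamma_c = -T*ln(eta), Gamma_c = eta**T = exp(-gamma_c),
# and the estimated fractal dimension d = 2 - gamma_c / (T*h), using the
# numerically determined entropies h = 0.0851 (T10) and h = 0.1081 (T20).
import math

def dissipation_quantities(T, eta, h):
    gamma_c = -T * math.log(eta)          # global dissipation rate per period
    Gamma_c = eta ** T                    # global contraction factor
    d = 2.0 - gamma_c / (T * h)           # estimated fractal dimension
    return gamma_c, Gamma_c, d

g10, G10, d10 = dissipation_quantities(T=10, eta=0.99, h=0.0851)  # set T10
g20, G20, d20 = dissipation_quantities(T=20, eta=0.97, h=0.1081)  # set T20
```

The results reproduce the quoted values $\gamma_c=0.1005$, $\Gamma_c=0.9043$, $d=1.882$ for $T10$ and $\gamma_c=0.609$, $\Gamma_c=0.5437$, $d=1.718$ for $T20$.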
\begin{figure} \centerline{\epsfxsize=8.5cm\epsffile{fig3.eps}} \vglue -0.2cm \caption{Same as in Fig.~\ref{fig2} for the set $T20$. } \label{fig3} \end{figure} The bifurcation diagrams for the sets $T10$ and $T20$ are shown in Fig.~\ref{fig2} and Fig.~\ref{fig3} respectively. On large time scales we clearly see parameter $k$ regions with simple and chaotic attractors. For shorter time scales the distinction between the two regimes becomes less pronounced. This means that for a long time a trajectory moves between a few simple attractors (which are clearly seen in Fig.~\ref{fig1} in the left column) before final convergence is reached. \subsection{Network construction and distribution of links} The Ulam network for the Chirikov typical map (\ref{eq2}) is constructed in the following way. The whole phase space region $2\pi \times 2\pi$ is divided into $N=N_x \times N_y$ cells ($N_x=N_y$) and $N_c$ trajectories are propagated from each given cell $j$ during the $T$ map iterations which form the period of the map. After that the elements of the matrix $S_{ij}$ are computed as $S_{ij}=N_i/N_c$, where $N_i$ is the number of trajectories that arrive in cell $i$ from cell $j$. In this way we have by definition $\sum_i S_{ij}=1$. Such an ${\bf S}$ gives a coarse-grained approximation of the Perron-Frobenius operator for the map (\ref{eq2}). The Google matrix ${\bf G}$ of size $N$ is constructed from ${\bf S}$ according to Eq.~(\ref{eq1}). To construct $S_{ij}$ we usually use $N_c=10^4$ but the properties of ${\bf S}$ are not affected by a variation of $N_c$ in the interval $10^3 \leq N_c \leq 10^5$. Since the cell size is very small, it is unimportant in what way the $N_c$ trajectories are distributed inside the cell. Up to statistical fluctuations, the values of $S_{ij}$ remain the same for a homogeneous or random distribution of the $N_c$ trajectories inside a cell.
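A compact sketch of this construction (assuming a user-supplied function \texttt{step\_period} that advances trajectories through the $T$ iterations of one map period; the random placement of the $N_c$ trajectories inside each cell is one of the admissible choices mentioned above):

```python
import numpy as np

def ulam_matrix(step_period, Nx, Nc=100, seed=0):
    """Coarse-grained Perron-Frobenius operator S of the map.

    step_period(x, y) must map arrays of phase-space points through the T
    iterations forming one map period.  The region [0, 2*pi) x [-pi, pi)
    is divided into Nx * Nx cells; Nc trajectories are launched from random
    positions inside each cell j, and S[i, j] = N_i / Nc is the fraction
    arriving in cell i, so every column of S sums to one.
    """
    rng = np.random.default_rng(seed)
    N = Nx * Nx
    h = 2 * np.pi / Nx                       # linear cell size
    S = np.zeros((N, N))
    for j in range(N):
        jx, jy = divmod(j, Nx)
        x = (jx + rng.random(Nc)) * h        # random points inside cell j
        y = (jy + rng.random(Nc)) * h - np.pi
        x, y = step_period(x, y)
        ix = np.floor(x / h).astype(int) % Nx
        iy = np.floor((y + np.pi) / h).astype(int) % Nx
        np.add.at(S, (ix * Nx + iy, j), 1.0 / Nc)
    return S
```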
Up to $N=22500$ we used exact diagonalization of ${\bf G}$ to determine all eigenvalues $\lambda_i$ and right eigenvectors $\psi_i$; for larger $N$, up to $N=1.44 \cdot 10^6$, we used the PRA to determine the PageRank vector. The majority of data are presented for the two typical sets $T10, T20$ of parameters of the map (\ref{eq2}), and the PageRanks for various values of $\alpha$ are shown in Fig.~\ref{fig1}. For these sets the dynamics has a few fixed point attractors but it takes a long time $t \sim 10^3$ to reach them. During this time a trajectory visits various regions of phase space. It is important to note that the discreteness of phase space linked to a finite cell size produces an important physical effect which is absent in the original continuous map (\ref{eq2}): effectively it introduces an additional noise whose amplitude $\sigma$ is approximately $\sigma \sim 2\pi/\sqrt{N}$. This becomes especially clear for the symplectic case at $\eta=1$ and at small values of $k$ at $T=1$ (all $\theta_t$ are the same). In this case the map is reduced to the Chirikov standard map \cite{chirikov79} and the continuous map dynamics is bounded by the invariant Kolmogorov-Arnold-Moser (KAM) curves. However, the discreteness of phase space allows trajectories to jump from one cell to another and thus from one curve to another. This leads to a diffusion in the $y$ direction and the appearance of a homogeneous ergodic state at $\lambda=1$. A direct analysis also shows that at any finite cell size the operator ${\bf S}$ has a homogeneous ergodic state with $\lambda=1$; we also checked this via numerical diagonalization for matrix sizes $N \approx 20000$. This example shows that the Ulam conjecture is not valid for quasi-integrable symplectic maps in the KAM regime.
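For large $N$ the PageRank can be obtained by power iteration, as in the PRA mentioned above; a minimal sketch for $\mathbf{G}=\alpha\mathbf{S}+(1-\alpha)\mathbf{E}/N$, assuming the columns of $\mathbf{S}$ already sum to one (no dangling nodes):

```python
import numpy as np

def pagerank(S, alpha=0.85, tol=1e-10):
    """Power iteration for the PageRank of G = alpha*S + (1-alpha)*E/N,
    assuming the columns of S sum to one."""
    N = S.shape[0]
    p = np.full(N, 1.0 / N)
    while True:
        p_new = alpha * (S @ p) + (1 - alpha) / N
        if np.abs(p_new - p).sum() < tol:
            return p_new
        p = p_new
```

Since only matrix-vector products are needed, this scales to the $N \sim 10^6$ sizes used in the text, where exact diagonalization is out of reach.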
\begin{figure} \centerline{\epsfxsize=8.5cm\epsffile{fig4.eps}} \vglue -0.2cm \caption{(Color online) Differential distribution of the number of nodes with {\it ingoing} $P_{in}(\kappa)$ (blue) and {\it outgoing} $P_{out}(\kappa)$ (red) links $\kappa$ for the sets $T10$ (left) and $T20$ (right). The straight dashed lines give the algebraic fit $P(\kappa) \sim \kappa^{-\mu}$ with the exponent $\mu = 1.86, 1.11$ ($T10, T20$) for {\it ingoing} and $\mu = 1.91, 1.46$ ($T10, T20$) for {\it outgoing} links. Here $N=1.44 \cdot 10^6$ and $P(\kappa)$ gives the number of nodes with a given integer number of links $\kappa$ for this matrix size. The blue point at $\kappa=0$ shows that in the whole matrix there is a significant number of nodes with zero {\it ingoing} links. } \label{fig4} \end{figure} \begin{figure} \centerline{\epsfxsize=8.5cm\epsffile{fig5.eps}} \vglue -0.2cm \caption{(Color online) Same as in Fig.~\ref{fig4} for the set $T10$ at $k=0.22$ (left) (same as Fig.~\ref{fig4} left) and $k=0.6$ (right) and $N=3.6 \cdot 10^5$. The fit gives the exponent $\mu=1.87, 1.92$ for {\it ingoing} (blue), {\it outgoing} (red) links at $k=0.22$ (left) and $\mu=1.70, 1.83$ for {\it ingoing} (blue), {\it outgoing} (red) links at $k=0.6$ (right). } \label{fig5} \end{figure} The physical origin of the difference between the continuous map and the finite cell size approximation is the introduction of an effective noise term $\sigma_t$ on the r.h.s. of (\ref{eq2}) induced by a finite cell size. Due to this noise the trajectories diffuse over the whole region $-\pi <y<\pi$ after a diffusive time scale $t_D \sim \pi^2/\sigma^2$ even if the continuous map is in the KAM regime with bounded dynamics in $y$. Hence, here $\sigma \sim 2\pi/\sqrt{N}$ is the effective amplitude of the noise introduced by cell discreteness.
Even if this $\sigma$-noise leads to a drastic change of dynamics in the quasi-integrable regime, its effects are not very important in the case of chaotic dynamics, where the noise gives only a small additional variation as compared to the strong dynamical variations induced by dynamical chaos. With such a physical understanding of discreteness effects we continue to investigate the properties of the Ulam networks. However, we stress that the $\sigma$-noise is local in the phase space and hence it is qualitatively different from the Google term $\alpha$ which generates stochastic jumps over all sites. In Figs.~\ref{fig4},\ref{fig5} we show the distributions of ingoing $P_{in}(\kappa)$ and outgoing $P_{out}(\kappa)$ links $\kappa$ in the Ulam network represented by the ${\bf S}$ matrix generated by the map (\ref{eq2}) as described above. These distributions are satisfactorily described by a scale-free algebraic decay $P \sim 1/\kappa^{\mu}$ with $\mu \approx 1.86, 1.11$ for ingoing and $1.91, 1.46$ for outgoing links at $T10,T20$ respectively, and a typical number of links per node $\kappa \sim 10$ (see Fig.~\ref{fig4} and Fig.~\ref{fig5}). Such values are compatible with the WWW data of scale-free type where $\mu \approx 2.1, 2.7$ for ingoing, outgoing links \cite{googlebook,donato}. However, we also note the appearance of certain deviations at large values of $\kappa$. Indeed, for a dynamical system a large number of links appears due to exponential stretching of one cell after $T$ map iterations, which gives a typical number of links $\kappa \sim \exp(hT)$. It is possible that during the dynamical evolution much larger values of stretching can appear. Indeed, the comparison of the two cases at $k=0.22$ and $k=0.6$ for the set $T10$ in Fig.~\ref{fig5} shows that for larger $k$ the scale-free distribution continues to much larger values of $\kappa > 200$ while for smaller $k$ the scale-free type decay stops around $\kappa \approx 50$.
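The link counts behind $P_{in}(\kappa)$ and $P_{out}(\kappa)$ are obtained by counting nonzero entries of $\mathbf{S}$; a sketch:

```python
import numpy as np

def degree_counts(S):
    """In- and out-degrees of the Ulam network: a directed link j -> i
    exists whenever the matrix element S[i, j] is nonzero."""
    A = S > 0
    k_out = A.sum(axis=0)   # number of links leaving each cell j
    k_in = A.sum(axis=1)    # number of links arriving at each cell i
    return k_in, k_out
```

Histogramming these integer degrees over all nodes then gives the differential distributions plotted in Figs.~4 and 5.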
For the set $T20$ the stretching is stronger and the scale-free decay continues up to larger values of $\kappa$. It is clear that for the Ulam networks discussed here one has a rapid exponential decay of the link distribution at asymptotically large link number $\kappa$. However, due to the exponential growth of the typical $\kappa \sim \exp(hT)$, a scale-free type decay can be realized up to very large $\kappa$ by increasing $T$. In these studies we stay at the chosen working points where a scale-free decay remains dominant for matrix sizes of the order of $N \sim 10^5 - 10^6$. Finally we note that the models of the Google matrix generated by the Ulam networks are most interesting for dissipative maps. Indeed, by construction the left eigenvector of the Google matrix, $\psi_i^+ {\bf G} = \psi_i^+$ at $\lambda=1$, is a homogeneous vector $\psi_i^+=const$. As a result, for symplectic maps the right vector of the PageRank $p_j$ is also homogeneous. Only the dissipation term generates an inhomogeneous decay of $p_j$. \section{III Properties of eigenvalues and eigenstates} \subsection{Delocalization transition for PageRank with $\alpha$} The variation of the PageRank $p_j$ with $\alpha$ is shown in Fig.~\ref{fig1} for the two sets $T10$ and $T20$. The distribution $p_j$ is plotted for each cell of the phase space $(x,y)$; the numbering of cells is done by the integer grid $n_x \times n_y$, which has a certain correspondence with the index $j$ that enumerates the values of $p_j$ in decreasing order with $j$. At $\alpha=1$ the distribution $p_j$ is concentrated only on a few local spots corresponding to fixed point attractors. Physically this happens due to the presence of the $\sigma$ noise, induced by cell discretization, which leads to transitions between various fixed points. With the decrease of $\alpha$ the PageRank starts to spread over a strange attractor set. The properties of strange attractors in dynamical dissipative systems are described in \cite{ott}.
In the map (\ref{eq2}) the strange attractor appears at larger values of $k$ (namely $k > 0.5$ for $T10$, $k>0.34$ for $T20$, see Figs.~\ref{fig2},~\ref{fig3}) but the presence of the effective noise induced by the $\sigma$ and $1-\alpha$ terms leads to an earlier emergence of a strange attractor. Below a certain value $\alpha < \alpha_c$ the PageRank becomes completely delocalized over the strange attractor, as is clearly seen in Fig.~\ref{fig1} for the set $T10$. \begin{figure} \centerline{\epsfxsize=8.5cm\epsffile{fig6.eps}} \vglue -0.2cm \caption{(Color online) PageRank distribution $p_j$ for $N=10^4$, $9\cdot 10^4$, $3.6\cdot 10^5$ and $1.44\cdot 10^6$ shown by red, magenta, green and blue curves; the dashed straight lines show fits $p_j \sim 1/j^\beta$ with $\beta$: $0.48$ (b), $0.88$ (e), $0.60$ (f). Dashed lines in panels (a),(d) show an exponential Boltzmann decay (see text, lines are shifted in $j$ for clarity). Other parameters, including the values of $\alpha$, and panel order are as in Fig.~\ref{fig1}. In panels (a),(d) the curves at large $N$ become superimposed. Here and below logarithms are decimal. } \label{fig6} \end{figure} The dependence of $p_j$ on $j$ is shown in more detail in Fig.~\ref{fig6}. For $\alpha=1$ the PageRank shows a rapid drop with $j$ that can be fitted by an exponential Boltzmann type distribution $p_j \sim \exp(- b \gamma_c j/D_\sigma)$ where $b$ is a numerical constant ($b \approx 1.4; 2.1$ for $T10; T20$), $\gamma_c= -T \ln \eta$ is the global dissipation rate and $D_\sigma = \sigma^2 N \approx (2\pi)^2$ is the diffusion constant of the $\sigma$ noise (dashed lines in Fig.~\ref{fig6}a,d). Such an exponential decay results from the Fokker-Planck description of the map (\ref{eq2}) in the presence of the $\sigma$ noise term which gives diffusive transitions to nearby cells.
For $\alpha <1$ the random surfer transitions introduced by the Google term give a significant modification of the PageRank, which shows an algebraic decay $p_j \sim 1/j^{\beta}$ with the exponent $\beta$ dependent on $\alpha$ (Fig.~\ref{fig6}b,e,f); for the set $T20$ at $\alpha=0.95$ we obtain $\beta \approx 0.88$, close to the numerical value found for the WWW \cite{googlebook}. However, $\beta$ decreases with the decrease of $\alpha$, and for the $T10$ set delocalization takes place at $\alpha=0.85$ so that $p_j$ spreads homogeneously over the strange attractor (see Fig.~\ref{fig1} top right panel and Fig.~\ref{fig6}c). For the $T20$ set $p_j \sim \psi_{i=1}(j)$ remains localized at $\alpha=0.85$ so that the participation ratio (PAR) $\xi=(\sum_j |\psi_i(j)|^2)^2/\sum_j |\psi_i(j)|^4$ for the PageRank remains finite at large $N$. We use this definition of the PAR $\xi$ for all eigenvectors $\psi_i(j)$. \subsection{Properties of other eigenvectors} To understand the origin of the delocalization transition in $\alpha$ we analyze in Fig.~\ref{fig7} the properties of all eigenvalues $\lambda_i$ and eigenvectors $\psi_i$ with their PAR $\xi$. Due to the $\sigma$ noise, activation transitions take place between the attractor fixed points, leading to states with $\lambda_i$ exponentially close to $\lambda=1$ (Fig.~\ref{fig7}a). The convergence to $|\lambda|=1$ is exponential in $N$ for certain states and may lead to numerical problems at very large $N$. However, the standard numerical diagonalization methods remained stable for the values of $N$ used in our studies. The distribution of $\lambda_i$ in the complex plane is shown in Fig.~\ref{fig7}c,d: there are $\lambda_i$ approaching $\lambda=1$ mainly along the real axis, but a majority of $\lambda_i$ are distributed inside a circle of finite radius around $\lambda=0$; this radius decreases with the increase of the global dissipation from $\gamma_c=0.10$ for the set $T10$ to $\gamma_c=0.61$ for $T20$.
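The PAR defined above can be computed directly; for a vector spread uniformly over $n$ nodes it returns $\xi=n$, while for a vector concentrated on a single node it returns $\xi=1$:

```python
import numpy as np

def par(psi):
    """Participation ratio xi = (sum_j |psi_j|^2)^2 / sum_j |psi_j|^4,
    roughly the number of nodes on which the vector is concentrated."""
    p2 = np.abs(psi) ** 2
    return p2.sum() ** 2 / (p2 ** 2).sum()
```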
The PAR values for states inside the circle have typical values $4 \leq \xi \leq 300$, shown by grayness. The dependence of $\xi$ on $\gamma=-2 \ln |\lambda|$ and $N$ shows that the eigenstates inside the circle remain localized at large $N$ (Fig.~\ref{fig7}b). We attribute this to the fact that at large $N$ the diffusion due to the $\sigma$ noise in the presence of dissipation leads to spreading only over a finite number of cells and thus $\xi$ remains bounded. This $\xi(\gamma,N)$ dependence is different from the one obtained in \cite{ggs} for the Albert-Barabasi model; the comparison with data from WWW university networks is less conclusive due to strong fluctuations from one network to another (see Fig.~4 in \cite{ggs}): an average growth of $\xi$ is visible there even if at $N \sim 10^4$ the values of $\xi$ are comparable with those of Fig.~\ref{fig7}b. Globally, our data of Fig.~\ref{fig7} show that the diffusive modes at $|\lambda_i| < 1$ remain localized on a number of nodes $\xi \ll N$. \begin{figure} \centerline{\epsfxsize=8.5cm\epsffile{fig7.eps}} \vglue -0.2cm \caption{(Color online) (a) Dependence of the gap $1-|\lambda|$ on the Google matrix size $N$ for the few eigenstates with $|\lambda|$ closest to $1$, set $T10$, $\alpha=1$; (b) dependence of the PAR $\xi$ on $\gamma=-2 \ln |\lambda|$ for $N=2500$, $5625$, $8100$, $10^4$, $14400$ for the set $T10$, $\alpha=1$ (curves from top to bottom: red, magenta, green, blue, black); (c) complex plane of eigenvalues $\lambda$ for the set $T10$ with their PAR $\xi$ values shown by grayness (black/blue for minimal $\xi \approx 4$, gray/light magenta for maximal $\xi \approx 300$; here $\alpha=1$, $N=1.44 \cdot 10^4$); (d) same as (c) but for the set $T20$. } \label{fig7} \end{figure} We also stress an important property of the eigenvalues and eigenvectors with $0<|\lambda_i| < 1$.
In agreement with the known theorems \cite{googlebook}, our numerical data show that for the states with $0<|\lambda_i| < 1$ the $\xi_i$ are independent of $\alpha$ (the $\lambda_i$ are simply rescaled by the factor $\alpha$ according to \cite{googlebook}). This happens due to a specific property of the $(1-\alpha){\bf E}/N$ term in ${\bf G}$, which is constructed from a homogeneous vector and has rank equal to unity. The right eigenvectors are orthogonal to the homogeneous left vector and hence the $(1-\alpha)$ term affects only the PageRank but not the other eigenvectors. \subsection{Fractal Weyl law for Google matrix} Another interesting characteristic of ${\bf G}$ is the density distribution $d W(\gamma)/d \gamma$ over $\gamma$. The data presented in Fig.~\ref{fig8} show that its form becomes size independent in the limit of large $N$. At small $\gamma < 3$ the density decreases approximately linearly with $\gamma$ without any large gap. We find it rather interesting that the total number of states $N_\gamma$ with finite $\gamma < \gamma_b \approx 5$ grows algebraically as $N_\gamma = A N^{\nu}$ with $\nu < 1$ (Fig.~\ref{fig8} inset). We interpret this result on the basis of the fractal Weyl law established recently for non-unitary matrices with fractal eigenstates (see e.g. \cite{zworski,dlsweyl} and Refs. therein). According to this law the exponent is $\nu=d-1$ where $d$ is the fractal dimension of the system. Approximately we have $d-1 \approx 1 - \gamma_c/(T h)$ \cite{ott,dlsweyl}, which gives $\nu=0.88, 0.72$ for the sets $T10, T20$ with the numerical values of $\gamma_c$, $h$ given above. These values are in good agreement with the fit data $\nu=0.85, 0.61$ of the Fig.~\ref{fig8} inset. The fact that $\nu <1$ implies that almost all states have $\lambda=0$ in the limit of large $N$ (in this work we do not discuss the properties of these degenerate states with large $\xi \sim N$).
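The quoted theoretical exponents follow from $\nu \approx 1-\gamma_c/(Th)$ with the numerical entropies given in Section II; a quick check:

```python
import math

# nu = d - 1 ~ 1 - gamma_c/(T h), with the measured entropies h from Sec. II
for name, T, eta, h in [("T10", 10, 0.99, 0.0851), ("T20", 20, 0.97, 0.1081)]:
    gamma_c = -T * math.log(eta)
    nu = 1 - gamma_c / (T * h)
    print(name, round(nu, 2))  # T10 0.88, T20 0.72
```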
\begin{figure} \centerline{\epsfxsize=8.5cm\epsffile{fig8.eps}} \vglue -0.2cm \caption{Probability distribution $d W(\gamma)/d\gamma$ for the set $T10$, $\alpha=1$ at $N=2.5 \cdot 10^3 (\times)$, $10^4 (+)$, $1.44 \cdot 10^4$ (dots); $W(\gamma)$ is normalized by the number of states $N_\gamma=0.55 N^{0.85}$ with $\gamma <6$. Inset: dependence of the number of states $N_\gamma$ with $\gamma<\gamma_b$ on $N$ for the sets $T10$ (circles, $\gamma_b=6$) and $T20$ (triangles, $\gamma_b=3$); dashed lines show the fit $N_\gamma=A N^{\nu}$ with $A=0.55, \nu= 0.85$ and $A=0.97, \nu =0.61$ respectively. } \label{fig8} \end{figure} It is interesting to note that the fractal Weyl law is usually discussed for open quantum chaos systems (see \cite{zworski,dlsweyl} and Refs. therein), where the matrix size is inversely proportional to an effective Planck constant, $N \propto 1/\hbar$. For the Ulam networks generated by dynamical attractors, the cell size in phase space plays the role of an effective $\hbar$. This opens interesting parallels between quantum chaotic scattering and the discrete matrix representation of the Perron-Frobenius operators of dynamical systems. \subsection{PageRank delocalization again} The dependence of the PAR $\xi$ of the PageRank on $\alpha$ and $N$ is shown in Fig.~\ref{fig9}. It permits us to determine the critical value $\alpha_c$ below which the PageRank becomes delocalized, with $\xi$ growing with $N$. According to this definition we have $\xi$ independent of $N$ at large $N$ for $\alpha > \alpha_c$, while for $\alpha < \alpha_c$ the PAR $\xi$ grows with $N$. The obtained data give $\alpha_c \approx 0.95$, $0.8$ for $T10, T20$. Further investigations are needed to understand the dependence of $\alpha_c$ on system parameters. Here we make the conjecture that $1-\alpha_c \approx C \gamma_c \ll 1$ with a numerical constant $C \approx 0.3$.
Indeed, for a larger dissipation rate $\gamma_c = - T \ln \eta$ the radius of the circle with a large density of $\lambda_i$ in the complex $\lambda$ plane becomes smaller (see Fig.~\ref{fig7}c,d) and thus larger values of $1-\alpha$ are required to have a significant contribution of these excited relaxation modes to the PageRank. Also, the data of \cite{dlsweyl} for systems with absorption rate $\gamma_c$ show a low density of states at $\gamma < \gamma_c$, so that it is natural to expect that one should have $1-\alpha_c \sim \gamma_c$ to get a significant contribution of delocalized relaxation modes from a strange attractor set to the PageRank. It is quite probable that $C$ depends in addition on system parameters. Indeed, even at fixed $\gamma_c$ and at $\alpha=0.99$, rather close to $1$, it is possible to have a transition from a localized to a delocalized PageRank by increasing $k$ in the map (\ref{eq2}) (see Fig.~\ref{fig9} inset and Fig.~\ref{fig10}). This transition in $k$ takes place approximately at $k \approx 0.55$ when the fixed point attractors merge into a strange attractor (see the bifurcation diagram in Fig.~\ref{fig2}). A peak in $\xi$ around $k \approx 0.38$ is related to the birth and disappearance of a strange attractor in a narrow interval of $k$ around this value. At the same time, an increase of $k$ from $0.22$ to $0.6$ practically does not affect the link distributions $P(\kappa)$, changing the value of $\mu$ only by 10\% (see Fig.~\ref{fig5}). This shows that the correlations inside the directed network generated by the map (\ref{eq2}) play a very important role. \begin{figure} \centerline{\epsfxsize=8.5cm\epsffile{fig9.eps}} \vglue -0.2cm \caption{(Color online) Dependence of the PageRank PAR $\xi$ on $\alpha$ for the set $T10$ at $N=5625$ (dotted magenta), $1.44 \cdot 10^4$ (dotted red), $9 \cdot 10^4$ (dashed red), $6.4 \cdot 10^5$ (full red) and for $T20$ at $N=1.44 \cdot 10^4$ (dotted blue), $9 \cdot 10^4$ (dashed blue), $6.4 \cdot 10^5$ (full blue).
Inset shows the dependence of $\xi$ on $k$ for the set $T10$ at $\alpha=0.99$ with $N=1.44 \cdot 10^4$ (dotted red), $9 \cdot 10^4$ (dashed red), $3.6 \cdot 10^5$ (full red). } \label{fig9} \end{figure} \begin{figure} \centerline{\epsfxsize=4.2cm\epsffile{fig10a.eps} \hfill\epsfxsize=4.2cm\epsffile{fig10b.eps}} \vglue -0.2cm \caption{(Color online) Same as Fig.~\ref{fig1} for the set $T10$ at $\alpha=0.99$, $N=3.6 \cdot 10^5$ at $k=0.22$ (left) and $k=0.6$ (right); the PAR $\xi$ values are the same as in the inset of Fig.~\ref{fig9}. } \label{fig10} \end{figure} \subsection{Global contraction} As discussed above, a nontrivial decay of the PageRank $p_j$ in our Ulam network appears due to the dissipative nature of the map (\ref{eq2}). Indeed, since $\eta <1$ there is a global contraction of the phase space area by a factor $\Gamma_c=\eta^T$ after $T$ iterations of the map (after its period). Such a property is very natural for the continuous map, but it is more difficult to see its signature in the matrix form of the Perron-Frobenius operator after the introduction of the discreteness of the phase space. Nevertheless this contraction can be extracted from the matrix ${\bf G}$ taken at $\alpha=1$. To extract it we apply ${\bf G}$ with $\alpha=1$ to a homogeneous vector $p^{(h)}_j=1/N$, getting the new vector ${\bar p^{(h)}} = {\bf G} p^{(h)}$, and count the number of nodes $N_\Gamma$ where ${\bar p^{(h)}} > q/N $, with $0<q<1$ a positive number characterizing the level of the distribution. Then the contraction of the network is defined as the fraction of such states: $\Gamma=N_\Gamma/N$. The result of the computation of the contraction factor for the Ulam network of the map (\ref{eq2}) for the sets $T10$, $T20$ is shown in Fig.~\ref{fig11}. The network contraction parameter $\Gamma$ is independent of $q$ in a large interval $10^{-4} \leq q \leq 0.1$ and it converges to the contraction value $\Gamma_c$ of the continuous map in the limit of large matrix size $N$.
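The procedure for extracting $\Gamma$ can be sketched as follows (at $\alpha=1$ and for an $\mathbf{S}$ without dangling nodes, $\mathbf{G}$ reduces to $\mathbf{S}$ itself):

```python
import numpy as np

def contraction(S, q=0.01):
    """Network contraction factor Gamma = N_Gamma / N: apply the matrix to
    the homogeneous vector p_j = 1/N and count the fraction of nodes whose
    probability stays above the level q/N."""
    N = S.shape[0]
    p_bar = S @ np.full(N, 1.0 / N)
    return np.count_nonzero(p_bar > q / N) / N
```

For a matrix describing a measure-preserving map (e.g. the identity) this returns $\Gamma=1$, while a strongly contracting matrix concentrates the probability on few nodes and gives $\Gamma \ll 1$.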
\begin{figure} \centerline{\epsfxsize=4.2cm\epsffile{fig11a.ps} \hfill\epsfxsize=4.2cm\epsffile{fig11b.ps}} \vglue -0.2cm \caption{(Color online) Dependence of the network contraction factor $\Gamma$ on the level $q$ of the probability distribution over the network nodes (see text). The left panel shows data for the set $T10$ at $k=0.22$, the right panel shows data for the set $T20$ at $k=0.3$ for the Ulam network of the map (\ref{eq2}). The size of the network is $N=10^4, 4 \cdot 10^4, 16 \cdot 10^4$ (curves from top to bottom at $q=0.01$). The dashed curves show the contraction $\Gamma_c=\eta^T$ of the continuous map (\ref{eq2}) corresponding to the network with $N=\infty$. } \label{fig11} \end{figure} We think that the Google matrix of WWW networks can also be characterized by a global contraction factor, and it would be interesting to study its properties in more detail. However, this remains a task for future studies. \section{IV Summary} In summary, we demonstrated that the Perron-Frobenius operator built from a simple dissipative map with dynamical attractors generates a scale-free directed network with properties rather similar to those of the WWW. The networks and their Google matrices are obtained on the basis of the Ulam method for coarse-graining of the Perron-Frobenius operator and thus can be viewed as Ulam networks or Ulam graphs. The Google matrix of such Ulam networks reproduces many properties of real networks, with an algebraic decay of the PageRank and a quasi-degeneracy of eigenvalues near unity for the Google parameter $\alpha=1$. In this formulation the popular websites correspond to dynamical fixed point attractors which help to generate the global scale-free properties of the network. The PageRank of the system becomes delocalized for $\alpha$ smaller than a certain critical value; such a delocalization is linked to the emergence of a strange attractor.
Even for $\alpha$ very close to unity, a moderate change of system parameters can drive the system to a strange attractor regime with a complete delocalization of the PageRank, making the Google search inefficient. In view of the great importance of the Google search for the WWW \cite{googlebook,donato} and its new emerging applications \cite{redner}, it may be rather useful to study in more detail the properties of the Google matrix generated by simple dynamical maps. Of course, it is quite possible that in its present state the Google matrix of the WWW is more stable with respect to variations of $\alpha$ (indications of that can be found e.g. in \cite{avrach1,avrach2,avrach3}). However, the WWW evolves with time and may become more sensitive to changes of $\alpha$. Also, the Google search can be applied to a large variety of other important networks (see e.g. \cite{avrach3,redner}) which may be more sensitive to various parameter variations. It is quite possible that the Ulam networks discussed here only partially simulate the properties of the WWW. However, the Ulam networks are easy to generate and at the same time they show a large variety of rich, interesting properties. The parallels between the Ulam networks and the actual WWW can be instructive for a deeper understanding of both. Therefore, we think that further studies of them will give us a better understanding of the Google matrix properties. The studies of the Ulam networks will also lead to a better understanding of the intricate spectral properties of the Perron-Frobenius operators. The application of the thermodynamical formalism \cite{ruelle,artuso} to the spectra of such operators can help to understand their properties in a better way. \section{Acknowledgements} We thank A.S.Pikovsky, who pointed out to us the link between our numerical construction procedure of the matrix ${\bf S}$ built from the discrete phase space cells and the Ulam method. One of us (DLS) thanks A.S.Pikovsky for useful discussions and hospitality at the Univ.
Potsdam during the work on the revised version of this paper. We also thank anonymous referee B, who pointed out to us Refs.~\cite{osipenko,boldi,avrach1,avrach2,avrach3} in the report on the initial short version of the paper.
\section{Introduction} \label{S:introduction} A \emph{heap} (or \emph{priority queue}) is a data structure containing a set of items, each with an associated key selected from a totally ordered key space. Heaps support the following operations: \begin{itemize} \item[] $\mbox{\it make-heap}()$: Create and return a new, empty heap. \item[] $\mbox{\it find-min}(H)$: Return an item of smallest key in heap $H$; return null if $H$ is empty. \item[] $\mbox{\it insert}(H, x)$: Insert item $x$ with predefined key into heap $H$. Item $x$ must be in no other heap. \item[] $\mbox{\it delete-min}(H)$: Delete from $H$ the item that would be returned by $\mbox{\it find-min}(H)$, and return it. \item[] $\mbox{\it meld}(H_1, H_2)$: Return a heap containing all items in item-disjoint heaps $H_1$ and $H_2$, destroying $H_1$ and $H_2$ in the process. \item[] $\mbox{\it decrease-key}(H, x, k)$: Given the location of item $x$ in heap $H$, and given that the key of $x$ is at least $k$, decrease its key to $k$. \item[] $\mbox{\it delete}(H, x)$: Given the location of item $x$ in heap $H$, delete $x$ from $H$. \end{itemize} One can implement $\mbox{\it delete}(H, x)$ as $\mbox{\it decrease-key}(H, x, -\infty)$ followed by $\mbox{\it delete-min}(H)$. We shall assume this implementation and not further mention $\mbox{\it delete}$: To within a constant factor its time bound is the same as that of $\mbox{\it delete-min}$. Henceforth by \emph{deletion} we mean a $\mbox{\it delete-min}$ operation. If the only operations on keys are comparisons, $n$ insertions followed by $n$ deletions will sort $n$ items by key, so the amortized time\footnote{We shall study \emph{amortized time} throughout. See~\cite{tarjan1985amortized} for information on amortized analysis.} of either insertion or deletion on an $n$-item heap must be $\Omega(\log n)$.
The \emph{Fibonacci heap}~\cite{Fibonacci} supports deletion in $\mathrm{O}(\log n)$ amortized time and all other heap operations in $\mathrm{O}(1)$ amortized time. Since the invention of the Fibonacci heap, many other heaps with similar efficiency have been developed. See~\cite{BrodalSurvey, HollowHeaps, KS19}. One is the \emph{pairing heap}~\cite{FSST86}, a ``self-adjusting" heap that is simple and efficient in practice~\cite{LarkinSenTarjan, StaskoVitter}. The paper that introduced pairing heaps proved an $\mathrm{O}(\log n)$ amortized time bound for all operations. The authors conjectured that pairing heaps have the same amortized efficiency as Fibonacci heaps, but this was disproved by Fredman~\cite{FredmanLB}, who showed that pairing heaps and similar self-adjusting heaps must take $\Omega(\log\log n)$ time per decrease-key if they take $\mathrm{O}(\log n)$ time per deletion. Iacono and {\"O}zkan~\cite{IaconoOzkan} proved the same lower bound for a different large class of self-adjusting heaps. These results raise the question of whether pairing heaps, or any other kind of self-adjusting heap, have efficiency matching these lower bounds. The question remains open for pairing heaps, but we have answered it in the affirmative for two other heap implementations, slim and smooth heaps~\cite{phd2022}, as we discuss in Section~\ref{S:remarks}. Improving the bound for decrease-key is not our goal here. Rather, it is to reduce the bound for insertions and melds. Iacono~\cite{IaconoPairing} reduced the amortized time bound per insertion in pairing heaps to $\mathrm{O}(1)$ and that of meld to zero while preserving the $\mathrm{O}(\log n)$ bound for decrease-keys and deletions. His proof is complicated and has large constant factors, however. We give a simplified proof of Iacono's result with much smaller constant factors. 
That is, we prove that pairing heaps have the following amortized time bounds: $\mathrm{O}(1)$ per insertion, zero per meld, and $\mathrm{O}(\log n)$ per decrease-key and deletion. Our approach applies to other self-adjusting heap implementations, as we discuss in Section~\ref{S:remarks}. \section{Pairing heaps} \label{S:pairing-heaps} A pairing heap stores the items in a heap as the nodes of an ordered tree. Henceforth we shall speak of nodes rather than items. The tree is heap-ordered by key: If node $x$ with key $x.key$ is the parent of node $y$ with key $y.key$, then $x.key \leq y.key$. Thus the root is a node of minimum key. Access to the tree is via the root. The fundamental primitives for modifying heap-ordered trees are \emph{linking} and its inverse, \emph{cutting}. A \emph{link} of the roots of two node-disjoint heap-ordered trees combines the trees by making the root of smaller key the parent of the other root, breaking a tie arbitrarily. The surviving root is the \emph{winner} of the link; the new child is the \emph{loser} of the link. We denote by $xy$ a link won by $x$ and lost by $y$. A link makes the loser the new \emph{first} child of the winner. Thus the children of any node are ordered by link time, latest first. We think of this order as being left to right: the first and last child are \emph{leftmost} and \emph{rightmost}, respectively; the \emph{left} and \emph{right} siblings of a child $y$ of $x$ are the children linked to $x$ before and after $y$ was linked to $x$, respectively. A \emph{cut} of a link $xy$ breaks the link, breaking the tree containing $x$ and $y$ into two trees, one rooted at $y$ containing the entire subtree of $y$ as it existed before the cut, and one containing $x$ that is unchanged except for the removal of $y$ and its descendants. A pairing heap does the heap operations as follows: \begin{itemize} \item[] $\mbox{\it make-heap}()$: Create and return an empty tree. 
\item[] $\mbox{\it find-min}(H)$: Return the root of $H$. \item[] $\mbox{\it insert}(H, x)$: Make $x$ into a one-node tree. If $H$ is non-empty, link $x$ with the root of $H$; otherwise, make $x$ the root of $H$. \item[] $\mbox{\it meld}(H_1, H_2)$: If either $H_1$ or $H_2$ is empty, return the other; otherwise, return the tree formed by linking the roots of $H_1$ and $H_2$. \item[] $\mbox{\it decrease-key}(H, x, k)$: Set $x.key = k$. If $x$ is not the root of $H$, cut the link lost by $x$ and link $x$ with the root of $H$. \item[] $\mbox{\it delete-min}(H)$: Let $x$ be the root of $H$. If $H$ has no other nodes, replace $H$ by an empty tree and return $x$. Otherwise, cut all the links between $x$ and its children, making the list of children of $x$ (ordered by link time) into a list of roots. Link these roots in two passes. The first, the \emph{pairing pass}, links the roots in pairs left to right, the first with the second, the third with the fourth, and so on. The second, the \emph{assembly pass}, repeatedly links the current rightmost root with its left neighbor until only one root remains. Replace $H$ by the tree rooted at the remaining root, and return $x$. See Figure~\ref{F:pairing-heap}. \end{itemize} \begin{figure}[ht!] \centering \includegraphics[width=3.8in]{pairing-heap.eps} \caption{Pairing heap linking during delete-min, after the root is deleted. The circles represent nodes and the numbers indicate their keys. Lines between nodes represent links, with the winner above the loser.} \label{F:pairing-heap} \end{figure} One good way to represent a pairing heap is to use three pointers per node: one to the leftmost child of a node, one to its right neighbor, and one to its left neighbor, or to its parent if it has no left neighbor. The first pointer of a node is null if it has no children; the second is null if the node is a rightmost child or a root; the third is null if the node is a root.
This representation allows a link or cut to be done in $\mathrm{O}(1)$ time and makes the worst-case time per operation $\mathrm{O}(1)$ plus the number of links done. A more compact representation with only two pointers and one bit per node~\cite{FSST86} uses the first pointer to indicate the leftmost child of a node, or its right neighbor if it has no children. If a node is a leftmost child, its second pointer indicates the right neighbor of its parent, or its parent if it has no right siblings; if a node is not a leftmost child, its second pointer indicates its left neighbor. Pointers are null when the designated nodes do not exist. This representation is ambiguous in one case, when the first pointer of a node $x$ indicates $y$ and the second pointer of $y$ indicates $x$. Node $x$ could have no right siblings and leftmost child $y$, or $x$ could have no children and right neighbor $y$. To disambiguate the representation in this case, each node has a bit indicating whether it is a leftmost child. Only the root has a null second pointer. A third representation uses \emph{hollow nodes}~\cite{HollowHeaps}. We make the tree exogenous rather than endogenous~\cite{tarjanbook}: Instead of the nodes \emph{being} the items, they \emph{hold} the items, or pointers to the items. Each node has a pointer to its leftmost child and to its right neighbor. To do a decrease-key of an item not in the root, we allocate a new node and move the item whose key decreases to this new node, making its old node \emph{hollow}. We move the children of the old node to the new node. During a deletion, we deallocate any hollow nodes that become roots. Each item maintains a pointer to the current node holding it. The hollow-node representation is simple, but it uses at least as much space as either of the endogenous representations, and if there are many decrease-key operations a tree can become mostly hollow (although one can do periodic deallocation of hollow nodes to overcome this). 
It is simple though, and it may be appropriate in situations that require an exogenous representation. We consider a sequence of pairing heap operations on an initially empty collection of heaps. Each operation except deletion does at most one link and takes $\mathrm{O}(1)$ time. Deletion takes $\mathrm{O}(1)$ time plus $\mathrm{O}(1)$ time per link. Hence to bound the total time of the sequence we just need to bound the number of links. \section{Node and link types} \label{S:node-and-link-types} We call a node \emph{temporary} if it is eventually deleted, \emph{permanent} if not. We state our time bounds in terms of the number of temporary nodes. Given a heap operation, we denote by $n$ the number of temporary nodes in the heap or heaps undergoing the operation. In stating time bounds we assume $n \geq 4$; if $n < 4$ in a given operation the amortized time of the operation is $\mathrm{O}(1)$. To bound the number of links, we classify links in four overlapping ways. We call a link an \emph{insertion link}, \emph{decrease-key link}, \emph{pairing link}, or \emph{assembly link} if it is done by an insertion or meld, by a decrease-key, by the pairing pass of a deletion, or by the assembly pass of a deletion, respectively. \begin{lemma} \label{L:insertion-links} The number of insertion links is at most the number of insertions. \end{lemma} \begin{proof} Consider the number of non-empty trees in a collection of heaps. This number is initially zero, always non-negative, decreases by one each time a meld does a link, and can only increase, by one, during an insertion into an empty tree, which does not do a link. Hence the number of melds that do links is at most the number of insertions that do not do links. \end{proof} \begin{lemma} \label{L:assembly-links} The number of assembly links during a deletion is at most the number of pairing links during the deletion. \end{lemma} \begin{proof} Consider a deletion that does $k$ links between $k+1$ roots. 
It does at least $k/2$ pairing links and hence at most $k/2$ assembly links. \end{proof} In addition to classifying links based on the operation that does them, we classify them based on their futures. A link is a \emph{deletion link}, abbreviated \emph{d-link}, if it is cut by a deletion; a \emph{key link}, abbreviated \emph{k-link}, if it is cut by a decrease-key; a \emph{final link}, abbreviated \emph{f-link}, if it is never cut. \begin{lemma} \label{L:f-links} The number of final links plus the number of deletions is at most the number of insertions. \end{lemma} \begin{proof} Every final link is lost by a different permanent node. Every deletion deletes a different temporary node. \end{proof} We call a link \emph{real} if its winner and loser are both temporary and it is not cut by a decrease-key, \emph{phantom} if its winner or loser is permanent or it is cut by a decrease-key. We call a child \emph{real} or \emph{phantom} if it is connected to its parent by a real or phantom link, respectively. We call a link done during a deletion \emph{left} or \emph{right} if the loser is left or right of the winner on the root list before the link, respectively. By Lemma~\ref{L:assembly-links}, it is enough to bound the number of pairing links. We do this in three steps. First, in Section~\ref{S:pairing-links} we show that the number of such links is at most a constant times the number of insertions, decrease-keys, real right assembly links, and real pairing links. Second, in Section~\ref{S:size} we bound the number of real right assembly links. Third, in Section~\ref{S:mass} we bound the number of real pairing links. In Section~\ref{S:total} we combine our bounds to complete our analysis of pairing heaps.
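To make the two-pass linking and the link counts concrete, here is a minimal Python sketch of insertion and delete-min (our own illustrative code, not from the original; decrease-key and meld are omitted). It counts the pairing and assembly links done by each deletion, so running it also checks Lemma~\ref{L:assembly-links} empirically:

```python
# Minimal pairing heap sketch with instrumented two-pass delete-min.
class Node:
    def __init__(self, key):
        self.key = key
        self.children = []  # ordered by link time, latest (leftmost) first

def link(x, y):
    """Link two roots: the root of smaller key wins, and the loser
    becomes the winner's new leftmost child."""
    if y.key < x.key:
        x, y = y, x
    x.children.insert(0, y)
    return x

def insert(root, x):
    return x if root is None else link(root, x)

def delete_min(root):
    """Remove the root, relink its children in two passes, and return
    (deleted key, new root, #pairing links, #assembly links)."""
    roots = root.children
    # Pairing pass: link the roots in pairs, left to right.
    paired, pairing_links = [], 0
    for i in range(0, len(roots), 2):
        if i + 1 < len(roots):
            paired.append(link(roots[i], roots[i + 1]))
            pairing_links += 1
        else:
            paired.append(roots[i])
    # Assembly pass: repeatedly link the rightmost root with its left
    # neighbor until only one root remains.
    new_root, assembly_links = None, 0
    for r in reversed(paired):
        if new_root is not None:
            assembly_links += 1
        new_root = r if new_root is None else link(r, new_root)
    return root.key, new_root, pairing_links, assembly_links
```

Driving \emph{insert} and \emph{delete-min} with arbitrary keys, every deletion satisfies assembly links $\leq$ pairing links, and the keys emerge in sorted order.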
\section{Pairing links} \label{S:pairing-links} \begin{theorem} \label{T:pairing} The total number of pairing links during a sequence of pairing heap operations is at most four per insertion plus three per decrease-key plus two per real right assembly link plus two per real pairing link. \end{theorem} \begin{proof} Let $vw$ be a pairing link created during the deletion of a node $u$. Node $u$ is temporary, since it is deleted. Let $uv$ and $uw$ be the links lost by $v$ and $w$ that were cut when $u$ was deleted. Both links were done in earlier operations. In the following five cases, we charge $vw$ to itself or to a link or operation related to $uv$ or $uw$. We say $vw$ is a \emph{Case-$i$ link} if Case $i$ is the first that applies to $vw$. \noindent\textbf{Case 1:} Link $vw$ is a k-link, f-link, or real link. We charge $vw$ to itself. There is at most one Case-1 link per pairing f-link, pairing k-link, and real pairing link. In the remaining cases $v$ is temporary and $w$ is permanent, since if $v$ were permanent $vw$ would be a k-link or f-link, and if $v$ and $w$ were temporary $vw$ would be a k-link or real link. \noindent\textbf{Case 2:} Link $uw$ or $uv$ is an insertion link or a decrease-key link. Charge $vw$ to this link. Link $vw$ is uniquely determined by either $uv$ or $uw$, since $vw$ is the first link that $v$ and $w$ participate in after $u$ is deleted. In the remaining cases $uv$ and $uw$ are pairing or assembly links. \noindent\textbf{Case 3:} Link $uv$ is a pairing link. Link $uv$ is real because $u$ and $v$ are temporary and $uv$ was cut by a deletion. We charge $vw$ to $uv$. Link $uv$ can only be charged once in this way since $uv$ determines $vw$, so there is at most one Case-3 link per real pairing link. \noindent\textbf{Case 4:} Link $uw$ is an assembly link. Node $w$ must have won a link during the pairing pass that preceded $uw$, or else $w$ was the one root that did not participate in a pairing link.
Since $w$ is permanent, if it won a link, it was a k-link or f-link. We charge $vw$ to this link. If $w$ did not win a link, we charge $vw$ to the deletion that did $uw$. There is at most one Case-4 link per pairing k-link, pairing f-link, and deletion. \noindent\textbf{Case 5:} Link $uv$ is an assembly link and $uw$ is a pairing link. Consider the assembly pass that does $uv$. After winning $uv$, $u$ is the rightmost root, and it either loses a right assembly link during the pass (say $xu$) or it is the sole remaining root at the end of the pass. Since $u$ is temporary, $xu$ is either a k-link or a real right assembly link. We charge $vw$ to $xu$ if it exists, to the deletion that does $uv$ otherwise. Whichever is charged, we shall prove it is charged at most twice over all Case-5 links. Since $v$ and $w$ are adjacent children of $u$ when $u$ is deleted, $uv$ and $uw$ are consecutive among the d-links won by $u$. Link $uw$ is a pairing link, and must be done before or after the assembly pass that does $uv$. Hence $uv$ is either the first or the last d-link that $u$ wins during this assembly pass. It follows that $xu$ or the deletion is charged at most twice: once for the first d-link $u$ wins during the pass, and once for the last. Hence there are at most two Case-5 links per real right assembly link, assembly k-link, and deletion. Combining the bounds for all cases, the number of pairing links is at most one per insertion link (Case 2) and decrease-key link (Case 2); two per pairing f-link (Cases 1 and 4), pairing k-link (Cases 1 and 4), assembly k-link (Case 5), real pairing link (Cases 1 and 3), and real right assembly link (Case 5); and three per deletion (Cases 4 and 5). By Lemma~\ref{L:f-links} the number of f-links plus the number of deletions is at most the number of insertions, so combining the bounds for insertion links, f-links, and deletions gives a total of at most four times the number of insertions. 
Combining the bounds for decrease-key links, pairing k-links, and assembly k-links gives a total of at most three times the number of decrease-keys. The bound in the theorem follows. \end{proof} \begin{remark} Even though by Lemma~\ref{L:assembly-links} it is enough to count pairing d-links, it does \emph{not} follow from the proof of Theorem~\ref{T:pairing} that it suffices to count real pairing links. It can happen for example that \emph{all} the pairing links in a deletion are won by temporary nodes and lost by permanent nodes. \end{remark} \section{Real right assembly links} \label{S:size} To bound the number of real right assembly links, we introduce the notion of node size. The \emph{size} $x.s$ of a temporary node $x$ is the number of its descendants, including itself, connected to $x$ by a path of real links. The size of $x$ is at least one, at most the number of temporary nodes in its subtree, and never decreases, since a real link is cut only when its winner is deleted. The size of $x$ increases only when $x$ wins a real link with a node $y$, and then it increases by $y.s$. We shall show that certain real right assembly links cause node size doublings, and use this to bound the number of real right assembly links. Our argument uses credits and debits. One credit will pay for one real right assembly link. We allocate a certain number of credits to each operation. We are allowed when needed to borrow a credit to pay for a link. This incurs a debit. We can use credits allocated to later operations to pay off debits. If there are no debits at the end of a sequence of operations, the sum of the credits allocated to the operations is an upper bound on the number of links. Now let's get into the details. Each time a real right assembly link is done during a sequence of heap operations, we borrow a credit to pay for it. This incurs a debit, which we give to a temporary node.
To pay off debits, we allocate $\lg n$ credits to each deletion and $(\lg n)/2$ credits to each decrease-key, where $n$ is the number of temporary nodes in the heap at the time of the operation.\footnote{We denote by $\lg$ the base-two logarithm.} We shall maintain the following \emph{debit invariant}: \begin{enumerate} \item[] \textbf{A:} Each temporary node $x$ has at most $(\lg x.s)/2$ debits, except the rightmost root during the assembly pass of a deletion, which if temporary has at most $\lg x.s$ debits. \end{enumerate} This invariant gives special status to the rightmost root during the assembly pass of a deletion: Its debit allowance is double the normal amount. \begin{lemma} \label{L:assembly-debits} Any sequence of heap operations maintains A. \end{lemma} \begin{proof} Invariant A holds initially since there are no debits. It can fail only when a node with debits is deleted (we must pay off its debits), when an assembly link occurs (incurring a debit), or when an assembly pass ends. (If the remaining root is temporary we must pay off its excess debits.) Consider a deletion. If the root deleted is temporary, we use $(\lg n)/2$ of the credits allocated to the deletion to pay off any debits it has accrued. This restores A. The pairing pass of the deletion preserves A, so A holds at the beginning of the assembly pass. If the remaining root $x$ is temporary when the assembly pass ends, we use the remaining $(\lg n)/2$ of the credits allocated to the deletion to pay off any excess debits it was allowed during the assembly pass. This makes A true after the assembly pass. It remains to show that assembly links preserve A. A left assembly link $xy$ preserves A, since the size of $y$ does not change and the size of $x$ does not decrease. Let $xy$ be a phantom right assembly link.
Whether $x$ is permanent or temporary, $xy$ must be a k-link: If $x$ is permanent, $xy$ cannot be an f-link, since $y$ is eventually deleted; if $x$ and $y$ are both temporary, the phantom link $xy$ is a k-link by definition. Link $xy$ makes $x$ the rightmost root, so we must pay off the extra $(\lg y.s)/2$ debits allowed to $y$ by A. For this we use the $(\lg n)/2$ credits allocated to the decrease-key that cuts $xy$, where $n$ is the number of temporary nodes in the tree containing $y$ when the cut happens. Since the size of $y$ cannot change until $xy$ is cut, $y.s \leq n$. Thus the decrease-key gives us enough credits to pay off the extra debits, restoring A. Let $xy$ be a real right assembly link. This is the crucial case. We must show that after the link A allows the extra debit incurred by $xy$. Let variables take their values just before the link. By A, $x$ and $y$ have at most $(\lg x.s)/2$ and $\lg y.s$ debits before the link, respectively. The link increases the size of $x$ to $x.s + y.s$ and makes $x$ the rightmost root, so after the link A allows $x$ and $y$ to have a total of $\lg(x.s+ y.s) + \lg(y.s)/2$ debits. The latter minus the former is \[\lg(x.s + y.s) - (\lg x.s)/2 - (\lg y.s)/2 \geq 1\] by the inequality $2\lg(a + b) \geq \lg a + \lg b + 2$. Thus A holds after the link: We can move debits between $x$ and $y$ as needed to satisfy it. The lemma holds by induction on the number of deletions, since there are no debits after all temporary nodes have been deleted. \end{proof} Lemma~\ref{L:assembly-debits} implies: \begin{theorem} \label{T:real-assembly} The number of real right assembly links is at most $(\lg n)/2$ per decrease-key plus at most $\lg n$ per deletion. 
\end{theorem} \section{Real pairing links} \label{S:mass} Our analysis of real pairing links is similar to our analysis of real assembly links, but instead of showing that certain assembly links cause size doublings, we show that certain pairing links cause halvings of a related parameter, mass. The \emph{mass} of a real child $x$, denoted $x.m$, is the size of its parent just after $x$ becomes its child. Equivalently, the mass of a real child $x$ is one plus the sum of its size and those of its real right siblings. A phantom child has no mass. Except in the middle of a deletion, a root has no mass. During the assembly and pairing passes of a deletion, the mass of a temporary root is the sum of its size and those of the temporary roots to its right on the root list. Unlike node size, which never decreases, the mass of a node can increase or decrease. Nevertheless, by bounding the increases and showing that certain real pairing links cause halvings in mass, we are able to bound the number of real pairing links. We do not need to use debits, only credits. One credit will pay for one real pairing link. Nodes can hold unused credits. We allocate $(3/2)\lg n + (\lg e)/2$ credits to each deletion and $\lg n$ credits to each decrease-key. We shall maintain the following invariant: \begin{enumerate} \item[] \textbf{B:} Each real child $x$ and each temporary root during a deletion has at least $(\lg x.m)/2$ credits, except during the pairing pass of a deletion, when the leftmost temporary root $x$ not yet involved in a pairing link in the pass has at least $(3\lg x.m)/2$ credits, and during the assembly pass of a deletion, when the rightmost temporary root is not required to have credits. \end{enumerate} \begin{lemma} \label{L:pairing-credits} Any sequence of heap operations maintains B. \end{lemma} \begin{proof} Invariant B holds initially. Only real links, cuts of real links, phantom pairing links, beginning a pairing pass, or ending an assembly pass can violate B. 
We shall show that deletions preserve B and then that decrease-keys and insertion links preserve B. When a deletion occurs, all children of the root are real. Deletion of the root decreases by one the mass of each of its children. These children become the roots on the root list. Thus deletion of the root preserves B except for the leftmost temporary root if there is one, which now may require extra credits. To make B true for this root, we give it $\lg n$ of the credits allocated to the deletion. Suppose B holds just before a pairing link of roots $x$ and $y$, with $x$ left of $y$ before the link. Let $w$ be the leftmost temporary root not yet involved in a pairing link, if any, and let $z$ be the nearest temporary root right of $y$, if any. Root $w$ is $x$, $y$, or $z$. Let variables take their values just before the link. If the link between $x$ and $y$ is phantom, the link preserves B, since $w.m \geq z.m$ and the link does not increase the mass of any node. If the link between $x$ and $y$ is real, we need a credit to pay for it. Let $M$ be the mass of $z$ if $z$ exists, or $1$ if it does not. Root $w$ is $x$, $x.m = x.s+y.s+M$, and $y.m=y.s+M$. Before the link, $x$ and $y$ have a total of $(3/2)\lg x.m + (\lg y.m)/2 \geq (3/2)\lg (x.s + y.s + M) + (\lg M)/2$ credits. After the link, the winner of the link between $x$ and $y$ needs $(\lg x.m)/2 = \lg(x.s+y.s+M)/2$ credits, the loser needs $\lg(x.s + y.s)/2$, and $z$ needs an additional $\lg M$ credits, totaling $\lg(x.s+y.s+M)/2+\lg(x.s + y.s)/2+\lg M$. Subtracting the number needed after the link from the number available before the link leaves at least \[\lg(x.s+y.s+M) - \lg(x.s + y.s)/2 - (\lg M)/2 \geq 1\] by the inequality $2\lg(a + b) \geq \lg a + \lg b + 2$, so the link frees a credit to pay for itself, after we move credits among $x$, $y$, and $z$ as needed to preserve B. It follows by induction that the pairing pass preserves B. 
We do not have to pay for assembly links, but we do have to account for increases in mass caused by such links. The analysis of the assembly pass is like the proof of Lemma~\ref{L:assembly-debits}. Invariant B holds at the beginning of the assembly pass. Suppose it holds at some time during the assembly pass. If the assembly pass is finished, we re-establish B at the remaining root by giving it $(\lg n)/2$ of the credits allocated to the deletion. A phantom assembly link preserves B except for the rightmost temporary root: No masses increase, the loser of the link becomes a phantom child, and the rightmost temporary root changes only if the old rightmost root loses the link. Let $xy$ be a real left assembly link. The link does not change the mass of $y$ and increases the mass of $x$. This makes B true for $y$. Node $x$ remains the rightmost temporary root and so requires no credits. Let $xy$ be a real right assembly link. The link increases the mass of $x$, but $x$ becomes the rightmost temporary root, freeing its $(\lg x.m)/2$ credits, where $x.m$ is the mass of $x$ before the link. The link makes $y$ a real child with mass $x.m$. We use the credits freed from $x$ to establish B at $y$. It remains to consider decrease-keys and insertion links. Cutting a k-link at the beginning of a decrease-key changes no masses and hence preserves B: The loser of the link that is cut was a phantom child and becomes a root, with no mass in either case. If the decrease-key does a phantom link, the link changes no masses and hence preserves B. Suppose the decrease-key does a real link $xy$. Node $y$ becomes a real child, with positive mass. To establish B for $y$, we give $y$ $(\lg n)/2$ of the credits allocated to the decrease-key. A phantom insertion link preserves B. A real insertion link $xy$ causes the mass of $y$ to increase to at most $n$, the number of temporary nodes in the heap at the time of the link.
Since $y$ is temporary, it eventually becomes the only root in its heap, which must happen before $y$ can be deleted. Let $n'$ be the number of temporary nodes in the heap the first time after link $xy$ that $y$ becomes the only root in its heap. To $y$ we give $(\lg n')/2$ of the credits allocated to the operation that caused $y$ to become the only root in its heap. This operation is either a decrease-key or a deletion, and it is only charged once, since $y$ is the only root in the resulting heap and $xy$ is the most recent previous real insertion link lost by $y$. If $n' < n$, we need an additional $(\lg n - \lg n')/2$ credits to establish B for $y$. We obtain these from the remaining $(\lg e)/2$ credits allocated to each deletion. For each deletion, we distribute $(\lg e)/2$ credits equally among the $n'' - 1$ nodes in the heap just after the deletion, $(\lg e)/(2(n''-1))$ to each. These credits are spendable at any time, including before the deletion. Between the time $xy$ is created and $y$ next becomes the only root in its heap, there are at least $n - n'$ deletions of other nodes, which give $y$ a total of $(\lg e)/2\sum_{i=n'}^{n-1} 1/i \geq (\lg n - \lg n')/2$ credits, enough to establish B at $y$. Having covered all the cases, we infer by induction that the lemma is true. \end{proof} Lemma~\ref{L:pairing-credits} implies: \begin{theorem} \label{T:real-pairing} The number of real pairing links is at most $\lg n$ per decrease-key plus at most $(3/2)\lg n + (\lg e)/2$ per deletion. \end{theorem} \section{Grand total} \label{S:total} Theorems~\ref{T:pairing}, \ref{T:real-assembly}, and \ref{T:real-pairing} give a bound on the number of pairing links of $4$ per insertion, $3\lg n + 3$ per decrease-key, and $5\lg n + \lg e$ per deletion. By Lemma~\ref{L:assembly-links}, the same bound applies to assembly links.
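Before stating the grand total, here is a mechanical check (our own illustrative sketch) of how these per-operation bounds combine. Each bound is written as a coefficient triple (coefficient of $\lg n$, coefficient of $\lg e$, constant term); the pairing-link bound is doubled to cover assembly links, and each insertion and each decrease-key contributes one further link of its own:

```python
# Combine the per-operation link bounds. Each bound is a triple
# (coefficient of lg n, coefficient of lg e, constant term).
def add(*triples):
    return tuple(map(sum, zip(*triples)))

pairing = {                      # bound on pairing links per operation
    'insertion':    (0, 0, 4),
    'decrease-key': (3, 0, 3),
    'deletion':     (5, 1, 0),
}
own_link = {                     # the single link the operation itself does
    'insertion':    (0, 0, 1),
    'decrease-key': (0, 0, 1),
    'deletion':     (0, 0, 0),
}
# Assembly links obey the same bound as pairing links, hence the doubling.
total = {op: add(pairing[op], pairing[op], own_link[op]) for op in pairing}
print(total['insertion'])     # (0, 0, 9):  9 links per insertion
print(total['decrease-key'])  # (6, 0, 7):  6 lg n + 7 per decrease-key
print(total['deletion'])      # (10, 2, 0): 10 lg n + 2 lg e per deletion
```

The printed triples match the totals of $9$, $6\lg n + 7$, and $10\lg n + 2\lg e$ derived below.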
Adding the number of insertion links (one per insertion, by Lemma~\ref{L:insertion-links}) and decrease-key links (one per decrease-key), we obtain: \begin{theorem} \label{T:total} The total number of links done by a sequence of pairing heap operations is at most $9$ per insertion, $6\lg n + 7$ per decrease-key, and $10\lg n + 2\lg e$ per deletion. \end{theorem} \begin{corollary} The running time of a sequence of pairing heap operations is $\mathrm{O}(1)$ per insertion, zero per meld, and $\mathrm{O}(\log n)$ per decrease-key and deletion. \end{corollary} \section{Remarks} \label{S:remarks} One outcome of our analysis is that each decrease-key that does a k-link or an f-link takes $\mathrm{O}(1)$ amortized time, not $\mathrm{O}(\log n)$. In particular, decreasing the key of a permanent node takes $\mathrm{O}(1)$ amortized time. Furthermore, the bounds for decrease-key and deletion are in terms of the number of temporary nodes in the heap, not of all nodes. If one is willing to allow insertions and melds to take $\mathrm{O}(\log n)$ amortized time, our analysis can be significantly simplified: We do not need the concept of permanent and temporary nodes, we do not need Theorems~\ref{T:pairing} and \ref{T:real-assembly}, we can use Lemma~\ref{L:assembly-links} directly, and we can improve the constants in Theorem~\ref{T:real-pairing}: In the proof of that theorem, we charge the increase in the mass of the loser of an insertion link to the insertion or meld that does the link. This gives us the following alternative to Theorem~\ref{T:total}: \begin{theorem} \label{T:total-alt} The total number of links done by a sequence of pairing heap operations is at most $\lg n + 1$ per insertion, $\lg n$ per meld, $\lg n + 3$ per decrease-key, and $2\lg n$ per deletion. 
\end{theorem} One can combine the analyses of Sections~\ref{S:size} and~\ref{S:mass} by using a single credit invariant in place of A and B (with appropriate exceptions in the middle of pairing and assembly passes): \begin{enumerate} \item[] \textbf{C:} Every node $x$ that is a real child or temporary root in the middle of a deletion has at least $(\lg x.m - \lg x.s)/2 = \lg(x.m/x.s)/2$ credits. \end{enumerate} We have not done this, for two reasons: the analyses of real right assembly links and real pairing links are in fact independent, and (we hope) easier to understand separately rather than together; and the former requires only sizes, not masses. But the idea of using $\lg(x.m/x.s)$ or some related function of $x.m/x.s$ as a potential gives much more. By using a function of $x.m/x.s$ that grows more slowly than $\log$, Pettie~\cite{PettiePairing} obtained an amortized time bound for pairing heaps of $\mathrm{O}(2^{2\sqrt{\lg\lg \nm}})$ per insertion, meld, and decrease-key, where $\nm$ is an upper bound on the size of any heap over all operations, while preserving the $\mathrm{O}(\log n)$ bound per deletion. The first author~\cite{phd2022} improved Pettie's bound for insert, meld, and decrease-key to $\mathrm{O}(\sqrt{\log\log \nm} 2^{\sqrt{2\lg\lg \nm}})$. We have not yet verified whether our ideas combine with this result to reduce the bound per insertion and meld to $\mathrm{O}(1)$. The ``natural'' function to use as a potential is $\lg\lg(x.m/x.s)$. Using this function in a novel analysis along with the concept of temporary and permanent nodes, we have obtained significantly improved bounds for three other kinds of self-adjusting heaps: \emph{multipass} pairing heaps~\cite{FSST86}, in which deletions do repeated pairing passes until there is only one root; and slim and smooth heaps~\cite{KS19, HKST21}, which do deletions using locally maximum linking.
Our amortized time bounds for multipass pairing heaps are $\mathrm{O}(\log n)$ for deletion, $\mathrm{O}((\log\log n)(\log\log\log n))$ for decrease-key, and $\mathrm{O}(1)$ for the other operations~\cite{phd2022}. Our bounds for slim and smooth heaps are the same but without the extra $\log\log\log n$ factor in the decrease-key bound~\cite{phd2022}: slim and smooth heaps have efficiency that matches the lower bounds. Of course, the big open question is whether pairing heaps match the lower bounds. Being optimists, we conjecture that the answer is yes. \bibliographystyle{plain}
\section{Introduction} In the standard model, the number of fermion generations appears as an arbitrary parameter, meaning that a mathematically consistent theory can be built up using any number of fermion generations. The same is true for many extensions of the standard model, including grand unification models based on the gauge groups SU(5) and SO(10). An interesting question related to extensions of the standard model is whether the number of fermion generations can in any way be explained through the internal consistency of the model. In the literature, there is some discussion of grand unification models based on big orthogonal groups like SO(18), where one spinor multiplet contains all known fermion fields of all generations, and much more \cite{Wilczek:1981iz}. It was shown \cite{Chang:1985jd, Hubsch:1985zn} that, with a suitable symmetry breaking scheme, only three generations remain light, whereas the others obtain masses much above the electroweak scale. In a quite different line of development, it was shown that if one extends the electroweak group to $\rm SU(3) \times U(1)$ and tries to accommodate the standard fermions into multiplets of this gauge group, which must include some new fermions, cancellation of gauge anomalies can restrict the number of generations, and one can obtain consistent models with the number of generations equal to three or any multiple of it \cite{Singer:1980sw, Pisano:1991ee, Frampton:1992wt, Valle:1983dk, Foot:1994ym}. These models will be described briefly in \sec{s:331}. Then, in \sec{s:suN} and \sec{s:N>6}, we try to see whether these models can be embedded into a simple SU(N) group. We conclude that, if one uses only completely antisymmetric tensor representations, such an embedding cannot be found. Then, in \sec{s:other}, we start looking for general conditions that will specify the number of fermion generations for arbitrary SU(N) groups.
In \sec{s:su9}, we analyze one simple model, based on the group SU(9), that gives three generations. The renormalization group analysis of the various scales in this model has been performed in \sec{s:gut}. We end with some concluding remarks in \sec{s:conclu}. \section{The 3-3-1 models}\label{s:331} The 3-3-1 models are based on the gauge group {$\rm SU(3)_c \times SU(3)_L \times U(1)_X$}. The first factor in the gauge group is the group of QCD, whereas the other two factors pertain to electroweak interactions. There are two versions of such models, and we discuss them one by one. In one version, proposed by Pisano and Pleitez \cite{Pisano:1991ee} and by Frampton \cite{Frampton:1992wt} (to be denoted as the PPF model), the left-chiral fermions and antifermions belong to the following representations of the gauge group: \begin{subequations} \label{PPFrep} \begin{eqnarray} f_a & = \left( \begin{array}{c} \hat\ell \\ \nu_\ell \\ \ell \end{array} \right)_a & \sim \Rep331{1,3,0} \label{PPFrep:f} \\ Q_1 & = \left( \begin{array}{c} T_1 \\ u_1 \\ d_1 \end{array} \right) & \sim \Rep331{3,3,{2\over3}} \\ Q_i & = \left( \begin{array}{c} B \\ d \\ u \end{array} \right)_i & \sim \Rep331{3,3^*,-{1\over3}} \\ \hat u_a && \sim \Rep331{3^*,1,-{2\over3}}_a \\ \hat d_a && \sim \Rep331{3^*,1,{1\over3}}_a \\ \hat T_1 && \sim \Rep331{3^*,1,-{5\over3}} \\ \hat B_i && \sim \Rep331{3^*,1,{4\over3}}_i \,. \end{eqnarray} \end{subequations} Note that there are two kinds of generation indices: $a$ goes from 1 to 3, whereas $i$ takes only the values 2 and 3. An antifermion has been denoted by a hat. We emphasize that all representations given above pertain to the left-chiral components only. The representation of a right-chiral fermion would be the complex conjugate of that of the left-chiral antifermion, and vice versa. Note that there are extra quark fields, i.e., fields which are triplets of $\rm SU(3)_c$, but there is no neutrino field that is sterile under the standard model.
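As a quick arithmetic illustration (ours, not part of the original discussion), one can tally the $[{\rm SU(3)}_L]^3$ anomaly contributions of the multiplets listed above, counting an ${\rm SU(3)}_L$ triplet as $+1$ and an antitriplet as $-1$, weighted by the number of copies; ${\rm SU(3)}_L$ singlets do not contribute:

```python
# Tally of the [SU(3)_L]^3 anomaly for the PPF fermion content, per
# generation. Convention (assumed): triplet -> +1, antitriplet -> -1,
# weighted by multiplicity; SU(3)_L singlets contribute zero.
# Entries: (generation, anomaly per copy, number of copies).
multiplets = [
    (1, +1, 1), (2, +1, 1), (3, +1, 1),  # lepton triplets f_a ~ (1,3,0)
    (1, +1, 3),                          # Q_1 ~ (3,3,2/3): three colors
    (2, -1, 3), (3, -1, 3),              # Q_2, Q_3 ~ (3,3*,-1/3)
]
per_gen = {1: 0, 2: 0, 3: 0}
for gen, anomaly, copies in multiplets:
    per_gen[gen] += anomaly * copies

print(per_gen)                # {1: 4, 2: -2, 3: -2}: nonzero in each generation
print(sum(per_gen.values()))  # 0: cancels when the three generations are summed
```

The anomaly fails to cancel in any single generation but cancels in the sum over all three, which is the sense in which consistency fixes the number of generations to be a multiple of three.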
Different generations of fermions are not copies of one another. Gauge anomalies cancel between the three generations \cite{Pisano:1991ee, Frampton:1992wt}, but not within a single generation. Thus, the consistency of the model requires three generations of fermions. Of course, this pattern of three generations can be repeated, so that the number of generations can be any multiple of 3. The other version of 3-3-1 models was proposed by Singer, Valle and Schechter \cite{Singer:1980sw} (and hence will be referred to as the SVS model) and was examined by other authors later \cite{Valle:1983dk, Foot:1994ym}. In this version, there are sterile neutrinos, the left-chiral component of which has been denoted by $\hat\nu$ in the list below: \begin{subequations} \label{FLTrep} \begin{eqnarray} f_a & = \left( \begin{array}{c} \nu_\ell \\ \ell \\ \widehat \nu_\ell \end{array} \right)_a & \sim \Rep331{1,3,-{1\over3}} \\ \hat\ell_a && \sim \Rep331{1,1,1} \label{FLTrep:l^}\\ Q_1 & = \left( \begin{array}{c} u_1 \\ d_1 \\ u'_1 \end{array} \right) & \sim \Rep331{3,3,{1\over3}} \\ Q_i & = \left( \begin{array}{c} d \\ u \\ d' \end{array} \right)_i & \sim \Rep331{3,3^*,0} \\ \hat u_a, \hat u'_1 && \sim \Rep331{3^*,1,-{2\over3}} \\ \hat d_a, \hat d'_i && \sim \Rep331{3^*,1,{1\over3}} \,. \end{eqnarray} \end{subequations} The notation for the generation indices is as before. The primed fields are extra quark fields which are not present in the standard model. As in the previous model, gauge anomalies cancel between the generations, but not within a single generation. The difference between the two models may be summarized in the following way. The standard model gauge group is not a maximal subgroup of the group {$\rm SU(3)_c \times SU(3)_L \times U(1)_X$}. In particular, the electroweak $\rm SU(3)_L$ has two neutral generators, and some combination of these two, along with the generator of $\rm U(1)_X$, form $Y$ and $I_{3L}$, the two neutral generators of the standard electroweak model.
Using the standard normalization of the SU(3) generators, \begin{eqnarray} \mathop{\rm tr} (T_A T_B) = \frac12 \delta_{AB} \,, \label{eq:gutnorm} \end{eqnarray} these combinations are given by \begin{subequations} \label{PPF2SM} \begin{eqnarray} I_{3L} &=& - \frac12 T_3 + \frac{\surd3}2 T_8 \,, \\* Y &=& \frac32 T_3 + \frac{\surd3}2 T_8 + X \end{eqnarray} \end{subequations} in the first model, whereas in the second one, they are given by \begin{subequations} \label{FLT2SM} \begin{eqnarray} I_{3L} &=& T_3 \,, \\* Y &=& - \frac1{\surd3} T_8 + X \,. \end{eqnarray} \end{subequations} The SU(3) generators are given by \begin{eqnarray} T_i = \begin{cases} \frac12 \lambda_i & \mbox{for the fundamental representation}, \\ -\frac12 \lambda^*_i & \mbox{for the anti-fundamental representation}, \end{cases} \end{eqnarray} where the $\lambda$'s are the well-known Gell-Mann matrices. \section{Seeking embeddings into SU(N)}\label{s:suN} Since the group {$\rm SU(3)_c \times SU(3)_L \times U(1)_X$} is of rank 5, the smallest unitary group that contains it as a subgroup is SU(6). Therefore, in this section, we analyze whether the models discussed in \sec{s:331} can be embedded into an SU(6) grand unified model. For the sake of convenience, let us fix here the notation that will be used for denoting representations of various groups. Representations of the grand unified group will be denoted by boldface numbers. The {$\rm SU(3)_c \times SU(3)_L \times U(1)_X$} representations will be denoted by three numbers in parentheses, e.g., \Rep331{a,b,c}, as has already been done in \Eqs{PPFrep}{FLTrep}. And the representations of the standard model group, when required, will be denoted by square brackets, e.g., $\SMrep{a,b,c}$. Thus, with this notation, the fundamental representation of SU(6) has the decomposition \begin{eqnarray} \rep 6 &\to& \Rep331{3,1,-\frac13} + \Rep331{1,3,\frac13} \nonumber\\* &\to& \SMrep{3,1,-\frac13} + \SMrep{1,2,\frac12} + \SMrep{1,1,0} \,.
\label{6ofsu6} \end{eqnarray} It is now easy to see that neither the PPF nor the SVS model can be embedded into SU(6). For the PPF model \cite{Pisano:1991ee, Frampton:1992wt}, note the representation of leptons given in \Eqn{PPFrep:f}. Certainly, it is not contained in the fundamental representation of SU(6) or its complex conjugate, $\rep 6^*$. Higher representations can be obtained by taking Kronecker products of the $\rep 6$ and $\rep 6^*$ representations, and will be of the generic form $(\rep6)^m(\rep6^*)^n$. Denoting the two different {$\rm SU(3)_c \times SU(3)_L \times U(1)_X$} representations that appear in \Eqn{6ofsu6} by $A$ and $B$, we can write \begin{eqnarray} (\rep6)^m(\rep6^*)^n &\to& \sum_{m',n'} {m \choose m'} {n \choose n'} (A)^{m-m'} (B)^{m'} (A^*)^{n-n'} (B^*)^{n'} \,. \end{eqnarray} Take any term in the sum. Contributions to the $\rm U(1)_X$ quantum number come from all four factors, and the total is given by \begin{eqnarray} X = \frac13 ( -m + 2m' + n - 2n') \,. \end{eqnarray} Thus, for a lepton representation such as in \Eqn{PPFrep:f}, we need \begin{eqnarray} m-n = 2(m'-n') \end{eqnarray} in order to obtain $X=0$. Only the $B$ and $B^*$ factors contribute to non-trivial $\rm SU(3)_L$ representations. In order to obtain a triplet, we need \begin{eqnarray} m' - n' = 1 \mod 3 \,, \end{eqnarray} considering the triality of the representations. Moreover, the lepton must be a color singlet, which means that we should have \begin{eqnarray} (m-m') - (n-n') = 0 \mod 3 \,. \end{eqnarray} These three conditions cannot be satisfied simultaneously with integers, and hence it is impossible to obtain a $(1,3,0)$ representation of {$\rm SU(3)_c \times SU(3)_L \times U(1)_X$} in any representation of SU(6). For the SVS model, the same kind of analysis can be performed keeping an eye on the antilepton in \Eqn{FLTrep:l^}. In order to produce a singlet of both SU(3) factors, one needs Kronecker products of equal numbers of $\rep6$ and $\rep6^*$. However, such products will give $X=0$, so the antilepton with $X=1$ cannot be obtained.
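The incompatibility of the three integer conditions can also be checked by brute force. The following Python snippet (an illustrative check, not part of the paper, with arbitrarily chosen search ranges) confirms that no solution exists:

```python
# Conditions for finding a (1,3,0) of SU(3)c x SU(3)L x U(1)X inside
# (6)^m (6*)^n of SU(6), as derived above:
#   (i)   m - n = 2 (m' - n')              [X = 0]
#   (ii)  m' - n' = 1 (mod 3)              [SU(3)_L triplet]
#   (iii) (m - m') - (n - n') = 0 (mod 3)  [colour singlet]
solutions = [
    (m, n, mp, nq)
    for m in range(12) for n in range(12)
    for mp in range(m + 1) for nq in range(n + 1)
    if m - n == 2 * (mp - nq)
    and (mp - nq) % 3 == 1
    and ((m - mp) - (n - nq)) % 3 == 0
]
assert solutions == []  # conditions (i)-(iii) are mutually incompatible
```

Indeed, substituting (i) into (iii) gives $(m-n)-(m'-n') = m'-n' \equiv 0 \pmod 3$, which contradicts (ii).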
We have thus proved that neither the PPF, nor the SVS model can be embedded into SU(6). This result should not be understood to imply that an SU(6) grand unified model is impossible. We can take a different embedding of the standard model generators $I_{3L}$ and $Y$, viz., \begin{subequations} \begin{eqnarray} I_{3L} &=& T_3 \,, \\* Y &=& \frac1{\surd3} T_8 + X \,. \end{eqnarray} \end{subequations} The SM reduction of the $\rep 6^*$ representation of SU(6) can be easily read from \Eqn{6ofsu6}: \begin{eqnarray} \rep 6^* \to \underbrace{\SMrep {3^*,1,\frac13}}_{\hat d} + \underbrace{ \SMrep{ 1,2,-\frac12}}_{L} + \SMrep{ 1,1,0 } \,. \label{6*su6} \end{eqnarray} Also, the antisymmetric rank-2 representation has the following decomposition under the SM gauge group: \begin{eqnarray} \rep {15} &\to& \Rep331{3^*,1, -\frac23} + \Rep331{1,3^*,\frac23} + \Rep331{3,3,0} \nonumber\\ &\to& \underbrace{ \SMrep{3^*,1,-\frac23}}_{\hat u} + \SMrep{1,2,\frac12} + \underbrace{ \SMrep{1,1,1}\vphantom{1\over6}}_{\hat\ell} + \underbrace{ \SMrep{3,2,\frac16}}_{Q} + \SMrep{3,1,-\frac13} \,. \label{15su6} \end{eqnarray} In both \Eqs{6*su6}{15su6}, we have marked the known fermions which correspond to the SM representations. We observe that all known fermions of a single generation belong to these two representations. This is therefore like the minimal SU(5) grand unified model, where the corresponding representations contain all known fermion fields of a single generation and nothing more. In the present case, however, there are some extra fermions required to complete the SU(6) representations. There is one big difference between the SU(5) and this SU(6) model. This can be seen from the anomaly coefficients of different representations.
For the completely antisymmetric tensorial representations of SU(N), the anomaly coefficients are as follows \cite{Banks:1976yg}: \begin{eqnarray} \begin{array}{c|ccc} \hline \mbox{Representation} & \form 1 & \form 2 & \form 3 \\ \hline \mbox{anomaly coefficient} & 1 & N-4 & \frac12 (N-3)(N-6) \\ \hline \end{array} \label{anom} \end{eqnarray} Here, the completely antisymmetric representation of rank $n$ has been denoted by \form n. The anomaly coefficients of any representation and its complex conjugate are negatives of each other. We see that for the SU(5) group, anomalies cancel between the $\rep5^*$ and $\rep{10}$ representations. For the SU(6) gauge group, the $\rep 6^*$ and the $\rep{15}$ representations have anomaly coefficients $-1$ and 2 respectively. Thus, along with the 3 copies of the $\rep{15}$ representation that are necessary in order to have three quark doublets, we need 6 copies of the $\rep 6^*$ representation in order to cancel anomalies. There will thus be 3 extra copies of the lepton doublet $L$. But these will form gauge invariant masses with the three copies of the $\SMrep{1,2,\frac12}$ representation that appear in the three $\rep{15}$-plets, and these masses can be much heavier than the electroweak scale. Similarly, among the six copies of the $\hat d$ representation that appear in the $\rep 6^*$ multiplets, three will form gauge invariant masses with the $\SMrep{3,1,-\frac13}$ representations present in the $\rep{15}$-plets, leaving only three $\hat d$'s for the electroweak scale. The sterile neutrino fields, i.e., the $\SMrep{1,1,0}$ representations shown in \Eqn{6*su6}, can also have mass terms that are invariant under the SM gauge group. If all these masses are large, the only fields that are left over at the electroweak scale are no different from the fields that are obtained in three generations of $\rep5^*$ and $\rep{10}$ multiplets of SU(5). Moreover, the model has no explanation for the number of generations.
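This bookkeeping with the coefficients of \Eqn{anom} is easily automated; the short Python check below (illustrative only, not part of the paper) verifies both the SU(5) cancellation and the SU(6) counting just described:

```python
def anomaly(N, n):
    """Anomaly coefficient of the rank-n antisymmetric rep of SU(N),
    as listed in the table above (n = 1, 2, 3)."""
    return {1: 1, 2: N - 4, 3: (N - 3) * (N - 6) // 2}[n]

# SU(5): one 10 (rank 2) plus one 5* (conjugate of rank 1) is anomaly free
assert anomaly(5, 2) - anomaly(5, 1) == 0

# SU(6): the 15 has coefficient 2 and the 6* has -1, so three 15-plets
# require six 6*-plets for the anomalies to cancel
assert anomaly(6, 2) == 2
assert 3 * anomaly(6, 2) - 6 * anomaly(6, 1) == 0
```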
Any number of $\rep{15}$-plets, along with twice the number of $\rep6^*$-plets, would be anomaly-free. Hence this model is not interesting for our discussion. \section{Generalities about SU(N) models with $N>6$}\label{s:N>6} We now consider models based on the gauge groups SU(N), with $N>6$. Henceforth we will use completely antisymmetric tensor representations only. Such a representation of rank $n$ will be denoted by \form n, as was done in \Eqn{anom}. The fundamental representation will thus be denoted by \form 1 in this notation, whereas its complex conjugate will be \form{N-1}. We will show that, with antisymmetric representations only, neither the PPF nor the SVS model can be embedded into an SU(N) grand unified group. The crucial aspect of both the PPF and SVS models is that, among the left-chiral fields, the quarks, i.e., color triplets, appear in triplets or antitriplets of $\rm SU(3)_L$, whereas the antiquarks, i.e., color antitriplets, are all singlets of $\rm SU(3)_L$. In particular, then, there is no multiplet that transforms like \Rep331{3^*,3,\star} or \Rep331{3^*,3^*,\star}. On the other hand, among the quark fields, some should be in \Rep331{3,3,\star} and some in \Rep331{3,3^*,\star} representations, thereby ensuring anomaly cancellation among different generations. These features are not available in the decomposition of any \form m representation of an SU(N) group, as we now show. Since {$\rm SU(3)_c \times SU(3)_L \times U(1)_X$} is not a maximal subgroup of the groups under consideration, it should be possible to embed the {$\rm SU(3)_c \times SU(3)_L \times U(1)_X$} group into the SU(N) grand unified group in more than one way.
First, we assume that the decomposition of the fundamental representation of SU(N) into {$\rm SU(3)_c \times SU(3)_L \times U(1)_X$} is given by \begin{eqnarray} \form 1 \equiv \rep N \to \Rep331{3,1,\star} + \Rep331{1,3,\star} + \sum_{k=7}^N \Rep331{1,1,\star} \,, \label{NofSUN} \end{eqnarray} where the $\rm U(1)_X$ charges have been left unspecified, denoted by the star symbol. There are various ways of assigning the $\rm U(1)_X$ quantum numbers, and the specifics are irrelevant for our discussion. The representation \form m can contain a \Rep331{3^*,3,\star} submultiplet if, among the $m$ tensor indices, 2 come from the color part and 1 from the $\rm SU(3)_L$ part, while the remaining $m-3$ indices correspond to the singlets. However, the count of \Rep331{3,3^*,\star} submultiplets, with 1 color index and 2 $\rm SU(3)_L$ indices, is exactly the same. Thus, we obtain equal numbers of \Rep331{3^*,3,\star} and \Rep331{3,3^*,\star} submultiplets. The only way to get rid of the \Rep331{3^*,3,\star} submultiplets is for them to form {$\rm SU(3)_c \times SU(3)_L \times U(1)_X$} invariant masses with \Rep331{3,3^*,\star} submultiplets and become superheavy. But then there is no remaining \Rep331{3,3^*,\star} submultiplet to contribute to the fermion content of the PPF or SVS models. Hence the impasse. The situation is no different if we assume that the decomposition of the fundamental follows the rule \begin{eqnarray} \form 1 \equiv \rep N \to \Rep331{3,1,\star} + \Rep331{1,3^*,\star} + \sum_{k=7}^N \Rep331{1,1,\star} \,. \label{N2ofSUN} \end{eqnarray} In this case, following the same argument, we would conclude that the numbers of \Rep331{3,3,\star} and \Rep331{3^*,3^*,\star} submultiplets are equal. Now, the \Rep331{3^*,3^*,\star} submultiplets can be removed by forming superheavy masses with \Rep331{3,3,\star} submultiplets. But then there will be no \Rep331{3,3,\star} left at the {$\rm SU(3)_c \times SU(3)_L \times U(1)_X$} level, which is unacceptable for both the PPF and SVS models.
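The pairwise counting argument can be verified by brute force, treating the rank-$m$ antisymmetric representation as the set of $m$-element index subsets of the fundamental. The Python sketch below (illustrative only; the division by 9 converts state counts into submultiplet counts) confirms the equality for a few small cases:

```python
from itertools import combinations

def submultiplet_counts(N, m):
    """Count (3*,3,*) vs (3,3*,*) submultiplets in the rank-m antisymmetric
    representation of SU(N), for N -> (3,1,*) + (1,3,*) + (N-6) singlets."""
    colour, weak = range(0, 3), range(3, 6)   # first 3 indices colour, next 3 SU(3)_L
    n_3bar3 = n_33bar = 0
    for subset in combinations(range(N), m):
        kc = sum(i in colour for i in subset)  # antisymmetrized colour indices
        kl = sum(i in weak for i in subset)    # antisymmetrized SU(3)_L indices
        if (kc, kl) == (2, 1):
            n_3bar3 += 1   # two colour indices give a 3*, one weak index a 3
        if (kc, kl) == (1, 2):
            n_33bar += 1   # one colour index gives a 3, two weak indices a 3*
    return n_3bar3 // 9, n_33bar // 9          # 9 states per submultiplet

for N in (7, 8, 9):
    for m in range(1, N):
        a, b = submultiplet_counts(N, m)
        assert a == b   # always vector-like pairs, as argued above
```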
\section{Other embeddings into SU(N)}\label{s:other} At this point, we ignore the intermediate symmetry {$\rm SU(3)_c \times SU(3)_L \times U(1)_X$} and try to see whether it is possible to embed the standard model fermions into an SU(N) grand unified group in a way that would provide an explanation for the number of fermion generations. To this end, we first list the decomposition of the lowest-rank antisymmetric tensor representations of SU(N) into the SM gauge group. The fundamental representation decomposes as follows: \begin{eqnarray} \label{breaking} \form 1 &=& \SMrep{3,1,-\frac13} + \SMrep{1,2,\frac12} + (N-5) \cdot \SMrep{1,1,0} \,, \label{eq:t1} \end{eqnarray} where the number within parentheses indicates the number of copies of the SM gauge singlet in the last term. Similarly, we obtain \begin{eqnarray} \form 2 &=& \SMrep{3^*,1,-\frac23} + \SMrep{3,2,\frac16} + (N-5) \cdot \SMrep{3,1,-\frac13} \nonumber\\* && \null + \SMrep{1,1,1} + (N-5) \cdot \SMrep{1,2,\frac12} + {N-5 \choose 2} \cdot \SMrep{1,1,0} \,, \nonumber\\ \form 3 &=& \SMrep{1,1,-1} + \SMrep{3^*,2,-\frac16} + (N-5) \cdot \SMrep{3^*,1,-\frac23} \nonumber\\* && \null + (N-5) \cdot \SMrep{3,2,\frac16} + \SMrep{3,1,\frac23} + {N-5 \choose 2} \SMrep{3,1,-\frac13} \nonumber\\* && \null + (N-5) \cdot \SMrep{1,1,1} + {N-5 \choose 2} \SMrep{1,2, \frac12} + {N-5 \choose 3} \SMrep{1,1,0} \,.
\label{eq:t2t3} \end{eqnarray} \begin{table} \caption{Number of different SM multiplets occurring in completely antisymmetric representations of SU(N).}\label{t:SMreps} $$ \begin{array}{c|ccc} \hline & \multicolumn{3}{c}{\mbox{Representation}} \\ \cline{2-4} \mbox{SM multiplet} & \form 1 & \form 2 & \form 3 \\ \hline \SMrep{1,2,-\frac12} & -1 & -(N-5) & -{N-5 \choose 2} \\ \SMrep{3,2,\frac16} & 0 & 1 & N-6 \\ \SMrep{3^*,1,-\frac23} & 0 & 1 & N-6 \\ \SMrep{3^*,1,\frac13} & -1 & -(N-5) & -{N-5 \choose 2} \\ \SMrep{1,1,1} & 0 & 1 & N-6 \\ \hline \end{array} $$ \end{table} Suppose now we consider a model with $n_1$ copies of the \form 1 representation, $n_2$ copies of \form 2, and $n_3$ copies of \form 3. In counting the ``copies'', we will denote a complex conjugate representation by a negative number. We count the number of different SM multiplets in the antisymmetric representations of SU(N) and summarize the result in Table\,\ref{t:SMreps}. We see that, in order to obtain $n_g$ generations of fermions, we need \begin{subequations} \label{conditions} \begin{eqnarray} n_2 + (N-6) \,n_3 = n_g \label{gen1} \end{eqnarray} in order to ensure the correct number of quark doublets, as well as of $\hat u$ and $\hat\ell$. In addition, we need \begin{eqnarray} n_1 + (N-5) n_2 + {N-5 \choose 2} \,n_3 = - n_g \label{gen2} \end{eqnarray} so that the correct numbers of lepton doublets and $\hat d$ are obtained. Anomaly cancellation between the representations will be ensured if we have \begin{eqnarray} n_1 + (N-4) n_2 + \frac12 (N-3)(N-6) \,n_3 = 0 \,, \label{anomcancel} \end{eqnarray} \end{subequations} making use of the anomaly coefficients given in \Eqn{anom}. However, only two of the three relations in \Eqn{conditions} are independent. We find it simplest to work with \Eqs{gen1}{anomcancel}. The general solution of these equations is given by \begin{eqnarray} n_1 &=& -(N-4) n_g + \frac12 (N-5)(N-6) \,n_3 \,, \nonumber\\* n_2 &=& n_g - (N-6) \,n_3 \,.
\label{nsolu} \end{eqnarray} If we take $n_3=0$, we obtain $n_g=n_2$, and hence no explanation of the number of generations. This is what was done for the SU(6) grand unified model discussed in \sec{s:suN}, a model which was found uninteresting precisely because it could not predict the number of generations. However, we can consider other kinds of solutions of \Eqn{nsolu}. For example, if we take $n_2=0$, we obtain \begin{eqnarray} n_g = (N-6) \,n_3 \,. \end{eqnarray} In this case, for the grand unified group SU(9), we find that the number of generations must be 3 or a multiple of it. It should be noted that it is just as easy to obtain solutions of \Eqn{nsolu} with $n_1$, $n_2$ and $n_3$ all non-zero. One such model with an SU(8) gauge group was the subject matter of Ref.~\cite{Martinez:2011iq}, where the authors took $n_1=-9$, $n_2=1$, $n_3=1$ and obtained three generations of fermions. From our analysis, it seems that they could have obtained any other number of generations by adjusting the number of copies of the various representations. For example, $n_1=-13$, $n_2=2$, $n_3=1$ would give four generations. However, the merit of the $n_2=0$ solutions is that the number of generations cannot be arbitrary: it must be a multiple of 3. \section{Anatomy of an SU(9) model}\label{s:su9} As seen from our earlier analysis, an SU(9) model, in the absence of rank-2 antisymmetric multiplets, automatically gives three fermion generations, provided we take \begin{eqnarray} n_3 = 1 \,, \qquad n_1 = -9 \,, \label{-9+1} \end{eqnarray} which means that there should be 9 copies of the anti-fundamental representation, and one copy of the rank-3 antisymmetric multiplet. To see in more detail how the different fermion representations of the standard model are obtained from these representations of the grand unified group SU(9), we first discuss the decomposition of these multiplets under the group {$\rm SU(3)_c \times SU(3)_L \times U(1)_X$}.
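As a numerical cross-check of the counting, the Python snippet below (illustrative, not part of the paper) verifies \Eqn{conditions} for the SU(9) choice just made, as well as for the SU(8) examples of \sec{s:other}:

```python
def generations(N, n1, n2, n3):
    """Return the number of generations for n1, n2, n3 copies of the rank-1,
    2, 3 antisymmetric reps of SU(N), or None if the lepton-counting or
    anomaly-cancellation condition fails.  Note (N-5 choose 2) = (N-5)(N-6)/2."""
    ng = n2 + (N - 6) * n3                                    # quark doublets
    leptons_ok = n1 + (N - 5) * n2 + (N - 5) * (N - 6) // 2 * n3 == -ng
    anomaly_ok = n1 + (N - 4) * n2 + (N - 3) * (N - 6) // 2 * n3 == 0
    return ng if (leptons_ok and anomaly_ok) else None

assert generations(9, -9, 0, 1) == 3    # the SU(9) model considered here
assert generations(8, -9, 1, 1) == 3    # the SU(8) model with nonzero n2
assert generations(8, -13, 2, 1) == 4   # four generations also possible in SU(8)
```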
The decomposition of the fundamental of SU(9) can be taken to be as given in \Eqn{NofSUN}. We can choose the $\rm U(1)_X$ quantum number in a way that it vanishes for all singlets of $\rm SU(3)_c \times SU(3)_L$. Then, choosing the normalization of the $\rm U(1)_X$ charge arbitrarily, we can write \begin{eqnarray} \form {\bar 1} \equiv \rep {\bar 9} \to \Rep331{3^*,1,\frac13} + \Rep331{1,3^*,-\frac13} + 3 \cdot \Rep331{1,1,0} \,. \end{eqnarray} This gives, for the rank-3 representation of SU(9), the following decomposition into {$\rm SU(3)_c \times SU(3)_L \times U(1)_X$} multiplets: \begin{eqnarray} \form 3 \equiv \rep {84} &=& \Rep331{1,1,0} + \Rep331{1,1,1} + \Rep331{1,1,-1} \nonumber\\* && + 3 \cdot \Rep331{3,3,0} + \Rep331{3^*,3,-\frac13} + \Rep331{3,3^*,\frac13} \nonumber\\* && + 3 \cdot \Rep331{3^*,1,-\frac23} + 3 \cdot \Rep331{1,3^*,\frac23} + 3 \cdot \Rep331{1,3,\frac13} + 3 \cdot \Rep331{3,1,-\frac13} . \end{eqnarray} Looking at these decompositions, we find that the multiplets specified by \Eqn{-9+1} contain the following vector-like combinations of {$\rm SU(3)_c \times SU(3)_L \times U(1)_X$} submultiplets: \begin{eqnarray} \begin{array}{r@{\hspace{7mm}}l} 28 & \Rep331{1,1,0} \\[2mm] 3 & \Rep331{3^*,1,\frac13} + \Rep331{3,1,-\frac13} \\[2mm] 3 & \Rep331{1,3^*,-\frac13} + \Rep331{1,3,\frac13} \\[2mm] 1 & \Rep331{3^*,3,-\frac13} + \Rep331{3,3^*,\frac13} \\[2mm] 1 & \Rep331{1,1,1} + \Rep331{1,1,-1} \,. \\ \end{array} \label{331vec} \end{eqnarray} Such combinations can have {$\rm SU(3)_c \times SU(3)_L \times U(1)_X$} invariant mass terms at a stage where the said symmetry is still intact, and therefore do not affect the SM reduction of the model. These singlets and vector-like particles do not contribute to the {$\rm SU(3)_c \times SU(3)_L \times U(1)_X$} anomalies. In addition, we find that the decomposition consists of several chiral multiplets of {$\rm SU(3)_c \times SU(3)_L \times U(1)_X$}.
Under the group $\rm SU(3)_L$, these multiplets transform either like triplets or antitriplets, or like singlets. We want the triplets and antitriplets to contain the doublets of the standard electroweak gauge group. For that, we first have to discuss how the SM is embedded into {$\rm SU(3)_c \times SU(3)_L \times U(1)_X$}. Since the members of an $\rm SU(2)_L$ doublet have the same value of the hypercharge $Y$, we need to ensure that the diagonal $Y$ generator has two equal entries in the $3\times3$ representation of $\rm SU(3)_L$. We notice that the PPF model was constructed in such a way that the second and the third elements of a triplet end up with the same value of $Y$ through \Eqn{PPF2SM}, whereas in the SVS model, the first two elements have the same value. In order to obtain something different, we can now try a solution in which the first and the third diagonal elements of $Y$ are equal. This solution is given by \begin{subequations} \label{our2SM} \begin{eqnarray} I_{3L} &=& \frac12 T_3 + \frac{\surd3}2 T_8 \,, \\* Y &=& \frac12 T_3 - \frac1{2\surd3} T_8 + X \,. \end{eqnarray} \end{subequations} With these assignments, we now present the chiral multiplets of {$\rm SU(3)_c \times SU(3)_L \times U(1)_X$} that are present in the choice of representations in \Eqn{-9+1}, as well as their SM decompositions: \begin{subequations} \label{SMsurvivors} \begin{eqnarray} 3 \cdot \Rep331{3,3,0} &\to& 3 \cdot \SMrep{3,2,\frac16} + 3 \cdot \SMrep{3,1,-\frac13} \,, \label{330->SM} \\* 6 \cdot \Rep331{3^*,1,\frac13} &\to& 6 \cdot \SMrep{3^*,1,\frac13} \,, \label{3*1d}\\* 6 \cdot \Rep331{1,3^*,-\frac13} &\to& 6 \cdot \SMrep{1,2,-\frac12} + 6 \cdot \SMrep{1,1,0} \,, \\* 3 \cdot \Rep331{3^*,1,-\frac23} &\to& 3 \cdot \SMrep{3^*,1,-\frac23} \,, \\* 3 \cdot \Rep331{1,3^*,\frac23} &\to& 3 \cdot \SMrep{1,2, \frac12} + 3 \cdot \SMrep{1,1,1} .
\end{eqnarray} \end{subequations} We see that, at the SM level, there are three quark doublets, three multiplets that transform like $\hat u_L$, and also three that transform like $\hat\ell_L$: precisely the numbers necessary for obtaining three fermion generations in the SM. We find six SM multiplets that transform like $\hat d_L$ in \Eqn{3*1d}, but that is not a problem, because three of them can pair up with an equal number of \SMrep{3,1,-\frac13} multiplets that appear in \Eqn{330->SM}, and can obtain a bare mass term that is invariant under the SM gauge group. Similarly, the three \SMrep{1,2, \frac12} multiplets pair up with three of the six \SMrep{1,2, -\frac12} multiplets, leaving three lepton doublets at the SM level. Thus, we are left with exactly the three chiral generations of SM fermions which are necessary for building up the standard model. In addition, there are vector-like combinations under the SM gauge group, which come from \Eqn{331vec} and \Eqn{SMsurvivors}. We write these combinations as follows: \begin{eqnarray} \begin{array}{clc} \hline \mbox{Notation} & \mbox{Representations under} & \mbox{Number of} \\ & \mbox{the SM gauge group} & \mbox{copies} \\ \hline\\[-1mm] Q & \SMrep{3,2, \frac16} +\SMrep{3^*,2,- \frac16} & \ 1 \\[2mm] U & \SMrep{3,1, \frac23} +\SMrep{3^*,1,- \frac23} & \ 1 \\[2mm] D & \SMrep{3,1,- \frac13}+\SMrep{3^*,1, \frac13} & \ 6 \\[2mm] L & \SMrep{1,2, \frac12} +\SMrep{1,2,- \frac12} & \ 6 \\[2mm] E & \SMrep{1,1,-1}+\SMrep{1,1,1} & \ 1 \\[2mm] S & \SMrep{1,1,0} & 40 \\[4mm] \hline \label{SMvec} \end{array} \end{eqnarray} This model is therefore very different from the existing models based on the gauge group {$\rm SU(3)_c \times SU(3)_L \times U(1)_X$}, the ones outlined in \sec{s:331}. In the existing models, cancellation of gauge anomalies is obtained by making one fermion generation different from the others. In the present model, this is not the case.
At the level of the {$\rm SU(3)_c \times SU(3)_L \times U(1)_X$} group, all fermion generations have the same transformation properties. However, there are extra fermions that help cancel the anomalies, fermions which become vector-like at the SM level. Thus, even apart from the grand unification prospects via SU(9), the multiplets presented in \Eqn{SMsurvivors} can be taken to represent a new {$\rm SU(3)_c \times SU(3)_L \times U(1)_X$} model, which can be studied in its own right. However, if we consider it purely as a {$\rm SU(3)_c \times SU(3)_L \times U(1)_X$} model, there is no explanation of the number of generations: one can obtain any number of generations by changing the number of copies of each multiplet while keeping their ratios intact. Thus, it is fair to say that the explanation of the number of generations comes directly from the grand unification group SU(9), and not from its subgroup {$\rm SU(3)_c \times SU(3)_L \times U(1)_X$}. \section{Gauge coupling unification and the intermediate scales}\label{s:gut} We now address the unification picture of the SU(9) model defined by the condition given in \Eqn{-9+1}. For the sake of simplicity, we shall assume in our analysis that the SU(9) GUT model breaks spontaneously, through the scheme given by \Eqn{breaking}, directly to the SM gauge group at a unique unification scale $\Lambda$. Thus, the unification condition for the SM gauge couplings $\alpha_{1,2,3}$ at the scale $\Lambda$ reads \begin{eqnarray} \label{eq:unif} \alpha_U = k_1\,\alpha_1(\Lambda) \,=\, k_2\,\alpha_2(\Lambda) \,=\, k_3\, \alpha_3(\Lambda) \,. \end{eqnarray} The normalization constants $k_i$ are defined by \begin{eqnarray} k_i\,\equiv\, \frac{\mathop{\rm tr} T^2_i}{\mathop{\rm tr} T^2}\,, \end{eqnarray} where $T$ and $T_i$ denote the same unbroken generator, properly normalized within the GUT group and within its subgroup $G_i$ respectively, for a given representation.
Taking into account the decomposition of the fundamental representation given in \Eqn{breaking} and the GUT normalization in \Eqn{eq:gutnorm}, one easily derives that the normalization constants $k_i$ take the canonical values \begin{eqnarray} k_1=\frac53\,,\quad k_2=k_3=1\,, \end{eqnarray} as in the SU(5) GUT models. The relation given in \Eqn{eq:unif} is valid only at the unification scale, and one has to relate these gauge couplings to measurable quantities at the electroweak scale. The evolution of the gauge couplings in the one-loop approximation is governed by the solutions of the Renormalization Group Equations (RGE's), which depend on the masses of the particles in the model. We take the SM particles and $n_H$ Higgs doublets at the electroweak scale. In addition, the vector-like combinations given in \Eqn{SMvec} can also have masses between $M_Z$ and the unification scale $\Lambda$. We will assume, for the sake of simplicity, that all extra vector-like fermions with the same quantum numbers share the same mass scale, and denote these scales collectively as $M_I$, where $I$ can take the `values' $Q,\,U,\,D,\,L,\,E$ as shown in \Eqn{SMvec}.
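The values of the $k_i$ quoted above follow from a short trace computation; the Python check below (illustrative only) uses the fundamental of SU(9) as decomposed in \Eqn{breaking}, with a GUT generator normalized so that $\mathop{\rm tr} T^2 = \frac12$:

```python
from fractions import Fraction as F

# fundamental of SU(9): three states with Y = -1/3, two with Y = +1/2,
# and four SM singlets with Y = 0
tr_Y2 = 3 * F(-1, 3) ** 2 + 2 * F(1, 2) ** 2   # = 5/6
k1 = tr_Y2 / F(1, 2)                           # against tr T^2 = 1/2

tr_I3L2 = 2 * F(1, 2) ** 2                     # I_3L = +-1/2 on the doublet
k2 = tr_I3L2 / F(1, 2)

tr_T3c2 = 2 * F(1, 2) ** 2                     # a colour generator, e.g. T_3
k3 = tr_T3c2 / F(1, 2)

assert (k1, k2, k3) == (F(5, 3), 1, 1)
```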
The solutions of the RGE's can then be written as \begin{subequations} \begin{align} \label{eq:RGEsol1} \alpha^{-1}_1(\mu)&=\alpha^{-1}_1(M_Z)-\frac{b_1}{2\pi}\ln\left(\frac{\mu}{M_Z}\right) -\sum_I\frac{b^I_1}{2\pi}\ln\left(\frac{\mu}{M_I}\right) \,,\\ \label{eq:RGEsol2} \alpha^{-1}_2(\mu)&=\alpha^{-1}_2(M_Z)-\frac{b_2}{2\pi}\ln\left(\frac{\mu}{M_Z}\right) -\sum_I\frac{b^I_2}{2\pi}\ln\left(\frac{\mu}{M_I}\right) \,,\\ \label{eq:RGEsol3} \alpha^{-1}_3(\mu)&=\alpha^{-1}_3(M_Z)-\frac{b_3}{2\pi}\ln\left(\frac{\mu}{M_Z}\right) -\sum_I\frac{b^I_3}{2\pi}\ln\left(\frac{\mu}{M_I}\right) \,, \end{align} \end{subequations} where $b_i$'s are the one-loop beta coefficients that take into account the quantum numbers of the SM fermions, the gauge bosons and $n_H$ Higgs doublets: \begin{eqnarray} b_1=\frac{20}{9}\, n_g+\frac{n_H}{6}\,,\quad b_2=\frac{4}{3}\, n_g+\frac{n_H}{6}-\frac{22}{3}\,,\quad b_3=\frac{4}{3}\, n_g-11\,. \end{eqnarray} The $b^I_i$'s are the one-loop beta coefficients for the intermediate extra vector-like fermions. The values of these $b^I_i$'s are given in table~\ref{t:betavec}. \begin{table} \caption{Beta coefficients for the extra vector-like fermions.}\label{t:betavec} \begin{center} \begin{tabular}{c|ccccc} & $Q$ & $U$ & $D$ & $L$ & $E$\\ \hline $b_1$ & $2/9$ & $16/9$ & $4/9$ & $2/3$ & $4/3$ \\ $b_2$ & $2$ & $0$ & $0$ & $2/3$ & $0$ \\ $b_3$ & $4/3$ & $2/3$ & $2/3$ & $0$ & $0$ \\ \end{tabular} \end{center} \end{table} In order to get some insight into the unification in the one-loop approximation, i.e. to understand the intermediate scales which lead to successful unification, let us define the effective beta coefficients $B_i$~\cite{Giveon:1991zm,EmmanuelCosta:2011wp}, \begin{eqnarray} B_i\equiv\frac{1}{k_i}\left(b_i+\sum_I b_i^I\,r_I\right)\,, \end{eqnarray} where the ratio $r_{I}$ is given by \begin{eqnarray} r_I = \frac{\ln\left(\Lambda/M_I\right)}{\ln\left(\Lambda/M_Z\right)}\,. 
\end{eqnarray} It is also convenient to introduce the differences $B_{ij}\equiv B_i-B_j$, such that \begin{eqnarray} B_{ij}= B^{\text{SM}}_{ij}+\sum_I\Delta^I_{ij}r_I\,, \end{eqnarray} where $B^{\text{SM}}_{ij}$ corresponds to the SM particle contribution and \begin{eqnarray} \Delta^I_{ij}= \frac{b^I_i}{k_i}-\frac{b^I_j}{k_j}\,. \end{eqnarray} We then find that \begin{subequations} \label{Btests} \begin{eqnarray} \label{eq:Btest} B\equiv\frac{B_{23}}{B_{12}} &=& \frac{\sin^2\theta_W-\dfrac{k_2}{k_3}\dfrac{\alpha} {\alpha_s}} {\dfrac{k_2}{k_1}-\left(1+\dfrac{k_2}{k_1}\right)\sin^2\theta_W}\,, \\ \label{eq:Ltest} B_{12}\, \ln \left(\frac{\Lambda}{M_Z}\right) &=& \frac{2\pi}{\alpha} \left[\frac{ 1}{k_1}-\left(\frac{1}{k_1}+\frac{1}{k_2} \right)\sin^2\theta_W\right ]. \end{eqnarray} \end{subequations} \begin{figure} \begin{center} \includegraphics{btest.eps} \end{center} \caption{The intermediate scales of the extra vector-like fermions as a function of the unification scale $\Lambda\,$.}\label{f:btest} \end{figure} Notice that the right-hand sides of \Eqn{Btests} depend only on low-energy electroweak data and the group factors $k_i$. Adopting the following experimental values at $M_Z$~\cite{Beringer:1900zz} \begin{subequations} \label{expvalues} \begin{eqnarray} \alpha^{-1} &=& 127.916\pm0.015\,, \\ \sin^2\theta_W &=& 0.23116\pm0.00012\,, \\ \alpha_s &=& 0.1184\pm0.0007\,, \end{eqnarray} \end{subequations} the above relations give \begin{eqnarray} \label{eq:Btestexp} B &=& 0.718\pm0.003\,, \\* B_{12}\,\ln\left(\frac{\Lambda}{M_Z}\right) &=& 185.0\pm0.2\,, \end{eqnarray} in the canonical GUT models with $k_i=(5/3,1,1)$. The coefficients $B_{ij}$ that appear in the left-hand sides of \Eqn{Btests} depend strongly on the particle content of the theory.
For instance, considering the SM-like particles, with $n_g$ generations, together with $n_H$ light Higgs doublets, one has for the coefficients $B_{12}$ and $B_{23}$: \begin{eqnarray} \label{eq:beffSM} B_{12}=\frac{22}{3}-\frac{n_H}{15}\,,\quad B_{23}=\frac{11}{3}+ \frac{n_H}{6}\,. \end{eqnarray} In the case of the SM, i.e., $n_g=3$ and $n_H=1$, one has \begin{eqnarray} B\,=\,115/218\approx0.53\,, \end{eqnarray} which is not compatible with the value quoted in \Eqn{eq:Btestexp}: clearly, the $B$-test fails badly in the SM. In table~\ref{tab:betacoeffs}, we have summarized the contributions of the vector-like fermions to $B_{12}$ and $B_{23}$. \begin{table} \caption{\label{tab:betacoeffs} Contributions of the extra vector-like fermions to the coefficients $B_{12}$ and $B_{23}$.} \begin{center} \begin{tabular}{c|ccccc} & $Q$ & $U$ & $D$ & $L$ & $E$\\ \hline $B_{12}$ & $-28/15$ & $16/15$ & $4/15$ & $-4/15$ & $4/5$\\ $B_{23}$ & $2/3$ & $-2/3$ & $-2/3$ & $2/3$ & $0$ \end{tabular} \end{center} \end{table} Using these values, we have taken random values of the intermediate scales of the extra vector-like fermions and looked for combinations of these intermediate scales which are compatible with successful unification. In our numerics we have assumed only one Higgs doublet, i.e., $n_H=1$, and we have imposed a rough lower bound on the unification scale, $\Lambda > 6 \times10^{15}$\,GeV, coming from the non-observation of proton decay into $e^+\pi^0$ \cite{Beringer:1900zz}. A million such random combinations were sampled. A few of the allowed ones are presented in Fig.~\ref{f:btest}. From the entire set of combinations used in our numerics, we find that the allowed range for the unification scale is \begin{eqnarray} 6\times10^{15}\,\text{GeV}\leq\Lambda\leq2.2\times10^{16}\,\text{GeV}\,, \end{eqnarray} which is also roughly what the limited data of Fig.~\ref{f:btest} indicates.
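The numbers entering the $B$-test can be reproduced with a few lines of Python (an illustrative re-computation from the central values quoted above, not part of the paper's numerics):

```python
from math import pi

# central values at M_Z and canonical GUT normalization
alpha_inv, sin2w, alpha_s = 127.916, 0.23116, 0.1184
k1, k2, k3 = 5 / 3, 1.0, 1.0
alpha = 1 / alpha_inv

# right-hand sides of the B-test relations
B_exp = (sin2w - (k2 / k3) * alpha / alpha_s) / \
        (k2 / k1 - (1 + k2 / k1) * sin2w)
B12_lnL = (2 * pi / alpha) * (1 / k1 - (1 / k1 + 1 / k2) * sin2w)

# SM prediction with n_H = 1: B = B23 / B12 = 115/218
B_SM = (11 / 3 + 1 / 6) / (22 / 3 - 1 / 15)

assert abs(B_exp - 0.718) < 0.001      # matches the quoted 0.718
assert abs(B12_lnL - 185.0) < 0.2      # matches the quoted 185.0
assert abs(B_SM - 115 / 218) < 1e-12   # ~ 0.53, far from 0.718: the SM fails
```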
For the scales of the vector-like extra fermions we have \begin{subequations} \begin{eqnarray} 5.5\times10^{13}\,\text{GeV}\leq M_Q\leq2.4\times10^{14}\,\text{GeV}\,,\\[2mm] 1.2\times10^{15}\,\text{GeV}\leq M_U \leq2.2\times10^{16}\,\text{GeV}\,,\\[2mm] 6.6\times10^{13}\,\text{GeV}\leq M_D \leq1.2\times10^{16}\,\text{GeV}\,,\\[2mm] 7.4\times10^{13}\,\text{GeV}\leq M_L\leq2.0\times10^{16}\,\text{GeV}\,,\\[2mm] 1.7\times10^{15}\,\text{GeV}\leq M_E\leq2.1\times10^{16}\,\text{GeV}\,. \end{eqnarray} \end{subequations} We see that the mass scales of the vector-like fermions are high, but they are indeed necessary for driving the evolution of the gauge couplings to perfect unification. If one varies the number of Higgs doublets, $n_H=2,3$, the ranges of intermediate scales above do not change significantly. \section{Conclusions}\label{s:conclu} To summarize, we have succeeded in building a grand unified model based on the group SU(9). The model uses fermions in antisymmetric representations only, and the consistency of the model demands that the number of fermion generations is three. As mentioned in the introduction, earlier models based on the gauge group {$\rm SU(3)_c \times SU(3)_L \times U(1)_X$} and on SO(18) also had this property of predicting the number of generations. It is interesting to note that our group SU(9) contains {$\rm SU(3)_c \times SU(3)_L \times U(1)_X$}, and is contained in SO(18). However, in our model, the transformation properties of the known fermions in the {$\rm SU(3)_c \times SU(3)_L \times U(1)_X$} subgroup of SU(9) are not the same as those used in earlier models based on the {$\rm SU(3)_c \times SU(3)_L \times U(1)_X$} group. It is a different embedding, in which the known fermions of all generations transform in the same way under {$\rm SU(3)_c \times SU(3)_L \times U(1)_X$}. On the other hand, a comparison with the SO(18) models also reveals an interesting connection. 
In SO(18) models, the fermion generations are contained in the spinor representation. The spinor representation of SO(18), which is $\rep{256}$-dimensional, decomposes under the SU(9) subgroup as follows: \begin{eqnarray} \rep{256} = \form 1 + \form 3 + \form 5 + \form 7 + \form 9 \,. \end{eqnarray} Our SU(9) model uses only the first two kinds of these submultiplets to predict the number of fermion generations. We have also shown that our model can be consistent with unification requirements. In this part of the analysis, we have assumed a direct breaking of SU(9) into the SM gauge group. A more detailed analysis, including possibilities of intermediate symmetry breaking scales, will be taken up in a future work. \section*{Acknowledgments} The work of D.E.C.~was supported by Associa\c c\~ao do Instituto Superior T\'ecnico para a Investiga\c c\~ao e Desenvolvimento (IST-ID) and also by FCT through the projects PEst-OE/FIS/UI0777/2011 (CFTP), CERN/FP/123580/2011 and PTDC/FIS-NUC/0548/2012. \bibliographystyle{unsrt}
\section{INTRODUCTION} \label{sec:intro} User authentication utilizing the Electroencephalography (EEG) modality is feasible provided it possesses the following biometric characteristics, namely, universality, uniqueness, permanence, and collectibility~\cite{Nakamura2017_ear}. In contrast to other established physiological biometrics (face, iris, fingerprint, palm), EEG has an anti-spoofing characteristic, as it is difficult to forge complex signals from the several EEG channels of a user profile. By comparison, both active (sophisticated effort) and passive (unsophisticated effort) types of spoof attacks can be performed on face, fingerprint, iris, and palm biometrics. However, the biometric characteristics of acceptance (in terms of user friendliness) and permanence need to be enhanced to deploy EEG based authentication systems in real life. Hence, studies on enhancing the acceptance and permanence of EEG data for authentication are always encouraged. Most commonly, EEG data is acquired using wearable devices like the Acti-Cap or electrode headsets, which require a degree of initial preparation, such as placing several electrodes and applying gel to the user's scalp. This may hinder the data collection process, although the user does not have to interact with any interface to provide the data. To make EEG data acquisition more user friendly, Kosmyna et al.~\cite{Kosmyna2019attentivu_glass} have developed a wearable pair of EEG and EOG (Electrooculography) glasses. In the work by Zhang et al.~\cite{zhang2019identity}, a portable device called MindWave Mobile is used for EEG data acquisition via the user's forehead. The single-channel device simplifies the initial preparation process and improves user comfort. The data collected through this portable device has authenticated users with accuracies ranging between 80\% and 95\%. 
Further development of such EEG devices with fewer electrodes, which can be made commercially available, can enhance the acceptance/user-friendliness characteristic of EEG based biometric systems. Depending on the type of security application, an EEG based user authentication system may perform template-based user enrollment, where a user needs to submit several data samples to build templates for future use~\cite{abuhamad2020sensor}. Authenticating users in a future time frame requires EEG to have the permanence characteristic. This can be introduced by keeping the user behavior/activity constant during both enrollment and verification. Here, the conditions of the surrounding environment must also be considered. User authentication through EEG data can be performed with three data acquisition processes: (i) while relaxing with eyes closed; (ii) while exposed to visual stimuli; and (iii) while performing mental tasks~\cite{1-Bashar2016}. In this study, the EEG data is utilized from the WAY\_EEG\_GAL~\cite{6-Luciw2014} public dataset, which is acquired while users perform a mental task of lifting, holding, and resting objects of different weights (165g/330g/660g) and surfaces (sandpaper/suede/silk), where both parameters (weight and surface texture) are changed randomly. In the process, each user provides several grasp-and-lift trials over at least two hours. Successfully authenticating users across this two-hour time frame, during which each user's behavior/activity is similar, will ensure the permanence characteristic of the WAY\_EEG\_GAL~\cite{6-Luciw2014} EEG data to a considerable degree. Therefore, the aim of this study lies in evaluating the user discriminative capacity of the EEG data acquired across a span of two hours while users performed the mental task of lifting objects. 
Figures~\ref{fig:P1_raw} and~\ref{fig:P12_raw} show the time domain data of users 1 and 12 respectively, where we can visually see a difference in the data trend between session 1 and session 9. Acceptable authentication accuracy will ensure the permanence of this dataset given its dynamic nature across sessions. The state-of-the-art public dataset~\cite{6-Luciw2014} is aimed at the acquisition of EEG signals for prosthetic control of object manipulation. Through it, one can study the precision grasp-and-lift (GAL) of an object. The dataset has 32 channels of EEG data, 5 channels of EMG data, and kinematics data. However, in this study, the EMG and kinematics data are not utilized. Here, user authentication is performed only through the EEG data to evaluate its degree of uniqueness and permanence while users are executing grasp-and-lift trials of objects. After the initial pre-processing, statistical features are extracted over each of the 32 channels of the EEG data, which increases the data dimensionality. Therefore, feature importance is performed per genuine user, which selects the best features to authenticate each genuine user against the rest. State-of-the-art studies on EEG based user authentication show high performances when Machine Learning classifiers, namely, kNN (~\cite{jayarathne2017-intro},~\cite{2-Rahman2021},~\cite{Valizadeh2019_decrypting}), SVM (~\cite{4-Albasri2019},~\cite{1-Bashar2016},~\cite{2-Rahman2021},~\cite{Valizadeh2019_decrypting},~\cite{koike2016high}), and LDA (~\cite{chin2019exploring},~\cite{5-Jayarathne2020}) are utilized. Hence, in this study, for the two pilot tests and the user authentication experiments, kNN, SVM, and LDA classifiers are chosen. The two pilot studies performed utilize LDA and non-linear SVM (Radial Basis Function), which achieve accuracies of 53.3\% and 68\% respectively on multi-labeled data (each user had a unique label). 
The pilot studies give an idea of the learning trends of these Machine Learning models on the multi-labeled WAY\_EEG\_GAL data. Following the pilot tests, the experiments on user authentication are performed, where each genuine user is authenticated against all other impostor users. For this, initially, a kNN classifier is used, which achieves $\approx 75$\% accuracy. Based on the pilot tests and this first kNN attempt, SVM (both linear and non-linear) models are finally chosen to perform classification. The overall average accuracies of 85.18\% and 86.92\% across users are achieved using linear and non-linear SVMs respectively. Also, across all users, overall average F1 scores of 87.51\% and 88.94\% are observed using linear and non-linear SVMs respectively. Beyond the overall performances, individuals showing high performances of 95.3\% accuracy (95.3\% F1 score) with linear SVM and 97.4\% accuracy (97.3\% F1 score) with non-linear SVM are also observed. Therefore, this work has the following contributions: i) Utilizing the WAY\_EEG\_GAL public dataset~\cite{6-Luciw2014} for user authentication. ii) Evaluating the discriminating (uniqueness) and permanence characteristics of the EEG data from the public dataset~\cite{6-Luciw2014}. iii) Performing feature importance per genuine user for authenticating against all other impostors. iv) Utilizing both linear and non-linear SVMs for user authentication on this dataset. v) A thorough statistical test to evaluate the significance of the difference in performance trends across all users between the linear and non-linear SVMs. We observe a close performance between the linear and non-linear SVMs. The rest of the work is organized as follows: Section~\ref{sec:lit_rev} presents the related works. Section~\ref{sec:data} describes the public dataset. In Section~\ref{sec:method} we discuss the experimental procedures. Section~\ref{sec:results} reports all the experimental results. 
Section~\ref{sec:hypo_test} reports hypothesis testing. Lastly, Section~\ref{sec:conclusion} concludes our study. \section{RELATED WORKS} \label{sec:lit_rev} The criteria for choosing the state-of-the-art studies that are related to this work are based on the Machine Learning algorithms utilized for user authentication and the user behavior involved during acquisition of the data. Most recent studies tend to expose users to stimuli (visual or auditory) or keep them in a resting state while the EEG data is acquired. Given that the data collection procedure in WAY\_EEG\_GAL involves lifting objects of random weights and textures, authenticating users from this dataset is based on individuals performing mental tasks for 2 hours. The aim of working on this dataset~\cite{6-Luciw2014} for user authentication through EEG is to evaluate the uniqueness and permanence of the data collected while users are engaged in mental tasks of lifting random objects. The test of uniqueness will indicate the discriminating capacity of this dataset for user authentication through mental tasks. On the other hand, the test of permanence will estimate the authentication capacity of the EEG data over a time span of 2 hours, given that EEG is dynamic compared to most biometrics. The study by Bashar et al.~\cite{1-Bashar2016} claims the validity of utilizing EEG as a biometric modality. An individual's brain signals are linked to their genetic information, which makes them unique from other individuals. Collecting data from 9 subjects through an EMOTIV headset, they classify individuals using a multi-class SVM classifier. The data is collected at a sampling rate of 128 Hz from 5 EEG channels (AF3, AF4, T7, T8, Pz) per user for six weeks. Over the span of several weeks of data collection, they hold several sessions, each involving two tasks, eyes closed and eyes open, in a relaxed state with minimal body movements. 
They achieve best performances of 94.44\% TPR (True Positive Rate) and 5.56\% FPR (False Positive Rate). Nakamura et al.~\cite{Nakamura2017_ear} perform a feasibility test of an in-ear EEG sensor in their study. The in-ear sensor earpiece, consisting of two electrodes, is inserted in the user's left ear canal. It is made of a viscoelastic memory-foam substrate and two conductive flexible electrodes, which makes it a generic earpiece. Such devices enhance the user-friendliness of EEG based authentication once deployed in real life. Data is collected at a sampling rate of 1200 Hz from 15 subjects for two days. Users provide data in an eyes-closed, resting state. Utilizing a binary SVM, they achieve a best accuracy of 99\%. In a recent work by Rahman et al.~\cite{2-Rahman2021}, EEG data from 10 users is fused with their keypress data. Each of the 10 volunteers participates for 10 sessions, where in each session a user types the fixed-text password "qu-ELEC371" 50 times. EEG has anti-spoofing capacity but may lack accuracy due to variability~\cite{2-Rahman2021}. This shows the need from the research community to test the permanence of EEG as a biometric modality. The EEG data is collected using an EMOTIV headset (from the AF3, AF4, T7, T8, PZ channels) while users are performing the mental task of typing the password. The EEG data undergoes pre-processing using baseline correction, filtering, segmentation, and resampling techniques. They extract features from the EEG data which include statistical, frequency domain, and time domain (including the notable Hjorth parameters) features. They achieve 99.7\% and 98.7\% accuracies using only keypress and EEG respectively. They report the enhanced individual user performance when both modalities are fused at score level. 
The study by Yang et al.~\cite{yang2019improved} involves evaluation of two public datasets, namely, the UCI (University of California Irvine) EEG dataset~\cite{UCI_data} and the EEG MMI (Motor Movement/Imagery) Dataset~\cite{MMI_data}, for user authentication. The UCI~\cite{UCI_data} and MMI~\cite{MMI_data} datasets involve 122 and 109 volunteers respectively. In each dataset the EEG data is collected from EEG headsets with 64 channels. The UCI~\cite{UCI_data} and MMI~\cite{MMI_data} EEG data are collected at the sampling rates of 256 Hz and 160 Hz respectively. The data collections involve activities like eyes open, eyes closed, and imagery tasks. Utilizing an LDA (Linear Discriminant Analysis) classifier, Yang et al.~\cite{yang2019improved} achieve best accuracies of 93.28\% and 98.24\% with the UCI~\cite{UCI_data} and MMI~\cite{MMI_data} datasets respectively. The work by Arnau-Gonz{\'a}lez et al.~\cite{Arnau2018_influence} is another study that uses public datasets for EEG based user authentication. They utilize the DEAP (Dataset for Emotion Analysis using EEG, Physiological and Video Signals)~\cite{DEAP_data}, MAHNOB-HCI~\cite{MAHNOB_data}, and SEED (SJTU Emotion EEG Dataset)~\cite{SEED_data} datasets for user authentication, which involve 32, 30, and 15 volunteers respectively. The EEG data of DEAP~\cite{DEAP_data}, MAHNOB-HCI~\cite{MAHNOB_data}, and SEED~\cite{SEED_data} are collected at sampling rates of 512 Hz, 512 Hz, and 1000 Hz respectively, which are downsampled in Arnau-Gonz{\'a}lez et al.'s~\cite{Arnau2018_influence} study. Each dataset exposes volunteers to video stimuli to capture variations in emotion. Involving several machine learning and deep learning algorithms, this work achieves a best result of 99.15\% accuracy with the DEAP dataset~\cite{DEAP_data}. In this work, we utilize the WAY\_EEG\_GAL public dataset~\cite{6-Luciw2014} and perform Machine Learning based classification for user authentication. 
We have performed feature importance per user profile using ExtraTreeClassifier algorithm. Both linear and non-linear SVMs are utilized for classification so that the difference in the performances can be observed between the two algorithms belonging to the same family. \begin{figure*}[t] \centering \fbox{\includegraphics[width=0.85\linewidth]{figure1.png}} \caption{\small Data collection set up of WAY\_EEG\_GAL~\cite{6-Luciw2014}. See Appendix~\ref{FirstAppendix} for EEG electrode labelings.} \label{fig:data_collect} \end{figure*} \begin{table*}[h] \centering \small \begin{tabular}{ |c|c|c|c|c|c|c| } \hline \textbf{Data status} &TOTAL &AVERAGE &MEDIAN &MINIMUM &MAXIMUM &STANDARD \\ &&&&& &DEVIATION \\\hline \textbf{Pre-processing} &19548778 &1629064.83 &1632897.5 &1447770 &1858867 &125591.34 \\\hline \textbf{Post-processing} &310137 &25844.75 &25905 &22966 &29493 &1993.59 \\\hline \end{tabular} \caption{\small Data statistics for all 9 sessions across 12 users in pre-processing and post-processing stages.} \label{table:data_stat_pre_post} \end{table*} \begin{figure*}[t] \centering \fbox{\includegraphics[width=0.9\linewidth]{figure2.png}} \caption{\small The Machine Learning pipeline utilized for this study.} \label{fig:pipeline} \end{figure*} \section{WAY\_EEG\_GAL DATASET~\cite{6-Luciw2014}} \label{sec:data} The WAY\_EEG\_GAL~\cite{6-Luciw2014} (WAY : Wearable interfaces for hAnd function recoverY; EEG : Electroencephalography; GAL : Grasp And Lift trials) dataset has 12 volunteers (8 females, 4 males; 19-35 age group; right-handed) where each user participates for at least two hours providing around 328 grasp-and-lift trials. In total there are 10 sessions of data per user out of which it is observed that the $10^{th}$ session does not have corresponding EMG data. Hence, to maintain uniformity we work with the first 9 sessions of each user. 
Table~\ref{table:data_stat_pre_post} shows the data statistics in the pre- and post-processing stages across the 12 users for all the 9 sessions. Each user is prompted to grab, lift, hold, and rest objects of different weights (165g, 330g, 660g) and surfaces (sandpaper, suede, silk), which are changed randomly during the data collection process, during which the EEG data (along with other data) is collected. The random changes of the object's weight and surface also contribute to the dynamic nature of the EEG dataset. The entire user activity (mental task) involves the prompted user reaching out for the object, grasping it with the thumb and index finger, lifting and holding it for a couple of seconds, and finally putting it back on the support surface, thereafter releasing it and returning the hand to a designated rest position. The EEG is captured using a 32-channel Acti-cap where the sampling rate per channel is 500 Hz. Figure~\ref{fig:data_collect} shows the EEG data collection setup while a user performs GAL trials. The original target of the state of the art~\cite{6-Luciw2014} is to utilize EEG signals for prosthetic control of object manipulation. The EEG scalp recordings during GAL trials decode the sensation, intention, and action of each individual. \section{EXPERIMENTAL PROCEDURES} \label{sec:method} This section describes the Machine Learning pipeline followed to implement user authentication. See Figure~\ref{fig:pipeline}. \subsection{Pilot tests} Two pilot tests are performed using LDA (Linear Discriminant Analysis) and SVM (Support Vector Machine) keeping an 80:20 data split for training:testing. In the pilot tests, 12 labels for the twelve users' data are used and a multi-class classification is performed. This is to observe the learning capacities of the models on the dataset~\cite{6-Luciw2014}. The LDA with an eigensolver produces an accuracy of 53.3\%, and the SVM with C=0.1 and gamma=scale as parameters produces an accuracy of 68\%. 
Actual user authentication experiments of a genuine user versus impostors are performed after observing the results of true positives, true negatives, false positives, and false negatives from the confusion matrices in figures~\ref{fig:pilot_RF} and~\ref{fig:pilot_SVM}. The procedures of the actual authentication experiments are explained below. \begin{figure}[t] \centering \fbox{\includegraphics[width=0.75\linewidth]{figure3.png}} \caption{\small Confusion matrix of pilot testing using Linear Discriminant Analysis.} \label{fig:pilot_RF} \end{figure} \begin{figure}[t] \centering \fbox{\includegraphics[width=0.75\linewidth]{figure4.png}} \caption{\small Confusion matrix of pilot testing using Support Vector Machine.} \label{fig:pilot_SVM} \end{figure} \subsection{Data Preprocessing} The sampling rate per channel of the EEG data is 500 Hz. A bandpass filter between 0.2 Hz and 45 Hz is used to obtain the EEG signal. All original 32-channel EEG data are retained and no pruning is performed. All the 9 sessions from each user's data are taken where corresponding EMG data is available. The EMG data ensures the performance of GAL trials by users in all 9 sessions, although using the EMG data is out of the scope of this authentication study. See Appendix~\ref{SecondAppendix} for raw data visualization. To each user's data, a window of 250 milliseconds with a 50\% overlap is applied. Hence, 8 windows are available per second, with $\approx 125$ samples in each. \begin{figure}[h] \centering \fbox{\includegraphics[width=0.9\linewidth]{figure5.png}} \caption{\small The number of important features chosen per user by the ExtraTreeClassifier algorithm.} \label{fig:feat_select} \end{figure} \subsection{Feature extraction and feature selection} Univariate statistical features - average, standard deviation, root mean square, mean absolute value, skewness, and kurtosis - are extracted from each of the 32 EEG channels per window. 
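The segmentation and per-window statistics just described can be sketched with NumPy; `segment` and `window_features` are illustrative helper names, and the moment-based skewness/kurtosis formulas stand in for whatever library routines were actually used:

```python
import numpy as np

FS = 500              # Hz, per-channel sampling rate
WIN = FS // 4         # 250 ms window -> 125 samples

def segment(eeg, win=WIN, overlap=0.5):
    """Split a (samples, channels) array into overlapping windows.

    A 50% overlap gives a hop of win // 2 samples, i.e. roughly
    8 windows per second of data at 500 Hz."""
    hop = int(win * (1.0 - overlap))
    n = 1 + (len(eeg) - win) // hop
    return np.stack([eeg[i * hop:i * hop + win] for i in range(n)])

def window_features(win):
    """Six statistics per channel: mean, standard deviation, RMS,
    mean absolute value, skewness, and (excess) kurtosis.
    For 32 channels this yields 32 * 6 = 192 feature values."""
    mu = win.mean(axis=0)
    sd = win.std(axis=0)
    rms = np.sqrt((win ** 2).mean(axis=0))
    mav = np.abs(win).mean(axis=0)
    d = win - mu
    skew = (d ** 3).mean(axis=0) / sd ** 3
    kurt = (d ** 4).mean(axis=0) / sd ** 4 - 3.0
    return np.concatenate([mu, sd, rms, mav, skew, kurt])
```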
The simplified statistical features can reduce data dimensionality, improve the signal-to-noise ratio, and enhance classifier performance~\cite{gupta2020next}. Therefore, in total there are $32\ \text{channels} \times 6\ \text{features} = 192$ feature columns. The features are normalized after extraction. Features from sample users are plotted under Appendix~\ref{ThirdAppendix}. Given the feature dimension, a threshold-based feature selection algorithm (ExtraTreeClassifier) is used. The ExtraTreeClassifier feature selection algorithm calculates feature importances. The pre-tuned threshold decides the number of features that are significant to a user. The number of important features selected per user by the ExtraTreeClassifier is shown in Figure~\ref{fig:feat_select}. The procedure is shown in Algorithm~\ref{alg:feat_imp}; the importance array is kept in its original order so that the collected indices correspond to the actual features. \begin{algorithm}[h] \caption{ExtraTreeClassifier Feature Importance} \label{alg:feat_imp} \begin{algorithmic}[1] \State df $\gets$ training feature columns \State model $\gets$ ExtraTreesClassifier() \State model.fit(df, training labels) \State feature\_importance $\gets$ model.feature\_importances\_ \State imp\_feat $\gets$ [] \Comment\textit{array to store indices of important features} \For {$i \in$ range(len(feature\_importance))} \If {feature\_importance[i] $>=$ threshold} \State imp\_feat.append(i) \EndIf \EndFor \end{algorithmic} \end{algorithm} Appendix~\ref{FourthAppendix} shows a sample visualization of important features of a user with a tuned user-specific threshold. Table~\ref{table:data_stat_pre_post} also shows the statistics of the data after it has been processed up to this stage (the post-processing row). \subsection{Data Splitting} The EEG data is split into non-overlapping train, validation, and test sets. For each user, the first 5 sessions out of the 9 sessions are used for training, the next 2 sessions are used for validation, and the remaining 2 are used for testing. 
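The session-wise split (and the one-versus-rest labeling used later) can be sketched as follows; both helper names are ours, and the arrays stand for per-session feature matrices:

```python
import numpy as np

def split_sessions(sessions):
    """Chronological per-user split of the 9 session feature arrays:
    first 5 for training, next 2 for validation, last 2 for testing."""
    assert len(sessions) == 9
    return (np.vstack(sessions[:5]),
            np.vstack(sessions[5:7]),
            np.vstack(sessions[7:]))

def label_one_vs_rest(genuine, impostors):
    """Label the genuine user's samples 1 and pooled impostor samples 0."""
    X = np.vstack([genuine] + list(impostors))
    y = np.concatenate([np.ones(len(genuine)),
                        np.zeros(sum(len(i) for i in impostors))])
    return X, y
```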
After processing the data up to feature normalization, the numbers of training, validation, and testing samples of all users are 195290, 65038, and 49809 respectively. The validation and testing samples differ by $\approx 15000$ because the duration of the sessions varies for users. We label the three sets of data with 1 for genuine users and 0 for all other users (impostors). Each genuine user is authenticated against all other impostors. Figure~\ref{fig:one_vs_many} shows a schematic diagram of a one versus many authentication scenario where the orange highlighted user to be authenticated is genuine and all others are impostor users. The training data is balanced by downsampling each impostor's data such that the sum of the total impostor samples closely matches the total number of genuine samples. \begin{table*}[h] \centering \small \begin{tabular}{ |c|c|c||c|c|c|c| } \hline \textbf{Performance} &\textbf{VAL ACC} &\textbf{VAL ACC} &\textbf{TEST ACC} &\textbf{TEST ACC} &\textbf{TEST F1 score} &\textbf{TEST F1 score} \\ \textbf{Statistics} &LSVM &NLSVM & LSVM &NLSVM &LSVM &NLSVM \\\hline \textbf{Average} &86.37\% &88.81\% &85.18\% &86.92\% &87.51\% &88.94\% \\\hline \textbf{Maximum} &95.40\% &98.40\% &95.30\% &97.40\% &95.30\% &97.30\% \\\hline \textbf{Minimum} &79.80\% &76.40\% &76.40\% &78.90\% &81.00\% &83.20\% \\\hline \textbf{Median} &85.30\% &88.75\% &83.95\% &84.85\% &86.50\% &87.25\% \\\hline \textbf{Std. Dev.} &4.91\% &5.78\% &5.48\% &6.43\% &4.08\% &4.94\% \\\hline \end{tabular} \caption{\small Overall performance of linear SVM and non-linear SVM during validation and testing phases. Performance metric used during validation is accuracy. Performance metrics used during testing are accuracy and F1 score. Overall performance calculated across all 12 users using average, maximum, minimum, median, and standard deviation statistics. 
ACC: Accuracy; LSVM: Linear Support Vector Machine; NLSVM: Non-Linear Support Vector Machine; VAL: Validation.} \label{table:overall_results} \vspace{-1em} \end{table*} \begin{figure}[t] \centering \fbox{\includegraphics[width=0.75\linewidth]{figure6.png}} \caption{\small Individual user's accuracy during validation using linear SVM (LSVM) and non-linear SVM (NLSVM).} \label{fig:val_acc} \end{figure} \subsection{Classification} \label{sec:classification} For user authentication, we initially use kNN as the classifier, but it produces around 75\% accuracy. Hence, we utilize both linear and non-linear SVMs for classification, which enhances the performance. We tune the classifier parameters using grid search. To tune kNN, we grid search over k = 4, 5, and 6 and the Euclidean and Manhattan distance metrics. The non-linear SVM has an RBF (Radial Basis Function) kernel; C = 0.1, 1, 10, and 100 and gamma = scale and auto are used for parameter tuning via grid search. The linear SVM does not have a gamma parameter, and therefore grid search is performed only over C, for which values of 0.1, 1, 10, and 100 are chosen. \begin{figure}[t] \centering \includegraphics[width=0.85\linewidth]{figure7.png} \caption{\small Schematic diagram showing one versus many authentication scenario.} \label{fig:one_vs_many} \end{figure} \section{RESULTS} \label{sec:results} We authenticate each user as genuine against all other impostors both during validation and testing. Both linear and non-linear SVMs are utilized in this process. The metric used for measuring performance during validation is accuracy. We tune the classifier hyperparameters (as shown in Section~\ref{sec:classification}) through the validation step. The individual performances of the 12 users during validation are shown in Figure~\ref{fig:val_acc}. From the plot, we can see that the non-linear SVM has performed better than the linear SVM, although the performances of both classifiers are very close. 
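The hyperparameter grids listed in the classification subsection can be reproduced with scikit-learn's GridSearchCV; a minimal sketch on synthetic two-class data (the data, its shape, and the cross-validation folds are assumptions for illustration, not the paper's setup):

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-in for one genuine-vs-impostor feature matrix
X = np.vstack([rng.normal(-2.0, 0.5, (100, 4)),
               rng.normal(2.0, 0.5, (100, 4))])
y = np.array([1] * 100 + [0] * 100)

# Non-linear SVM: RBF kernel, grids as listed in the text
rbf = GridSearchCV(SVC(kernel="rbf"),
                   {"C": [0.1, 1, 10, 100], "gamma": ["scale", "auto"]},
                   cv=3).fit(X, y)

# Linear SVM: only C is tuned
lin = GridSearchCV(SVC(kernel="linear"),
                   {"C": [0.1, 1, 10, 100]}, cv=3).fit(X, y)

print(rbf.best_params_, lin.best_params_)
```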
From these validation results, we understand that the data samples of genuine users and impostors overlap, and the non-linear SVM is precise enough to draw a better boundary between the overlapping classes. We use both accuracy and F1 score to measure performance during testing. Figure~\ref{fig:test} shows individual performances of users using both linear and non-linear SVMs. From both testing accuracy (figure~\ref{fig:acc_sub}) and F1 scores (figure~\ref{fig:f1_sub}) we can see that the non-linear SVM performs better than the linear SVM, but here too both performances are very close. Table~\ref{table:overall_results} shows the overall performances of the individual classifiers in the validation and testing phases. We have calculated the average, maximum, minimum, median, and standard deviation of performances across users to estimate the overall performance of the system. During testing, the non-linear SVM shows average accuracy and F1 score of 86.92\% and 88.94\% respectively, which outperform the linear SVM performances (average accuracy of 85.18\% and average F1 score of 87.51\%). But in both cases of linear and non-linear SVMs, we have users whose individual performances are above 95\% in validation and testing. See Appendix~\ref{FifthAppendix} for the specifications of the system on which all the experiments are run. \section{HYPOTHESIS TESTING} \label{sec:hypo_test} We perform hypothesis testing on the individual performances of users during testing using a t-test. Before performing the t-test, we verify that the distributions of the 12 performances (accuracy and F1 score) for both linear and non-linear SVMs are normal using the Kolmogorov-Smirnov (KS) test. In the KS tests, the distributions of linear SVM accuracies, non-linear SVM accuracies, linear SVM F1 scores, and non-linear SVM F1 scores show p values of 0.93, 0.65, 0.84, and 0.68 respectively. 
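Both kinds of tests can be run with scipy.stats; a sketch on synthetic per-user accuracy arrays (the numbers below are illustrative stand-ins, not the paper's exact per-user values):

```python
import numpy as np
from scipy import stats

# Illustrative stand-ins for the 12 per-user test accuracies of each model
lsvm = np.array([85.1, 83.9, 90.2, 76.4, 88.0, 84.3,
                 95.3, 81.5, 83.2, 86.7, 89.9, 77.7])
nlsvm = np.array([86.9, 84.8, 92.1, 78.9, 89.5, 85.0,
                  97.4, 82.6, 84.1, 88.3, 91.4, 81.1])

# Normality check: KS test of the standardized scores against N(0, 1)
ks_pvals = []
for scores in (lsvm, nlsvm):
    z = (scores - scores.mean()) / scores.std(ddof=1)
    ks_pvals.append(stats.kstest(z, "norm").pvalue)

# Two-sample t-test between the two accuracy distributions
t_stat, t_pval = stats.ttest_ind(lsvm, nlsvm)
print(ks_pvals, t_pval)
```

A large KS p-value means the sample is consistent with normality; a large t-test p-value means the two models' score distributions are not significantly different.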
Each of these p values is greater than the significance level (0.05), so we fail to reject the null (H0) hypothesis, which states that the data does not differ significantly from a normal distribution. We therefore perform two t-tests: one between the distributions of accuracies of the linear and non-linear SVMs, and the other between the distributions of their F1 scores. The null (H0) and alternate (H1) hypotheses for the t-tests are stated below: H0: \textit{The two models' (linear and non-linear SVMs) performances are not significantly different from each other.} H1: \textit{The two models' (linear and non-linear SVMs) performances are significantly different from each other.} In the t-test on the accuracy distributions of the linear and non-linear SVMs, the p-value is 0.48, and the t-test on the F1 score distributions yields a p-value of 0.44. Each of the p values is greater than the 0.05 significance level. Hence we fail to reject (therefore we accept) the null (H0) hypothesis. This further confirms the close performances of the linear and non-linear SVMs. \begin{figure}[h!] \centering \begin{subfigure}[t]{0.5\textwidth} \centering \fbox{\includegraphics[height=2.0in]{figure8a.png}} \caption{} \label{fig:acc_sub} \end{subfigure} \begin{subfigure}[t]{0.5\textwidth} \centering \fbox{\includegraphics[height=2.0in]{figure8b.png}} \caption{} \label{fig:f1_sub} \end{subfigure} \caption{\small Individual user's performance using linear SVM (LSVM) and non-linear SVM (NLSVM). (a) Accuracy and (b) F1 score.} \label{fig:test} \end{figure} \section{DISCUSSION AND CONCLUSION} \label{sec:conclusion} We examine the uniqueness and permanence characteristics of the EEG data from the WAY\_EEG\_GAL~\cite{6-Luciw2014} dataset for user authentication. For each user, we select important features using the ExtraTreeClassifier. We have used both linear and non-linear SVMs for classification. 
The performances of the classifiers are very close to one another, though the non-linear SVM tends to perform better consistently. The hypothesis t-testing shows that the distributions of accuracies and F1 scores of the 12 users are not statistically significantly different, and therefore the close performances of the classifiers are justified. The overall average accuracies of 85.18\% and 86.92\% are achieved using linear and non-linear SVMs respectively. We observe average F1 scores of 87.51\% and 88.94\% for linear and non-linear SVMs respectively. Beyond overall performance, we also observe individuals showing high performances of 95.3\% accuracy (95.3\% F1 score) with linear SVM and 97.4\% accuracy (97.3\% F1 score) with non-linear SVM. Therefore, authentication through EEG data collected when users are performing grasp-and-lift mental tasks is feasible. In other words, we observe acceptable uniqueness and permanence trends in the WAY\_EEG\_GAL~\cite{6-Luciw2014} dataset. There are many other datasets which could have been chosen for this study, but the justification for selecting this dataset is to authenticate users while they are performing mental tasks of lifting objects for 2 hours, where the weights and surfaces of the object change randomly. Through the observations made in this study, we pose the following open questions to the research community. If the duration of the task performance per user is increased to more than 2 hours, will the EEG based authentication accuracy be lower than our obtained best? This question applies to this dataset and any other EEG dataset. In other words, can data collected over an expanded time span pose a challenge to the permanence characteristic of the EEG modality? If the user tasks performed during EEG data acquisition are not confined to a rest state and monotonous mental tasks (like in WAY\_EEG\_GAL~\cite{6-Luciw2014}), and involve even more dynamic tasks, will it impact the user authentication performance? 
Given the above, we encourage further examination of EEG datasets to test their discriminability and permanence characteristics. Further testing across different datasets collected while users are performing wide ranges of tasks (in addition to examining rest state EEG datasets) can further establish whether it is feasible to deploy EEG based biometric systems in the public domain. \bibliographystyle{ieee}
\section{The Context} ESA's Herschel \emph{Space Observatory} \citep{2010A&A...518L...1P} has given us an unprecedented view of the far-infrared sky. To take full advantage of its data, one must nevertheless beat the confusion due to the large beam size of its instruments, as illustrated in figure~\ref{fig:P8-21_f1}. \begin{figure}[htp] \centering \includegraphics[width=.7\textwidth]{P8-21_f1.eps} \caption{Same part of the sky observed in optical (composite image of SDSS g, r, and i bands) and with Herschel (composite image of the three SPIRE bands). The square in the centre delimits the zone used in the following figures. Because of the large beam, multiple sources cannot be distinguished in the SPIRE maps; we refer to this as confusion.} \label{fig:P8-21_f1} \end{figure} The \emph{Herschel Extragalactic Legacy Project} (HELP) is a European Research Executive Agency funded project that aims to capitalise on the distant Universe surveys made by Herschel. To overcome the confusion problem, HELP has developed XID+ \citep{2016MNRAS.tmp.1477H}, a new software package that performs prior-based source extraction on confused images. XID+ is being used on maps from the Herschel SPIRE and PACS instruments as well as on Spitzer MIPS maps. \section{Using Bayesian Methods Gives Access to Full Posterior Probability} One way to overcome the confusion is to use information from resolved observations, at other wavelengths, that give the positions of known sources. XID+ uses Bayesian inference methods implemented within the Stan framework \citep{carpenter2016stan} to use this information to compute fluxes. Compared to maximum likelihood methods, this gives access to the full posterior probability of the flux distribution, as illustrated in figure~\ref{fig:P8-21_f2}.
\begin{figure}[htp] \centering \includegraphics[width=.8\textwidth]{P8-21_f2.eps} \caption{Analysis of the SPIRE $250~\mu{}m$ fluxes of three nearby sources with XID+ using only the position as prior information.} \label{fig:P8-21_f2} \end{figure} \begin{itemize} \item \emph{c)} is the actual SPIRE map; \item \emph{d)}, \emph{g)} and \emph{h)} are the joint probability distributions of the fluxes for each source pair; \item \emph{a)}, \emph{e)} and \emph{i)} are the marginalised probability distributions of each source flux; \item \emph{b)} is the replicated map corresponding to the green dots on the joint distributions. \end{itemize} \section{$p$-Value Maps} One interesting output of XID+ is the \emph{p-value} map. It indicates how well the real map is explained by the model and shows unexpected excesses or deficits in flux. Figure~\ref{fig:P8-21_f3} shows a zone with an unexplained excess in the SPIRE~$500~\mu{}m$ filter that may reveal some interesting objects not present in the original catalogues. \begin{figure}[htp] \centering \includegraphics[width=.8\textwidth]{P8-21_f3.eps} \caption{\emph{p-value} maps of the same area for SPIRE~$250$, $350$, and $500~\mu{}m$.} \label{fig:P8-21_f3} \end{figure} \section{Adding More Prior Information} The use of a Bayesian framework makes it possible to add new prior information. For instance, we can use our prior knowledge on redshifts, combined with some simple spectral energy distributions (SEDs), to better constrain the fluxes by eliminating impossible combinations. This is illustrated by figure~\ref{fig:P8-21_f4}: the red (lighter grey in black and white) probability density functions (PDFs) are those that do not use the redshift and SEDs as priors, the blue (darker grey) are those that do.
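XID+ itself performs the inference with Stan; purely as an illustration of why the full posterior matters for blended sources, the toy sketch below computes a brute-force grid posterior for two sources whose beams overlap in a synthetic 1-D map. All positions, fluxes, beam widths, and noise levels are invented for the example, and a flat flux prior with Gaussian pixel noise is assumed; this is not the actual XID+ model.

```python
import numpy as np

# Toy 1-D "map": two blended sources seen through a broad Gaussian beam.
x = np.arange(0.0, 20.0, 1.0)            # pixel coordinates

def psf(c):
    # Gaussian beam profile centred at c (the beam width 3.0 is an assumption)
    return np.exp(-0.5 * ((x - c) / 3.0) ** 2)

A1, A2 = psf(9.0), psf(11.0)             # known source positions (the prior)
true_f1, true_f2, sigma = 5.0, 3.0, 0.5

rng = np.random.default_rng(1)
data = true_f1 * A1 + true_f2 * A2 + rng.normal(0.0, sigma, x.size)

# Grid over the two fluxes; flat prior, Gaussian pixel noise.
f = np.linspace(0.0, 10.0, 101)
F1, F2 = np.meshgrid(f, f, indexing="ij")
model = F1[..., None] * A1 + F2[..., None] * A2
loglike = -0.5 * np.sum((data - model) ** 2, axis=-1) / sigma**2
post = np.exp(loglike - loglike.max())
post /= post.sum()                        # joint posterior over (f1, f2)

# Marginal posteriors of each flux (the 1-D panels of the figure above).
marg_f1 = post.sum(axis=1)
marg_f2 = post.sum(axis=0)
mean_f1 = float(np.sum(f * marg_f1))
mean_f2 = float(np.sum(f * marg_f2))
print(mean_f1, mean_f2)
```

Because the two beams overlap heavily, the joint posterior is strongly anti-correlated: the total flux is well constrained while its split between the sources is not, which is exactly the information a maximum-likelihood point estimate discards.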
\begin{figure}[htp] \centering \includegraphics[width=\textwidth]{P8-21_f4.eps} \caption{Effect of adding redshift value and simple SEDs as prior information.} \label{fig:P8-21_f4} \end{figure} \acknowledgements The research leading to these results has received funding from the European Union Seventh Framework Programme (FP7/2007--2013) under grant agreement 607254. \textit{This publication reflects only the author's view and the European Union is not responsible for any use that may be made of the information contained therein.}\\ This research made use of Astropy, a community-developed core Python package for Astronomy \citep{2013A&A...558A..33A}. This research made use of APLpy, an open-source plotting package for Python hosted at \url{http://aplpy.github.com}.
\section{Omitted proofs from \secref{worsthit}} To prove the results in \secref{worsthit} we will need the following technical lemma: \begin{lemma} \label{lem:ballsize} Let $G$ be a graph with $n$ vertices and minimum degree $\delta$. Then for any integer $1 \le x \le n$, the number of vertices reachable in $x$ hops from any vertex is at least $\min\{\delta \cdot x/3, n\}$. \end{lemma} \begin{proof} The statement is trivially satisfied for $x=1$. For $x > 1$, let $v$ be a vertex at distance $x$ from a vertex $u$. Since $v$ has degree at least $\delta$, there exist at least $\delta + 1$ vertices at distance $x-1$, $x$, or $x+1$ from $u$ (unless we can already reach all the vertices in the graph in $x+1$ steps from $u$). Summing up the number of vertices that we can reach in $i=0,\dots,x$ steps, we obtain the statement. \end{proof} \begin{lemma}[\lemref{dirichlet} (restated)] Let $P$ be the transition matrix of a lazy random walk on a graph $G \in \mathcal{G}$. Given a probability distribution $\sigma: V \to [0,1]$ with likelihood ratio $f = \sigma/\pi$ such that $\var_{\pi} f = \epsilon > 0$, \[ \mathcal{E}_{P}(f,f) \gtrsim \max\left\{ \frac{\epsilon^2}{m^* + 1/(\pi_*^2(1+\epsilon))}, \frac{\pi_* \epsilon^2}{ n} \right\}. \] \end{lemma} \begin{proof} Denote by $m$ the number of edges of $G$ and by $\delta$ its minimum degree, so that $\pi_* = \min_u \{\pi(u)\} = \delta/m$. Let $x \triangleq \operatornamewithlimits{argmax}_{y} f(y)$, i.e., $\|f\|_{\infty} = f(x)$. Since $\Ev_{\pi} f = 1$ and $\Ev_{\pi} f^2 = 1+ \epsilon$, there exists at least one vertex $y$ such that $f(y) < 1$. Take the $y$ closest to $x$ (w.r.t.\ the shortest-path distance in $G$) satisfying $f(y) \le 1+\epsilon/2$ and let $\ell$ be the distance between $x$ and $y$. By construction, all the vertices $z$ in $B = B_{\ell-1}(x)$, the ball of radius $\ell-1$ around $x$, satisfy $f(z) > 1+\epsilon/2$.
Therefore, by Markov's inequality, we have that \[ \pi(B) \le \pi\left(\left\{ z \colon f(z) > 1+\epsilon/2 \right\}\right) \le \frac{\Ev_{\pi} f}{1+\epsilon/2} = \frac{1}{1+\epsilon/2}. \] By \lemref{ballsize}, we also have that $|B| \ge \delta \cdot (\ell-1)/3$. Therefore, \[ \frac{\delta \cdot (\ell-1)}{3} \cdot \frac{\delta}{m} \le \pi(B) \le \frac{1}{1+\epsilon/2}, \] which implies that $\ell \le \min\{3m/(\delta^2 \cdot (1+\epsilon/2)) + 1, 3n/\delta + 1\}$. Since $x$ and $y$ are at distance $\ell$, there exists a path $x = u_0, u_1, \dots, u_{\ell} =y$ in $G$. Applying the Cauchy-Schwarz inequality and the bound on $\ell$, we obtain: \begin{align*} \mathcal{E}_{P}(f,f) &= \frac{1}{4m} \sum_{u \sim v} \left(f(u)-f(v)\right)^2 \nonumber \\ &\ge \frac{1}{4m} \sum_{i=0}^{\ell-1} \left(f(u_i) - f(u_{i+1})\right)^2 \nonumber \\ &\ge \frac{1}{4m \cdot \ell} \left(\sum_{i=0}^{\ell-1} \left(f(u_i) - f(u_{i+1})\right)\right)^2 \nonumber \\ &= \frac{1}{4m \cdot \ell} \left(f(x) - f(y)\right)^2 \nonumber \\ & \gtrsim \frac{\epsilon^2}{m \cdot \ell} \\ &\gtrsim \max\left\{\frac{\epsilon^2}{m + m^2/(\delta^2(1+\epsilon))}, \frac{\delta \epsilon^2}{m \cdot n} \right\} \nonumber \\ &\gtrsim \max\left\{ \frac{\epsilon^2}{m^* + 1/(\pi_*^2(1+\epsilon))}, \frac{\pi_* \epsilon^2}{ n} \right\} \nonumber. \end{align*} \end{proof} \begin{lemma}[\lemref{inftoell2} (restated)] Let $t_1 < t_2$. Then, for any $u,v \in V$, it holds that \[ \left|\rho^{[t_1,t_2]}_{v,u} - 1 \right| \le \max\left\{\var_{\pi} \left( P^{(\floor{\frac{t_1+t_2}{2}})} \cdots P^{(t_2)} \rho^{[t_1,t_1]}_{u,\cdot}\right) , \var_{\pi} \left(P^{(\floor{\frac{t_1+t_2}{2}-1})} \cdots P^{(1)}\rho^{[t_1,t_1]} _{v,\cdot} \right) \right\}. \] \end{lemma} \begin{proof} First notice that, for $t_2 > t_1$ and any $u,v \in V$, $\rho^{[t_1,t_2]}_{v,u} = \left\langle \rho^{[t_1,t_1]}_{u,\cdot} ,\rho^{[t_1,t_2]}_{v,\cdot} \right\rangle_{\pi}$.
This can be easily seen from the fact that $\rho^{[t_1,t_1]}_{u,\cdot}$ is equal to $1/\pi(u)$ at $u$ and to zero everywhere else. Therefore, using the fact that, for a transition matrix $P$, $P1 = 1$, and the fact that $\Ev_{\pi} f = 1$ for any likelihood $f$, \begin{align*} &\left|\rho^{[t_1,t_2]}_{v,u} - 1\right| \\ &\qquad=\left|\left\langle \rho^{[t_1,t_1]}_{u,\cdot} - 1,\rho^{[t_1,t_2]}_{v,\cdot} -1 \right\rangle_{\pi} \right| \\ &\qquad= \left|\left\langle \rho^{[t_1,t_1]}_{u,\cdot} - 1 ,P^{(t_2)} \cdots P^{(t_1)} (\rho^{[t_1,t_1]} _{v,\cdot} - 1) \right\rangle_{\pi} \right| \\ &\qquad= \left|\langle P^{(\floor{(t_1+t_2)/2})} \cdots P^{(t_2)} (\rho^{[t_1,t_1]}_{u,\cdot}-1), P^{(\floor{(t_1+t_2)/2-1})} \cdots P^{(1)}(\rho^{[t_1,t_1]} _{v,\cdot} - 1)\rangle_{\pi}\right| \\ &\le \max\left\{\| P^{(\floor{(t_1+t_2)/2})} \cdots P^{(t_2)} \rho^{[t_1,t_1]}_{u,\cdot} - 1 \|_\pi^2, \|P^{(\floor{(t_1+t_2)/2-1})} \cdots P^{(1)}\rho^{[t_1,t_1]} _{v,\cdot} - 1\|_{\pi}^2\right\}, \end{align*} where the third equality follows from multiple applications of reversibility, i.e., \eq{selfadjoint}, and the last line from the Cauchy-Schwarz inequality. The statement follows by the definition of $\var_{\pi} f$. \end{proof} \begin{theorem}[\thmref{worsthit} (restated)] Let $\mathcal{G}$ be a sequence of connected graphs with $n$ vertices, the same stationary distribution $\pi$, and at most $m^*$ edges in each graph. Then, for a lazy random walk on $\mathcal{G}$: \begin{enumerate}\itemsep0pt \item $t_{\mbox{\scriptsize{\textsf{mix}}}}(\mathcal{G}) = O(n/\pi_*)$, \item $\bigl| \frac{p^{[0,t]}_{u,v}}{\pi_v} - 1 \bigr| \lesssim \frac{m^*}{t} + \frac{1}{\pi_* \sqrt{t}}$, simplifying to $\bigl| \frac{p^{[0,t]}_{u,v}}{\pi_v} - 1 \bigr| \lesssim \frac{n}{\sqrt{t}}$ if all the graphs in $\mathcal{G}$ are $d$-regular, \item $t_{\mbox{\scriptsize{\textsf{hit}}}}(\mathcal{G}) = O(n \log{n}/\pi_*)$.
Furthermore, if the graphs in $\mathcal{G}$ are $d$-regular, $t_{\mbox{\scriptsize{\textsf{hit}}}}(\mathcal{G}) = O(n^2)$. \end{enumerate} \end{theorem} \begin{proof} Thanks to \lemref{dirichlet} we can readily bound the mixing time of a random walk in $\mathcal{G}$. Let $u$ be an arbitrary vertex in $V$ and $\rho^{(t)} = \rho^{[0,t]}_{u,\cdot}$. Let $\epsilon^{(t)} = \var_{\pi} \rho^{(t)} = \Ev_{\pi} {\rho^{(t)}}^2 - 1$. First notice that $\var_{\pi} \rho^{(0)} = \epsilon^{(0)} \le 1/\pi_*$. Moreover, \lemref{dirichlet} implies that, as long as $\epsilon^{(t)}>1$, it is halved in $\bar{t} = O(n/(\epsilon^{(t)} \pi_*))$ steps. More precisely, $\epsilon^{\left(t+\bar{t}\right)} \le \epsilon^{(t)}/2$. As a consequence, \begin{equation} \label{eq:tmix} t_{\mbox{\scriptsize{\textsf{mix}}}}(\mathcal{G}) \lesssim \frac{n}{\pi_*} \cdot \sum_{i=1}^{-\log{ \pi_*}} 2^{-i} \lesssim \frac{n}{\pi_*}. \end{equation} We now use \lemref{inftoell2} to obtain bounds on the individual probabilities $p^{[0,t]}_{u,v}$. As remarked in \secref{worsthit} before \lemref{inftoell2}, while for time-homogeneous reversible Markov chains there exists a clear relation between the $\ell_{\infty}$ norm of a $t$-step probability distribution and its variance, in our case, since the chain is time-inhomogeneous, this relation doesn't necessarily hold. What \lemref{inftoell2} shows, however, is that after $t$ steps the decrease in the $\ell_{\infty}$ norm is roughly equivalent to the \emph{worst-case} decrease in the variance after $t/2$ steps. In particular, it essentially shows that $\rho^{[t_1,t_2]}_{v,u} = p^{[t_1,t_2]}_{v,u} / \pi(u)$ is as small as $\var_{\pi} \rho^{(\floor{(t_1+t_2)/2})}$ for some initial likelihood ratio $\rho^{(0)}$ and $\floor{(t_1+t_2)/2}$ steps of random walk on graphs appearing in $\mathcal{G}$ (not necessarily in the same order as they appear in $\mathcal{G}$). But any step decreases the variance of $\rho^{(t)}$ at least by the quantity given by \lemref{dirichlet}.
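As a quick numerical sanity check (not part of the original argument), the following sketch verifies the one-step inequality behind this reasoning, namely that $\var_{\pi}\rho^{(t)} - \var_{\pi}\rho^{(t+1)} \ge \mathcal{E}_{P^{(t+1)}}(\rho^{(t)},\rho^{(t)})$ for a lazy walk; the small test graph and the starting distribution are illustrative assumptions.

```python
import numpy as np

# Small undirected test graph (a 6-cycle with one chord); illustrative only.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0), (0, 3)]
n = 6
A = np.zeros((n, n))
for u, v in edges:
    A[u, v] = A[v, u] = 1.0
deg = A.sum(axis=1)
m = len(edges)
pi = deg / (2 * m)                                # stationary distribution
P = 0.5 * np.eye(n) + 0.5 * (A / deg[:, None])    # lazy random walk

rng = np.random.default_rng(2)
p0 = rng.random(n)
p0 /= p0.sum()                                    # arbitrary starting distribution
rho0 = p0 / pi                                    # likelihood ratio, E_pi[rho0] = 1
rho1 = (p0 @ P) / pi                              # likelihood ratio after one step

var = lambda r: float(pi @ (r - 1.0) ** 2)        # Var_pi of a likelihood ratio
# Dirichlet form of the lazy walk: (1/4m) * sum over edges of the squared gap.
dirichlet = sum((rho0[u] - rho0[v]) ** 2 for u, v in edges) / (4 * m)

print(var(rho0) - var(rho1), dirichlet)
```

For a lazy reversible chain all eigenvalues lie in $[0,1]$, so the one-step variance drop dominates the Dirichlet form, which is what the assertion below checks.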
More precisely, \lemref{dirichlet} states that, if $\var_{\pi} \rho^{(t)} = \epsilon^{(t)}$, then $\epsilon^{(t)}$ is halved in $O(m^*/\epsilon^{(t)} + 1/(\pi_*^2 \epsilon^{(t)}(1+\epsilon^{(t)}) ))$ steps. From this and \lemref{inftoell2} it follows that \begin{equation} \label{eq:returns} \left| \frac{p^{[t_1,t_2]}_{v,u}}{\pi_u} - 1 \right| \lesssim \frac{m^*}{t_2-t_1} + \frac{1}{\pi_* \sqrt{t_2-t_1}}. \end{equation} Moreover, in the case of a sequence of $d$-regular graphs we can further simplify this bound using the simple fact that $p^{[t_1,t_2]}_{v,u} \le 1/d + 2^{-(t_2-t_1)}$ to obtain \begin{equation} \label{eq:returns_reg} \left| \frac{p^{[t_1,t_2]}_{v,u}}{\pi_u} - 1 \right| \lesssim \frac{n}{\sqrt{t_2-t_1}}. \end{equation} Let us now bound the maximum hitting time. For any $u,v \in V$, let $X^{(t)}_{u,v}$ be a boolean random variable which is equal to $1$ if and only if a walk starting from $v$ visits $u$ at time $t$. Let $Z = \sum_{t=T}^{2T} X^{(t)}_{u,v}$ be a random variable counting the number of times the walk starting from $v$ hits $u$ in the time interval $[T,2T]$, where $T = \Theta(n/\pi_*)$ is chosen such that $T \ge 4 t_{\mbox{\scriptsize{\textsf{mix}}}}(\mathcal{G})$. We first need to lower bound $\E{Z}$, i.e., the expected number of visits to $u$ in that time interval. Since $T \ge 4 t_{\mbox{\scriptsize{\textsf{mix}}}}(\mathcal{G})$, by the relation discussed above between the variance and the individual probabilities in the random walk distribution, we know that for any $t \ge 2t_{\mbox{\scriptsize{\textsf{mix}}}}(\mathcal{G})$, $p^{[0,t]}_{v,u} \ge (8/9) \cdot \pi(u)$. Therefore, we have that $\E{Z} \gtrsim T \cdot \pi(u)$, and \begin{equation} \label{eq:hitupper} \Pr{Z \ge 1} = \frac{\E{Z}}{\E{Z \,|\, Z \ge 1}} \gtrsim \frac{(n/\pi_*) \cdot \pi(u)}{ \max_{T \le \tau \le 2T} \sum_{t=0}^{T} p^{[\tau,\tau+t]}_{u,u}}.
\end{equation} It remains to upper bound $\E{Z \,|\, Z \ge 1} \le \max_{T \le \tau \le 2T} \sum_{t=0}^{T} p^{[\tau,\tau+t]}_{u,u}$, which corresponds to the expected number of returns to $u$ in the interval $[\tau,\tau+T]$. By \lemref{dirichlet} and \lemref{inftoell2}, we know that $\epsilon^{(t)} = |p^{[\tau,\tau+t]}_{u,u}/\pi(u) -1|$ is halved every $O(n/(\pi_* \epsilon^{(t)}))$ steps. Clearly, the sum of the return probabilities in a window of $O(n/(\pi_* \epsilon^{(t)}))$ steps is $O(\pi(u) \cdot n /\pi_*)$. Since there are at most $O(\log{1/\pi_*})$ such time windows, we have that \[ \E{Z \,|\, Z \ge 1} \le \max_{T \le \tau \le 2T} \sum_{t=0}^{T} p^{[\tau,\tau+t]}_{u,u} \lesssim \pi(u) \cdot (n/\pi_*) \cdot \log{1/\pi_*}. \] Together with \eq{hitupper}, we obtain that $\Pr{Z \ge 1} = \Omega(1/\log{n})$, which means that, starting from $v$, with probability $\Omega(1/\log{n})$ we will hit $u$ in $O(n/\pi_*)$ steps. Since the vertex $v$ was chosen arbitrarily, we can conclude that the hitting time is $t_{\mbox{\scriptsize{\textsf{hit}}}}(\mathcal{G}) = O(n\log{n}/\pi_* )$. For regular graphs, the same argument combined with \eq{returns_reg} and the fact that $\pi$ is uniform leads to an $O(n^2)$ hitting time. \end{proof} \section{Omitted proof from \secref{twopluseps}} \begin{lemma}[\lemref{decrease_eps} (restated)] Let $G=(V,E)$ be a $d$-regular undirected graph with $|V| = n$ and $d=O(1)$ such that, for any $A \subset V$ with $1 \le |A| \le n/2$, $ |E(A,V \setminus A)| = \Omega(|A|^{\frac{1}{2} + \epsilon}) $ for $1/4 \ge \epsilon \ge 0$. Consider the transition matrix $P$ of a lazy random walk in $G$. Let $\sigma$ be any probability distribution and $f = \sigma/\pi$, where $\pi$ is the uniform distribution. If $\Ev_{\pi} f^2 = \beta > C$ for a large enough constant $C$, then \[ \mathcal{E}_P(f,f) \gtrsim \frac{\beta^{2-2\epsilon}}{n^{1-2\epsilon}}.
\] \end{lemma} \begin{proof} We first define a set $K \subset V$ which contains all vertices with large probability mass: \[ K = \{ u \in V \colon f(u) \ge \beta/10 \} = \{ u \in V \colon \sigma(u) \ge \beta/(10 n) \}. \] We initially assume that $|K| = \Omega(n/\beta)$; later we will show how to drop this assumption. By the isoperimetric property of $G$, we can choose $s \gtrsim |K|^{\frac{1}{2} + \epsilon} d^{-2} \gtrsim (n/\beta)^{\frac{1}{2} + \epsilon} d^{-2} $ vertex-disjoint paths $\mathcal{P}_1,\dots,\mathcal{P}_s$ of length (respectively) $\ell_1,\dots,\ell_s$ such that each path starts in $K$, but all the other vertices visited by the path are outside $K$. Moreover, let $(u_1,v_1),\dots,(u_s,v_s)$ be the pairs of starting/ending points of the paths. We claim we can choose the paths so that $f(v_i) < \beta/20$ and $(f(u_i)-f(v_i))^2 \ge (\beta/20)^2$ for any $1 \le i \le s$. Such paths can be constructed as follows: thanks to the isoperimetric property of $G$ we can choose $\Omega\left(|K|^{\frac{1}{2} + \epsilon}\right)$ edges that go from $K$ to its complement. Since any vertex has degree at most $d$, we can pick $s \gtrsim |K|^{\frac{1}{2} + \epsilon} d^{-2} $ such edges that do not share any endpoint. Let $K' \subset V$ be the union of $K$ and the endpoints of these $s$ edges and let $s' = |\{u \in K' \colon f(u) < \beta/20\}|$. Again by the isoperimetric property, we can choose other $(s-s')$ edges from $K'$ to its complement, so that no edge involves any vertex in $K$ (if an edge involves a new, ``unused'' vertex from $K$, we could have used that edge earlier in the construction of such paths). We can repeat this process until we have constructed a set $K'' \subset V$ containing at least $s$ vertices $v_1,\dots,v_s$ such that $f(v_i) < \beta/20$. Notice that each vertex $v_i$ is connected by a path $\mathcal{P}_i$ to a vertex $u_i \in K$ and the paths do not share any vertex.
Given paths $\mathcal{P}_1,\dots,\mathcal{P}_s$ constructed in this way, we can bound $\mathcal{E}_P(f,f)$ as follows: \begin{align*} \mathcal{E}_P(f,f) &= \frac{1}{d n} \sum_{u \sim v} (f(u)-f(v))^2 \\ &\ge \frac{1}{d n} \sum_{i=1}^s \sum_{\{u,v\} \in \mathcal{P}_i} (f(u)-f(v))^2 \\ &\ge \frac{1}{d n} \sum_{i=1}^s \frac{(f(u_i)-f(v_i))^2}{\ell_i} \gtrsim \frac{\beta^2}{dn} \sum_{i=1}^s \frac{1}{\ell_i}. \end{align*} Since $\Ev_{\pi} f = 1$, by Markov's inequality there can be at most $20 n/\beta$ vertices $x$ such that $f(x) \ge \beta/20$. Hence, we have that $\sum_{i=1}^s \ell_i \le 20 n/ \beta$. Moreover, $s \gtrsim (n/\beta)^{\frac{1}{2} + \epsilon} $ (from now on we drop the dependence on $d$ since $d=O(1)$). By the relation between harmonic and arithmetic mean, $\sum_{i=1}^s \frac{1}{\ell_i} \ge \frac{s^2}{\sum_{i=1}^s \ell_i}$, and \[ \mathcal{E}_P(f,f) \gtrsim \frac{\beta^2 s^2}{n \sum_{i=1}^s \ell_i} \gtrsim \frac{\beta^{2-2\epsilon}}{n^{1-2\epsilon}}, \] which concludes the proof as long as $|K| = \Omega(n/\beta)$. We now show how to achieve the same result without this assumption on $K$. We divide the vertices into buckets: $ K_i:= \left\{ u \in V \colon f(u) \ge \beta/100 \cdot 2^i \right\}. $ Suppose now that for every $K_i$ we have $|K_i| \leq n/\beta \cdot 2^{-2i}$. Then it follows that: \[ \Ev_{\pi} f^2 \le \frac{1}{n} \sum_{\substack{u \in V \\ f(u) \leq \beta/100}} f(u)^2 + \frac{1}{n} \sum_{i=1}^{\log_2 n} |K_i| \cdot (\beta/100)^2 \cdot 2^{i}. \] Since $\Ev_{\pi} f = 1$, the first term is bounded by $\beta/100$. Furthermore, the second term is also at most a small constant times $\beta$ thanks to our condition on $|K_i|$. Therefore, $\Ev_{\pi} f^2 < \beta$, which contradicts the assumption $\Ev_{\pi} f^2 = \beta$. In conclusion, there must exist an index $1 \leq i \leq \log_2 n$ such that \[ |K_i| \geq n/\beta \cdot 2^{-2i}. \] In this case, we consider again the paths to vertices $v$ with $f(v) \leq \beta \cdot 2^i/100$.
As argued above, there will be $s \gtrsim {|K_i|}^{1/2+\epsilon}$ vertex-disjoint paths with lengths $\ell_1, \dots, \ell_s$ such that $\sum_{i=1}^s \ell_i \le 200 n /(\beta\cdot 2^i)$ and the gap between the values taken by $f$ on their extremities is at least $\Omega( \beta \cdot 2^i)$. Following the previous argument we have that \begin{align*} \mathcal{E}_P(f,f) &\gtrsim \frac{2^{2i}\beta^2}{n} \sum_{i=1}^s \frac{1}{\ell_i} \ge \frac{2^{2i} \beta^2 s^2}{n \sum_{i=1}^s \ell_i} \gtrsim \frac{2^{i(1-4\epsilon)}\beta^{2-2\epsilon}}{n^{1-2\epsilon}} \gtrsim \frac{\beta^{2-2\epsilon}}{n^{1-2\epsilon}}. \end{align*} \end{proof} \begin{theorem}[\thmref{twopluseps} (restated)] Let $\mathcal{G}=\{G^{(t)}\}_{t=1}^{\infty}$ be a sequence of $n$-vertex graphs such that each $G^{(t)}$ is regular, has bounded degree, and satisfies the following isoperimetric condition: there exists $\epsilon \in [0,1/4]$ such that, for any subset of vertices $A$ with $1 \le |A| \le n/2$, $|E(A,V \setminus A)| = \Omega(|A|^{\frac{1}{2} + \epsilon})$. Then, \begin{enumerate}\itemsep0pt \item $t_{\mbox{\scriptsize{\textsf{mix}}}}(\mathcal{G}) = O(n^{1-2\epsilon})$, \item $\bigl| \frac{p^{[0,t]}_{u,v}}{\pi(v)} - 1 \bigr| = O\left(\frac{1}{t^{1+2\epsilon}}\right)$, \item $t_{\mbox{\scriptsize{\textsf{hit}}}}(\mathcal{G}) = O(n)$ if $\epsilon>0$, $t_{\mbox{\scriptsize{\textsf{hit}}}}(\mathcal{G}) = O(n\log{n})$ if $\epsilon=0$. \end{enumerate} \end{theorem} \begin{proof} We start by deriving bounds on mixing and $t$-step probabilities. As shown by \lemref{inftoell2}, $\rho^{[\tau,\tau+t]}_{v,v} = p^{[\tau,\tau+t]}_{v,v}/ \pi(v)$ decreases at a rate which is proportional to the rate given by \lemref{decrease_eps}. In particular, $\beta= \rho^{[\tau,\tau+t]}_{v,v}$ gets halved after $O((n/\beta)^{1-2\epsilon})$ steps, which implies that $p^{[\tau,\tau+t]}_{v,v} \le \frac{1}{n} + O\left(\frac{1}{t^{1+2\epsilon}}\right)$ and $t_{\mbox{\scriptsize{\textsf{mix}}}}(\mathcal{G}) = O(n^{1-2\epsilon})$.
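As a numerical illustration (not part of the proof), one can compute the return probabilities of a lazy walk on a small $6\times6$ torus, a standard example that, up to constants, satisfies the isoperimetric condition with $\epsilon = 0$; taking the graph sequence to be constant is a degenerate special case of a dynamic sequence.

```python
import numpy as np
from itertools import product

# Lazy random walk on the 6 x 6 torus; the constant sequence is a
# degenerate special case of a dynamic graph sequence.
L = 6
n = L * L
idx = {(i, j): i * L + j for i, j in product(range(L), repeat=2)}
P = np.zeros((n, n))
for i, j in product(range(L), repeat=2):
    u = idx[(i, j)]
    P[u, u] = 0.5                              # lazy self-loop
    for di, dj in [(1, 0), (-1, 0), (0, 1), (0, -1)]:
        v = idx[((i + di) % L, (j + dj) % L)]
        P[u, v] += 0.125                       # (1/2) * (1/deg), deg = 4

p = np.zeros(n)
p[0] = 1.0                                     # start at a fixed vertex
returns = []
for t in range(1, 101):
    p = p @ P
    returns.append(p[0])                       # return probability at time t

# The return probability decays towards the stationary value 1/n.
print(returns[9] - 1.0 / n, returns[99] - 1.0 / n)
```

Because the lazy walk is reversible with nonnegative eigenvalues, the return probability is nonincreasing in $t$ and never drops below $1/n$, which the assertions below check.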
We now want to bound the expected hitting time from a vertex $u$ to a vertex $v$. Let $\pi$ be the uniform distribution. As in the proof of \thmref{worsthit}, we introduce a variable $Z$ which counts the number of visits to $v$ in some time window of length $\Theta(n)$. More precisely, let $T = c \cdot n$ for some large enough constant $c$. We define $Z = \sum_{t=T}^{2T} X_{u,v}^{(t)}$ where $X_{u,v}^{(t)}$ is a Boolean variable that is equal to $1$ if and only if the walk, which started at $u$, is at vertex $v$ at time $t$. Since $T \ge t_{\mbox{\scriptsize{\textsf{mix}}}}(\mathcal{G})$, $\E{Z} \gtrsim T \cdot \pi(v) = \Omega(1)$. We then want to bound $\E{Z | Z \ge 1} \le \max_{T \le \tau \le 2T} \sum_{t=0}^T p^{[\tau,\tau+t]}_{v,v}$. We first consider the case when $\epsilon > 0$. From the bound on $t$-step probabilities derived above, we have $\sum_{t=0}^T p^{[\tau,\tau+t]}_{v,v} = O(1)$. Therefore, $\Pr{Z \ge 1} = {\E{Z}}/{\E{Z | Z \ge 1}} = \Omega(1)$, from which it follows that $t_{\mbox{\scriptsize{\textsf{hit}}}}(\mathcal{G}) = O(n)$. For the case when $\epsilon = 0$, instead, we have that $\E{Z | Z \ge 1} \le \max_{T \le \tau \le 2T} \sum_{t=0}^T p^{[\tau,\tau+t]}_{v,v} = O(\log{n})$. Therefore, $\Pr{Z \ge 1} = {\E{Z}}/{\E{Z | Z \ge 1}} = \Omega(1/\log{n})$. Since we have an $\Omega(1/\log{n})$ probability of hitting $v$ in $O(n)$ steps, we just need to repeat the process $O(\log{n})$ times to obtain a constant probability of hitting $v$, which results in $t_{\mbox{\scriptsize{\textsf{hit}}}}(\mathcal{G}) = O(n\log{n})$. \end{proof} \section{Omitted proofs from \secref{average}} \begin{lemma}[\lemref{imp} (restated)] Let $p^{(0)}$ be an arbitrary initial probability distribution, and $\rho^{(0)} = p^{(0)}/\pi$. Suppose that for some $t \ge 1$ and $u \in V$, $| \rho^{(t)}(u) - \rho^{(0)}(u) | \ge \epsilon > 0$.
Then, \[ \var_{\pi}{\rho^{(0)}} - \var_{\pi}{\rho^{(t)}} \ge \frac{\alpha_u}{4m} \sum_{i=1}^t \sum_{v \sim_i u} \left( \rho^{(i-1)}(u) - \rho^{(i-1)}(v) \right)^2 \ge \frac{2\epsilon^2 \pi(u)}{t}. \] \end{lemma} \begin{proof} First notice that, for any $1 \le i \le t$, \[ p^{(i)}(u) = \frac{p^{(i-1)}(u)}{2} + \sum_{v \sim_i u} \frac{p^{(i-1)}(v)}{2d_v} = \frac{1}{2}\left(p^{(i-1)}(u) + \sum_{v \sim_i u} \frac{\alpha_u}{2m} \rho^{(i-1)}(v)\right), \] where we used the fact that $\pi(v) = \alpha_u d_v/2m$ for $v \sim_i u$. Therefore, subtracting $p^{(i-1)}(u)$ from both sides, \begin{align*} p^{(i)}(u) - p^{(i-1)}(u) &= \frac{1}{2} \sum_{v \sim_i u}\left( \frac{\alpha_u}{2m} \rho^{(i-1)}(v) - \frac{p^{(i-1)}(u)}{d_u}\right) \\ &= \frac{\alpha_u}{4m} \sum_{v \sim_i u}\left( \rho^{(i-1)}(v) - \rho^{(i-1)}(u)\right). \end{align*} Taking the absolute value and applying Cauchy-Schwarz, we obtain that \begin{align*} \left| p^{(i)}(u) - p^{(i-1)}(u) \right| &\le \frac{\alpha_u}{4m} \sum_{v \sim_i u}\left| \rho^{(i-1)}(v) - \rho^{(i-1)}(u)\right| \\ &\le \frac{\alpha_u}{4m} \sqrt{d_u \sum_{v \sim_i u}\left( \rho^{(i-1)}(v) - \rho^{(i-1)}(u)\right)^2}. \end{align*} Summing over all $0 < i \le t$, \begin{align*} \epsilon &\le | \rho^{(t)}(u) - \rho^{(0)}(u) | = \frac{1}{\pi(u)} | p^{(t)}(u) - p^{(0)}(u) | = \frac{1}{\pi(u)} \left| \sum_{i=1}^t (p^{(i)}(u) - p^{(i-1)}(u)) \right| \\ &\le \frac{\alpha_u}{4 \pi(u) m} \sum_{i=1}^t \sqrt{d_u \sum_{v \sim_i u}\left( \rho^{(i-1)}(v) - \rho^{(i-1)}(u)\right)^2} \\ &\le \frac{1}{2} \sqrt{\frac{t}{d_u} \cdot \sum_{i=1}^t \sum_{v \sim_i u} \left( \rho^{(i-1)}(v) - \rho^{(i-1)}(u)\right)^2}. \end{align*} Therefore, by \eq{mihai}, \begin{align*} \var_{\pi}{\rho^{(0)}} - \var_{\pi}{\rho^{(t)}} &= \sum_{i=1}^t \left(\var_{\pi}{\rho^{(i-1)}} - \var_{\pi}{\rho^{(i)}}\right)\\ &\ge \sum_{i=1}^t {\mathcal{E}}_{P^{(i)}}(\rho^{(i-1)},\rho^{(i-1)}) \\ &\ge \sum_{i=1}^t \sum_{v \sim_i u} \frac{\pi(u)}{2d_u} \left( \rho^{(i-1)}(u) - \rho^{(i-1)}(v)
\right)^2 \\ &\ge \frac{\alpha_u}{4m} \sum_{i=1}^t \sum_{v \sim_i u} \left( \rho^{(i-1)}(u) - \rho^{(i-1)}(v) \right)^2 \\ &\ge \frac{\epsilon^2 \alpha_u d_u}{mt} \\ &= \frac{2\epsilon^2 \pi(u)}{t}. \end{align*} \end{proof} \begin{theorem}[\thmref{average} (restated)] Given a time interval of length $t$ labelled $[1,t]$, let $\overline{P} =\frac{1}{t}(P^{(1)} + P^{(2)} + \cdots + P^{(t)})$ with spectral gap $\lambda(\overline{P})$. Then, for any initial probability distribution $p^{(0)}$ with likelihood $\rho^{(0)} = p^{(0)}/\pi$, it holds that \[ \var \rho^{(0)} - \var \rho^{(t)} \ge \frac{1}{15t} \mathcal{E}_{\overline{P}}(\rho^{(0)},\rho^{(0)}) \ge \frac{\lambda(\overline{P})}{15t}. \] \end{theorem} \begin{proof} Recall that \[ \mathcal{E}_{\overline{P}}(\rho^{(0)},\rho^{(0)}) = \frac{1}{t} \sum_{i=1}^t \mathcal{E}_{P^{(i)}} (\rho^{(0)},\rho^{(0)}) = \frac{1}{4 m t} \sum_{i=1}^t \sum_{u \in V} \sum_{v \sim_i u} \alpha_u \left(\rho^{(0)}(u) - \rho^{(0)}(v) \right)^2. \] Denote by $\mathcal{E}_u$ the contribution given by $u \in V$ to $\mathcal{E}_{\overline{P}}(\rho^{(0)},\rho^{(0)})$ normalised by $\pi(u)$, i.e., \[ \mathcal{E}_u \triangleq \frac{1}{4 t d_u} \sum_{i=1}^t \sum_{v \sim_i u} \left(\rho^{(0)}(u) - \rho^{(0)}(v) \right)^2. \] Let $U \subseteq V$ be the subset of vertices $u$ for which there exists $1 \le i \le t$ such that $\left(\rho^{(i)}(u) - \rho^{(0)}(u) \right)^2$ is greater than $\mathcal{E}_u/30$: \[ U \triangleq \{ u \in V \colon \exists \, 1 \le i \le t \text{ s.t. } |\rho^{(i)}(u) - \rho^{(0)}(u)| > \sqrt{\mathcal{E}_u/30} \}.
\] We consider the contribution to $\var_{\pi}{\rho^{(0)}} - \var_{\pi}{\rho^{(t)}}$ given by vertices that belong to $U$ and vertices that do not separately: \begin{align} \var_{\pi}{\rho^{(0)}} - \var_{\pi}{\rho^{(t)}} \ge & \frac{1}{4m} \sum_{u \in U} \sum_{i=1}^t \sum_{v \sim_i u} \alpha_u \left( \rho^{(i-1)}(u) - \rho^{(i-1)}(v) \right)^2 \nonumber \\ &+ \frac{1}{4m} \sum_{u \not\in U} \sum_{i=1}^t \sum_{v \sim_i u} \alpha_u \left( \rho^{(i-1)}(u) - \rho^{(i-1)}(v) \right)^2. \label{eq:sum0} \end{align} We begin with the first part of the sum. By \lemref{imp}, the definition of $U$, and $\pi(u) = \alpha_u d_u/(2m)$, we have that \begin{align} \frac{1}{4m} \sum_{u \in U} \sum_{i=1}^t \sum_{v \sim_i u} &\alpha_u \left( \rho^{(i-1)}(u) - \rho^{(i-1)}(v) \right)^2 \nonumber\\ &\ge \frac{1}{15t} \sum_{u \in U} \mathcal{E}_u \cdot \pi(u) \nonumber\\ &= \frac{1}{15 \cdot 4 m \cdot t^2} \sum_{u \in U} \sum_{i=1}^t \sum_{v \sim_i u} \alpha_u \left(\rho^{(0)}(u) - \rho^{(0)}(v) \right)^2. \label{eq:sum1} \end{align} We now look at the second part of the sum. Using the inequality $(a+b+c)^2 \ge b^2/2 - 6a^2 - 6c^2$ we obtain that \begin{align*} \frac{1}{4m} \sum_{u \not\in U} &\sum_{i=1}^t \sum_{v \sim_i u} \alpha_u \left( \rho^{(i-1)}(u) - \rho^{(i-1)}(v) \right)^2 \\ &\ge \frac{1}{4m} \sum_{u \not\in U} \sum_{i=1}^t \sum_{v \sim_i u} \alpha_u \left( \rho^{(i-1)}(u) - \rho^{(0)}(u) + \rho^{(0)}(u) - \rho^{(0)}(v) + \rho^{(0)}(v) - \rho^{(i-1)}(v) \right)^2 \\ &\ge \frac{1}{8m} \sum_{u \not\in U} \sum_{i=1}^t \sum_{v \sim_i u} \alpha_u \left(\rho^{(0)}(u) - \rho^{(0)}(v) \right)^2 - \frac{3}{m} \sum_{u \not\in U} \sum_{i=1}^t \sum_{v \sim_i u} \alpha_u \left(\rho^{(0)}(u) - \rho^{(i-1)}(u) \right)^2.
\end{align*} By the definition of $U$, it follows that \begin{align*} \frac{3}{m} \sum_{u \not\in U} \sum_{i=1}^t \sum_{v \sim_i u} \alpha_u \left(\rho^{(0)}(u) - \rho^{(i-1)}(u) \right)^2 &\le \frac{1}{10m} \sum_{u \not\in U} \sum_{i=1}^t \sum_{v \sim_i u} \alpha_u \mathcal{E}_u\\ &\le \frac{1}{10 m} \sum_{u \not\in U} \sum_{i=1}^t \sum_{v \sim_i u} \alpha_u \left(\rho^{(0)}(u) - \rho^{(0)}(v) \right)^2. \end{align*} Putting together the last two sums, we have that \begin{align} \frac{1}{4m} \sum_{u \not\in U} \sum_{i=1}^t \sum_{v \sim_i u} &\alpha_u \left( \rho^{(i-1)}(u) - \rho^{(i-1)}(v) \right)^2 \ge \frac{1}{40m} \cdot \sum_{u \not\in U} \sum_{i=1}^t \sum_{v \sim_i u} \alpha_u \left( \rho^{(0)}(u) - \rho^{(0)}(v) \right)^2. \label{eq:sum2} \end{align} Finally, combining \eq{sum0} with \eq{sum1} and \eq{sum2}, it follows that \begin{align*} \var_{\pi}{\rho^{(0)}} - \var_{\pi}{\rho^{(t)}} &\ge \frac{1}{15 \cdot 4m \cdot t^2} \sum_{u \in U} \sum_{i=1}^t \sum_{v \sim_i u} \alpha_u \left(\rho^{(0)}(u) - \rho^{(0)}(v) \right)^2 \\ &\qquad + \frac{1}{40m} \cdot \sum_{u \not\in U} \sum_{i=1}^t \sum_{v \sim_i u} \alpha_u \left( \rho^{(0)}(u) - \rho^{(0)}(v) \right)^2 \\ &\ge \frac{1}{15 \cdot 4m \cdot t^2} \sum_{u \in V} \sum_{i=1}^t \sum_{v \sim_i u} \alpha_u \left(\rho^{(0)}(u) - \rho^{(0)}(v) \right)^2 \\ &= \frac{1}{15t} \mathcal{E}_{\overline{P}}(\rho^{(0)},\rho^{(0)}) \\ &\ge \frac{\lambda(\overline{P})}{15t}, \end{align*} where the last inequality follows from the relation between the Dirichlet form and the spectral gap of a transition matrix. \end{proof} \begin{proposition}[Proposition \ref{pro:nomixing} (restated)] For any $t=\omega(\log n)$, there is a sequence of connected $n$-vertex bounded-degree expander graphs ${\mathcal{G}} =\{G^{(i)}\}_{i=1}^{\infty}$ and a constant $c > 0$ so that $p_{u,v}^{(t)} \geq n^{-1+c}$ for some vertices $u$ and $v$.
\end{proposition} \begin{proof} Assume that $p^{(0)}=(1/n,\ldots,1/n)$ is the initial distribution, and let $p^{(i)}=p^{(0)} P^{(1)} P^{(2)} \cdots P^{(i)}$ be the resulting distribution after $i$ steps. Then select a set $S_1 \subseteq S_0=V$ of size $|S_1|=|S_0|/10$ and connect every vertex in $S_1$ to $6$ other vertices in $S_0 \setminus S_1$ in a way so that every vertex in $S_0 \setminus S_1$ has degree at most $1$ (so far). Then add a $3$-regular expander graph on all vertices $V$ to ensure that the resulting graph is a bounded-degree (but non-regular) expander. Note that for any vertex $u \in S_1$, \[ p^{(1)}(u) \geq \frac{1}{2} p^{(0)}(u) + \sum_{v \in S_0 \setminus S_1, v \sim_{1} u} \frac{1}{8} \cdot p^{(0)}(v) \geq \frac{10}{8} \cdot \frac{1}{n}, \] since every vertex $v \in S_0 \setminus S_1$ with $v \sim_{1} u$ has degree at most $1+3=4$. Now proceed by induction. So assume the existence of a subset $S_i$ such that $p^{(i)}(u) \geq (10/8)^i \cdot \frac{1}{n}$ for every $u \in S_i$ and $|S_i| \geq 10^{-i} \cdot n$. Then again, we choose an arbitrary set $S_{i+1} \subseteq S_i$ of size $|S_{i+1}| \geq |S_i|/10$ and proceed with the same construction as above, replacing $S_0$ by $S_i$ and $S_1$ by $S_{i+1}$. With this substitution we conclude that, for every $u \in S_{i+1}$, \[ p^{(i+1)}(u) \geq \frac{1}{2} p^{(i)}(u) + \sum_{v \in S_i \setminus S_{i+1}, v \sim_{i+1} u} \frac{1}{8} \cdot p^{(i)}(v) \geq \frac{10}{8} \cdot \left( \frac{10}{8} \right)^{i} \cdot \frac{1}{n} = \left( \frac{10}{8} \right)^{i+1} \cdot \frac{1}{n}. \] Thus we can proceed inductively as long as $|S_i| \geq 1$, which holds as long as $i \leq c' \cdot \log n$ for some constant $c' > 0$. To obtain the statement of the proposition for larger values of $t$, simply pad $t-c' \log n$ many $3$-regular expander graphs to the beginning of the sequence ${\mathcal{G}}$, which will leave the probability distribution unchanged until step $t-c' \log n$.
Thus for $p^{(0)}$ being the uniform distribution, we can infer that $p_{u,v}^{(t)} \geq n^{-1+c}$ for some constant $c > 0$, any $t=\omega(\log n)$, and suitable vertices $u,v \in V$. \end{proof} \begin{proposition}[Proposition \ref{pro:nohitting} (restated)] There is a sequence of $n$-vertex bounded-degree graphs ${\mathcal{G}} =\{G^{(i)}\}_{i=1}^{\infty}$ with transition matrices $\{P^{(i)}\}_{i=1}^{\infty}$ and a probability distribution $\pi$ such that (1) for any $i$, $\pi$ is stationary for $P^{(i)}$; (2) the average transition matrix $\overline{P}$ of any $4n$ consecutive steps is ergodic; (3) for any $t \ge 0$ there are two vertices $u,v$ such that $p_{u,v}^{[0,t]} \leq 2^{-(n/4)-2}$. Moreover, $t_{\mbox{\scriptsize{\textsf{mix}}}}({\mathcal{G}}) = O(\operatorname{poly}(n))$, while $t_{\mbox{\scriptsize{\textsf{hit}}}}({\mathcal{G}}) = 2^{\Omega(n)}$. There is also a sequence ${\mathcal{G}}'$ satisfying (1), (2), and (3) such that $t_{\mbox{\scriptsize{\textsf{mix}}}}({\mathcal{G}}') = 2^{\Omega(n)}$. \end{proposition} \begin{proof} W.l.o.g.~let $n$ be a multiple of $4$. Let $V$ be the vertex set of ${\mathcal{G}}$. Divide $V$ into $n/4$ buckets of size $4$ each, and label the buckets $V_1,V_2,\ldots,V_{k}$ with $k=n/4$. In the first $6$ steps, the graphs $G^{(1)},G^{(2)},\ldots,G^{(6)}$ will only have edges between $V_1$ and $V_2$, and in the next $6$ steps, there will be only edges between $V_2$ and $V_3$, and so on. Finally we apply the same scheme periodically, but in reversed order, i.e., $G^{(6(n/4)+i)}=G^{(6(n/4)+1-i)}$ for any $1 \leq i \leq 6(n/4)$. From then on we repeat the same sequence, i.e., $G^{(12(n/4)+i)}=G^{(i)}$ for any $i \geq 1$. In each $G^{(i)}$, with $1 \leq i \leq 6$, we will form a complete bipartite graph between two vertices in $V_1$ and (all) four vertices in $V_2$. The number of different bipartite subgraphs is $\binom{4}{2}=6$, so each $G^{(i)}$ will correspond to exactly one complete bipartite subgraph.
Note that by construction and the fact that we reversed the order in the second period, it is clear that the union of the graphs $G^{(1)},G^{(2)},\ldots,G^{(12(n/4))}$ is connected, and since we are considering lazy random walks, the average transition matrix $\overline{P}$ of such a sequence of graphs will be ergodic. Let us now verify that $\pi$ with $\pi(u) = 2^{-i-2}$ for any $u \in V_i$ is a stationary distribution, i.e., does not change over time when applied to the graph sequence. To this end, we simply note that the stationary distribution in a complete bipartite graph is proportional to the respective degrees, and since the degrees of the vertices in $V_i$ and $V_{i+1}$ differ by a factor of two, it follows that every $G^{(i)}$ will keep the distribution unchanged. Therefore $\pi$ is the unique stationary distribution of $\overline{P}$. The bound $t_{\mbox{\scriptsize{\textsf{hit}}}}({\mathcal{G}}) = 2^{\Omega(n)}$ follows trivially. To bound $t_{\mbox{\scriptsize{\textsf{mix}}}}$, notice that $\Phi(\overline{P}) = \Omega(1)$, and $t_{\mbox{\scriptsize{\textsf{mix}}}}({\mathcal{G}}) = O(\operatorname{poly}(n))$ follows from \thmref{average}. To obtain a sequence with exponential mixing and hitting times, we construct a sequence ${\mathcal{G}}'$ with vertex set $V \cup V'$ where $V'$ is a copy of $V$. Any graph in ${\mathcal{G}}'$ corresponds to two disjoint copies of the corresponding graph in ${\mathcal{G}}$, but every $3n+1$ steps, we add a perfect matching between the vertices in $V_{n/4}$ and the corresponding vertices in $V_{n/4}'$. It is not difficult to see that $t_{\mbox{\scriptsize{\textsf{mix}}}}({\mathcal{G}}') = 2^{\Omega(n)}$ (and, indeed, if we consider the average transition matrix $\overline{P}$ of any consecutive $3n + 2$ steps, it will be ergodic but with $\Phi(\overline{P}) = 2^{-\Omega(n)}$).
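The invariance of $\pi$ under each step can also be checked mechanically; the following sketch (Python; the number of buckets and the laziness convention $P = \frac{1}{2}I + \frac{1}{2}D^{-1}A$ are illustrative assumptions, with isolated vertices simply staying put) verifies that a vector proportional to $2^{-i}$ on $V_i$ is unchanged by every lazy bipartite step of the construction:

```python
from itertools import combinations

def lazy_step(p, edges, n):
    """Apply one step of the lazy walk P = I/2 + D^{-1}A/2; isolated vertices stay put."""
    deg = [0] * n
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v); adj[v].append(u)
        deg[u] += 1; deg[v] += 1
    out = [0.0] * n
    for u in range(n):
        if deg[u] == 0:
            out[u] += p[u]                 # isolated vertex: no move possible
        else:
            out[u] += p[u] / 2             # laziness
            for v in adj[u]:
                out[v] += p[u] / (2 * deg[u])
    return out

k = 5                                      # number of buckets, n = 4k vertices
n = 4 * k
buckets = [list(range(4 * i, 4 * i + 4)) for i in range(k)]
pi = [0.0] * n
for i, B in enumerate(buckets):
    for u in B:
        pi[u] = 2.0 ** (-(i + 1))          # proportional to 2^{-i}; normalisation irrelevant

ok = True
for i in range(k - 1):
    # all binom(4,2) = 6 complete bipartite graphs between V_i and V_{i+1}
    for pair in combinations(buckets[i], 2):
        edges = [(u, v) for u in pair for v in buckets[i + 1]]
        after = lazy_step(pi, edges, n)
        ok = ok and all(abs(a - b) < 1e-12 for a, b in zip(after, pi))
assert ok
```

The check succeeds because each chosen vertex of $V_i$ has degree $4$ and each vertex of $V_{i+1}$ has degree $2$, so the flows $\pi(u)/8$ and $\pi(v)/4$ across every edge coincide (detailed balance).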
\end{proof} \section{Omitted proofs and details from \secref{cutsets}} The following lemma borrows some ideas from the proof of Proposition 4.42 in the book by Aldous and Fill~\cite{aldousFill}. \begin{lemma}[Lemma~\ref{lem:general} (restated)] For any graph $G = (V,E)$ and $s,t \in V$, there exists a labelling of the vertices from $1$ to $n$ such that \[ C_{st} \le 2m \sum_{j=1}^{n-1}\frac{1}{|\partial [j]|}. \] Furthermore, by considering the reversal of the labelling, we can also conclude that $ C_{st} \le 4m \sum_{j=1}^{n/2} \frac{1}{|\partial [j]|}. $ \end{lemma} \begin{proof} Fix a function $g \colon V \to \mathbb{R}$ with $0 \le g \le 1$ and relabel the vertices so that $0 = g(1) \le g(2) \le \cdots \le g(n) = 1$, with $s = 1$ and $t = n$. Then we have \begin{align*} {\mathcal{E}}_P(g,g) &= \sum_i \sum_{i < k} \pi(i) P(i,k) (g(i) - g(k))^2 \\ &\ge \sum_i \sum_{i < k} \sum_{i\le j < k} \pi(i) P(i,k) (g(j+1) - g(j))^2 \\ &= \sum_{j} (g(j+1) - g(j))^2 Q([j], V \setminus [j]). \end{align*} By applying the Cauchy-Schwarz inequality we obtain \begin{align*} 1 = \left(\sum_j (g(j+1) - g(j))\right)^2 &= \left(\sum_j (g(j+1) - g(j)) \frac{Q([j], V \setminus [j])^{1/2}}{Q([j], V \setminus [j])^{1/2}}\right)^2 \\ & \le \sum_{j} (g(j+1) - g(j))^2 Q([j], V \setminus [j]) \cdot \sum_j \frac{1}{Q([j], V \setminus [j])} \\ & \le {\mathcal{E}}_P(g,g) \sum_j \frac{1}{Q([j], V \setminus [j])}, \end{align*} and hence by \eq{var_hit}, \begin{align*} C_{st} \le \sum_j \frac{1}{Q([j], V \setminus [j])} = 2m \sum_{j} \frac{1}{|\partial [j]|}. \end{align*} \end{proof} \begin{lemma}[Lemma \ref{lem:edgeconn} (restated)] Let $G=(V,E)$ be any graph with minimum degree $\delta$ so that any subset $S \subseteq V$ with $1 \leq |S| \leq n-1$ satisfies $|\partial S| \geq \rho$ (in other words, $G$ has edge-connectivity at least $\rho$). Then we have \[ \sum_{i=1}^{n-1} \frac{1}{|\partial[i]|} = O(n/\delta^2 \cdot \log \delta + n/(\rho \delta)).
\] \end{lemma} Before proving this lemma, as a warmup we use \lemref{general} to recover a well-known bound on the commute time for graphs with maximum degree $d$ and minimum degree $\delta$ (in particular, $O(n^2)$ for regular graphs): \[ C_{st} \leq 2 |E| \cdot O(n/\delta) = O\left(n^2 \cdot \frac{d}{\delta}\right). \] Specifically, we will prove that \begin{align} \sum_{j=1}^{n-1}\frac{1}{|\partial[j]|} = O(n/\delta). \label{eq:sum} \end{align} Note that whenever there is a $j$ for which \[ |\partial[j]| \in [1,\delta/4], \] then we must have \[ |\partial[j+1]| \geq \delta/2, \] since the vertex $j+1$ has degree at least $\delta$. For the same reason, at least $\delta/2$ edges from $j+1$ go to distinct endpoints in $\{j+2,\ldots,n\}$. Therefore, for any $1 \leq i \leq \delta/4$ we have \[ |\partial[j+i]| \geq \delta/4. \] Hence a simple amortisation argument shows that any sequence of $\delta/4$ consecutive terms in the sum of \eq{sum} contributes at most $O(1)$, and overall \[ \sum_{j=1}^{n-1}\frac{1}{|\partial [j] |} = O(n/\delta), \] as claimed. \begin{proof}[Proof of Lemma \ref{lem:edgeconn}] Let $1 \leq j \leq n-\epsilon \delta-1$ be arbitrary. We will now perform an amortised analysis on \begin{align*} \Delta_{j} &= \sum_{i=j}^{j+\epsilon \cdot \delta} \frac{1}{|\partial[i]|}, \end{align*} showing that it is bounded from above by $O(\log \delta/\delta)$, where $\epsilon$ is a sufficiently small constant (in particular, $\epsilon \leq 1/8$). We define a partition of the next $\epsilon \delta$ vertices $i \in \{j+1,\ldots,j+\epsilon \cdot \delta\}=:\mathcal{I}$ into two classes $1$ and $2$. Intuitively, a vertex $i$ will be in class $1$ if most of its neighbours are vertices in $\{i+1,\ldots,n\}$, while a vertex $i$ will be in class $2$ if most of its neighbours are vertices in $\{1,\ldots,i\}$. For technical reasons, however, it is advantageous to define the partition into classes based on the first $j$ vertices only.
This way we ensure that the division into classes is invariant under any permutation of the vertices in $\mathcal{I}$, which turns out to be useful in the proof. \textbf{Definition of Class 1.} We say that a vertex $u \in V \setminus [j]$ incident to an edge of $\partial[j]$ is in Class $1$ if it has at least $(7/8) \delta$ neighbours in $V \setminus [j]$. Note that for any rank $i \in \mathcal{I}$ assigned to $u$ in the linear order, $u$ will contribute at least $(7/8) \delta - (i-j) \geq (7/8) \delta - \epsilon \delta = (3/4) \cdot \delta$ cut-edges towards $\partial[k]$, for any $k$ with $i \leq k \leq j + \epsilon \delta$. Furthermore, any vertex $v \in V \setminus [j]$ that is not incident to $\partial[j]$ will also be in Class $1$. Again, if $v$ is assigned any rank $i \in \mathcal{I}$, it follows that $v$ will contribute at least $\delta - (i-j) \geq \delta - \epsilon \delta \geq (3/4) \delta$ cut-edges towards $\partial[k]$, for any $k$ with $i \leq k \leq j + \epsilon \delta$. To summarise, we can conclude that whenever a vertex $u$ from Class $1$ is assigned any rank $i$ between $j+1$ and $j+\epsilon \cdot \delta$, then it will contribute at least $(3/4) \cdot \delta$ cut-edges towards $\partial[k]$, for any $i \leq k \leq j + \epsilon \cdot \delta$. \textbf{Definition of Class 2.} We say that a vertex $u \in V \setminus [j]$ incident to $\partial[j]$ is in Class $2$ if it is not in Class $1$. This implies that vertex $u$ must have at least $(1/8) \delta$ neighbours in $[j]$. Notice that if $i \in \mathcal{I}$ is the rank of vertex $u$, then for any $j \leq k \leq i-1$, we have \[ |E(\{u\}, [k])| \geq |E(\{u\}, [j])| \geq (1/8) \delta, \] so $u$ contributes at least $(1/8) \delta$ cut-edges towards $\partial[k]$. \textbf{Lower Bounding the Number of Cut-Edges.} Based on the partition of the vertices in $\mathcal{I}$ into Class $1$ and Class $2$, we will now derive a lower bound on $|\partial[i]|$ for any $i \in \mathcal{I}$. This lower bound will be universal in the sense that it will hold for any ordering of the remaining vertices $\{j+1,\ldots,n\}$.
Intuitively, the worst-case scenario is if the first vertices in $\mathcal{I}$ are all in Class $2$, which may lead to a steady decrease in the edge-boundary. However, thanks to the definition of Class $1$, we eventually ``run out'' of vertices in Class $2$, which means that all remaining vertices will be in Class $1$, leading to a steady increase in the edge-boundary. Obviously, there may be orderings in which vertices in Class $1$ and Class $2$ are intermixed in an arbitrary manner, and the proof below has to cope with this case. Let us proceed with the formal argument. We will denote by $\gamma$ the number of vertices in $\mathcal{I}$ which are in Class $2$. Note that $0 \leq \gamma \leq |\partial[j]|$. Obviously, among the vertices $\{j+1,\ldots,j+i\}$, at most $\min\{i,\gamma\}$ vertices can be in Class $2$, meaning that among the vertices $\{j+i+1,\ldots,j+\epsilon \delta\}$ we have at least $\gamma - \min\{i,\gamma\} = \max\{\gamma - i,0\}$ vertices in Class $2$. This implies for any $i$ with $0 \leq i \leq \gamma - 1$, \[ |\partial[j+i]| \geq |E(\{j+i+1,\ldots,j+\epsilon \delta \},[j])| \geq (\gamma - i) \cdot (1/8) \delta. \] For the boundary case $i=\gamma$, we use the lower bound based on the edge-connectivity to conclude that \[ |\partial[j+\gamma]| \geq \rho. \] Furthermore, if $\gamma < \epsilon \cdot \delta$, then for any $i$ with $\gamma < i \leq \epsilon \cdot \delta$, there must be at least $(i-\gamma)$ vertices among $\{j+1,\ldots,j+i\}$ which are in Class 1, which in turn implies \[ |\partial[j+i]| \geq (i-\gamma) \cdot (3/4) \cdot \delta \geq (i-\gamma) \cdot (1/8) \delta. \] Combining the three inequalities from above yields \[ \Delta_j \leq 2 \sum_{k=1}^{\epsilon \delta} \frac{1}{k \cdot (1/8) \delta} + \frac{1}{\rho} = O(\log \delta/\delta + 1/\rho).
\] Since this holds for any $1 \leq j \leq n - \epsilon \delta - 1$, we conclude that \begin{align*} 2 \sum_{i=1}^{n/2} \frac{1}{|\partial[i]|} &\leq 2 \sum_{\lambda=0}^{n/(\epsilon \delta)-1} \Delta_{1+\lambda \cdot \epsilon \delta} \\ &\leq 2 \left\lceil \frac{n}{\epsilon \delta} \right\rceil \cdot O(\log \delta/\delta + 1/\rho) \\ &= O(n/\delta^2 \cdot \log \delta + n/(\rho \delta)), \end{align*} and the claimed bound on $\sum_{i=1}^{n-1} 1/|\partial[i]|$ follows by applying the same argument to the reversal of the labelling, as in the second statement of Lemma~\ref{lem:general}. \end{proof} \begin{lemma} \label{lem:connected} Let $g : V \to \mathbb{R}$ be a maximiser for \eq{var_hit}. Then, there exists a linear ordering of $V$ such that $0 = g(1) \le \cdots \le g(n) = 1$ and, for any $j=1,\dots,n$, $\{1,\dots,j\}$ forms a connected vertex-induced subgraph of $G$. \end{lemma} \begin{proof} Assume w.l.o.g.\ that $s=1$ and $t=n$, and suppose for contradiction that no such ordering exists. Then there exists $1 < j < n-1$ such that $\{1,\dots,j-1\}$ is connected, but $\{1,\dots,j\}$ is not, i.e., $j$ is adjacent only to vertices in $\{j+1,\dots,n\}$. Let $j'$ be the minimum index such that $j' > j$ and there exists an edge connecting $j'$ to $\{1,\dots,j-1\}$. Moreover, we can assume that $g(j') > g(j)$, otherwise we can reorder the vertices so that $\{1,\dots,j\}$ is connected. We construct a new function $h : V \to \mathbb{R}$ as follows: $h(i) = g(i)$ for any $i=1,\dots,j-1$, $h(i) = g(j')$ for any $i=j,\dots,j'$, and $h(i) = g(i)$ for any $i > j'$. Since there are no edges between $j$ and $\{1,\dots,j'-1\}$, but the graph is connected, there must be an edge $\{j,k\}$ with $k \in \{j',\dots,n\}$. But then we have that $(h(k)-h(j))^2 < (g(k)-g(j))^2$ and, in general, $h^{\intercal} {\mathcal{L}} h < g^{\intercal} {\mathcal{L}} g$. This implies that $g$ is not a maximiser for \eq{var_hit}, reaching a contradiction.
\end{proof} \begin{lemma} \label{lem:optimalconn} For any pair of $\rho$ and $d$ there is a graph matching the upper bound in Theorem~\ref{thm:connectivity} up to a factor of $O(\log d)$. \end{lemma} \begin{proof} We show that, for any degree $d$ and edge-connectivity $\rho$, there exists a graph that matches the bound of \lemref{edgeconn} up to an $O(\log{d})$-factor. In fact, we take the same graph as in \cite[Example~6.24]{nonrevFill} by Aldous and Fill. In this example, we have $n$ vertices $\{0,1,\ldots,n-1\}$ and the edge set consists of the edges $\{i,(i+u) \bmod n\}$ for any $0 \leq i \leq n-1$ and $1 \leq u \leq \rho$. As they argue, the maximum commute time is $\Theta(n^2 / \rho)$, while their bound only gives $O(n^2 / \rho^{1/2})$. Note that our bound gives $O( n^2 \cdot \log \rho/ \rho)$, which is tight up to a logarithmic factor. \end{proof} \section{Bounds on mixing based on average transition probabilities} \label{sec:average} Unlike in the time-homogeneous case, the eigenvalues of the individual transition matrices of a time-inhomogeneous Markov chain are not necessarily indicative of its mixing time, even when there exists a unique time-independent stationary distribution. An emblematic example is the following: consider a sequence of graphs $\mathcal{G} = \{G^{(t)}\}_{t=1}^{\infty}$ defined over a vertex set $V = \{1,\dots,2n\}$ such that, at odd $t$, $G^{(t)}$ is the union of two expanders (graphs with constant spectral gap), one over $\{1,\dots,n\}$, the other over $\{n+1,\dots,2n\}$, while at even $t$, $G^{(t)}$ is a perfect matching between $\{1,\dots,n\}$ and $\{n+1,\dots,2n\}$. Since all the graphs are disconnected, each transition matrix has spectral gap equal to $0$, and eigenvalue bounds are, in this case, useless for analysing convergence to stationarity. On the other hand, it is quite clear that a lazy random walk on $\mathcal{G}$ mixes in $\Theta(\log{n})$ time.
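This behaviour is easy to reproduce numerically; in the following sketch (Python), the two expanders are replaced by $3$-regular circulants for concreteness, the walk is lazy, and the point mass at vertex $0$ is driven close to uniform even though every single graph in the sequence is disconnected:

```python
half = 16
n = 2 * half

# Odd steps: two disjoint 3-regular circulants (one per half), a stand-in for expanders.
adj_exp = [[] for _ in range(n)]
for base in (0, half):
    for i in range(half):
        for off in (1, half - 1, half // 2):
            adj_exp[base + i].append(base + (i + off) % half)

# Even steps: a perfect matching between the two halves.
adj_match = [[i + half] if i < half else [i - half] for i in range(n)]

def lazy_step(p, adj):
    """Lazy walk: stay with probability 1/2, otherwise move to a uniform random neighbour."""
    out = [x / 2 for x in p]
    for u, nbrs in enumerate(adj):
        share = p[u] / (2 * len(nbrs))
        for v in nbrs:
            out[v] += share
    return out

p = [0.0] * n
p[0] = 1.0
for t in range(1, 601):
    p = lazy_step(p, adj_exp if t % 2 == 1 else adj_match)

tv = 0.5 * sum(abs(x - 1.0 / n) for x in p)   # total variation distance to uniform
assert tv < 1e-6
```

Intuitively, the odd steps smooth the distribution within each half while the even steps average the two halves, so the deviation from uniform decays geometrically.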
A more precise way to study mixing in time-inhomogeneous random walks would be to consider the spectral gap of the product of the transition matrices $P^{(1)} \cdots P^{(t)}$. Unfortunately, spectral bounds for the product of matrices are notoriously hard to come by. What is significantly easier is to study the \emph{average} transition matrix $\overline{P} =\frac{1}{t}\left(P^{(1)} + P^{(2)} + \cdots + P^{(t)}\right)$, which at least does not depend on the order in which the transition matrices appear. For this reason, in this section we give bounds on mixing on $\mathcal{G}$ that depend on the Dirichlet form of $\overline{P}$. In particular, consider the aforementioned example where $G^{(t)}$ consists of two disjoint expanders at odd times, and of a perfect matching between the two sets at even times. Consider the average transition matrix $\overline{P} =\frac{1}{2}\left(P^{(\ell)} + P^{(\ell + 1)}\right)$ for any two consecutive steps $\ell,\ell + 1$: $\overline{P}$ is just the transition matrix of a random walk on an expander graph defined over the entire set of vertices. Our results then allow us to easily derive the correct bound $t_{\mbox{\scriptsize{\textsf{mix}}}}(\mathcal{G}) = O(\log{n})$. Throughout this section we assume that $\mathcal{G} = \{G^{(t)}\}_{t=1}^{\infty}$ is a sequence of undirected graphs over a vertex set $V$ with $|V| = n$. The graphs are not necessarily connected, which means they might have multiple stationary distributions. We require, however, that there exists a time-independent distribution $\pi$ which is a stationary distribution for all the graphs in $\mathcal{G}$. Fixing a time interval $[t_1,t_2]$, we consider $\overline{P} =\frac{1}{t_2-t_1+1}\left(P^{(t_1)} + \cdots + P^{(t_2)}\right)$. We consider time intervals for which $\overline{P}$ is irreducible. Notice that since the $P^{(i)}$'s are strongly aperiodic and reversible with respect to $\pi$, so is $\overline{P}$.
Therefore, we can always assume that $\overline{P}$ is ergodic and has a unique stationary distribution $\pi$, unlike the individual $P^{(i)}$'s. For simplicity, we assume in our proofs that each graph in $\mathcal{G}$ has the same number of edges $m$. Our results, however, also hold for sequences of graphs with different edge densities. Notice that, by the detailed balance condition, if $u \sim_i v$ for some step $i$, then $\pi(u) / \pi(v) = d_v / d_u$, where $d_u$ and $d_v$ are, respectively, the (time-independent) degrees of $u$ and $v$\footnote{It may happen that $u$ is isolated in some round $i$, leading to $u$ having degree $0$ in that round. However, in that case, $u$ can be safely ignored when computing ${\mathcal{E}}_{P^{(i)}}$. Hence, because the stationary distribution is always the same and so is the number of edges, we may assume that the degree of $u$ is always $d_u$.}. In particular, this means there exists some $\alpha_u \ge 0$, which is independent of $t$, such that $\pi(u) = \alpha_u d_u/2m$ and $\pi(v) = \alpha_u d_v/2m$. \begin{lemma} \label{lem:imp} Let $p^{(0)}$ be an arbitrary initial probability distribution, and $\rho^{(0)} = p^{(0)}/\pi$. Suppose that for some $t \ge 1$ and $u \in V$, $| \rho^{(t)}(u) - \rho^{(0)}(u) | \ge \epsilon > 0$. Then, \[ \var_{\pi}{\rho^{(0)}} - \var_{\pi}{\rho^{(t)}} \ge \frac{\alpha_u}{4m} \sum_{i=1}^t \sum_{v \sim_i u} \left( \rho^{(i-1)}(u) - \rho^{(i-1)}(v) \right)^2 \ge \frac{2\epsilon^2 \pi(u)}{t}. \] \end{lemma} We are now able to relate $\var_{\pi} \rho^{(0)} - \var_{\pi} \rho^{(t)}$ to $\mathcal{E}_{\overline{P}}(\rho^{(0)},\rho^{(0)})$. The proof of the next theorem works roughly as follows. We divide the vertices into two classes: $U$ contains all the vertices $u$ for which there exists an index $1 \le i \le t-1$ such that $\rho^{(i)}(u)$ differs \emph{significantly} from $\rho^{(0)}(u)$, while $V \setminus U$ contains the rest.
We then use \lemref{imp} to lower bound the contribution given by vertices in $U$ to $\var_{\pi} \rho^{(0)} - \var_{\pi} \rho^{(t)}$. Since for $u \not\in U$, $\rho^{(i)}(u)$ has not changed much from $\rho^{(0)}(u)$, we can instead directly lower bound its contribution to $\var_{\pi} \rho^{(0)} - \var_{\pi} \rho^{(t)}$ by looking at its contribution to $\mathcal{E}_{\overline{P}}(\rho^{(0)},\rho^{(0)})$. \begin{theorem} \label{thm:average} Given a time interval of length $t$ labelled $[1,t]$, let $\overline{P} =\frac{1}{t}(P^{(1)} + P^{(2)} + \cdots + P^{(t)})$ with spectral gap $\lambda(\overline{P})$. Then, for any initial probability distribution $p^{(0)}$ with likelihood $\rho^{(0)} = p^{(0)}/\pi$, it holds that \[ \var_{\pi} \rho^{(0)} - \var_{\pi} \rho^{(t)} \ge \frac{1}{15t} \mathcal{E}_{\overline{P}}(\rho^{(0)},\rho^{(0)}) \ge \frac{\lambda(\overline{P})}{15t}. \] \end{theorem} We remark that we do not know whether the dependency on $t$ in the bound of \thmref{average} (which appears as a result of an application of the Cauchy-Schwarz inequality) is tight, or even if any dependency on $t$ is needed at all. From \thmref{average} it is easy to derive the following corollary: \begin{corollary} \label{cor:average} Consider a lazy random walk on a sequence ${\mathcal{G}}$ of graphs with transition matrices $\{P^{(i)}\}_{i=1}^{\infty}$ such that (1) there exists $\pi$ which is a stationary distribution for any $P^{(i)}$; (2) there exists a time-window $t \ge 0$ such that, for any $i \ge 0$, $\overline{P}^{[i,i+t]} =\frac{1}{t+1}(P^{(i)} + P^{(i+1)} + \cdots + P^{(i+t)})$ is ergodic and has spectral gap $\lambda\bigl(\overline{P}^{[i,i+t]}\bigr) \ge \lambda > 0$. Then, $ t_{\mbox{\scriptsize{\textsf{mix}}}}({\mathcal{G}}) = O\bigl(\frac{t^2\log(1/\pi_*)}{\lambda}\bigr)$.
\end{corollary} To highlight the applicability of Corollary \ref{cor:average}, consider a sequence of connected graphs $\mathcal{G}$ with time-independent stationary distribution $\pi$ in which, for any interval of $t$ consecutive steps and any subset of vertices $A$, there exists a transition matrix $P^{(i)}$ of a graph in the interval such that $\Phi_{P^{(i)}}(A) \ge \phi$. Then, $\Phi\left(\overline{P}\right) \ge \phi/t$ and $\lambda\left(\overline{P}\right) \ge \phi^2/t^2$. Hence, Corollary \ref{cor:average} gives us that $t_{\mbox{\scriptsize{\textsf{mix}}}}(\mathcal{G}) = O(t^4 \log{n}/ \phi^2 )$. Another natural question is whether our condition on the stationary distribution being fixed could be relaxed. This question is answered negatively by the following result: \begin{proposition}\label{pro:nomixing} For any $t=\omega(\log n)$, there is a sequence of connected $n$-vertex bounded-degree expander graphs ${\mathcal{G}} =\{G^{(i)}\}_{i=1}^{\infty}$ and a constant $c > 0$ so that $p_{u,v}^{(t)} \geq n^{-1+c}$ for some vertices $u$ and $v$. \end{proposition} In \secref{worsthit} and \secref{twopluseps} we have shown that the behaviour of a lazy random walk on a sequence of \emph{connected} graphs with the same stationary distribution is comparable to the behaviour of random walks on static graphs, at least regarding mixing and hitting times. When the graphs are disconnected, however, the behaviour of random walks on dynamic graphs becomes more complicated. \thmref{average} shows that, if every $t$ steps the average of the transition matrices applied in those steps is irreducible and strongly aperiodic with stationary distribution $\pi$, then the random walk will converge to $\pi$. However, $\pi$ can be highly imbalanced and, as a result, mixing and hitting times can be exponential in $t$ and the number of vertices $n$. The next proposition gives an example of this behaviour.
\begin{proposition} \label{pro:nohitting} There is a sequence of $n$-vertex bounded-degree graphs ${\mathcal{G}} =\{G^{(i)}\}_{i=1}^{\infty}$ with transition matrices $\{P^{(i)}\}_{i=1}^{\infty}$ and a probability distribution $\pi$ such that (1) for any $i$, $\pi$ is stationary for $P^{(i)}$; (2) the average transition matrix $\overline{P}$ of any $4n$ consecutive steps is ergodic; (3) for any $t \ge 0$ there are two vertices $u,v$ such that $p_{u,v}^{[0,t]} \leq 2^{-(n/4)-2}$. Moreover, $t_{\mbox{\scriptsize{\textsf{mix}}}}({\mathcal{G}}) = O(\operatorname{poly}(n))$, while $t_{\mbox{\scriptsize{\textsf{hit}}}}({\mathcal{G}}) = 2^{\Omega(n)}$. There is also a sequence ${\mathcal{G}}'$ satisfying (1), (2), and (3) such that $t_{\mbox{\scriptsize{\textsf{mix}}}}({\mathcal{G}}') = 2^{\Omega(n)}$. \end{proposition} \section{Bounds in terms of average edge connectivity}\label{sec:cutsets} Recall that in Section~\ref{sec:twopluseps} we proved several bounds which hold for graphs with sufficient expansion for small sets of vertices. Following a different direction, we now derive bounds on commute times for random walks on $d$-regular \emph{static} graphs based on \emph{average} connectivity measures (see the end of Section $2$ for some basic relations between the commute time and the hitting time). We assume $G = (V,E)$ is a connected, undirected and static graph with vertex set $V = \{1,\dots,n\}$ and $m$ edges. We denote with $P$ the transition matrix of a lazy random walk on $G$ and with $\pi$ its stationary distribution. Given $A, B \subseteq V$, the probability flow between $A$ and $B$ is defined as $\sum_{u \in A} \sum_{v \in B} \pi(u) P(u,v)$. The edge boundary of $A$, denoted with $\partial A$, is the set of edges with one endpoint in $A$ and one in $V \setminus A$. For ease of notation we define $[i] = \{1,\dots,i\}$. Also recall that we denote with $C_{st}$ the expected commute time between $s$ and $t$.
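It is instructive to compute these quantities on a toy example. The sketch below (Python; a simple, non-lazy walk on a path, for which laziness would merely double the commute time) computes $C_{st}$ exactly by first-step analysis and compares it with the prefix-boundary sum $2m \sum_j 1/|\partial[j]|$ under the natural labelling; on the path the two quantities coincide:

```python
def hitting_times(adj, target):
    """Expected hitting times to `target` for a simple random walk, by solving
    (I - P_restricted) h = 1 with Gauss-Jordan elimination (pure Python)."""
    nv = len(adj)
    idx = [v for v in range(nv) if v != target]
    pos = {v: i for i, v in enumerate(idx)}
    A = [[0.0] * (len(idx) + 1) for _ in idx]   # augmented system
    for i, v in enumerate(idx):
        A[i][i] = 1.0
        A[i][-1] = 1.0
        for w in adj[v]:
            if w != target:
                A[i][pos[w]] -= 1.0 / len(adj[v])
    msize = len(idx)
    for c in range(msize):
        piv = max(range(c, msize), key=lambda r: abs(A[r][c]))
        A[c], A[piv] = A[piv], A[c]
        for r in range(msize):
            if r != c and A[r][c] != 0.0:
                f = A[r][c] / A[c][c]
                A[r] = [a - f * b for a, b in zip(A[r], A[c])]
    h = [0.0] * nv
    for i, v in enumerate(idx):
        h[v] = A[i][-1] / A[i][i]
    return h

n = 7                                        # path 0 - 1 - ... - 6
adj = [[1]] + [[i - 1, i + 1] for i in range(1, n - 1)] + [[n - 2]]
s, t = 0, n - 1
C_exact = hitting_times(adj, t)[s] + hitting_times(adj, s)[t]

edges = [(i, i + 1) for i in range(n - 1)]
def boundary(j):                             # size of the edge boundary of {0,...,j-1}
    return sum(1 for u, v in edges if (u < j) != (v < j))
bound = 2 * len(edges) * sum(1.0 / boundary(j) for j in range(1, n))

assert abs(C_exact - 2 * (n - 1) ** 2) < 1e-6   # commute time 2(n-1)^2 between endpoints
assert C_exact <= bound + 1e-6                   # the boundary sum is an upper bound (tight here)
```

On the path every prefix boundary has exactly one edge, so the sum evaluates to $2m(n-1) = 2(n-1)^2$, matching the exact commute time between the endpoints.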
We will use the following variational characterisation of the commute time (see Aldous and Fill, \cite[Theorem 3.36]{aldousFill}): \begin{equation} \label{eq:var_hit} C_{st} = \max_{g \colon V \to \mathbb{R}} \{ 1/{\mathcal{E}}_P(g,g) \colon 0 \le g \le 1, g(s) = 0, g(t) = 1 \}. \end{equation} \begin{lemma}\label{lem:general} For any graph $G = (V,E)$ and $s,t \in V$, there exists a labelling of the vertices from $1$ to $n$ such that \[ C_{st} \le 2m \sum_{j=1}^{n-1}\frac{1}{|\partial [j]|}. \] Furthermore, by considering the reversal of the labelling, we can also conclude that $ C_{st} \le 4m \sum_{j=1}^{n/2} \frac{1}{|\partial [j]|}. $ \end{lemma} Note that the well-known Nash-Williams inequality \cite[Proposition 9.15]{levinPeres} gives a very similar lower bound: it states that for every collection $\{E_1,E_2,\ldots,E_k\}$ of edge-disjoint cutsets separating $s$ from $t$, $C_{st} \geq 2m \sum_{j=1}^{k} \frac{1}{|E_j|}$. Note, however, that in our upper bound the cutsets $\partial[j]$ are in general not edge-disjoint. Another interesting consequence can be obtained by expressing the above upper bound in terms of a variant of the conductance profile. To this end, assume $G$ is $d$-regular and for any $1 \leq k \leq n/2$, let $\Phi_{k} = \min_{S \subseteq V, |S|=k} \frac{|E(S, V \setminus S)|}{d \cdot |S|}$. Then, \[ C_{st} \leq 4 nd \sum_{j=1}^{n/2} \frac{1}{\Phi_j \cdot d j} = 4n \sum_{j=1}^{n/2} \frac{1}{\Phi_j \cdot j} \leq 4n \log n \cdot \frac{1}{\Phi}. \] Also let us note that if we replace $C_{st}$ by the maximum commute time, and use the Random Target Lemma~\cite{aldousFill} as a lower bound, we obtain the inequality \begin{align} \sum_{k=2}^n \frac{1}{1-\lambda_k} \leq 4n \sum_{j=1}^{n/2} \frac{1}{\Phi_j \cdot j}. \label{eq:interesting} \end{align} Note that for the graph $K_{n/2} \times K_2$, $\Phi_j \sim \frac{n/2-j}{n/2}$, and the right hand side is $O(n \log n)$, whereas the left hand side is $\Omega(n)$.
Hence this inequality is almost tight for certain graphs, and could be seen as an ``average version'' of Cheeger's inequality. \begin{remark} As we will prove (see appendix, \lemref{connected}), the lemma above holds even for a labelling such that the subgraph induced by $[i]$ is connected for every $1 \leq i \leq n$. \end{remark} \subsection{Commute times and edge-connectivity} We now apply \lemref{general} to obtain a bound on commute times that depends on the edge-connectivity of the graph, improving a result by Aldous and Fill \cite[Proposition~6.22]{aldousFill}. \begin{lemma}\label{lem:edgeconn} Let $G=(V,E)$ be any graph with minimum degree $\delta$ so that any subset $S \subseteq V$ with $1 \leq |S| \leq n-1$ satisfies $|\partial S| \geq \rho$ (in other words, $G$ has edge-connectivity at least $\rho$). Then we have \[ \sum_{i=1}^{n-1} \frac{1}{|\partial[i]|} = O(n/\delta^2 \cdot \log \delta + n/(\rho \delta)). \] \end{lemma} \begin{theorem}\label{thm:connectivity} Let $G=(V,E)$ be any graph with minimum degree $\delta$, average degree $\overline{d}$ and edge-connectivity $\rho$. Then any commute time is bounded by $O(n^2 \overline{d} \cdot ( \frac{\log \delta}{\delta^2} + \frac{1}{\delta \rho} ) )$. \end{theorem} \begin{proof} Simply combining \lemref{edgeconn} with~\lemref{general} yields the statement of the theorem. \end{proof} We remark that Aldous and Fill~\cite[Proposition~6.22]{nonrevFill} proved that for any graph $G$ with average degree $\overline{d}$ which is $\rho$-edge-connected, the maximum commute time is $O(n^2 \overline{d} \cdot \rho^{-3/2})$. They also mention that if the graph is $\Omega(d)$-edge-connected, they obtain a bound of $O(n^2 \cdot d^{-1/2})$. For this case of maximal edge-connectivity, $\rho=\Theta(d)$, our bound is considerably better than the one by Aldous and Fill and, modulo the $\log d$-factor, also gives the correct dependency on $d$.
Furthermore, since the edge-connectivity $\rho$ satisfies $\rho \leq \delta \leq d$, it is easy to verify that our bound is never worse than the bound in Aldous and Fill. In fact, as soon as $\delta = \omega(1)$, our upper bound will be asymptotically smaller than the bound by Aldous and Fill. \begin{remark}[Proved in \lemref{optimalconn}] For any pair of $\rho$ and $d$ there is a graph matching the upper bound in Theorem~\ref{thm:connectivity} up to a factor of $O(\log d)$. \end{remark} \newpage \section{Introduction} \textbf{Problem and Motivation.} A random walk is a stochastic process on an undirected connected graph $G=(V,E)$. A particle starts on a specified vertex, and then at each time-step $t=1,2,\ldots$ it moves to a neighbouring vertex chosen uniformly at random. Random walks have proven to be extremely powerful in the design of various sampling schemes, exploration strategies, and distributed algorithms~\cite{LPW06}. They provide a simple yet robust way to explore a large network. Most of the studies on random walks, however, assume the underlying graph to be fixed. In contrast, many prevalent networks today (such as the Internet, social networks, and wireless communication networks) are subject to dramatic changes in their topology over time. Therefore, understanding the theoretical power and limitations of dynamic graphs has been identified as one of the key challenges in computer science~\cite{MS18}. Recently, several works have considered this problem and investigated the behaviour of random walks \cite{AugustinePR16,avin,DenysyukR14,spirakis,merging1,merging2} or similar processes \cite{icalp2016voter,ClementiCDFPS16,ClementiST15,GiakkoupisSS14,KuhnO11} on such dynamic graphs. Moreover, rather than being a property of the underlying network itself, dynamic graphs may naturally arise in distributed algorithms when communication is performed on a changing, possibly disconnected, subgraph like a spanning tree or a matching (see, e.g., \cite{gossip}).
One very popular model is that of an evolving graph, where we consider a sequence of graphs $G^{(1)},G^{(2)},\ldots$ over the same set of vertices but with a varying set of edges. This model has been the subject of the majority of previous studies of random walks on dynamic graphs and will be the object of our study as well. Another important feature of dynamic networks is that, with a changing set of edges, the resulting connectivity (i.e., expansion) changes. This might be very common in communication networks, where nodes change their location in space over time and can only communicate if they are within a certain distance of each other. For example, \cite{KuhnO11} highlights the need to study such evolving graphs with relatively poor connectivity and \cite{MS18} emphasises the unpredictable nature of fast-changing dynamic networks. To incorporate these features into our model, we will consider evolving graphs with relatively mild assumptions on their connectivity and will not make any restriction on how fast they are changing. Our quantitative analysis is focused on the {\em mixing time}, the time to converge to the equilibrium distribution, and the {\em hitting time}, the expected number of steps required by a random walk that starts in a vertex $u$ to reach a vertex $v$. Analysing the mixing time of dynamic graphs is also useful for load balancing applications, where the mixing time represents the time it takes for all nodes to have (roughly) a load that is proportional to their stationary distribution. Most theoretical studies of load balancing so far assumed the graph to be fixed. 
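As a concrete illustration of the model and of the hitting time just defined, the following sketch (Python; the evolving graph alternates between two $2$-regular circulants on $\mathbb{Z}_8$, an arbitrary toy choice) computes an exact expected hitting time for a lazy walk on an evolving graph by summing the survival probabilities $\Pr[T > t]$:

```python
n = 8                      # vertices of Z_n; the sequence alternates two 2-regular circulants
target, start = 4, 0

def lazy_circulant_step(q, offs):
    """One lazy step on the circulant with connection set {+offs, -offs} (stay w.p. 1/2)."""
    out = [x / 2 for x in q]
    for u in range(n):
        for o in (offs, n - offs):
            out[(u + o) % n] += q[u] / 4
    return out

# Survival method: q holds the sub-probability of not having hit `target` yet,
# so the expected hitting time is E[T] = sum over t of P(T > t).
q = [0.0] * n
q[start] = 1.0
E_T, t = 0.0, 0
while sum(q) > 1e-12:
    E_T += sum(q)
    t += 1
    q = lazy_circulant_step(q, 1 if t % 2 == 1 else 3)
    q[target] = 0.0        # absorb mass that reaches the target
assert 2 < E_T < 200
```

Both circulants are connected and regular, so the uniform distribution is stationary throughout; this is exactly the time-independent stationary distribution setting studied below.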
\textbf{Our Results.} The main motivation for our work comes from the results by Avin et al.~\cite{avin}, which describe a remarkable dichotomy with respect to the behaviour of random walks in evolving graphs: while sequences of connected graphs that share the same stationary distribution are guaranteed to have mixing and hitting times polynomial in the size of the graphs, even small incremental changes to the stationary distribution can cause hitting times to become exponential in the worst case. We focus on the first case of this dichotomy and prove that, at least regarding mixing and hitting times, there is essentially no difference in the behaviour of random walks on static and evolving graphs with a time-independent stationary distribution. Recall that, for static graphs, it is well-known that the worst-case hitting time is $O(n^2)$ for regular graphs and $O(n^3)$ for arbitrary graphs \cite{Fe95a,Fe95b}. Quite surprisingly, we can show that something very similar holds in the setting of evolving graphs: our theorem below proves an upper bound of $O(n^2)$ for the mixing and hitting times of regular evolving graphs, which is optimal even for static graphs; an upper bound of $O(n^3)$ for the mixing time of non-regular evolving graphs, which is again optimal even for static graphs; and an $O(n^3 \log{n})$ upper bound for the maximum hitting time, which is only a factor of $O(\log{n})$ short of the optimal bound on static graphs (simply consider the Barbell graph, i.e., two cliques of size $n/3$ connected by a path of length $n/3$, which has $\Theta(n^3)$ mixing and maximum hitting times). \begin{maintheorem}[{restated, see Theorem~\ref{thm:worsthit} on page~\pageref{thm:worsthit}}]\label{thm:firstmain} Let $\mathcal{G}=\{G^{(t)}\}_{t=1}^{\infty}$ be a sequence of connected graphs with $n$ vertices, the same stationary distribution $\pi$, and at most $m^*$ edges.
Then: \begin{enumerate}\itemsep0pt \item $t_{\mbox{\scriptsize{\textsf{mix}}}}(\mathcal{G}) = O(n/\pi_*)$, \item $\bigl| \frac{p^{[0,t]}_{u,v}}{\pi_v} - 1 \bigr| \lesssim \frac{m^*}{t} + \frac{1}{\pi_* \sqrt{t}}$, simplifying to $\bigl| \frac{p^{[0,t]}_{u,v}}{\pi_v} - 1 \bigr| \lesssim \frac{n}{\sqrt{t}}$ if all graphs in $\mathcal{G}$ are $d$-regular, \item $t_{\mbox{\scriptsize{\textsf{hit}}}}(\mathcal{G}) = O(n \log{n}/\pi_*)$. Furthermore, if the graphs in $\mathcal{G}$ are $d$-regular, $t_{\mbox{\scriptsize{\textsf{hit}}}}(\mathcal{G}) = O(n^2)$. \end{enumerate} \end{maintheorem} \begin{remark} In this work, we never explicitly derive upper bounds on the cover time (i.e., the expected time for a random walk to visit all vertices). However, analogous to Matthews' bound for static graphs~\cite[Chapter~11.2]{LPW06}, all the {\em stated} upper bounds on hitting times can be converted into upper bounds on cover times at the cost of an additional $O(\log n)$-factor. \end{remark} \begin{remark} Unlike static graphs, where the gap between cover and hitting times is $O(\log n)$ (thanks to Matthews' bound), for evolving graphs the gap can be $\Omega(n)$ even if the sequence consists of regular connected graphs. For example, for any $t \leq c n \ln n$, let $G^{(t)}$ be a complete graph with $n$ vertices, while for any $t > c n \ln n$, let $G^{(t)}$ be a cycle with $n$ vertices. We can choose the constant $c$ so that, with probability $1-\Theta(n^{-1})$, any fixed vertex is visited before $c n \ln n$ steps, but with constant nonzero probability, there is at least one unvisited vertex which is at distance $\Omega(n)$ from the location of the walk at step $c n \ln n$. This yields a $\Theta(n)$ maximum hitting time, but a $\Theta(n^2)$ cover time. \end{remark} A natural question is of course under which conditions the worst-case bound on the hitting time can be improved.
For static graphs, it has been observed that for many regular networks, the hitting time is indeed optimal, i.e., $O(n)$. One very general and unifying condition is the conjecture of Aldous and Fill~\cite[Open Problem 6.20]{aldousFill}, stating that for any bounded-degree, $d$-regular graph, an isoperimetric dimension of $2+\epsilon$ is enough for hitting times to be linear (which is as good as possible). Since the isoperimetric dimension of a grid equals its dimension, it follows that grids of dimension $3$ or higher have a linear hitting time, while grids of dimension $2$ have a hitting time of $O(n \log n)$. For static graphs, a positive answer to the above conjecture by Aldous and Fill was first given by \cite{benjamini}, and another proof was found by \cite{gharan}. Both these proofs, however, exploit the connection between hitting times and electrical resistances~\cite{chandra}, which is not known to exist for evolving graphs (indeed, it is not even clear how to formalise such a connection). Since our techniques for bounding hitting times are more probabilistic in nature and avoid arguments based on electrical networks, we are able to show that the conjecture by Aldous and Fill is true even in a dynamic setting. \begin{maintheorem}[restated, see Theorem~\ref{thm:twopluseps} on page~\pageref{thm:twopluseps}] \label{main:twopluseps} Let $\mathcal{G}=\{G^{(t)}\}_{t=1}^{\infty}$ be a sequence of $n$-vertex graphs such that each $G^{(t)}$ is regular, has bounded degree, and satisfies the following isoperimetric condition: there exists $\epsilon \in [0,1/4]$ such that, for any subset of vertices $A$ with $1 \le |A| \le n/2$, $|E(A,V \setminus A)| = \Omega(|A|^{\frac{1}{2} + \epsilon})$.
Then, \begin{enumerate}\itemsep0pt \item $t_{\mbox{\scriptsize{\textsf{mix}}}}(\mathcal{G}) = O(n^{1-2\epsilon})$, \item $\bigl| \frac{p^{[0,t]}_{u,v}}{1/n} - 1 \bigr| = O\left(\frac{1}{t^{1+2\epsilon}}\right)$, \item $t_{\mbox{\scriptsize{\textsf{hit}}}}(\mathcal{G}) = O(n)$ if $\epsilon>0$, $t_{\mbox{\scriptsize{\textsf{hit}}}}(\mathcal{G}) = O(n\log{n})$ if $\epsilon=0$. \end{enumerate} \end{maintheorem} Note that the isoperimetric condition essentially says that each graph in the sequence must be at least as well-connected as a $(2+\epsilon)$-dimensional grid. For $\epsilon=0$, we recover the $O(n \log n)$ hitting time for static two-dimensional grids. Both of these cases might be relevant in certain applications of moving wireless devices or robots performing terrain exploration. The first two results apply to settings where there is ``stable connectivity'', but each graph in the sequence may have a relatively poor expansion. The next result applies to scenarios where connectivity is more intermittent; in fact, some vertices may even be isolated at certain time steps. However, when ``averaged'' over a sufficiently long time window, the graph is not only connected but may also satisfy some reasonably strong expansion guarantee. In this sense, this model is somewhat related to that of \cite{KLO10}, which stipulates the existence of a spanning subgraph over any time-interval of a certain length. More formally, in the next theorem we assume that the random walk is on a sequence ${\mathcal{G}}$ of graphs with transition matrices $P^{(1)},P^{(2)},\ldots$ and that there exists a time-independent distribution $\pi$ which is stationary for every $P^{(i)}$. We remark that we do not assume connectivity and, therefore, any individual $P^{(i)}$ might have multiple stationary distributions.
We assume, however, that there exists a large enough time window $t$ such that, for any $i \ge 0$, $\overline{P}^{[i,i+t]} =\frac{1}{t}(P^{(i+1)} + P^{(i+2)} + \cdots + P^{(i+t)})$ is ergodic with a unique stationary distribution $\pi$ and spectral gap $\lambda(\overline{P}^{[i,i+t]}) \ge \lambda > 0$. We can then show that the distribution of a lazy random walk on ${\mathcal{G}}$ converges to $\pi$ at a rate that depends on $t$ and the spectral gap $\lambda$. We refer to \secref{average} for details on the set-up. \begin{maintheorem}[restated, see Corollary~\ref{cor:average} on page~\pageref{cor:average}] \label{main:average} Consider a dynamically evolving sequence $\mathcal{G}=\{G^{(t)}\}_{t=1}^{\infty}$ of graphs with transition matrices $\{P^{(i)}\}_{i=1}^{\infty}$ such that (1) there exists $\pi$ which is a stationary distribution for every $P^{(i)}$; and (2) there exists a time-window $t \ge 1$ such that, for any $i \ge 0$, $\overline{P}^{[i,i+t]} =\frac{1}{t}(P^{(i+1)} + P^{(i+2)} + \cdots + P^{(i+t)})$ is ergodic and has spectral gap $\lambda\bigl(\overline{P}^{[i,i+t]}\bigr) \ge \lambda > 0$. Then, $t_{\mbox{\scriptsize{\textsf{mix}}}}({\mathcal{G}}) = O\left(t^2\log(1/\pi_*)\lambda^{-1}\right)$, where $\pi_* = \min_u \pi(u)$. \end{maintheorem} This result is not only significant in the context of dynamically evolving graphs, but also in settings of static graphs where communication is restricted to a bounded-degree subgraph which potentially changes in each round. One prominent example is matching-based communication, where in each round a random matching is generated and only those edges can be used for averaging or exchanging information, e.g.,~\cite{gossip}. Even when the assumptions of Main Theorem \ref{main:average} are satisfied for a \emph{small} time-window $t$, we cannot always guarantee that hitting and mixing times will be polynomial in the size of the graphs.
Indeed, we exhibit examples of dynamically evolving graphs on $n$ vertices that satisfy such conditions but have mixing and/or hitting times that are exponential in $n$ and $t$. We show in Proposition \ref{pro:nohitting} that, since graphs in the sequences need not be connected, it is possible to construct examples where the stationary distribution $\pi$ has exponentially small probability mass on some vertices. This could result in exponential mixing and hitting times, but somewhat surprisingly also possibly in polynomial mixing time and exponential maximum hitting time. Both of the constructed graph sequences rely on the idea of simulating a directed graph by a sequence of disconnected bipartite graphs. A natural question is whether we can relax the assumptions on the regularity or existence of a time-invariant stationary distribution. Unfortunately, we show that this is not always possible. We exhibit in Proposition \ref{pro:nomixing} a sequence of graphs which are connected, have bounded degree and constant spectral gap, but for which $t$-step probabilities are very far from the uniform distribution even for a time $t$ which is larger than the mixing time of a random walk on any (static) graph in the sequence. Going back to Main Result~\ref{main:twopluseps}, the essence behind the proof is that, to achieve an optimal $O(n)$ hitting time, we do not need large sets to have high expansion. All we need is that small sets have ``sufficiently high'' expansion. We derive another result in the same spirit by upper bounding a variational characterisation of the commute time in terms of some version of the conductance profile~\cite{conductanceprof}. However, since this result relies on a variational characterisation of the commute time, unlike the earlier results it only holds for static graphs. Let $C_{st}$ be the commute time between $s$ and $t$. Then, we have the following.
\begin{maintheorem}[restated, see \lemref{general} on page~\pageref{lem:general}] \label{main:cutsets} For any static graph $G = (V,E)$ and $s,t \in V$, there exists a labelling of the vertices from $1$ to $n$ such that $ C_{st} \le 2m \sum_{j=1}^{n-1}|\partial [j]|^{-1}$, where $\partial [j]$ is the set of edges with one endpoint in $\{1,\dots,j\}$ and one in $\{j+1,\dots,n\}$. \end{maintheorem} Note the relation between Main Theorem \ref{main:cutsets} and the well-known Nash-Williams' inequality \cite[Proposition 9.15]{levinPeres}, which states that, for every set $\{E_1,E_2,\ldots,E_k\}$ of edge-disjoint cut-sets separating $s$ from $t$, $C_{st} \geq 2m \sum_{j=1}^{k} |E_j|^{-1}$. Our upper bound, however, differs from the Nash-Williams' inequality in two ways: (1) the cut-sets $\partial[j]$ are in general not edge-disjoint; (2) we prove the existence of a ``good'' labelling, while Nash-Williams holds for \emph{any} labelling. As an application of this result, we consider the hitting time on $d$-regular graphs in terms of the edge-connectivity, which does not impose any condition on the expansion of large sets. \begin{maintheorem}[restated, see Theorem~\ref{thm:connectivity} on page~\pageref{thm:connectivity}] Let $G=(V,E)$ be any static $d$-regular graph with edge-connectivity $\rho$. Then $t_{\mbox{\scriptsize{\textsf{hit}}}}(G) \leq O(n^2 \cdot ( \frac{\log d}{d} + \frac{1}{\rho} ) )$. In particular, since $\rho \leq d$, we get the simpler (but potentially slightly weaker) upper bound $t_{\mbox{\scriptsize{\textsf{hit}}}}(G) = O(n^2 \log d/\rho)$. \end{maintheorem} We remark that in Aldous and Fill~\cite[Proposition 6.22]{aldousFill}, it was shown that for any $d$-regular graph $G$ which is $\rho$-edge-connected, the maximum hitting time is $O(n^2 d \cdot \rho^{-3/2})$. They also mention that if the graph is $\Omega(d)$-edge-connected, they obtain a bound of $O(n^2 \cdot d^{-1/2})$.
For this case of maximal edge-connectivity, $\rho=\Theta(d)$, our bound is considerably better than the one by Aldous and Fill and, modulo the $\log d$-factor, also gives the right dependence on $d$. In particular, we demonstrate in \secref{cutsets} that the dependence on the edge-connectivity $\rho$ is as good as possible (neglecting logarithmic factors) in the sense that for any pair of $\rho$ and $d$, there exists a $d$-regular graph with edge-connectivity $\rho$ which matches the upper bound in Main Result 5 (\thmref{connectivity}) up to constant factors. \textbf{Further Related Work.} While in this work we focus on standard (lazy) random walks on graphs, we should point out that previous work has established an alternative in the form of the so-called {\em max-degree walk} \cite{avin}. In this random walk variant, a large self-loop probability depending on the degree of the current vertex and (an estimate of) the maximum degree $\Delta$ is added. With this modification, the stationary distribution of each graph is identical (and uniform), which makes the analysis of this walk easier. However, one downside of this approach is that it either requires a good estimate of $\Delta$ (or even $n$), or the random walk may potentially be slowed down significantly. Also, studying standard random walks seems more natural and, as we will see later, it also helps us to uncover some of the subtle boundaries between fast mixing and polynomial hitting, and slow mixing and exponential hitting. One of the earliest appearances of dynamic graphs is in the context of load balancing~\cite{GhoshLMMPRRTZ99}, where the authors assumed a uniform (i.e., time-independent) lower bound on the edge and vertex expansion. A refinement is to instead relate the balancing (mixing) time to the geometric mean of the spectral gaps, which was used in~\cite{ElsasserMS04}.
A result of a similar flavour for both the conductance and the vertex expansion was shown in \cite{GiakkoupisSS14} in the context of randomised rumour spreading, and more recently a similar result was shown for the voter model \cite{icalp2016voter}. In \cite{KLO10}, the authors analyse a sequence of graphs satisfying a $T$-interval connectivity property, which asserts that for every $T$ consecutive rounds there exists a stable connected spanning subgraph. The authors present upper bounds for several distributed computational problems. One specific graph model that has been very popular is the so-called Markovian evolving graph. In this model, every edge is associated with its own independent copy of the same two-state birth-and-death chain, which decides whether the edge is present or not in the next step. Many aspects of this network have been studied, most notably the (dynamic) diameter \cite{ClementiMMPS10} and the time to spread a piece of information \cite{ClementiST15}. Recently, however, Lamprou et al.~\cite{spirakis} also considered the cover time of these graphs. In particular, suppose there exists an underlying graph $G$ with minimum degree $\delta$ such that at each time $t$ the graph $G^{(t)}$ contains each edge in $G$ independently with probability $p$ (i.e., the presence of an edge does not depend on the past). They show that the cover time of such a dynamic graph is at most $t_{\sf cov}(G) / (1-(1-p)^{\delta})$, where $t_{\sf cov}(G)$ is the cover time of $G$. They also study \emph{random walks with a delay}, where at each step a particle chooses a random neighbour of the current vertex according to the topology of the underlying graph $G$, and moves there if the corresponding edge is present; otherwise it waits until the edge becomes available. For this perhaps slightly less natural process, they give bounds on the cover time also for the case where the probability of an edge being available at time $t$ depends on whether that edge was available at time $t-1$.
We also highlight dynamical percolation, a particular type of Markovian evolving graph that has received recent attention (see, e.g., \cite{hermonSousi,pss15,sousiThomas}). Here, a ``closed'' edge becomes ``open'' with probability $p$, while an open edge becomes closed with probability $1-p$. In contrast to the literature above, however, works on random walks on dynamical percolation usually refer to continuous-time random walks. Another class of dynamic graph models involves agents that move in some bounded space and can interact only if they are close enough~\cite{LamLMSW12,PeresSSS11,PettarinPPU11}. In contrast to these works, our bounds are less tight but hold under much weaker assumptions on the graph and therefore capture a more dynamic and less ``regular'' setting. Finally, we mention that Saloff-Coste and Zuniga~\cite{merging1,merging2} have generalised spectral and geometric techniques, such as Nash and log-Sobolev inequalities, to time-inhomogeneous Markov chains (of which random walks on dynamic graphs are a subset). In particular, in contrast to our results, they study chains where the individual transition matrices might not have the same time-independent stationary distribution. For this reason, they focus on \emph{merging} properties of these chains, i.e., the ability of the chain to ``forget'' the initial distribution. They obtain bounds on merging for chains that satisfy the $c$-stability property, which implies (but is not equivalent to) the condition that the stationary distributions of the individual transition matrices do not change too much over time. Unfortunately, proving that a time-inhomogeneous chain is $c$-stable is itself very difficult, and they are able to obtain concrete bounds on merging only for very simple time-inhomogeneous Markov chains.
\section{Notation and preliminaries}\label{sec:notation} Let $\mathcal{G} = \{G^{(t)}\}_{t=1}^{\infty}$ be an infinite sequence of undirected and unweighted graphs defined on the same vertex set $V$, with $|V| = n$. We study (lazy) random walks on $\mathcal{G}$: suppose that at a time $t\ge 0$ a particle occupies a vertex $u \in V$. At step $t+1$ the particle will remain at the same vertex $u$ with probability $1/2$, or will move to a random neighbour of $u$ in $G^{(t+1)}$. In other words, it will perform a single random walk step according to the transition matrix $P^{(t+1)}$ of a lazy random walk on $G^{(t+1)}$. In general, $P^{(t)}(u,u) =1/2$, $P^{(t)}(u,v) = 1/(2d_u)$ if there is an edge between $u$ and $v$ in $G^{(t)}$ (and in this case we write $u \sim_t v$), and $P^{(t)}(u,v) = 0$ otherwise. We denote with $p^{[t_1,t_2]}_{u,v}$ the probability that a random walk that visits vertex $u \in V$ at time $t_1$ will visit $v \in V$ at time $t_2 \ge t_1$. Notice that given an initial probability distribution $p^{(0)} : V \to [0,1]$, $p^{[0,t]} = p^{(0)} P^{[0,t]} = p^{(0)} P^{(1)} P^{(2)} \cdots P^{(t)}$ is the probability distribution after $t$ steps. Unless stated otherwise, we assume that all the graphs in $\mathcal{G}$ are connected and have the same stationary distribution $\pi$, i.e., $\pi P^{(t)} = \pi$ for any $t \ge 1$. We denote the smallest value assumed by $\pi$ as $\pi_* = \min_{x \in V} \pi(x)$. We define the $\ell_2(\pi)$-inner product as $ \langle f,g \rangle_{\pi} = \sum_{u \in V} f(u) g(u) \pi(u)$ for any $f,g: V \to \mathbb{R}$. Analogously, we denote with $\| f \|_{2,\pi} = \sqrt{\langle f,f \rangle_{\pi}}$ the $\ell_2(\pi)$-norm of $f: V \to \mathbb{R}$. Notice that since all the graphs in $\mathcal{G}$ are undirected, for any $t\ge 1$, $P^{(t)}$ is reversible with respect to $\pi$, i.e., $\pi(x) P^{(t)}(x,y) = \pi(y) P^{(t)}(y,x)$ for any $x,y \in V$ (this is also called the detailed balance condition).
Moreover, $P^{(t)}$ is self-adjoint for the $\ell_2(\pi)$-inner product: for any $f,g: V \to \mathbb{R}$, \begin{equation} \label{eq:selfadjoint} \langle P^{(t)} f, g \rangle_{\pi} = \langle f, P^{(t)} g \rangle_{\pi}. \end{equation} We will often work with the likelihood ratio $\rho^{[0,t]}_{u,\cdot} = p^{[0,t]}_{u,\cdot} / \pi(\cdot)$. When it is clear from the context, we will drop the starting point $u$ and use the shorthands $p^{(t)}$ and $\rho^{(t)}$ to indicate (respectively) the probability distribution of the random walk at time $t$ and its likelihood ratio. We define the $\ell_2$ mixing time as \[ t_{\mbox{\scriptsize{\textsf{mix}}}}(\mathcal{G}) = \min \{t \colon \| \rho^{[0,t]}_{u,\cdot} - 1 \|_{2,\pi} \le 1/3 \text{ for any } u \in V\}. \] Observe that, since $\Ev_{\pi} \rho^{(t)} = 1$, we have that $\| \rho^{(t)} - 1 \|_{2,\pi}^2 = \var_{\pi} \rho^{(t)} = \Ev_{\pi} \left(\rho^{(t)}\right)^2 - 1$. Let $p$ be a probability distribution with likelihood ratio $\rho = p/\pi$. For a reversible $P$, \[ P\rho (u) = \sum_{v \in V} P(u,v) \rho(v) = \sum_{v \in V} P(u,v) \frac{p(v)}{\pi(v)} = \frac{1}{\pi(u)} \sum_{v \in V} P(v,u) p(v) = \frac{pP (u)}{\pi(u)}, \] from which it follows that $P^{(t)} \cdots P^{(1)}\rho^{(0)}(u) = \rho^{(t)}(u)$. Given a transition matrix $P$ with stationary distribution $\pi$ and a function $f \colon V \to \mathbb{R}$, we define the Dirichlet form as \[ \mathcal{E}_P(f,f) = \frac{1}{2} \sum_{u,v \in V} \left(f(u) - f(v)\right)^2 \pi(u) P(u,v). \] When $P$ is a transition matrix of a lazy random walk on a graph $G = (V,E)$ with $|E| = m$, $\mathcal{E}_P(f,f) = \frac{1}{4m} \sum_{u \sim v} \left(f(u) - f(v)\right)^2$, where $u \sim v$ stands for $\{u,v\} \in E$. 
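The likelihood-ratio identity above is easy to verify numerically. The following Python sketch (an illustration we add here; the $4$-vertex path example is our own hypothetical choice) checks detailed balance and the identity $P\rho = (pP)/\pi$ exactly, using rational arithmetic.

```python
from fractions import Fraction

# Lazy walk on the path 0-1-2-3; degrees (1,2,2,1), so pi(u) = d_u/(2m) with m = 3.
n = 4
deg = [1, 2, 2, 1]
pi = [Fraction(d, 6) for d in deg]
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
P = [[Fraction(0)] * n for _ in range(n)]
for u in range(n):
    P[u][u] = Fraction(1, 2)
    for v in adj[u]:
        P[u][v] = Fraction(1, 2 * deg[u])

# Detailed balance: pi(x) P(x,y) = pi(y) P(y,x) for all x, y.
assert all(pi[x] * P[x][y] == pi[y] * P[y][x] for x in range(n) for y in range(n))

# Reversibility gives (P rho)(u) = (p P)(u) / pi(u): likelihood ratios evolve
# by applying P on the left, while distributions are multiplied on the right.
p = [Fraction(1), Fraction(0), Fraction(0), Fraction(0)]  # walk starts at vertex 0
rho = [p[u] / pi[u] for u in range(n)]
P_rho = [sum(P[u][v] * rho[v] for v in range(n)) for u in range(n)]
pP = [sum(p[u] * P[u][v] for u in range(n)) for v in range(n)]
assert P_rho == [pP[u] / pi[u] for u in range(n)]
print(P_rho)
```

Because every quantity is an exact `Fraction`, the two sides of the identity agree exactly, not merely up to floating-point error.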
As long as $P$ is lazy (i.e., $P(u,u) \ge 1/2$ for any $u \in V$), we can relate the $\ell_2^2$ distance of a distribution from stationarity to its Dirichlet form \cite[Proposition 2.5]{nonrevFill}: \begin{equation} \label{eq:mihai} \var_{\pi} \rho^{(t)} \ge \var_{\pi} \rho^{(t+1)} + \mathcal{E}_{P^{(t+1)}}(\rho^{(t)} ,\rho^{(t)}). \end{equation} The \emph{spectral gap} of $P$ is defined as \[ \lambda(P) = \inf _{\substack{f: V \to \mathbb{R} \\ \var_{\pi}{f} \neq 0}} \frac{\mathcal{E}_P(f,f)}{\var_{\pi}{f}}. \] We denote with $\Phi_P(A)$ the \emph{conductance} of a subset of vertices $A \subset V$: \[ \Phi_P(A) = \frac{\sum\limits_{{u \in A , v \not\in A}} \pi(u) P(u,v)}{\min\left\{\pi(A), \pi(V \setminus A)\right\}}, \] where $\pi(A) = \sum_{u \in A} \pi(u)$. The conductance of $P$ is then defined as $\Phi(P) = \min_{\emptyset \neq A \subsetneq V} \Phi_P(A)$. Cheeger's inequality \cite{SJ89} relates $\lambda(P)$ to the conductance $\Phi(P)$ of a reversible $P$: $2\Phi(P) \ge \lambda(P) \ge \Phi(P)^2/2$. Given two vertices $u$ and $v$, we denote with $\tau_{u,v}$ the \emph{hitting time} of $v$ from $u$, i.e., the expected time to reach $v$ starting from $u$. The maximum hitting time is defined as $t_{\mbox{\scriptsize{\textsf{hit}}}}(\mathcal{G}) = \max_{u,v} \tau_{u,v}$. The \emph{commute time} between $u$ and $v$ is instead defined as $C_{u,v} = \tau_{u,v} + \tau_{v,u}$. Clearly, $t_{\mbox{\scriptsize{\textsf{hit}}}}(\mathcal{G}) \le \max_{u,v} C_{u,v} \le 2t_{\mbox{\scriptsize{\textsf{hit}}}}(\mathcal{G})$. Finally, we write $A \lesssim B$, respectively $A \gtrsim B$, to mean that there exists some absolute constant $C > 0$, independent of the parameters of the sequence of graphs $\mathcal{G}$, such that $A \le C \cdot B$, respectively $A \ge C \cdot B$.
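These quantities can be computed explicitly on small examples. The Python sketch below (an illustration we add; the $8$-cycle is a hypothetical choice) brute-forces the conductance $\Phi(P)$ of the lazy walk on a cycle over all cuts and compares it with the walk's known spectral gap $(1-\cos(2\pi/n))/2$, confirming Cheeger's inequality.

```python
from itertools import combinations
from math import cos, pi

n = 8                      # lazy random walk on the n-cycle
stat = [1.0 / n] * n       # stationary distribution (uniform, since regular)

def P(u, v):
    # Lazy cycle walk: stay with probability 1/2, else move to one of 2 neighbours.
    if u == v:
        return 0.5
    if (u - v) % n in (1, n - 1):
        return 0.25
    return 0.0

def conductance(A):
    Ac = [v for v in range(n) if v not in A]
    flow = sum(stat[u] * P(u, v) for u in A for v in Ac)
    return flow / min(sum(stat[u] for u in A), sum(stat[v] for v in Ac))

# Brute-force Phi(P) over all nonempty proper subsets of V.
phi = min(conductance(set(A))
          for k in range(1, n) for A in combinations(range(n), k))

# Known spectral gap of the lazy walk on the n-cycle: (1 - cos(2*pi/n)) / 2.
lam = (1 - cos(2 * pi / n)) / 2
print(phi, lam)
assert 2 * phi >= lam >= phi ** 2 / 2      # Cheeger's inequality holds
```

The minimising cut is a half-arc of the cycle, giving $\Phi(P) = 1/n$, and both sides of Cheeger's inequality are indeed satisfied.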
\section{Bounds on hitting times based on the isoperimetric dimension} \label{sec:twopluseps} Aldous and Fill conjectured in their book \cite[Open Problem 6.20]{aldousFill} that whenever a regular bounded-degree graph satisfies $|E(A,A^c)| = \Omega(|A|^{\frac{1}{2}+ \epsilon})$ for some positive $\epsilon$, the maximum hitting time should be $O(n)$. Observe that this isoperimetric condition is satisfied by the torus in $3$ or higher dimensions, which indeed has $O(n)$ maximum hitting time. Furthermore, to have $O(n)$ maximum hitting time, $\epsilon$ needs to be strictly greater than zero: take, for example, the $2$-dimensional torus: there is a set $A$ for which $|E(A,A^c)| = \Theta(|A|^{1/2})$ and, indeed, the maximum hitting time is $\Theta(n \log{n})$. The conjecture was first proved in \cite{benjamini}, with a proof based on the relation between commute times and effective resistances in a graph. Since a similar relation is not known for time-inhomogeneous Markov chains, such a proof cannot be generalised to random walks on dynamic graphs. In this section we present a new proof of this result based on the ``conditional expectation trick'' already used in the proof of \thmref{worsthit}. We start by obtaining a bound on the Dirichlet form of a graph satisfying the aforementioned isoperimetric condition. \begin{lemma} \label{lem:decrease_eps} Let $G=(V,E)$ be a $d$-regular undirected graph with $|V| = n$ and $d=O(1)$ such that, for any $A \subset V$ with $1 \le |A| \le n/2$, $ |E(A,V \setminus A)| = \Omega(|A|^{\frac{1}{2} + \epsilon}) $ for $1/4 \ge \epsilon \ge 0$. Consider the transition matrix $P$ of a lazy random walk on $G$. Let $\sigma$ be any probability distribution and $f = \sigma/\pi$, where $\pi$ is the uniform distribution. If $\Ev_{\pi} f^2 = \beta > C$ for a large enough constant $C$, then \[ \mathcal{E}_P(f,f) \gtrsim \frac{\beta^{2-2\epsilon}}{n^{1-2\epsilon}}.
\] \end{lemma} We now apply the previous lemma to prove the main result of this section, in an analogous way to the proof of \thmref{worsthit}. \begin{theorem} \label{thm:twopluseps} Let $\mathcal{G}=\{G^{(t)}\}_{t=1}^{\infty}$ be a sequence of $n$-vertex graphs such that each $G^{(t)}$ is regular, has bounded degree, and satisfies the following isoperimetric condition: there exists $\epsilon \in [0,1/4]$ such that, for any subset of vertices $A$ with $1 \le |A| \le n/2$, $|E(A,V \setminus A)| = \Omega(|A|^{\frac{1}{2} + \epsilon})$. Then, \begin{enumerate}\itemsep0pt \item $t_{\mbox{\scriptsize{\textsf{mix}}}}(\mathcal{G}) = O(n^{1-2\epsilon})$, \item $\bigl| \frac{p^{[0,t]}_{u,v}}{1/n} - 1 \bigr| = O\left(\frac{1}{t^{1+2\epsilon}}\right)$, \item $t_{\mbox{\scriptsize{\textsf{hit}}}}(\mathcal{G}) = O(n)$ if $\epsilon>0$, $t_{\mbox{\scriptsize{\textsf{hit}}}}(\mathcal{G}) = O(n\log{n})$ if $\epsilon=0$. \end{enumerate} \end{theorem} \section{Worst-case bounds for mixing and hitting times} \label{sec:worsthit} In this section we assume that a particle performs a random walk on a sequence of graphs $\mathcal{G} = \{G^{(t)}\}_{t=1}^{\infty}$ where all the $G^{(t)}$ share the same set of $n$ vertices $V$, are connected, and have a time-independent stationary distribution $\pi$ with $\pi_* = \min_u \pi(u)$. In general, graphs in the sequence might have a different number of edges. We denote with $m^* \le n^2$ the maximum number of edges a graph in the sequence can have. Our goal is to bound the mixing and maximum hitting times of a random walk on $\mathcal{G}$. We start by studying the rate of convergence to stationarity. By \eq{mihai}, our goal then becomes to study $\var_{\pi} \rho^{(t)} \le \var_{\pi}{\rho^{(0)}} - \sum_{i=1}^{t} \mathcal{E}_{P^{(i)}}(\rho^{(i-1)} ,\rho^{(i-1)} )$. The next lemma provides a lower bound on the Dirichlet form of graphs in $\mathcal{G}$.
The main insight of this lemma is that it shows a faster decrease of the $\ell_2$-distance to stationarity when this distance is large, i.e., at the beginning of the walk. This is in the same vein as, for example, bounds on mixing based on the spectral profile~\cite{spectralprof}. \begin{lemma} \label{lem:dirichlet} Let $P$ be the transition matrix of a lazy random walk on a graph $G \in \mathcal{G}$. Given a probability distribution $\sigma: V \to [0,1]$ with likelihood ratio $f = \sigma/\pi$ such that $\var_{\pi} f = \epsilon > 0$, \[ \mathcal{E}_{P}(f,f) \gtrsim \max\left\{ \frac{\epsilon^2}{m^* + 1/(\pi_*^2(1+\epsilon))}, \frac{\pi_* \epsilon^2}{ n} \right\}. \] \end{lemma} While the previous lemma will be directly used to derive bounds on mixing, to obtain a bound on the hitting time we will need to study $t$-step probabilities. For this reason, we prove a technical lemma that relates $2t$-step probabilities to the variance of the likelihood ratio of a $t$-step probability distribution, generalising a well-known result for time-homogeneous reversible Markov chains (see, e.g., \cite[Lemma 3.20]{aldousFill}). We remark, however, that while in time-homogeneous Markov chains $2t$-step transition probabilities will be as small as the variance of their $t$-step likelihood ratio, in our case, since the order in which transition matrices are applied can matter significantly, this might not necessarily be true: we can only relate these probabilities to the variance of the $t$-step likelihood ratio of a related but slightly different Markov chain. \begin{lemma} \label{lem:inftoell2} Let $t_1 < t_2$. Then, for any $u,v \in V$, it holds that \[ \left|\rho^{[t_1,t_2]}_{v,u} - 1 \right| \le \max\left\{\var_{\pi} \left( P^{(\floor{\frac{t_1+t_2}{2}})} \cdots P^{(t_2)} \rho^{[t_1,t_1]}_{u,\cdot}\right) , \var_{\pi} \left(P^{(\floor{\frac{t_1+t_2}{2}-1})} \cdots P^{(1)}\rho^{[t_1,t_1]} _{v,\cdot} \right) \right\}.
\] \end{lemma} Using \lemref{dirichlet} and \lemref{inftoell2} we can obtain almost optimal worst-case bounds on mixing, hitting, and $t$-step probabilities of a random walk on $\mathcal{G}$. In particular, when $\mathcal{G}$ comprises only regular graphs, the next theorem implies an $O(n^2)$ bound on mixing and hitting times, which matches the well-known results for a random walk on a static undirected graph. In the general non-regular case, we prove an $O(n^3)$ bound on mixing and an $O(n^3 \log{n})$ bound on hitting, which almost matches the $O(n^3)$ bound for mixing and hitting in static graphs. This improves upon \cite{avin}, which presents a bound of $O(n^3 \log{n})$ for hitting on regular graphs and a bound of $O(n^5 \log{n})$ for hitting in the general case. \begin{theorem} \label{thm:worsthit} Let $\mathcal{G}$ be a sequence of connected graphs with $n$ vertices, the same stationary distribution $\pi$, and at most $m^*$ edges in each graph. Then, for a lazy random walk on $\mathcal{G}$: \begin{enumerate}\itemsep0pt \item $t_{\mbox{\scriptsize{\textsf{mix}}}}(\mathcal{G}) = O(n/\pi_*)$, \item $\bigl| \frac{p^{[0,t]}_{u,v}}{\pi_v} - 1 \bigr| \lesssim \frac{m^*}{t} + \frac{1}{\pi_* \sqrt{t}}$, simplifying to $\bigl| \frac{p^{[0,t]}_{u,v}}{\pi_v} - 1 \bigr| \lesssim \frac{n}{\sqrt{t}}$ if all the graphs in $\mathcal{G}$ are $d$-regular, \item $t_{\mbox{\scriptsize{\textsf{hit}}}}(\mathcal{G}) = O(n \log{n}/\pi_*)$. Furthermore, if the graphs in $\mathcal{G}$ are $d$-regular, $t_{\mbox{\scriptsize{\textsf{hit}}}}(\mathcal{G}) = O(n^2)$. \end{enumerate} \end{theorem} The proof, which is deferred to the appendix, proceeds as follows. First we establish the bound on the mixing time based on \lemref{dirichlet}, which readily implies that, starting from a distance to stationarity equal to $\epsilon$, this distance is halved in $O(n/(\epsilon \pi_*))$ steps.
We then connect the distance to stationarity to $t$-step probabilities with \lemref{inftoell2}, obtaining the second result of \thmref{worsthit}. Finally, to bound the hitting time, we employ a probabilistic argument already exploited in, e.g., \cite{KMS16}, which makes use of both our bound on the mixing time and our bound on $t$-step probabilities.
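The $O(n^2)$ hitting-time bound for regular evolving graphs can be illustrated numerically. The Python sketch below (our own illustration on a hypothetical 12-vertex example; it plays no role in the proof) computes expected hitting times exactly as $\sum_{t \ge 0} \Pr[\tau > t]$ by making the target vertex absorbing, both for a static lazy cycle, where the answer for antipodal vertices is $2 \cdot k(n-k) = 72$, and for an alternating sequence of two relabelled cycles.

```python
def lazy_matrix(adj, n):
    # Lazy transition matrix: P(u,u) = 1/2 and P(u,v) = 1/(2 deg(u)) for u ~ v.
    P = [[0.0] * n for _ in range(n)]
    for u in range(n):
        P[u][u] = 0.5
        for v in adj[u]:
            P[u][v] += 0.5 / len(adj[u])
    return P

def hitting_time(Ps, start, target, n, tol=1e-12):
    # E[tau] = sum_{t >= 0} Pr[tau > t], with `target` made absorbing:
    # evolve the surviving probability mass and accumulate it.
    p = [0.0] * n
    p[start] = 1.0
    total, t = 0.0, 0
    while sum(p) > tol:
        total += sum(p)                   # adds Pr[tau > t]
        P, q = Ps[t % len(Ps)], [0.0] * n
        for u in range(n):
            if u == target or p[u] == 0.0:
                continue
            for v in range(n):
                if v != target:           # mass entering the target is killed
                    q[v] += p[u] * P[u][v]
        p, t = q, t + 1
    return total

n = 12
cycle = {v: [(v - 1) % n, (v + 1) % n] for v in range(n)}
perm = [0, 2, 4, 6, 8, 10, 1, 3, 5, 7, 9, 11]     # a relabelled 12-cycle
cycle2 = {perm[i]: [perm[(i - 1) % n], perm[(i + 1) % n]] for i in range(n)}
P1, P2 = lazy_matrix(cycle, n), lazy_matrix(cycle2, n)

h_static = hitting_time([P1], 0, 6, n)       # lazy cycle: 2 * 6 * (12 - 6) = 72
h_dynamic = hitting_time([P1, P2], 0, 6, n)  # alternating sequence, still O(n^2)
print(round(h_static, 6), round(h_dynamic, 6))
```

The static value matches the classical gambler's-ruin formula (doubled by laziness), while the dynamic value stays of the same order, in line with the $O(n^2)$ bound for regular evolving sequences.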
\section{Introduction} \label{sec:intro} Protoplanetary disks surrounding young stars are the birthplace of planets. Forming planets are thought to interact with the parent protoplanetary disk and cause various substructures, such as an inner hole, gaps and rings, and large-scale asymmetries. High-resolution observations with radio interferometers, such as the Atacama Large Millimeter/submillimeter Array (ALMA), have revealed that dust substructures within protoplanetary disks are common and are rich in variety \citep[e.g.,][]{bib:andrews2018}. Recent high-resolution ALMA observations with deep integrations have revealed au-scale dust substructures that may be caused by a forming planet and a surrounding circumplanetary disk \citep{bib:tsukagoshi2019b,bib:isella2019}. Further observational constraints are essential for confirming the physical origins of these substructures. The first step towards forming a planet involves the coagulation and growth of dust grains. Hence, revealing the evolution of dust grains in protoplanetary disks is key to understanding the origin and diversity of planetary systems, and observational constraints on the dust size distribution are crucial. Theoretical models of dust transport, fragmentation, and size evolution predict that the average size of grains varies with the disk radius \citep{bib:dullemond2005}. The picture of dust filtration by a forming planet in a protoplanetary disk assumes that a planet-induced gap filters large dust grains at the outer edge of the gap, while the remaining small grains pass across the gap \citep{bib:zhu2012}. It is also suggested that the maximum grain size should be smaller by a factor of 100 inside the condensation front of water ice, i.e., the H$_2$O snow line \citep{bib:banzatti2015}.
Because the H$_2$O snow line may be a boundary that determines the type of planet formed \citep[e.g., terrestrial planets, gas giants, or icy giants;][]{bib:hayashi1981}, it is important to reveal the position of the snow line in the protoplanetary disk to understand the planetary formation process. Multi-frequency observations at (sub)millimeter wavelengths are an effective way to measure the dust size distribution and obtain a high-sensitivity intensity image by increasing the total bandwidth. The dust size distribution can be inferred by measuring the spectral index $\alpha$ at (sub)millimeter frequencies. When the dust continuum emission at (sub)millimeter frequencies is optically thin, the frequency dependence of the dust mass opacity coefficient $\kappa_\nu$ is evident in the $\alpha$ profile. The dust mass opacity coefficient is often described as having a power-law form ($\kappa_\nu \propto \nu^\beta$), and in the Rayleigh-Jeans limit, $\beta$ is related to $\alpha$ as $\beta=\alpha-2$. It is known that $\beta$ is affected by the dust size; $\beta\sim1.7$ for sub-micron-sized interstellar grains, while it changes to $\beta\sim1$ or less owing to grain growth in protoplanetary disks \citep[e.g.,][]{bib:miyake1993}. Therefore, multi-frequency observations at optically thin (sub)millimeter wavelengths are essential to reveal the dust size distribution of the disk. High-resolution multi-frequency observations with ALMA have been conducted on protoplanetary disks to resolve the radial dependence of the dust size distribution \citep[e.g.,][]{bib:alma2015,bib:dent2019,bib:huang2020,bib:long2020}. There are several ways to concatenate multi-frequency data for imaging the combined intensity and producing the spectral index maps. The first is an image-oriented method.
This is the traditional method, in which the intensity map at each band is created separately and the frequency dependence is fitted pixel-by-pixel using a power-law function. This method requires matching the beam sizes across all images before the fitting. Another method to concatenate multi-frequency data is multi-scale multi-frequency synthesis (multi-scale MFS) introduced by \citet{bib:rau2011}, in which all observed visibilities are concatenated to simultaneously create combined intensity and spectral index maps. The visibility-domain operation can provide higher-resolution maps than those of the image-oriented method. \citet{bib:rau2011} demonstrated that multi-scale MFS works well for the imaging of a compact, flat-spectrum source at lower frequencies ($\sim$1~GHz), motivated by the application to synchrotron emission. The authors pointed out that the UV coverage has a significant impact on reconstructing the spectral index map of spatially extended emission. On the other hand, the thermal continuum emission within the dust of a protoplanetary disk has a spectral slope $\alpha$ of 2--4 depending on the optical depth and dust mass opacity coefficient. In addition, recent high-resolution observations have revealed that the disks are often spatially extended. Therefore, it is worth validating whether MFS works well for reconstructing the (sub)millimeter continuum emission of a protoplanetary disk and its frequency dependence. TW~Hya is a 0.8 $M_\sun$ T Tauri star surrounded by a gas-rich protoplanetary disk at a distance of 59.5 pc \citep[e.g.,][]{bib:gaia2016}. The disk is almost face-on with an inclination angle of 5--6$\degr$ \citep{bib:huang2018,bib:teague2019}; thus, it is one of the best targets for investigating the radial structure of a protoplanetary disk. The disk has been well resolved at (sub)millimeter, near-infrared, and optical wavelengths. 
Multiple gap structures in the near-infrared scattered light have been reported \citep{bib:akiyama2015,bib:boekel2017}. ALMA has also resolved gaps at (sub)millimeter wavelengths, and an inner disk with a size of $\sim$1~au has also been identified \citep{bib:andrews2016,bib:tsukagoshi2016,bib:huang2018}. The features detected thus far within the protoplanetary disk are almost axisymmetric except for a moving surface brightness asymmetry, probably due to a disk shadow \citep{bib:debes2017} and a spiral pattern found in the CO gas \citep{bib:teague2019}. Another asymmetric structure of the disk is a localized compact ($\sim$1~au) excess emission at millimeter wavelengths near the edge of the dust disk identified in high-sensitivity ALMA observations \citep{bib:tsukagoshi2019b}. The origin of the emission feature remains unclear, but it may be caused by a circumplanetary disk surrounding a Neptune-mass planet or dust grains accumulated within a small-scale gas vortex. According to a recent theoretical study, the emission feature could also be a dust-losing young planet that has already been formed \citep{bib:nayakshin2020}. The dust size distribution of the TW~Hya disk has been inferred using high-resolution multi-frequency observations with ALMA \citep{bib:tsukagoshi2016,bib:huang2018}. The observations have revealed that the spectral index $\alpha$ decreases toward the disk center, and there is an enhancement near the gap at 25~au. This enhancement may be attributed to the dust filtration effect, in which the gap is deficient in large grains \citep{bib:zhu2012}. However, there is still uncertainty on the radial variation of the $\alpha$ profile. The UV sampling of our previous observations in 2015 was particularly sparse at $<200$~k$\lambda$, and the integration time was as short as $\lesssim40$~min \citep{bib:tsukagoshi2016}. 
Hence, the poorly sampled UV coverage made the image reconstruction difficult: the synthesized beam showed a complicated sidelobe pattern, and the result of the CLEAN reconstruction depended strongly on the imaging parameters, such as the weighting and the scale parameters of multiscale cleaning. Additional uncertainty arises from adopting only two ALMA bands to derive the spectral index. A combination of more than two bands can better constrain the spectral index by improving the frequency leverage. Most recently, \citet{bib:macias2021} presented an analysis of the spectral index distribution of TW~Hya's disk using sets of high-resolution ALMA data from Bands~3 to 7, and the variation of the spectral index within the analyzed frequency range was reported. As they focused on the spectral indices between two adjacent bands, a high-sensitivity continuum image integrated over all the bands was not presented. In this study, we attempt to reconstruct a higher-sensitivity millimeter continuum image and revisit the spectral index distribution of the TW~Hya disk using multiple sets of ALMA data at Bands~3, 4, 6, and 7. Two imaging methods, MFS and the image-oriented method, are adopted to combine all the data and to derive the spectral index map. The details of the observations and data reduction are presented in \S~\ref{sec:obs}. In \S~\ref{sec:results}, the images of the combined intensity and the spectral index are shown, and we compare them from the viewpoint of the different imaging methods. To validate our reconstructed images, we test the imaging methods using simulations with a disk model in \S~\ref{sec:model}. In \S~\ref{sec:discussion}, we compare our results with recent high-resolution spectral index profiles presented by \citet{bib:macias2021}.
We also discuss the dust size distribution in the disk by deriving the distribution of the optical depth $\tau$, power-law index of the dust mass opacity coefficient $\beta$, and dust temperature $T_\mathrm{d}$. Lastly, we present a summary of this paper in \S\ref{sec:summary}. \section{Observations and Data Reduction} \label{sec:obs} In this study, we used sets of ALMA archive data at Bands~3, 4, 6, and 7 to reconstruct a high-sensitivity combined intensity map and a spectral index map covering these frequencies. Here, we describe the details of our observations and data reduction. The Band 4 and 6 data include our new observations, and the details of the observations are described in the following subsections. All the ALMA measurement sets were reduced and calibrated using the Common Astronomical Software Application (CASA) package \citep{bib:mcmullin2007}. The data IDs used in this study and their detailed information are listed in Table \ref{tab:ALMAdata}. Figure\ \ref{fig:uv} shows the achieved UV coverage of each band's combined data. The method for obtaining the combined intensity and spectral index maps is also described in the following subsection. \begin{figure*}[htb] \begin{center} \plotone{plotuv_4band.pdf} \caption{Whole view of the UV coverage of the combined data of Bands 3, 4, 6 and 7 from left to right (top). 
Close-up view of the UV coverage inside $\pm$1000 k$\lambda$ (bottom).}\label{fig:uv} \end{center} \end{figure*} \begin{deluxetable*}{lcccccccc}\label{tab:ALMAdata} \tabletypesize{\footnotesize} \tablecaption{ALMA data employed in this study} \tablehead{ \colhead{ID} & \colhead{PI} & \colhead{Date} & \colhead{Configuration} & \colhead{$L_\mathrm{min}$} & \colhead{$L_\mathrm{max}$} & \colhead{$t_\mathrm{integ}$} & \colhead{$B_\mathrm{total}$} & \colhead{CASA ver.} \\ \colhead{} & \colhead{} & \colhead{} & \colhead{} & \colhead{[m]} & \colhead{[m]} & \colhead{[min.]} & \colhead{[MHz]} & \colhead{} } \startdata \multicolumn{9}{c}{Band 3} \\ \hline 2016.1.00229.S & Bergin,~E. & Aug 1, 2017 & C40-7 & 17 & 149 & 41 & 2293 & 4.7.2 \\ 2018.1.01218.S & Macias,~E. & Jun 24--Jul 8, 2019 & C43-9/10 & 149 & 16196 & 209 & 7500 & 5.4.0 \\ \hline \multicolumn{9}{c}{Band 4} \\ \hline 2015.A.00005.S & Tsukagoshi,~T. & Dec 2, 2015 & C36-8/7 & 17 & 10803 & 43 & 7500 & 4.5.0 \\ 2015.1.00845.S & Favre,~C. & Apr 29, 2016 & C36-2/3 & 15 & 640 & 80 & 3750 & 4.5.3 \\ 2015.1.00845.S & Favre,~C. & Jun 1, 2016 & C40-4 & 15 & 713 & 76 & 1875 & 4.5.3 \\ 2016.1.00842.S & Tsukagoshi,~T. & Sep 28, 2017 & C40-6 & 19 & 1808 & 37 & 7500 & 4.7.0 \\ 2016.1.00842.S & Tsukagoshi,~T. & Oct 21, 2016 & C40-8/9 & 41 & 14851 & 23 & 7500 & 4.7.2 \\ 2016.1.00440.S & Teague,~R. & Oct 22, 2016 & C40-6 & 19 & 1400 & 141 & 1875 & 4.7.0 \\ \hline \multicolumn{9}{c}{Band 6} \\ \hline 2013.1.00387.S & Guilloteau,~S. & May 13, 2015 & C34-3 & 21 & 558 & 47 & 1875 & 4.2.2 \\ 2013.1.00114.S & O\"berg,~K. & Jul 19, 2014 & C34-4/5 & 34 & 650 & 43 & 938 & 4.2.2 \\ 2015.A.00005.S & Tsukagoshi,~T. & Dec 2, 2015 & C36-8/7 & 17 & 10803 & 40 & 7500 & 4.5.0 \\ 2016.1.00842.S & Tsukagoshi,~T. & May 15, 2017 & C40-5 & 15 & 1121 & 11 & 7500 & 4.7.2 \\ 2017.1.00520.S & Tsukagoshi,~T. & Nov 20, 2017 & C43-8 & 92 & 8548 & 118 & 7500 & 5.1.1 \\ \hline \multicolumn{9}{c}{Band 7} \\ \hline 2015.1.00686.S & Andrews,~S. 
& Nov 23, 2015 & C36-8/7 & 17 & 14238 & 132 & 6094 & 4.5.0\\ 2015.1.00308.S & Bergin,~E. & Mar 8, 2016 & C36-3 & 15 & 460 & 69 & 3750 & 4.5.2 \\ 2016.1.00229.S & Bergin,~E. & Nov 23, 2016 & C40-4 & 15 & 704 & 49 & 3281 & 4.7.0 \\ 2016.1.00440.S & Teague,~R. & Nov 27, 2016 & C40-3 & 15 & 704 & 48 & 1172 & 4.7.2 \\ 2016.1.00464.S & Walsh,~C. & Dec 3, 2016 & C40-4 & 15 & 704 & 342 & 1875 & 4.7.2 \\ 2016.1.01495.S & Nomura,~H. & Dec 6, 2016 & C40-3 & 15 & 704 & 43 & 1172 & 4.7.0 \\ 2016.1.00629.S & Cleeves,~I. & Dec 30, 2016 & C40-3 & 15 & 460 & 84 & 2578 & 4.7.0 \\ 2016.1.00311.S & Cleeves,~I. & May 21, 2017 & C40-5 & 15 & 390 & 48 & 1875 & 4.7.2 \\ \enddata \end{deluxetable*} \subsection{Band~3 data} We have used archival data from two ALMA projects which were recently published by \citet{bib:macias2021}. The details of the archive data and the CASA version used for the pipeline analysis are listed in Table~\ref{tab:ALMAdata}. The pipeline script provided by ALMA was used for the initial data flagging and the calibration of the bandpass characteristics and the complex gain. To concatenate the data obtained at different epochs, we first created a dirty map of each data set to determine the representative position of the emission, i.e., the center of the disk emission. The dirty map was reconstructed with Briggs weighting with a robust parameter of 0.5, and the position of the emission peak was measured by a 2D Gaussian fitting to the emission using CASA {\it imfit}. Subsequently, the field center of each measurement set was corrected to be the disk center by CASA {\it fixvis}. Then, all the measurement sets were concatenated by CASA {\it concat} with a direction shift tolerance to be a single field center for correcting the proper motion of the source. The concatenated visibilities were imaged using the CASA {\it tclean} task. The CLEAN map was reconstructed by adopting the Briggs weighting with a robust parameter of 0.5. 
We also employed the multiscale CLEAN algorithm with scale parameters of [0, 50, 150]~mas. After the initial CLEAN map was reconstructed, we applied phase self-calibration to the concatenated data. We adopted solution intervals varying from 1200 to 60~sec for the shorter baseline data and from 6000 to 900~sec for the longer baseline data. The self-calibration started from solution intervals longer than the target scan to remove the systematic phase offsets between the concatenated measurement sets. The self-calibration was then stopped at the shortest solution interval for which the signal-to-noise ratio was sufficient to obtain solutions above 2$\sigma$, i.e., for which only a small fraction of the visibilities was flagged. After the phase self-calibration was done, one round of amplitude self-calibration was applied with a solution interval spanning each observation period. The amplitude self-calibration, however, had little effect on the image sensitivity; the noise level of the final CLEAN map was 4.3~$\mu$Jy~beam$^{-1}$. The beam size was $53.0\times50.7$~mas, with a position angle of $-7.7\degr$. \subsection{Band~4 data} Our ALMA Band 4 observations (2016.1.00842.S) were conducted on September 28, 2016, with array configuration C40-9 and on October 21, 2016, with C40-6. The total integration times were 12~min and 38~min, respectively. In addition to the observed data set, we employed ALMA archive data (2015.A.00005.S, 2015.1.00845.S, and 2016.1.00440.S) and concatenated them to obtain better sensitivity and UV coverage. The initial flagging and calibrations were performed using the pipeline scripts, and the calibrated visibilities were concatenated using the same procedure as for the Band 3 data. The CLEAN map of the combined measurement set was reconstructed by adopting Briggs weighting with a robust parameter of 0. We employed a multiscale option with scale parameters of [0, 50, 150] mas.
The phase self-calibration was applied with solution intervals of 3600, 900, 300, and 150~sec, followed by amplitude self-calibration with a solution interval spanning each observation period. The spatial resolution of the final CLEAN map at Band~4 was 85.1$\times$50.4~mas with a position angle of 45.4$\degr$, and the noise level of the self-calibrated CLEAN map was 7.8~$\mu$Jy~beam$^{-1}$. \subsection{Band~6 data} Our Band~6 observations were carried out on May 15, 2017, with array configuration C40-5 (2016.1.00842.S) and in the period from November 20 to 25, 2017, with C43-8 (2017.1.00520.S) during ALMA cycles 3 and 4. A description of the observations and the obtained image has already been published \citep{bib:tsukagoshi2019b}. To improve the sensitivity, we obtained archive data 2013.1.00114.S, 2013.1.00387.S, and 2015.A.00005.S and concatenated them with the observed data to create the final Band~6 image. After the initial data flagging and calibrations using the pipeline script, the same procedure as for the Band 3 data was applied to concatenate the calibrated measurement sets. The imaging procedure was the same as that for the Band~3 data, except for some imaging parameters. The CLEAN map was reconstructed using Briggs weighting with a robust parameter of 0.5. The scale parameters for the multiscale CLEAN were set to [0, 42, 126] mas. The phase-only self-calibration was applied varying the solution interval from 3600 to 120~sec, and was followed by the amplitude self-calibration. The noise level of the final CLEAN image was 8.1~$\mu$Jy~beam$^{-1}$. The beam size of the final image was 46.9$\times$40.6 mas with a position angle of 86.4$\degr$. \subsection{Band~7 data} To create a high-resolution Band 7 image, we have used eight sets of ALMA archive data presented in \citet{bib:tsukagoshi2019b}, including the highest resolution data obtained by \citet{bib:andrews2016}.
The data reduction and imaging were performed with the same procedure as for the other bands, except for some imaging parameters. We employed Briggs weighting with a robust parameter of $-1.0$ for reconstructing the CLEAN map. The phase-only self-calibration was applied varying the solution interval from 7200 to 1200~sec. With the phase-only self-calibration, the image noise level was improved from 124 to 32.8~$\mu$Jy~beam$^{-1}$, corresponding to an improvement in the SNR from $\sim$18 to $\sim$62. Then, the amplitude self-calibration was performed with a solution interval spanning each observation period. The noise level and the beam size of the final CLEAN image are 21.8~$\mu$Jy~beam$^{-1}$ and 36.4$\times$28.9~mas with a position angle of 69.9$\degr$, respectively. The details of the data reduction are also described in \citet{bib:tsukagoshi2019b}. \subsection{Reconstruction of the intensity and spectral index maps from all band data} To combine the entire data set from Bands~3 to 7, we first corrected the proper motion by aligning the field center of each band's data in the same manner. The disk center was derived by 2D Gaussian fitting to the bright part of the emission in the CLEAN map of each band. The field center of the measurement set at each band was updated to be the disk center. Then, all the measurement sets were concatenated using {\it concat} with a direction-shift tolerance large enough to merge them into a single field. With this concatenated measurement set, we reconstructed the intensity and spectral index maps at the central frequency using the two methods described in the following subsections. To match the minimum and maximum UV length between all band data, we employed the data in the baseline range of 14--5100 k$\lambda$. We used CASA version 6.2 for reconstructing the images. \subsubsection{Image-oriented method} Before making the maps, the concatenated measurement set was first divided by band, and CLEAN images were made with Briggs weighting with a robust parameter of 0.
We employed the same image size and cell size so that the mathematical operations could be applied directly, pixel by pixel, to the images. The multiscale CLEAN algorithm was also employed with scales of [0, 54, 162]~mas. All the reconstructed CLEAN images were convolved to have a circular beam with a full width at half maximum (FWHM) of 108~mas ($\sim6.4$~au), which is the largest beam major axis among the CLEAN images. For each pixel in the convolved images, we fit a power-law function $I=I_0 (\nu/\nu_0)^{I_\alpha}$ along the frequency axis. Here, $I_0$ is the intensity at the central frequency $\nu_0=$221~GHz. Only image pixels where the emission is higher than 5$\sigma$ are used for the fit. The noise levels of the CLEAN maps are 4.3, 9.4, 13, and 49~$\mu$Jy~beam$^{-1}$ for Bands~3, 4, 6, and 7, respectively. Fitting was performed using {\it curve\_fit} in the {\it scipy} package\footnote{https://scipy.org} \citep{bib:scipy2020}. According to the ALMA proposer's guide, the uncertainties of the absolute flux calibration of ALMA are 5, 5, 10, and 10\% for Bands~3, 4, 6, and 7, respectively. This corresponds to a fitting error of the spectral index with the image-oriented method of less than 0.01. Note that the uncertainty of the absolute flux calibration is likely lower than the above values because we combine several measurement sets for each band. \subsubsection{Multi-scale and multi-frequency synthesis} To create the combined intensity and spectral index maps, we also employed a multi-scale multi-frequency synthesis method (hereafter multi-scale MFS) implemented in the CASA {\it tclean} task \citep[deconvolver=mtmfs;][]{bib:rau2011}. In this method, the images are reconstructed by simultaneously solving the CLEAN components in the spatial and spectral regimes.
In particular, the MFS method solves the frequency dependence of the intensity by adopting the Taylor expansion of the following equation, \begin{eqnarray} I_\nu && = I_{\nu_0} \left( \frac{\nu}{\nu_0} \right) ^{I_\alpha + I_\beta \log \left( \frac{\nu}{\nu_0} \right)} \label{eq:mfs1} \\ && \sim I_0 + I_1 \left( \frac{\nu-\nu_0}{\nu_0} \right) + I_2 \left( \frac{\nu-\nu_0}{\nu_0} \right)^2 + \dots \ . \end{eqnarray} Here, $I_{\nu_0}$ is the intensity value at the representative frequency $\nu_0$, and $I_\alpha$ and $I_\beta$ are the values of the power-law index and the curvature of the frequency dependence, respectively. The Taylor coefficients $I_\mathrm{n}$ ($n=0,1,2,\dots$) were determined via the deconvolution process. If we take the first order of the Taylor expansion, the first two coefficients $I_0$ and $I_1$ correspond to $I_0 = I_{\nu_0}$ and $ I_1=I_\alpha I_{\nu_0}$, and thus $I_{\nu_0}$ and $I_\alpha$ can be calculated from the coefficients. For the second order of the Taylor expansion, $I_\beta$ can be obtained using the third coefficient \begin{equation}\label{eq:mfs2} I_2 = \biggl( \frac{I_{\alpha} (I_{\alpha}-1)}{2} + I_\beta \biggr) I_{\nu_0}\ . \end{equation} The polynomial approximation of the power-law function is a source of errors. Although increasing the number of Taylor terms would be better for reproducing the power-law dependence of the frequencies, the use of too many terms could increase the critical errors for noisy data because of the increasing number of free parameters. In addition, the total frequency coverage of available images with respect to the representative frequency, i.e., the bandwidth ratio, could be a source of errors. This is because the wider the bandwidth, the more Taylor terms are required to reproduce the power-law dependence. 
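The correspondence between the Taylor coefficients and the physical parameters $(I_{\nu_0}, I_\alpha, I_\beta)$ can be checked numerically. The following sketch is an illustration, not part of the imaging pipeline: it assumes a natural logarithm in Eq.~\ref{eq:mfs1} and uses hypothetical values of the index and curvature.

```python
import numpy as np

# Hypothetical values for illustration (not fitted quantities from the paper).
nu0 = 221e9            # representative frequency [Hz]
I_nu0 = 1.0            # intensity at nu0, arbitrary units
alpha, beta = 2.5, -0.5

def spectrum(nu):
    # Curved power law of Eq. (1), assuming a natural logarithm.
    r = nu / nu0
    return I_nu0 * r ** (alpha + beta * np.log(r))

# Taylor coefficients in x = (nu - nu0)/nu0 from central differences.
h = 1e-4
f = lambda x: spectrum(nu0 * (1.0 + x))
I0 = f(0.0)
I1 = (f(h) - f(-h)) / (2 * h)
I2 = (f(h) - 2 * f(0.0) + f(-h)) / (2 * h**2)  # includes the 1/2! factor

# The identities used to recover (I_nu0, I_alpha, I_beta) from (I0, I1, I2):
assert np.isclose(I0, I_nu0)
assert np.isclose(I1 / I0, alpha, rtol=1e-4)
assert np.isclose(I2 / I0 - alpha * (alpha - 1) / 2, beta, atol=1e-3)
```

Running the block confirms that the ratio of the first two coefficients gives the index and that the curvature follows the relation of Eq.~\ref{eq:mfs2}.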
The concatenated measurement set with data from all bands was imaged by adopting the {\it mtmfs} option in {\it tclean}, in which the number of Taylor coefficients is controlled by the {\it nterms} parameter; {\it nterms}=2 and {\it nterms}=3 mean that the frequency dependence is described by the Taylor expansion to the first and second orders, respectively. We created maps with {\it nterms}=2, 3, and 4 because the frequency range is wide, spanning 95--360~GHz. The combined intensity map at a central frequency of 221~GHz and the spectral index map were reconstructed from all the calibrated visibilities using Briggs weighting with a robust parameter of 0. The scale parameter of the multiscale CLEAN was set to [0, 54, 162]~mas. The resolution of the final images was 46.0$\times$42.5~mas with a position angle of 42.3$\degr$. Note that the uncertainty in the spectral index measurement with this method due to the absolute flux calibration is estimated to be less than 8\% from mock observations with an intensity model. Moreover, the uncertainty does not affect the shape of the $I_\alpha$ profile but only scales the entire profile. See \S~\ref{sec:model} for more detail. \section{Results} \label{sec:results} Figure \ref{fig:spindex} (top) shows the intensity map at the central frequency (221~GHz) $I_0$ and the spectral index map $I_\alpha$ obtained using the image-oriented method. Although the beam size is almost doubled with respect to that of previous studies \citep{bib:andrews2016,bib:tsukagoshi2016,bib:huang2018}, the combined intensity map resolves the disk substructures, two clear gaps and an inner hole, as shown in the leftmost panel of Figure \ref{fig:spindex} (top). The total flux density integrated over the disk emission is estimated to be 403~mJy. The spectral index map shows the radial variation as previously reported \citep{bib:tsukagoshi2016,bib:huang2018}.
The spectral index is $\sim$3.0 near the outermost disk and decreases to less than 2 inside 20~au. There are enhancements of the spectral index likely associated with the gaps in the intensity distribution at 25 and 42 au. The rightmost panel of Figure\ \ref{fig:spindex} (top) shows the deprojected radial profiles of the intensity and the spectral index maps. The error bars are determined by the standard error through the azimuthal averaging. Note that for the deprojection, we employed an inclination of 7$\degr$ \citep{bib:qi2004}, which is 1--2$\degr$ larger than that determined by recent works \citep{bib:huang2018,bib:teague2019}. This slight difference does not affect the profiles. The $I_0$ and $I_\alpha$ maps reconstructed using MFS are also shown in Figure \ref{fig:spindex} (second row to bottom). The deprojected radial profiles are also shown in the right panels of the figure. The shaded region of the $I_\alpha$ profile shows the $I_\alpha$ error map, which is the outcome of the CASA {\it tclean} with the {\it mtmfs} option. Evidently, with the higher spatial resolution of MFS compared with that of the image-oriented method, the $I_0$ map shows the disk substructures more clearly. The intensity maps reconstructed with different {\it nterms} show no clear difference. The image noise level of the maps is 7.5~$\mu$Jy~beam$^{-1}$. The peak intensities are 1.79, 1.73, and 1.83~mJy~beam$^{-1}$, and the total flux densities are 526, 471, and 549~mJy for {\it nterms}=2, 3, and 4, respectively. There is a slight difference in the measured flux densities, but it is less than 10\%. The shape of the $I_\alpha$ profile reconstructed by MFS with {\it nterms}=2 is similar to that of the image-oriented method. Starting from the outermost part, $I_\alpha$ gradually decreases toward $\sim$25~au with slight enhancements associated with the intensity gaps, suddenly drops near 20~au, and has a lower value in the innermost region.
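The deprojection and azimuthal averaging behind such radial profiles can be sketched as follows. The grid size, pixel scale, and mock exponential intensity model below are illustrative assumptions; a real map would also be rotated by the position angle before deprojection.

```python
import numpy as np

# Illustrative grid and geometry; pixel scale and disk model are assumptions.
npix, pix_au = 256, 0.5            # image size [pix], pixel scale [au/pix]
incl = np.deg2rad(7.0)             # near face-on inclination adopted in the text

y, x = np.indices((npix, npix), dtype=float)
x -= npix / 2
y -= npix / 2
# Deproject by stretching the minor axis by 1/cos(i).
r = np.hypot(x, y / np.cos(incl)) * pix_au

# Mock axisymmetric intensity map standing in for the CLEAN image.
image = np.exp(-r / 30.0)

# Azimuthal average and standard error in 2-au-wide radial bins.
edges = np.arange(0.0, 60.0, 2.0)
idx = np.digitize(r.ravel(), edges)
prof, err = [], []
for k in range(1, len(edges)):
    vals = image.ravel()[idx == k]
    prof.append(vals.mean())
    err.append(vals.std(ddof=1) / np.sqrt(vals.size))
prof, err = np.array(prof), np.array(err)
```

For a mock axisymmetric disk the averaged profile simply recovers the input exponential; applied to a real map, the bin scatter also absorbs any azimuthal substructure.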
However, the absolute value of $I_\alpha$ reconstructed with {\it nterms}=2 is much smaller than that of the image-oriented method. In contrast, for the {\it nterms}=3 and 4 cases, the absolute value of $I_\alpha$ is similar to that of the image-oriented method. Both cases show a similar $I_\alpha$ profile varying from $\sim$3 to $\lesssim$2 toward the disk center, although there is a slight difference between them ($\sim$0.3). The $I_\alpha$ enhancements at the gaps also have values similar to the image-oriented case. To determine how well each order of the Taylor expansion reproduces the power-law dependence across the observed frequencies, we performed a least-squares fitting of the first and second orders of the Taylor expansion of a power-law function to a model spectrum with $I_\nu \propto \nu^{2.5}$ sampled at the observed frequencies. A pure power-law function was also employed for the fitting as a reference. The results of the fitting are shown in Figure~\ref{fig:model_sed}. As mentioned previously, the first order of the Taylor expansion ({\it nterms}=2) is insufficient to reproduce the power-law form of the submillimeter spectrum between the observed bands. In contrast, the second order of the Taylor expansion can almost reproduce the power-law dependence. This indicates that at least the second order of the Taylor expansion is required to measure the spectral index between the observed bands with MFS. If we adopt {\it nterms}=3 and 4, we can obtain a map showing the spectral curvature $I_\beta$ (Eq.~\ref{eq:mfs2}), as shown in Figure\ \ref{fig:MFS_beta}. The maps of the spectral curvatures $I_\beta$ and their deprojected profiles clearly show that $I_\beta$ varies with radius for both cases. Non-zero $I_\beta$ implies a frequency dependence in the spectral slope within the observed bands. There is a difference in the value of $I_\beta$ between the {\it nterms}=3 and 4 cases.
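The least-squares comparison of the Taylor orders described above can be reproduced in outline. The band centre frequencies below are rounded assumptions rather than the exact observed values:

```python
import numpy as np

nu0 = 221.0                                     # representative frequency [GHz]
nus = np.array([100.0, 145.0, 225.0, 345.0])    # approximate band centres [GHz]
I = (nus / nu0) ** 2.5                          # model spectrum with alpha = 2.5

# Least-squares polynomial fits in x = (nu - nu0)/nu0, analogous to the
# first-order (nterms=2) and second-order (nterms=3) Taylor expansions.
x = (nus - nu0) / nu0
c1 = np.polynomial.polynomial.polyfit(x, I, 1)
c2 = np.polynomial.polynomial.polyval  # alias kept short below
fit1 = np.polynomial.polynomial.polyval(x, c1)
coef2 = np.polynomial.polynomial.polyfit(x, I, 2)
fit2 = np.polynomial.polynomial.polyval(x, coef2)

res1 = I - fit1   # residuals of the first-order expansion
res2 = I - fit2   # residuals of the second-order expansion
# Over 95--360 GHz the second-order expansion leaves far smaller residuals.
```

With these sample frequencies the maximum first-order residual is of order 30\% of the 221~GHz intensity, while the second-order residual drops by more than an order of magnitude, mirroring the behavior seen in Figure~\ref{fig:model_sed}.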
For {\it nterms}=3, $I_\beta$ is $\sim0$ near the disk center and gradually decreases to $-1.0$ toward the outer disk with slight variations at the intensity gaps. This indicates that, in almost all regions of the disk, the spectral index decreases as the frequency increases. The positive $I_\beta$ at the innermost part of the disk implies the opposite trend of the spectral index, which is consistent with the existence of free-free emission at the stellar position suggested in previous studies \citep{bib:pascucci2012,bib:macias2021}. On the other hand, for {\it nterms}=4, $I_\beta$ is from $-1.0$ to $-0.5$ near the disk center and drastically decreases to $\sim-1.7$ at $\sim$20~au. The relatively large error bars of $I_\beta$ for the {\it nterms}=4 case are probably because a larger number of Taylor coefficients must be determined for higher orders of the Taylor expansion. The submillimeter spectrum inferred from the $I_\alpha$ and $I_\beta$ profiles using Eq.~\ref{eq:mfs1} is also shown in Figure\ \ref{fig:MFS_beta}. When comparing the flux densities of each band, it is clear that the combination of $I_\alpha$ and $I_\beta$ for the {\it nterms}=3 case reproduces the observed submillimeter spectrum better than the {\it nterms}=4 case. This difference arises from the number of Taylor terms used to describe the submillimeter spectrum. In MFS, the submillimeter spectrum is described by Eq.~\ref{eq:mfs1} using three imaging parameters $I_0$, $I_\alpha$, and $I_\beta$, and they are calculated using the first three Taylor coefficients, $I_0$, $I_1$, and $I_2$. With MFS {\it nterms}=3, the spectrum is described with exactly the first three Taylor coefficients, and thus it is preferable because the parameters of the spectrum can be determined uniquely. For {\it nterms}=4, on the other hand, we obtain four Taylor coefficients from the MFS imaging. However, the final Taylor term is not employed for the spectrum calculation, though it is non-zero.
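That a negative $I_\beta$ lowers the spectral slope toward higher frequencies follows directly from Eq.~\ref{eq:mfs1}: the effective index between two frequencies is $I_\alpha + I_\beta\,[\ln(\nu_1/\nu_0)+\ln(\nu_2/\nu_0)]$. A minimal sketch with hypothetical values of $I_\alpha$ and $I_\beta$ and approximate band frequencies:

```python
import numpy as np

nu0 = 221.0                      # representative frequency [GHz]
I0, Ia, Ib = 1.0, 2.8, -1.0      # hypothetical MFS outputs at one radius

def I_model(nu):
    # Eq. (1): I = I0 * (nu/nu0)**(Ia + Ib*log(nu/nu0)), natural log assumed.
    u = np.log(nu / nu0)
    return I0 * np.exp(Ia * u + Ib * u**2)

def pair_index(nu1, nu2):
    # Effective spectral index between two frequencies; analytically
    # equal to Ia + Ib*(log(nu1/nu0) + log(nu2/nu0)).
    return np.log(I_model(nu2) / I_model(nu1)) / np.log(nu2 / nu1)

low = pair_index(100.0, 145.0)    # roughly the Band 3-4 pair
high = pair_index(225.0, 345.0)   # roughly the Band 6-7 pair
# With Ib < 0, the higher-frequency pair yields the smaller index.
```

This is the sense in which the negative $I_\beta$ found over most of the disk corresponds to a spectral index that decreases with increasing frequency.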
This likely causes a difference between the calculated spectrum and the observed flux density for the case of {\it nterms}=4. The point source sensitivity of our millimeter continuum image reconstructed with MFS is improved by $\sim$30\% over the deepest one so far for TW Hya at high resolution ($<$50~mas) by \citet{bib:tsukagoshi2019b}. The high-resolution and high-sensitivity continuum map reconstructed with MFS provides an opportunity to search for substructures associated with the millimeter blob located at 52~au, as found by \citet{bib:tsukagoshi2019b}. Figure \ref{fig:tail} shows the intensity map of MFS {\it nterms}=3 deprojected into a map in polar coordinates, whose intensity scale is normalized by an exponential function (see \S\ref{sec:model}) to more easily identify substructures. We tentatively find an emission feature that could be a trailing tail emerging from the millimeter blob \citep{bib:nayakshin2020}. However, it is also possible that the emission feature is an artifact caused by the residual of the sidelobe pattern. Another feature is that the emission ring at 45~au contains azimuthal wiggles, while such wiggles are not present at the 33~au ring or the 25~au gap. This indicates that the 45~au ring has non-zero eccentricity or that the center of the ring orbit is slightly different from that of the inner rings/gaps. Alternatively, the inner and outer disks might not be coplanar, or there might be an azimuthal variation of the ring scale height \citep{bib:doi2021}. These emission features will be confirmed and discussed through future observations and more detailed analysis. \begin{figure*}[htb] \begin{center} \epsscale{0.8} \plotone{twhya_4band_spindex_from_image.pdf} \plotone{twhya_mfs_4band.pdf} \plotone{twhya_mfs_4band_nt3.pdf} \plotone{twhya_mfs_4band_nt4.pdf} \caption{Combined intensity (left) and spectral index $I_\alpha$ (center) maps. The circle at the bottom-left corner of the intensity panel shows the beam size.
The righthand panel shows the radial profiles of the intensity (black) and $I_\alpha$ (red) azimuthally averaged after the image deprojection. The error bars show the standard error through the azimuthal averaging. The shaded region indicates the value of the $I_\alpha$ error map that CASA {\it tclean} provided. The bar at the bottom-left corner denotes the geometric mean of the beam size. The images reconstructed by the image-oriented analysis are shown in the top row, and those reconstructed by MFS are shown from the second row to the bottom. The noise of the intensity map of the image-oriented method is 95~$\mu$Jy~beam$^{-1}$, and that for the MFS maps of {\it nterms}=2--4 is 7.5~$\mu$Jy~beam$^{-1}$. The green arrows in the right panels indicate the positions of the 25 and 41~au gaps.}\label{fig:spindex} \end{center} \end{figure*} \begin{figure}[htb] \begin{center} \plotone{MFS_model_sed_4band.pdf} \caption{Reproducibility of the submillimeter spectrum for different {\it nterms}. The flux densities are determined to follow a power-law index of 2.5, and the spectrum is sampled at the frequencies of the observation data. The solid lines show the fit using the equations for {\it nterms}=2 (red) and {\it nterms}=3 (yellow), while the fit using a power-law function is shown by the dotted line in green. The vertical axis shows the flux density in arbitrary units. `TE' is an abbreviation of Taylor expansion.}\label{fig:model_sed} \end{center} \end{figure} \begin{figure*}[htb] \begin{center} \epsscale{1.0} \plotone{beta_plot_nt3.pdf} \plotone{beta_plot_nt4.pdf} \caption{Distribution of the spectral curvature $I_\beta$ and the inferred submillimeter spectrum for the cases of {\it nterms}=3 and 4 for the upper and lower panels, respectively. (Left) Map of $I_\beta$ reconstructed using the MFS method. The black circle at the bottom-left corner denotes the beam size. (Middle) Deprojected and azimuthally averaged profile of $I_\beta$. 
The error bars represent the standard error of the azimuthal averaging. The bar in the box at the bottom-left corner shows the geometric mean of the beam size. The green arrows in all the panels indicate the positions of the 25 and 41~au gaps. (Right) Submillimeter spectrum inferred from the $I_\alpha$ and $I_\beta$ profiles reconstructed with MFS. The azimuthally averaged spectra at 10, 30, and 50~au are shown in blue, orange, and green, respectively. The black dots indicate the flux densities measured in each band map. The dashed and dash-dotted lines in gray show $\nu^2$ and $\nu^3$ dependence, respectively, as a reference.} \label{fig:MFS_beta} \end{center} \end{figure*} \epsscale{0.8} \begin{figure*} \begin{center} \plotone{az-r_whole_3.pdf} \caption{Intensity map of MFS {\it nterms}=3 deprojected onto polar coordinates. The intensity scale is divided by $1.7\times10^{-0.02R}$ mJy~beam$^{-1}$ to clearly view the features embedded in the background emission of the protoplanetary disk (see \S\ref{sec:model}). The millimeter blob emission reported by \citet{bib:tsukagoshi2019b} and a candidate trailing tail found in this study are labeled.}\label{fig:tail} \end{center} \end{figure*} \section{Validation of the Imaging Methods Using Intensity Models}\label{sec:model} Our results indicate that for the MFS imaging, a higher-order Taylor expansion is required to reconstruct a reliable $I_\alpha$ map from datasets with wide frequency coverage at millimeter/submillimeter wavelengths. The higher orders of the Taylor expansion, however, require a sufficiently high SNR of the data. On the other hand, although the resolution of the image is poorer than that of MFS, the image-oriented method provides an $I_\alpha$ map without using the Taylor series approximation for the frequency dependence. In this section, we investigate the behavior of the MFS method using an intensity model to validate the reconstructed spectral index maps. 
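The penalty of truncating the Taylor series over such a wide fractional bandwidth can be illustrated with a simple, imaging-independent numerical experiment analogous to Figure~\ref{fig:model_sed} (a sketch only; the sampled frequencies and the power-law index of 2.5 are illustrative values, not the exact spectral setup of the observations):

```python
import numpy as np

# Representative frequencies spanning ALMA Bands 3-7 (GHz); illustrative
# values, not the exact spectral windows of the observations.
nu = np.array([97.5, 145.0, 233.0, 343.5])
nu0 = 221.0                      # representative central frequency (GHz)
alpha_true = 2.5                 # power-law index, as in Figure model_sed
spec = (nu / nu0) ** alpha_true  # noise-free power-law spectrum

x = nu / nu0 - 1.0               # Taylor expansion variable (nu - nu0)/nu0

def taylor_fit(nterms):
    """Least-squares fit of an (nterms-1)-th order Taylor polynomial."""
    coeffs = np.polyfit(x, spec, deg=nterms - 1)
    return np.polyval(coeffs, x)

for nterms in (2, 3):
    resid = np.max(np.abs(taylor_fit(nterms) - spec) / spec)
    print(f"nterms={nterms}: max fractional residual = {resid:.2e}")
```

With only two terms the curvature of a $\nu^{2.5}$ spectrum across this frequency range is missed, whereas adding the quadratic term reduces the residual substantially, mirroring the behavior seen in the imaging tests.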
The intensity model is motivated by the intensity distribution of the TW~Hya disk. The combined intensity and $I_\alpha$ maps were created using the same procedure as for the observed data. We compared them to determine which is the more reliable procedure to make the $I_\alpha$ map from datasets with a wide frequency coverage. Note that, for simplicity, we ignored the frequency dependence of the spectral slope, i.e., the spectral curvature. The intensity model was assumed to be an exponential function, as described by $I=1.7\times10^{-0.02R}$ mJy~beam$^{-1}$ at a representative frequency, i.e., the central frequency ($\sim$221~GHz). The intensity profile was truncated at 1 and 60~au for the inner and outer radii, respectively. We also added an intensity gap at 25~au to the model profile to more closely resemble that of the TW~Hya disk. The gap is modelled using a Gaussian function with a FWHM of 5~au and a fractional depth of 0.5. Figure \ref{fig:model_profile} shows the comparison of the adopted intensity distribution and the observed intensity. The intensity profile of the model is more similar to the observed profile than the standard power-law dependence ($I\propto R^{-p}$). We assumed three cases for the radial dependence of the spectral index $I_\alpha(R)$. The first one is a constant over the disk with a value of 2.5. The second one is a linear dependence with disk radius, in which $I_\alpha$=2 at 10~au and 3.0 at 50~au are assumed. Finally, we adopted the linear dependence assumed above with an enhancement at the 25~au gap. The enhancement has a Gaussian form with the same width as the intensity gap (5~au in FWHM). The peak value of the $I_\alpha$ enhancement is set to 3. 
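The model construction above can be sketched as follows (an illustrative, stand-alone version: the radial grid and variable names are ours, while the parameter values are those quoted in the text):

```python
import numpy as np

# Radial grid in au; the profile is truncated at 1 and 60 au.
R = np.linspace(1.0, 60.0, 600)

# Exponential intensity profile at the central frequency (mJy/beam).
I_smooth = 1.7 * 10.0 ** (-0.02 * R)

# Gaussian intensity gap at 25 au: FWHM = 5 au, fractional depth = 0.5.
R_gap, fwhm, depth = 25.0, 5.0, 0.5
sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
bump = np.exp(-0.5 * ((R - R_gap) / sigma) ** 2)
I_model = I_smooth * (1.0 - depth * bump)

# Three spectral-index models:
Ia_const = np.full_like(R, 2.5)                    # (1) constant
Ia_lin = 2.0 + (R - 10.0) * (3.0 - 2.0) / 40.0     # (2) 2 at 10 au, 3 at 50 au
# (3) linear model plus a Gaussian enhancement peaking at 3 at the gap
Ia_at_gap = Ia_lin[np.argmin(np.abs(R - R_gap))]
Ia_enh = Ia_lin + (3.0 - Ia_at_gap) * bump
```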
Under these assumptions, model images were created at the same frequency sampling as the observed datasets. The model images were converted to visibilities and resampled to match each of the observations. The visibilities were resampled using the Python code {\it vis\_sample}\footnote{https://github.com/AstroChem/vis\_sample} \citep{bib:loomis2017}. Then, the model visibilities were imaged with the same parameters as for the observed datasets using the {\it tclean} task of CASA. Figures\ \ref{fig:sim_result1}, \ref{fig:sim_result2}, and \ref{fig:sim_result3} compare the simulated images reconstructed from the model visibilities. The reconstructed images of the intensity, spectral index, and their radial profiles are shown from left to right, respectively. The results of the image-oriented method and of MFS with {\it nterms}=2, 3, and 4 are displayed from top to bottom. We summarize the results of the imaging tests below. \begin{itemize} \item{ As mentioned in \S 2, the first order of the Taylor expansion ({\it nterms}=2) cannot reproduce the spectrum between Bands 4 and 7. The simulated $I_\alpha$ is $\sim$20\% lower than the input value for both the $I_\alpha$ models. The intensity of the combined map is also affected. By adopting the first order of the Taylor expansion for the MFS imaging, the intensity at the central frequency tends to be overestimated by 20--40\% (see Figure \ref{fig:model_sed}). } \item{ In contrast, the MFS imaging that adopts {\it nterms}=3 and 4 reasonably reproduces the $I_\alpha$ profile, not only for the constant $I_\alpha$ case but also for the structured $I_\alpha$ cases. The difference between the mean $I_\alpha$ and the input value was typically less than 5\%. However, there appear to be artificial ripples over the disk with a spatial scale of $\sim$10~au, seen particularly in the case of {\it nterms}=3. 
Because the peak positions of the ripples vary if we adopt different multiscale parameters in CLEAN, the ripples could be caused by the choice of scale parameters. } \item{ Despite the difference in the $I_\alpha$ maps, the combined intensity is not significantly affected when {\it nterms}=3 or 4 is adopted. In all the $I_\alpha$ model cases, the difference between the peak intensities of {\it nterms}=3 and 4 is less than 5\%, indicating that both profiles describe the radial distribution of the disk emission well. } \item{ In both imaging methods, the existence of the intensity gap does not significantly affect the $I_\alpha$ profile. If we adopt {\it nterms}=3 and 4, only a $<$2\% variation around the 25~au gap is found when $I_\alpha$ has the linear radial dependence. If there is an enhancement of $I_\alpha$ at the gap, the peak value of $I_\alpha$ is underestimated. However, the difference is as small as $\sim$10\%. Beam smearing could also be a reason for the decrease in $I_\alpha$ in the image-oriented method. } \item{ The image-oriented method reproduces a reliable $I_\alpha$ map, although image resolution is sacrificed. The radial dependence of $I_\alpha$ agrees reasonably well with the input one, and the noise level is significantly lower than that of higher-order MFS images. One concern is that, in all the $I_\alpha$ model cases, the values of the simulated $I_\alpha$ profiles are slightly larger than the model ($\lesssim6$\%). This could be caused by imaging each band's intensity without using MFS, because the bandwidth ratio of each dataset is not negligible. Alternatively, the depth to which CLEAN components are collected when making each band's image may also affect the spectral index map. } \end{itemize} Based on these results, we conclude that MFS with the second order of the Taylor expansion ({\it nterms}=3) is a reasonable method to create a high-resolution combined intensity map. 
This is because {\it nterms}=2 cannot reproduce the flux density correctly owing to the wide frequency coverage, while {\it nterms}=4 or higher causes difficulty in reconstructing the spectral curvature. Although {\it nterms}=4 can provide a spectral slope comparable to or better than {\it nterms}=3, the number of Taylor coefficients is larger than the number of parameters required for describing a submillimeter spectrum ($I_0$, $I_\alpha$, and $I_\beta$), as shown in \S\ref{sec:results}. The artifact of the $I_\alpha$ profile associated with the intensity gaps is negligible. However, if $I_\alpha$ is enhanced at the gap, the peak value is underestimated by $\sim$10\%. Although the resolution is lower than that of the MFS images, the image-oriented method provides a more robust $I_\alpha$ map. The uncertainty owing to the selection of the imaging method is expected to be $\lesssim10$\% if the spectral curvature is negligible. Thus, we conclude that a reliable $I_\alpha$ image can be obtained by cross-checking the maps reconstructed with MFS ({\it nterms}=3) and with the image-oriented method. \begin{figure}[htb] \begin{center} \plotone{model_profile.pdf} \caption{Comparison of the model intensity profile (red) to that of the observations (black dots). The radial profile of the combined intensity map reconstructed by MFS with {\it nterms}=3 is adopted for the observed profile. The blue lines show a standard power-law form with $R^{-p}$ dependence.}\label{fig:model_profile} \end{center} \end{figure} \begin{figure*}[htb] \begin{center} \epsscale{0.9} \plotone{4band_spindex_from_image_const.pdf} \plotone{4band_resampled_c4_cln_nterm2_const.pdf} \plotone{4band_resampled_c4_cln_nterm3_const.pdf} \plotone{4band_resampled_c4_cln_nterm4_const.pdf} \caption{Comparison of the reconstructed images and radial distributions from the simulated visibilities using the image-oriented analysis (top) and the MFS method (from 2nd to bottom rows). 
The figure description is the same as that in Figure\ \ref{fig:spindex}. The intensity scale is in arbitrary units. The results for the constant $I_\alpha$ model are shown. The green arrow indicates the gap position in the model profile. The blue line represents the model $I_\alpha$ profile.}\label{fig:sim_result1} \end{center} \end{figure*} \begin{figure*}[htb] \begin{center} \epsscale{0.9} \plotone{4band_spindex_from_image_dist.pdf} \plotone{4band_resampled_c4_cln_nterm2_dist.pdf} \plotone{4band_resampled_c4_cln_nterm3_dist.pdf} \plotone{4band_resampled_c4_cln_nterm4_dist.pdf} \caption{Same as Figure\ \ref{fig:sim_result1}, but for the gradient $I_\alpha$ model.}\label{fig:sim_result2} \end{center} \end{figure*} \begin{figure*}[htb] \begin{center} \epsscale{0.9} \plotone{4band_spindex_from_image_dist_gap.pdf} \plotone{4band_resampled_c4_cln_nterm2_dist_gap.pdf} \plotone{4band_resampled_c4_cln_nterm3_dist_gap.pdf} \plotone{4band_resampled_c4_cln_nterm4_dist_gap.pdf} \caption{Same as Figure\ \ref{fig:sim_result1}, but for the model of the gradient $I_\alpha$ with an enhancement at 25~au.}\label{fig:sim_result3} \end{center} \end{figure*} Finally, we checked how the uncertainty in the absolute flux density calibration affects the $I_\alpha$ profile using the same procedure as the mock observations for the intensity model. We employed an intensity model whose spectral index increases linearly with radius with an enhancement at the gap. To observe the effect of the absolute flux calibration uncertainty on the reconstructed spectral slope, we ran mock observations for four cases in which the flux density of the model profile was modified by $\pm$10\% at Band~7 and $\pm$5\% at Band~3 while keeping the original $I_\alpha$ profile. The flux densities at Bands~4 and 6 were unchanged. Figure\ \ref{fig:flux_calibration} shows the results of the reconstructed radial profile of $I_\alpha$ using MFS ({\it nterms}=3) and the image-oriented method. 
It is clear that the uncertainty of the flux calibration does not affect the shape of the $I_\alpha$ profile, but does affect the value of $I_\alpha$. Moreover, the value of $I_\alpha$ is more dependent on the uncertainty of the Band~7 flux calibration than that of Band~3. The differences from the original profile are typically 7\% for the MFS method and 8\% for the image-oriented method. Note that this is a conservative estimate of the uncertainty due to the absolute flux calibration, because we combine multiple measurement sets for each band; the actual uncertainty should therefore be lower than the values reported by ALMA (10\% for Band~7 and 5\% for Band~3). \begin{figure*}[htb] \begin{center} \epsscale{1.0} \plotone{influence_flux_calibration.pdf} \caption{Reconstructed $I_\alpha$ profiles simulated by adding the modulation of the flux density scale. The cases of $+10$\% and $-$10\% flux density modulation at Band~7 are shown in solid and dashed lines, respectively, and the cases of $+$5\% and $-$5\% modulation at Band~3 are shown in blue and red, respectively. The original profile without flux density modulation is shown in black. The green arrow indicates the gap position in the model profile. The results for MFS ({\it nterms}=3) (a) and the image-oriented method (b) are shown.}\label{fig:flux_calibration} \end{center} \end{figure*} \section{Discussion} \label{sec:discussion} \subsection{Comparison with the spectral index distribution of \citet{bib:macias2021}} With $I_0$, $I_\alpha$, and $I_\beta$ derived with MFS {\it nterms}=3 (see Figures \ref{fig:spindex} and \ref{fig:MFS_beta}), we can describe the submillimeter spectrum using Eq.~\ref{eq:mfs1} and measure the spectral slope $\alpha_\nu$ at a specific frequency. 
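Given the reconstructed $I_{\nu_0}$, $I_\alpha$, and $I_\beta$, the local slope follows from differentiating the logarithm of Eq.~\ref{eq:mfs1}: $\alpha_\nu = I_\alpha + 2 I_\beta \log(\nu/\nu_0)$. A minimal sketch (assuming base-10 logarithms in the curvature term; the numerical values of $I_\alpha$ and $I_\beta$ below are illustrative, not the measured TW~Hya profiles):

```python
import numpy as np

def alpha_nu(nu_ghz, I_alpha, I_beta, nu0_ghz=221.0):
    """Local spectral slope dlog(I)/dlog(nu) of
    I = I_nu0 * (nu/nu0)**(I_alpha + I_beta*log10(nu/nu0)),
    assuming a base-10 logarithm in the curvature term."""
    return I_alpha + 2.0 * I_beta * np.log10(nu_ghz / nu0_ghz)

# Illustrative parameter values (not the measured profiles):
I_alpha, I_beta = 2.8, -1.0
for nu in (121.0, 190.0, 290.0):   # Band3+4, Band4+6, Band6+7 centers
    print(f"{nu:5.0f} GHz: alpha_nu = {alpha_nu(nu, I_alpha, I_beta):.2f}")
```

With a negative $I_\beta$ the slope decreases monotonically with frequency, which is the qualitative behavior discussed below.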
Figure\ \ref{fig:compare_spindex} shows the derived $\alpha_\nu$ at frequencies of 121, 190, and 290~GHz, which correspond to the central frequencies between ALMA Band~3 and 4 (Band3+4), 4 and 6 (Band4+6), and 6 and 7 (Band6+7), respectively. The overall trend is that $\alpha_\nu$ decreases as the frequency increases. This trend is more prominent at $\gtrsim$20~au; $\alpha_\nu$ decreases by $\sim$0.5 from 121 to 290~GHz there, and by $\sim$0.2 near 10~au. The enhancements of $\alpha_\nu$ at the intensity gaps (25 and 41~au) appear at all three frequencies, and their excess compared to the surroundings decreases as the frequency increases. As the spectral index determined by the image-oriented method using data from all bands is independent of frequency, it seems to agree with the profile for the Band3+4 case, but cannot describe the profile for the Band6+7 case. If we instead derive $\alpha_\nu$ profiles with the image-oriented method using band-to-band fitting, the same frequency trend as in the MFS profiles is found, although with larger uncertainties. Recently, \citet{bib:macias2021} presented the distribution of $\alpha_\nu$ for the TW~Hya disk at a resolution of 50~mas. The difference from our study is that they measured $\alpha_\nu$ between Band~3--4, 4--6, and 6--7 separately by using MFS {\it nterms}=2, while our study focuses on determining $I_\alpha$ and $I_\beta$ through the MFS {\it nterms}=3 imaging. Figure\ \ref{fig:compare_spindex} also compares our results of $\alpha_\nu$ with those derived by \citet{bib:macias2021}. Our $\alpha_\nu$ profiles can reproduce the frequency dependence of \citet{bib:macias2021}. The radial variation of the profiles is also almost consistent. However, there is still a discrepancy in the excess in $\alpha_\nu$ at the intensity gaps; the results of \citet{bib:macias2021} show that the excess of $\alpha_\nu$ at the gaps is largest at Bands 6 and 7, whereas our result shows the opposite trend. 
This is probably because our $\alpha_\nu$ measurement, which combines data over four bands, improves the signal-to-noise ratio of the profile. \begin{figure*}[htb] \epsscale{1.0} \begin{center} \plotone{compare_spindex_nterms3_indiv_beam.pdf} \caption{Radial profile of the spectral slope $\alpha_\nu$ for the case of MFS ({\it nterms}=3) is shown in red. The profiles at the frequencies between Bands~3 and 4 (Band3+4; 121~GHz), Bands~4 and 6 (Band4+6; 190~GHz), and Bands~6 and 7 (Band6+7; 290~GHz) are shown in (a)--(c), respectively. The $\alpha_\nu$ profile determined by the image-oriented method using data from all bands is shown in blue, while those derived from the band-to-band fitting are shown in cyan. The shaded area indicates the standard error. The green arrows in the right panels indicate the positions of the 25 and 41~au gaps. For reference, the radial profiles of the spectral index measured in \citet{bib:macias2021} are shown by gray dots. The bars in the box represent the beam sizes of the profiles. }\label{fig:compare_spindex} \end{center} \end{figure*} \subsection{Implication of the dust size distribution} In this subsection, we deduce the optical depth $\tau_0$ at the central frequency $\nu_0$, the power-law index of the dust opacity $\beta$, and the temperature of the dust disk $T_\mathrm{d}$ using the submillimeter spectrum derived from our MFS imaging. If we neglect dust scattering, the intensity of the dust emission $I_\nu$ is expressed as \begin{equation} I_\nu = B(T_\mathrm{d}) \left[ 1-\exp \left\{-\tau_0 \left(\frac{\nu}{\nu_0} \right)^\beta \right\} \right], \end{equation} where $B(T_\mathrm{d})$ is the Planck function. There are three unknown variables, $\tau_0$, $\beta$, and $T_\mathrm{d}$, in this equation. 
On the other hand, the observed submillimeter intensity, $I_\nu \mathrm{(obs)}$, can be expressed using the three parameters determined by our MFS imaging, $I_{\nu_0}$, $I_\alpha$, and $I_\beta$, as \begin{equation} I_\nu \mathrm{(obs)}= I_{\nu_0} \left( \frac{\nu}{\nu_0} \right) ^{I_\alpha + I_\beta \log \left( \frac{\nu}{\nu_0} \right)}. \end{equation} This implies that we can solve for the three unknown variables from the submillimeter spectrum. To address this problem, we minimized $\Delta I_\nu \equiv I_\nu - I_\nu\mathrm{(obs)}$ by varying $\tau_0$, $\beta$, and $T_\mathrm{d}$. We used {\it curve\_fit} in the {\it scipy} optimization module to minimize $\Delta I_\nu$. To prevent divergence, the solution is searched for within the bounds of 0.001--10, $-3$ to 3, and 10--300~K for $\tau_0$, $\beta$, and $T_\mathrm{d}$, respectively. The standard errors in the radial profiles of the observed parameters were used to determine the weight of the minimization. The derived profiles of $\tau_0$, $\beta$, and $T_\mathrm{d}$ are shown in Figure\ \ref{fig:tau-beta}. The disk is entirely optically thin at 221~GHz, although only marginally so at $\sim$15~au. The shape of the $\tau_0$ profile is consistent with that obtained in \citet{bib:tsukagoshi2016} at $>$15~au, but it deviates at the inner radii mainly due to the difference in the adopted disk temperature profiles. As shown in Figure\ \ref{fig:tau-beta}(c), our direct measurement of the dust temperature $T_\mathrm{d}$ agrees well with the estimate obtained by a modeling approach for the gas disk \citep{bib:zhang2017}. The radial dependence of $\beta$ is similar to that derived in \citet{bib:tsukagoshi2016}. The value is slightly smaller (by $\sim$0.1--0.3) than that derived in \citet{bib:tsukagoshi2016}, probably because of the difference in the frequency range over which $\beta$ was determined. The enhancements of $\beta$ associated with the 25 and 41~au gaps are also seen. 
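The minimization described above can be sketched as follows (a simplified, self-contained version: the {\it curve\_fit} call and the parameter bounds follow the text, while the band frequencies and the synthetic ``true'' parameters are illustrative):

```python
import numpy as np
from scipy.optimize import curve_fit

h, k, c = 6.62607015e-34, 1.380649e-23, 2.99792458e8  # SI constants
nu0 = 221.0e9  # central frequency (Hz)

def planck(nu, T):
    """Planck function B_nu(T) in SI units."""
    return 2.0 * h * nu**3 / c**2 / np.expm1(h * nu / (k * T))

def model(nu, tau0, beta, Td):
    """Non-scattering slab model from the text:
    I = B(Td) * [1 - exp(-tau0 * (nu/nu0)**beta)]."""
    return planck(nu, Td) * (1.0 - np.exp(-tau0 * (nu / nu0) ** beta))

# Synthetic 'observed' intensities at representative band frequencies (Hz);
# the 'true' parameters below are illustrative, not fitted disk values.
nu_obs = np.array([97.5e9, 145.0e9, 225.0e9, 340.0e9])
true = (0.5, 1.0, 30.0)                    # tau0, beta, Td
I_obs = model(nu_obs, *true)

# Bounds as quoted in the text: 0.001--10, -3 to 3, and 10--300 K.
popt, _ = curve_fit(model, nu_obs, I_obs,
                    p0=(0.3, 0.8, 40.0),
                    bounds=([0.001, -3.0, 10.0], [10.0, 3.0, 300.0]))
```

In the actual analysis the standard errors of the observed radial profiles enter as weights (the {\it sigma} argument of {\it curve\_fit}), which is omitted here for brevity.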
This result supports the conclusion of \citet{bib:tsukagoshi2016} that the enhancements can be explained by a deficit of millimeter-sized grains within the gaps. In the inner region of the disk ($R\lesssim$15~au), where the effect of the optical depth cannot be ignored, $\beta$ is less than 0; this indicates that the emission is blackbody-like, or that the scattering of millimeter radiation is effective \citep{bib:liu2019,bib:ueda2020}. The scattering should be responsible for an approximately one order of magnitude difference between our estimate of the optical depth and that derived by \citet{bib:macias2021}. According to theoretical predictions of the dust opacity \citep[e.g.,][]{bib:birnstiel2018}, $\beta \sim 1$ implies that the power-law index of the dust size distribution $q$ is $\sim$3.5 and that the maximum dust particle size is above 1~mm. Beyond the 25~au gap, where the emission is optically thin, $\beta$ varies up to $\sim$1.5 at 60~au, meaning that the maximum dust size could be a few millimeters. This conclusion is supported by the detailed modeling of the dust size distribution for sets of high-resolution ALMA data \citep{bib:macias2021}. Note that, in our study, the disk parameters are determined from optically thinner frequency bands (Bands~3, 4, 6, and 7). By adding optically thick continuum data at higher frequency bands (Band~9 or 10), the disk parameters, particularly the dust temperature profile, can be determined more robustly \citep{bib:kim2019}. \begin{figure*}[htb] \begin{center} \epsscale{1.0} \plotone{4band_tau-beta.pdf} \caption{Radial profiles of the optical depth at 221~GHz $\tau_0$ (a), power-law index of the opacity coefficient $\beta$ (b), and temperature of the dust disk $T_\mathrm{d}$ (c). The profiles are calculated based on the maps reconstructed by MFS {\it nterms}=3. The shaded region corresponds to the minimization error. 
As a reference in (a) and (b), $\tau$ and $\beta$ at 190~GHz derived by \citet{bib:tsukagoshi2016} are shown by the dashed red line with the shaded region showing the standard error ($T_\mathrm{10}=$26~K and $q=-0.4$ case). In (c), the disk midplane temperature profiles assumed in \citet{bib:tsukagoshi2016} and derived by \citet{bib:zhang2017} are indicated by the red and yellow dashed lines, respectively. The green arrows in all the panels indicate the positions of the 25 and 41~au gaps. The mean beam sizes of this study and \citet{bib:tsukagoshi2016} are shown by the bars in black and red, respectively, at the bottom-left corner of each panel.}\label{fig:tau-beta} \end{center} \end{figure*} \section{Summary}\label{sec:summary} To obtain a higher-sensitivity intensity map at millimeter wavelengths and to revisit the dust size distribution of the protoplanetary disk around TW~Hya, we created high-resolution maps of the intensity and the spectral index by combining sets of ALMA data at Bands~3, 4, 6, and 7. In addition to using the existing ALMA archive data, we have newly conducted high-resolution observations at Bands~4 and 6, a part of which has already been published in \citet{bib:tsukagoshi2019b}. Two methods are employed to reconstruct the combined intensity and the spectral index maps: a traditional image-oriented method and multi-scale and multi-frequency synthesis (multi-scale MFS). The impact of the choice of method was also investigated using an intensity model motivated by TW~Hya. The results of this paper are summarized as follows: \begin{itemize} \item We show the spectral index map reconstructed with both imaging methods. A reasonable method to reconstruct the spectral index map is MFS with the second order of the Taylor expansion for the frequency ({\it nterms}=3). With a smaller order of the Taylor expansion ({\it nterms}=2), the number of Taylor coefficients is too small to reproduce the frequency dependence from Bands 3 to 7. 
Meanwhile, the higher-order ({\it nterms}=4) MFS imaging requires a larger number of Taylor coefficients and a higher signal-to-noise ratio. Although the resolution is almost a factor of two poorer, the image-oriented method provides a spectral index map consistent with MFS ({\it nterms}=3) imaging. \item The spectral index reconstructed with MFS {\it nterms}=3 agrees well with that derived in previous studies \citep{bib:tsukagoshi2016,bib:huang2018,bib:macias2021}. The index decreases toward the disk center and shows enhancements in the intensity gaps. The spectral index of the image-oriented method showed similar structures. Our MFS {\it nterms}=3 imaging shows that the submillimeter spectrum of TW~Hya has spectral curvature, indicating that the spectral index depends on the frequency. \item We investigated how the substructures of the intensity distribution affect the reconstructed spectral index map using an intensity model and noise-free mock observations. We validated that the first order of the Taylor expansion is insufficient to reproduce the frequency dependence from Bands 3 to 7, and that higher orders of the Taylor expansion ({\it nterms}=3 and 4) are necessary. We found that the higher-order MFS method can provide a high-resolution spectral index distribution with an uncertainty of $<$10\%, and that the presence of the intensity gap does not significantly influence the reconstruction of the spectral index distribution. Although the resolution is lower than that of the MFS images, the image-oriented method also provides a robust distribution of the spectral index if there is no frequency dependence in the spectral index. \item We formulated the submillimeter spectrum of the TW~Hya disk as a function of the disk radius by using the images reconstructed with MFS {\it nterms}=3. 
With the spectrum, the optical depth $\tau_0$, power-law index of the opacity coefficient $\beta$, and temperature of the dust disk $T_\mathrm{d}$ were derived under the assumption that scattering is negligible. The derived $\tau_0$ and $\beta$ agree well with those derived in our previous work \citep{bib:tsukagoshi2016}. The enhancement of $\beta$ at the intensity gaps was also confirmed, supporting a deficit of millimeter-sized grains within the gaps. \item By combining all the visibilities from Bands~3 to 7, we made the highest-sensitivity continuum map at millimeter wavelengths to date. The point source sensitivity of our map was improved by 30\% compared with the previous highest-sensitivity continuum map of \citet{bib:tsukagoshi2019b}. The previously reported substructures in the dust emission were confirmed by our maps. The tentative detection of a new emission feature associated with the millimeter blob has also been reported, but it should be confirmed by future observations and detailed analysis. \end{itemize} \acknowledgments We would like to thank the referee for improving our manuscript. We are also grateful to Enrique Macias for sharing the spectral index profiles of the TW~Hya disk. This paper makes use of the following ALMA data: ADS/JAO.ALMA\#2013.1.00114.S, 2013.1.00387.S, 2015.A.00005.S, 2015.1.00308.S, 2015.1.00686.S, 2015.1.00845.S, 2016.1.00229.S, 2016.1.00311.S, 2016.1.00440.S, 2016.1.00464.S, 2016.1.00629.S, 2016.1.00842.S, 2016.1.01495.S, 2017.1.00520.S, and 2018.1.01218.S. ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada), MOST and ASIAA (Taiwan), and KASI (Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO, and NAOJ. A part of the data analysis was carried out on the common-use data analysis computer system at the Astronomy Data Center of NAOJ. 
This work was supported by JSPS KAKENHI Grant Numbers 17K14244, 18H05438, and 20K04017. CW acknowledges financial support from the University of Leeds, and the Science and Technology Facilities Council and UK Research and Innovation (grant numbers ST/T000287/1 and MR/T040726/1). TJM thanks the STFC for support via grants ST/P000312/1 and ST/T000198/1. M.T. was supported by JSPS KAKENHI grant Nos. 18H05442, 15H02063, and 22000005. \vspace{5mm} \facilities{ALMA} \software{CASA \citep{bib:mcmullin2007}, numpy \citep{bib:numpy2020}, scipy \citep{bib:scipy2020}, astropy \citep{bib:astropy2013}, matplotlib \citep{bib:matplotlib2007}, vis\_sample \citep{bib:loomis2017}}
\section{Introduction} Let $K$ be a number field such that $[K:\mathbb{Q}]=d$ and let $E/K$ be an elliptic curve. A celebrated theorem of Mordell and Weil shows that $E(K)$ is a finitely generated abelian group. Therefore, this group can be decomposed as $E(K)=E(K)_{tors} \oplus \mathbb{Z}^{r}$, $r \ge 0$. It is known that $E(K)_{tors}$ is of the form $C_{m} \oplus C_{n}$ for two positive integers $m,n$ such that $m$ divides $n$, where $C_m$ and $C_n$ denote cyclic groups of order $m$ and $n$, respectively. \par One of the goals in the theory of elliptic curves is the classification of torsion groups of elliptic curves defined over various fields. \par Let $d$ be a positive integer. Define $\Phi(d)$ to be the set of possible isomorphism classes of groups $E(K)_{tors}$, where $K$ runs through all number fields $K$ of degree $d$ and $E$ runs through all elliptic curves over $K$. In \cite{21}, Merel proved that $\Phi(d)$ is finite for all positive integers $d$. The set $\Phi(1)$ was determined by Mazur \cite{mazurtorzija} and is given in Theorem \ref{Theorem 2.1}. \begin{tm}[Mazur, \cite{mazurtorzija}] \label{Theorem 2.1} Let $E/\mathbb{Q}$ be an elliptic curve. Then \[ E(\mathbb{Q})_{tors} \cong \begin{cases} C_m, & m=1,...,10, 12, \\ C_{2} \oplus C_{2m}, & m=1,...,4. \end{cases} \] \end{tm} The set $\Phi(2)$ has been determined by Kenku, Momose and Kamienny \cite{kenkumomose}, \cite{kamienny}. Derickx, Etropolski, Hoeij, Morrow and Zureick-Brown have determined $\Phi(3)$ in \cite{hoeij}. \par Define $\Phi^{\mathrm{CM}}(d)$ to be the set of possible isomorphism classes of groups $E(K)_{tors}$, where $K$ runs through all number fields $K$ of degree $d$ and $E$ runs through all elliptic curves with complex multiplication ($\mathrm{CM}$). The set $\Phi^{\mathrm{CM}}(1)$ has been determined by Olson in \cite{32} and $\Phi^{\mathrm{CM}}(d)$ for $d=2,3$ by Zimmer and his collaborators in \cite{80}, \cite{81} and \cite{82}. 
The sets $\Phi^{\mathrm{CM}}(d)$, for $4\le d \le 13$ have been determined by Clark, Corn, Rice and Stankewicz in \cite{30}. Bourdon, Pollack and Stankewicz have determined torsion groups of $\mathrm{CM}$ elliptic curves over odd degree number fields in \cite{bourdonpollack}. \par Define $\Phi_{\mathbb{Q}}(d) \subseteq \Phi(d)$ to be the set of possible isomorphism classes of groups $E(K)_{tors}$, where $K$ runs through all number fields $K$ of degree $d$ and $E$ runs through all elliptic curves defined over $\mathbb{Q}$. For $d=2,3$, the sets $\Phi_{\mathbb{Q}}(d)$ have been determined by Najman \cite{najman_ratcubic}. \begin{tm} \label{najmanquadratic} Let $E/\mathbb{Q}$ be an elliptic curve and $K/\mathbb{Q}$ a quadratic extension. Then $E(K)_{tors}$ is isomorphic to one of the following groups: \begin{gather*} C_{m}, \quad m = 1, 2, \dots, 9, 10, 12, 15, 16\\ C_{2} \oplus C_{2m}, \quad m = 1, 2, 3, 4, 5, 6\\ C_{3} \oplus C_{3m}, \quad m = 1, 2\\ C_{4} \oplus C_{4}. \end{gather*} $C_{15}$ is the only group which appears in only finitely many cases, and only over the extensions $\mathbb{Q}(\sqrt{5})$ and $\mathbb{Q}(\sqrt{-15})$. \end{tm} \begin{tm} \label{najmancubic} Let $E/\mathbb{Q}$ be an elliptic curve and $K/\mathbb{Q}$ a cubic extension. Then $E(K)_{tors}$ is isomorphic to one of the following groups: \begin{gather*} C_{m}, \quad m = 1, 2, \dots, 10, 12, 13, 14, 18, 21\\ C_{2} \oplus C_{2m}, \quad m = 1, 2, 3, 4, 7. \end{gather*} $C_{21}$ is the only group which appears in only finitely many cases, and only over the extension $\mathbb{Q}(\zeta_{9})^{+}$. \end{tm} The set $\Phi_{\mathbb{Q}}(4)$ has been determined by Chou \cite{chou2} and Gonz\'alez-Jim\'enez and Najman \cite{growth}. The set $\Phi_{\mathbb{Q}}(5)$ has been determined by Gonz\'alez-Jim\'enez in \cite{38}. Gonz\'alez-Jim\'enez and Najman have also proved that $\Phi_{\mathbb{Q}}(p)=\Phi(1)$ for primes $p \ge 7$ in \cite{growth}. 
For an odd prime $\ell$ and a positive integer $d$, Propp \cite{90} has determined when there exists a degree $d$ number field $K$ and an elliptic curve $E/K$ with $j(E) \in \mathbb{Q} \setminus \{ 0, 1728 \}$ such that $E(K)_{tors}$ contains a point of order $\ell$. \smallskip Let $\mu_n$, for a positive integer $n$, be the set of all complex numbers $\omega$ such that $\omega^n = 1$. Note that for a prime number $p$ we have $\Q(\mu_p) = \Q(\zeta_p)$, where $\zeta_p$ is, as usual, a primitive $p$\textsuperscript{th} root of unity. For a prime number $p$, we define the set $\mu_{p^\infty}$ as the set of all complex numbers $\omega$ for which there exists a non-negative integer $k$ such that $\omega^{p^k} = 1$. Note that $\Qz$ is the field obtained from $\Q$ by adjoining all $p^{n}$\textsuperscript{th} primitive roots of unity. In \cite{guzkri}, the authors considered the following problem: assume that $E/\mathbb{Q}$ is an elliptic curve, $p$ a prime number and $K=\Q(\mu_{p^\infty})$. They showed that the torsion subgroup of $E$ grows only over small subfields of $K$. More precisely: \begin{tm} \label{teo:rast_qzetap} Let $E/\Q$ be an elliptic curve. Then for a prime number $p \geq 5$ it holds that \[E(\Qz)_\tors = E(\Qz[p])_\tors.\] Furthermore, \[E(\Qz[3][\infty])_\tors = E(\Qz[3^3])_\tors \qquad \text{and} \qquad E(\Qz[2][\infty])_\tors = E(\Qz[2^4])_\tors.\] \end{tm} \begin{remark} This result is ``the best possible''. For $E = \href{http://www.lmfdb.org/EllipticCurve/Q/27a4}{27a4}$ we have that \[E(\Qz[3^2])_\tors = C_{9} \subsetneq C_{27} = E(\Qz[3^3])_\tors\] and for $E = \href{http://www.lmfdb.org/EllipticCurve/Q/32a4}{32a4}$ it holds that \[E(\Qz[2^3])_\tors = C_{2} \oplus C_{4} \subsetneq C_{2} \oplus C_{8} = E(\Qz[2^4])_\tors.\] \end{remark} It becomes natural to ask how the torsion group of $E/\mathbb{Q}$ can grow when we consider the base change $E/\mathbb{Q}(\zeta_p)$.
This becomes much harder than it seems as $p$ grows, because our methods sometimes rely on pure computation. \\ Our results are the following theorems. \begin{tm} Let $E/\Q$ be an elliptic curve. Then $E(\Q(\zeta_{16}))_\tors$ is one of the following groups (apart from those in Mazur's theorem): \[C_{4} \oplus C_{4} \quad \href{https://www.lmfdb.org/EllipticCurve/Q/15a1}{(15a1)}, \qquad C_{2} \oplus C_{10} \quad \href{https://www.lmfdb.org/EllipticCurve/Q/2112bd2}{(2112bd2)}.\] \end{tm} \begin{tm} \label{zeta27} Let $E/\Q$ be an elliptic curve. Then $E(\Q(\zeta_{27}))_\tors$ can only be one of the following groups (apart from those in Mazur's theorem): \[C_{3} \oplus C_{3}, \qquad C_{3} \oplus C_{6}, \qquad C_{3} \oplus C_{9}, \qquad C_{21}, \qquad C_{27}.\] \end{tm} Additionally, in Lemma \ref{petsedam} we describe the possible group structures of $E(\mathbb{Q}(\zeta_p))_{tors}$ for $p=5,7$ and $11$, where $E/\mathbb{Q}$ is an elliptic curve. \\ We discuss further attempts to classify $E(\mathbb{Q}(\zeta_p))_{tors}$ for an arbitrary prime number $p$. \\ The Magma \cite{magma} code used in this paper can be found \href{https://github.com/brutalni-vux/TorsionCyclotomic}{here}. \section{Notation and auxiliary results} Let $E/F$ be an elliptic curve defined over a number field $F$. There exists an $F$-rational cyclic isogeny $\phi \colon E \to E'$ of degree $n$ if and only if there is a point $P \in E(\overline{F})$ of order $n$ such that $\langle P \rangle$ is a $\Gal(\overline{F}/F)$-invariant group; in this case we say that $E$ has an $F$-rational $n$-isogeny. When $F=\mathbb{Q}$, the possible degrees of rational isogenies of elliptic curves over $\mathbb{Q}$ are given by the following theorem. \begin{tm}[Mazur \cite{mazurizog}, Kenku \cite{kenku1}, \cite{kenku2}, \cite{kenku3}, \cite{kenku4}] \label{Theorem 2.3} Let $E/\mathbb{Q}$ be an elliptic curve with a rational $n$-isogeny.
Then \[ n \in \{ 1,\dots,19, 21, 25, 27, 37, 43, 67, 163 \}. \] There are infinitely many elliptic curves (up to $\overline{\mathbb{Q}}$-isomorphism) with a rational $n$-isogeny over $\mathbb{Q}$ for \[ n \in \{ 1, \dots, 10, 12, 13, 16, 18, 25 \} \] and only finitely many for all the other $n$. If $E$ does not have complex multiplication, then $n \le 18$ with $n \neq 14$ or $n \in \{21, 25, 37 \}.$ \end{tm} \subsection*{Galois representations} Let $E/\mathbb{Q}$ be an elliptic curve and let $n$ be a positive integer. The field $\mathbb{Q}(E[n])$ is the number field obtained by adjoining to $\mathbb{Q}$ all the $x$- and $y$-coordinates of the points of $E[n]$. The absolute Galois group $\Gal(\overline{\mathbb{Q}}/\mathbb{Q})$ acts on $E[n]$ by its action on the coordinates of the points, inducing a mod $n$ Galois representation attached to $E$: \[ \rho_{E,n}: \Gal(\overline{\mathbb{Q}}/\mathbb{Q}) \to \Aut(E[n]) .\] After we fix a basis for the $n$-torsion, we can identify $\Aut(E[n])$ with $\GL_{2}(\mathbb{Z}/n\mathbb{Z})$. This means that we can consider $\rho_{E,n}(\Gal(\overline{\mathbb{Q}}/\mathbb{Q}))$ as a subgroup of $\GL_{2}(\mathbb{Z}/n\mathbb{Z})$, uniquely determined up to conjugacy. We shall denote $\rho_{E,n}(\Gal(\overline{\mathbb{Q}}/\mathbb{Q}))$ by $G_{E}(n)$. Moreover, since $\mathbb{Q}(E[n])$ is a Galois extension of $\mathbb{Q}$ and $\ker \rho_{E,n}= \Gal(\overline{\mathbb{Q}}/\mathbb{Q}(E[n]))$, by the first isomorphism theorem we have $G_{E}(n) \cong \Gal(\mathbb{Q}(E[n])/\mathbb{Q})$. We would like to know the possibilities for $G_{E}(n)$ as a subgroup of $\GL(C_{n})$. For some values of $n$, this can be seen in Tables \ref{tableSutherland} and \ref{tableSutherland2}.
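As a quick sanity check on the size of the ambient group in which $G_{E}(n)$ lives, the order of $\GL_{2}(\mathbb{Z}/n\mathbb{Z})$ can be counted by brute force for small $n$. The snippet below is our own illustration in Python (standing in for the Magma computations used elsewhere in this paper); the helper name \texttt{gl2\_order} is ours.

```python
from itertools import product
from math import gcd


def gl2_order(n):
    """Order of GL_2(Z/nZ), counted by brute force: a 2x2 matrix over
    Z/nZ is invertible exactly when its determinant is a unit mod n."""
    units = {u for u in range(n) if gcd(u, n) == 1}
    return sum(
        1
        for a, b, c, d in product(range(n), repeat=4)
        if (a * d - b * c) % n in units
    )
```

For a prime $p$ this agrees with the closed formula $|\GL_2(\mathbb{F}_p)| = (p^2-1)(p^2-p)$, e.g.\ $48$ for $p=3$; any image $G_{E}(n)$ is a subgroup, so its order divides this count.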
For most values of $n$ we do not have a list of possibilities for $G_{E}(n)$, but we have a result that helps us determine whether, for a given matrix subgroup $M$ of $\GL(C_{n})$, there exists an elliptic curve $E/\mathbb{Q}$ such that $\rho_{E,n}(\Gal(\overline{\mathbb{Q}}/\mathbb{Q}))=M$ (up to conjugation). \\ \noindent The following lemma is well known and will be useful in cases where we analyze the torsion group of an elliptic curve $E/\mathbb{Q}$ over the maximal real subfield of $\mathbb{Q}(\zeta_p)$. \begin{lemma} \label{oddtors} Let $E/K$ be an elliptic curve and $L/K$ a quadratic extension of number fields with $L = K(\sqrt{d})$. Let $E(K)_{(2')}$ be the group of $K$-rational points of $E$ of odd order. Then we have: \[ E(K(\sqrt{d}))_{(2')} \cong E(K)_{(2')} \oplus E^{d}(K)_{(2')}. \] \end{lemma} Since all cyclotomic extensions are Galois over $\mathbb{Q}$, the following result imposes restrictions on the possible torsion subgroups of $E/\mathbb{Q}$ over cyclotomic fields. \begin{lemma} \label{nika} Let $E/\mathbb{Q}$ be an elliptic curve, $m, n \in \mathbb{N}$ and $K$ a finite Galois extension of $\mathbb{Q}$. Let $E(K)[mn] \cong C_{m}\oplus C_{mn}$ and let $P \in E(K)$ be a point of order $mn$. Then we have: $$[\mathbb{Q}(mP) : \mathbb{Q}] \mid M(\phi(n), [K : \mathbb{Q}]),$$ where $M(\cdot, \cdot)$ denotes the greatest common divisor and $\phi$ is the Euler totient function. \end{lemma} \begin{proof} Let $P$ be a point of order $mn$ with coordinates in $K$. Then we can take $Q \in E[mn]$ such that $\{P, Q\}$ is a basis for $E[mn]$. Consider the Galois representation modulo $n$ with respect to $E$: $$\rho : \Gal(\overline{\mathbb{Q}}/\mathbb{Q}) \to \GL_{2}(\mathbb{Z}/n\mathbb{Z}).$$ Let $\sigma \in \Gal(K/\mathbb{Q})$. Then we have $P^{\sigma} = \alpha P + \beta Q$ for some $\alpha, \beta \in C_{mn}$, because the action of $\sigma$ on $P$ preserves the order of a point. \noindent Now we have $P^{\sigma} - \alpha P = \beta Q$, so $\beta Q \in E(K)$.
From this it follows that $m\beta \equiv 0 \pmod{mn}$. Multiplying by $m$ gives us $(mP)^{\sigma} = \alpha (mP)$, so $(mP)^{\sigma} \in \langle mP \rangle$ for all $\sigma \in \Gal(K/\mathbb{Q})$. Since the order of the point is preserved, $\alpha$ has to be in $(\mathbb{Z}/n\mathbb{Z})^{\times}$. \noindent Since by considering the restriction map we get $\Gal(K/\mathbb{Q}) \cong \Gal(\overline{\mathbb{Q}}/\mathbb{Q})/\Gal(\overline{\mathbb{Q}}/K)$, we have $(mP)^{\sigma} \in \langle mP \rangle$ for all $\sigma \in \Gal(\overline{\mathbb{Q}}/\mathbb{Q})$. Therefore, for all $\sigma \in \Gal(\overline{\mathbb{Q}}/\mathbb{Q})$: $$\rho(\sigma) = \begin{pmatrix} \varphi(\sigma) & \tau(\sigma) \\ 0 & \psi(\sigma) \end{pmatrix},$$ where $\varphi, \psi, \tau : \Gal(\overline{\mathbb{Q}}/\mathbb{Q}) \rightarrow C_{n}$, and $\varphi, \psi$ are homomorphisms with image in $(\mathbb{Z}/n\mathbb{Z})^{\times}$. We know that $(mP)^{\sigma} = g(mP) \Leftrightarrow \varphi(\sigma) = g$, for all $\sigma \in \Gal(\overline{\mathbb{Q}}/\mathbb{Q})$. Therefore, we have: $$|Im(\varphi)| = |\{(mP)^{\sigma} : \sigma \in \Gal(K/\mathbb{Q})\}| = |Orb(mP)|.$$ \noindent It is clear that Stab$(mP) = \Gal(K/\mathbb{Q}(mP))$, so by the orbit-stabilizer theorem we have: $$|Im(\varphi)| = \frac{|\Gal(K/\mathbb{Q})|}{|\Gal(K/\mathbb{Q}(mP))|} = [\mathbb{Q}(mP) : \mathbb{Q}].$$ On the other hand, we have $Im(\varphi) \leq (\mathbb{Z}/n\mathbb{Z})^{\times}$, so we have: $$[\mathbb{Q}(mP) : \mathbb{Q}] \mid \phi(n).$$ The divisibility $[\mathbb{Q}(mP) : \mathbb{Q}] \mid [K : \mathbb{Q}]$ is obvious, and the proof is complete. \end{proof} One of the crucial results that we will need is the main result from \cite{chou}: \begin{tm} \label{eva} Let $E/\mathbb{Q}$ be a rational elliptic curve.
Then $E(\mathbb{Q}^{ab})_{tors}$ is isomorphic to one of the following groups: \begin{gather*} C_{m}, \quad m = 1, 3, 5, 7, 9, 11, 13, 15, 17, 19, 21, 25, 27, 37, 43, 67, 163\\ C_{2} \oplus C_{2m}, \quad m = 1, 2, \dots, 9\\ C_{3} \oplus C_{3m}, \quad m = 1, 3\\ C_{4} \oplus C_{4m}, \quad m = 1, 2, 3, 4\\ C_{5} \oplus C_{5},\\ C_{6} \oplus C_{6},\\ C_{8} \oplus C_{8}. \end{gather*} \end{tm} \noindent This means that all of our candidate torsion subgroups are subgroups of the groups in the above list. Our approach will mainly consist of eliminating a certain set of possibilities from the list above in order to classify torsion groups of elliptic curves over a specific cyclotomic field. \section{Torsion growth over \texorpdfstring{$\mathbb{Q}(\zeta_{16})$}{Q16}} \noindent Assume that $E/\mathbb{Q}$ is an elliptic curve and that $C_{m} \oplus C_{mn} \subseteq E(\mathbb{Q}(\zeta_{16}))_{tors}$. By the properties of the Weil pairing, we have $\mathbb{Q}(\zeta_{m}) \subseteq \mathbb{Q}(\zeta_{16})$. It follows that $m \in \{1,2,4,8 \}.$ We first eliminate a number of the cyclic groups listed in Theorem \ref{eva}. \begin{lemma} Let $E/\Q$ be an elliptic curve. Then $E(\Q(\zeta_{16}))_\tors$ is not isomorphic to $C_{n}$ if \[n \in \{11, 14, 17, 18, 19, 21, 25, 27, 37, 43, 67, 163 \}.\] \end{lemma} \begin{proof} Lemma \ref{nika} gives us that if $P_{n}$ is a point of order $n \not\in \{17, 21, 25, 37 \}$, we have $[\Q(P_{n}) : \Q] \mid 2$, which is impossible by Theorem \ref{najmanquadratic}. By the same lemma we get that if $P_{n}$ is a point of order $n \in \{21, 25, 37 \}$, then we have $[\Q(P_{n}) : \Q] \mid 4$, which is impossible by \cite[Theorem 1.4]{chou2}. \\ It remains to consider the case $n=17$. By \cite[Theorem 5.8]{growth} we conclude that the point $P_{17}$ of order $17$ cannot be defined over any strictly smaller subfield of $\Q(\zeta_{16})$. That means that all $\sigma \in \Gal(\Q(\zeta_{16}) / \Q)$ act differently on $P_{17}$.
Since $E(\Q(\zeta_{16}))_\tors \cong C_{17}$ and $\Q(\zeta_{16})/\Q$ is Galois, each $\sigma$ acts on $P_{17}$ as multiplication by some $k \in (\mathbb{Z}/17\mathbb{Z})^{\times}$ (depending on $\sigma$). Since $\Gal(\Q(\zeta_{16})/\Q)$ has four elements $\sigma$ such that $\sigma^2 = id$, we have that $P_{17}^{\sigma^2} = k^2P_{17} = P_{17}$ for four different $\sigma$. That means that we have $k^2 \equiv 1 \pmod{17}$ for four different $k$, but this congruence has only two solutions, a contradiction. \end{proof} Having eliminated these cyclic groups, we discuss the cases when $E$ obtains full $2$-torsion over $\mathbb{Q}(\zeta_{16})$. This is done in the following lemmas: \begin{lemma} Let $E/\Q$ be an elliptic curve. Then $E(\Q(\zeta_{16}))_\tors$ is not isomorphic to $C_{2} \oplus C_{14}$ or $C_{2} \oplus C_{18}$. \end{lemma} \begin{proof} \noindent We will prove the result for $C_{2} \oplus C_{14}$; the proof for the case $C_{2} \oplus C_{18}$ is identical. \\ \noindent Let $P_{14} \in E(\Q(\zeta_{16}))$ be a point of order $14$. From Lemma \ref{nika} we get that $[\mathbb{Q}(2P_{14}) : \mathbb{Q}] \mid 2$. It is also well known that $[\mathbb{Q}(E[2]) : \mathbb{Q}] \in \{1,2,3,6\}$. Since $E[2]$ is defined over $\Q(\zeta_{16})$, we have $[\mathbb{Q}(E[2]) : \mathbb{Q}] \in \{1,2\}$. \\ \noindent Let $Q_{2}$ be a point of order $2$ different from $7P_{14}$. We now have $[\mathbb{Q}(2P_{14}, 7P_{14}, Q_{2}) : \mathbb{Q}] \mid 4$. Since $2P_{14}, 7P_{14}$ and $Q_{2}$ generate our torsion subgroup $C_{2} \oplus C_{14}$, we now know that this torsion subgroup appears over some strictly smaller subfield of $\Q(\zeta_{16})$. \\ Now we get a contradiction by using Theorem \ref{najmanquadratic} and \cite[Theorem 1.4]{chou2}. \end{proof} \begin{lemma} Let $E/\Q$ be an elliptic curve. Then $E(\Q(\zeta_{16}))_\tors$ is not isomorphic to $C_{15}$, $C_{2} \oplus C_{12}$, $C_{4} \oplus C_{12}$, $C_{4} \oplus C_{8}$, $C_{4} \oplus C_{16}$ or $C_{8} \oplus C_{8}$. \end{lemma} \begin{proof} \noindent Both $X_1(15)$ and $X_1(2,12)$ are elliptic curves. A computation in Magma shows that $X_1(15)(\Q(\zeta_{16}))$ and $X_1(15)(\Q)$ have the same Mordell-Weil group structure.
Therefore, since $C_{15} \not\in \Phi(1)$, it also cannot appear over $\Q(\zeta_{16})$. \\ The curves $X_1(2, 12)(\Q(\zeta_{16}))$ and $X_1(2, 12)(\Q(i))$ have the same Mordell-Weil group structure. It was proven in \cite[Lemma 7]{najCyclo} that $C_{2} \oplus C_{12}$ does not appear as a torsion subgroup over $\Q(i)$. Therefore, it also cannot appear over $\Q(\zeta_{16})$. This also covers the case $C_{4} \oplus C_{12}$. \\ \noindent We consider the modular curve $X_1(4, 8)$, which is actually an elliptic curve, over the fields $\Q(\zeta_{16})$ and $\Q(\zeta_{8})$. A computation in Magma shows that $X_1(4, 8)(\Q(\zeta_{16}))$ has rank $0$ and the same torsion as $X_1(4, 8)(\Q(\zeta_{8}))$, which contains only cusps, see \cite[Case 6.11]{najman_bruin}. This also covers the cases $C_{8} \oplus C_{8}$ and $C_{4} \oplus C_{16}$. \end{proof} The following lemma is a bit more complicated than the previous ones. The idea is to consider the corresponding modular curve $X_{1}(16)$ and its Jacobian $J_{1}(16)$ over some cyclotomic fields in order to determine that the Jacobian has rank $0$. After that, we determine the torsion of $J_{1}(16)(\mathbb{Q}(\zeta_{16}))$ and consequently the number of points on $X_{1}(16)(\mathbb{Q}(\zeta_{16}))$, all of which turn out to be cusps. \begin{lemma} Let $E/\Q$ be an elliptic curve. Then $E(\Q(\zeta_{16}))_\tors$ is not isomorphic to $C_{16}$, $C_{2} \oplus C_{16}$ or $C_{13}$. \end{lemma} \begin{proof} \noindent We consider the modular curve $X_1(16)(\Q(\zeta_{16}))$ and its Jacobian $J_1(16)(\Q(\zeta_{16}))$. We will demonstrate the use of standard methods for determining points on $X_1(16)(\Q(\zeta_{16}))$. \\ Since $\Q(\zeta_{16}) = \Q(\zeta_{8})(\sqrt{\zeta_{8}})$, we have \[r(J_1(16)(\Q(\zeta_{8})(\sqrt{\zeta_{8}}))) = r(J_1(16)(\Q(\zeta_{8}))) + r(J_1^{\zeta_{8}}(16)(\Q(\zeta_{8}))),\] which makes the rank computation shorter; a computation in Magma then shows that $r(J_1(16)(\Q(\zeta_{16}))) = 0$. \\ Now we determine $J_1(16)(\Q(\zeta_{16}))_{tors}$.
The rational prime $p = 17$ splits completely in $\Q(\zeta_{16})$, so by reducing modulo some $\mathfrak p$ that lies above $p$ we get an injection \[red_{\mathfrak p}:J_1(16)(\Q(\zeta_{16}))_{tors}\to J_1(16)(\F_{17}).\] This map is injective due to the result of Katz \cite{katz}. \\ A computation in Magma shows that $|J_1(16)(\F_{17})| = 400.$ It follows that $|J_1(16)(\Q(\zeta_{16}))|$ divides $400$. By using the generators of the $2$-torsion subgroup of $J_1(16)(\Q(\zeta_{16}))$ and some elements of $J_1(16)(\Q(\zeta_{16}))$ that we get from some known points on $X_1(16)(\Q(\zeta_{16}))$, we are able to generate a group with $400$ elements. Therefore, we know exactly what $J_1(16)(\Q(\zeta_{16}))$ looks like. \\ Now we are able to determine all points on $X_1(16)(\Q(\zeta_{16}))$ by considering the Mumford representations of the elements of $J_1(16)(\Q(\zeta_{16}))$. We easily get that $|X_1(16)(\Q(\zeta_{16}))| = 14$, with all points being cusps. Therefore, we can conclude that there are no elliptic curves $E/\Q(\zeta_{16})$ (and consequently $E/\Q$) with a point of order $16$ over $\Q(\zeta_{16})$. \\It remains to show that $E(\Q(\zeta_{16}))_\tors$ cannot be $C_{13}$. We consider the modular curve $X_1(13)(\Q(\zeta_{16}))$ and its Jacobian $J_1(13)(\Q(\zeta_{16}))$. \noindent As in the previous case, a computation in Magma shows that \[r(J_1(13)(\Q(\zeta_{16}))) = r(J_1(13)(\Q(\zeta_{8}))) + r(J_1^{\zeta_{8}}(13)(\Q(\zeta_{8}))) = 0.\] The next step is to determine $J_1(13)(\Q(\zeta_{16}))_{tors}$. We determine the two-torsion subgroup, which turns out to be trivial.
Using the result of Katz \cite{katz}, we get an injection: \[red_{\mathfrak p}:J_1(13)(\Q(\zeta_{16}))_{tors}\to J_1(13)(\F_{17}).\] The rational prime $q = 41$ has inertia degree $2$ in $\Q(\zeta_{16})$, so we have another injection: \[red_{\mathfrak q}:J_1(13)(\Q(\zeta_{16}))_{tors}\to J_1(13)(\F_{41^2}).\] We notice that $\gcd(\#J_1(13)(\F_{17}), \#J_1(13)(\F_{41^2})) = 76$, so $\#J_1(13)(\Q(\zeta_{16})) \mid 76$. \\ Since the two-torsion subgroup is trivial, we get that $\#J_1(13)(\Q(\zeta_{16})) \mid 19$. We can find a point of order $19$ on our Jacobian. By checking the Mumford representations of those divisors, we find that all of the points on the Jacobian come from cusps on $X_1(13)(\Q(\zeta_{16}))$ (and in fact $X_1(13)(\Q)$). \\ \noindent Therefore, we can conclude that there are no elliptic curves $E/\Q(\zeta_{16})$ (and consequently $E/\Q$) such that $E(\Q(\zeta_{16}))_\tors \cong C_{13}$. \end{proof} \section{Torsion growth over \texorpdfstring{$\mathbb{Q}(\zeta_{27})$}{Q27}} In this section we prove Theorem \ref{zeta27} using a series of lemmas. First we eliminate some possibilities for a cyclic group to appear as a subgroup of $E(\mathbb{Q}(\zeta_{27}))$. \noindent Assume that $E/\mathbb{Q}$ is an elliptic curve and that $C_{m} \oplus C_{mn} \subseteq E(\mathbb{Q}(\zeta_{27}))_{tors}$. By the properties of the Weil pairing and taking Theorem \ref{eva} into account, we have $\mathbb{Q}(\zeta_{m}) \subseteq \mathbb{Q}(\zeta_{27})$. It follows that $m \in \{1,2,3,6 \}.$ \begin{lemma} Let $E/\Q$ be an elliptic curve. Then $E(\Q(\zeta_{27}))_{tors}$ is not isomorphic to $C_{n}$ if $n \in \{11, 13, 14, 15, 16, 17, 19, 25, 37, 43, 67, 163\}$. \end{lemma} \begin{proof} \fbox{$C_{11}, C_{25}$}: If $P_{n}$ is a point of order $n \in \{11,25\}$, then by Lemma \ref{nika} we have $[\Q(P_{n}) : \Q] \mid 2$, which is impossible by Theorem \ref{najmanquadratic}. \\ \fbox{$C_{13}$}: Let $P_{13} \in E(\Q(\zeta_{27}))$ be a point of order $13$.
By Lemma \ref{nika} we have $[\Q(P_{13}) : \Q] \mid 6$. Therefore, this torsion subgroup is defined over $\Q(\zeta_{9})$. \noindent Theorem \ref{najmanquadratic} tells us that this torsion subgroup cannot be defined over a quadratic field. Therefore, it is defined over the cubic field $\Q(\zeta_{9})^{+}$ or over the sextic field, that is, all of $\Q(\zeta_{9})$. \noindent In the sextic case we can use Lemma \ref{oddtors} to get: \[ C_{13} \cong E(\Q(\zeta_{9}))_{(2')} \cong E(\Q(\zeta_{9})^{+})_{(2')} \oplus E^{-3}(\Q(\zeta_{9})^{+})_{(2')}. \] This means that either $E$ or $E^{-3}$ has a torsion subgroup isomorphic to $C_{13}$ over $\Q(\zeta_{9})^{+}$. \noindent In either case, we will be finished if we prove that the torsion subgroup $C_{13}$ cannot appear over $\Q(\zeta_{9})^{+}$. To do this, we consider $X_1(13)(\Q(\zeta_{9})^{+})$. \\ As before, we use Magma to determine that $J_1(13)(\Q(\zeta_{9})^{+}) \cong J_1(13)(\Q)$ and that all points on the Jacobian come from cusps, which completes the proof. \\ \fbox{$C_{14}$}: Let $P_{14} \in E(\mathbb{Q}(\zeta_{27}))$ be a point of order $14$. By Lemma \ref{nika} it follows that $[\mathbb{Q}(2P_{14}):\mathbb{Q}]$ divides $6$, so $\mathbb{Q}(2P_{14})$ is contained in $\mathbb{Q}(\zeta_{9})$. The point $7P_{14}$ of order $2$ satisfies $[\mathbb{Q}(7P_{14}):\mathbb{Q}] \in \{1,2,3\}$, which means that it is also contained in $\mathbb{Q}(\zeta_{9})$. It follows that $P_{14} \in E(\mathbb{Q}(\zeta_{9})).$ Consider the modular curve $X_{1}(14)$. It is an elliptic curve with LMFDB label \href{https://www.lmfdb.org/EllipticCurve/Q/14/a/5}{14.a5}. On the LMFDB page of the mentioned curve we can see that its torsion subgroup does not grow in any number field contained in $\mathbb{Q}(\zeta_{9})$. It remains to show that $r(X_{1}(14)(\mathbb{Q}))=r(X_{1}(14)(\mathbb{Q}(\zeta_{9})))=0$, which turns out to be true by a computation in Magma \cite{magma}.
Therefore $X_{1}(14)(\mathbb{Q})=X_{1}(14)(\mathbb{Q}(\zeta_{9}))$, and there does not exist an elliptic curve over $\mathbb{Q}$ with a point of order $14$ over $\mathbb{Q}(\zeta_{9})$ and consequently over $\mathbb{Q}(\zeta_{27})$. \\ \fbox{$C_{15}$}: Let $P_{15} \in E(\mathbb{Q}(\zeta_{27}))$ be a point of order $15$. Then $3P_{15}$ is a point of order $5$ and $[\mathbb{Q}(3P_{15}):\mathbb{Q}]$ is a divisor of $[\mathbb{Q}(\zeta_{27}):\mathbb{Q}]=18$. By Table \ref{tableSutherland} we see that $[\mathbb{Q}(3P_{15}):\mathbb{Q}] \in \{1,2\}$. In the same way as in the $C_{2} \oplus C_{12}$ case of Lemma \ref{lema43}, we see that the point $5P_{15}$ of order $3$ is also defined over an at most quadratic extension contained in $\mathbb{Q}(\zeta_{27})$. Since there is only one quadratic extension contained in $\mathbb{Q}(\zeta_{27})$, namely $\mathbb{Q}(\zeta_{3})$, we have $P_{15} \in E(\mathbb{Q}(\zeta_{3}))$, which contradicts Theorem \ref{najmanquadratic}. \\ \fbox{$C_{16}$}: Let $P_{16} \in E(\mathbb{Q}(\zeta_{27}))$ be a point of order $16$. Then the point $8P_{16}$ has order $2$ and we have $[\mathbb{Q}(8P_{16}):\mathbb{Q}] \in \{ 1,2,3\}$. By \cite[Proposition 4.8]{growth} we have $[\mathbb{Q}(P_{16}):\mathbb{Q}]=2^{a}3^{b}$, where $a \ge 0$ is an integer and $b \in \{0,1\}$. Since the field $\mathbb{Q}(P_{16})$ is contained in $\mathbb{Q}(\zeta_{27})$, it follows that $2^{a}3^{b}$ divides $[\mathbb{Q}(\zeta_{27}):\mathbb{Q}]=18$. We conclude that $[\mathbb{Q}(P_{16}):\mathbb{Q}] \in \{ 1,2,3,6 \}.$ Assume that $[\mathbb{Q}(8P_{16}):\mathbb{Q}]=3$. This means that $\mathbb{Q}(8P_{16})$ is a cyclic cubic field (being a subfield of $\mathbb{Q}(\zeta_{27})$), so the entire $2$-torsion is contained in this field and $P_{16}$ is defined over a number field of degree $3$ or $6$, but such a field is contained in $\mathbb{Q}(\zeta_9)$. Therefore we have $C_{2} \oplus C_{16} \subseteq E(\mathbb{Q}(\zeta_9))$, but this is impossible by \cite[Theorem 1.1]{tomi1}.
It remains to consider the case when $[\mathbb{Q}(8P_{16}):\mathbb{Q}] \in \{ 1,2 \}$. It follows that $[\mathbb{Q}(P_{16}):\mathbb{Q}] \in \{ 1,2 \}$, again by \cite[Proposition 4.8]{growth}. Since $\mathbb{Q}(P_{16})$ is an at most quadratic extension of $\mathbb{Q}$ contained in $\mathbb{Q}(\zeta_{27})$, it follows that $\mathbb{Q}(P_{16}) \subseteq \mathbb{Q}(\zeta_3)$. By \cite[Theorem 1]{najman_torshypN} this turns out to be impossible. \\ \fbox{$C_{17}, C_{37}$}: Assume that $n \in \{ 17, 37 \}$ and that $P_{n} \in E(\mathbb{Q}(\zeta_{27}))$ is a point of order $n$. By \cite[Theorem 5.8]{growth} it follows that $[\mathbb{Q}(P_{n}):\mathbb{Q}]$ is divisible by $4$, but since $\mathbb{Q}(P_{n}) \subseteq \mathbb{Q}(\zeta_{27})$, this is impossible. \\ \fbox{$C_{19}$}: Let us consider the case when $n=19$. If $P_{19} \in E(\mathbb{Q}(\zeta_{27}))$ is a point of order $19$, then $E$ has a rational $19$-isogeny. By \cite{loz}, we have $j(E)=-2^{15} \cdot 3^{3}$. The $19$th division polynomial $f_{E,19}$ must have a root over $\mathbb{Q}(\zeta_{27})$. Using Magma, we check that this is not the case and therefore we arrive at a contradiction. \\ \fbox{$C_{43}, C_{67}, C_{163}$}: Assume that $n \in \{43,67,163 \}$ and $P_{n} \in E(\mathbb{Q}(\zeta_{27}))[n]$. By \cite[Theorem 2.1]{loz} it follows that $[\mathbb{Q}(P_{n}):\mathbb{Q}] \ge \frac{n-1}{2} > [\mathbb{Q}(\zeta_{27}):\mathbb{Q}]=18,$ a contradiction. \end{proof} \begin{lemma} \label{melisa} Let $E/\Q$ be an elliptic curve. Then $E(\Q(\zeta_{27}))_{tors}$ is not isomorphic to $C_{18}$ or $C_{2} \oplus C_{18}$. \end{lemma} \begin{proof} If $E(\Q(\zeta_{27}))_{tors} \cong C_{18}$, then Lemma \ref{nika} directly gives us that this torsion subgroup is defined over a number field of degree dividing $6$, which must be contained in $\Q(\zeta_{9})$.
\\ \noindent If $E(\Q(\zeta_{27}))_{tors} \cong C_{2} \oplus C_{18}$, then Lemma \ref{nika} gives us that if $P_{18} \in E(\Q(\zeta_{27}))_{tors}$ is a point of order $18$, then $2P_{18}$ is defined over $\Q(\zeta_{9})$. We know that $[\Q(E[2]) : \Q] \in \{1,2,3,6\}$, but the unique subextensions of $\Q(\zeta_{27})$ of those degrees are all contained in $\Q(\zeta_{9})$, so again our torsion subgroup is defined over $\Q(\zeta_{9})$. \noindent Now from Lemma \ref{oddtors} we get that $$C_{9} \cong E(\Q(\zeta_{9}))_{(2')} \cong E(\Q(\zeta_{9})^{+})_{(2')} \oplus E^{-3}(\Q(\zeta_{9})^{+})_{(2')}.$$ Therefore, one of $E(\Q(\zeta_{9})^{+})$ and $E^{-3}(\Q(\zeta_{9})^{+})$ has a point of order $9$. Let $P_2 \in E(\Q(\zeta_{9}))_{tors}$ be a point of order $2$. If $[\Q(P_2) : \Q] \in \{1,3\}$, then $P_2$ lies on both $E(\Q(\zeta_{9})^{+})$ and $E^{-3}(\Q(\zeta_{9})^{+})$. If $[\Q(P_2) : \Q] = 2$, then there is another point $Q_2$ of order $2$ on $E$ defined over $\Q$. In any case, both $E(\Q(\zeta_{9})^{+})$ and $E^{-3}(\Q(\zeta_{9})^{+})$ have a point of order $2$. Finally, one of them has a point of order $18$. However, it was proved in \cite[Lemma 3.4.7]{krijanphd} that all the points on $X_{1}(18)(\mathbb{Q}(\zeta_{9})^{+})$ are cusps, which completes the proof. \end{proof} \begin{lemma} \label{lema43} Let $E/\Q$ be an elliptic curve. Assume that $C_{2} \oplus C_{2n} \cong E(\Q(\zeta_{27}))_{tors}$. Then $n \in \{ 1,2,3,4 \}$. Additionally, $C_6 \oplus C_6 \not\subseteq E(\Q(\zeta_{27}))$. \end{lemma} \begin{proof} From Theorem \ref{eva} it follows that $n \le 9$. We have shown in Lemma \ref{melisa} that $E(\Q(\zeta_{27}))$ cannot contain a point of order $18$. \fbox{$C_{2} \oplus C_{10}$}: Let $P_{5} \in E(\Q(\zeta_{27}))$ be a point of order $5$. It follows that $E$ has a rational $5$-isogeny and $[\Q(P_5):\Q] \in \{ 1, 2\}$. If $G_{E}(2) \subseteq 2B$, then by Table \ref{tableSutherland} we see that $[\Q(E[2]):\Q] \in \{ 1, 2\}$.
Thus we have found two at most quadratic fields contained in $\Q(\zeta_{27})$. Since $\Q(\zeta_{27})$ has a unique quadratic subextension $F=\Q(\sqrt{-3})$, it follows that $F=\Q(E[2])=\Q(P_5)$. We conclude that $C_{2} \oplus C_{10} \subseteq E(F)_{tors}$, which is impossible by \cite[Theorem 1]{najman_torshypN}. \\ Assume that $G_{E}(2)=2Cn$. By \cite[Theorem 1.1]{zywina} it follows that $j(E)=t^2+1728$, for some $t \in \Q$. Since $E$ has a rational $5$-isogeny, by \cite[Theorem 1.3]{zywina} we have $j(E)=\frac{25(s^2+10s+5)^3}{s^5}$, for some $s \in \Q \setminus \{0\}$. It remains to find rational points on the induced curve. By \cite[Page 61]{tomi3} we see that such rational points do not exist. \par \fbox{$C_{2} \oplus C_{12}$}: Let $P_{3} \in E(\Q(\zeta_{27}))$ be a point of order $3$. The extension $\Q(P_3)$ is cyclic over $\Q$ since it is a subfield of $\Q(\zeta_{27})$. By Table \ref{tableSutherland} we see that $G_{E}(3)$ must be contained in the Borel subgroup of $\GL_{2}(\mathbb{Z}/3\mathbb{Z})$. Applying \cite[Theorem 9.3]{loz}, it follows that $[\Q(P_3):\Q] \in \{ 1,2 \}$, so we have $\Q(P_3) \subseteq \Q(\zeta_3)$. Assume that $G_{E}(2) \subseteq 2B$. By \cite[Proposition 4.6]{growth} it follows that $C_{2} \oplus C_{4} \subseteq E(\Q(\zeta_{3}))$. We conclude that $C_{2} \oplus C_{12} \subseteq E(\Q(\zeta_3))$, which is impossible by \cite[Theorem 1]{najman_torshypN}. \\ Consider the case when $G_{E}(2)=2Cn$. By \cite[Proposition 4.8]{growth} it follows that the point $P_{4}$ of order $4$ is defined over a cubic or sextic subfield contained in $\mathbb{Q}(\zeta_{27})$. A computation in Magma \cite{magma} shows that if a point of order $4$ is defined over a cubic or sextic number field, then $G_{E}(2)=\GL_{2}(\mathbb{Z}/2\mathbb{Z})$, a contradiction.
\par \fbox{$C_{2} \oplus C_{16}$}: By \cite[Corollary 3.5]{pprimarnatorzija} it follows that this is impossible. \par \fbox{$C_{6} \oplus C_{6}$}: Since $\Q(E[6])$ is contained in $\Q(\zeta_{27})$, we have that $|G_{E}(6)|$ divides $[\Q(\zeta_{27}):\Q]=18$. Additionally, the group $G_{E}(6)$ is cyclic. If $|G_{E}(6)| < 6$, then it follows that $E$ obtains full $6$-torsion over a number field of degree $1$, $2$ or $3$, but this is impossible by Theorem \ref{najmanquadratic} and Theorem \ref{najmancubic}. Assume that $|G_{E}(6)| \in \{ 6, 9 \}$. A search in Magma \cite{magma} shows that there exists only one cyclic group $G$ with this property, namely: \[G:= \Big\langle \begin{bmatrix} 2 & 3 \\ 3 & 1 \end{bmatrix} \Big\rangle. \] Reducing $G$ modulo $2$ and modulo $3$ we see that $G_{E}(2)=2Cn$ and $G_{E}(3)=3Cs.1.1$. By \cite[Theorem 1.1]{zywina} we have that $j(E)=t^2+1728$, for some $t \in \Q$. Similarly, from \cite[Theorem 1.2]{zywina} we have that $j(E)=f(s)^3$, for some rational function $f(s)$ and $s \in \Q$. A computation in Magma \cite{magma} shows that the only affine rational point on the elliptic curve \[t^2+1728=x^3 \] is $(t,x)=(0,12)$. A direct computation shows that the equation $12=f(s)$ has no solution in $\Q$. Therefore $E$ cannot have $C_6 \oplus C_6$ torsion over $\Q(\zeta_{27})$. \end{proof} \section{Torsion growth over \texorpdfstring{$\mathbb{Q}(\zeta_{5}),\mathbb{Q}(\zeta_{7}) \; \text{and} \; \mathbb{Q}(\zeta_{11})$}{Qp}} We note that if $p$ is a prime number and $E/\mathbb{Q}$ an elliptic curve such that $E(\Q(\zeta_p))_{tors}$ contains a subgroup isomorphic to $C_{n} \oplus C_{mn}$, then by the properties of the Weil pairing we have $\Q(\zeta_n) \subseteq \Q(\zeta_p)$, which forces $n \le 2$. \begin{lemma}\label{petsedam} Let $E/\mathbb{Q}$ be an elliptic curve and let $p \in \{ 5,7,11\}$ be a prime number.
Apart from the groups in Mazur's theorem, the group $E(\mathbb{Q}(\zeta_p))_{tors}$ can only be isomorphic to one of the following groups: \begin{itemize} \item If $p=5$, \[C_{5} \oplus C_{5} \; (\href{https://www.lmfdb.org/EllipticCurve/Q/550k2}{550k2}), \quad C_{15} \; (\href{https://www.lmfdb.org/EllipticCurve/Q/50/a/2}{50a2}) \quad \text{and} \quad C_{16} \; (\href{https://www.lmfdb.org/EllipticCurve/Q/15/a/7}{15a7}).\] \item If $p=7$, \[C_{13} \; \href{https://www.lmfdb.org/EllipticCurve/Q/147/c/2}{(147c2)}, \quad C_{14} \; \href{https://www.lmfdb.org/EllipticCurve/Q/49a1}{(49a1)},\] \[ C_{18} \; \href{https://www.lmfdb.org/EllipticCurve/Q/14a4}{(14a4)}, \quad C_{2} \oplus C_{14} \; \href{https://www.lmfdb.org/EllipticCurve/Q/49a4}{(49a4)}, \quad C_{2} \oplus C_{18} \; \href{https://www.lmfdb.org/EllipticCurve/Q/14a5}{(14a5)}.\] \item If $p=11$, \[C_{11} \; \href{https://www.lmfdb.org/EllipticCurve/Q/121b2}{(121b2)}, \quad C_{25} \; \href{https://www.lmfdb.org/EllipticCurve/Q/11a3}{(11a3)}, \quad C_{2} \oplus C_{10} \; \href{https://www.lmfdb.org/EllipticCurve/Q/10230bg2}{(10230bg2)}.\] \end{itemize} \end{lemma} \begin{proof} Assume that $p=5$. By \cite[Theorem 5]{najman_bruin} we conclude that the only possibilities are the ones listed in this lemma and $C_{17}$. It is easy to rule out $C_{17}$ by using \cite[Theorem 5.8]{growth}. \par Consider the case when $p=11$. If $E/\mathbb{Q}$ is an elliptic curve and if $C_{n} \oplus C_{mn} \subseteq E(\mathbb{Q}(\zeta_{11}))_{tors}$, then by the properties of the Weil pairing we have $\mathbb{Q}(\zeta_{n}) \subseteq \mathbb{Q}(\zeta_{11})$, which forces $n \in \{1,2, 11\}$. Applying Theorem \ref{eva} we eliminate the possibility $n=11$. By \cite[Lemma 6.0.8]{tomi3}, it remains to show that the groups $C_{15}, C_{16}$ and $C_{2}\oplus C_{12}$ do not occur.
We note that the set $\Phi_{\mathbb{Q}}(2)$ is described by Theorem \ref{najmanquadratic} and the description of the set $\Phi_{\mathbb{Q}}(5)$ can be found in \cite[Theorem 1]{38}. \begin{itemize} \item $C_{15}$: Assume that $P_{15} \in E(\mathbb{Q}(\zeta_{11}))$ is a point of order $15$. Obviously we have \[ E(\mathbb{Q}(\zeta_{11}))[15] \cong C_{15}.\] By Lemma \ref{nika} it follows that $[\mathbb{Q}(P_{15}):\mathbb{Q}] \in \{1,2\}$, but this contradicts Theorem \ref{najmanquadratic}. \item $C_{16}$: If $P_{16} \in E(\mathbb{Q}(\zeta_{11}))$ is a point of order $16$, then $8P_{16}$ has order $2$ and is defined over an at most quadratic extension contained in $\mathbb{Q}(\zeta_{11})$, which is $\mathbb{Q}(\sqrt{-11})$. By \cite[Proposition 4.8]{growth} it follows that $[\mathbb{Q}(P_{16}):\mathbb{Q}] \in \{1,2\}$. A computation in Magma \cite{magma} shows that $X_{1}(16)(\mathbb{Q}(\sqrt{-11}))$ contains only cusps. \item $C_{2} \oplus C_{12}$: As in the previous case, we show that $C_{2} \oplus C_{12} \subseteq E(\mathbb{Q}(\sqrt{-11}))$. The modular curve $X_{1}(2,12)(\mathbb{Q}(\sqrt{-11}))$ has rank $0$ and the same torsion as over $\mathbb{Q}$, which means that there does not exist an elliptic curve with $C_{2} \oplus C_{12}$ torsion over $\mathbb{Q}(\sqrt{-11})$. \end{itemize} \par It remains to consider $p=7$. Assume that $n \in \{11, 15, 16, 17, 19, 21, 25, 27, 37, 43, 67, 163 \}$ and that $E(\Q(\zeta_7))_{tors} \cong C_{n}$. \begin{itemize} \item $n \in \{11, 15, 17, 25 \}$: Lemma \ref{nika} gives us that if $P_{n}$ is a point of order $n$, we have $[\Q(P_{n}) : \Q] \mid 2$. Now Theorem \ref{najmanquadratic} gives a contradiction. \item $n \in \{19, 37, 43, 67, 163 \}$: From \cite[Theorem 5.8]{growth}, we get that a point of order $n$ cannot be defined over the field $\Q(\zeta_{7})$ (a degree $6$ extension). \item $n=27$: This follows from \cite[Theorem 1.1]{tomi1}.
\item $n=16$: Lemma \ref{nika} gives us that if $P_{16} \in E(\Q(\zeta_7))$ is a point of order $16$, then $[\Q(P_{16}) : \Q] \mid 2$. That means that $P_{16} \in E(\Q(\sqrt{-7}))$. We can use similar methods as before in Magma to study $X_1(16)(\Q(\sqrt{-7}))$ and prove that $E$ cannot have a point of order $16$ defined over $\Q(\sqrt{-7})$. \item $n=21$: For $C_{21}$, we conclude from \cite[Lemma 2.7]{chou2} that $E$ has a $\Q$-rational $21$-isogeny. There are $4$ elliptic curves (up to $\overline{\Q}$-isomorphism) with a rational $21$-isogeny (see \cite[pp. 78--80]{rj21}). Therefore, we can use the division polynomial method, since elliptic curves with the same $j$-invariant have identical division polynomials, up to scalars. We consider the seventh division polynomials. Using Magma \cite{magma}, we factor those polynomials over the field $\Q(\zeta_7)$ and see that they have no zeroes there. Hence, this case is impossible. \end{itemize} It remains to eliminate only three non-cyclic groups. For $C_{2} \oplus C_{16}$, we can use Lemma \ref{nika} to show that if $P_{16} \in E(\Q(\zeta_7))$ is of order $16$, then $[\Q(2P_{16}) : \Q] \mid 2$. By \cite[Proposition 4.6]{growth}, we can conclude that $[\Q(P_{16}) : \Q] \in \{1, 2\}$. Hence, we have a point of order $16$ defined over $\Q(\sqrt{-7})$. However, we have already proved that this cannot happen when we considered $C_{16}$. For $C_{2} \oplus C_{10}$ and $C_{2} \oplus C_{12}$, we consider the modular curves $X_1(2, 10)(\Q(\zeta_{7}))$ and $X_1(2, 12)(\Q(\zeta_{7}))$ and use Magma to show that they do not have non-cuspidal points, which completes the proof. \end{proof} \begin{remark} Ideally, one would like to give a useful description of the possible isomorphism classes of $E/\mathbb{Q}(\zeta_p)$, where $E/\mathbb{Q}$ is an elliptic curve and $p$ is a prime number. One can start with the following question, which seems to be out of reach for the authors at the time of writing this paper.
\\ Let $n \in \{ 13, 16, 18, 25 \}$. Under what conditions on the prime number $p$ does there exist an elliptic curve $E/\mathbb{Q}$ with a point $P_n \in E(\mathbb{Q}(\zeta_p))$ of order $n$? \end{remark} \newpage \subsection{Appendix: Images of mod \texorpdfstring{$p$}{p} Galois representations associated to elliptic curves over \texorpdfstring{$\mathbb{Q}$}{Q}} For each possible known subgroup $G_E(p) \subsetneq \GL_2(\mathbb{F}_p)$, where $E/\mathbb{Q}$ is a non-CM elliptic curve and $p$ is a prime, Tables \ref{tableSutherland} and \ref{tableSutherland2} list in the first and second columns the corresponding labels in the Sutherland and Zywina notations, together with the following data: \begin{itemize} \item $d_v=[G_{E}(p):G_{E}(p)_v]=|G_{E}(p).v|$ for $v\in\mathbb{F}_p^2$, $v\ne (0,0)$; equivalently, the degrees of the extensions $\mathbb{Q}(P)$ over $\mathbb{Q}$ for points $P\in E(\overline{\mathbb{Q}})$ of order $p$. \item $d=|G_{E}(p)|$; equivalently, the degree of $\mathbb{Q}(E[p])$ over $\mathbb{Q}$. \end{itemize} Note that Tables \ref{tableSutherland} and \ref{tableSutherland2} are partially extracted from Table 3 of \cite{sutherlandgalrep}. The difference is that \cite[Table 3]{sutherlandgalrep} only lists the minimum of $d_v$, which is denoted by $d_1$ therein. \begin{table}[h!]
\begin{footnotesize} \begin{center} \begin{tabular}{cc} \begin{tabular}[t]{cllcc} & Sutherland & Zywina & $d_v$ & $d$ \\ \toprule &\texttt{2Cs} & $G_1$ & 1 & 1\\ &\texttt{2B} &$G_2$ & 1\,,\,2 & 2\\ &\texttt{2Cn} & $G_3$ & 3 & 3\\ \midrule &\texttt{3Cs.1.1} &$H_{1,1}$ & 1\,,\,2 & 2\\ &\texttt{3Cs} & $G_1$ & 2\,,\,4 & 4 \\ &\texttt{3B.1.1} & $H_{3,1}$ & 1\,,\,6 &6 \\ &\texttt{3B.1.2} & $H_{3,2}$ & 2\,,\,3 & 6\\ &\texttt{3Ns} &$G_2$ & 4 & 8\\ &\texttt{3B} & $G_3$ & 2\,,\,6 & 12\\ &\texttt{3Nn} & $G_4$ & 8 & 16\\ \midrule &\texttt{5Cs.1.1} & $H_{1,1}$ & 1\,,\,4 & 4\\ &\texttt{5Cs.1.3} & $H_{1,2}$ & 2\,,\,4 & 4 \end{tabular} & \begin{tabular}[t]{cllcc} & Sutherland & Zywina & $d_v$ & $d$ \\ \toprule &\texttt{5Cs.4.1} & $G_1$ & 2\,,\,4\,,\,8 & 8\\ &\texttt{5Ns.2.1} & $G_3$ & 8\,,\,16 & 16\\ &\texttt{5Cs} & $G_2$ & 4\,,\,4 & 16\\ &\texttt{5B.1.1} & $H_{6,1}$ & 1\,,\,20 & 20\\ &\texttt{5B.1.2} & $H_{5,1}$ & 4\,,\,5 & 20\\ &\texttt{5B.1.4} & $H_{6,2}$ & 2\,,\,20 & 20\\ &\texttt{5B.1.3} & $H_{5,2}$ & 4\,,\,10 & 20\\ &\texttt{5Ns} & $G_{4}$ & 8\,,\,16 & 32\\ &\texttt{5B.4.1} & $G_{6}$ & 2\,,\,20 & 40\\ &\texttt{5B.4.2} & $G_{5}$ & 4\,,\,10 & 40\\ &\texttt{5Nn} & $G_{7}$ & 24 & 48\\ &\texttt{5B} & $G_{8}$ & 4\,,\,20 & 80\\ &\texttt{5S4} & $G_{9}$ & 24 & 96 \end{tabular}\\ \midrule \end{tabular} \end{center} \end{footnotesize} \end{table} \begin{table}[h!] 
\begin{footnotesize} \begin{center} \begin{tabular}{cc} \begin{tabular}[t]{cllcc} & Sutherland & Zywina & $d_v$ & $d$ \\ \toprule &\texttt{7Ns.2.1} & $H_{1,1}$ & $ 6\,,\,9\,,\,18$ & 18 \\ &\texttt{7Ns.3.1} & $G_{1}$ & $12\,,\,18$ & 36 \\ &\texttt{7B.1.1} & $H_{3,1}$ & $ 1\,,\,42$ & 42 \\ &\texttt{7B.1.3} & $H_{4,1}$ & $ 6\,,\, 7$ & 42 \\ &\texttt{7B.1.2} & $H_{5,2}$ & $ 3\,,\,42$ & 42 \\ &\texttt{7B.1.5} & $H_{5,1}$ & $ 6\,,\,21$ & 42 \\ &\texttt{7B.1.6} & $H_{3,2}$ & $ 2\,,\,21$ & 42 \\ &\texttt{7B.1.4} &$H_{4,2}$ & $ 3\,,\,14$ & 42 \\ &\texttt{7Ns} & $G_{2}$ & $ 12\,,\,36$ & 72 \\ &\texttt{7B.6.1} & $G_{3}$ & $ 2\,,\,42$ & 84\\ &\texttt{7B.6.3} & $G_{4}$ & $ 6\,,\, 14$ & 84 \\ &\texttt{7B.6.2} & $G_{5}$ & $ 6\,,\, 42$ & 84 \end{tabular} & \begin{tabular}[t]{cllcc} & Sutherland & Zywina & $d_v$ & $d$ \\ \toprule &\texttt{7Nn} & $G_{6}$ & $ 48$& 96 \\ &\texttt{7B.2.1} & $H_{7,2}$& $ 3\,,\,42$ & 126 \\ &\texttt{7B.2.3} & $H_{7,1}$ & $ 6\,,\, 21$ & 126 \\ &\texttt{7B} & $G_{7}$ & $ 6\,,\, 42$ & 252 \\ \midrule &\texttt{11B.1.4} & $H_{1,1}$ & 5\,,\,110 & 110\\ &\texttt{11B.1.5} & $H_{2,1}$ & 5\,,\,110 & 110\\ &\texttt{11B.1.6} & $H_{2,2}$ & 10\,,\,55 & 110\\ &\texttt{11B.1.7} & $H_{1,2}$ & 10\,,\,55 & 110\\ &\texttt{11B.10.4} & $G_{1}$ & 10\,,\,110 & 220\\ &\texttt{11B.10.5} & $G_{2}$ & 10\,,\,110 & 220\\ &\texttt{11Nn} & $G_{3}$ & 120 & 240 \end{tabular}\\ \bottomrule \end{tabular} \end{center} \end{footnotesize} \caption{Possible images $G_E(p)\ne \GL_2(\mathbb{F}_p)$, for $p\le 11$, for non-CM elliptic curves $E/\mathbb{Q}$.}\label{tableSutherland} \vspace{10pt} \end{table} \begin{table}[h!] 
\begin{footnotesize} \begin{center} \begin{tabular}{cc} \begin{tabular}[t]{cllcc} & Sutherland & Zywina & $d_v$ & $d$ \\ \toprule &\texttt{13S4} &$G_{7}$ & $72\,,\,96$ & 288 \\ &\texttt{13B.3.1} &$H_{5,1}$ & $3 \,,\, 156$ & 468 \\ &\texttt{13B.3.2} & $H_{4,1}$ & $ 12\,,\, 39$ & 468 \\ &\texttt{13B.3.4} & $H_{5,2}$ & $ 6\,,\,156$ & 468 \\ &\texttt{13B.3.7} & $H_{4,2}$ & $ 12 \,,\, 78$ & 468 \\ &\texttt{13B.5.1} & $G_{2}$ & $ 4 \,,\, 156$ & 624 \\ &\texttt{13B.5.2} & $G_{1}$ & $ 12 \,,\, 52$ & 624\\ &\texttt{13B.5.4} & $G_{3}$ & $ 12 \,,\, 156$ & 624 \end{tabular} & \begin{tabular}[t]{cllcc} & Sutherland & Zywina & $d_v$ & $d$ \\ \toprule &\texttt{13B.4.1} & $G_{5}$ & $ 6 \,,\, 156$ & 936 \\ &\texttt{13B.4.2} & $G_{4}$ & $ 12 \,,\, 78$ & 936 \\ &\texttt{13B} & $G_{6}$ & $ 12 \,,\, 156$ & 1872 \\ \midrule &\texttt{17B.4.2} & $G_{1}$ & $ 8 \,,\, 272$ & 1088 \\ &\texttt{17B.4.6} & $G_{2}$ & $ 16 \,,\, 136$ & 1088 \\ \midrule &\texttt{37B.8.1} & $G_{1}$ & 12 \,,\, 1332 & 15984 \\ &\texttt{37B.8.2} & $G_{2}$ & 36 \,,\, 444 & 15984 \end{tabular} \\ \bottomrule \end{tabular} \end{center} \end{footnotesize} \caption{Known images $G_E(p)\ne \GL_2(\mathbb{F}_p)$, for $p=13, 17$ or $37$, for non-CM elliptic curves $E/\mathbb{Q}$.}\label{tableSutherland2} \vspace{10pt} \end{table} \begin{acknowledgments} The authors gratefully acknowledge support from the QuantiXLie Center of Excellence, a project co-financed by the Croatian Government and European Union through the European Regional Development Fund - the Competitiveness and Cohesion Operational Programme (Grant KK.01.1.1.01.0004) and by the Croatian Science Foundation under the project no. IP-2018-01-1313. \end{acknowledgments} \nocite{*} \bibliographystyle{babplain-fl}
\section{Introduction} The study of flavor-changing neutral current (FCNC) decays has been a fruitful avenue of research throughout the history of particle physics. Today rare $B$-meson decays based on the process $b \to s \ell^+ \ell^-$ (where $\ell = e$ or $\mu$) provide one of the most promising probes for new physics. Both $B$-factory experiments, \babar\ and Belle, have observed decays of this type and reported measurements which probe their underlying structure. The decays proceed through loop diagrams such as those shown in Figure~\ref{fig-slldiags}. \begin{figure}[h] \centering \includegraphics[width=80mm]{figure1.ps} \caption{The electroweak penguin (left) and $W$-box (right) diagrams responsible for $B \to K^{(*)}\ell^+ \ell^-$ decays.} \label{fig-slldiags} \end{figure} The theoretical treatment of $b \to s \ell^+ \ell^-$ transitions in the Standard Model (SM) invokes an effective field theory in which the Hamiltonian is a sum of terms, where each term consists of CKM factors and a Wilson coefficient multiplying an operator formed from the light quark and lepton fields. The Wilson coefficients, obtained by integrating out the heavy particles, characterize the short-distance physics in these decays. New physics (e.g., SUSY) would modify the Wilson coefficients by providing new particles inside the loops and may modify the Hamiltonian by adding scalar or pseudoscalar terms. To account for QCD effects that mix the operators, so-called effective Wilson coefficients are defined, which are functions of a renormalization scale~$\mu$ (typically taken to be 4.6~GeV in the $\overline{\mathrm{MS}}$ scheme) and the squared dilepton mass $q^2 = m^2_{\ell \ell}$. Measurements of $b \to s \ell^+ \ell^-$ decays mainly probe the effective Wilson coefficients $\tilde C_7$, $\tilde C_9$, and $\tilde C_{10}$.
Fully inclusive measurements are not possible, and the background environment is very difficult for analyses that combine multiple exclusive modes, so it is conventional to focus on the exclusive modes $B \to K \ell^+ \ell^-$ and $B \to K^\ast \ell^+ \ell^-$ (denoted together as $B \to K^{(\ast)} \ell^+ \ell^-$). Form factor uncertainties complicate the interpretation of exclusive measurements, but a number of measurable asymmetries, defined below, are relatively insensitive to these uncertainties. An up-to-date summary of the theoretical status, along with extensive references, can be found in Ref.~\cite{Gerald-FPCP}. The related decays $B \to K^{(\ast)} \nu \bar \nu$ are also of considerable interest, but experimentally present almost insurmountable problems owing to the missing neutrinos. Nonetheless, the $B$-factory experiments have conducted searches, and \babar\ has a recent update. The decays $B \to \pi \ell^+ \ell^-$ and $B^0 \to \ell^+ \ell^-$ proceed through essentially the same diagrams, with the $s$ quark replaced by a $d$ quark, leading to CKM suppression by a factor $|V_{\rm td}/V_{\rm ts}|^2\simeq 0.04$; in addition, the two-body decays are helicity suppressed. Consequently, these decays have quite small SM branching fractions and have thus far not been observed. Belle has recently reported a new limit on $B \to \pi \ell^+ \ell^-$. The decays $B^+ \to \ell^+ \nu$ proceed through simple $W$-exchange and are sensitive to the product $f_B |V_{ub}|$. Combined with a value of $|V_{ub}|$ from another source, a measurement of any of these branching fractions would determine the $B$-meson decay constant $f_B$. For the $\ell = \mu$ and $e$ cases, the branching fractions are quite small due to helicity suppression, although the $B^+ \to \mu^+ \nu$ decay appears to be only just beyond the reach of the current $B$-factory experiments.
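The helicity suppression can be made quantitative with the standard tree-level formula for the purely leptonic width. The sketch below uses assumed round inputs for $f_B$ and $|V_{ub}|$ (illustrative values, not taken from these proceedings):

```python
import math

# Illustrative SM estimate of B(B+ -> l+ nu); the formula is the standard
# tree-level one, the numerical inputs are assumed round values.
G_F   = 1.166e-5                  # Fermi constant, GeV^-2
m_B   = 5.279                     # B+ mass, GeV
tau_B = 1.638e-12 / 6.582e-25     # B+ lifetime converted to GeV^-1
f_B, V_ub = 0.20, 4.0e-3          # assumed: decay constant (GeV) and |Vub|

def bf_lnu(m_l):
    """Gamma = (G_F^2/8pi) f_B^2 |Vub|^2 m_B m_l^2 (1 - m_l^2/m_B^2)^2."""
    gamma = (G_F**2 / (8 * math.pi)) * f_B**2 * V_ub**2 \
            * m_B * m_l**2 * (1 - m_l**2 / m_B**2)**2
    return gamma * tau_B

print(f"B(B+ -> mu nu) ~ {bf_lnu(0.1057):.1e}")    # ~5e-7
print(f"B(B+ -> e  nu) ~ {bf_lnu(0.000511):.1e}")  # ~1e-11
```

With these inputs the $\mu^+\nu$ mode comes out near $5\times 10^{-7}$ and the $e^+\nu$ mode near $10^{-11}$, the orders quoted later in Table~\ref{table-lnu}; the $m_\ell^2$ factor is the helicity suppression.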
The status of $B^+ \to \tau^+ \nu$ is presented in another contribution to this conference \cite{MBarrett}. Below, the status of these rare FCNC $B$ decays is described, with emphasis on recent results from Belle and \babar. \section{$ B \to K^{(*)} \ell^+ \ell^-$ Measurements} These decays experience interference from the processes $B \to K^{(*)} J/\psi \to K^{(*)} \ell^+ \ell^-$ and $B \to K^{(*)} \psi(2S) \to K^{(*)} \ell^+ \ell^-$. Consequently, it is necessary to remove events with lepton-pair mass close to the $J/\psi$ or $\psi(2S)$ peaks. On the positive side, these processes provide large and well-understood control samples which can be used to study efficiencies and the characteristics of signal-like events (e.g., to determine PDFs for fits). The main backgrounds to these decays arise from $B$ and $D$ semileptonic decays. The two leptons can come from the semileptonic decays of both the $B$ and $\overline{B}$ in an event, or from the semileptonic decays of a $B$ and its daughter $D$. These backgrounds are suppressed by combining event shape information, vertex information, and missing energy information in multivariate analysis techniques (such as neural nets). Another important background, $B \to D \pi$ (followed by $D \to K^{(*)} \pi$) wherein pions are misidentified as muons, can be explicitly vetoed by rejecting events in which one of the muons, when assigned the pion mass, reconstructs in combination with a kaon to the $D$ mass. Signal can then be separated from the residual background using maximum likelihood fits, typically utilizing the differing shapes of signal and background distributions in the quantities $\Delta E = E^*_B - E^*_{\rm beam}$ and $m_{ES}[{\rm \babar}]=m_{\rm bc}[{\rm Belle}] = \sqrt{E^{*2}_{\rm beam} - p^{*2}_B}$, where $E^*_B$ and $p^*_B$ are the CM energy and momentum of the reconstructed $B$.
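As a quick numerical illustration of these variables (nominal $\Upsilon(4S)$ and $B$ masses; a consistency check, not analysis code): a correctly reconstructed signal $B$ satisfies $E^*_B = E^*_{\rm beam}$, so $\Delta E$ peaks at zero and $m_{ES}$ at the $B$ mass.

```python
import math

# Kinematics of a correctly reconstructed signal B at the Y(4S)
# (nominal masses; an illustration, not analysis code).
sqrt_s = 10.580            # Y(4S) CM energy, GeV
E_beam = sqrt_s / 2.0      # E*_beam = 5.290 GeV
m_B    = 5.279             # B mass, GeV

# A signal B has E*_B = E*_beam (Delta E = 0), hence momentum
p_B  = math.sqrt(E_beam**2 - m_B**2)     # ~0.34 GeV/c
m_ES = math.sqrt(E_beam**2 - p_B**2)     # beam-energy-substituted mass

print(f"p*_B = {p_B:.3f} GeV/c, m_ES = {m_ES:.3f} GeV")  # m_ES peaks at m_B
```

Substituting the precisely known beam energy for the measured $E^*_B$ is what gives $m_{ES}$ its narrow signal peak at $m_B$.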
\babar\ has recently reported a series of measurements based on $349 \, {\rm fb^{-1}}$ of data (384 million $B\overline{B}$ pairs). These results are described below and are compared to Belle results where possible. The \babar\ analysis of $B \to K^{(*)} \ell^+ \ell^-$ reconstructs 10 submodes, reflecting five possible $K^{(*)}$ final states ($K^\pm$, $K^0_S$, $K^\pm \pi^\mp$, $K^\pm \pi^0$, and $K^0_S \pi^\pm$), each paired with $e^+e^-$ or $\mu^+ \mu^-$. Electrons (muons) are required to have $p>0.3 \, (0.7) \, {\rm GeV/c}$. Photons consistent with bremsstrahlung are combined with the associated electron. Two $q^2$ bins are defined: a low-$q^2$ bin $0.1 < q^2 < 7.02 \, {\rm GeV^2}$ and a high-$q^2$ bin $q^2 > 10.24 \, {\rm GeV^2}$, excluding $12.96 < q^2 < 14.06 \, {\rm GeV^2}$ around the $\psi(2S)$. More $q^2$ bins are desirable, but are not possible with the current event sample. The so-called pole region ($q^2 < 0.1 \, {\rm GeV^2}$) is excluded due to the $1/q^2$ photon pole associated with $B \to K^* \gamma$. \subsection{Branching Fractions} The recent \babar\ branching fraction measurements, which combine both $q^2$ bins and lepton-pair cases and include corrections for the excluded $J/\psi$ and $\psi(2S)$ regions, are: \begin{center} ${\cal B}(B \to K \ell^+ \ell^-) = (3.9 \pm 0.7 \pm 0.2) \times 10^{-7},$ \qquad \\ ${\cal B}(B \to K^* \ell^+ \ell^-) = (11.1^{+1.9}_{-1.8} \pm 0.7) \times 10^{-7}.$ \end{center} Figure~\ref{fig-sllbfs} shows these results, along with prior measurements from other experiments and two theoretical results based on the SM. The experimental results are consistent with the SM predictions.
\begin{figure}[h] \centering \includegraphics[width=80mm]{figure2.ps} \caption{Summary of branching fraction results from different experiments for $B \to K^{(*)}\ell^+ \ell^-$.} \label{fig-sllbfs} \end{figure} \subsection{Decay Asymmetries} By reconstructing 10 separate submodes for $B \to K^{(*)}\ell^+ \ell^-$, it becomes possible to construct a variety of decay asymmetries which test different aspects of the process. Direct CP violation, which is expected to be very small in the SM, can be tested by comparing the decay rates of $B$ and $\Bbar$: $$A_{CP} \equiv { {\cal B}(\Bbar \to \Kbar^{(\ast)} \ell^+ \ell^-) - {{\cal B}(B \to K^{(\ast)} \ell^+ \ell^-)} \over {\cal B}(\Bbar \to \Kbar^{(\ast)} \ell^+ \ell^-) + {{\cal B}(B \to K^{(\ast)} \ell^+ \ell^-)} }. $$ The recent \babar\ analysis reports: \begin{center} $A_{CP}^K = -0.18^{+0.18}_{-0.18} \pm 0.01,$ \qquad \\ $A_{CP}^{K*} = 0.01^{+0.16}_{-0.15} \pm 0.01$. \end{center} A lepton-flavor asymmetry (a test of $\mu$-$e$ universality) can be defined by the ratio: $$ R \equiv {{\cal B}(B \to K^{(*)} \mu^+ \mu^-) \over {\cal B}(B \to K^{(*)} e^+ e^-)}. $$ Models that would enhance $B_s \to \mu^+ \mu^-$, such as SUSY with a Higgs at large $\tan \beta$, would also enhance $R$ somewhat. At the current level of statistics, the test is not very restrictive, but is consistent with the SM expectation of unity. The recent \babar\ results are (for $q^2 > 0.1 \, {\rm GeV^2}$): \begin{center} $R_K = 0.96^{+0.44}_{-0.34} \pm 0.05,$ \qquad \\ $R_{K*} = 1.37^{+0.53}_{-0.40} \pm 0.09$. \end{center} The most interesting and potentially important recent results address the isospin asymmetry, which compares the decays of neutral and charged $B$ mesons: $$A_{I} \equiv { {\cal B}(B^0 \to K^{(\ast)0} \ell^+ \ell^-) - r{{\cal B}(B^\pm \to K^{(\ast)\pm} \ell^+ \ell^-)} \over {\cal B}(B^0 \to K^{(\ast)0} \ell^+ \ell^-) + r{{\cal B}(B^\pm \to K^{(\ast)\pm} \ell^+ \ell^-)} }, $$ where $r = \tau_0/\tau_+$ is the ratio of the $B^0$ to $B^+$ lifetime.
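To see the size of the effect, $A_I$ can be evaluated directly from the low-$q^2$ partial branching fractions reported below (a back-of-the-envelope check with a nominal lifetime ratio, not the experiment's fit):

```python
# Back-of-the-envelope evaluation of A_I for B -> K* l+ l- in the low-q^2
# bin, from the central values of the partial branching fractions quoted
# below (units of 1e-7) and an assumed nominal lifetime ratio r = tau_0/tau_+.
r       = 1.519 / 1.638    # nominal B0 and B+ lifetimes in ps (assumed values)
bf_Kst0 = 2.6              # B0  -> K*0 l+ l-
bf_Kstp = 9.8              # B+- -> K*+- l+ l-

A_I = (bf_Kst0 - r * bf_Kstp) / (bf_Kst0 + r * bf_Kstp)
print(f"A_I(K*, low q^2) ~ {A_I:+.2f}")   # -> -0.56
```

The central values give $A_I(K^*) \approx -0.56$, i.e. the sizable negative deviation visible in Figure~\ref{fig-sllai}.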
The value of $A_I$ is expected to be close to zero in the SM, although at low $q^2$ some deviation is expected (up to about 10\%); in particular, the sign of this deviation depends on the sign of $\tilde C_7$ \cite{FM}. Figure~\ref{fig-sllai} shows the $A_I$ result in the two $q^2$ bins of the recent \babar\ analysis. A significant deviation from zero is observed in the low-$q^2$ bin for both $K$ and $K^*$ ($\sim 3\sigma$ in each case). \begin{figure}[h] \centering \includegraphics[width=80mm]{figure3.ps} \setlength{\unitlength}{1in} \begin{picture}(0,0) \put(-0.6,2.55){\textbf{\babarcap\ preliminary}} \end{picture} \caption{The isospin asymmetry versus $q^2$ from \babarcap: $B \to K \ell^+ \ell^-$ (solid dots, blue); $B \to K^* \ell^+ \ell^-$ (open circles, red).} \label{fig-sllai} \end{figure} The underlying fits to $m_{ES}$ distributions for the low-$q^2$ bin are shown in Figure~\ref{fig-aifits}. The corresponding partial branching fractions are (for $0.1 < q^2 < 7.02 \, {\rm GeV^2}$): \begin{center} ${\cal B}(B^\pm \to K^\pm \ell^+ \ell^-) = (2.5 \pm 0.5 \pm 0.1) \times 10^{-7},$ \qquad \\ ${\cal B}(B^0 \to K^0 \ell^+ \ell^-) < 0.9 \times 10^{-7}$ (90\% CL), \qquad \\ ${\cal B}(B^\pm \to K^{*\pm} \ell^+ \ell^-) = (9.8^{+2.6}_{-2.4} \pm 0.6) \times 10^{-7},$ \qquad \\ ${\cal B}(B^0 \to K^{*0} \ell^+ \ell^-) = (2.6^{+1.1}_{-1.0} \pm 0.2) \times 10^{-7}.$ \end{center} Subsequent to HQL08, Belle presented $A_I$ measurements based on a $625 \, {\rm fb^{-1}}$ dataset at ICHEP08 \cite{belle-ichep08}. Belle's new results also indicate negative $A_I$ values for $q^2$ below the $J/\psi$, but Belle observes less pronounced deviations from zero than \babar. Negative $A_I$ values at low $q^2$ tend to favor a flipped sign for $\tilde C_7$.
\begin{figure}[ht] \centering \includegraphics[width=80mm]{figure4.ps} \setlength{\unitlength}{1in} \begin{picture}(0,0) \put(-0.4,3.05){\textbf{\babarcap\ preliminary}} \end{picture} \caption{$m_{ES}$ distributions for the low-$q^2$ bin: $K^\pm \ell^+ \ell^-$ (upper left); $K^0 \ell^+ \ell^-$ (upper right); $K^{*\pm} \ell^+ \ell^-$ (lower left); $K^{*0} \ell^+ \ell^-$ (lower right). Fit results are superimposed: full fit (solid blue), signal (black dashed), combinatorial background (magenta dashed), misidentified muons (green dotted), peaking backgrounds (red dotted).} \label{fig-aifits} \end{figure} \subsection{Angular Analysis} Angular distributions as functions of $q^2$, particularly the forward-backward lepton asymmetry $A_{FB}$, are especially sensitive to possible new physics. This asymmetry, defined as follows, measures the tendency of the $\ell^+ (\ell^-)$ to be in the same hemisphere as the $\Bbar(B)$ when viewed from the dilepton rest frame: $$ A_{FB}(q^2) = { 1 \over {d \Gamma \over dq^2} } \int^1_{-1} d \cos \theta_l {d^2 \Gamma \over dq^2 d \cos \theta_l} {\rm sgn}(\cos \theta_l), $$ where $\Gamma$ is the $B \to K^* \ell^+ \ell^-$ decay width and $\theta_l$ is the angle between the $\ell^-$ and the $B$ in the $\ell^+ \ell^-$ rest frame. An additional angular variable of importance is the fraction of longitudinal $K^*$ polarization, $F_L$. The quantity $F_L$ has some sensitivity to new physics and also affects the angular distribution in $\theta_l$, which must be fit to determine $A_{FB}$. Extraction of $F_L$ and $A_{FB}$ from $B \to K^* \ell^+ \ell^-$ candidate events requires an understanding of the angular correlations of background events. These can be studied using $B \to K^* \mu^\pm e^\mp$ events in data, as well as Monte Carlo simulations. The analysis procedure and fitting method can be validated using the $B \to J/\psi [\psi(2S)] K^*$ control samples discussed earlier.
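The sense in which a forward-backward count recovers $A_{FB}$ can be checked numerically. The one-dimensional distribution used below is the standard SM form after integrating over the remaining angles (a textbook expression, not taken from the \babar\ analysis), evaluated at illustrative values of $F_L$ and $A_{FB}$:

```python
import numpy as np

# Toy numerical check that the forward-backward count recovers A_FB.
# Standard 1D angular distribution (textbook form, not from the analysis):
#   dG/dcos(t) = (3/4) F_L (1 - c^2) + (3/8)(1 - F_L)(1 + c^2) + A_FB * c
F_L, A_FB = 0.35, 0.24        # illustrative values, cf. the low-q^2 bin

N  = 200_000
dc = 2.0 / N
c  = -1.0 + dc * (np.arange(N) + 0.5)     # midpoint grid, symmetric about 0
pdf = 0.75 * F_L * (1 - c**2) + 0.375 * (1 - F_L) * (1 + c**2) + A_FB * c

norm    = pdf.sum() * dc                  # = 1 by construction
afb_est = (pdf[c > 0].sum() - pdf[c < 0].sum()) * dc
print(round(norm, 6), round(afb_est, 6)) # -> 1.0 0.24
```

The symmetric $F_L$-dependent terms cancel in the forward-minus-backward difference, which is why the counting asymmetry isolates $A_{FB}$ exactly.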
In addition, a null result for $A_{FB}$ is expected in $B \to K \ell^+ \ell^-$, providing another check. \babar\ performs a three-step procedure, the results of which are shown in Figure~\ref{fig-afbfits}. The first fit determines signal and background yields; the second determines $F_L$; and the third determines $A_{FB}$. The results, in the low- and high-$q^2$ bins, are: \begin{center} $F_L^{\rm low} = 0.35 \pm 0.16 \pm 0.04,$ \qquad \\ $F_L^{\rm high} = 0.71^{+0.20}_{-0.22} \pm 0.04,$ \qquad \\ $A_{FB}^{\rm low} = 0.24^{+0.18}_{-0.23} \pm 0.05,$ \qquad \\ $A_{FB}^{\rm high} = 0.76^{+0.52}_{-0.32} \pm 0.07$. \end{center} These results are clearly statistically limited, even in two $q^2$ bins. Nonetheless, they can be compared with SM expectations. Figure~\ref{fig-fl} shows the $F_L$ measurements along with the SM expectation, as well as a flipped-$\tilde C_7$ scenario. \begin{figure}[t] \centering \includegraphics[width=80mm]{figure5.ps} \setlength{\unitlength}{1in} \begin{picture}(0,0) \put(-0.5,4.05){\textbf{\babarcap\ preliminary}} \end{picture} \caption{\babarcap\ fit results for determining $F_L$ and $A_{FB}$ in two $q^2$ bins. (a) and (b) $m_{ES}$ distributions in low- and high-$q^2$ bins, respectively, used to fit signal and background yields; (c) and (d) $\cos \theta_K$ distributions fit to determine $F_L$; (e) and (f) $\cos \theta_l$ distributions fit to determine $A_{FB}$. The total fit is shown in red (solid); signal in blue (dashed); combinatorial background in black (dots); and peaking backgrounds in green (long dashes). } \label{fig-afbfits} \end{figure} \begin{figure}[htb] \centering \includegraphics[width=80mm]{figure6.ps} \setlength{\unitlength}{1in} \begin{picture}(0,0) \put(-0.7,1.65){\textbf{\babarcap\ preliminary}} \end{picture} \caption{\babarcap\ $F_L$ results in two $q^2$ bins. The solid (blue) line is the SM expectation. The dashed green line shows a flipped-sign $\tilde C_7$ model.
} \label{fig-fl} \end{figure} The $A_{FB}$ results are shown in Figure~\ref{fig-afb}, which overlays the recent \babar\ results on prior Belle results \cite{belle-afb}. The \babar\ and Belle results are consistent, and tend to favor positive values of $A_{FB}$, particularly at large $q^2$. This disfavors the models, shown in Figure~\ref{fig-afb}, with the sign of $\tilde C_9 \tilde C_{10}$ flipped. Subsequent to HQL08, Belle presented updated $A_{FB}$ results \cite{belle-ichep08} based on $625 \, {\rm fb^{-1}}$ at ICHEP08; the updated results are consistent with the prior Belle results and the recent \babar\ results. \begin{figure}[htb] \centering \includegraphics[width=80mm]{figure7.ps} \caption{Preliminary \babarcap\ $A_{FB}$ results in two $q^2$ bins (red points). The black points are from Belle \cite{belle-afb}. The solid black curve is the SM expectation. The dashed curve shows a flipped-sign $\tilde C_7$ model. The dotted curve and the dot-dashed curve show models with the sign of $\tilde C_9 \tilde C_{10}$ flipped. } \label{fig-afb} \end{figure} \section{Searches for $B \to \pi \ell^+ \ell^-$} Belle recently reported a new search for $B \to \pi \ell^+ \ell^-$ based on 657~million $B \Bbar$ pairs (about $607 \, {\rm fb^{-1}}$) \cite{belle-pill}. Continuum and semileptonic $B$ decay backgrounds are suppressed using likelihood ratios that combine event shape, vertex, and other information. Two-dimensional maximum likelihood fits are performed in the variables $\Delta E$ and $m_{\rm bc}$. No signals are observed and 90\% confidence level limits are set on $B^+ \to \pi^+ \ell^+ \ell^-$ and $B^0 \to \pi^0 \ell^+ \ell^-$, as well as on an isospin-averaged combination of the two. These new limits are listed in Table~\ref{table-pill}, along with previous limits from \babar\ \cite{babar-pill}. The table also includes SM expectations \cite{theory-pill}.
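The CKM suppression quoted in the introduction gives a quick order-of-magnitude cross-check of these SM numbers (a naive scaling that ignores form-factor and phase-space differences):

```python
# Naive CKM scaling of the measured B -> K l+ l- rate down to B -> pi l+ l-
# (order of magnitude only: form-factor and phase-space effects are ignored).
bf_Kll     = 3.9e-7    # measured B -> K l+ l- branching fraction
vtd_vts_sq = 0.04      # |Vtd/Vts|^2 suppression quoted in the text

bf_pill_est = bf_Kll * vtd_vts_sq
print(f"naive B(B -> pi l+ l-) ~ {bf_pill_est:.1e}")  # ~1.6e-8
```

The naive estimate, about $1.6\times10^{-8}$, sits within a factor of two of the $3.3\times10^{-8}$ SM value in Table~\ref{table-pill}, the residual difference reflecting the neglected hadronic effects.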
It is noteworthy that the Belle upper limit on $B^+ \to \pi^+ \ell^+ \ell^-$ is approaching the expected SM branching fraction. \begin{table}[h] \begin{center} \caption{Upper limits (90\% CL) on ${\cal B}(B \to \pi \ell^+ \ell^-)$ from Belle and \babarcap, along with SM expectations.} \begin{tabular}{|l|c|c|c|} \hline \textbf{Mode} & \textbf{SM} & \textbf{Belle} & \textbf{\babarcap} \\ \hline $B^+ \to \pi^+ \ell^+ \ell^-$ & $3.3\times 10^{-8}$ & $<4.9\times 10^{-8}$ & $<12\times 10^{-8}$ \\ \hline $B^0 \to \pi^0 \ell^+ \ell^-$ & $1.7\times 10^{-8}$ & $<15.4\times 10^{-8}$ & $<12\times 10^{-8}$ \\ \hline $B \to \pi \ell^+ \ell^-$ & $3.3\times 10^{-8}$ & $<6.2\times 10^{-8}$ & $<9.1\times 10^{-8}$ \\ \hline \end{tabular} \label{table-pill} \end{center} \end{table} \section{Searches for $B \to K^{(*)} \nu \bar \nu$} With missing neutrinos, $B \to K^{(*)} \nu \bar \nu$ decays are particularly difficult to isolate, and backgrounds are daunting. To clean up events, both \babar\ and Belle have performed these searches by requiring one $B$ in the event to be fully reconstructed, thereby removing continuum events and making it possible to assign particles in the event either to the tag $B$ or the signal candidate. \babar\ has reported recent results for both $B \to K \nu \bar \nu$ (based on $319 \, {\rm fb^{-1}}$) and $B \to K^* \nu \bar \nu$ (based on $413 \, {\rm fb^{-1}}$) in separate analyses, both of which rely on reconstructing one $B$ via $B \to D^{(*)} \ell \nu$. The $B \to K \nu \bar \nu$ search utilizes a Random Forest algorithm \cite{RandForest} to separate signal and background, while the $B \to K^* \nu \bar \nu$ analysis utilizes a maximum likelihood fit that relies on the differences in distribution of extra energy in the event between signal and background. The results from these searches are listed in Table~\ref{table-knn}, along with the SM expectations \cite{theory-knn} and the current Belle limits \cite{belle-knn}. 
\begin{table}[h] \begin{center} \caption{Upper limits (90\% CL) on ${\cal B}(B \to K^{(*)} \nu \bar \nu)$ from Belle and \babarcap, along with SM expectations. The \babarcap\ results are preliminary.} \begin{tabular}{|l|c|c|c|} \hline \textbf{Mode} & \textbf{SM} & \textbf{Belle} & \textbf{\babarcap} \\ \hline $B^+ \to K^{*+} \nu \bar \nu$ & $1.3\times 10^{-5}$ & $<14\times 10^{-5}$ & $<9\times 10^{-5}$ \\ \hline $B^0 \to K^{*0} \nu \bar \nu$ & $1.3\times 10^{-5}$ & $<34\times 10^{-5}$ & $<21\times 10^{-5}$ \\ \hline $B^+ \to K^+ \nu \bar \nu$ & $0.4\times 10^{-5}$ & $<1.4\times 10^{-5}$ & $<4.2\times 10^{-5}$ \\ \hline $B^0 \to K^0 \nu \bar \nu$ & $0.4\times 10^{-5}$ & $<16\times 10^{-5}$ & \\ \hline \end{tabular} \label{table-knn} \end{center} \end{table} \section{Searches for $B^0 \to \ell^+ \ell^-$} While these decays are sensitive to the same electroweak penguin and $W$-box diagrams as the other decays discussed above, the prospects of observing them at the SM level are rather remote. The decay $B^0 \to \tau^+ \tau^-$ is experimentally very challenging due to the difficulties associated with multiple missing neutrinos. And while the experimental signatures of $B^0 \to \mu^+ \mu^-$ and $B^0 \to e^+ e^-$ are nearly ideal, these decays are helicity suppressed to levels that make them inaccessible for the $B$-factories. The SM branching fractions are expected to be about $10^{-7}$ for $\tau^+ \tau^-$, $10^{-10}$ for $\mu^+ \mu^-$, and $10^{-15}$ for $e^+ e^-$. Non-SM scalar interactions would not be helicity suppressed, so that there is a window for new physics above the SM level. Thus, it is worthwhile to search for these modes even though the SM level remains inaccessible. The most promising opportunity, particularly for the $\mu^+ \mu^-$ mode (including $B_s \to \mu^+ \mu^-$), occurs at hadron colliders where the large hadronic $B$-production cross section provides a major advantage. Table~\ref{table-ll} summarizes the current status of these modes. 
\begin{table}[h] \begin{center} \caption{Upper limits (90\% CL) on ${\cal B}(B^0 \to \ell^+ \ell^- )$ from Belle, \babarcap, and CDF.} \begin{tabular}{|l|c|c|c|} \hline \textbf{Mode} & \textbf{Belle} & \textbf{\babarcap} & \textbf{CDF} \\ \hline $\tau^+ \tau^-$ & & $<4.1\times 10^{-3}$\cite{babar-tautau} & \\ \hline $\mu^+ \mu^-$ & $<1.6\times 10^{-7}$\cite{belle-ll} & $<5.2\times 10^{-8}$\cite{babar-ll} & $<1.8\times 10^{-8}$\cite{cdf-mumu} \\ \hline $e^+ e^-$ & $<1.9\times 10^{-7}$\cite{belle-ll} & $<1.1\times 10^{-7}$\cite{babar-ll} & \\ \hline \end{tabular} \label{table-ll} \end{center} \end{table} \section{Searches for $B^+ \to \ell^+ \nu$} $B^+ \to \ell^+ \nu$ decays proceed via tree-level $W$-boson exchange. The branching fractions are precisely predictable in the SM given the relevant CKM factor, $|V_{ub}|$, and the $B$-meson decay constant $f_B$. $|V_{ub}|$ is best measured in $b \rightarrow u \ell \nu$ semileptonic decays; a review of the latest measurements can be found in these proceedings \cite{Petrella}. Considerable progress has been made by both Belle and \babar\ toward measurements of $B^+ \to \tau^+ \nu$. This is the topic of another contribution to these proceedings \cite{MBarrett} and will not be discussed here. Table~\ref{table-lnu} summarizes the status of the other modes. It is noteworthy that Belle, using $253 \, {\rm fb^{-1}}$, has set a limit in the $\mu \nu$ mode that is within about a factor of three of the SM expectation \cite{belle-lnu}. Thus, this mode appears likely to be just beyond the reach of the current $B$-factory experiments. \begin{table}[h] \begin{center} \caption{Upper limits (90\% CL) on ${\cal B}(B^+ \to \ell^+ \nu)$ from Belle\cite{belle-lnu} and \babarcap\ \cite{babar-lnu}, along with SM expectations.
} \begin{tabular}{|l|c|c|c|} \hline \textbf{Mode} & \textbf{SM} & \textbf{Belle} & \textbf{\babarcap} \\ \hline $B^+ \to \mu^+ \nu$ & $\sim 5\times 10^{-7}$ & $<1.6\times 10^{-6}$ & $<6.6\times 10^{-6}$ \\ \hline $B^+ \to e^+ \nu$ & $\sim 1\times 10^{-11}$ & $<9.8\times 10^{-7}$ & \\ \hline \end{tabular} \label{table-lnu} \end{center} \end{table} \section{Conclusion} The rare FCNC decays discussed here are among the most interesting in $B$ physics. Considerable experimental progress has been made by Belle and \babar. Yet, a common theme emerges from this discussion of several different modes: the need for more data. Significant and probing measurements have become possible in decays involving the $b \to s \ell^+ \ell^-$ transition, but the results are statistically limited. The results reported thus far do not utilize the full data sets of \babar\ or Belle (for which the ultimate data set will be much larger), but even so, the final results from \babar\ and even Belle will clearly still be statistically limited. Other interesting processes such as $B^+ \to \pi^+ \ell^+ \ell^-$ and perhaps $B^+ \to \mu^+ \nu$ are just beyond the reach of the existing $B$-factories. To do this physics justice, it seems apparent that a dataset of at least $10 \, {\rm ab^{-1}}$ is needed. This is one of a number of strong arguments that make up the physics case for ``super'' $B$-factories. \bigskip \begin{acknowledgments} I would like to thank G. Eigen and K. Flood for useful input. \end{acknowledgments} \bigskip
\section{Collisions and collapses}\label{s-collisions} The aim of this section is to describe the drastic changes of behaviour of the topological dynamics when a family $X_\mu$ crosses the boundary of the regions described in Section~\ref{s.cartografia}. \subsection{Collisions}\label{ss.colisao} Let us consider a $1$-parameter family $X_\mu\in{\mathcal O}^{+,+}_\varphi$, $\mu\in[-1,1]$, with the following properties: \begin{itemize}\item the family crosses transversely the hypersurface ${\mathcal H}{\mathcal E}^1_+$ at $\mu=0$: this means that $X_0$ exhibits a heteroclinic connection $q_1\in W^s_2$ and $q_2\in\Sigma_+$; \item for $\mu<0$ the vector field $X_\mu$ belongs to ${\mathcal L}^+$: it has an up Lorenz attractor and $\Sigma_-$ contains a fake horseshoe for $P$; \item for $\mu>0$ the vector field $X_\mu$ has a two-sided Lorenz attractor. \end{itemize} In other words, $X_\mu$ is a generic unfolding of the heteroclinic connection $q_1\in W^s_2$. The cuspidal point $q_{1,\mu}$ moves with the parameter $\mu$ and crosses transversely $W^s_{2,\mu}$ (the stable leaf through $p_{2,\mu}$) at $\mu=0$. For $\mu<0$, the up Lorenz attractor intersects exactly every stable leaf in $\Sigma^+$ between $q_{2,\mu}$ and $q_{1,\mu}$, and the horseshoe in $\Sigma_-$ is bounded by the stable leaves $W^s_{1,\mu}$, $W^s_{2,\mu}$ and contains the periodic points $p_{1,\mu}$ and $p_{2,\mu}$. When $\mu$ tends to $0$, the point $q_{1,\mu}$ tends to a point $q_{1,0}$ in $W^s_2$, and this point $q_{1,0}$ is also the limit of the intersection of $W^s_2$ with one of the rectangles (the one containing $p_1$) of the Markov partition of the fake horseshoe. This intersection is a Cantor set in $W^s_2$ of points of the fake horseshoe of $X_\mu$, $\mu<0$, whose diameter tends to $0$ as $\mu\to 0$: the whole Cantor set tends to $q_{1,0}$. At the parameter $0$, the Lorenz attractor no longer admits an attracting neighborhood, and is no longer robust.
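This collision is conveniently visualized on a one-dimensional caricature, obtained by collapsing the stable leaves of $\Sigma$. The following piecewise affine family is only an illustration (the constant $a>0$, the affine form of the branches and the parametrization are chosen for convenience, and are not claimed to come from a reduction of the family $X_\mu$ itself): set, for $x\neq 0$,
$$f_\mu(x)=\begin{cases} \lambda x+\dfrac{a}{\lambda-1}+\mu, & x<0,\\[4pt] \lambda x-a, & x>0, \end{cases} \qquad \lambda>\sqrt2,\quad a>0 .$$
The fixed point $p_2=\frac{a}{\lambda-1}$ of the right branch plays the role of $W^s_{2,\mu}$, while the cusp value $f_\mu(0^-)=\frac{a}{\lambda-1}+\mu$ plays the role of the cuspidal point $q_{1,\mu}$: it crosses $p_2$ exactly at $\mu=0$, lying on one side of $p_2$ for $\mu<0$ and on the other side for $\mu>0$, in analogy with the crossing of $W^s_{2,\mu}$ by $q_{1,\mu}$.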
The fake horseshoe becomes singular, and intersects the Lorenz attractor along $\sigma$ and the orbits of $p_{2,0}$ and of $q_{1,0}$. When $\mu>0$ the Lorenz attractor and the (singular) fake horseshoe merge into a two-sided Lorenz attractor. Notice that if all the vector fields $X_\mu$ are assumed to be of class $C^2$, then for every parameter $X_\mu$ admits a unique SRB measure $\nu_\mu$ (see \cite{APPV09}). For $\mu\leq 0$ the support of $\nu_\mu$ is the Lorenz attractor, and in particular does not intersect $\Sigma_-$. For $\mu>0$ the support of $\nu_\mu$ intersects every stable leaf in $\Sigma$. Thus the support of $\nu_\mu$ changes drastically at the collision point. \begin{ques} Is the map $\mu\mapsto\nu_\mu$ continuous (for the weak topology) at $\mu=0$? \end{ques} \subsection{Collapse of the horseshoe: parameter families crossing ${\mathcal H}^{1,2}_+$ or ${\mathcal H}^{1,2}_-$} Recall that ${\mathcal H}^{1,2}_+$ is the codimension $2$ submanifold corresponding to the double homoclinic connection $q_1,q_2\in\gamma^s_+$. We consider here a $2$-parameter family $X_{\mu}$, $\mu=(\mu_1,\mu_2)\in[-1,1]^2$, which is a generic unfolding of this double homoclinic connection: in other words, the family cuts transversely ${\mathcal H}^{1,2}_+$ at $\mu=(0,0)$. We assume furthermore that, for any fixed $\mu_2$, the $1$-parameter family $X_{\mu_1,\mu_2}$ is transverse to ${\mathcal H}^1$ at $\mu_1=0$ and, reciprocally, for any fixed $\mu_1$, the $1$-parameter family $X_{\mu_1,\mu_2}$ is transverse to ${\mathcal H}^2$ at $\mu_2=0$. More precisely one may assume that $X_{\mu_1,\mu_2}\in{\mathcal O}_\varphi^{\omega_1,\omega_2}$ where $\omega_i\in\{-,+\}$ is the sign of $\mu_i$. When one unfolds this double homoclinic connection, if $q_{1,\mu}$ enters $\Sigma_2$ (that is, $\mu_1>0$), then a fixed point $p_{1,\mu}\in\Sigma_1$ is created, and it tends to $q_{1,0}\in \gamma^s_+$ as $\mu$ tends to $0$.
In the same way, if $\mu_2>0$ then $q_2$ enters $\Sigma_1$ and the fixed point $p_{2,\mu}$ is created in $\Sigma_2$, tending to $q_{2,0}\in\gamma^s_+$. The stable leaves $W^s_{1,\mu}$ and $W^s_{2,\mu}$ bound the small strip $\Sigma^+$ in $\Sigma$ containing $\gamma^s_+$ (tending to the segment $\gamma^s_+$ as $\mu\to0$), and a large strip $\Sigma^-$. The vector field $X_\mu$ in ${\mathcal O}^{+,+}_\varphi$ belongs to ${\mathcal L}^+\cup{\mathcal L}^-$ if and only if $q_1$ and $q_2$ both belong to the same component $\Sigma^+$ or $\Sigma^-$. \begin{lema}\label{l.doubleconnection} For small $\mu$, the set $\{q_1,q_2\}$ is not contained in $\Sigma^+$; in other words, the closure of ${\mathcal L}^+$ is disjoint from ${\mathcal H}^{1,2}_+$. Moreover, the set $\{\mu\mid\{q_{1,\mu},q_{2,\mu}\}\subset \Sigma^-\}$ is an open subset containing $(0,0)$ in its closure. More precisely, for any $\alpha >0$ there is $\mu_0$ such that for any $0<\mu<\mu_0$ the vector field $X_{\mu,\alpha\mu}$ belongs to ${\mathcal L}^-$. \end{lema} \begin{proof} Assume, arguing by contradiction, that $\{q_1,q_2\}\subset \Sigma^+$, so that $X\in{\mathcal L}^+$ and $\Sigma^+$ is invariant under the action of $P$. However, as $\Sigma^+$ is contained in an arbitrarily small neighborhood of $\gamma^s_+$, the rate of expansion of $P$ in the unstable cone is arbitrarily large, in particular $\gg 2$. So $\Sigma^+$ cannot be invariant, which proves the first point. The proof of the second point is as follows: consider the half line of parameters $(\mu,\alpha\mu)$ with $\mu>0$ very small. Then the point $q_1$ (resp. $q_2$) belongs to $\Sigma_2$ (resp. $\Sigma_1$) and its distance to $\gamma^s_+$ is $\mu$ (resp. $\alpha\mu$). Consider any point $p$ in $\Sigma_1$ at a small distance $>\frac12\alpha\mu$ from $\gamma^s_+$, and an unstable segment $\gamma_p$ realizing this distance, so that $\gamma_p$ starts at $\gamma^s_+$ and ends at $p$. Then $P(\gamma_p)$ is an unstable segment starting at $q_1$ and ending at $P(p)$.
The length $\ell(P(\gamma_p))$ is arbitrarily larger than $\ell(\gamma_p)$ (say, larger than $100(1+\alpha)\mu$ for $\mu$ close to $0$), as the expansion rate close to $\gamma^s_+$ tends to infinity. One deduces that the point $P(p)$ belongs to $\Sigma_1$ and is at a distance larger than $99(1+\alpha)\mu$ from $\gamma^s_+$. Hence $P(p)$ (and thus $p$) does not belong to the stable leaf through the fixed point $p_1\in \Sigma_1$. In particular $q_2$ belongs to $\Sigma^-$. One proves in the same way that, for $\mu>0$ small, $q_1\in\Sigma^-$, and thus the vector field belongs to ${\mathcal L}^-$, ending the proof. Figure \ref{f.l.doubleconn} displays a local bifurcation diagram in this case. \end{proof} The lemma implies that, when one considers a segment in the parameter plane crossing $(0,0)$ and entering ${\mathcal O}^{+,+}_\varphi$ transversely to both ${\mathcal H}^1$ and ${\mathcal H}^2$, one creates a down Lorenz attractor, which cuts every stable leaf in $\Sigma^-$. When the parameter tends to $(0,0)$, $\Sigma^-$ tends to $\Sigma$ and the Lorenz attractor tends to what we call the fat Lorenz attractor. The horseshoe, corresponding to $\Sigma^+$, collapses into the double homoclinic connection. \subsection{Switching from up to down: families crossing ${\mathcal H}{\mathcal E}_1\cap {\mathcal H}{\mathcal E}_2$} In this section we consider a two-parameter family $X_\mu\in {\mathcal O}_\varphi^{+,+}$, $\mu=(\mu_1,\mu_2)$, crossing transversely ${\mathcal H}{\mathcal E}_1\cap {\mathcal H}{\mathcal E}_2$ at $\mu=0$. In other words $X_{(0,0)}$ exhibits two heteroclinic connections $q_1\in W^s_2$ and $q_2\in W^s_1$ (recall that $W^s_i$ is the stable leaf through the fixed point $p_i$). We have seen that this behavior implies that $X_{(0,0)}$ has two \emph{full Lorenz attractors}, which intersect along $W^s(p_1)\cup W^s(p_2)\cup W^u(\sigma)$. See Figure \ref{f.p.HE1HE2}.
Up to reparametrizing the family, one may assume that $$\begin{array}{c} \left(\mu_1=0 \Leftrightarrow q_1\in W^s_2\right) \mbox{ and }\left(\mu_2=0 \Leftrightarrow q_2\in W^s_1\right)\\ \left(\mu_1>0 \Leftrightarrow q_1\in \Sigma^+\right) \mbox{ and }\left(\mu_2>0 \Leftrightarrow q_2\in \Sigma^+\right) \end{array}$$ \begin{figure}[th] \centering \includegraphics[scale=0.2]{f-parametro-++.pdf} \hspace{0.3cm} \includegraphics[scale=0.2]{f-parametro-zero.pdf}\ \hspace{0.3cm} \includegraphics[scale=0.2]{f-parametro--.pdf} \caption{Local bifurcation: (a) $(\mu_1,\mu_2)$, $\mu_i >0$; (b) $(\mu_1,\mu_2)$, $\mu_i =0$; (c) $(\mu_1,\mu_2)$, $\mu_i < 0$.} \label{f-parametro} \end{figure} With these notations, note that Theorem~\ref{igual-a-l.switch} is just a reformulation of the results of the previous sections. \section{The topological dynamics in the different regions of ${\mathcal O}_\varphi$}\label{s.cartografia} \subsection{Up and down Lorenz attractors: the regions ${\mathcal L}^+$ and ${\mathcal L}^-$} The aim of this section is to prove \begin{teo}[A]\label{t.up} With the notation of Section~\ref{ss.region}, any vector field $X\in{\mathcal L}^+$ admits exactly two chain recurrence classes: one is an up Lorenz attractor, and the other is a hyperbolic basic set, topologically equivalent to the suspension of a fake horseshoe. The symmetric statement holds in ${\mathcal L}^-$, interchanging up and down. \end{teo} \begin{proof} The component $\Sigma_+$ is a rectangle. The stable leaf $\gamma^s_+$ cuts $\Sigma_+$ into two components: one is $\Sigma_1\cap\Sigma_+$, bounded by $W^s_1=W^s(p_1)$, and the other is $\Sigma_2\cap\Sigma_+$, bounded by $W^s_2$. Note that $q_1$ belongs to $\Sigma_2\cap\Sigma_+$ and $q_2$ belongs to $\Sigma_1\cap\Sigma_+$. Consider a stable leaf $L_1\subset \Sigma_1\cap\Sigma_+$ separating $W^s_1$ from $q_2$, and a stable leaf $L_2\subset \Sigma_2\cap\Sigma_+$ separating $W^s_2$ from $q_1$; see Figure \ref{Teo-up}.
Then $L_1\cup L_2$ cuts $\Sigma_+$ into three components: one is bounded by $W^s_1$, another by $W^s_2$, and the third, denoted by $R_L$, is a rectangle bounded by both $L_1$ and $L_2$ and containing $q_1$, $q_2$ and $\gamma^s_+$. The leaf $\gamma^s_+$ cuts $R_L$ into two components, $R_{L,1}\subset\Sigma_1$ and $R_{L,2}\subset \Sigma_2$. Consider the restriction of $P$ to $R_L\setminus \gamma^s_+$. The images of $R_{L,1}$ and $R_{L,2}$ are cuspidal triangles with cusps at $q_1$ and $q_2$, and are contained in $R_L$. Recall that $P$ is hyperbolic. Thus the restriction of $P$ to $R_L$ satisfies all the properties of the return map of the geometric model of the Lorenz attractor, with an expansion rate larger than $\varphi>\sqrt2$. The rectangle $R_L$ is an attracting region for $P$. One deduces that $U$ contains an attracting sub-region in which $X$ is a geometric model of the Lorenz attractor. \begin{figure}[h] \centering \includegraphics[scale=0.17]{f-Up-Lorenz.pdf} \caption{The leaves $L_1$ and $L_2$, and the region $\Sigma_+$.}\label{Teo-up} \end{figure} Consider now the component $R_H$ of $\Sigma\setminus (L_1\cup L_2)$ disjoint from $R_L$ and containing $\gamma^s_-$, and consider the restriction of $P$ to $R_H\setminus\gamma^s_-$. The image of each of the two components of $R_H\setminus\gamma^s_-$ crosses $R_H$ in a Markovian way. Thus the maximal invariant set in $R_H$ is far from the discontinuity, and is conjugate to the \emph{fake horseshoe}, as the map $P$ preserves the orientation of the unstable cone field. This ends the proof; the down case is similar. \end{proof} \subsection{Two-sided Lorenz attractor in ${\mathcal O}_\varphi^{-,-}$} \begin{prop}\label{p.--} For any $X\in {\mathcal O}_{\varphi}^{-,-}$ the maximal invariant set in $U$ is transitive and is a two-sided Lorenz attractor.
\end{prop} \begin{proof} According to Lemma~\ref{l.cortaestavefixo}, in order to prove the transitivity of the maximal invariant set it is enough to prove that the iterates of any unstable segment $S$ in $\Sigma$ cut all the stable leaves, with the possible exception of a set with empty interior; see Figure \ref{f.p--}. \begin{figure}[h] \centering \includegraphics[scale=0.25]{f-pmenomenos.pdf} \caption{$q_1 \in \Sigma_1$ and $q_2 \in \Sigma_2$}\label{f.p--} \end{figure} So consider an unstable segment $S\subset \Sigma$, and consider the set of lengths of the connected components of all positive iterates $P^n(S)$. If this set of lengths is not bounded, then some component cuts every stable leaf and we are done. Otherwise, given any $\delta>1$, up to replacing $S$ by a segment in one of its iterates, one may assume that every connected component $S'$ of every iterate $P^n(S)$ has length bounded by $\delta\ell(S)$. We now fix $\delta\in]1,\frac{\lambda^2}2[$. If $S$ is disjoint from $\gamma^s_+\cup\gamma^s_-$ then $\ell(P(S))>\lambda \ell(S)>\delta\ell(S)$, contradicting the choice of $S$. So $S$ cuts $\gamma^s_+$ or $\gamma^s_-$. If it cuts both, then it crosses completely $\Sigma_1$ or $\Sigma_2$, so that $P(S)$ cuts all the stable leaves but one, and we are done. So we may assume that $S$ cuts exactly one of $\gamma^s_+$ or $\gamma^s_-$, say $\gamma^s_+$ for instance. Let $S_1$ be a component of $S\setminus\gamma^s_+$ with length larger than or equal to $\frac12\ell(S)$. The length of $P(S_1)$ is at least $\frac\lambda2 \ell(S)$. If $P(S_1)$ is disjoint from $\gamma^s_+\cup\gamma^s_-$ then $$\ell(P^2(S_1))>\lambda\, \ell(P(S_1))\geq \frac{\lambda^2}2\ell(S)> \delta\ell(S),$$ contradicting the choice of $S$. So $P(S_1)$ cuts $\gamma^s_+$ or $\gamma^s_-$. Once again, if it cuts both of them we are done, so one may assume that $P(S_1)$ cuts exactly one of $\gamma^s_+$ or $\gamma^s_-$. Note that one of the end points of $P(S_1)$ is a cuspidal point $q_i\in \Sigma_i$, as $X\in {\mathcal O}_\varphi^{-,-}$.
Thus Lemma~\ref{l.cusptogamma} applies and concludes the proof of the transitivity of the maximal invariant set $\Lambda_X$. Note that the compact set $\Lambda_P$ consists of a union of essential circles in $\Sigma$ and therefore always cuts $\gamma^s_+$ and $\gamma^s_-$, so that the maximal invariant set $\Lambda_X$ intersects non-trivially both stable separatrices $W^s_+(\sigma)$ and $W^s_-(\sigma)$. Thus $\Lambda_X$ is a two-sided Lorenz attractor, ending the proof. See Figure \ref{f.p--}. \end{proof} \subsection{Two-sided Lorenz attractor in ${\mathcal O}^{+,-}_\varphi$ and ${\mathcal O}^{-,+}_{\varphi}$} \begin{prop}\label{p.+-} For any $X\in {\mathcal O}_{\varphi}^{+,-}\cup {\mathcal O}_{\varphi}^{-,+}$ the maximal invariant set in $U$ is transitive and is a two-sided Lorenz attractor. \end{prop} The proof of the proposition for $X\in {\mathcal O}_{\varphi}^{-,+}$ is identical to the proof for $X\in{\mathcal O}_{\varphi}^{+,-}$, interchanging the components $\Sigma_1$ and $\Sigma_2$, so we write the proof only for $X\in {\mathcal O}_{\varphi}^{+,-}$. See Figure \ref{f-Up-Lorenz2-em-O++}. \begin{proof} Recall that the expansion rate of $X$ is $\lambda>\varphi$. \begin{figure}[th] \centering \includegraphics[scale=0.25]{f-p+-Correta.pdf} \caption{$q_1 \in \Sigma_+$ for $X\in {\mathcal O}^{+,-}$}\label{f-Up-Lorenz2-em-O++} \end{figure} As for Proposition~\ref{p.--}, it is enough to prove that the iterates of any unstable segment $S$ in $\Sigma$ cut all the stable leaves, with the possible exception of a set with empty interior. See Figure \ref{f-Up-Lorenz2-em-O++}. So consider an unstable segment $S\subset \Sigma$, and consider the set of lengths of the connected components of all positive iterates $P^n(S)$. If this set of lengths is not bounded, then some component cuts every stable leaf and we are done.
Otherwise, given any $\delta>1$, up to replacing $S$ by a segment in one of its iterates, one may assume that every connected component $S'$ of every iterate $P^n(S)$ has length bounded by $\delta\ell(S)$. We now fix $\delta\in]1,\frac{\lambda}\varphi[$. If $S$ is disjoint from $\gamma^s_+\cup\gamma^s_-$ then $\ell(P(S))>\lambda \ell(S)>\delta\ell(S)$, contradicting the choice of $S$. So $S$ cuts $\gamma^s_+$ or $\gamma^s_-$. If it cuts both, then it crosses completely $\Sigma_1$ or $\Sigma_2$, so that $P(S)$ cuts all the stable leaves but one, and we are done. So we may assume that $S$ cuts exactly one of $\gamma^s_+$ or $\gamma^s_-$, say $\gamma^s_-$ for instance. Thus $S$ is cut by $\gamma^s_-$ into two components $S_i=S\cap \Sigma_i$. Consider $P(S_2)$. If $P(S_2)\cap(\gamma^s_-\cup \gamma^s_+)\neq \emptyset$ then Lemma~\ref{l.cusptogamma} applies (because $q_2\in \Sigma_2$ by our assumption $X\in{\mathcal O}^{+,-}_\varphi$); thus the iterates of $S_2$ cut every stable leaf but a finite number of them, and we are done. Thus we may assume that $P(S_2)$ is disjoint from $\gamma^s_-\cup \gamma^s_+$, so that $P^2(S_2)$ is an unstable segment of length larger than $\lambda^2\ell(S_2)$. On the other hand $\ell(P(S_1))\geq\lambda \ell(S_1)$. Now Lemma~\ref{l.golden} implies that $$\max\{\ell(P(S_1)),\ell(P^2(S_2))\}\geq \frac\lambda\varphi \ell(S)>\delta\ell(S),$$ contradicting the choice of the segment $S$ and finishing the proof. \end{proof} \subsection{Two-sided Lorenz attractor in $\widetilde{{\mathcal O}_\varphi}^{+,+}$} \begin{prop}\label{p.++} For any $X\in \widetilde{{\mathcal O}_{\varphi}}^{+,+}$ the maximal invariant set in $U$ is transitive and is a two-sided Lorenz attractor.
\end{prop} Vector fields $X$ in $\widetilde{{\mathcal O}_\varphi}^{+,+}$ are characterized by the fact that the points $q_1$ and $q_2$ belong to different components $\Sigma_+$ (containing $\gamma^s_+$) and $\Sigma_-$ (containing $\gamma^s_-$) of $\Sigma\setminus (W^s_1\cup W^s_2)$, where $W^s_i=W^s(p_i)$ and $p_i$ is the fixed point of $P$ in $\Sigma_i$. Thus $\widetilde{{\mathcal O}_\varphi}^{+,+}$ is the union of two disjoint open subsets, defined by $q_1\in \Sigma_+$ or $q_1\in \Sigma_-$. The proof of the proposition is symmetric in these two open sets; we provide the proof in the case $q_1\in \Sigma_+$. See Figure \ref{f-Up-Lorenz-em-OTilda++}. \begin{figure}[th] \centering \includegraphics[scale=0.16]{f-Up-Lorenz-em-OTilda++.pdf} \caption{$q_1 \in \Sigma_+$ for $X\in \widetilde{{\mathcal O}_\varphi}^{+,+}$}\label{f-Up-Lorenz-em-OTilda++} \end{figure} \begin{proof} Most of the proof is identical to the proofs of Propositions~\ref{p.--} and~\ref{p.+-}, and allows us to consider a segment $S$ such that every component of every iterate $P^n(S)$ has length bounded by $\delta\ell(S)$, with $1<\delta<\frac\lambda\varphi$, where $\lambda$ is the expansion rate of $X$. Furthermore $S$ cuts exactly one of the stable leaves $\gamma^s_+$ or $\gamma^s_-$. Let us assume that $S$ cuts $\gamma^s_-$, and denote $S_i=S\cap \Sigma_i$. If one of $P(S_1)$ or $P(S_2)$ is disjoint from $\gamma^s_+ \cup\gamma^s_-$, then, using Lemma~\ref{l.golden} as in the proof of Proposition~\ref{p.+-}, one concludes that one of the iterates $P(S_1)$, $P^2(S_1)$, $P(S_2)$ or $P^2(S_2)$ contains a segment of length larger than $\frac\lambda\varphi\ell(S)$, contradicting the choice of $S$. Thus one may assume that both $P(S_1)$ and $P(S_2)$ cut $\gamma^s_+ \cup\gamma^s_-$. By the orientation of ${\mathcal C}^u$, $S_1$ has its end point at $\gamma^s_-$. Thus $P(S_1)$ ends at $q_1\in \Sigma_+$.
As seen above, $P(S_1)$ cuts $\gamma^s_+ \cup\gamma^s_-$, and its orientation implies that it cuts $\gamma^s_-$. In particular it goes out of $\Sigma_+$. One deduces that $P(S_1)$ cuts transversely $W^s_2=W^s(p_2)$, where $p_2$ is the fixed point in $\Sigma_2$ (recall that $X\in{\mathcal O}_\varphi^{+,+}$). Thus Lemma~\ref{l.cortaestavefixo} ensures that the iterates of $P(S_2)$ cross all stable leaves but a finite number of them, concluding this case. The case where $S$ cuts $\gamma^s_+$ is similar, replacing $S_1=S\cap \Sigma_1$ and $W^s(p_2)$ by $S_2=S\cap \Sigma_2$ and $W^s(p_1)$. This concludes the proof. \end{proof} \subsection{Two-sided Lorenz attractor in the hypersurfaces ${\mathcal H}^i$ of homoclinic loops} \begin{prop}\label{p.Hi} For any $X\in{\mathcal O}_1$ in one of the hypersurfaces ${\mathcal H}^1$ or ${\mathcal H}^2$, the maximal invariant set in $U$ is a transitive singular attractor meeting both stable separatrices $W^s_\pm(\sigma)$. \end{prop} \begin{proof} We assume that one of the points $q_1$, $q_2$, say $q_1$, belongs to one of the stable leaves $\gamma^s_+$ or $\gamma^s_-$, say $q_1\in \gamma^s_+$ (all the cases admit an identical proof, \emph{mutatis mutandis}). See Figure \ref{f.p.Hi1}. \begin{figure}[th] \centering \includegraphics[scale=0.15]{f-pHi1.pdf} \hspace{0.2cm} \includegraphics[scale=0.15]{f-pHi1-.pdf} \caption{$X \in {\mathcal H}^1 \cup {\mathcal H}^2$}\label{f.p.Hi1} \end{figure} As in the proofs of Propositions~\ref{p.--}, \ref{p.+-} and~\ref{p.++}, the proof consists in considering an unstable segment $S$ which does not admit any segment $\tilde S$ of length larger than $\delta\ell(S)$, $1<\delta<\frac\lambda\varphi$, such that $\tilde S$, except for a finite subset, is contained in the union of the iterates $P^n(S)$, $n\geq0$. One needs to prove that the iterates of such a segment $S$ cut all stable leaves but a finite number of them.
Again, the choice of $S$ implies that $S$ cuts $\gamma^s_+$ or $\gamma^s_-$, and if it cuts both then $P(S)$ already cuts all stable leaves but one. So we assume that $S$ cuts only one of these leaves. Assume first that $S$ cuts $\gamma^s_+$, and consider $S_1=S\cap \Sigma_1$: it is a segment starting at a point of $\gamma^s_+$ and contained in $\Sigma_1$. Then $P(S_1)$ is a segment of length larger than $\lambda\ell(S_1)$ starting at $q_1\in \gamma^s_+$. If $P(S_1)$ is not included in $\Sigma_1$ then it cuts both $\gamma^s_+$ and $\gamma^s_-$, and $P^2(S_1)$ cuts all leaves but one. If $P(S_1)\subset \Sigma_1$ then $P^2(S_1)$ is a segment of length larger than $\lambda^2\ell(S_1)$ starting at $q_1$. Iterating the process, one gets that some iterate $P^k(S_1)$ crosses completely $\Sigma_1$, so that $P^{k+1}(S_1)$ cuts all stable leaves but one, and we are done. Assume now that $S$ cuts $\gamma^s_-$, and consider $S_i=S\cap \Sigma_i$. Then $P(S_1)$ is a segment ending at $q_1\in\gamma^s_+$, and $P(S_1)$ either crosses completely $\Sigma_2$ (and we are done) or is contained in $\Sigma_2$. Now $P^2(S_1)$ is a segment ending at $q_2$. On the other hand $P(S_2)$ is a segment starting at $q_2$. Now $P^2(S_1)\cup\{q_2\}\cup P(S_2)$ is a segment of length at least $\lambda \ell(S)$. This contradicts the choice of $S$, ending the proof. \end{proof} The statement of Proposition~\ref{p.Hi} does not announce a two-sided Lorenz attractor, because the ${\mathcal H}^i$ are not open subsets, and hence the robustness of the transitivity is not ensured. However we will see below that $X\in {\mathcal H}^i$ indeed exhibits a two-sided Lorenz attractor, except for $X$ in a codimension $2$ submanifold.
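Let us make explicit the elementary computation which, in the proofs above, produces the threshold $\frac\lambda\varphi$; here we assume, as the label of Lemma~\ref{l.golden} suggests, that $\varphi=\frac{1+\sqrt5}2$ is the golden ratio. If $S$ is cut into two components $S_1,S_2$ with $\ell(S_1)+\ell(S_2)=\ell(S)$, and if one component is expanded once and the other twice, then
$$\max\{\lambda\,\ell(S_1),\lambda^2\,\ell(S_2)\}\ \geq\ \frac{\lambda^2}{1+\lambda}\,\ell(S),$$
the worst case being $\lambda\,\ell(S_1)=\lambda^2\,\ell(S_2)$. Moreover
$$\frac{\lambda^2}{1+\lambda}\ \geq\ \frac{\lambda}{\varphi}\iff \lambda(\varphi-1)\geq 1\iff \lambda\geq\frac1{\varphi-1}=\varphi,$$
using $\varphi^2=\varphi+1$. This is why an expansion rate $\lambda>\varphi$ yields, in two iterates, a segment of length larger than $\frac\lambda\varphi\ell(S)>\delta\ell(S)$ for any $\delta\in]1,\frac\lambda\varphi[$.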
Let ${\mathcal H}^{1,2}_+$ and ${\mathcal H}^{1,2}_-$ be the codimension $2$ submanifolds included in ${\mathcal H}^1\cup {\mathcal H}^2$ consisting of the vector fields $X$ such that $q_1,q_2\in \gamma^s_+$ or $q_1,q_2\in \gamma^s_-$, respectively; see Figure \ref{f.p.Hi2}. Thus both unstable separatrices of $\sigma$ are homoclinic connections and are included in the same stable separatrix of $\sigma$. \begin{figure}[th] \centering \includegraphics[scale=0.15]{f-pHi4.pdf} \hspace{0.2cm} \includegraphics[scale=0.15]{f-pHi6.pdf} \caption{$X \in {\mathcal H}^{1,2}_+ \cup {\mathcal H}^{1,2}_-$}\label{f.p.Hi2} \end{figure} The next lemma ensures that, for $X\in{\mathcal H}^i$ outside ${\mathcal H}^{1,2}_+$ and ${\mathcal H}^{1,2}_-$, the transitivity of the attractor is robust, so that the attractor is a two-sided Lorenz attractor. \begin{lema}\label{l.Hi} If $X\in {\mathcal H}^i\setminus ({\mathcal H}^{1,2}_+ \cup{\mathcal H}^{1,2}_-)$, then a neighborhood of $X$ is contained in ${\mathcal O}^{-,-}_\varphi\cup{\mathcal O}^{+,-}_\varphi\cup{\mathcal O}^{-,+}_\varphi\cup\widetilde{{\mathcal O}^{+,+}_\varphi}\cup {\mathcal H}^1\cup{\mathcal H}^2$. \end{lema} \begin{proof} The proof consists in unfolding the homoclinic loops and checking that all the possibilities lead to one of the sets listed. \end{proof} \subsection{Two-sided Lorenz attractor in ${\mathcal L}^{+-}$} Note that Propositions~\ref{p.--}, \ref{p.+-}, \ref{p.++} and \ref{p.Hi} and Lemma~\ref{l.Hi} together prove \begin{prop}\label{p.L+-} For any $X\in{\mathcal L}^{+,-}$ the maximal invariant set in $U$ is a transitive singular hyperbolic attractor meeting both stable separatrices of $\sigma$, that is, a two-sided Lorenz attractor. \end{prop} The region ${\mathcal L}^{+,-}$ has been defined as a union of several regions, and the propositions listed above prove the conclusion of Proposition~\ref{p.L+-} in each of these regions.
Finally, some of these regions are not open, so Lemma~\ref{l.Hi} checks that the vector fields in these non-open regions admit a neighborhood contained in the union of the other regions. \subsection{The collision of the Lorenz attractor with a fake horseshoe: vector fields in ${\mathcal H}{\mathcal E}_i$} In this section we consider vector fields $X$ in ${\mathcal O}^{+,+}_\varphi$, that is, such that the return map $P$ has two fixed points $p_i\in\Sigma_i$, $i=1,2$, with a heteroclinic connection between $\sigma$ and one of the points $p_i$; more precisely, $q_i$ belongs to the stable leaf $W^s_j$ through $p_j$. Note that $j\neq i$, because $q_i$ is not in $\Sigma_i$ when $\Sigma_i$ contains a fixed point. The case when both $q_1$ and $q_2$ belong to $W^s_1\cup W^s_2$ corresponds to $X\in{\mathcal H}{\mathcal E}_1\cap{\mathcal H}{\mathcal E}_2$, which is a codimension $2$ submanifold. \begin{prop}\label{p.HE1HE2} For any $X\in {\mathcal H}{\mathcal E}_1\cap{\mathcal H}{\mathcal E}_2$ there is a unique chain recurrence class, which is not transitive. Both $\Sigma_-$ and $\Sigma_+$ are invariant by $P$. The maximal invariant set of the restriction of $P$ to $\Sigma_i$ is transitive: every unstable segment in $\Sigma_i$ has iterates cutting every stable leaf in $\Sigma_i$. \begin{figure}[th] \centering \includegraphics[scale=0.2]{f-pHE1HE2.pdf} \caption{$X\in {\mathcal H}{\mathcal E}_1\cap{\mathcal H}{\mathcal E}_2$}\label{f.p.HE1HE2} \end{figure} The open set $U$ splits into two regions, each containing a \emph{full Lorenz attractor} (that is, a Lorenz attractor crossing all the stable leaves of the corresponding region), and these two attractors intersect along $W^s(p_1)\cup W^s(p_2)\cup W^u(\sigma)$. See Figure \ref{f.p.HE1HE2}. \end{prop} \begin{proof} The first return map is illustrated in Figure... The study of the first return map in each rectangle $\overline \Sigma_i$ is classical for an expansion rate larger than $\sqrt2<\varphi$.
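Let us only recall the arithmetic behind this classical threshold (a sketch of the mechanism, not the complete argument): an unstable segment $S\subset\Sigma_i$ which meets the discontinuity contains a component $S'$ with $\ell(S')\geq\frac12\ell(S)$, hence $\ell(P(S'))>\frac\lambda2\ell(S)$; if moreover $P(S')$ avoids the discontinuity, then
$$\ell(P^2(S'))\ >\ \lambda\cdot\frac\lambda2\,\ell(S)\ =\ \frac{\lambda^2}2\,\ell(S),$$
and $\frac{\lambda^2}2>1$ precisely when $\lambda>\sqrt2$, so that the lengths of such components grow under iteration.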
\end{proof} The submanifold ${\mathcal H}{\mathcal E}_1\cap{\mathcal H}{\mathcal E}_2$ cuts ${\mathcal H}{\mathcal E}_1$ into two (relatively) open subsets, according to whether $q_2$ belongs to $\Sigma_+$ or to $\Sigma_-$ (the connected components of $\Sigma\setminus (W^s_1\cup W^s_2)$). Consider $X\in {\mathcal H}{\mathcal E}_1\setminus ({\mathcal H}{\mathcal E}_1\cap{\mathcal H}{\mathcal E}_2)$, and assume $q_2\in \Sigma_+$. Then the stable leaf through $q_2$ cuts $\Sigma_+$ into two components: one, denoted by $\Sigma^+_1$, bounded by the stable leaf $W^s_1$ through $p_1$, and the other, denoted by $\Sigma^+_2$, bounded by $W^s_2$ (which contains $q_1$, by definition of ${\mathcal H}{\mathcal E}_1$); notice that $\Sigma^+_2$ contains the stable leaf $\gamma^s_+$, because $q_2\in\Sigma_1$. \begin{prop}\label{p.HE} Consider $X\in {\mathcal H}{\mathcal E}_1\setminus ({\mathcal H}{\mathcal E}_1\cap{\mathcal H}{\mathcal E}_2)$, and assume $q_2\in \Sigma_+$. Then: \begin{itemize} \item $X$ has a unique chain recurrence class in $U$, but this class is not transitive and consists of two (singular) homoclinic classes $K_-,K_+$ containing $\sigma$. \item The iterates by $P$ of any unstable segment in $\Sigma$ cut every stable leaf in $\Sigma^+_2$ but finitely many of them. Furthermore, the return map $P$ restricted to the rectangle $\overline{\Sigma^+_2}$ is a classical Lorenz map (for the geometric model of the Lorenz attractor) at the parameter corresponding to one homoclinic connection. See Figure \ref{f.p.HE}. \begin{figure}[th] \centering \includegraphics[scale=0.2]{f-pHE.pdf} \hspace{0.5cm} \includegraphics[scale=0.2]{f-pHE-Sigma+.pdf} \hspace{0.3cm} \includegraphics[scale=0.2]{f-HE-Sigma+complementar.pdf} \caption{}\label{f.p.HE} \end{figure} \item The homoclinic class $K_-$ is a \emph{singular fake horseshoe} which intersects $\Sigma_-$ and is disjoint from $\Sigma_+$. \item The intersection $K_-\cap K_+\cap \Sigma$ is contained in $W^s_2$ and consists of $p_2$ and the orbit of $q_1$.
\end{itemize} \end{prop} \begin{proof} We just refer the reader to the (very classical) pictures illustrating the restriction of the return map $P$ to $\Sigma^+_2$ and to $\Sigma\setminus \Sigma^+_2$. See Figure \ref{f.p.HE}. \end{proof} A similar result holds for $X \in \mathcal{HE}_2 \setminus (\mathcal{HE}_1 \cap \mathcal{HE}_2)$ and $q_1 \in \Sigma_-$. \subsection{Fat Lorenz attractor: vector fields in ${\mathcal H}^{1,2}_+$ and ${\mathcal H}^{1,2}_-$} \begin{prop}\label{p.H12+} If $X\in{\mathcal H}^{1,2}_+$ then the iterates of every unstable segment in $\Sigma\setminus\gamma^s_+$ cut every stable leaf except $\gamma^s_+$. The maximal invariant set in $U$ is a homoclinic class which is a singular attractor, which we call the fat Lorenz attractor. \end{prop} A similar result holds for $X \in \mathcal{H}_-^{1,2}$, interchanging $\gamma_+^s$ and $\gamma_-^s$. \begin{proof} We just refer the reader to the pictures illustrating the return map $P$, Figures \ref{f.p.Hi1} and~\ref{f.p.Hi2}. \end{proof} \section{An open set of singular hyperbolic flows}\label{s-the-open-set-1} Let $M$ be a compact $3$-manifold. We consider an open set ${\mathcal O}_1$ of vector fields $X$ on $M$ with the properties described in the next two sections. \subsection{Topological conditions}\label{ss-the-open-set-1} \begin{enumerate} \item $X$ has a Lorenz-like singularity $\sigma= \sigma(X)$, varying continuously with $X$ and satisfying the Sternberg non-resonance conditions. We denote by $W^s_+(\sigma)$ and $W^s_-(\sigma)$ the connected components of $W^s(\sigma)\setminus W^{ss}(\sigma)$. \item The vector field $X$ is transverse to $3$ embedded, disjoint annuli denoted by $\Sigma$, $D^1$ and $D^2$. \item For any point $p\in D^i$ there is $t(p)$, depending smoothly on $p$, such that $X^{t(p)}(p)\in \Sigma$ and $X^s(p)\notin \Sigma\cup D^1\cup D^2$ for $s\in]0,t(p)[$.
In other words, $X^{t(p)}(p)$ is the first return of the orbit of $p$ to $\Sigma\cup D^1\cup D^2$, and we denote it by $R(p)=X^{t(p)}(p)$. Note that $R$ is a diffeomorphism from $D^1\cup D^2$ onto its image in $\Sigma$. In particular $R(D^1)$ and $R(D^2)$ are annuli. We require that each of them is \emph{an essential annulus} in $\Sigma$ (that is, it is not homotopic to a point). \begin{figure}[h!] \centering \includegraphics[scale=0.4]{PertoSingularidade.pdf} \caption{The starting flow} \label{figure1} \end{figure} \item $\Sigma\cap W^s_+(\sigma)$ and $\Sigma\cap W^s_-(\sigma)$ each contain a segment, $\gamma^s_+$ and $\gamma^s_-$ respectively, transverse to the boundary $\partial \Sigma$ and connecting the two boundary components of the annulus $\Sigma$. The positive orbits of the points of $\gamma^s_+$ and $\gamma^s_-$ go directly to the singular point and are disjoint from $\Sigma\cup D^1\cup D^2$. Then $\Sigma\setminus ( \gamma^s_+ \cup \gamma^s_-)$ consists of two components $\Sigma^1$ and $\Sigma^2$. See Figure \ref{figure1}. \end{enumerate} \begin{enumerate} \setcounter{enumi}{4} \item For any $p\in \Sigma^1\cup \Sigma^2$ there is $t(p)$, depending smoothly on $p$, such that $X^{t(p)}(p)\in D^1\cup D^2$ and $X^s(p)\notin \Sigma\cup D^1\cup D^2$ for $s\in]0,t(p)[$. Thus $X^{t(p)}(p)$ is the first return of the orbit of $p$ to $\Sigma\cup D^1\cup D^2$, and we denote it by $S(p)=X^{t(p)}(p)$. Using item 3) one gets that $P=R\circ S\colon \Sigma\setminus(\gamma^s_+ \cup\gamma^s_-)\to \Sigma$ is the first return map of $X$ on $\Sigma$. \begin{figure}[!h] \centering \includegraphics[scale=0.42]{image2.pdf} \caption{(a) The cross-sections $\Sigma\cup D^1\cup D^2$;\hspace{0.2cm} (b) the image $P(\Sigma^1\cup \Sigma^2)$.} \label{figure3} \end{figure} \item Let $W^u_1(\sigma)$ and $W^u_2(\sigma)$ denote the unstable separatrices of $\sigma$, that is, the connected components of $W^u(\sigma)\setminus\sigma$.
Then the interior of $D^i$, $i=1,2$, contains the first intersection point $\widetilde{q}_i$ of $W^u_i(\sigma)$ with $\Sigma\cup D^1\cup D^2$. Furthermore, $S(\Sigma^i)\cup \{\widetilde{q}_i\}$ is an annulus which is pinched at $\widetilde{q}_i$, and this pinched annulus is essential in $D^i$ (see Figure 3(a)): by pinched we mean that, in a neighborhood of $\widetilde{q}_i$, the set $S(\Sigma^i)\cup \{\widetilde{q}_i\}$ consists of two cuspidal triangles, with cusps at $\widetilde{q}_i$, tangent at this point to the same line but oriented in opposite directions. As a consequence of item 5), the closure of $P(\Sigma^1\cup \Sigma^2)$ consists of two parallel essential annuli in $\Sigma$, pinched at $q_i=R(\widetilde{q}_i)$. See Figure 3(b). \end{enumerate} \begin{enumerate} \setcounter{enumi}{6} \item As a consequence of the above items, there is an attracting region $U$ containing $\Sigma$, $D^1$, $D^2$, $\sigma$ so that every regular orbit in $U$ crosses $\Sigma$. The maximal invariant set $\Lambda$ in $U$ is an attracting invariant compact set (see Figure 4). \end{enumerate} \begin{figure}[h] \centering \includegraphics[scale=0.3]{image4.pdf} \label{image4} \caption{Attraction region for the flow.} \end{figure} \subsection{Singular hyperbolic conditions} \begin{enumerate} \setcounter{enumi}{7} \item The maximal invariant set $\Lambda$ in $U$ is singular hyperbolic with bundles $E^s$ and $E^{cu}$. This is equivalent to requiring that the first return map $P$ is hyperbolic. We will require more. \item There is a cone field ${\mathcal C}^u$ on the annulus $\Sigma\simeq {\mathbb S}^1\times [-1,1]$, transverse to the fibers $\{\theta\}\times[-1,1]$, strictly invariant under the derivative $DP$ of $P$, and such that the vectors in ${\mathcal C}^u$ are uniformly expanded by a factor $\lambda>1$. The cone ${\mathcal C}^u(p)$ has two components: vectors cutting the fiber with the positive or with the negative orientation. We require that $DP$ preserves this transverse orientation.
\item \label{i.am} there exists $t>0$ such that \begin{equation}\|DX^t|_{E^s(x)}\|\cdot \|DX^{-t}|_{E^{cu}(X^t(x))}\|\cdot \|DX^t|_{E_x^{cu}}\|<1 \label{eq.cont.}\quad \mbox{ for all $x \in \Lambda$}. \end{equation} \end{enumerate} As a consequence of item~\ref{i.am} and Theorem \ref{holonomia}, the stable foliation, well defined in $U$, is of class $C^1$. This foliation is not tangent to $\Sigma$. However, there is a $2$-dimensional \emph{center-stable foliation} well defined on $U\setminus \sigma$, obtained by considering the $X^t$-orbits of the stable foliation: this foliation is $C^1$ too. This center-stable foliation cuts the annulus $\Sigma$ transversely, along a $C^1$ one-dimensional foliation ${\mathcal F}^s$, which is the stable foliation of the return map $P$. The segments $\gamma^s_+$ and $\gamma^s_-$ are leaves of ${\mathcal F}^s$. The foliation ${\mathcal F}^s$ is transverse to the unstable cone field ${\mathcal C}^u$. The leaves of the foliation ${\mathcal F}^s$ are segments crossing $\Sigma$ (connecting the two boundary components of $\Sigma$). \section{Parameter families $X_\mu\in{\mathcal O}_1$, with parameters in the torus} \subsection{${\mathbb T}^2$-parameter families} \begin{defi} Let $\pi\colon {\mathbb R}^2\to{\mathbb T}^2={\mathbb R}^2/{\mathbb Z}^2$ be the canonical projection. Let $V\subset {\mathbb R}^2$ be an open subset so that the projection $\pi(V)$ is the whole torus ${\mathbb T}^2$. We say that a family $\{X_\mu\in{\mathcal O}_1\}_{\mu\in V}$ is a $C^r$ family, $r\geq 0$, of vector fields in ${\mathcal O}_1$ parametrized by ${\mathbb T}^2$ if \begin{itemize} \item the map $\mu\mapsto X_\mu$ is continuous for the $C^1$-topology. \item the map $(p,\mu)\mapsto X_\mu(p)$ is of class $C^r$. \item for any $\mu,\mu'$ so that $\mu'-\mu\in{\mathbb Z}^2$, the return maps $P$ and $P'$ of $X_\mu$ and $X_{\mu'}$ on the transverse cross section $\Sigma$ coincide: $P=P'$.
In particular, $X_\mu$ and $X_{\mu'}$ are smoothly topologically equivalent in restriction to the attracting region $U$, by an equivalence whose restriction to the cross section $\Sigma$ is the identity map. \end{itemize} Shortly, we say that $X_\mu$ is a ${\mathbb T}^2$-parameter family. \end{defi} \subsection{Essential families} The aim of this section is to define the notion of essential ${\mathbb T}^2$-families. Roughly speaking, we do not want the family $X_\mu$ to be contained in a small neighborhood of a given vector field $X_0$. We want that, when $\mu$ follows a simple closed path in ${\mathbb T}^2$ which is not homotopic to a point, the images $P(\Sigma_1)$ or $P(\Sigma_2)$ make an essential turn in $\Sigma$. Let us first present a non-intrinsic definition of this phenomenon; we will then give an intrinsic definition, showing that the notion does not depend on the choices. Consider a ${\mathbb T}^2$-parameter family $\{X_\mu\}$ and let $\gamma^s_{+,\mu}$ and $\gamma^s_{-,\mu}$ be the associated stable leaves corresponding to the discontinuities of the first return map. The leaves $\gamma^s_{\pm,\mu}$ vary continuously with $\mu$. This allows us to choose a parametrization of $\Sigma$, depending continuously on $\mu$, so that $\gamma^s_{+,\mu}$ is the segment $\{0\}\times [-1,1]$ in the annulus $\Sigma={\mathbb R}/{\mathbb Z}\times [-1,1]$. Consider now the points $q_{1,\mu}$ and $q_{2,\mu}$ (first intersection points of the unstable separatrices of $\sigma_\mu$ with $\Sigma$). This defines two continuous maps $q_1\colon {\mathbb T}^2\to {\mathbb R} /{\mathbb Z}\times [-1,1] \mbox{ and } q_2\colon {\mathbb T}^2\to {\mathbb R}/{\mathbb Z}\times [-1,1]$. Then, composing $q_1$ and $q_2$ with the projection $\psi\colon {\mathbb S}^1\times [-1,1]\to {\mathbb S}^1$ one gets two continuous maps $\psi \circ q_i\colon {\mathbb T}^2\to {\mathbb S}^1$, $i=1,2$.
Let us denote $$Q=\left(\psi\circ q_1, \psi\circ q_2\right) \colon {\mathbb T}^2\to {\mathbb T}^2.$$ \begin{defi} With the notation above, we say that the family $X_\mu$ is essential if the topological degree of $Q$ is $1$ (or else, if $Q$ is homotopic to an orientation preserving homeomorphism). \end{defi} Let us now check that this notion does not depend on the choice of the parametrization of $\Sigma$. Consider the hypersurface $\Gamma^s_+\subset \Sigma \times {\mathbb T}^2$ defined by $\Gamma^s_+= \bigcup_\mu \gamma^s_{+,\mu}\times\{\mu\}$. It is homeomorphic to a $3$-torus in $\Sigma\times {\mathbb T}^2$. To every closed path $c\colon{\mathbb S}^1\to {\mathbb T}^2$ let us associate the closed paths $q_{i,c}\colon {\mathbb S}^1\to \Sigma\times {\mathbb T}^2$, $i\in\{1,2\}$, so that $q_{i,c}(\theta)= \left( q_{i,c(\theta)},c(\theta)\right)$ for $\theta\in{\mathbb S}^1$. Notice that $q_{i,c}$ depends continuously on the closed path $c$. In particular, its algebraic intersection number with the hypersurface $\Gamma^s_+$ only depends on the homology class $[c]$. We denote it $[q_{i,c}]\cdot\Gamma^s_+$. One gets a map $$[c]\mapsto \left([q_{1,c}]\cdot\Gamma^s_+,[q_{2,c}]\cdot\Gamma^s_+\right)\in{\mathbb Z}^2$$ defined on $H_1({\mathbb T}^2,{\mathbb Z})\simeq {\mathbb Z}^2$ with values in ${\mathbb Z}^2$, and this map is linear, given by a $2\times 2$ matrix with ${\mathbb Z}$-entries. The topological degree in the first presentation of the notion is just the determinant of this linear map: the family is essential if and only if the determinant is $1$. \subsection{Building ${\mathbb T}^2$-families} Consider a vector field $X\in{\mathcal O}_1$ on a $3$-manifold $M$. Thus, by definition, it is transverse to the annuli $\Sigma$, $D_1$ and $D_2$. Furthermore, the first return map from $D_1\cup D_2$ to $\Sigma\cup D_1\cup D_2$ is a smooth map $R\colon D_1\cup D_2\to\Sigma$, mapping $D_1$ and $D_2$ onto two disjoint essential annuli in the interior of $\Sigma$.
Consider annuli $\tilde D_i$, $i=1,2$, containing $D_i$ in their interiors, and such that $R$ extends to a diffeomorphism $ \tilde R\colon \tilde D_1\cup \tilde D_2\to \Sigma$ which is the first return map from $\tilde D_i$ to $\Sigma\cup \tilde D_1\cup \tilde D_2$. We denote by $\Delta_i$ the union of the $X$-orbit segments joining points $p\in \tilde D_i$ to $\tilde R(p)$. Thus $(\Delta_i,X)$ is smoothly orbitally equivalent to $(\tilde D_i\times [0,1], \frac\partial{\partial t})$. The next lemma expresses that one can realize any continuous deformation of the return map $R$ by a continuous family of vector fields on $M$. \begin{lema} \label{l.family}Consider a continuous family $R_\mu\colon D_1\cup D_2\to \tilde R(\tilde D_1\cup \tilde D_2)$, $\mu\in V$, where $V\subset {\mathbb R}^2$ is an open disk containing $0$. We assume $R_0=R$. Then there is a family of vector fields $X_\mu$ with the following properties: \begin{itemize} \item $X_0=X$ \item for any $\mu$, $X_\mu$ satisfies all the topological properties (items 1 to 7) of the definition of the set ${\mathcal O}_1$. \item for any $\mu$, $X_\mu$ coincides with $X$ out of $\Delta_1\cup\Delta_2$. \item for any $\mu$, the restriction of $X_\mu$ to $\Delta_i$ is transverse to the fibers $\tilde D_i\times \{t\}$ \item the return map of $X_\mu$ from $D_1\cup D_2$ to $\Sigma$ is $R_\mu$. \end{itemize} \end{lema} \begin{proof} One extends $R_\mu$ to $\tilde D_i$ as a diffeomorphism $\tilde R_\mu$ so that all the $\tilde R_\mu$ coincide with $\tilde R$ in a neighborhood of the boundary $\partial \tilde D_i$. Then we replace the restriction of $X$ to $\Delta_i$ by a vector field which coincides with $X$ in a neighborhood of $\partial \Delta_i$ and whose entrance-exit map is $\tilde R_\mu$.
\end{proof} Assume now that the projection of $V$ on ${\mathbb T}^2={\mathbb R}^2/{\mathbb Z}^2$ covers the whole torus ${\mathbb T}^2$, and assume that the family $R_\mu$ is ${\mathbb Z}^2$-periodic in the following sense: \begin{itemize} \item for any $\mu_1,\mu_2\in V$ so that $\mu_2-\mu_1\in {\mathbb Z}^2$ one has $R_{\mu_1}=R_{\mu_2}$. \end{itemize} Then the family $X_\mu$ defined in Lemma~\ref{l.family} is a ${\mathbb T}^2$-parameter family of vector fields having $U$ as an attracting region. \subsection{An essential ${\mathbb T}^2$-family of vector fields in ${\mathcal O}_1$ obtained by rotating the return maps $R|_{D_i}$} Consider $\Sigma\simeq {\mathbb S}^1\times [-1,1]$ with the coordinates $\theta, t$ and consider the constant cone field ${\mathcal C}$ defined by $${\mathcal C}(p)=\{u=\alpha\frac{\partial}{\partial \theta}+\beta\frac{\partial}{\partial t}\in T_p\Sigma \mbox{ so that } |\alpha|\geq |\beta|\}.$$ We denote by ${\mathcal R}_\alpha\colon \Sigma\to\Sigma$ the rotation of angle $\alpha \in{\mathbb S}^1$, that is $(\theta, t)\mapsto (\theta+\alpha,t)$. Consider a vector field $X\in {\mathcal O}_1$ with the following extra properties: \begin{itemize} \item The constant cone field ${\mathcal C}$ is the unstable cone field ${\mathcal C}^u$ \item The images $R(D_i)$ are product annuli ${\mathbb S}^1\times I_i \subset\Sigma={\mathbb S}^1\times [-1,1]$ \end{itemize} Now consider the ${\mathbb Z}^2$-periodic family of maps $R_{\alpha,\beta}\colon D_1\cup D_2\to \Sigma$, $(\alpha,\beta)\in {\mathbb T}^2={\mathbb R}^2/{\mathbb Z}^2$, defined by $$R_{\alpha,\beta}={\mathcal R}_\alpha\circ R\mbox { on } D_1, \mbox{ and } R_{\alpha,\beta}={\mathcal R}_\beta\circ R\mbox{ on } D_2.$$ According to Lemma~\ref{l.family} one can realize the periodic family $R_{\alpha,\beta}$ by a ${\mathbb T}^2$-parameter family of vector fields $X_\mu$, and one easily checks: \begin{lema} The family $X_\mu$ is a ${\mathbb T}^2$-parameter essential family in ${\mathcal O}_1$.
\end{lema} \section{Reduction to a $1$-dimensional dynamics} \subsection{Action of the return map on the space of stable leaves} Any $C^1$ vector field $X\in{\mathcal O}_1$ is singular hyperbolic in the attracting region $U$, with a continuous strong stable direction. There is a well defined \emph{stable foliation} (also called \emph{strong stable foliation}) tangent to the stable distribution, with leaves having the same regularity as $X$. It therefore admits a well defined $2$-dimensional \emph{(weak) stable foliation} (also called \emph{center-stable foliation}) out of the strong stable manifold of the singularity. The leaves of the weak stable foliation are the orbits under the flow of the leaves of the strong stable foliation: out of the strong stable manifold of $\sigma$, the vector field is not tangent to the stable direction, so that these orbits are $2$-dimensional. Along the strong stable manifold of $\sigma$ the vector field is tangent to the $1$-dimensional strong stable leaf, so the foliation is singular. The strong stable foliation is not tangent to $\Sigma$. However, the $2$-dimensional center-stable foliation cuts the annulus $\Sigma$ transversely, along a one-dimensional foliation ${\mathcal F}^s$, which is the stable foliation of the return map $P$. The segments $\gamma^s_+$ and $\gamma^s_-$ are leaves of ${\mathcal F}^s$. The foliation ${\mathcal F}^s$ is transverse to the unstable cone field ${\mathcal C}^u$. The leaves of the foliation ${\mathcal F}^s$ are segments crossing $\Sigma$ (connecting the two boundary components of $\Sigma$). The leaf space $\Sigma/{\mathcal F}^s$ is a (topological) circle ${\mathbb S}^1_X$. The leaves $\gamma^s_+$ and $\gamma^s_-$ induce points $c_+$ and $c_-$, respectively, on ${\mathbb S}^1_X$. Note that the flow $X^t$ preserves the center-stable foliation, and thus the first return map $P$ preserves the foliation ${\mathcal F}^s$.
As a consequence, $P$ passes to the quotient as a map $f=f_X$ defined from ${\mathbb S}^1_X\setminus\{c_+,c_-\}$ to ${\mathbb S}^1_X$. As $P(\Sigma_i)$ is an essential pinched annulus in $\Sigma$, we deduce that $f_X$, restricted to each interval of ${\mathbb S}^1_X\setminus\{c_+,c_-\}$, is a diffeomorphism onto a punctured circle. \subsection{Increasing the regularity of the foliation} In a recent work \cite{ararujomelbourne}, Ara\'ujo and Melbourne adapt to our setting a condition from \cite{HPS} ensuring the smoothness of the strong stable foliation of $X$: \begin{enumerate}\setcounter{enumi}{9} \item \label{i.am2} there exists $t>0$ such that \begin{equation}\|DX^t|_{E^s(x)}\|\cdot \|DX^{-t}|_{E^{cu}(X^t(x))}\|\cdot \|DX^t|_{E_x^{cu}}\|<1 \label{eq.cont2}\quad \mbox{ for all $x \in \Lambda$}. \end{equation} \end{enumerate} As a consequence of item~\ref{i.am2}, the weak stable foliation of $X$, and also the stable foliation ${\mathcal F}^s$ of the first return map $P$, are of class $C^1$. Therefore the circle ${\mathbb S}^1_X$ is endowed with a natural $C^1$-structure, and $f_X$ is of class $C^1$. \section{Symbolic dynamics and topological classification}\label{ss-symbolic} As for the classical geometric model of the Lorenz attractor, we will see in this section that to any vector field in ${\mathcal O}_1$ one can associate combinatorial data, called the itineraries of the discontinuities. Furthermore, these itineraries provide a topological classification of the vector fields in the attracting region $U$. In this section we fix a vector field $X\in{\mathcal O}_1$, and its return map $P\colon\Sigma\to\Sigma$. \subsection{Itineraries for the return map $P$ on $\Sigma$} Recall that $\Sigma$ is endowed with two specific stable leaves $\gamma^s_+$ and $\gamma^s_-$ which split $\Sigma$ into $\Sigma_1$ and $\Sigma_2$. Note that $\gamma^s_+$ cuts both pinched annuli $P(\Sigma_i)$ along one stable leaf, except in the case where $q_i\in \gamma^s_+$.
Thus $P^{-1}(\gamma^s_+)$ cuts $\Sigma_i$ into two components (one of them being empty if $q_i\in\gamma^s_+$): \begin{itemize} \item we denote by $A_0$ and $A_1$ the two components of $\Sigma_1\setminus P^{-1}(\gamma^s_+)$, where $A_0$ is the one starting at $\gamma^s_+$ (for the positive orientation of the circle ${\mathbb S}^1$ of $\Sigma={\mathbb S}^1\times [-1,1]$). If $q_1\in \gamma^s_+$ then $A_1=\emptyset$. \item we denote by $B_0$ and $B_1$ the two components of $\Sigma_2\setminus P^{-1}(\gamma^s_+)$, where $B_0$ is the one starting at $\gamma^s_-$. If $q_2\in \gamma^s_+$ then $B_1=\emptyset$. \end{itemize} We consider ${\mathbb X}=\{A_0,A_1,B_0,B_1\}^{\mathbb N}$, the space of positive infinite words in the alphabet $\{A_0,A_1,B_0,B_1\}$. A finite word of length $k$ is an element $\{\omega_0,\dots,\omega_{k-1}\}$ of $\{A_0,A_1,B_0,B_1\}^{k}$. Given $\omega=\{\omega_i\}_{i\in{\mathbb N}} \in {\mathbb X}$ we denote by $[\omega]_k$ the initial word $\{\omega_0,\dots,\omega_{k-1}\}$ of length $k$ of~$\omega$. \begin{rema} For any $k>0$ and any $\varepsilon\in \{+,-\}$, $P^{-k}(\gamma^s_\varepsilon)$ consists of at most $2^k$ stable leaves. \end{rema} \begin{defi}For every $p\in \Sigma\setminus \bigcup_{j=0}^{k} P^{-j}(\gamma^s_+\cup\gamma^s_-)$ we denote by $[\omega(p)]_k=\{\omega_0(p),\dots,\omega_{k-1}(p)\}$ the word defined by $P^i(p)\in \omega_i(p)$ (recall that $\omega_i(p)$ is one of the four regions $A_0, A_1,B_0,B_1$). $[\omega(p)]_k$ is called the \emph{$k$-itinerary} of $p$. \end{defi} Figure \ref{f.itinerario} displays the choice of the alphabet above for the $1$-dimensional dynamics. \begin{figure}[th] \centering \includegraphics[scale=0.25]{f-uni-dim-MaisUma.pdf} \caption{The alphabet chosen.}\label{f.itinerario} \end{figure} \begin{lema}\label{l.itinerary} Consider any point $p\in \Sigma$, and $S\colon[-1,1]\to \Sigma$ a positively oriented unstable segment centered at $p$ (i.e., $S(0)=p$).
Then for any $k\geq 0$ the itinerary $[\omega(S(t))]_k$ is well defined and constant for $t>0$ (resp. $t<0$) small enough. This itinerary is independent of the choice of $S$. We denote these itineraries $$[\omega_-(p)]_k \quad \mbox{and} \quad [\omega_+(p)]_k.$$ If $p_1$ and $p_2$ belong to the same stable leaf, then $$[\omega_\pm(p_1)]_k = [\omega_\pm(p_2)]_k.$$ In other words, the itinerary depends only on the stable leaf, and not on the point in the leaf. \end{lema} For any $p\in\Sigma$ one denotes by $\omega_-(p)$ and $\omega_+(p)$ the infinite words whose initial segments of length $k$ are, respectively, $[\omega_-(p)]_k$ and $[\omega_+(p)]_k$. They are called the \emph{down-} and \emph{up-itinerary} of $p$. The itinerary of $p$ is the pair of sequences $\omega(p)=(\omega_-(p),\omega_+(p)).$ \begin{rema} If $p\notin W^s(\sigma)$ (that is, $P^k(p)\notin \gamma^s_\pm$ for all $k\geq 0$) then $\omega_-(p)=\omega_+(p)$, and its initial segment of length $k$ is the \emph{$k$-itinerary} $[\omega(p)]_k$ of $p$. This shows that our terminology and notations are consistent. \end{rema} The next remark says that the itineraries $\omega_+$ and $\omega_-$ induce, in some sense, a semi-conjugacy of $(\Sigma, P)$ with $({\mathbb X},\mathfrak{S})$, where $\mathfrak{S}$ is the shift on ${\mathbb X}$. To be rigorous, $P$ is defined on $\Sigma\setminus (\gamma^s_+\cup \gamma^s_-)$, which is not invariant under $P$. Thus $\omega_{\pm}$ are semi-conjugacies in restriction to $\Sigma\setminus W^s(\sigma)$. As $P$ is discontinuous along $\gamma^s_+$ and $\gamma^s_-$, we also describe the itineraries of these points.
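The mechanics of $k$-itineraries and of the one-sided itineraries $\omega_\pm$ can be illustrated on a toy piecewise-expanding circle map playing the role of the quotient dynamics $f_X$. The following Python sketch is only an illustration: the cut points $c_+=0$, $c_-=1/2$, the slopes and the offsets $0.1$, $0.3$ are arbitrary choices, not computed from any vector field of ${\mathcal O}_1$.

```python
# Toy model of the 1-D quotient dynamics: two affine expanding branches
# on the circle [0,1), with discontinuities at c_+ = 0 and c_- = 1/2.
# All numerical constants are illustrative choices.

C_PLUS, C_MINUS = 0.0, 0.5

def f(x):
    """Each branch has slope 2 on an interval of length 1/2, so its
    image is the whole circle minus one point."""
    if 0.0 < x < C_MINUS:          # plays the role of Sigma_1
        return (2.0 * x + 0.1) % 1.0
    if C_MINUS < x < 1.0:          # plays the role of Sigma_2
        return (2.0 * x + 0.3) % 1.0
    raise ValueError("f is undefined at the discontinuities c_+ and c_-")

# Boundaries A0|A1 and B0|B1 are the preimages of c_+ = 0 in each branch.
X1 = 0.45   # solves 2*x + 0.1 == 1 on (0, 1/2)
X2 = 0.85   # solves 2*x + 0.3 == 2 on (1/2, 1)

def letter(x):
    """Region of x in the alphabet {A0, A1, B0, B1}."""
    if 0.0 < x < C_MINUS:
        return "A0" if x < X1 else "A1"
    return "B0" if x < X2 else "B1"

def itinerary(x, k):
    """k-itinerary [omega(x)]_k, defined when the orbit avoids c_+, c_-."""
    word = []
    for i in range(k):
        word.append(letter(x))
        if i < k - 1:
            x = f(x)
    return word

def up_itinerary(x, k, eps=1e-9):
    """omega_+(x): itinerary of a point slightly to the right of x,
    approximating the one-sided limit of Lemma l.itinerary."""
    return itinerary(x + eps, k)
```

Here the image of each branch is an "essential" punctured circle, mimicking the pinched annuli $P(\Sigma_i)$; the one-sided itineraries are approximated by a small offset $\varepsilon$ rather than by an actual limit.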
\begin{rema}\begin{itemize}\item For any $x\in \Sigma\setminus (\gamma^s_+\cup \gamma^s_-)$ (that is, such that $P(x)$ is defined), one has $$\omega_-(P(x))=\mathfrak{S}(\omega_-(x))\quad\mbox{and}\quad \omega_+(P(x))=\mathfrak{S}(\omega_+(x))$$ \item For $x\in \gamma^s_+$ one has: $$\omega_+(x)= A_0\star\omega_+(q_1)$$ and $$\begin{array}{l} \omega_-(x)= B_1\star\omega_-(q_2), \mbox{ if } q_2\notin \gamma^s_+\\ \omega_-(x)= B_0\dots B_0\dots, \mbox{ if } q_2\in \gamma^s_+ \quad (\mbox{and then } B_1=\emptyset) \end{array} $$ \item For $x\in \gamma^s_-$ one has: $$\omega_+(x)= B_0\star\omega_+(q_2)$$ and $$\begin{array}{l} \omega_-(x)= A_1\star\omega_-(q_1), \mbox{ if } q_1\notin \gamma^s_+\\ \omega_-(x)= A_0\star\omega_-(q_1) \mbox{ if } q_1\in \gamma^s_+ \quad (\mbox{and then } A_1=\emptyset) \end{array} $$ \end{itemize} \end{rema} \subsection{Itineraries for the $1$-dimensional dynamics} According to Lemma~\ref{l.itinerary}, the itineraries $\omega_-$ and $\omega_+$ are functions of the stable leaf. So they pass to the quotient on the leaf space ${\mathbb S}^1_X$. We still denote by $\omega_-$ and $\omega_+$ the quotient maps $$\omega_\pm\colon {\mathbb S}^1_X\to{\mathbb X}.$$ \subsection{Order and topology}\label{ss.order} We endow the alphabet $\{A_0,A_1,B_0,B_1\}$ with the total order $$A_0<A_1<B_0<B_1,$$ which corresponds to the order in which an unstable segment starting at $\gamma^s_+$ crosses the corresponding regions in $\Sigma$. We endow ${\mathbb X}$ with the corresponding lexicographic order, which we denote by $\prec$ (and $\preccurlyeq$ for the non-strict order). \begin{prop}\label{p.order}Let $S\colon [0,1]\to \Sigma$ be an unstable segment, positively oriented, whose interior is contained in $\Sigma\setminus \gamma^s_+$.
Then: \begin{itemize} \item for any $t\in(0,1)$ one has $\omega_-(S(t))\preccurlyeq\omega_+(S(t)),$ \item for any $t_1,t_2\in[0,1]$ so that $t_1<t_2$ one has $\omega_+(S(t_1))\prec\omega_-(S(t_2)).$ \end{itemize} \end{prop} This proposition has a straightforward translation for the itineraries associated to the $1$-dimensional dynamics $f$. \begin{proof} For the first item, we have already seen that if $S(t)$ does not belong to $W^s(\sigma)$ then $\omega_-(S(t))=\omega_+(S(t))$, and there is nothing to prove. Consider $t_1<t_2$ so that $S(t_i)\notin W^s(\sigma)$. As $\omega_+$ and $\omega_-$ coincide on $S(t_i)$ we just write $\omega^i=\omega_+(S(t_i))=\omega_-(S(t_i))$. If the first letter is not the same, that is, if $S(t_1)$ and $S(t_2)$ are not in the same region $A_0,A_1,B_0,B_1$, then $(\omega^1)_0<(\omega^2)_0$ by the choice of the order on our alphabet, and so $\omega^1\prec\omega^2$. Assume now that $(\omega^1)_j=(\omega^2)_j$ for $j=0,\dots,k-1$ but $(\omega^1)_k\neq (\omega^2)_k$. \begin{clai}\label{c.k} With the hypotheses above \begin{itemize}\item $P^j$, $0\leq j\leq k$, is defined on the unstable segment $S([t_1,t_2])$ \item $P^j(S([t_1,t_2]))$ is contained in one of the regions $A_0,A_1,B_0,B_1$ for $0\leq j<k$, \item $P^k(S(t_1))$ and $P^k(S(t_2))$ are not in the same region \end{itemize} \end{clai} \begin{proof}The third item just says $(\omega^1)_k\neq (\omega^2)_k$, which is our hypothesis. The proof of the first two items goes together, by induction. As $S(t_1)$ and $S(t_2)$ belong to the same region, the interior of $S$ is disjoint from $\gamma^s_+$, and $S$ is an unstable segment (transverse to the fibration by stable leaves), the segment $S([t_1,t_2])$ is contained in one of the regions of the alphabet. As $P^0=id$ is clearly defined on $S([t_1,t_2])$, we have proved both items for $j=0$. We assume now that both items have been proved for $0,\dots, j-1$ and let us prove them for $j$.
Thus $P^{j-1}(S([t_1,t_2]))$ is contained in one of the regions. Thus $P$ is well defined on this interval, meaning that $P^j$ is well defined on $S([t_1,t_2])$, proving the first item. If $j\neq k$, then $(\omega^1)_j=(\omega^2)_j$: in other words, the end points of the unstable segment $P^{j}(S([t_1,t_2]))$ belong to the same region. If the whole segment is contained in that region, we are done. Otherwise, $P^{j}(S([t_1,t_2]))$ crosses $\gamma^s_+$. This means that $P^{j-1}(S([t_1,t_2]))$ crosses $P^{-1}(\gamma^s_+)$, and this (by definition of the regions $A_0,A_1,B_0,B_1$) contradicts the fact that $P^{j-1}(S([t_1,t_2]))$ is contained in one of these regions. This ends the proof of the claim. \end{proof} \begin{clai} With the hypotheses above, $(\omega^1)_k < (\omega^2)_k$. \end{clai} \begin{proof} According to Claim~\ref{c.k}, the segment $P^{k-1}(S([t_1,t_2]))$ is an unstable segment contained in one of the regions, and we have seen in the proof that this implies that $P^{k}(S([t_1,t_2]))$ is an unstable segment which does not cross $\gamma^s_+$. As already seen, the choice of the order on $A_0,A_1,B_0,B_1$ implies that either $P^{k}(S([t_1,t_2]))$ is contained in one region (which contradicts $(\omega^1)_k\neq (\omega^2)_k$) or $(\omega^1)_k < (\omega^2)_k$, ending the proof of the claim. \end{proof} The claims above show that, for any $t_1<t_2$ so that $S(t_i)\notin W^s(\sigma)$, one has $$\omega^1\preccurlyeq\omega^2.$$ \begin{clai} Given $t_1<t_2$ so that $S(t_i)\notin W^s(\sigma)$, then $\omega^1\neq \omega^2$. \end{clai} \begin{proof}As in Claim~\ref{c.k}, if $\omega^1=\omega^2$ then $P^j$ is well defined on $S([t_1,t_2])$ for any $j\geq 0$ and $P^j(S([t_1,t_2]))$ is contained in one of the regions $A_0,A_1,B_0,B_1$. The lengths of these iterates increase exponentially, and this forbids (for large iterates) these segments from being contained in one region, ending the proof of the claim. \end{proof} Consider now any $t_1<t_2$.
We need to prove $$\omega^1_+=\omega_+(S(t_1))\prec\omega_-(S(t_2))=\omega^2_-.$$ By definition of $\omega_+$, there is a decreasing sequence $t_{1,n}<t_2$ tending to $t_1$ and so that: \begin{itemize} \item $S(t_{1,n})\notin W^s(\sigma)$ \item $[\omega^1_+]_n=[\omega^{1,n}]_n$ (where $\omega^{1,n}$ is the itinerary of $S(t_{1,n})$). \end{itemize} Note that the sequence $\omega^{1,n}$ is strictly decreasing for $\prec$ and tends to $\omega^1_+$. In other words, $$\omega^1_+=\inf_{n\to+\infty} \omega^{1,n}. $$ In the same way we fix an increasing sequence $t_{2,n}>t_{1,0}$ tending to $t_2$ and so that \begin{itemize} \item $S(t_{2,n})\notin W^s(\sigma)$ \item $[\omega^2_-]_n=[\omega^{2,n}]_n$ (where $\omega^{2,n}$ is the itinerary of $S(t_{2,n})$). \end{itemize} Then $$\omega^2_-=\sup_{n\to+\infty} \omega^{2,n}. $$ As $\omega^{1,n}\prec \omega^{2,n}$, we conclude $$\omega^1_+\prec\omega^2_-$$ proving the second item of the proposition. To end the proof of the proposition it remains to show that $\omega_-(S(t))\preccurlyeq\omega_+(S(t))$ for any $t\in (0,1)$. For that we consider sequences $t_{-,n}<t_{-,n+1}<\dots<t<\dots<t_{+,n+1}<t_{+,n}$ tending to $t$ as $n\to +\infty$ and so that \begin{itemize} \item $S(t_{\pm,n})\notin W^s(\sigma)$ \item $[\omega_-(S(t))]_n=[\omega^{-,n}]_n$ (where $\omega^{-,n}$ is the itinerary of $S(t_{-,n})$) \item $[\omega_+(S(t))]_n=[\omega^{+,n}]_n$ (where $\omega^{+,n}$ is the itinerary of $S(t_{+,n})$) \end{itemize} We know that $\omega^{-,n}\prec \omega^{+,n}$ (the itinerary is strictly increasing on points out of $W^s(\sigma)$). So for every $n$ one has $[\omega_-(S(t))]_n\preccurlyeq [\omega_+(S(t))]_n$, ending the proof. \end{proof} \subsection{Admissible itineraries} Given a vector field $X\in {\mathcal O}_1$, we associate to it four itineraries: $\omega^+_+=\omega_+(\gamma^s_+)$, $\omega^+_-=\omega_-(\gamma^s_+)$, $\omega^-_+=\omega_+(\gamma^s_-)$ and $\omega^-_-=\omega_-(\gamma^s_-)$.
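The lexicographic order $\prec$ of Section~\ref{ss.order}, the shift $\mathfrak{S}$, and the comparison of an itinerary with the four words $\omega^\pm_\pm$ can be tested on finite words. The Python sketch below is only a finite-word approximation, and the kneading words it uses are sample values with a Lorenz-like pattern, not computed from an actual vector field.

```python
# Finite-word sketch of the order A0 < A1 < B0 < B1, the shift, and
# the comparison of shifted words with the four kneading itineraries.
# The kneading words below are illustrative truncations (length 8) of
# A0B0A0B0..., B0B0..., A1A1..., B0A0B0A0...; they are sample values.

RANK = {"A0": 0, "A1": 1, "B0": 2, "B1": 3}

OM_PP = ["A0", "B0"] * 4   # sample omega^+_+
OM_PM = ["B0"] * 8         # sample omega^+_-
OM_MM = ["A1"] * 8         # sample omega^-_-
OM_MP = ["B0", "A0"] * 4   # sample omega^-_+

def lex_less_eq(u, v):
    """u <= v in the lexicographic order induced by RANK."""
    for a, b in zip(u, v):
        if RANK[a] != RANK[b]:
            return RANK[a] < RANK[b]
    return len(u) <= len(v)

def is_admissible(w):
    """Check, on every shifted tail of the finite word w, the three
    inequalities relating it to the sample kneading words, comparing
    prefixes of equal length."""
    for i in range(len(w)):
        t = w[i:]                       # the shifted word S^i(w)
        k = len(t)
        if not (lex_less_eq(OM_PP[:k], t) and lex_less_eq(t, OM_PM[:k])):
            return False
        if t[0] in ("A0", "A1") and not lex_less_eq(t, OM_MM[:k]):
            return False
        if t[0] in ("B0", "B1") and not lex_less_eq(OM_MP[:k], t):
            return False
    return True
```

For instance, with these sample kneading words the word $A_0B_0A_0B_0$ passes all three families of inequalities, while any word containing the letter $B_1$ is rejected by the upper bound $B_0B_0\dots$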
\begin{defi} We say that $\omega\in {\mathbb X}$ is admissible for the vector field $X$ (or, shortly, $X$-admissible) if it satisfies the following inequalities: \begin{itemize} \item $\omega^+_+\preccurlyeq \mathfrak{S}^n(\omega)\preccurlyeq \omega^+_-$ for every $n\geq 0$. \item If $(\omega)_i\in\{A_0,A_1\}$ then $\mathfrak{S}^i(\omega)\preccurlyeq\omega^-_-$. \item If $(\omega)_i\in\{B_0,B_1\}$ then $\omega^-_+\preccurlyeq \mathfrak{S}^i(\omega)$. \end{itemize} We denote by ${\mathcal A}_X\subset {\mathbb X}$ the set of $X$-admissible itineraries. \end{defi} \begin{rema} The subset ${\mathcal A}_X$ is a $\mathfrak{S}$-invariant compact set. \end{rema} {We note that if $X$ does not exhibit homoclinic loops then $\underset{x \to c^-_-}{\lim} f^i_X(x)=\underset{x \to c^+_+}{\lim} f^i_X(x)$ and $\underset{x \to c^-_+}{\lim} f^i_X(x)= \underset{x \to c^+_-}{\lim} f^i_X(x)$, for all $i \in \mathbb{N}$, and therefore} \begin{equation}\label{e-itinerario} \begin{array}{c} \mathfrak{S}(\omega^+_+)=\mathfrak{S}(\omega^-_-)\\ \mathfrak{S}(\omega^+_-)=\mathfrak{S}(\omega^-_+) \end{array} \end{equation} Remembering that $(\omega^+_+)_0= A_0$ and $(\omega^-_+)_0=B_0$, one gets that these four itineraries are determined by $\omega^+_-$ and $\omega^-_-$. {If $X$ exhibits a homoclinic loop, the equalities (\ref{e-itinerario}) are no longer true. For instance, this is the case when $W^u_+(\sigma) \cap W^s_+(\sigma) \neq \emptyset$ and $W^u_-(\sigma) \cap W^s(\sigma) = \emptyset$: then $\omega_+^+$ is periodic and $\omega_-^-$ is not. See Figure \ref{f-itinerario-45}.
However, since $\underset{x \to c^-_-}{\lim} f_X(x)=\underset{x \to c^+_+}{\lim} f_X(x)$ and $\underset{x \to c^-_+}{\lim} f_X(x)=\underset{x \to c^+_-}{\lim} f_X(x)$, the set ${\mathcal A}_X$ is still determined by $\{\omega_+^+,\omega_-^+\}$ or $\{\omega_-^-,\omega_+^-\}$.} \begin{figure}[!h] \centering \includegraphics[scale=0.08]{f-itinerario-1.pdf} \caption{(a) $\mathcal{H}_-^1 \cap \mathcal{H}^2_+$,\hspace{4.5cm} (b) $\mathcal{H}_-^1 \cap \mathcal{H}_-^2$.}\label{f-itinerario-45} \end{figure} \vspace{-1cm} \begin{eqnarray*} \mbox{Itineraries of} \quad \mathcal{H}_-^1 \cap \mathcal{H}_-^2 &\hspace{3cm}& \mbox{Itineraries of} \quad \mathcal{H}_-^1 \cap \mathcal{H}_+^2\\ \omega_+^+=A_0B_0A_0B_0\ldots\,\,\,\,\, &&\,\,\,\,\, \omega_+^+=A_0B_0B_0B_0\ldots\\ \omega_-^-=A_1A_1A_1A_1\ldots \,\,\,\,\,&&\,\,\,\,\, \omega_-^-=A_1A_1A_1A_1\ldots \\ \omega_-^+=B_0A_0B_0A_0\ldots \,\,\,\,\,&&\,\,\,\,\, \omega_-^+=B_1A_1A_1A_1\ldots\\ \omega_+^-=B_0B_0B_0B_0\ldots \,\,\,\,\,&&\,\,\,\,\, \omega_+^-=B_0B_0B_0B_0\ldots \end{eqnarray*} \begin{lema}\label{lemma.ad}If $p\in\Sigma$ then $\omega_-(p)$ and $\omega_+(p)$ are $X$-admissible. \end{lema} \begin{proof} Consider an unstable segment $S\colon [0,1]\to \Sigma$ whose interior is disjoint from $\gamma^s_+$ and so that $S(0),S(1)\in \gamma^s_+$ and $p\in S([0,1])$. Then Proposition~\ref{p.order} applies and implies that, if $p\notin \gamma^s_+$, then $\omega^+_+\preccurlyeq \omega_-(p)\preccurlyeq\omega_+(p)\preccurlyeq \omega^+_-$. In particular, $$\begin{array}{c} \omega^+_-=\max\{ \omega_-(p), \omega_+(p), p\in\Sigma\}\\ \omega^+_+=\min\{ \omega_-(p), \omega_+(p), p\in\Sigma\} \end{array} $$ This shows that $\omega_-(p)$ and $\omega_+(p)$ satisfy the first item of the definition of $X$-admissibility. The other two items correspond to several cases whose proofs are very similar; let us just present one of them: Let $p=S(t)$ so that $(\omega_+(p))_0\in\{A_0,A_1\}$.
Then there is a decreasing sequence $t_n\to t$ so that $S(t_n)\notin W^s(\sigma)$ and $$[\omega_+(p)]_n=[\omega^n]_n$$ where $\omega^n$ denotes $\omega_-(S(t_n))=\omega_+(S(t_n))$. Then $\omega_+(p)=\inf \omega^n$. On the other hand, $S(t_n)$ is a point out of $W^s(\sigma)$ contained in $A_0\cup A_1$, and thus in $\Sigma_1$. Proposition~\ref{p.order} implies that $\omega^n\prec \omega^-_-$, finishing this case, and the proof. \end{proof} \subsection{Realizing $X$-admissible itineraries} The aim of this section is to prove \begin{prop}\label{p.realisacao} Given any $\omega\in {\mathcal A}_X$ there is $p\in\Sigma$ so that $$\omega\in\{\omega_+(p),\omega_-(p)\}.$$ \end{prop} \begin{rema}\label{r.realisacao} Proposition~\ref{p.order} implies that any two points satisfying the conclusion of Proposition~\ref{p.realisacao} belong to the same stable leaf. \end{rema} Proposition~\ref{p.realisacao} is a direct consequence of Lemma~\ref{l.realisacao} below: \begin{lema}\label{l.realisacao} Given any $\omega\in {\mathcal A}_X$, the set $\Omega_n$ of points $p$ in $\Sigma$ so that $$[\omega]_n\in \{[\omega_+(p)]_n,[\omega_-(p)]_n\}$$ is a non-empty compact subset of $\Sigma$. \end{lema} Assuming Lemma~\ref{l.realisacao}, the sequence $\Omega_n$ is a nested sequence of non-empty compact sets, and any $p\in\bigcap\Omega_n$ satisfies $\omega\in\{\omega_+(p),\omega_-(p)\}.$ Note that, indeed, this intersection is exactly the stable leaf through $p$, according to Proposition~\ref{p.order}. It remains to prove Lemma~\ref{l.realisacao}. \subsection{Proof of Lemma~\ref{l.realisacao}} For any itinerary $\omega\in{\mathcal A}_X$ we denote by $\Omega_n(\omega)$ the set of points $q\in\Sigma$ so that $[\omega]_n\in \{[\omega_-(q)]_n,[\omega_+(q)]_n\}$. \begin{lema}\label{l.itinerarios-contantes} Let $\omega\in {\mathcal A}_X$ be such that there is some $p\in\Sigma$ for which $\omega\in\{\omega_-(p),\omega_+(p)\}$.
Fix $n\in{\mathbb N}$ and denote by $\Omega_n(\omega)$ the set of points $q\in\Sigma$ so that $[\omega]_n\in \{[\omega_-(q)]_n,[\omega_+(q)]_n\}$. Then $\Omega_n(\omega)$ is the closure of a connected component of $\Sigma\setminus\Gamma_n$, where $$\Gamma_n\stackrel{\scriptscriptstyle\rm def}{=} \bigcup_{i=0}^{n-1} P^{-i}(\gamma^s_+\cup\gamma^s_-\cup P^{-1}(\gamma^s_+)).$$ \end{lema} \begin{proof} First notice that $\Gamma_n$ consists of the union of finitely many stable leaves. Consider an unstable segment $S\colon[0,1]\to\Sigma$ whose interior is disjoint from $\Gamma_n$, and having its end points on $\Gamma_n$. Let $\Omega_n$ be the closure of the connected component of $\Sigma\setminus \Gamma_n$ containing $S((0,1))$. Then for any $0\leq i <n$, $P^i(S((0,1)))$ is well defined and disjoint from $\gamma^s_\pm$ and from $P^{-1}(\gamma^s_+)$. In other words, $P^i(S((0,1)))$ is contained in one of the regions defining the alphabet. Thus $[\omega_-(S(t))]_n$ does not depend on $t\in(0,1)$ and is equal to $[\omega_+(S(0))]_n$ and $[\omega_-(S(1))]_n$. One deduces that $[\omega_-]_n$ and $[\omega_+]_n$ are equal and constant on the interior of $\Omega_n$, that $[\omega_-]_n$ takes the same value on one of the boundary stable leaves, and $[\omega_+]_n$ on the other boundary stable leaf. This proves that $\Omega_n(\omega)$ is a union of such closures of connected components $\Omega_n$. Fix $\Omega_n\subset \Omega_n(\omega)$ and let $q\notin \Omega_n$. If $q$ is not in the same region $\{A_0,A_1,B_0,B_1\}$ as $\Omega_n$ then $[\omega_\pm(q)]_n$ is not $[\omega]_n$. Otherwise, there is an unstable segment (still denoted by $S$) in this region (hence disjoint from $\gamma^s_+\cup\gamma^s_-\cup P^{-1}(\gamma^s_+)$), joining $q$ to a point $p$ in the interior of $\Omega_n$. The interior $\operatorname{int}(S)$ of the segment $S$ crosses the boundary of $\Omega_n$, that is, crosses $\Gamma_n$. Let $i$ be the smallest integer so that $\operatorname{int}(S)\cap P^{-i}(\gamma^s_+\cup\gamma^s_-\cup P^{-1}(\gamma^s_+))\neq\emptyset$.
Then $P^{i-1}(S)$ is contained in the closure of one of the regions $\{A_0,A_1,B_0,B_1\}$, but $P^i(S)$ is not. This implies that the two end points of $P^i(S)$ are not in the same region of $\{A_0,A_1,B_0,B_1\}$. This implies that $[\omega_+(q)]_i$ and $[\omega_-(q)]_i$ are different from $[\omega]_i$, proving that $q\notin \Omega_n(\omega)$ and ending the proof. \end{proof} \begin{proof}[Proof of Lemma~\ref{l.realisacao}] The proof goes by induction. We want to prove that, for every $n\geq 0$ and every $\omega\in{\mathcal A}_X$, $\Omega_n(\omega)$ is the closure of one connected component of $\Sigma\setminus \Gamma_n$. Let us check this is true for $n=0$. Each itinerary of length $0$ is one letter of our alphabet, which corresponds to a connected component of $\Sigma\setminus\Gamma_0$, and its closure is the one we announced. We now prove it also holds for $n=1$. Assume for instance that $(\omega)_0=A_0$. Thus $\Omega_0(\omega)=\bar A_0$, and $P(\Omega_0(\omega))$ is a cuspidal triangle starting at $q_1$ and ending at $\gamma^s_+$. Now, by definition of ${\mathcal A}_X$ one has $$\omega_+(\gamma_+^s)\preccurlyeq\omega\preccurlyeq\omega_-(\gamma^s_-).$$ As the first letter of $\omega$ is the same as the first letter of $\omega_+(\gamma^s_+)$ one gets $$\mathfrak{S}(\omega_+(\gamma^s_+))=\omega_+(q_1)\preccurlyeq\mathfrak{S}(\omega).$$ In particular $(\omega)_1=(\mathfrak{S}(\omega))_0$ either is strictly bigger than or is equal to $(\omega_+(q_1))_0$. In both cases $P(\Omega_0(\omega))$ intersects $\Omega_0(\mathfrak{S}(\omega))$, proving that $\Omega_1(\omega)$ is not empty. Now Lemma~\ref{l.itinerarios-contantes} asserts that it is the closure of a connected component of $\Sigma\setminus \Gamma_1$, proving the induction hypothesis in that case. The cases $(\omega)_0=A_1,B_0,B_1$ are very similar. We assume now that the induction hypothesis has been proved for $i=0,\dots, n$. Consider $\omega\in{\mathcal A}_X$.
By the induction hypothesis, $\Omega_n(\omega)$ is the closure of one connected component of $\Sigma\setminus \Gamma_n$. We split the proof into cases. \noindent\underline{Case 1:} $\gamma^s_+$ and $\gamma^s_-$ are not contained in the compact set $\Omega_n(\omega)$. Then $P$ is defined on $\Omega_n(\omega)$ and the boundary $\partial P(\Omega_n(\omega))$ is contained in $\Gamma_{n-1}$. This implies that $P(\Omega_n(\omega))$ crosses every stable leaf in the closure of a connected component of $\Sigma\setminus \Gamma_{n-1}$, which has to be $\Omega_{n-1}(\mathfrak{S}(\omega))$ (by Lemma~\ref{l.itinerarios-contantes}). By the induction hypothesis $\Omega_n(\mathfrak{S}(\omega))$ is a connected component of $\Sigma\setminus \Gamma_n$, and is contained in $\Omega_{n-1}(\mathfrak{S}(\omega))$. This implies that $P(\Omega_n(\omega))$ intersects $\Omega_{n}(\mathfrak{S}(\omega))$. Thus $\Omega_{n+1}(\omega)$ is not empty, and therefore is a connected component of $\Sigma\setminus \Gamma_{n+1}$ by Lemma~\ref{l.itinerarios-contantes}. \noindent\underline{Case 2:} We now assume that one of the boundary components of $\Omega_n(\omega)$ is $\gamma^s_+$ and the other is not $\gamma^s_-$. Up to reversing the orientation, we assume that the positively oriented unstable segments starting at $\gamma^s_+$ enter $\Omega_n(\omega)$. Then $P(\Omega_n(\omega))$ is a cuspidal triangle starting at $q_1$ and ending on a stable leaf in $\Gamma_{n-1}$. Now $\Omega_{n-1}(\mathfrak{S}(\omega))$ is (by the induction hypothesis) the closure of a connected component of $\Sigma\setminus \Gamma_{n-1}$, which contains $P(\Omega_n(\omega))$ and thus contains $q_1$. Now $\Omega_{n}(\mathfrak{S}(\omega))$ is (by the induction hypothesis) the closure of a connected component of $\Sigma\setminus \Gamma_{n}$. \begin{clai}$P(\Omega_n(\omega))\cap \Omega_{n}(\mathfrak{S}(\omega)) \neq \emptyset$. \end{clai} \begin{proof} Note that the first letter of $\omega$ is $A_0$.
As $\omega\in{\mathcal A}_X$ one has $\omega_+(\gamma^s_+)\preccurlyeq\omega$. As their first letters are equal, this implies $$\mathfrak{S}(\omega_+(\gamma^s_+))=\omega_+(q_1)\preccurlyeq\mathfrak{S}(\omega).$$ Consider a positively oriented unstable segment $S\colon[0,1]\to \Sigma$ crossing every stable leaf in $\Omega_{n-1}(\mathfrak{S}(\omega))$ and containing $q_1=S(t_0)$. Then $S$ crosses $\Omega_n(\mathfrak{S}(\omega))$ at some point $S(t)$. Recall that $\omega_+(q_1)\preccurlyeq\mathfrak{S}(\omega)$. Recall also that the function $[\omega_+(S(t))]_n$ is non-decreasing with $t$, so that $S^{-1}(\Omega_n(\mathfrak{S}(\omega)))\subset [0,1]$ is not below $S^{-1}(\Omega_n(\omega_+(q_1)))$. As a consequence $S^{-1}(\Omega_n(\mathfrak{S}(\omega)))\subset [0,1]$ either coincides with $S^{-1}(\Omega_n(\omega_+(q_1)))$ or lies strictly above $t_0$. In both cases $P(\Omega_n(\omega))$ intersects $\Omega_{n}(\mathfrak{S}(\omega))$, concluding. \end{proof} This implies that $\Omega_{n+1}(\omega)$ is not empty, and Lemma~\ref{l.itinerarios-contantes} concludes that case. \noindent\underline{Case 3:} We now assume that one of the boundary components of $\Omega_n(\omega)$ is $\gamma^s_-$ and the other is not $\gamma^s_+$. Up to reversing the orientation, we assume that the positively oriented unstable segments starting at $\gamma^s_-$ enter $\Omega_n(\omega)$. Then $P(\Omega_n(\omega))$ is a cuspidal triangle starting at $q_2$ and ending on a stable leaf in $\Gamma_{n-1}$. Note that the first letter of $\omega$ is $B_0$. As $\omega\in{\mathcal A}_X$ one has $\omega_+(\gamma^s_-)\preccurlyeq\omega$. As their first letters are equal, this implies $$\mathfrak{S}(\omega_+(\gamma^s_-))=\omega_+(q_2)\preccurlyeq\mathfrak{S}(\omega).$$ The proof now follows in a similar way to Case 2. \noindent\underline{Case 4:} Finally, we assume that the two boundary components of $\Omega_n(\omega)$ are $\gamma^s_-$ and $\gamma^s_+$.
This implies that $\Omega_n(\omega)$ is the closure of $\Sigma_1$ or of $\Sigma_2$, and thus $P(\Sigma)$ crosses $\Omega_n(\mathfrak{S}(\omega))$, concluding. Now the proof of Lemma~\ref{l.realisacao} (and thus of Proposition~\ref{p.realisacao}) is complete. \end{proof} \subsection{Itineraries, conjugacy, and topological equivalence} \begin{teo}\label{L.T} Consider $X,Y\in {\mathcal O}_1$ and let $f_X,f_Y$ be the corresponding $1$-dimensional dynamics. Assume that the itineraries satisfy $\omega_-(\gamma^s_-,X)=\omega_-(\gamma^s_-,Y)$ and $\omega_-(\gamma^s_+,X)=\omega_-(\gamma^s_+,Y)$. Then there is an orientation preserving map of ${\mathbb S}^1$ which is a conjugation between $f_X$ and $f_Y$. \end{teo} \begin{proof} We have seen that the sets ${\mathcal A}_X$ and ${\mathcal A}_Y$ of admissible itineraries for $X$ and $Y$ are completely determined by $\omega_-(\gamma^s_\pm, X)$ and $\omega_-(\gamma^s_\pm, Y)$, respectively. Thus $${\mathcal A}_X={\mathcal A}_Y\stackrel{\scriptscriptstyle\rm def}{=}{\mathcal A}.$$ Now for any $\omega\in{\mathcal A}$, Proposition~\ref{p.realisacao} and Remark~\ref{r.realisacao} imply that there are unique points $x_\omega\in {\mathbb S}^1_X$ and $y_\omega\in{\mathbb S}^1_Y$ so that $\omega\in \{\omega_-(x_\omega,f_X),\omega_+(x_\omega,f_X)\}$ and $\omega\in \{\omega_-(y_\omega,f_Y),\omega_+(y_\omega,f_Y)\}$. We define $h(x_\omega)=y_\omega$. This defines a bijection from ${\mathbb S}^1_X$ to ${\mathbb S}^1_Y$, which sends $\gamma^s_{i,X}$ to $\gamma^s_{i,Y}$. The punctured circle is an interval endowed with an order (coming from the positive orientation of the unstable segments), and Proposition~\ref{p.order} implies that $\omega\mapsto x_\omega$ and $\omega\mapsto y_\omega$ are increasing. This implies that $h\colon x_\omega\mapsto y_\omega$ is an increasing bijection from ${\mathbb S}^1_X\setminus\{\gamma^s_{+,X}\}$ onto ${\mathbb S}^1_Y\setminus\{\gamma^s_{+,Y}\}$.
An increasing bijection between two intervals is a homeomorphism, so that $h$ is a homeomorphism. The fact that $h$ is a conjugacy now comes from the fact that $x_{\mathfrak{S}(\omega)}= f_X(x_\omega)$ for $x_\omega\notin \{\gamma^s_{\pm,X}\}$. \end{proof} {Recall that the discontinuities are fixed points for the conjugacy $h$ constructed above.} We finish by proving Theorem \ref{igual-a-t.conjugado}, which establishes that the restrictions of $X,\, Y \in \mathcal{O}_1$ to their maximal invariant sets are topologically equivalent by a conjugacy close to the identity if, and only if, $X$ and $Y$ have the same itineraries. \begin{proof} ({\em{of Theorem \ref{igual-a-t.conjugado}}}) {($\Rightarrow$) A conjugation $\mathbb{H}$ between $X|_{\Lambda_X}$ and $Y|_{\Lambda_Y}$ induces a topological conjugation $h:{\mathbb S}^1 \to {\mathbb S}^1$ between $f_X:{\mathbb S}^1 \to {\mathbb S}^1$ and $f_Y:{\mathbb S}^1 \to {\mathbb S}^1$. Let $\varepsilon=\frac{1}{2}\min\{d(D_1,\Sigma),d(D_2,\Sigma)\}$ (see Section \ref{ss-the-open-set-1} for the definitions of $D_1$ and $D_2$). If $d(\mathbb{H},id)<\varepsilon$, then $h$ is orientation preserving and $h(c_{i,X})=c_{i,Y}$ for $i \in \{-,+\}$, and therefore $X$ and $Y$ have the same itineraries.\\ ($\Leftarrow$) Fix $p \in \Sigma \cap \Lambda_X$. Because $X$ and $Y$ have the same itineraries, Lemma \ref{l.realisacao} together with Theorem \ref{L.T} ensure the existence of $q \in \Sigma \cap \Lambda_Y$ such that for all $n \in \mathbb{N}$, points in $\gamma^s_{X}(P_X^{-n}(p))$ and $\gamma^s_{Y}(P_Y^{-n}(q))$ have the same itineraries. Thus $C_n=P_Y^{n}(\gamma^s_{Y}(P_Y^{-n}(q)))$, $n \in \mathbb{N}$, is a sequence of compact sets in $\gamma^s_Y(q)$ converging to a single point, and we define $H(p)=\underset{n \in \mathbb{N}}{\cap} C_n$. Lemma \ref{lemma.ad} implies that $H$ is onto. For $p_1,p_2 \in \Sigma$ such that $\gamma^s_X(p_1) \neq \gamma^s_X(p_2)$, it is easy to see that $H(p_1)\neq H(p_2)$.
For $p_1$ and $p_2$ in the same leaf, there exists $n_1 \in \mathbb{N}$ for which $P^{-n_1}(p_1)$ and $P^{-n_1}(p_2)$ belong to different connected components of $\Sigma \setminus \{\gamma^s_{-,X},\gamma^s_{+,X}\}$, which implies that $\gamma^s_{X}(P_X^{-n_1}(p_1)) \cap \gamma^s_X(P_X^{-n_1}(p_2)) = \emptyset$ and hence the corresponding compact sets $C_n(p_1)$ and $C_n(p_2)$ are disjoint for all $n>n_1$, proving that $H$ is injective. The existence of unstable cone fields around $\Sigma$ and the continuity of $h$ and $h^{-1}$ give the continuity of $H$ and $H^{-1}$. To finish, for $p \in \Sigma \cap \Lambda_X$, consider $\alpha \subset \mathcal{O}(p)$ and $\beta \subset \mathcal{O}(H(p))$, curves parametrized by arc length, joining $p$ to $D^1 \cup D^2$ and $H(p)$ to $D^1 \cup D^2$ respectively, in such a way that $\alpha(t_1),\beta(t_2) \notin D^1 \cup D^2$ for all $0<t_1<\ell(\alpha)$ and $0<t_2<\ell(\beta)$. For $\rho$ the ratio of the length of $\beta$ to the length of $\alpha$, we define $\mathbb{H}(\alpha(t))=\beta(\rho t)$. Extend this map to segments of trajectories leaving $D^1 \cup D^2$ and returning to $\Sigma$ in the same way as before. Then $\mathbb{H}$ defines a topological equivalence. Note that $\mathbb{H}(D_i)=D_i$ and $\mathbb{H}(\Sigma)=\Sigma$, which implies that $d(\mathbb{H},id)<\varepsilon$, and we have the result.} \end{proof} \section{Preliminaries}\label{s-preliminaries} In this section we recall some definitions and results proved elsewhere that will be used in this text. \subsection{Basic topological notions} Let $\mathfrak{X}^r(M)$, $\,r\geq 1,$ be the space of vector fields defined on a compact boundaryless manifold $M$, with the $C^r$ topology. For $X \in \mathfrak{X}^r(M)$, we denote by $X^t$ the flow of $X$. If $U\subset M$, $\overline{U}$ denotes the closure of $U$, $\operatorname{int} U$ the interior of $U$, and $\partial U$ the boundary of $U$.
An invariant compact set $\Lambda$ of $X$ is \emph{transitive} if there is $p\in\Lambda$ whose positive orbit is dense in $\Lambda$ (this notion is called \emph{topological ergodicity} by some authors). Recall that a sequence $\{p_i\}$ is an $\varepsilon$-pseudo-orbit for $X$ if there are $t_i>1$ such that $d(p_{i+1},X^{t_i}(p_i))<\varepsilon$ for all $i$. A point $p$ is \emph{chain recurrent} if for each $\varepsilon>0$ there is an $\varepsilon$-pseudo-orbit $p=p_0,p_1,\dots,p_n=p$. The chain recurrent set ${\mathcal R}(X)$ is the set of all chain recurrent points. It is a compact invariant set. An invariant compact set $\Lambda$ is \emph{chain recurrent} or \emph{chain transitive} if for every $\varepsilon>0$ it admits an $\varepsilon$-pseudo-orbit $\{x_i\}$, $x_i\in \Lambda$, dense in $\Lambda$. Two points $p,q$ are chain equivalent if for any $\varepsilon>0$ there are $\varepsilon$-pseudo-orbits from $p$ to $q$ and from $q$ to $p$. Conley theory \cite{Co} proves that chain equivalence is an equivalence relation whose equivalence classes are called the \emph{chain recurrence classes}. They define a partition of ${\mathcal R}(X)$ into invariant pairwise disjoint compact sets. Given a region $U$, the maximal invariant set in $U$ is the set $\Lambda_U$ of points $p$ whose whole orbit is contained in $U$, that is: $$\Lambda_U= \bigcap_{t\in{\mathbb R}} X^t(U).$$ A region $U$ is \emph{attracting} for a vector field $X$ on $M$ if its boundary is a codimension $1$ submanifold transverse to $X$ and $X$ is entering $U$.
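As an aside, the discrete-time analogue of the $\varepsilon$-pseudo-orbit condition above is easy to experiment with numerically. The following Python sketch is purely illustrative: the doubling map and the sample points are arbitrary choices of ours, not objects from this paper.

```python
# Illustrative discrete-time analogue of an epsilon-pseudo-orbit:
# a sequence (p_0, ..., p_n) with d(p_{i+1}, f(p_i)) < eps for all i.

def is_pseudo_orbit(points, f, eps):
    """Return True if `points` is an eps-pseudo-orbit for the map f."""
    return all(abs(points[i + 1] - f(points[i])) < eps
               for i in range(len(points) - 1))

def doubling(x):
    """Doubling map on the circle [0, 1) (arbitrary example map)."""
    return (2 * x) % 1.0

# A genuine orbit is an eps-pseudo-orbit for every eps > 0 ...
orbit = [0.1, 0.2, 0.4, 0.8, 0.6]
assert is_pseudo_orbit(orbit, doubling, eps=1e-9)

# ... while allowing eps-jumps lets the sequence return to its start,
# hinting at why chain recurrence is much weaker than recurrence.
loop = [0.1, 0.2, 0.4, 0.8, 0.6, 0.1]   # last step jumps 0.2 -> 0.1
assert is_pseudo_orbit(loop, doubling, eps=0.15)
assert not is_pseudo_orbit(loop, doubling, eps=0.05)
```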
\begin{defi}\begin{itemize}\item A compact invariant set $\Lambda_X$ of $X^t$ is \emph{attracting} if there exists an open neighborhood $U \supset \Lambda_X$ which is an attracting region and such that $\Lambda_X$ is the maximal invariant set in $U$, that is: $$\Lambda_X= \underset{t >0}{\bigcap}X^t(U).$$ \item We say that $\Lambda_X$ is \emph{an attractor} if it is attracting and chain transitive (in particular, $\Lambda_X$ is a chain recurrence class). \item An invariant compact set $K$ is a \emph{quasi-attractor} if it is a chain recurrence class and admits a basis of neighborhoods which are attracting regions: there is a decreasing sequence $U_i\subset U_{i-1}$ of attracting regions so that $K=\bigcap_i U_i$. \end{itemize} \end{defi} Notice that the existence of an attractor is not ensured a priori, but Conley proved the existence of quasi-attractors in any attracting region (for a vector field on a compact manifold). \begin{defi} Two vector fields $X$ and $Y$ defined on $M$ are topologically equivalent if there exists a homeomorphism $h:M \to M$ satisfying: \begin{enumerate} \item $h(\mathcal{O}_X(p))=\mathcal{O}_Y(h(p))$ for all $p \in M$. \item for all $p \in M$ and $\varepsilon>0$, there exists $\delta>0$ such that for $t \in (0,\delta)$ there is $s \in (0, \varepsilon)$ satisfying $h(X^t(p))=Y^s(h(p))$. \end{enumerate} \end{defi} \subsection{Fake horseshoe}\label{ss-fake-horseshoe} The {\em{fake horseshoe map}} consists of a sequence of operations on the unit square. First, stretch in the $y$ direction by more than a factor of two, then compress in the $x$ direction by more than a factor of two. Finally, fold the resulting rectangle and fit it back onto the square, overlapping at the top and bottom, and not quite reaching the ends to the left and right (and with a gap in the middle), as illustrated in Figure~\ref{f.horseshoe-1}.
The only difference with the usual horseshoe is the way the resulting folded rectangle is fitted back onto the square: the bottom of the folded rectangle fits back like the top of the starting square. \begin{figure}[th] \centering \includegraphics[scale=0.25]{f-horseshoe-1.pdf} \hspace{0.5cm} \includegraphics[scale=0.20]{f-horseshoe-3-esticado.pdf} \hspace{0.5cm} \includegraphics[scale=0.25]{f-fakehorseshoe.pdf} \caption{ The usual horseshoe and a fake horseshoe}\label{f.horseshoe-1} \end{figure} \subsection{Singular points} A point $\sigma$ is singular if $X$ vanishes at $\sigma$. The set of singular points of $X$ is denoted by $Zero(X)$. A point $\sigma\in Zero(X)$ is hyperbolic if the real parts of the eigenvalues of $DX(\sigma)$ do not vanish. \begin{defi} \label{d.lorenzlike} A singularity $\sigma$ of $X$ is Lorenz-like if the eigenvalues $\lambda_i \in \mathbb{R},\, i \in \{ss, s, u\},$ of $DX(\sigma)$ satisfy $\lambda_{ss}<\lambda_{s}<0<-\lambda_s<\lambda_{u}<-\lambda_{ss}$. \end{defi} \begin{defi}\label{naoressonancia} In addition, $\sigma$ is non-resonant if, for all $N>2$ and any choice of nonnegative integers $m_1,m_2,m_3$ with $2\le \overset{3}{\underset{j=1}{\sum}}m_j < N$, we have $\overset{3}{\underset{j=1}{\sum}} m_j\lambda_j \neq \lambda_i$. \end{defi} The Hartman--Grobman theorem asserts that, in a small neighborhood of a hyperbolic singular point, a vector field is locally topologically equivalent to its linear part. Sternberg then provided conditions guaranteeing that this local equivalence is indeed of class $C^2$: \begin{teo} Let $X$ be a vector field and let $n \in \mathbb{Z}^+$ be given.
Then there exists an integer $N = N(n) \ge 2$ such that: if $A$ is a real non-singular $d \times d$ matrix with eigenvalues $\lambda_1, \ldots ,\lambda_d$ satisfying $$\overset{d}{\underset{j=1}{\sum}} m_j\lambda_j \neq \lambda_i \,\,\,\mbox{ whenever } \,\,\, 2\le \overset{d}{\underset{j=1}{\sum}}m_j < N,$$ for any choice of nonnegative integers $m_1,m_2, \ldots, m_d$ and any $i$, and if $X(v)=Av+\psi(v)$ where $\psi$ is of class $C^N$ for small $\|v\|$ with $\psi(0)=0$ and $\partial_v\psi(0)=0$, then there exists a $C^N$ diffeomorphism from a neighborhood of $v = 0$ to a neighborhood of $\xi = 0$ that defines a topological conjugation between $X$ and $A$. \end{teo} Furthermore, the linearizing diffeomorphism depends continuously, in the $C^2$ topology, on the vector field $X$; see for instance \cite[Corollary in the Appendix]{Pa84}, which provides a stronger statement. \subsection{The shift map $\mathfrak{S}$} We consider ${\mathbb X}=\{A_0,A_1,B_0,B_1\}^{\mathbb N}$ the space of positive infinite words in the alphabet $\{A_0,A_1,B_0,B_1\}$. We endow the alphabet $\{A_0,A_1,B_0,B_1\}$ with the total order $$A_0<A_1<B_0<B_1.$$ We endow ${\mathbb X}$ with the corresponding lexicographic order, that we denote by $\prec$ (and $\preccurlyeq$ for the non-strict order). The shift map $\mathfrak{S} : {\mathbb X} \to {\mathbb X}$ is defined as $$ (T_j)_{j\geq 0} \in {\mathbb X} \mapsto \mathfrak{S} (T_j)_{j\geq 0}= (T_{j+1})_{j\geq 0}.$$ We also define the {\em{star map}} (denoted by $\star$) on ${\mathbb X}$ as follows: given a sequence $w =(w_0, w_1,\cdots ) \in {\mathbb X}$ and a letter $L \in \{A_0,A_1,B_0,B_1\}$ $$ L \star w \stackrel{\scriptscriptstyle\rm def}{=} (L, w_0, w_1, \cdots). $$ \subsection{Hyperbolic notions} \begin{defi} Let $X$ be a vector field of a manifold $M$.
A compact invariant set $\Lambda\subset M\setminus Zero(X)$ is hyperbolic if the tangent bundle $TM|_\Lambda$ over $\Lambda$ admits a splitting $$TM|_\Lambda= E^s\oplus {\mathbb R}\cdot X\oplus E^u$$ so that \begin{itemize} \item the bundles $E^s$ and $E^u$ are continuous and invariant under the natural action of the derivative of the flow; \item there is $T>0$ so that for any point $p\in\Lambda$ and for any unit vectors $u\in E^s(p)$ and $v\in E^u(p)$ one has $$\|DX^T(u)\|<\frac 12 \,\, \mbox{ and } \,\, \|DX^T(v)\|>2$$ (one says that $E^s$ is \emph{uniformly contracted} and $E^u$ is \emph{uniformly expanded}). \end{itemize} \end{defi} A hyperbolic invariant compact set $K$ is called a \emph{hyperbolic basic set} if it is transitive and admits a neighborhood on which it is the maximal invariant set. If $K$ is a hyperbolic set we denote $E^{cs}=E^s\oplus {\mathbb R} X$ and $E^{cu}=E^u\oplus {\mathbb R} X$, and these bundles are called, respectively, the weak stable and weak unstable bundles. \begin{defi}\label{defsinghip} A compact invariant set $\Lambda \subset M$ for $X^t$ is \emph{partially hyperbolic} if there exist a continuous and $DX^t$-invariant splitting $T_{\Lambda}M=E^{s} \oplus E^{cu}$ and constants $\lambda\in\,]0,1[$, $K>0$ such that for all $x\in \Lambda$ and $t\geq 0$ the following inequalities hold: \begin{enumerate} \item[(a)] $||DX^t|_{E^s_{x}}||\cdot \|DX^{-t}|_{E^{cu}_{X_t(x)}}\|\le K\lambda^t \quad \mbox{(domination)};$ \item[(b)] $\|DX^t|_{E^{s}_x}\| \le K\lambda^t \quad \mbox{(uniform contraction). }$ \end{enumerate} In addition, if $E^{cu}_{\Lambda}$ is volume expanding, that is, $|\det(DX^t|_{E^{cu}_x})|>Ke^{\lambda t}$ for all $x \in \Lambda$ and $t \ge 0$, then $\Lambda$ is, by definition, a {\em{singular hyperbolic}} set. \end{defi} Note that any hyperbolic set is also partially hyperbolic. Notice also that an invariant compact set $\Lambda_X$ disjoint from $Zero(X)$ is partially hyperbolic if and only if it is hyperbolic.
In dimension $3$, if $\Lambda_X$ is partially hyperbolic, then every singular point in $\Lambda_X$ is a Lorenz-like singularity, see \cite{MPP04}. \subsection{Invariant manifolds and foliations} Let $\Lambda$ be a compact invariant set for the flow $X^t$ and $p \in \Lambda$. The stable set $W^{s}_X(p)$ and unstable set $W^{u}_X(p)$ at $p$ are defined by \begin{eqnarray*} W^{s}_X(p)&=&\{q \in M\,: \mbox{dist}(X^t(q),X^t(p)) \underset{t \to +\infty}{\longrightarrow} 0\}\\ W^{u}_X(p)&=&\{q \in M\,: \mbox{dist}(X^t(q),X^t(p)) \underset{t \to -\infty}{\longrightarrow} 0\}. \end{eqnarray*} If $\mathcal{O}=\mathcal{O}_X(p) \subset \Lambda$ denotes the orbit of $p\in M$, the stable set of the orbit of $p$ is $W^{s}_X(\mathcal{O})= \underset{t \in \mathbb{R}}{\bigcup}W^{s}(X^t(p))$. Analogously, the unstable set of the orbit of $p$ is $W^{u}_X(\mathcal{O})= \underset{t \in \mathbb{R}}{\bigcup}W^{u}(X^t(p))$. If $\Lambda$ is a hyperbolic set and $p$ is a point in an orbit ${\mathcal O}$ in $\Lambda$, then $W^s(p)$ and $W^s({\mathcal O})$ are manifolds of the same regularity as $X$, and are tangent (at $p$) to the stable bundle $E^s(p)$ and to $E^{cs}(p)$, respectively. If $\Lambda$ is a partially hyperbolic set, the stable sets of the points in $\Lambda$ are submanifolds of the same regularity as $X$, tangent to $E^s$ and varying continuously with the point. Assume now that $U$ is an attracting region and that $\Lambda=\Lambda_U$, the maximal invariant set in $U$, is a partially hyperbolic attracting invariant compact set. Let $E^s$ denote the stable bundle defined over $\Lambda$. Then the bundle $E^s$ always extends in a unique way to $U$ as an invariant bundle (still denoted by $E^s$) tangent to a topological foliation.
Recently Ara\'ujo and Melbourne \cite{ararujomelbourne} provided bunching conditions ensuring the smoothness of the stable foliation, as stated below: \begin{teo}\label{holonomia} Let $M$ be a differentiable Riemannian manifold of dimension 3, $X$ a $C^r$ vector field, $U$ an attracting region and $\Lambda \subset M$, the maximal invariant set in $U$, a partially hyperbolic attracting set. Let $\{W^s_x\}$ denote the stable foliation in $U$. Let $q \in [0,r]$ and suppose that there exists $t>0$ such that \begin{equation}\|DX^t|_{E^s(x)}\|\cdot \|DX^{-t}|_{E^{cu}(X^t(x))}\|\cdot \|DX^t|_{E_x^{cu}}\|^q<1 \label{eq.cont.}\quad \mbox{ for all $x \in \Lambda$}. \end{equation} Then the foliation $\{W^s_x\}$ is of class $C^q$. \end{teo} \subsection{Cone fields} \noindent Let $f:M \to M$ be a $C^1$ diffeomorphism and $T_xM = F^u(x) \oplus F^s(x)$ a continuous splitting. We define the stable and unstable cone fields of size $\gamma< 1$ as \begin{eqnarray*} C^s_{\gamma}(x)&=& \{(v_1,v_2) \in F^u(x) \oplus F^s(x)\,: \|v_1\|\le \gamma \|v_2\|\},\\ C^u_{\gamma}(x)&=&\{(v_1,v_2) \in F^u(x) \oplus F^s(x)\,: \|v_2\| \le \gamma \|v_1\|\}. \end{eqnarray*} We say that $C^s_{\gamma}$ (respectively $C^u_{\gamma}$) is strictly invariant by $Df^{-1}$ (respectively by $Df$) if there is $\alpha < 1$ such that $Df^{-1}(C^s_{\gamma}(f(x))) \subset C^s_{\alpha\gamma}(x)$ (respectively $Df(C^u_{\gamma}(x)) \subset C^u_{\alpha\gamma}(f(x))$). \section{Topological dynamics: the attractor and the chain recurrence classes}\label{s-the-open-set-2} \subsection{Quasi-attractor, two-sided, up and down Lorenz attractors} \begin{defi}\begin{itemize}\item We say that $X\in {\mathcal O}_1$ exhibits a \emph{two-sided (geometric model of) Lorenz attractor} if, for any $Y$ in a neighborhood of $X$, the maximal invariant set $\Lambda_Y$ in the attracting region $U$ is transitive and has a non-trivial intersection with both stable separatrices $W^s_+(\sigma,Y)$ and $W^s_-(\sigma, Y)$.
\item We say that $X$ exhibits an \emph{up-Lorenz attractor} if it admits a geometric model of Lorenz attractor (in the usual meaning) disjoint from the component $W^s_-$. \item We say that $X$ exhibits a \emph{down-Lorenz attractor} if it admits a geometric model of Lorenz attractor disjoint from the component $W^s_+$. \end{itemize} \end{defi} The aim of this section is to show that, under a certain condition on the expansion in the unstable direction, the vector fields $X$ in an open and dense subset exhibit either a two-sided, an up-, or a down-Lorenz attractor, and the three cases appear. \subsection{Quasi-attractor} We start by noticing that there is no ambiguity on what the attractor can be: \begin{prop}\label{p.quase-atrator} For any $X\in {\mathcal O}_1$ the chain recurrence class of the singularity $\sigma$ is the unique quasi-attractor in the attracting region $U$. \end{prop} As $U$ is an attracting region, Conley theory implies that $U$ contains at least one quasi-attractor. A quasi-attractor admits, by definition, arbitrarily small invariant neighborhoods. Now Proposition~\ref{p.quase-atrator} is a direct consequence of the next lemma: \begin{lema}\label{l.quase-atrator} The stable manifold of $\sigma$ is dense in $U$. \end{lema} The proof is identical to the proof of the similar statement for the classical geometric model of the Lorenz attractor, and is very simple. We include it here for completeness. \begin{proof} As any orbit cuts the cross-section $\Sigma$, we just need to prove that, for any open set $O\subset \Sigma$, there is $n>0$ so that $P^n(O)\cap (\gamma^s_+\cup \gamma^s_-)\neq \emptyset$. We consider a segment $S\subset O$ tangent to the unstable cone ${\mathcal C}^u$. By item 9) there is $\lambda>1$ so that the vectors in ${\mathcal C}^u$ are expanded by a factor larger than $\lambda$.
Thus: \begin{itemize} \item either $S$ cuts $\gamma^s_+\cup \gamma^s_-$ and we are done, \item or $P(S)$ is a segment tangent to ${\mathcal C}^u$ whose length is at least $\lambda$ times the length of $S$. \end{itemize} Iterating the process, one gets that either there is $n$ so that $P^n(S)$ cuts $\gamma^s_+\cup \gamma^s_-$, or $P^n(S)$ is a segment tangent to ${\mathcal C}^u$ whose length tends to infinity. As $\gamma^s_+$ and $\gamma^s_-$ are leaves of the foliation ${\mathcal F}^s$, which is a fibration by segments on the annulus $\Sigma$, there is a bound for the length of any segment in the unstable cone which does not cut every leaf, ending the proof. \end{proof} \subsection{Iterating unstable segments and transitivity properties} We fix a vector field $X\in{\mathcal O}_1$; $U$ is the attracting region in the definition, $\Sigma$ is the global cross section, and $P$ the first return map. First notice that the closures $\bar P^n(\Sigma)$, $n\geq 0$, form a nested family of compact subsets of $\Sigma$. Let $\Lambda_P$ denote the intersection of the $\bar P^n(\Sigma)$: $$\Lambda_P=\bigcap_{n\geq 0} \bar P^n(\Sigma).$$ \begin{lema} The compact set $\Lambda_P$ is the intersection of the maximal invariant set $\Lambda_X$ with $\Sigma$. Furthermore, $\Lambda_X$ is the union of $\sigma$ with the saturation by the flow of $\Lambda_P$. \end{lema} \begin{proof} Any orbit of $\Lambda_X$, but $\sigma$, cuts $\Sigma$. Any orbit but $W^u(\sigma)$ cuts $\Sigma$ for an infinite sequence of negative times (tending to $-\infty$): indeed the $\alpha$-limit set of such an orbit is not reduced to $\sigma$, and therefore cuts $\Sigma$. Thus every orbit of $\Lambda_X$ not in $W^u(\sigma)$ cuts $\Sigma$ in $\bigcap_{n\geq 0} P^n(\Sigma)$. In $\Lambda_P$ we consider the closures $\bar P^n(\Sigma)$; this amounts to adding $\Sigma\cap W^u(\sigma)$ to $\bigcap_{n\geq 0} P^n(\Sigma)$. The proof follows from these observations.
\end{proof} Note that for every $n>0$, the closure $\bar P^n(\Sigma)$ consists of the union of $2^n$ pinched essential annuli; each annulus admits a finite set of pinched (cuspidal) points, its boundary is tangent to the unstable cone, and the intersection of two annuli is contained in this finite set of pinched points. Finally, the thickness of these annuli is exponentially decreasing with $n$. For any nested sequence of such pinched annuli in $\bar P^n(\Sigma)$, $n>0$, the intersection is an essential circle. In particular the topological dimension of $\Lambda_P$ is $1$. Moreover the boundary of any annulus is tangent to the unstable cone, and to the image by $DP^n$ of this unstable cone. The intersection of the iterates of the unstable cones converges to a well defined continuous, invariant line field $E^u$ on $\Lambda_P$. Each of the circles is tangent to $E^u$. \begin{lema} Let $X$ be a vector field in ${\mathcal O}_1$ and $P$ the first return map on the cross section $\Sigma$. If, for every segment $S\subset \Sigma$ transverse to the stable foliation ${\mathcal F}^s$, the union of stable leaves through the iterates $P^n(S)$, $n\geq 0$, covers an open and dense subset of $\Sigma$, then the maximal invariant set $\Lambda_X$ is transitive. \end{lema} One easily checks that it is enough to prove the following: \begin{lema} Under the same hypothesis, given any non-empty open subsets $U\cap \Lambda_P$, $V\cap\Lambda_P$ ($U$ and $V$ open sets of $\Sigma$) there is $n>0$ so that $$P^n(V\cap \Lambda_P)\cap (U\cap \Lambda_P)\neq \emptyset.$$ \end{lema} \begin{proof} The open subset $U\cap \Lambda_P$ contains a segment $S_U$ tangent to the unstable direction. There is $\varepsilon >0$ so that the segments of stable leaves through $S_U$ (shrinking $S_U$ if necessary) are contained in $U$. Let $W^s_\varepsilon(S_U)$ be the union of these segments.
Note that for any $n>0$, $P^{-n}$ is defined on $S_U$ up to a finite set (the first $n$ iterates of the unstable manifold of $\sigma$). Now $P^{-n}(W^s_\varepsilon(S_U))$ contains the saturation by ${\mathcal F}^s$ of $P^{-n}(S_U)$, which contains open segments in $\Lambda_P$. Consider now a segment $S_V$ contained in $V\cap \Lambda_P$. By assumption there is $m>0$ so that $P^m(S_V)$ intersects stable leaves in $P^{-n}(W^s_\varepsilon(S_U))$. In other words $P^{m+n}(S_V)$ cuts $W^s_\varepsilon(S_U)$ at points which belong to $\Lambda_\Sigma$ (because $S_V$ is contained in $\Lambda_\Sigma$, which is positively invariant by $P$). Thus these intersection points belong to $U\cap \Lambda_\Sigma$, concluding the proof. \end{proof} We now present some tools ensuring that the orbit of an \emph{unstable segment} (i.e.\ a segment in $\Sigma$ tangent to the unstable cone) cuts almost every stable leaf. \begin{lema}\label{l.cusptogamma} Let $S$ be an unstable segment joining a cuspidal point $q_i$ in $\bar P(\Sigma)$ to a point $q_\pm$ in $\gamma^s_\pm$. Assume that $q_i$ belongs to $\Sigma_i$. Then there is $n>0$ so that the union of the iterates $P^i(S)$, $i\in\{0,\dots,n\}$, cuts every stable leaf but a finite number. \end{lema} \begin{proof}The unstable cone is oriented, inducing an orientation on every unstable segment. Assume for instance that $S$ starts (for this orientation) at $q_1\in\Sigma_1$ and ends at $q_-\in \gamma^s_-$: in particular, $S\subset \Sigma_1$. Now, $P(S)$ is an unstable segment ending at $q_1$ and of length $\ell(P(S))\geq \lambda \ell(S)$. So $S_1= P(S)\cup S$ is an unstable segment of length at least twice $\ell(S)$ and ending at $q_-$. We define by induction a finite sequence $S_n$ as follows: if $S_{n-1}$ is contained in $\Sigma_1$ then $S_n=P(S_{n-1})\cup S$. Otherwise, the sequence ends and $S_n$ is not defined. Thus for every $n$, $S_n$ is an unstable segment ending at $q_-$ and of size at least $n$ times the size of $S$.
In particular, this sequence ends at some $n_0$, and $S_{n_0}$ is not contained in $\Sigma_1$: it cuts $\gamma^s_+$. Now $P(S_{n_0})$ cuts every stable leaf up to the one of $q_1$. \end{proof} \begin{rema}In Lemma~\ref{l.cusptogamma} the unique stable leaf which may not intersect the iterates $P^n(S)$, $n\geq 0$, is the leaf through $q_1$. Furthermore, if $q_1$ is distinct from $q_2$, then further iterates of $S$ cut the leaf through $q_1$, so that every leaf intersects some iterate of $S$. \end{rema} \begin{lema}\label{l.cortaestavefixo} Assume now that $\Sigma_i$ contains a fixed point $p_i$ of $P$. Let $S$ be an unstable segment whose interior cuts the stable leaf through $p_i$. Then there is $n_0$ so that for any $n\geq n_0$, the iterate $P^n(S)$ cuts every stable leaf but the one through $q_i$. \end{lema} \begin{proof}By the inclination lemma (also known as the $\lambda$-lemma), the positive iterates $P^n(S)$ accumulate on the unstable manifold $W^u(p_i)$. Some of these iterates will cross $\Sigma_i$ completely (crossing both $\gamma^s_+$ and $\gamma^s_-$). Then the next iterate will cross the whole essential pinched annulus $P(\Sigma_i)$, ending the proof. \end{proof} \subsection{Fixing the expansion rate larger than the golden number} For any flow $X\in {\mathcal O}_1$ the return map $P$ is defined on the cross section $\Sigma$ minus the two stable leaves $\gamma^s_+$ and $\gamma^s_-$. So we get two rectangles, and both of them are sent into $\Sigma$ as pinched essential annuli. Therefore the expansion rate $\lambda$ of $P$ in the unstable cone cannot be required to be uniformly larger than $2$. However we will see that we can require a uniform expansion rate arbitrarily close to $2$. Figure \ref{f.fixingexpansion} displays the main features of the return map $P$.
\begin{figure}[th] \centering \includegraphics[scale=.25]{f-ss-fixingexpansion.pdf} \caption{Main features of the return map $P$.}\label{f.fixingexpansion} \end{figure} The proof of the main topological properties consists in iterating unstable segments (tangent to the unstable cone) and estimating the lengths of the components of these iterates. In this section we choose a rate $\lambda$ which will allow us to estimate these lengths. Let us start with a very elementary observation: consider a segment $[a,c]\subset {\mathbb R}$ and pick $b\in ]a,c[$. Then one of the lengths $\lambda^2\ell([a,b])$ or $\lambda^2\ell([b, c])$ is strictly larger than $\ell([a,c])$ if $\lambda>\sqrt{2}$. That is, the choice of $\lambda>\sqrt2$ ensures the growth of a segment split into two components after two iterations, provided those components are not split again. This is the traditional expansion rate guaranteeing the transitivity of a Lorenz attractor. We note that there are examples of one-dimensional Lorenz maps with expansion rate smaller than $\sqrt{2}$ that are not transitive; such a map has an isolated periodic orbit. See Figure \ref{f.naotransitivo}. \begin{figure}[th] \centering \hspace{-0.3cm} \includegraphics[scale=.25]{f-naotransitiv.pdf} \hspace{0.3cm} \includegraphics[scale=.25]{f-naotransitivo.pdf} \caption{The map at left is transitive but not robustly so; the map at right is not transitive.}\label{f.naotransitivo} \end{figure} Consider now the maximum of the lengths $\lambda\ell( [a,b])$ and $\lambda^2\ell([b,c])$. Notice that the golden ratio $\varphi= \frac{1+\sqrt5}2$ belongs to $]1,2[$.
\begin{lema}\label{l.golden} For any $a<b<c$ and any $\lambda\geq \varphi$ one gets $$\max\{\lambda\ell([a,b]),\lambda^2\ell([b,c])\}\geq \frac{\lambda}{\varphi}\ell([a,c]).$$ \end{lema} \begin{proof} Taking $\alpha=\frac{\ell([b,c])}{\ell([a,c])}$, we only have to show $$\max\{(1-\alpha),\lambda\alpha \}\geq \frac{1}{\varphi}.$$ If $\alpha \ge \frac{\varphi - 1}{\varphi}$, we have $\lambda \alpha \ge \varphi\left(\frac{\varphi -1 }{\varphi} \right)= \varphi - 1 = \frac{1}{\varphi}$, using $\varphi^2=\varphi+1$. On the other hand, if $\alpha < \frac{\varphi - 1}{\varphi}$, we get $1-\alpha > 1- \frac{\varphi - 1}{\varphi} = \frac{1}{\varphi}$. \end{proof} We consider now the open subset ${\mathcal O}_\varphi\subset {\mathcal O}_1$ consisting of flows $X$ so that the expansion rate $\lambda$ of the return map in the unstable cone is larger than $\varphi$. See Section~\ref{s.familia-explicita}, which exhibits explicit vector fields in ${\mathcal O}_\varphi$, showing that ${\mathcal O}_\varphi$ is not empty. \subsection{Cutting ${\mathcal O}_\varphi$ in regions}\label{ss.region} Consider ${\mathcal H}^i\subset {\mathcal O}_1$, $i=1, 2$, the subset corresponding to the vector fields $X$ for which the point $q_i$ (first intersection with $\Sigma$ of the unstable separatrix $W^u_i(\sigma)$) belongs to $\gamma^s_+\cup\gamma^s_-$. In other words, $X$ belongs to ${\mathcal H}^i$ if $\sigma$ admits a homoclinic loop for the unstable separatrix $W^u_i(\sigma)$ so that the homoclinic loop cuts $\Sigma$ at a unique point $q_i$. We split each ${\mathcal H}^i$ into two parts, ${\mathcal H}^i={\mathcal H}^i_+\cup {\mathcal H}^i_-$, according to $\{q_i\in\gamma^s_+\}$ and $\{q_i\in\gamma^s_-\}$ respectively. Figure \ref{f.p.Hi1mais} displays the features of a vector field $X \in {\mathcal H}^1$.
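The reduced inequality $\max\{(1-\alpha),\lambda\alpha\}\geq 1/\varphi$ from the proof of Lemma~\ref{l.golden} is elementary arithmetic; the following short script (a numerical sanity check only, not part of the argument) scans $\alpha\in(0,1)$ at the extremal rate $\lambda=\varphi$.

```python
import math

# Golden ratio: phi**2 = phi + 1, hence 1/phi = phi - 1.
phi = (1 + math.sqrt(5)) / 2

def lemma_golden_reduced(alpha, lam):
    """Reduced inequality from the proof of Lemma l.golden."""
    return max(1 - alpha, lam * alpha) >= 1 / phi - 1e-12

# The inequality holds for every alpha in (0, 1) when lam = phi,
# with equality exactly at alpha = (phi - 1) / phi.
assert all(lemma_golden_reduced(k / 1000, phi) for k in range(1, 1000))
```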
It is well known that a homoclinic connection corresponds to a codimension $1$ phenomenon, as expressed below: \begin{lema}\label{l.homo} The subsets ${\mathcal H}^i$ are codimension $1$ submanifolds of ${\mathcal O}_1$. \end{lema} \begin{figure}[th] \centering \includegraphics[scale=0.15]{f-pHi1.pdf} \hspace{0.2cm} \includegraphics[scale=0.15]{f-pHi1-.pdf} \caption{(a) $X \in {\mathcal H}^1_+ $ and (b) $X \in {\mathcal H}^1_- $.} \label{f.p.Hi1mais} \end{figure} The submanifolds ${\mathcal H}^1$ and ${\mathcal H}^2$ cut ${\mathcal O}_1$ into $4$ regions ${\mathcal O}_\varphi^{\omega_1,\omega_2}$, $\omega_i\in\{+,-\}$, so that $\omega_i=-$ if and only if $q_i\in \Sigma_i$. Then: \begin{lema}\label{f-Lema-Out-H1eH2} Let $X$ be a vector field in ${\mathcal O}_1$ out of ${\mathcal H}^1$ and ${\mathcal H}^2$. Then the first return map $P$ admits a fixed point in $\Sigma_1$ (resp. $\Sigma_2$) if and only if $X$ belongs to ${\mathcal O}_\varphi^{+,-}\cup {\mathcal O}_\varphi^{+,+}$ (resp. ${\mathcal O}_\varphi^{-,+}\cup {\mathcal O}_\varphi^{+,+}$). \end{lema} \begin{proof} If $q_1\in\Sigma_2$, the cuspidal point of the pinched annulus $P(\Sigma_1)$ belongs to $\Sigma_2$. Thus $P(\Sigma_1)$ crosses $\Sigma_1$ in a (hyperbolic) Markov way, leading to a unique fixed point in $\Sigma_1$, see Figure \ref{f-Lema-Out-H1eH2}(a). If $q_1\in \Sigma_1$ then $\Sigma_1\cap P(\Sigma_1)$ has $2$ connected components, which are cuspidal triangles $T_1^+$ (bounded by $\gamma^s_+$) and $T_1^-$ (bounded by $\gamma^s_-$). Assume for instance that there is a fixed point $p$ in $T_1^-$, the other case being analogous. Then the region $R$ bounded by $\gamma^s_+$ and $W^s(p)$ is invariant by $P$, see Figure \ref{f-Lema-Out-H1eH2}. But $W^u(p)$ contains a segment joining $p$ to $\gamma^s_+$. This segment is expanded by $P$, contradicting the invariance of $R$. This ends the proof.
\begin{figure}[th] \centering \includegraphics[scale=0.15]{f-Lema-Out-H1eH2.pdf} \hspace{0.3cm} \includegraphics[scale=0.15]{f-Lema-Out-H1eH2-SegundoCaso.pdf} \caption{$X \in {\mathcal O}_{1}\setminus ({\mathcal H}^{1}\cup {\mathcal H}^{2})$} \label{f-Lema-Out-H1eH2} \end{figure} \end{proof} Consider now the region ${\mathcal O}_\varphi^{+,+}$: every vector field $X\in {\mathcal O}_\varphi^{+,+}$ has exactly one fixed point $p_1$ in $\Sigma_1$ and one fixed point $p_2$ in $\Sigma_2$. Let $W^s_i$ be the stable leaf through $p_i$. Note that $q_1$ and $p_2$ are both in $\Sigma_2$, and in the same way $q_2$ and $p_1$ are both in $\Sigma_1$. This remark leaves open the possibility, \emph{a priori}, that $q_1$ and $p_2$ belong to the same stable leaf, or that $q_2$ and $p_1$ belong to the same stable leaf. This corresponds to our next splitting of the region ${\mathcal O}_\varphi^{+,+}$. Let us denote by ${\mathcal H}{\mathcal E}_i$ the subset of ${\mathcal O}_\varphi^{+,+}$ corresponding to the vector fields $X$ for which $q_i\in W^s_j$, $j\neq i$. In other words, for $X\in {\mathcal H}{\mathcal E}_i$ the unstable separatrix of $\sigma$ corresponding to $q_i$ is a heteroclinic connection with $p_j$, see Figure \ref{f-HE1eHE2}. As in Lemma \ref{l.homo} one has: \begin{figure}[th] \centering \includegraphics[scale=0.15]{f-q2emSigma1q2emWsp1.pdf} \hspace{0.6cm} \includegraphics[scale=0.15]{f-q1emSigma2eemWsp2.pdf} \caption{(a) $X \in {\mathcal H}{\mathcal E}_{1}$ and (b) $X \in {\mathcal H}{\mathcal E}_{2}$} \label{f-HE1eHE2} \end{figure} \begin{lema}\label{l.hetero} The subsets ${\mathcal H}{\mathcal E}_i$ are codimension $1$ submanifolds of ${\mathcal O}_1$. \end{lema} Consider $X\in{\mathcal O}_\varphi^{+,+}\setminus ({\mathcal H}{\mathcal E}_1\cup{\mathcal H}{\mathcal E}_2)$. Then $\Sigma\setminus (W^s_1\cup W^s_2)$ has exactly $2$ connected components: one, denoted by $\Sigma_+$, contains $\gamma^s_+$, and the other, denoted by $\Sigma_-$, contains $\gamma^s_-$.
We denote by ${\mathcal L}^+$ the open subset of ${\mathcal O}_\varphi^{+,+}$ where both points $q_1,q_2$ belong to $\Sigma_+$, and by ${\mathcal L}^-$ the open subset where both points $q_1,q_2$ belong to $\Sigma_-$. We denote by $\widetilde{{\mathcal O}_\varphi}^{+,+}$ the open subset where $q_1$ and $q_2$ belong to different components $\Sigma_\pm$ and $\Sigma_\mp$. We denote by ${\mathcal L}^{+,-}$ the union $${\mathcal L}^{+,-}={\mathcal O}_\varphi^{-,-}\cup {\mathcal O}_\varphi^{+,-}\cup{\mathcal O}_\varphi^{-,+}\cup\widetilde{{\mathcal O}_\varphi}^{+,+}\cup \left(({\mathcal H}^1\cup{\mathcal H}^2)\setminus \left(({\mathcal H}^1_+\cap {\mathcal H}^2_+) \cup ({\mathcal H}^1_-\cap {\mathcal H}^2_-)\right)\right).$$ Recall that ${\mathcal H}^i$ corresponds to a homoclinic loop and ${\mathcal H}^i_\pm$ distinguishes the up or down stable separatrix of $\sigma$ involved in that loop.
\section{Introduction} \subsection{Background} Multiview learning has been an important topic in the machine learning community \cite{Wu2012661,Sun20132031,Yu20142431,Lu2015,Zha2015,FakeriTabrizi2015117,Liu20151233,liu2015supervised,sindhwani2008rkhs,wang2015regularized}. In traditional machine learning problems, we usually assume that a data point has a feature vector to represent its input information. For example, in an image recognition problem, we can extract a visual feature vector from an image using a texture descriptor \cite{Petrov20131499,Mohanty20131011,Mala201580,Yadav2015101,Luo2013709,Gui20143126,Wang2014,jiang2015manifold}. In this case, the texture is one view of the image; besides the texture view, we can also extract feature vectors from other views, including shape and color. Another example is the problem of classification of scientific articles, where we may extract a feature vector from the main text of the article \cite{Long20151833,Hogenboom201546,Picard201595,La2015929,Koopman2015,Chen2015473,Feng2015109,KumarNagwani20152589}. However, the main text is just one view of the article, and we can also extract features from other views, such as the abstract, the reference list, etc. Multiview learning argues that we should learn from more than one view to represent the data and construct a predictor. The motivation for multiview learning is that a single-view data representation is usually incomplete, and different views can provide complementary information for the learning problem. In the problem of multiview learning, the input of a data point is not one single feature vector of one single view, but multiple feature vectors representing different views. The target of multiview learning is to learn a predictor that takes the multiple view feature vectors and predicts one single output for a data point.
The problem of multiview learning can be classified into two types: supervised multiview learning and unsupervised multiview learning. \begin{itemize} \item Supervised multiview learning refers to the problem of learning from a data set where both the multiview input and the output are available for each data point \cite{Li20142040,Jiang20141635,Hajmohammadi2014195}. The output is usually a class label or a continuous response. In this case, the learning problem is to build a predictive model from the training data set to predict the output of an input data point, with the help of the input-output pairs of the training set. \item Unsupervised multiview learning refers to the problem of clustering a set of data points whose multiview inputs are given \cite{Feng2013343,Sublemontier2013,Zhao20137}. In this problem, the outputs of the data points are not available. \end{itemize} In this paper, we investigate the problem of supervised multiview learning and propose a novel algorithm to solve it. The proposed method is based on the assumption that the different views of a data point are generated from one single intact feature vector, and that each view is generated by a linear transformation. We try to recover the intact feature vector of each data point from its multiview feature vectors, guided by its corresponding output, i.e., its binary class label. \subsection{Relevant works} There are several existing multiview learning methods; we review the state of the art as follows. \begin{itemize} \item Zhang et al. \cite{Zhang2008752} proposed to use a local learning (LL) method for the multiview learning problem, which designs a local predictive model for each data point based on the multiview inputs. The local predictive model is learned on the nearest neighbors of a data point. \item Sindhwani et al.
\cite{sindhwani2005co} proposed a co-training algorithm (CT) for multiview learning problems to improve the classification performance of each view. This method is based on multiview regularization, and on agreement and smoothness over both labeled and unlabeled data points. \item Quadrianto \cite{Quadrianto2011425} proposed a multiview learning algorithm to solve the problem of view disagreement (VD), i.e., the situation where different views of one single data point do not belong to the same class. This method uses a conditional entropy criterion to find the disagreement among different views, and removes the data points with view disagreement from the training set. \item Zhai \cite{Zhai2012} proposed a multiview metric learning method with global consistency and local smoothness (GL) for the multiview learning problem with a partially labeled data set. This method simultaneously considers both global consistency and local smoothness, by assuming that the different views have a shared latent feature space, and by imposing global consistency and local structure on the learning procedure. \item Chen et al. \cite{Chen20122365} proposed a statistical subspace multiview representation method (SS), leveraging both multiview dependencies and supervision information. This method is based on a subspace Markov network of latent multiview variables, and assumes that the multiple views and the class labels are conditionally independent. The algorithm is based on the maximization of the data likelihood and the minimization of the classification error. \end{itemize} \subsection{Contributions} In this paper, we propose a novel supervised multiview learning method. This method is based on the assumption that a single discriminative intact feature vector underlies the different multiview inputs. Under this assumption, although there are different views of one single data point, one single intact feature vector exists for the data point.
This intact feature vector is assumed to be discriminative, i.e., it represents the class information of each data point. Moreover, the feature vector of each view of a data point can be obtained from the intact vector by applying a linear view-conditional transformation to the intact feature vector. In this way, if we learn the discriminative intact feature vector of each training data point, we can learn a classifier in the intact space with the help of the class labels of the training data points. To this end, we propose a novel method to learn the hidden intact feature vectors, the view-conditional transformation matrices, and the classifier in the intact space simultaneously. We define an intact feature vector for each data point and a transformation matrix for each view. The feature vector of one view of each data point can be reconstructed as the product of the corresponding transformation matrix and the intact feature vector. The reconstruction error for each view of each data point is measured by the Cauchy error estimator \cite{Idan20141108,Gallagher20151264}. To learn the optimal intact feature vectors and view-conditional transformation matrices, we propose to minimize the Cauchy errors. Moreover, due to the assumption that the intact feature vectors are discriminative, we argue that we can design a classifier in the intact space which minimizes the classification error. Thus we also propose to learn a linear classifier in the intact space, and use the hinge loss to measure the classification error over the training set in the intact space \cite{Chen201580,Charuvaka201563}. To learn the optimal classifier parameter and the intact feature vectors, we propose to minimize the hinge loss with regard to both the classifier parameter and the intact feature vectors.
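The two error measures used by the method, namely the Cauchy estimator $\log(1+\|\bfx - W\bfz\|_2^2/c^2)$ and the hinge loss $\max(0, 1 - y\,\bfomega^\top \bfz)$, both defined formally in Section~\ref{sec:method}, can be sketched as small NumPy functions (an illustration only; the shapes and function names are ours):

```python
import numpy as np

def cauchy_error(x, W, z, c=1.0):
    """Cauchy reconstruction error log(1 + ||x - W z||^2 / c^2) for one view."""
    r = x - W @ z
    return float(np.log(1.0 + (r @ r) / c**2))

def hinge_loss(y, w, z):
    """Hinge loss max(0, 1 - y * w^T z) of a linear classifier in the intact space."""
    return max(0.0, 1.0 - y * float(w @ z))

# A perfect reconstruction gives zero Cauchy error, and a confidently
# correct prediction (y * w^T z >= 1) gives zero hinge loss.
W = np.eye(3)
z = np.array([1.0, 0.0, 2.0])
assert cauchy_error(W @ z, W, z) == 0.0
assert hinge_loss(+1.0, np.array([1.0, 0.0, 0.0]), z) == 0.0
```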
To model the problem, we propose a joint optimization problem for learning the intact vectors, the view-conditional transformation matrices, and the classifier parameter vector. The objective function of this problem is composed of two error terms and three regularization terms. The first error term is the view reconstruction error, measured by the Cauchy estimator over all data points and views. The second error term is the classification error over the intact feature vectors of all training data points, measured by hinge losses. The three regularization terms are squared $\ell_2$ norm terms over the intact feature vectors, the view-conditional matrices, and the classifier parameter vector. The purpose of imposing the squared $\ell_2$ norms on these variables is to reduce the complexity of the learned outputs. To minimize the proposed objective function, we adopt an alternating optimization strategy, i.e., when the objective function is minimized with regard to one variable, the other variables are fixed. The optimization with regard to each variable is conducted by a gradient descent algorithm. The contributions of this paper are threefold: \begin{enumerate} \item We propose a novel supervised multiview learning framework based on the simultaneous learning of intact feature vectors, view-conditional transformation matrices, and a classifier parameter vector. \item We build a novel optimization problem for this learning problem, considering both the view reconstruction problem and the classifier learning problem. \item We develop an iterative algorithm to solve this optimization problem based on the alternating optimization strategy and the gradient descent algorithm. \end{enumerate} \subsection{Paper organization} This paper is organized as follows: In Section \ref{sec:method}, the proposed method for supervised multiview learning is introduced.
In this section, we first model the problem as the minimization of an objective function, and then solve it with an iterative algorithm. In Section \ref{sec:experiments}, the proposed iterative algorithm is evaluated. We first analyze its sensitivity to the parameters, then compare it to some state-of-the-art algorithms, and finally test the running time of the proposed algorithm. In Section \ref{sec:conclusion}, we give the conclusion of this paper. \section{Methods} \label{sec:method} In this section, we introduce the proposed supervised multiview learning method. \subsection{Problem modeling} We assume we are dealing with a supervised binary classification problem with multiview data. A training data set of $n$ data points is given, $X = \{\theta_1, \cdots, \theta_n\}$, where $\theta_i=(\bfx_i^1, \cdots, \bfx_i^m, y_i)$ is the $i$-th data point. The information of each data point is composed of feature vectors of $m$ views and a binary class label $y_i$. Here $\bfx_i^j\in \mathbb{R}^{d_j}$ is the $d_j$-dimensional feature vector of the $j$-th view of the $i$-th data point, and $y_i\in \{+1,-1\}$ is the binary class label of the $i$-th data point. The problem of supervised multiview learning is to learn a predictive model from the training set which can predict a binary class label from the multiview input of a test data point. We assume there is an intact vector $\bfz_i\in \mathbb{R}^d$ for the $i$-th data point, and its $j$-th view $\bfx_i^j$ can be reconstructed by a linear transformation, \begin{equation} \begin{aligned} \bfx_i^j \leftarrow W_j \bfz_i, \end{aligned} \end{equation} where $W_j\in \mathbb{R}^{d_j\times d}$ is the view-conditional linear transformation matrix of the $j$-th view. Please note that the view-conditional transformation matrix of a given view is shared by all the data points.
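The generative assumption above, that each observed view is a linear image of one hidden intact vector with the matrix $W_j$ shared across data points, can be sketched as follows (a toy NumPy illustration; the dimensions and random data are our own choices, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

d = 5          # dimension of the intact space (illustrative choice)
dims = [8, 3]  # per-view feature dimensions d_1, d_2 (illustrative)

# One intact vector z_i per data point; one matrix W_j per view,
# shared by all data points of that view.
z_i = rng.normal(size=d)
W = [rng.normal(size=(dj, d)) for dj in dims]

# Each observed view x_i^j is (approximately) W_j z_i.
views = [Wj @ z_i for Wj in W]
assert [v.shape for v in views] == [(8,), (3,)]
```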
By learning both $W_j$ and $\bfz_i$, we can recover the hidden intact vector $\bfz_i$ of the $i$-th data point and use it for the classification problem. To this end, we propose to minimize the reconstruction error. The reconstruction error is measured by the Cauchy error estimator, $E(\bfx_i^j, W_j \bfz_i)$, \begin{equation} \begin{aligned} E(\bfx_i^j, W_j \bfz_i) = \log \left ( 1 + \frac{\left \| \bfx_i^j - W_j \bfz_i \right \|_2^2}{c^2} \right ). \end{aligned} \end{equation} This error estimator has been shown to be robust, and the constant $c$ provides a scale offset. We propose to minimize this error estimator over all data points and all views with regard to both $\bfz_i, i=1, \cdots, n$, and $W_j, j=1, \cdots, m$, \begin{equation} \label{equ:reconstruction} \begin{aligned} \min_{\bfz_i|_{i=1}^n,W_j|_{j=1}^m} \left \{ \sum_{i=1}^n \sum_{j=1}^m E(\bfx_i^j, W_j \bfz_i) = \sum_{i=1}^n \sum_{j=1}^m \log \left ( 1 + \frac{\left \| \bfx_i^j - W_j \bfz_i \right \|_2^2}{c^2} \right ) \right \} \end{aligned} \end{equation} Moreover, we assume that the intact feature vectors of the data points are discriminative and present the class information; thus the intact feature vectors should minimize a classification loss function over the data set. We propose to learn the intact feature vector of the $i$-th data point by jointly learning a linear classifier to predict its class label, $y_i$. The classifier is designed as a linear function, \begin{equation} \begin{aligned} y_i \leftarrow \bfomega^\top \bfz_i \end{aligned} \end{equation} The usage of a linear function as the classifier is motivated by the work of Fan and Tang \cite{fan2010enhanced}, who proposed to use a linear classifier to maximize the area under the ROC curve (AUC) for the problems of imbalanced learning and cost-sensitive learning.
Fan and Tang \cite{fan2010enhanced} found that a linear classifier used to maximize AUC searches for an optimal solution in a very constrained space, and enhanced the maximum-AUC linear classifier by extending its search in the solution space and improving the way the structure of the classifier is used. Since the linear classifier has been proven effective in optimizing AUC by Fan and Tang \cite{fan2010enhanced}, we are inspired to use it to learn an effective classifier in the intact vector space. The classification error can be measured by the hinge loss function, \begin{equation} \begin{aligned} L(y_i, \bfomega^\top \bfz_i) = \max(0, 1 - y_i \bfomega^\top \bfz_i). \end{aligned} \end{equation} The optimization of this loss function yields a large-margin classifier. To learn the optimal classifier and the discriminative intact feature vectors, we propose to minimize the hinge loss of the classification results over all the training data points, \begin{equation} \label{equ:classifier} \begin{aligned} \min_{\bfz_i|_{i=1}^n, \bfomega} \left \{ \sum_{i=1}^n L(y_i, \bfomega^\top \bfz_i) = \sum_{i=1}^n \max(0, 1 - y_i \bfomega^\top \bfz_i) \right \} \end{aligned} \end{equation} Moreover, to prevent over-fitting, we propose to minimize the squared $\ell_2$ norms of the variables to regularize the learning of $\bfz_i$, $W_j$, and $\bfomega$, \begin{equation} \label{equ:regular} \begin{aligned} \min_{\bfz_i|_{i=1}^n, W_j|_{j=1}^m,\bfomega} \left \{ R(\bfz_i|_{i=1}^n, W_j|_{j=1}^m,\bfomega) = \sum_{i=1}^n \|\bfz_i\|_2^2 + \sum_{j=1}^m \|W_j\|_2^2 + \|\bfomega\|_2^2 \right \}.
\end{aligned} \end{equation} Our overall learning problem is obtained by combining the view-conditional reconstruction problem in (\ref{equ:reconstruction}), the classifier learning problem in the intact space in (\ref{equ:classifier}), and the regularization in (\ref{equ:regular}), \begin{equation} \label{equ:objective} \begin{aligned} \min_{\bfz_i|_{i=1}^n,W_j|_{j=1}^m,\bfomega} & \left \{ \sum_{i=1}^n \sum_{j=1}^m E(\bfx_i^j, W_j \bfz_i) + \alpha \sum_{i=1}^n L(y_i, \bfomega^\top \bfz_i) + \gamma R(\bfz_i|_{i=1}^n, W_j|_{j=1}^m,\bfomega) \right . \\ &\left . = \sum_{i=1}^n \sum_{j=1}^m \log \left ( 1 + \frac{\left \| \bfx_i^j - W_j \bfz_i \right \|_2^2}{c^2} \right ) \right .\\ & + \alpha \sum_{i=1}^n \max(0, 1 - y_i \bfomega^\top \bfz_i)\\ & \left . + \gamma \left ( \sum_{i=1}^n \|\bfz_i\|_2^2 + \sum_{j=1}^m \|W_j\|_2^2 + \|\bfomega\|_2^2 \right ) \vphantom{\sum_{i=1}^n \sum_{j=1}^m E(\bfx_i^j, W_j \bfz_i) } \right \}, \end{aligned} \end{equation} where $\alpha$ is a tradeoff parameter balancing the view-conditional reconstruction terms and the classification error terms, and $\gamma$ is a tradeoff parameter balancing the view-conditional reconstruction terms and the regularization terms. By optimizing this problem, we can learn intact feature vectors which represent the multiview inputs of the data points and are also discriminative. \subsection{Optimization} To solve the optimization problem in (\ref{equ:objective}), we use an alternating optimization strategy, conducted in an iterative algorithm. When one variable is updated, the others are fixed; after a variable is updated, it is fixed while the other variables are updated in turn. In the following subsections, we discuss how to update each variable. \subsubsection{Updating $\bfz_i$} When we want to update $\bfz_i$, we only consider this single variable, while fixing all other variables.
Thus we have the following optimization problem, \begin{equation} \label{equ:zi} \begin{aligned} \min_{\bfz_i} & \left \{ \sum_{j=1}^m \log \left ( 1 + \frac{\left \| \bfx_i^j - W_j \bfz_i \right \|_2^2}{c^2} \right ) + \alpha \max(0, 1 - y_i \bfomega^\top \bfz_i) + \gamma \|\bfz_i\|_2^2 \right \}. \end{aligned} \end{equation} The second term $\max(0, 1 - y_i \bfomega^\top \bfz_i)$ is convex but not differentiable, and it is hard to optimize directly. Thus we rewrite it as follows, \begin{equation} \label{equ:hinge} \begin{aligned} \max(0, 1 - y_i \bfomega^\top \bfz_i) = \left\{\begin{matrix} 1 - y_i \bfomega^\top \bfz_i, &if~ 1 - y_i \bfomega^\top \bfz_i > 0\\ 0, &otherwise. \end{matrix}\right. \end{aligned} \end{equation} We define an indicator variable, $\beta_i$, to indicate which of the above cases is true, \begin{equation} \label{equ:beta} \begin{aligned} \beta_i = \left\{\begin{matrix} 1, &if~ 1 - y_i \bfomega^\top \bfz_i > 0\\ 0, &otherwise, \end{matrix}\right. \end{aligned} \end{equation} and rewrite (\ref{equ:hinge}) as follows, \begin{equation} \label{equ:hinge1} \begin{aligned} \max(0, 1 - y_i \bfomega^\top \bfz_i) = \beta_i \left ( 1 - y_i \bfomega^\top \bfz_i \right ) \end{aligned} \end{equation} Please note that $\beta_i$ is also a function of $\bfz_i$; however, we first update it using the $\bfz_i$ solved in the previous iteration, and then fix it to update $\bfz_i$ in the current iteration. In this way, (\ref{equ:zi}) is rewritten as \begin{equation} \label{equ:zi1} \begin{aligned} \min_{\bfz_i} & \left \{ \sum_{j=1}^m \log \left ( 1 + \frac{\left \| \bfx_i^j - W_j \bfz_i \right \|_2^2}{c^2} \right ) + \alpha \beta_i \left ( 1 - y_i \bfomega^\top \bfz_i \right ) + \gamma \|\bfz_i\|_2^2 = g(\bfz_i) \right \}, \end{aligned} \end{equation} where $g(\bfz_i)$ is the objective of this optimization problem. To seek the minimum of $g(\bfz_i)$, we use the gradient descent algorithm.
This algorithm updates $\bfz_i$ by descending along the negative gradient of $g(\bfz_i)$, \begin{equation} \label{equ:zi1_gradient} \begin{aligned} \bfz_i \leftarrow \bfz_i - \mu \nabla g(\bfz_i), \end{aligned} \end{equation} where $\mu$ is the descent step, and $\nabla g(\bfz_i)$ is the gradient of $g(\bfz_i)$, given by the partial derivative of $g(\bfz_i)$ with regard to $\bfz_i$, \begin{equation} \label{equ:derivative} \begin{aligned} \nabla g(\bfz_i) &= \frac{\partial g(\bfz_i)}{\partial \bfz_i} = -\sum_{j=1}^m \frac{\frac{2 W_j^\top( \bfx_i^j - W_j \bfz_i )}{c^2}}{\left ( 1 + \frac{\left \| \bfx_i^j - W_j \bfz_i \right \|_2^2}{c^2} \right ) } - \alpha \beta_i y_i \bfomega + \gamma \bfz_i\\ &= -\sum_{j=1}^m \frac{2 W_j^\top( \bfx_i^j - W_j \bfz_i )}{\left ( c^2 + \left \| \bfx_i^j - W_j \bfz_i \right \|_2^2\right )} - \alpha \beta_i y_i \bfomega + \gamma \bfz_i. \end{aligned} \end{equation} By substituting (\ref{equ:derivative}) into (\ref{equ:zi1_gradient}), we have the final updating rule of $\bfz_i$, \begin{equation} \label{equ:zi1_gradient1} \begin{aligned} \bfz_i \leftarrow \bfz_i - \mu \left ( -\sum_{j=1}^m \frac{2 W_j^\top( \bfx_i^j - W_j \bfz_i )}{\left ( c^2 + \left \| \bfx_i^j - W_j \bfz_i \right \|_2^2\right )} - \alpha \beta_i y_i \bfomega + \gamma \bfz_i \right ). \end{aligned} \end{equation} \subsubsection{Updating $W_j$} When we want to optimize $W_j$, we fix all other variables and only consider $W_j$ itself. The optimization problem then becomes \begin{equation} \label{equ:Wj} \begin{aligned} \min_{W_j} & \left \{ \sum_{i=1}^{n} \log \left ( 1 + \frac{\left \| \bfx_i^j - W_j \bfz_i \right \|_2^2}{c^2} \right ) + \gamma \|W_j\|_2^2= f(W_j) \right \}, \end{aligned} \end{equation} where $f(W_j)$ is the objective function of this problem.
To solve this problem, we also update $W_j$ with the gradient descent algorithm, \begin{equation} \label{equ:Wj_gradient} \begin{aligned} W_j \leftarrow W_j - \mu \nabla f(W_j), \end{aligned} \end{equation} where $\nabla f(W_j)$ is the gradient of $f(W_j)$, \begin{equation} \label{equ:f_gradient} \begin{aligned} \nabla f(W_j) &= \frac{\partial f(W_j)}{\partial W_j} = -\sum_{i=1}^{n} \frac{\frac{2(\bfx_i^j - W_j\bfz_i)\bfz_i^\top}{c^2}}{ \left ( 1 + \frac{\left \| \bfx_i^j - W_j \bfz_i \right \|_2^2}{c^2} \right )}+ \gamma W_j\\ & = -\sum_{i=1}^{n} \frac{2(\bfx_i^j - W_j\bfz_i)\bfz_i^\top}{ \left ( c^2 + \left \| \bfx_i^j - W_j \bfz_i \right \|_2^2 \right )} + \gamma W_j. \end{aligned} \end{equation} Substituting (\ref{equ:f_gradient}) into (\ref{equ:Wj_gradient}), we have the final updating rule of $W_j$, \begin{equation} \label{equ:Wj_gradient1} \begin{aligned} W_j \leftarrow W_j - \mu \left ( -\sum_{i=1}^{n} \frac{2(\bfx_i^j - W_j\bfz_i)\bfz_i^\top}{ \left ( c^2 + \left \| \bfx_i^j - W_j \bfz_i \right \|_2^2 \right )}+ \gamma W_j \right ). \end{aligned} \end{equation} \subsubsection{Updating $\bfomega$} When we want to update $\bfomega$ to minimize the objective function of (\ref{equ:objective}), we fix the other variables and only consider $\bfomega$. Thus the problem in (\ref{equ:objective}) is transformed to \begin{equation} \label{equ:omega} \begin{aligned} \min_{\bfomega} & \left \{ \alpha\sum_{i=1}^n \beta_i \left ( 1 - y_i \bfomega^\top \bfz_i \right ) + \gamma \|\bfomega\|_2^2 = h(\bfomega) \right \}. \end{aligned} \end{equation} Please note that $\beta_i$ is actually a function of $\bfomega$. However, similar to the strategy used for $\bfz_i$, we update it according to the $\bfomega$ solved in the previous iteration, and fix it to update $\bfomega$ in the current iteration.
When $\beta_i, i=1, \cdots, n$ are fixed, we update $\bfomega$ to minimize $h(\bfomega)$ by using the gradient descent algorithm, \begin{equation} \label{equ:omega_gradient} \begin{aligned} \bfomega \leftarrow \bfomega - \mu \nabla h(\bfomega), \end{aligned} \end{equation} where $\nabla h(\bfomega)$ is the gradient of $h(\bfomega)$, defined as \begin{equation} \label{equ:h_gradient} \begin{aligned} \nabla h(\bfomega) = \frac{\partial h(\bfomega)}{\partial \bfomega} =- \alpha \sum_{i=1}^n \beta_i y_i \bfz_i + \gamma \bfomega. \end{aligned} \end{equation} By substituting it into (\ref{equ:omega_gradient}), we have the final updating rule for $\bfomega$, \begin{equation} \label{equ:omega_update} \begin{aligned} \bfomega \leftarrow \bfomega - \mu \left ( - \alpha \sum_{i=1}^n \beta_i y_i \bfz_i + \gamma \bfomega \right ). \end{aligned} \end{equation} \subsection{Iterative algorithm} With the updating rules of all the variables in hand, we can design an iterative algorithm for the learning problem. This iterative algorithm has one outer FOR loop and two inner FOR loops. The outer FOR loop corresponds to the main iterations. The two inner FOR loops correspond to the updating of the $n$ intact feature vectors of the $n$ data points, and to the updating of the $m$ view-conditional transformation matrices. The algorithm is given in Algorithm 1. The iteration number $T$ is determined by cross-validation in our experiments. \begin{itemize} \item \textbf{Algorithm 1}. Iterative algorithm for multiview intact and single-view classifier learning (MISC). \item \textbf{Input}: Training data set, $(\bfx_1^1, \cdots, \bfx_1^m, y_1), \cdots, (\bfx_n^1, \cdots, \bfx_n^m, y_n)$. \item \textbf{Input}: Tradeoff parameters, $\alpha$ and $\gamma$. \item \textbf{Input}: Maximum iteration number, $T$. \item \textbf{Initialization}: $\bfz_i^0, i=1,\cdots,n$, $W_j^0,j=1,\cdots, m$ and $\bfomega^0$.
\item \textbf{For $t=1,\cdots, T$} \begin{itemize} \item Update the descent step, $\mu^t \leftarrow \frac{1}{t}$ \item \textbf{For $i=1,\cdots,n$} Update $\beta_i^t$ as follows, \begin{equation} \begin{aligned} \beta_i^t = \left\{\begin{matrix} 1, &if~ 1 - y_i {\bfomega^{t-1}}^\top \bfz_i^{t-1} > 0\\ 0, &otherwise. \end{matrix}\right. \end{aligned} \end{equation} Update $\bfz_i^t$ by fixing $W_j^{t-1}, j=1,\cdots, m$, $\beta_i^{t}$ and $\bfomega^{t-1}$, \begin{equation} \begin{aligned} \bfz_i^t \leftarrow \bfz_i^{t-1} - \mu^t \left ( - \sum_{j=1}^m \frac{2 {W_j^{t-1}}^\top( \bfx_i^j - {W_j^{t-1}} \bfz_i^{t-1} )}{\left ( c^2 + \left \| \bfx_i^j - {W_j^{t-1}} \bfz_i^{t-1} \right \|_2^2\right )} - \alpha \beta_i^t y_i \bfomega^{t-1} + \gamma \bfz_i^{t-1} \right ). \end{aligned} \end{equation} \item \textbf{End of For} \item \textbf{For $j=1,\cdots,m$} Update $W_j^t$ by fixing $\bfz_i^{t}, i=1,\cdots, n$, \begin{equation} \begin{aligned} W_j^t \leftarrow W_j^{t-1} - \mu^t \left ( - \sum_{i=1}^{n} \frac{2(\bfx_i^j - {W_j^{t-1}}\bfz_i^{t}){\bfz_i^t}^\top}{ \left ( c^2 + \left \| \bfx_i^j - W_j^{t-1} \bfz_i^t \right \|_2^2 \right )}+ \gamma W_j^{t-1} \right ). \end{aligned} \end{equation} \item \textbf{End of For} \item Update $\bfomega^t$ by fixing $\beta_i^t, i=1, \cdots, n$ and $\bfz_i^t, i=1, \cdots, n$, \begin{equation} \begin{aligned} \bfomega^t \leftarrow \bfomega^{t-1} - \mu^t \left ( - \alpha \sum_{i=1}^n \beta_i^t y_i \bfz_i^t + \gamma \bfomega^{t-1} \right ). \end{aligned} \end{equation} \end{itemize} \item \textbf{End of For} \item \textbf{Output}: $W_j^T, j=1, \cdots, m$, $\bfz_i^T, i=1, \cdots, n$, and $\bfomega^T$. \end{itemize} As we can see from the algorithm, in the main FOR loop, the descent step variable, $\mu$, is first updated, and then the hinge loss indicator variables, $\beta_i, i=1,\cdots,n$, and the intact feature vectors are updated.
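Algorithm 1 can be put into a compact NumPy sketch. This is a minimal illustration, not the authors' implementation: the data shapes, initialization scale, and the default values of alpha, gamma, c and T are our assumptions, and the gradients are written for a Cauchy-type reconstruction loss log(1 + ||x_i^j - W_j z_i||^2 / c^2) with squared-l2 regularizers.

```python
import numpy as np

def misc_fit(X_views, y, dim_z=5, alpha=1.0, gamma=0.1, c=1.0, T=50, seed=0):
    """Sketch of the MISC alternating gradient-descent algorithm.

    X_views : list of m arrays, each (n, d_j), one feature matrix per view
    y       : (n,) labels in {-1, +1}
    Returns W (list of (d_j, dim_z) matrices), Z (n, dim_z), omega (dim_z,).
    """
    rng = np.random.default_rng(seed)
    n, m = len(y), len(X_views)
    W = [0.01 * rng.standard_normal((Xj.shape[1], dim_z)) for Xj in X_views]
    Z = 0.01 * rng.standard_normal((n, dim_z))
    omega = np.zeros(dim_z)

    for t in range(1, T + 1):
        mu = 1.0 / t                                   # diminishing step
        # hinge-loss indicators beta_i from the previous omega and z_i
        beta = (1.0 - y * (Z @ omega) > 0).astype(float)
        # --- update each intact vector z_i ---
        for i in range(n):
            grad = gamma * Z[i] - alpha * beta[i] * y[i] * omega
            for j in range(m):
                r = X_views[j][i] - W[j] @ Z[i]        # view residual
                grad += -2.0 * W[j].T @ r / (c ** 2 + r @ r)
            Z[i] = Z[i] - mu * grad
        # --- update each view-conditional matrix W_j ---
        for j in range(m):
            grad = gamma * W[j]
            for i in range(n):
                r = X_views[j][i] - W[j] @ Z[i]
                grad += -2.0 * np.outer(r, Z[i]) / (c ** 2 + r @ r)
            W[j] = W[j] - mu * grad
        # --- update the classifier parameter omega ---
        omega = omega - mu * (-alpha * (beta * y) @ Z + gamma * omega)
    return W, Z, omega
```

The two inner loops mirror the two inner FOR loops of Algorithm 1; the indicator update precedes the $z_i$ step so that each iteration uses the current hinge-loss activity set.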
The view-conditional transformation matrices, $W_j, j=1, \cdots, m$, are then updated, and finally, the classifier parameter $\bfomega$ is updated. \newpage \section{Experiments} \label{sec:experiments} In this section, we evaluate the proposed algorithm experimentally on several real-world supervised multiview learning problems. \subsection{Benchmark data sets} \subsubsection{PASCAL VOC 07 data set} The first data set used in the experiments is the PASCAL VOC 07 data set \cite{Jaszewski2015}. In this data set, there are 9,963 images of 20 different object classes. Each image is represented by two different views, a visual view and a tag view. To extract the feature vector of the visual view of an image, we extract local visual features, SIFT, from the image, and represent the local features as a histogram. To extract the feature vector of the tag view of an image, we use the histogram vector of the user tags of the image. \subsubsection{CiteSeer data set} The second data set is the CiteSeer data set \cite{Williams201468}. In this data set, there are 3,312 documents of 6 classes. Each document has three views, which are the text view, the inbound reference view, and the outbound reference view. \subsubsection{HMDB data set} The third data set is the HMDB data set, a video database for the human motion recognition problem \cite{Kuehne20112556}. In this data set, there are 6,849 video clips of 51 action classes. To represent each video clip, we extract 3D Harris corners and describe them with two different types of local features, the histogram of oriented gradient (HOG) and the histogram of oriented flow (HOF). We further represent each clip by two feature vectors of two views, which are the histograms of HOG and HOF. \subsection{Experiment protocols} To conduct the experiments, we split each data set into 10 non-overlapping folds and use 10-fold cross-validation to perform the training-testing procedure.
Each fold is used as the test set in turn, and the remaining 9 folds are used as the training set. The proposed algorithm is applied to the training set to obtain the view-conditional transformation matrices and the classifier parameter. The learned view-conditional transformation matrices and classifier parameter are then used to represent and classify the data points in the test set. To handle the multi-class problem, we use the one-vs-all strategy. \subsection{Performance measures} To measure the classification performance over the test set, we use the classification accuracy, defined as follows, \begin{equation} \begin{aligned} Classification~accuracy= \frac{Number~of~correctly~classified~test~data~points}{Number~of~total~test~data~points}. \end{aligned} \end{equation} Obviously, a better algorithm should obtain a higher classification accuracy. \subsection{Experiment results} In this experiment, we first study the sensitivity of the algorithm to its parameters, $\alpha$ and $\gamma$. \subsubsection{Sensitivity to parameters} \begin{figure}[h!] \centering \includegraphics[width=\textwidth]{fig1.eps}\\ \caption{Sensitivity curve of $\alpha$ over the PASCAL VOC 07 data set.} \label{Fig_alpha150805} \end{figure} \begin{figure}[h!] \centering \includegraphics[width=\textwidth]{fig2.eps}\\ \caption{Sensitivity curve of $\gamma$ over the PASCAL VOC 07 data set.} \label{Fig_beta150807} \end{figure} To study the performance of the proposed algorithm under different tradeoff parameters $\alpha$ and $\gamma$, we run the algorithm with the parameter values $0.1, 1, 10, 100$ and $1000$, and measure the performance for each setting. Fig. \ref{Fig_alpha150805} illustrates the performance on the PASCAL VOC 07 data set with respect to the tradeoff parameter $\alpha$. The proposed algorithm achieves a stable performance for all settings of the parameter $\alpha$. In Fig.
\ref{Fig_beta150807}, the performance against the tradeoff parameter $\gamma$ is also shown. From this figure, we can see that the algorithm is also stable with respect to changes in the value of $\gamma$. This suggests that MISC is not sensitive to changes of the tradeoff parameters. \subsubsection{Comparison to state-of-the-art algorithms} \begin{figure}[h!] \centering \includegraphics[width=\textwidth]{fig3.eps}\\ \caption{Results of comparison of different algorithms over the PASCAL VOC'07 data set.} \label{FigCompr1} \end{figure} \begin{figure}[h!] \centering \includegraphics[width=\textwidth]{fig4.eps}\\ \caption{Results of comparison of different algorithms over the CiteSeer data set.} \label{FigCompr2} \end{figure} \begin{figure}[h!] \centering \includegraphics[width=\textwidth]{fig5.eps}\\ \caption{Results of comparison of different algorithms over the HMDB data set.} \label{FigCompr3} \end{figure} We compare the proposed algorithm to the following methods: the multiview learning algorithm using local learning (LL) proposed by Zhang et al. \cite{Zhang2008752}, the multiview learning algorithm using co-training (CT) proposed by Sindhwani et al. \cite{sindhwani2005co}, the multiview learning algorithm based on view disagreement (VD) proposed by Quadrianto \cite{Quadrianto2011425}, the multiview learning algorithm with global consistency and local smoothness (GL) proposed by Zhai \cite{Zhai2012}, and the multiview representation method using statistical subspace learning (SS) proposed by Chen et al. \cite{Chen20122365}. The error bars of the classification accuracy of the compared methods over the three data sets are given in Fig. \ref{FigCompr1}, Fig. \ref{FigCompr2} and Fig. \ref{FigCompr3}. From the figures, we find that the proposed method, MISC, consistently outperforms the other algorithms on all the data sets. Even on the most difficult data set, HMDB, the proposed method, MISC, achieves an accuracy as high as about 0.4.
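The evaluation protocol used in these experiments (10 non-overlapping folds taking the test role in turn, a one-vs-all decision rule, and classification accuracy as the measure) can be sketched as follows. The per-class score below is a hypothetical stand-in (distance to the class centroid); in the paper each column of scores would come from a MISC classifier trained one class versus the rest.

```python
import numpy as np

def ten_fold_cv(X, y, n_classes, seed=0):
    """10-fold cross-validated classification accuracy with a one-vs-all rule.

    X : (n, d) feature matrix, y : (n,) integer labels in {0, ..., n_classes-1}.
    """
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(y)), 10)   # non-overlapping folds
    accs = []
    for k in range(10):                                   # each fold tests in turn
        test = folds[k]
        train = np.concatenate([folds[j] for j in range(10) if j != k])
        # stand-in per-class scorer: negative distance to the class centroid
        centroids = np.stack([X[train][y[train] == c].mean(axis=0)
                              for c in range(n_classes)])
        scores = -np.linalg.norm(X[test][:, None, :] - centroids[None], axis=2)
        y_pred = np.argmax(scores, axis=1)                # one-vs-all: best score wins
        accs.append(np.mean(y_pred == y[test]))           # classification accuracy
    return float(np.mean(accs))
```

Averaging the per-fold accuracies gives the figure reported for each data set.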
MISC optimally combines the multiple-view data to find the latent intact space and the optimal classifier in the corresponding intact space. The main reason for this advantage is the robustness of the proposed algorithm: it appropriately handles the complementarity between multiple views, and learns a discriminative hidden intact space with the help of classifier learning. \section{Conclusions and future works} \label{sec:conclusion} We propose a novel multiview learning algorithm that learns intact vectors of the training data points and a classifier in the intact space. Each intact vector is assumed to be a hidden but critical vector for a data point, from which we can obtain its multiple-view feature vectors by view-conditional transformations. Moreover, we also assume that the intact vectors are discriminative, i.e., that they can be separated by a linear function according to their classes. We propose a novel optimization problem to model the learning of both the intact vectors and the classifier. An iterative algorithm is developed to solve this problem. This algorithm outperforms other multiview learning algorithms on benchmark data sets, and it is also stable over the tradeoff parameters. In the future, we will study the potential of the proposed algorithm for imbalanced data sets with multi-view features \cite{fan2011margin,fan2010enhanced,chawla2004editorial}, and the use of a Bayesian network classifier instead of a linear classifier to learn the intact vectors of multi-view data \cite{fan2014tightening,fan2014finding,fan2015improved}. Moreover, we will also investigate using the proposed algorithm to solve problems in bioinformatics \cite{wang2014computational,zhou2014biomarker,liu2013structure,peng2015modeling}, computer vision \cite{wang2015representing,wang2015image}, and multimedia data processing \cite{wang2015supervised,wang2015multiple}.
\section*{Acknowledgements} This project was supported by the National Natural Science Foundation of China (Grant No. 61472172) and by research funding from Ludong University (Grant No. 27870301).
\section{Introduction} The determination and knowledge of the basic physical parameters of stellar and sub-stellar sources provides the basis for astrophysics. Observations allow us to determine distance \citep[using parallaxes;][]{dl12,fah12}, age \citep[through membership in clusters or moving groups;][]{per98}, mass \citep[orbital parameters in binary systems;][]{kon10}, and radius \citep[transits;][]{joh10} for these objects. But when dealing with isolated stellar objects, we have to rely on models to determine mass and age. While stellar and atmospheric theoretical models are rapidly evolving, we need a powerful tool to calibrate them. Open clusters are good candidates for this role, since they contain many coeval objects of the same chemical composition, thus avoiding the problems of small number statistics. The Hyades is the closest open cluster to the Sun (d $\sim$ 45 pc) and suffers little from interstellar extinction. The estimated age of the Hyades is 625 Myr \citep{per98}. \cite{roe11} report 724 likely members of the Hyades with a kinematic distance estimate for each member obtained using the convergent point method. Therefore, absolute magnitudes for each source can be calculated and placed on color-magnitude and color-color diagrams. Since there are binary and multiple systems among the 724 likely Hyades members, the main sequence on the diagrams is scattered. That is why, in this work, we use AstraLux lucky imaging observations and results from the literature to introduce a single star sequence of the Hyades. Together with over-plotted theoretical isochrones, it allows us to test stellar and atmospheric theoretical models. \section{Observations} \subsection{AstraLux observations} We observed 196 Hyades members using the lucky imaging camera AstraLux mounted on the 2.2m telescope at the Calar Alto Observatory in Spain. Observations were carried out during two runs in November 2011 and December 2012.
Given the size of the telescope mirror, the angular resolution is $\sim$ 0$\farcs$1 at 900 nm, which corresponds to a typical resolvable separation of 5 AU for a Hyades member. 36 targets were identified as candidates for binary or multiple systems (Fig. \ref{fraction}). \subsection{Literature data} We combined previously published Hyades data, which include spectroscopic observations \citep{mer09}, a speckle binary search \citep{pat98}, adaptive optics observations at the Subaru telescope \citep{mor11}, and information from the HST, HIPPARCOS and WDS catalogs. The results are summarized in Fig. \ref{fraction}. \vspace{12pt} In total, 462 Hyades members were observed or found in the literature, and 207 of these were identified as candidates for binary or multiple systems. The targets that have at least one observation but do not reveal binarity/multiplicity (255 stars) are included in the single star sequence which is used in the model testing below. It should be noted that some of the "single" stars could still be unresolved binaries. \begin{figure} \resizebox{\hsize}{!}{\includegraphics{fraction_bw.eps}} \caption{\footnotesize The multiplicity fraction in the Hyades. Combined data using Mermilliod et al. (2009), Patience et al. (1998), Morzinski (2011), AstraLux observations, and the HST, HIPPARCOS and WDS catalogs. The shaded part of each bar is the binary/multiple fraction from each survey. In total, 462 systems are observed and 207 candidates for binary/multiple systems are detected. } \label{fraction} \end{figure} \begin{figure} \resizebox{\hsize}{!}{\includegraphics{cmd_1.eps}} \caption{\footnotesize The single star sequence of the Hyades (black dots) with over-plotted PADOVA (dashed line), DARTMOUTH (solid line) and BCAH (dash-dotted line) isochrones on the J vs. J - Ks diagram.
} \label{cmd_1} \end{figure} \begin{figure} \resizebox{\hsize}{!}{\includegraphics{cmd_2.eps}} \caption{\footnotesize The single star sequence of the Hyades (black dots) with the over-plotted FRANEC/BT-Settl isochrone (solid line) on the J vs. J - Ks diagram. } \label{cmd_2} \end{figure} \begin{figure} \resizebox{\hsize}{!}{\includegraphics{cmd_3.eps}} \caption{\footnotesize The single star sequence of the Hyades (black dots) with the over-plotted FRANEC/BT-Settl isochrone (solid line) on the J vs. J - H diagram.} \label{cmd_3} \end{figure} \section{Model testing} We place the identified single Hyades members on the J vs. J - Ks color-magnitude diagram. Additionally, we over-plot theoretical isochrones based on the PADOVA \citep{mar08}, DARTMOUTH \citep{dot08} and BCAH \citep{bar98} models (Fig. \ref{cmd_1}). We also use the Tool for Theoretical Data Analysis \citep[TA-DA;][]{rio12} to merge the stellar FRANEC model \citep{tog11} with the atmospheric BT-Settl model \citep{all11}. As we can see, most of the models have trouble describing the sequence in the lower-mass regime or around the "knee" ($\sim$ 0.6 M$_{\odot}$). On the other hand, the combined FRANEC/BT-Settl isochrone describes the observed main sequence of the Hyades very well, even fitting the "knee" part (Fig. \ref{cmd_2}). However, when we over-plot the FRANEC/BT-Settl isochrone together with the Hyades single star sequence on the J vs. J - H color-magnitude diagram, we notice an offset of about 0.05 mag (Fig. \ref{cmd_3}). This effect could be explained by the suggestion that opacities of alkali and hydride lines are still missing in the model (France Allard; private communication). \section{Conclusions} In this work, we reported a multiplicity census of the likely Hyades members, resulting in the identification of 207 possible binary and multiple systems out of 462 observed targets. This allowed us to create a single star sequence containing 255 Hyades members and apply it to the testing of stellar and atmospheric models.
We found that the merged FRANEC/BT-Settl isochrone matches the sequence very well on the J vs. J - Ks color-magnitude diagram but shows an offset of 0.05 mag on J vs. J - H. The latter could be explained by theoretical opacities of alkali and hydride lines that are still missing from the model. We have shown that open clusters, and the Hyades in particular, are a powerful tool for testing stellar and atmospheric models. \begin{acknowledgements} The authors are grateful to Katie Morzinski, who allowed us to use results of her PhD dissertation, which contributed significantly to this work. We thank France Allard for comments about atmospheric models. We are grateful to Anatoly Piskunov for providing a compilation of theoretical isochrones. Additionally, we would like to thank Betrand Goldmann, Elena Manjavacas, Joshua Schlieder and Niall Deacon for useful discussions. \end{acknowledgements} \vspace{12pt} \bibliographystyle{aa}
\section{Introduction\label{sec:Introduction}} Si metal-oxide-semiconductor field effect transistors (Si MOSFETs) support a highly conductive two dimensional electron gas (2DEG) system where coupling to a metallic gate allows tuning of electron density in the 2DEG over a wide range. Decreasing density can drive the 2DEG from a highly conductive ``metallic'' state to a highly resistive ``insulating'' state~\cite{Ando_1982}. Intense theoretical and experimental research over several decades suggested a number of possible physical mechanisms underlying such behavior in 2DEGs depending on microscopic details of the structure. The subject of the 2D metal-to-insulator transition (2D MIT) arising from the gate-induced tuning of the 2D carrier density is still an active area of research~\cite{Abrahams_2001,*Spivak_2010,*Sarma_Hwang_2005,*Kravchenko_Sarachik_2004}, particularly in the context of high-quality (i.e. high-mobility) 2D systems where the transition occurs at a relatively low critical density at which electron-electron interaction effects may play an important role. In particular, the specific question of whether the density-tuned 2D MIT is or is not a zero-temperature quantum phase transition, as opposed to a crossover from a high-density apparent metallic phase to a low-density insulating phase, has been much debated in the recent literature~\cite{Abrahams_2001,*Spivak_2010,*Sarma_Hwang_2005,*Kravchenko_Sarachik_2004}. If the 2D MIT turns out to be a true quantum phase transition rather than a finite-temperature crossover, one immediate important implication would be that the high-density 2D metallic phase must necessarily be a non-Fermi liquid because a non-interacting 2D disordered Fermi liquid is an insulator at $T=0$~\cite{Abrahams_2001,*Spivak_2010,*Sarma_Hwang_2005,*Kravchenko_Sarachik_2004, Abrahams_1979}.
Early measurements of 2D resistivity in low density Si MOSFETs showed good agreement with the scaling theory of Anderson localization originating from quantum interference of electrons scattered by a random disorder potential (see Refs.~\onlinecite{Ando_1982} and \onlinecite{Abrahams_2001,*Spivak_2010,*Sarma_Hwang_2005,*Kravchenko_Sarachik_2004} and references therein). The theoretical argument~\cite{Abrahams_1979} relies on the scaling theory of localization that shows that the system-size-independent semi-classical Boltzmann (or Drude) resistivity $\rho_{B}$ in two dimensions is overpowered by the logarithmic quantum correction $\sim\frac{1}{\pi}\ln\frac{L}{\ell}$ (in units of $h/e^{2}$) arising from quantum interference of diffusing electrons, where $L$ is the system size and $\ell$ is the electron mean free path. As a result, in the thermodynamic limit all states are localized~\cite{Abrahams_1979} in a 2D orthogonal class system (preserving time-reversal and spin-rotation symmetries). This result, that all disordered 2D systems are insulating at $T=0$ in the infinite-system-size limit, was initially derived for non-interacting electrons, but is universally thought to be valid in the presence of weak electron-electron interaction. The Boltzmann resistivity can be varied by tuning the electron density in the 2DEG, resulting in a tunable apparent metal-insulator transition which occurs when the system size $L$ equals the characteristic localization length $\xi$ at which the quantum correction equals the Boltzmann part of the resistivity, $\rho_{B}\sim h/e^{2}$. This is of course a finite-system-size induced crossover (and not really a transition) from an effective apparent 2D metallic phase for $L\ll\xi$ to an insulator for $L\gg\xi$. Experimentally, however, the system-size induced transition is impractical to implement, and one uses carrier density to tune the effective localization length.
This is possible because the effective localization length $\xi\sim\ell\exp\left(\pi k_{F}\ell/2\right)$ depends on the underlying 2D carrier density through $k_{F}$ and through the density-dependent mean free path $\ell$, and thus the 2D MIT can be tuned by changing the carrier density, leading to a critical density defining the crossover between the effective metal and the strongly localized insulator. In realistic experiments, there are inevitably inelastic scattering processes that limit the coherent diffusion of electrons and introduce a temperature-dependent dephasing length $L_{\varphi}\sim T^{-p/2}$. This dephasing length limits the effective system size in the scaling theory of localization, providing a temperature-dependent length-scale cutoff (i.e., the system size gets replaced by the dephasing length in the scaling theory), which results in a logarithmic temperature dependence of the quantum correction to resistivity. Therefore, in a resistivity measurement, the metal-insulator transition is evidenced by the qualitative change in the temperature dependence of resistivity with changing electron density from a metallic (or, strictly speaking, weak localization) dependence $\rho^{-1}(T)\approx\rho_{B}^{-1}-\mathrm{const}\times\ln T$ to an exponential insulating dependence $\rho(T)\sim\exp\left[\left(\frac{T_{0}}{T}\right)^{\alpha}\right]$, characteristic of hopping or activated conduction, with some non-universal scale $T_{0}$ and $\alpha=\frac{1}{3},\frac{1}{2},1$ depending on the details. The presence of weak Coulomb interaction is not expected to change the character of the quantum correction (at least from the point of view of the perturbation theory in the high-density regime) and only affects the coefficient in front of $\ln T$ due to the additional scattering of electrons on Friedel fluctuations of density around impurities~\cite{AAbook,AkkermansBook,Zala_2001,Zala_2001R,Gornyi_2004,Burdis_1988,Klimov_2008} (the so-called Altshuler-Aronov effect).
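The length scales and limiting resistivity forms quoted above can be collected into a small numerical sketch. All of the scales below (the dephasing prefactors T0 and L0, the exponent p, and the sample numbers used in the usage comments) are illustrative placeholders, not fitted or measured values.

```python
import numpy as np

H_OVER_E2 = 25812.8  # resistance quantum h/e^2 in ohms

def localization_length(ell, k_F):
    """Effective 2D localization length xi ~ ell * exp(pi * k_F * ell / 2)."""
    return ell * np.exp(np.pi * k_F * ell / 2.0)

def wl_resistivity(rho_B, ell, T, T0=1.0, p=1.0, L0=1e-6):
    """Drude resistivity plus the weak-localization correction
    (h/e^2)(1/pi) * ln(L_phi / ell), with the dephasing length
    L_phi = L0 * (T/T0)**(-p/2) acting as the system-size cutoff.
    T0, p, and L0 are illustrative assumptions."""
    L_phi = L0 * (T / T0) ** (-p / 2.0)
    return rho_B + (H_OVER_E2 / np.pi) * np.log(np.maximum(L_phi / ell, 1.0))

def hopping_resistivity(T, rho0, T0, alpha):
    """Insulating-side form rho ~ rho0 * exp((T0/T)**alpha),
    with alpha = 1/3, 1/2 or 1 depending on the hopping mechanism."""
    return rho0 * np.exp((T0 / T) ** alpha)
```

Lowering T grows L_phi and hence the logarithmic term, reproducing the weak-localization trend of increasing resistivity; the crossover to strong localization is reached once the logarithmic correction becomes comparable to rho_B ~ h/e^2, i.e. when the effective system size reaches xi.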
By contrast, high-mobility Si MOSFETs ($\mu\gtrsim20000\,\mathrm{cm^{2}/Vs}$), as well as a number of other high-mobility 2DEGs, seem to demonstrate a qualitatively different dependence of resistivity $\rho(T,n)$ on electron density $n$ and temperature~\cite{Abrahams_2001,*Spivak_2010,*Sarma_Hwang_2005,*Kravchenko_Sarachik_2004}. In these samples, the low-density conductivity has the standard insulating exponential temperature dependence. However, with increasing density the temperature dependence of resistivity gradually changes from exponential insulating behavior $d\rho/dT<0$ to a metallic-type $d\rho/dT>0$ dependence~\cite{Abrahams_2001,*Spivak_2010,*Sarma_Hwang_2005,*Kravchenko_Sarachik_2004,Zavaritskaya_1987,Vitkalov_2001,Tsui_2005,Kravchenko_1994,Kravchenko_1995,Kravchenko_1996,Pudalov_1998,Cham_1980,Smith_1986} without any obvious manifestation of the $\ln T$ behavior on the metallic side. Thus, there exists a range of densities where at the lowest accessible temperatures only a metallic temperature dependence is observed, $d\rho/dT>0$. Actually, this metallic temperature dependence (i.e., $d\rho/dT>0$) typically saturates at low temperatures ($T\lesssim100\,\mathrm{mK}$) with the resistivity generically becoming temperature independent (i.e., $d\rho/dT=0$) at low enough temperatures for all 2D effectively metallic samples. Whether this low-temperature resistivity saturation (with the actual value of the saturated low-temperature residual resistivity being dependent on the carrier density) is a fundamental phenomenon arising from some incipient low-energy cut-off suppressing the effective metallicity or is just a trivial manifestation of the electron heating effect (where the carrier temperature saturates and no longer decreases with the decreasing lattice temperature) is not known definitively.
In addition to this low-temperature resistivity saturation, there is also a higher-temperature anomaly in the metallic behavior; typically, the 2D metallic resistivity $\rho(T)$ starts decreasing (i.e. $d\rho/dT<0$) at some density-dependent ``high'' temperature ($1-10\mathrm{K}$) after manifesting the metallic (i.e. $d\rho/dT>0$) behavior and before phonon scattering effects take over at still higher temperatures. The combination of metallic (i.e. $d\rho/dT>0$) behavior at low temperatures and insulating (i.e. $d\rho/dT<0$) behavior at intermediate temperatures coupled with phonon-induced metallic behavior (i.e. $d\rho/dT>0$) at still higher temperatures could lead to a rather interesting non-monotonicity in $\rho(T)$ on the metallic side of the 2D MIT at low carrier densities, and has been well-studied in the literature~\cite{DasSarma_Hwang_2000,*Min_Hwang_2012}. The higher-temperature effective insulating behavior in the metallic phase is thought to arise from a quantum-classical crossover phenomenon in the 2D system occurring on the scale of the Fermi temperature ($\propto n$) which could be low ($\lesssim10\mathrm{K}$) at the low carrier densities of interest for the 2D MIT phenomena~\cite{DasSarma_2003}. We will not discuss this temperature-induced quantum-classical high-temperature transition from metallic to insulating behavior much in this paper, concentrating instead on the density-induced 2D MIT at low temperatures. The sign of the derivative switches from insulating to metallic at a value of resistivity of the order of the resistance quantum $\rho\sim h/e^{2}$, i.e. at the value at which a transition to strong localization behavior is predicted by the scaling theory. On the metallic side of this transition, where $d\rho/dT>0$, the resistivity increases sharply by a factor of $\gtrsim2-3$ with growing temperature at lower carrier densities, staying within the metallic phase.
This pronounced temperature dependence diminishes with growing density deeper in the metallic regime. Standard $\ln T$ quantum corrections are observed at high densities where the metallic temperature dependence is weak~\cite{Pudalov_1998logT,Klimov_2008,Pudalov_1999}. By contrast, in the vicinity of the metal-insulator transition, logarithmic corrections are typically not observed within the experimentally accessible temperature range. This qualitative change in the temperature dependence of 2D resistivity driven by electron density is routinely called a metal-insulator transition in the literature, and we will use this convention in the following despite the ongoing debate about the existence or not of an actual thermodynamic phase transition at this point~\cite{Abrahams_2001,*Spivak_2010,*Sarma_Hwang_2005,*Kravchenko_Sarachik_2004}. Our view in the current work is based on the assumption that the 2D MIT is a crossover phenomenon with the $\ln T$ behavior suppressed by the strong metallic temperature dependence of the Drude-Boltzmann resistivity at lower metallic densities. We will critically test this assumption in this paper by comparing theory and experiment in the density- and temperature-dependent transport data in high-mobility Si MOSFETs. An important distinction between low- and high-mobility samples is in the relative strength of Coulomb interactions. The weaker disorder present in high-mobility structures allows metallic behavior to persist down to very low densities $n\sim10^{11}\,\mathrm{cm^{-2}}$ which correspond to very small values of the Fermi energy and thus large values of the density-dependent dimensionless interaction strength in the system, which may be as large as $r_{s}\equiv1/\sqrt{\pi na_{B}^{2}}\sim10$. Here $a_{B}=\hbar\kappa/(me^{2})$ is the effective Bohr radius of electrons in the 2DEG, $\kappa$ being the background dielectric constant.
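As a concrete check of the estimates quoted above, the following sketch evaluates $r_s$ and the Fermi temperature for a Si(100) 2DEG. The material values used (effective mass 0.19 electron masses, average Si/SiO2 dielectric constant 7.7, fourfold spin-valley degeneracy) are standard textbook assumptions on our part, not parameters quoted for the samples in this paper.

```python
import numpy as np

# SI constants
HBAR, M_E, E, EPS0, KB = 1.0546e-34, 9.109e-31, 1.602e-19, 8.854e-12, 1.381e-23

def si_2deg_scales(n_cm2, m_star=0.19, kappa=7.7, g=4):
    """Interaction parameter r_s and Fermi temperature of a Si(100) 2DEG.

    n_cm2  : electron density in cm^-2
    m_star : effective mass in units of m_e (assumed transverse mass)
    kappa  : background dielectric constant (assumed Si/SiO2 average)
    g      : spin x valley degeneracy (2 x 2 for Si(100))
    """
    n = n_cm2 * 1e4                                       # convert to m^-2
    # effective Bohr radius a_B = kappa * hbar^2 / (m* e^2) in Gaussian units,
    # written here in SI with the 4*pi*eps0 factor
    a_B = 4 * np.pi * EPS0 * kappa * HBAR**2 / (m_star * M_E * E**2)
    r_s = 1.0 / np.sqrt(np.pi * n * a_B**2)               # r_s = 1/sqrt(pi n a_B^2)
    E_F = 2 * np.pi * HBAR**2 * n / (g * m_star * M_E)    # 2D Fermi energy
    return r_s, E_F / KB                                  # (dimensionless, kelvin)
```

For n = 10^11 cm^-2 this gives r_s of roughly 8 and a Fermi temperature of roughly 7 K, consistent with the order-of-magnitude estimates ($r_s\sim10$, $T_F\sim1-10$ K) quoted in the text.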
We note, however, that even high-density Si MOSFETs, which were extensively studied~\cite{Ando_1982} before the current interest arose in the 2D MIT phenomena, have a dimensionless interaction strength $r_{s}>1$, and 3D metals all have $r_{s}\sim4-7$. Thus, it is not manifestly clear that interaction by itself is the sole physical mechanism underlying the 2D MIT phenomena. Perhaps an even more important aspect of high-mobility 2D samples in the context of the strong metallic temperature dependence of resistivity is that, by having a relatively low critical density distinguishing the metallic and the strongly insulating regimes by virtue of low sample disorder (and hence high sample mobility), the Fermi temperature ($T_{F}\propto n$) is low ($\sim1-10\,\mathrm{K}$) in high-mobility samples in the metallic regime. Then, the low-temperature regime ($T\approx0.1-5\,\mathrm{K}$) where the 2D metallic behavior manifests itself (i.e. large positive $d\rho/dT$) has effectively large values of the dimensionless temperature $T/T_{F}\sim1$. By contrast, 3D metals, which are also strongly interacting electron systems with $r_{s}\gg1$, have a very low dimensionless effective temperature $T/T_{F}\sim10^{-4}$, since $T_{F}\sim10^{4}\,\mathrm{K}$ in metals. The large effective values of $T/T_{F}$ also distinguish the high-mobility 2D systems from the low-mobility 2D systems, where the Fermi temperature is $T_{F}\gtrsim100\,\mathrm{K}$ in the metallic phase and thus $T/T_{F}\sim10^{-2}-10^{-3}$ in the low-temperature experimental regime. Extensive theoretical work demonstrated that all of the observed features of the temperature dependence of resistivity on the metallic side of the metal-insulator transition can be successfully described by extrapolating from high densities (and coincidentally low interaction strength) and using a Boltzmann transport theory which includes the temperature-dependent screening~\cite{DasSarma_1999,DasSarma_2003,DasSarma_2004} of charged impurities.
The metallic increase of the resistivity with growing temperature is explained by the decreasing efficiency of screening of charged impurities by the electron gas with growing temperature. The large effective value of $T/T_{F}$ explains the experimentally observed large $d\rho/dT$ in the metallic phase. This suggests that the standard Fermi liquid theory may explain the unusual temperature and density dependence of resistivity in high-mobility Si MOSFETs. However, the Fermi liquid theory also predicts the presence of quantum corrections giving rise to $\ln T$ behavior, which are not observed experimentally in the vicinity of the metal-to-insulator transition. The observation and analysis of these $\ln T$ corrections would allow us to continuously connect the low-density strong interaction and strong disorder regime to the weakly interacting high-density Fermi liquid regime where the Boltzmann theory is valid. There may be a simple conventional explanation for the absence of quantum corrections in the data. Analyses of the high-density data~\cite{Pudalov_1999,Pudalov_1998logT,Altshuler_Martin_Maslov_Pudalov_Prinz_Brunthaler_Bauer_2000,Altshuler_2001} where $\ln T$ is observed suggest that, at low densities in high-mobility samples, the temperature at which $\ln T$ would become apparent may be beyond the measurement temperatures because of electron heating~\cite{Altshuler_2001,Prus_2001}. Our approach in this paper is a straightforward phenomenological one, in which we assume that the metallic transport has contributions from both the screening-induced semi-classical metallic temperature dependence and the weak-localization-induced quantum $\ln T$ temperature dependence.
The strong metallic temperature dependence of the resistivity completely overwhelms the $\ln T$ insulating correction at higher temperatures, with the logarithmic correction eventually manifesting itself at some density-dependent low temperature, which may well be inaccessible to experimental measurements due to the electron heating problem. We also present experimental transport data on the 2D MIT in Si MOSFET samples which are consistent with the presence of both the screening-induced metallic temperature dependence and the quantum weak localization correction. In this paper, we present experimental resistivity data taken on two high-mobility Si MOSFETs demonstrating the 2D metal-insulator transition. We construct a microscopic Boltzmann theory of resistivity in Si MOSFETs that includes the effect of temperature-dependent screening of charged impurities by the electron gas. We use this model to fit the metallic temperature dependence observed in the data. We then construct a minimal additive model describing the competition between this metallic temperature dependence of resistivity and the insulating temperature dependence due to the quantum correction. Using this model, we determine the temperatures at which the quantum correction to resistivity is expected to dominate the experimental data, which turn out to be beyond the range of the current experiments. We also discuss the magnetoresistance data on the two samples in the vicinity of the transition and show that they are qualitatively consistent with weakly interacting localization theory, suggesting that the standard Fermi-liquid theory could be sufficient to describe the temperature dependence of the transport properties in high-mobility Si MOSFETs.
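The competition just described, a metallic Drude term growing with temperature against an insulating logarithmic correction growing as temperature falls, can be sketched with a minimal additive form. The parametrization below, rho(T) = rho0 (1 + C T/T_F) + A ln(T_ref/T), and all numerical values in the usage are illustrative choices on our part, not fits to the samples analyzed later in the paper.

```python
import numpy as np

def additive_rho(T, rho0, C, T_F, A, T_ref=1.0):
    """Minimal additive model: screening-induced metallic Drude term
    rho0 * (1 + C*T/T_F) plus an insulating weak-localization log term
    A * ln(T_ref/T).  All parameters are illustrative placeholders."""
    return rho0 * (1.0 + C * T / T_F) + A * np.log(T_ref / T)

def t_min(rho0, C, T_F, A):
    """Temperature of the resistivity minimum, from d rho/dT = 0:
    rho0*C/T_F - A/T = 0  ->  T_min = A*T_F / (rho0*C)."""
    return A * T_F / (rho0 * C)
```

For example, with rho0 = 5 (in kilo-ohms), C = 1, T_F = 5 K and A = 0.3, the minimum falls at T_min = 0.3 K; below T_min the ln T upturn dominates, above it the metallic term does, which is exactly the competition used in the text to locate the temperatures at which the quantum correction should become visible.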
\section{Description of the experiment} \begin{figure} \includegraphics[width=0.5\textwidth]{4FigurePlot03}\caption{Resistivity as a function of temperature for sample A [(a) and (c)] and sample B [(b) and (d)], with (c) and (d) showing a zoom-in of the low-temperature region. Different lines correspond to different carrier densities $n$ in units of $10^{11}\,\mathrm{cm^{-2}}$ (from top to bottom): in (a) and (c) $n=1.07,\,1.10,\,1.13,\,1.20,\,1.26,\,1.32,\,1.38,\,1.44,\,1.50,\,1.56,\,1.62,$ and $1.68$; in (b) and (d) $n=1.52,\,1.70,\,1.88,\,2.05,\,2.23,\,2.41,\,2.76,\,3.11,$ and $3.46$. Plot (a) is reproduced from Ref.~\onlinecite{Tracy_2009}.} \label{rhoT} \end{figure} We consider the transport data on two Si MOSFET samples with relatively high mobility: sample A ($\mu=1.5\times10^{4}\,\mathrm{cm^{2}/Vs}$) and sample B ($\mu=10^{4}\,\mathrm{cm^{2}/Vs}$). The temperature dependence of the resistivity at various carrier densities is shown in Fig.~\ref{rhoT}. At the lowest densities an insulating, exponential temperature dependence is observed. This insulating behavior ($d\rho/dT<0$) gradually loses its exponential character with increasing density. At higher densities the temperature dependence of the resistivity becomes non-monotonic: with increasing temperature the resistivity first drops to a minimum, then rises ($d\rho/dT>0$) to a maximum, and finally slopes gradually downwards. There is a range of densities at which, down to the lowest accessible temperatures, there is no sign of an insulating upturn of the resistance, i.e., of a $\ln T$ correction to the metallic resistivity. This form of the temperature dependence $\rho(T)$ is typical of high-mobility Si MOSFETs~\cite{Zavaritskaya_1987,Vitkalov_2001,Tsui_2005,Kravchenko_1994,Kravchenko_1995,Kravchenko_1996,Pudalov_1998,Cham_1980}. 
We point out, as is obvious from Figs.~\ref{rhoT}(c) and (d), where the resistivity is shown on an expanded temperature scale, that the insulating temperature dependence is suppressed gradually as the density increases, and that there is no sharp density separating metallic and insulating behaviors. In Fig.~\ref{MinMax}, we track the evolution with electron density of the temperatures at which the maximum and the minimum of the resistivity are reached. Red circles correspond to the low-temperature resistivity minimum, which signifies the onset of insulating behavior. Black squares correspond to the high-temperature maximum of the resistivity. Our interest in this work is mostly in the red circles, which provide the temperature at which the metallic temperature dependence is just overcome by the quantum localization effect on the effective metallic side of the 2D MIT. We note that, as expected, this characteristic temperature of the resistivity minimum is rather low, and it increases with decreasing density as localization effects become quantitatively more important. The black squares in Fig.~\ref{MinMax}, indicating the resistivity maxima in $\rho(T)$ as a function of carrier density, provide the characteristic temperature for the high-temperature quantum-to-classical crossover in the 2D transport, as discussed in Sec.~\ref{sec:Introduction}. This quantum-classical crossover typically occurs on the scale of the Fermi temperature (typically around $T\sim T_{F}/2$), and therefore its characteristic temperature decreases with decreasing carrier density. The region between the red circles and the black squares is the putative 2D effective metallic phase where the metallic temperature dependence with $d\rho/dT>0$ is manifested in the 2D transport. We note that in the high-mobility samples the regime (below the red circles) showing the $\ln T$ weak localization behavior is strongly suppressed by the metallic temperature dependence arising from other physical mechanisms. 
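The extraction of these characteristic temperatures from sampled $\rho(T)$ curves amounts to locating the interior extrema of each curve. A minimal sketch of such a scan is given below; the actual data analysis, including any smoothing of the measured curves, is not specified here, and the synthetic curve in the example is purely illustrative:

```python
def resistivity_extrema(T, rho):
    """Locate the first interior local minimum and local maximum of a
    sampled rho(T) curve; returns (T_min, T_max), with None for an
    extremum that does not occur strictly inside the sampled window."""
    def first_interior(cmp):
        for i in range(1, len(rho) - 1):
            if cmp(rho[i], rho[i - 1]) and cmp(rho[i], rho[i + 1]):
                return T[i]
        return None
    t_min = first_interior(lambda a, b: a < b)  # resistivity minimum
    t_max = first_interior(lambda a, b: a > b)  # resistivity maximum
    return t_min, t_max

# Illustrative synthetic curve: minimum at T = 3, maximum at T = 7.
T_grid = list(range(1, 10))
rho_curve = [5, 4, 3, 4, 5, 6, 7, 6, 5]
```

Repeating such a scan for each carrier density yields the red-circle and black-square traces of Fig.~\ref{MinMax}.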
\begin{figure} \includegraphics[width=0.5\textwidth]{Figure2}\caption{Black squares and red circles correspond to the maximum and the minimum of the measured resistivity $\rho(T)$ in Fig.~\ref{rhoT}, respectively. } \label{MinMax} \end{figure} \section{Boltzmann theory}\label{sec:Boltzmann} The low-density regime of strong disorder and strong interaction is difficult to address theoretically. Boltzmann theory allows us to quantitatively describe the dependence of the mobility on the electron density over a wide density range (at not too low densities, away from the strongly localized regime). The model of disorder describing the density dependence of the mobility is a combination of random charged impurities and surface roughness~\cite{Ando_1982}. This Boltzmann theory agrees quantitatively with mobility measurements over a wide density range in the higher-density metallic regime~\cite{Tracy_2009}. It is therefore natural to extrapolate this theory to the low-density regime of strong disorder and interaction. At very low densities, the effect of surface roughness is negligible and the resistivity is completely dominated by charged impurities~\cite{Ando_1982}. In the following, we neglect the effect of surface roughness; our results do not change if surface roughness is included in the theory, since it has little quantitative effect on transport at the low carrier densities of interest in this work. The Boltzmann conductivity $\sigma_{B}$ is given by \begin{equation} \rho_{B}^{-1}\equiv\sigma_{B}=ne^{2}\left\langle \tau\right\rangle /m\label{eq:Boltzmann} \end{equation} where $n$ is the electron density, $m$ is the effective mass of the electrons, and the average transport time $\langle\tau\rangle$ reads \begin{equation} \left\langle \tau\right\rangle \equiv\frac{\int dE\tau(E)E\left(-\frac{\partial f}{\partial E}\right)}{\int dEE\left(-\frac{\partial f}{\partial E}\right)}. 
\end{equation} Here the impurity-averaged relaxation time $\tau(E)$ is given by \begin{eqnarray} & \frac{1}{\tau(E_{k})}=\frac{2\pi}{\hbar}\sum_{\alpha,\mathbf{k'}}\int_{-\infty}^{\infty}dzN_{i}(z)|u(\mathbf{k}-\mathbf{k'};z)|^{2}\nonumber \\ & \times(1-\cos\theta_{\mathbf{kk'}})\delta(E_{k}-E_{k'}), \end{eqnarray} where the standard parabolic energy dispersion is assumed for $E_{k}$, $N_{i}(z)$ is the 3D charged impurity density, and $u(\mathbf{q};z)$ is the 2D Fourier transform, in the plane of the 2DEG, of the impurity potential screened by the electron gas, \begin{equation} u(q;z)=\frac{1}{\varepsilon(q)}\frac{2\pi e^{2}}{\kappa q}F_{imp}(q;z), \end{equation} where $\kappa$ is the background dielectric constant and $F_{imp}(q;z)$ is a form factor depending on the microscopic details, which are known~\cite{Ando_1982}. In the strictly 2D limit an impurity charge located a distance $d$ away from the 2DEG is described by the form factor $F_{imp}=e^{-qd}$. The screening effect is characterized by the dielectric function $\varepsilon(q)$ calculated in the random phase approximation (RPA)~\cite{Ando_1982}. The resulting theory fits the strong non-monotonic temperature dependence of the metallic resistivity at low densities well (see Fig.~\ref{rhoTtheory})~\cite{DasSarma_1999,DasSarma_2003}. To reproduce the experimental data, we adjust the density of the impurities and their locations in the oxide layer, within a narrow range of values, as free fitting parameters, and the resulting model reproduces the functional dependence of the resistivity on temperature and electron density in the 2DEG. We mention that the oxide impurity charge density and spatial distribution necessary to obtain agreement between the theory and the experimental resistivity data are very reasonable (and independently confirmed by capacitance measurements). The details of this comparison between theory and experiment for our samples are given in Ref.~\onlinecite{Tracy_2009} and are not repeated here. 
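For concreteness, the thermal average $\langle\tau\rangle$ entering Eq.~(\ref{eq:Boltzmann}) can be evaluated numerically. The sketch below is a toy version only: it assumes a hypothetical power-law form $\tau(E)\propto(E/E_{F})^{p}$ (the actual $\tau(E)$ follows from the screened impurity potential above and must be computed numerically) and keeps the chemical potential fixed at $E_{F}$, which is adequate for $T\ll T_{F}$:

```python
import math

def avg_tau(t, p, n_grid=20000):
    """Thermal average <tau>/tau0 for a model tau(E) = tau0 (E/E_F)^p.

    t = T/T_F; energies are in units of E_F; the chemical potential is
    held fixed at E_F (a low-temperature simplification).  The energy
    integrals are evaluated by a simple Riemann sum.
    """
    e_hi = 1.0 + 25.0 * t          # -df/dE is negligible beyond ~25 k_B T
    de = e_hi / n_grid
    num = den = 0.0
    for i in range(1, n_grid + 1):
        e = i * de
        x = (e - 1.0) / t
        if abs(x) > 40.0:          # avoid overflow; integrand is ~0 there
            continue
        ex = math.exp(x)
        mdf = ex / (t * (1.0 + ex) ** 2)   # -df/dE for a Fermi function
        w = e * mdf                        # energy weight E (-df/dE)
        num += (e ** p) * w
        den += w
    return num / den
```

As $T/T_{F}\to0$ the average reduces to $\tau(E_{F})$, while at the large effective $T/T_{F}$ of the experiments the deviation of $\langle\tau\rangle$ from $\tau(E_{F})$, together with the temperature dependence of $\varepsilon(q)$, produces the strong metallic temperature dependence.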
The non-monotonicity of the temperature dependence of the resistivity as a function of density can therefore be explained within this model: with increasing temperature the screening becomes less effective and, as a result, the resistivity increases. The temperature-induced weakening of the screening can increase the resistivity by a factor of $2-3$ in this regime, which explains the observed metallic temperature dependence. At higher temperatures, the quantum-to-classical crossover results in a resistivity decreasing with temperature at $T/T_{F}\sim1$. The theoretical details of the screening model for the 2D resistivity and the corresponding comparisons with the experimental ``metallic'' temperature dependence of the transport properties have already been discussed extensively in earlier references~\onlinecite{DasSarma_Hwang_2000,*Min_Hwang_2012,DasSarma_2003,DasSarma_1999,DasSarma_2004,Tracy_2009} and will not be repeated here. We mention, however, that none of these earlier references included the weak localization effect (as we do in this work), assuming instead that the effective metallic behavior dominates the transport properties completely. Extrapolation to the strong interaction regime makes sense because the random phase approximation (RPA) is given by a subset of the most divergent diagrams, which may therefore dominate even at strong interactions~\cite{DasSarma_1999,DasSarma_2003}. Extrapolation of the Boltzmann theory to low density and strong interaction may be successful because it describes short-range phenomena on scales $\lesssim\ell$ (where $\ell$ is the mean free path), as opposed to localization physics and the localizing interaction correction, which originate from the diffusive length scales $\gg\ell$. Boltzmann theory may therefore describe the case of strong dephasing and/or high temperature. In particular, the fact that the effective temperature is high (i.e. 
$T/T_{F}\sim1$) in the low-density high-mobility samples makes the 2D MIT a high-temperature crossover phenomenon where interaction effects are likely to be strongly suppressed by temperature. We note here (as can be seen in Fig.~\ref{rhoT}) that at very low temperatures the temperature dependence of the metallic resistivity invariably saturates, thus making the 2D MIT an effectively high-temperature phenomenon. \section{Magnetoresistance} A strong indication of Fermi liquid behavior in the strong disorder and strong interaction regime near the metal-insulator transition is the observation of a weak perpendicular-field magnetoresistance, which is a smoking-gun signature of weak localization physics. Magnetoresistance data taken on sample A (see Fig.~5 in Ref.~\onlinecite{Tracy_2009}) were fitted using the standard digamma-function expression of weak localization theory, \begin{equation} \frac{\delta\rho}{\rho^{2}}=-\alpha g_{v}G_{0}\left[\Psi\left(\frac{1}{2}+\frac{\tau_{B}}{\tau}\right)-\Psi\left(\frac{1}{2}+\frac{\tau_{B}}{\tau_{\varphi}}\right)\right],\label{eq:WLMR} \end{equation} where $\tau_{B}\equiv\hbar/(4eBD)$, $G_{0}=\frac{e^{2}}{2\pi^{2}\hbar}$, $B$ is the perpendicular magnetic field, $D$ the diffusion coefficient, and $g_{v}$ the valley degeneracy factor. The coefficient $\alpha$ along with the dephasing time $\tau_{\varphi}$ are used as fitting parameters, with the best fit achieved with $\alpha\approx0.25$ and $\tau_{\varphi}=33,\,32,$ and $28\,\mathrm{ps}$ for electron densities $n=1.45,\,1.51,$ and $1.63\times10^{11}\,\mathrm{cm^{-2}}$ at $T=0.1\,\mathrm{K}$ ($g_{v}=1$ is assumed in the fits). In the low-density regime of these measurements the resistivity is high, $\rho\lesssim h/e^{2}$, which suggests $k_{F}\ell\gtrsim1$, whereas the standard weak-localization theory is really valid for $k_{F}\ell\gtrsim10$. 
Therefore extra care has to be taken when interpreting these results, and quantum corrections of higher order in $1/(k_{F}\ell)$, going beyond the usual weak localization theory, may have to be considered. The key effect of the higher-order terms is to lower the prefactor in front of the magnetoresistance expression Eq.~(\ref{eq:WLMR}) from $\alpha=1$ to $\alpha\approx1-\beta(G_{0}\rho)$, with $\beta$ a degeneracy factor depending on the intervalley scattering in the system. This reduction of $\alpha$ is a result of the two-loop correction to the non-interacting weak localization theory~\cite{Minkov_2004}. In addition, the magnetoresistance due to electron-electron interactions may enhance the reduction of the prefactor, with the combined effect leading to $\alpha\approx1-2\beta(G_{0}\rho)$, as discussed in Ref.~\onlinecite{Minkov_2004}. Intervalley scattering due to short-range scattering may suppress the valley degeneracy factor $g_{v}$ in front of the magnetoresistance in Eq.~(\ref{eq:WLMR}). The effect of intervalley scattering on the magnetoresistance was analyzed in great detail using high-density measurements in high-mobility Si MOSFETs in Ref.~\onlinecite{Gershenson_2007}. The typical intervalley scattering times extracted from these measurements are in the range $1\,\mathrm{ps}\lesssim\tau_{v}\lesssim20\,\mathrm{ps}$. Comparing the typical values of $\tau_{v}$ with the dephasing time $\tau_{\varphi}\sim30\,\mathrm{ps}$ extracted from our measurements, we conclude that $\tau_{v}/\tau_{\varphi}\lesssim1$, so that the valley mixing is relatively strong and the effective degeneracy factor in Eq.~(\ref{eq:WLMR}) is expected to satisfy $g_{v}\gtrsim1$. This means that the fitting parameter $\alpha g_{v}=0.5$ in Eq.~(\ref{eq:WLMR}) signifies a reduction of the magnetoresistance amplitude by at least a factor of two, and probably more. 
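Eq.~(\ref{eq:WLMR}) is straightforward to evaluate numerically. The sketch below implements the digamma function via the standard recurrence plus asymptotic expansion; the parameter values used in the example are illustrative order-of-magnitude choices, not the fitted values for our samples:

```python
import math

HBAR = 1.054571817e-34   # reduced Planck constant, J s
E_CH = 1.602176634e-19   # elementary charge, C
G0 = E_CH**2 / (2.0 * math.pi**2 * HBAR)   # e^2 / (2 pi^2 hbar), siemens

def digamma(x):
    """Digamma function psi(x) for x > 0: recurrence psi(x) = psi(x+1) - 1/x
    pushes the argument above 8, then the asymptotic expansion is used."""
    acc = 0.0
    while x < 8.0:
        acc -= 1.0 / x
        x += 1.0
    inv2 = 1.0 / (x * x)
    return acc + math.log(x) - 0.5 / x - inv2 * (
        1.0 / 12.0 - inv2 * (1.0 / 120.0 - inv2 / 252.0))

def wl_delta_rho(B, tau, tau_phi, D, alpha=0.25, g_v=1.0):
    """delta_rho / rho^2 of the digamma formula above: B in tesla,
    scattering and dephasing times in seconds, D in m^2/s."""
    tau_B = HBAR / (4.0 * E_CH * B * D)
    return -alpha * g_v * G0 * (digamma(0.5 + tau_B / tau)
                                - digamma(0.5 + tau_B / tau_phi))
```

At $B\to0$ the bracket approaches $\ln(\tau_{\varphi}/\tau)$, recovering the full zero-field weak localization correction, while at large $B$ the correction is quenched, which is the measured negative magnetoresistance.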
This suggests that the data on our Si MOSFETs are qualitatively consistent with the detailed theory~\cite{Minkov_2004} of quantum corrections in a weakly interacting, strongly disordered Fermi liquid in the presence of strong intervalley scattering (which may arise from the surface roughness at the $\mathrm{Si-SiO_{2}}$ interface providing short-range scattering). This agreement has to be taken with a grain of salt, as the interaction strength in the low-density regime may not be small. Nevertheless, similar magnetoresistance behavior was observed in other high-mobility Si MOSFET measurements~\cite{Brunthaler_2001} and in other 2DEGs~\cite{Simmons_1998,Coleridge_2002,Rahimi_2003}, giving us confidence in this conclusion. \begin{figure} \includegraphics[clip,width=0.5\textwidth]{WLRhoTnew} \caption{Boltzmann resistivity theory for the parameters of sample A (from the fitting) combined with the weak localization correction, Eq.~(\ref{eq:Resistivity}). Numbers on the plot correspond to the electron density in units of $10^{11}\,\mathrm{cm^{-2}}$. The right panel shows a zoom-in of the low-temperature region of the left panel. } \label{rhoTtheory} \end{figure} Experimental values of the magnetoresistance cutoff extracted from the fitting procedure give a dephasing time $\tau_{\varphi}(0.1\,\mathrm{K})\sim30\,\mathrm{ps}$ in our sample, which is an order of magnitude shorter than the value expected from inelastic electron-electron scattering in the diffusive regime~\cite{AAbook,AkkermansBook,Narozhny_2002}, \begin{equation} \frac{\tau}{\tau_{\varphi}}=\gamma\frac{k_{B}T}{E_{F}}\ln g,\label{eq:tauPhi} \end{equation} where $\gamma$ is a factor of order unity. 
Although low-mobility 2D devices show quantitative agreement with the dephasing rate due to inelastic electron-electron interaction~\cite{Davies_1983,Kawaji_1986} of Eq.~(\ref{eq:tauPhi}), and high-density high-mobility devices also agree quantitatively with this dephasing rate formula~\cite{Gershenson_2007}, low-density measurements routinely manifest an order of magnitude shorter dephasing times than predicted by Eq.~(\ref{eq:tauPhi})~\cite{Brunthaler_2001}, which cannot be explained by simple deviations~\cite{Minkov_2004} from Eq.~(\ref{eq:tauPhi}) at strong disorder $k_{F}\ell\sim1$. The same enhanced dephasing rate is also routinely observed in other 2DEGs~\cite{Simmons_1998,Coleridge_2002,Rahimi_2003}. It seems likely that either there is an additional dephasing mechanism responsible for such short dephasing times, or the magnetoresistance is cut off by the localization length. At low values of $k_{F}\ell\gtrsim1$ the localization length $\xi\approx\ell\exp\left(\pi k_{F}\ell/2\right)$ becomes comparable to the dephasing length. It has been shown theoretically that Eq.~(\ref{eq:WLMR}) is applicable even for $\xi<L_{\varphi}$ as long as $k_{F}\ell>1$. However, the meaning of the dephasing rate extracted from the data is different in this regime, since it is the localization length, rather than dephasing, that cuts off the magnetoresistance~\cite{Minkov_2004}. This may cause a saturation in the temperature dependence of the dephasing rate. It is, in principle, also possible that the dephasing rate at very low carrier density is dominated by the physics of density inhomogeneity~\cite{Germanenko_2001} (i.e., disorder-induced puddles), which is not included in the theory leading to Eq.~(\ref{eq:tauPhi}). 
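As a simple numerical aid, Eq.~(\ref{eq:tauPhi}) can be inverted for $\tau_{\varphi}$. The helper below is a sketch with $\gamma=1$ and purely illustrative input values (not the parameters of our samples); it makes explicit the $\tau_{\varphi}\propto1/T$ behavior whose apparent saturation in the data is discussed above:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def tau_phi_ee(tau, T, E_F, g, gamma=1.0):
    """Dephasing time from tau/tau_phi = gamma (k_B T / E_F) ln g:
    tau in seconds, T in kelvin, E_F in joules, g the dimensionless
    conductance (g > 1); gamma is a factor of order unity."""
    return tau / (gamma * (K_B * T / E_F) * math.log(g))
```

The predicted $\tau_{\varphi}$ scales as $1/T$, so a saturation of the extracted dephasing time at low $T$ signals a breakdown of Eq.~(\ref{eq:tauPhi}), as discussed above.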
\section{Temperature dependence of resistivity within the Fermi liquid model\label{sec:Temperature-dependence-of}} \begin{figure} \includegraphics[width=0.3\textwidth]{TminTheory}\caption{Black squares and red circles correspond to the maximum and the minimum of $\rho(T)$ in Eq.~(\ref{eq:Resistivity}). The inset shows the temperature of the resistivity minimum as a function of electron density.} \label{fig:MinMaxTheory} \end{figure} High-density measurements have identified the standard $\ln T$ quantum correction to the 2D resistivity~\cite{Pudalov_1998logT,Brunthaler_2001,Pudalov_1999}. Since the Boltzmann theory can be continuously extended to low densities, it is natural to expect that the $\ln T$ correction is present in the system at all densities, in addition to the Boltzmann contribution. The weak-field magnetoresistance discussed above is an additional argument in favor of Fermi liquid behavior at low densities near the metal-insulator transition. Therefore, we assume the presence of the $\ln T$ quantum correction to the Boltzmann resistivity up to the onset of the strongly insulating behavior~\cite{Pudalov_1998logT,Altshuler_Martin_Maslov_Pudalov_Prinz_Brunthaler_Bauer_2000,Altshuler_2001}. With decreasing electron density, however, the metallic temperature dependence of the Boltzmann resistivity becomes pronounced. As a result, at low densities there is a competition between the insulating and the metallic temperature dependences, which are simultaneously present since they arise from distinct microscopic mechanisms. We consider the minimal model of transport that describes this behavior, including both the semi-classical Boltzmann contribution and the quantum weak localization contribution, \begin{equation} \rho(T)=\frac{1}{\rho_{B}^{-1}(T)+\sigma_{WL}(T)},\label{eq:Resistivity} \end{equation} where $\rho_{B}(T)$ is the temperature-dependent Boltzmann resistivity given by Eq.~(\ref{eq:Boltzmann}). 
The quantum correction to the conductivity in Eq.~(\ref{eq:Resistivity}) is given by the standard theory, \begin{equation} \sigma_{WL}=-g_{v}G_{0}\frac{1}{2\pi}\ln\frac{\tau_{\varphi}}{\tau}.\label{eq:WLT0} \end{equation} Figure~\ref{rhoTtheory} shows the calculated temperature and density dependence of the resistivity for the parameters extracted from the fits of the data for sample A. In Fig.~\ref{fig:MinMaxTheory}, we present the theoretical results corresponding to those shown in Fig.~\ref{MinMax}. The theoretical results in Fig.~\ref{fig:MinMaxTheory} closely resemble the experimental results shown in Fig.~\ref{MinMax} for sample A, thus demonstrating that the 2D MIT may indeed be a crossover phenomenon. Here the logarithmic correction becomes apparent only below the experimentally accessible temperatures $T\sim0.1\,\mathrm{K}$, which is qualitatively similar to the experimental situation. In Fig.~\ref{fig:SampleBTheory}, we present the theoretical calculation for the parameters corresponding to the experimental sample B, which qualitatively reproduces the experimental results for sample B shown in Figs.~\ref{rhoT}(b),~\ref{rhoT}(d),~and~\ref{MinMax}(b). Comparison of Figs.~\ref{rhoT}~and~\ref{MinMax} with Figs.~\ref{rhoTtheory}-\ref{fig:SampleBTheory} establishes that the crossover picture of the 2D MIT is valid qualitatively (and probably even quantitatively), i.e., both the screening-induced metallic temperature dependence and the quantum weak localization temperature dependence are present in the resistivity. \begin{figure} \includegraphics[width=0.5\textwidth]{Figure5}\caption{Theoretical results for the resistivity as a function of temperature for the parameters of sample B; panel (b) shows a zoom-in of the low-temperature region of the results in panel (a). (c) Red circles and black squares correspond to the numerically calculated minimum and maximum of the temperature dependence of the resistivity for sample B, respectively. 
} \label{fig:SampleBTheory} \end{figure} \section{Discussion and conclusion} In this paper, we present experimental and theoretical results for the density-dependent low-temperature transport properties of high-mobility 2D Si MOSFETs manifesting the 2D MIT phenomena, with the critical goal of searching for the possible presence of the quantum weak localization (i.e., $\ln T$) corrections to the resistivity on the apparent metallic side of the so-called 2D metal-insulator transition. Our detailed analyses of the experimental transport data indicate the existence of a resistivity minimum at a density-dependent characteristic low temperature in the effective metallic regime. The existence of this characteristic temperature, with the resistivity $\rho(T)$ increasing with both increasing and decreasing temperature away from this minimum, points to the presence of two competing transport mechanisms in the system, one ``metallic'' (i.e., $d\rho/dT>0$) and the other ``insulating'' (i.e., $d\rho/dT<0$), which balance each other at this low-$T$ resistivity minimum, with the localization effect dominating at still lower temperatures. We identify the two competing mechanisms as the temperature-induced reduction of screening (leading to the metallic $d\rho/dT>0$ behavior) and the quantum-interference-induced weak localization contribution (leading to the $\ln T$ weak localization correction with $d\rho/dT<0$), which dominate, respectively, the higher- and the lower-temperature sides of the resistivity minima. We find that the minimum occurs mostly at temperatures which are experimentally inaccessible in high-mobility samples (perhaps due to the well-known electron heating problem in 2DEG semiconductors), thus providing a possible explanation for why most experimental measurements in high-mobility samples do not manifestly show the $\ln T$ insulating behavior at low temperatures in the metallic regime. 
We believe that the $\ln T$ weak localization behavior would routinely show up in the metallic transport properties of high-mobility samples if much lower experimental electron temperatures could be achieved in future measurements. In fact, our work indicates that the most straightforward experimental technique for finding signatures of localization in the metallic regime of high-mobility 2D semiconductor systems is to look for extrema in the resistivity $\rho(T)$ by numerically obtaining the solutions of $d\rho/dT=0$ in the experimental $\rho(T,n)$ data at fixed density $n$. The low-temperature minimum in the resistivity corresponds, at each density, to the temperature below which the localization correction dominates the semi-classical metallic temperature dependence. Depending on the carrier density, this minimum could lie at inaccessibly low temperatures, but there should still be some signatures of the minima in the data. The observation of these low-temperature density-dependent minima in the transport data (Fig.~\ref{MinMax} for our samples) indicates that the 2D MIT is a crossover and not a true quantum phase transition; the absence of a manifest $\ln T$ weak localization effect in the resistivity is simply a consequence of the insulating localization correction being overwhelmed by the strong metallic temperature dependence of the semi-classical Drude-Boltzmann resistivity, as our theoretical results in Figs.~\ref{rhoTtheory}-\ref{fig:SampleBTheory} clearly demonstrate. The excellent qualitative agreement between our theory and our experiment is strong evidence in favor of the 2D MIT being a crossover phenomenon. 
The possibility of the 2D MIT being a Fermi-liquid crossover phenomenon driven by disorder in high-mobility low-density MOSFETs, with the weak localization effects masked by finite-temperature Drude-Boltzmann effects, was also pointed out in the early experimental works of Pudalov~\cite{Altshuler_Martin_Maslov_Pudalov_Prinz_Brunthaler_Bauer_2000,Altshuler_Maslov_Pudalov_2000,Prus_2001} and of Pepper~\cite{Lewalle_2002,Lewalle_2004}. One feature, prominent both in our experimental data (Figs.~\ref{rhoT}~and~\ref{MinMax}) and in our theory (Figs.~\ref{rhoTtheory}-\ref{fig:SampleBTheory}), needs to be specifically mentioned in addition to the resistivity minima discussed before: the existence of the high-temperature resistivity maxima in the data (black squares in the figures), with $d\rho/dT<0$ above this temperature (until phonons become important at still higher temperatures). This quantum-classical high-temperature crossover behavior is ubiquitous in all high-mobility 2D systems in the metallic phase, where, after the sharp initial rise of $\rho(T)$ with increasing $T$, $\rho(T)$ goes through a maximum at a density-dependent ``high'' temperature and slowly decreases beyond this characteristic temperature until phonon scattering takes over at still higher temperatures. We note that the characteristic temperature of the resistivity maxima (black squares) decreases rapidly with decreasing density, whereas the characteristic temperature of the resistivity minima (red circles) increases with decreasing density. Once these two lines come close together ($n\sim1.2\times10^{11}\,\mathrm{cm}^{-2}$ for sample A and $n\sim1.8\times10^{11}\,\mathrm{cm}^{-2}$ for sample B, see Fig.~\ref{MinMax}), the system simply behaves as an insulator at all lower densities, since $d\rho/dT<0$ for all densities and temperatures below this intersection regime of the black squares and red circles. 
This finite-temperature behavior is also apparent in our theoretical curves [see Fig.~\ref{fig:MinMaxTheory} for sample A and Fig.~\ref{fig:SampleBTheory}(c) for sample B]. Before concluding, we point out that, within our model of the Boltzmann resistivity due to screened charged impurity scattering and weak localization due to quantum interference [i.e., Eq.~(\ref{eq:Resistivity}) in Sec.~\ref{sec:Temperature-dependence-of}], we can actually derive a leading-order analytical formula for the characteristic temperature (i.e., the red-circle plots in the figures) of the resistivity minima, below which the weak localization effect should dominate the metallic transport properties. Using the leading-order (linear-in-$T$) low-temperature expansion of $\rho_{B}(T)$ in Eq.~(\ref{eq:Resistivity}), we find the characteristic temperature $T_{m}$ at which $d\rho/dT=0$ to be \begin{equation} T_{m}\propto T_{F}/\sigma_{B}(T=0), \end{equation} where $T_{F}\propto n$ is the Fermi temperature and $\sigma_{B}(T=0)=\rho^{-1}(T=0)$ is the zero-temperature Boltzmann conductivity due to charged impurity scattering. It is well known~\cite{DasSarma_2013b} that $\sigma_{B}(T=0)$ obeys an approximate scaling law with the carrier density, \begin{equation} \sigma_{B}\sim n^{\alpha+1}, \end{equation} where $\alpha(n)\approx0.3$ in Si MOSFETs in the low-density metallic regime. This leads to a rather weak density dependence of the characteristic temperature $T_{m}$, \begin{equation} T_{m}\sim n^{-0.3}, \end{equation} which is approximately consistent with the experimental and theoretical numerical results for the red-circle lines in Figs.~\ref{MinMax},~\ref{fig:MinMaxTheory},~and~\ref{fig:SampleBTheory}(c). The important point to note is that the weak localization correction becomes important at lower density because the weak localization effect becomes progressively stronger as the Drude conductivity decreases with decreasing carrier density. 
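The analytic estimate above can be checked directly on the minimal model of Eq.~(\ref{eq:Resistivity}). The sketch below uses a toy linearized Boltzmann conductivity $\sigma_{B}(T)=\sigma_{0}(1-CT/T_{F})$ and $\tau_{\varphi}\propto1/T$ in the weak localization term (all parameter values are hypothetical, chosen only to make the competition visible); locating the resistivity minimum numerically reproduces the closed-form result $T_{m}=g_{q}T_{F}/(C\sigma_{0})\propto T_{F}/\sigma_{B}(0)$ that follows from $d\rho/dT=0$:

```python
import math

def sigma_total(T, sigma0, C, T_F, gq, tau_phi0, tau):
    """sigma_B + sigma_WL of the minimal additive model (arbitrary units):
    sigma_B(T)  = sigma0 (1 - C T / T_F)   linearized screening term,
    sigma_WL(T) = -gq ln(tau_phi / tau) with tau_phi = tau_phi0 / T,
    so that sigma_WL grows like +gq ln T at low temperature."""
    return sigma0 * (1.0 - C * T / T_F) - gq * math.log(tau_phi0 / (T * tau))

def resistivity_minimum(sigma0, C, T_F, gq, tau_phi0, tau,
                        T_lo=1e-4, T_hi=1e-1, n=100000):
    """Scan for the temperature T_m of the resistivity minimum, i.e. the
    maximum of sigma_total (rho = 1/sigma, so min rho <=> max sigma)."""
    best_T, best_s = None, -math.inf
    for i in range(n + 1):
        T = T_lo + (T_hi - T_lo) * i / n
        s = sigma_total(T, sigma0, C, T_F, gq, tau_phi0, tau)
        if s > best_s:
            best_s, best_T = s, T
    return best_T
```

Doubling $\sigma_{0}$ halves the numerically found $T_{m}$, which is exactly the $T_{m}\propto T_{F}/\sigma_{B}(0)$ dependence used in the scaling argument above.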
On the other hand, if $\sigma_{B}(T)$ is temperature independent, as it is for low-mobility samples, the weak localization $\ln T$ correction would be visible at all carrier densities. We conclude by emphasizing that our theory involves many approximations which need to be improved in future work. We neglect interaction effects beyond the finite-temperature screening by the electrons themselves, which we include within the RPA (i.e., an infinite sum of bubble diagrams). At the low carrier densities of interest in the 2D MIT problem, interaction effects are likely to be important, but we neglect them in the spirit of obtaining the leading-order result within the minimal model. In addition, we believe that the interaction effects may be substantially suppressed by finite temperature, since $T/T_{F}$ is not particularly small at the experimental densities and temperatures. We also assume, rather unrealistically, that the weak localization quantum correction may simply be added to the Drude-Boltzmann conductivity as a $\ln T$ correction, which is obviously a simplification made in the spirit of developing the minimal physical model that includes both the metallic and the insulating temperature dependence within a single unifying scheme. In particular, neither the Boltzmann theory nor the simple weak localization correction is strictly applicable in the strongly disordered situation close to the metal-insulator transition, where $k_{F}\ell\sim1$; we have assumed in this work that such a minimal theory (i.e., the Boltzmann conductivity along with the $\ln T$ weak localization quantum correction) can be continuously extended from the high-density ($k_{F}\ell\gg1$) regime to the low-density regime ($k_{F}\ell\gtrsim1$) as long as the system is still nominally in the metallic phase. 
Our minimal theory obviously becomes progressively worse quantitatively as the carrier density decreases, but we think that it remains qualitatively valid all the way down to $k_{F}\ell\gtrsim1$ in the metallic phase. We have neglected all phonon scattering effects in the theory, which probably become important for $T\gtrsim10\,\mathrm{K}$, outside the regime of our interest. It is straightforward to include phonon scattering in the theory: it adds a resistivity increasing linearly with $T$ for $T>T_{\mathrm{BG}}$ ($\sim10\,\mathrm{K}$ in Si MOSFETs), where $T_{\mathrm{BG}}$ is the Bloch-Gr\"uneisen temperature. Since our interest in the current paper is the low-temperature 2D MIT physics, phonon scattering effects are irrelevant for our problem. A 2DEG in the presence of disorder and electron-electron interaction is expected to manifest a diffusive (Altshuler-Aronov) interaction correction to the conductivity~\cite{AAbook,AkkermansBook,Zala_2001,Zala_2001R,Gornyi_2004,Burdis_1988,Klimov_2008}, which gives another $\ln T$ contribution (i.e., in addition to the weak localization correction) to the temperature dependence of the conductivity. However, the prefactor in front of this $\ln T$ term, given by a combination of singlet and triplet components, is not known at the low electron densities of interest in the current work, since interaction effects are non-perturbatively strong at low carrier densities. This Altshuler-Aronov effect does not change the functional form of the temperature dependence of the resistivity, and therefore our theory as described by Eqs.~(\ref{eq:Resistivity})~and~(\ref{eq:WLT0}), which does not include the Altshuler-Aronov part of the electron-electron interaction correction (note that the ballistic part of the electron-electron interaction effect is included in our Boltzmann theory of Sec.~\ref{sec:Boltzmann}), nevertheless qualitatively describes the data. 
It is, however, important to point out that naively adding an Altshuler-Aronov $\ln T$ term to our theory would be an incorrect double-counting of many-body effects, since the screening effect we included non-perturbatively in the theory already contains the Hartree part of the interaction effect in the ballistic regime (which crosses over to the $\ln T$ effect in the diffusive regime~\cite{Zala_2001,Zala_2001R}), and thus there is no need to add a separate $\ln T$ term arising from the Altshuler-Aronov effect. Finally, we mention that we have not discussed at all the nature of the actual crossover to the strongly localized exponential temperature dependence of the resistivity at very low carrier density, as our focus in this work has been entirely on the signature of weak localization in the putative metallic regime. The issue of the strong localization crossover has recently been discussed in great detail by two of us~\cite{DasSarma_2014}. One key issue that remains open in this context is the role of impurity-induced density inhomogeneity, or puddle formation, in the 2DEG as the system crosses over to the strongly insulating phase and screening fails completely. Such inhomogeneous puddles could lead to percolation physics competing with the physics of Anderson localization. In fact, the crossover to the strong localization behavior may itself sometimes be considered a percolation transition, as was done in Ref.~\onlinecite{Tracy_2009}. The interplay of puddle physics and localization physics in the strongly interacting 2D system is an interesting open question in the 2D MIT problem. For our specific considerations, the puddle size could act as a cutoff for the dephasing length, explaining why the low-temperature dephasing length appears short compared with the standard Fermi liquid prediction. This work is supported by NSA-LPS-CMTC.
\section{Introduction} This paper aims to address two principal questions in the degree diameter problem (a survey of which can be found in \cite{degree-diameter-survey}). The first question, posed in \cite{when-cayley}, concerns when the G\'omez graphs are Cayley. To this question we provide a solution in the extremal case, though note that the paper of G\'omez \cite{gomez} defines a broader family of graphs than we address here. The second question we address regards the similarity in the construction methods used in the Faber-Moore-Chen graphs \cite{faber-moore-chen, faber-moore-chen-2} and the G\'omez graphs, and shows that under reasonable assumptions the G\'omez graphs provide an optimal construction. This paper is divided into material serving four logical purposes. The first section on tau and sigma sequences provides a technical proof necessary in subsequent work, whose inclusion at a later stage would interrupt the natural flow of argument. The second section on word graphs contains a discussion of a natural generalisation of the Faber-Moore-Chen graphs and G\'{o}mez graphs. This section achieves two main goals: firstly, an optimality result for the G\'{o}mez graphs, and secondly, providing the motivation for the argument pursued in the remaining sections. Sections 4 through 7 deal with the problem of classifying the full automorphism group of extremal directed G\'omez graphs, and then show the limitation of the technique used for dealing with all G\'omez graphs. The final section concludes with a list of relevant questions which remain open. \section{Tau and Sigma Sequences} In this section we define two special sequences, and we aim to count how many times a given initial value may occur in these sequences.
We define a \textit{$\tau$-sequence} as an ordered sequence of $n$ integers $a_1 a_2 \dots a_n$ such that \begin{enumerate}[label=\textnormal{(\roman*)}] \item \label{tau-at-least-one} $a_i \geq 0$, \item \label{tau-descending} if $a_i \geq 1$ then $a_{i - 1} = a_i - 1$, \item \label{tau-is-three-seqs} there are at most three $i$ such that $a_i = 0$, \item \label{tau-rotate} if $a_1 a_2 \dots a_n$ is a $\tau$-sequence then $a_n a_1 a_2 a_3 \dots a_{n - 1}$ must also be a $\tau$-sequence. \end{enumerate} We shall use $\tau(n, \alpha, \beta)$ to indicate the number of $\tau$-sequences of length $n$ such that $a_1 = \alpha$ and $a_n = \beta$, and $\tau(n, \alpha)$ the number of $\tau$-sequences of length $n$ such that $a_1 = \alpha$. Informally we may think of a $\tau$-sequence of length $n$ as being a concatenation of either 1, 2 or 3 sequences of ascending integers starting at 0, or a rotation thereof. We now aim to evaluate $\tau(n, i)$ for each $0 \leq i < n$. \begin{lemma} For $n > 1$, $\tau(n, 0, 0) = n - 1$. \end{lemma} \begin{proof} We begin with the observation that in any $\tau$-sequence, if $a_k = 0$ and $a_{k + j} \neq 0$ for some range of $j$, then we may repeatedly apply \ref{tau-at-least-one} and \ref{tau-descending} to show that $a_{k + j} = j$. In particular, each $a_{k + j}$ is uniquely defined. Now suppose that $a_1 a_2 \dots a_n$ is a $\tau$-sequence with $a_1 = a_n = 0$. First suppose that there is no $j$ such that $1 < j < n$ and $a_j = 0$. If this is the case then each $a_j = j - 1$, and there is exactly one $\tau$-sequence with this property. Now suppose that there is some $k$ such that $1 < k < n$ and $a_k = 0$. By \ref{tau-is-three-seqs} there is no $j \not\in \{ 1, k, n \}$ such that $a_j = 0$. Hence for $1 < j < k$ we must have $a_j = j - 1$ and for $k < j < n$ we must have $a_j = j - k$. Hence there is exactly one $\tau$-sequence with $a_k = 0$ for each possible value of $k$. This gives rise to $n - 2$ possible $\tau$-sequences.
Hence, in total, we have $n - 1$ possible $\tau$-sequences of length $n$ with $a_1 = a_n = 0$, hence $\tau(n, 0, 0) = n - 1$. \end{proof} \begin{lemma} For $1 < i \leq n$, we have $\tau(n, 0, n - i) = i - 1$. \end{lemma} \begin{proof} Suppose that $a_1 a_2 \dots a_n$ is a $\tau$-sequence with $a_1 = 0$. If $a_n = \alpha > 0$, then we see that $a_1 a_2 \dots a_n$ is a $\tau$-sequence of length $n$ if, and only if, $a_1 a_2 \dots a_{n - 1}$ is a $\tau$-sequence of length $n - 1$. Hence $\tau(n, 0, \alpha) = \tau(n - 1, 0, \alpha - 1)$ for all $\alpha \geq 1$. We may repeatedly apply this observation to show that $\tau(n, 0, \alpha) = \tau(n - \alpha, 0, 0)$. Hence we have $\tau(n, 0, n - i) = \tau(n - (n - i), 0, 0) = \tau(i, 0, 0) = i - 1$. \end{proof} \begin{proposition} For $1 \leq i \leq n$, $\tau(n, n - i) = \frac{1}{2}(i^2 - i + 2)$. \end{proposition} \begin{proof} We proceed by induction on $i$. We start with the case $i = 1$. To calculate $\tau(n, n - 1)$, let $a_1 a_2 \dots a_n$ be a $\tau$-sequence with $a_1 = n - 1$. By \ref{tau-rotate} we equivalently have that $a_2 a_3 \dots a_n a_1$ is a $\tau$-sequence. Now we may repeatedly apply \ref{tau-at-least-one} and \ref{tau-descending} to show that this rotated sequence must be $0, 1, \dots, n - 1$, so there is a unique possible $\tau$-sequence. Hence $\tau(n, n - 1) = 1$. Now take $i = k$, with the hypothesis given for $i = k - 1$. Let $a_1 a_2 \dots a_n$ be a $\tau$-sequence with $a_1 = n - k$. By \ref{tau-descending} we have $a_2 = 0$ or $a_2 = n - (k - 1)$. In the first case, we may rotate to get a $\tau$-sequence beginning with 0 and ending with $n - k$. In the second case, we may rotate to get a $\tau$-sequence beginning with $n - (k - 1)$. This gives us \begin{align*} \tau(n, n - k) &= \tau(n, 0, n - k) + \tau(n, n - (k - 1)) \\ &= k - 1 + ((k - 1)^2 - (k - 1) + 2)/2 \\ &= (k^2 - k + 2)/2.
\qedhere \end{align*} \end{proof} We now define a \textit{$\sigma$-sequence} as a sequence of $n = 2k + 1$ integers $a_1 a_2 \dots a_n$ such that \begin{enumerate}[label=\textnormal{(\roman*)}] \item \label{sigma-at-least-zero} $a_i \geq 0$, \item \label{sigma-one-location} if $a_i = 0$ then $a_{i + (k + 1)} = 1$, \item \label{sigma-one-location-two} if $a_i = 1$ then either $a_{i - 1} = 0$ or $a_{i + k} = 0$, \item \label{sigma-descending} if $a_i > 1$ then $a_{i - 1} = a_i - 1$, \item \label{sigma-is-three-seqs} there are at most three $i$ such that $a_i = 0$, \item \label{sigma-rotate} if $a_1 a_2 \dots a_n$ is a $\sigma$-sequence then $a_n a_1 a_2 a_3 \dots a_{n - 1}$ must also be a $\sigma$-sequence. \end{enumerate} As this definition is not as readily visualisable as that of $\tau$-sequences, we give as examples each sequence for $n \in \{ 9, 11 \}$ without rotations. { \newcommand{\f}[1]{\textcolor{red}{#1}} \newcommand{\g}[1]{\textcolor{blue}{#1}} \newcommand{\h}[1]{\textcolor{green}{#1}} \begin{table}[H] \centering \begin{tabular}{l|l} $n = 9$ & $n = 11$ \\ \hline \f{01}234\f{1}234 & \f{01}2345\f{1}2345 \\ \f{01}\g{01}2\f{1}2\g{1}2 & \f{01}\g{01}23\f{1}2\g{1}23 \\ \f{01}2\g{01}\f{1}23\g{1} & \f{01}2\g{01}2\f{1}23\g{1}2 \\ \f{001}23\f{11}23 & \f{01}23\g{01}\f{1}234\g{1} \\ \f{001}\g{01}\f{11}2\g{1} & \f{001}234\f{11}234 \\ \f{001}\g{1}2\f{11}\g{01} & \f{001}\g{01}2\f{11}2\g{1}2 \\ \f{01}\h{1}\g{01}\f{1}\h{01}\g{1} & \f{001}2\g{01}\f{11}23\g{1} \\ \f{0001}2\f{111}2 & \f{001}\g{1}23\f{11}\g{01}2 \\ & \f{001}2\g{1}2\f{11}2\g{01} \\ & \f{01}\g{01}\h{01}\f{1}2\g{1}2\h{1} \\ & \f{01}\h{1}\g{01}2\f{1}\h{01}\g{1}2 \\ & \f{0001}23\f{111}23 \\ \end{tabular} \end{table} } In our table we have highlighted a pattern made by the 0s and 1s in these sequences which we aim to formalise and prove. 
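Before formalising the 01-group pattern, we note that the counting results above are easy to verify mechanically. The brute-force sketch below (an illustration, not part of the proofs) checks the closed form $\tau(n, n - i) = \frac{1}{2}(i^2 - i + 2)$ for small $n$; rotation closure is imposed by checking the descending condition cyclically, and that condition is read as applying to every nonzero entry, which is what the uniqueness arguments above use.

```python
from itertools import product

def is_tau(seq):
    # Conditions (i)-(iv), with the descending condition checked cyclically
    # (the predecessor of a_1 is a_n), which is equivalent to closure under
    # rotation, and applied to every nonzero entry.
    n = len(seq)
    if seq.count(0) > 3:
        return False
    return all(a == 0 or seq[(i - 1) % n] == a - 1
               for i, a in enumerate(seq))

def tau(n, alpha):
    # number of tau-sequences of length n with a_1 = alpha
    return sum(1 for s in product(range(n), repeat=n)
               if s[0] == alpha and is_tau(s))

# verify tau(n, n - i) = (i^2 - i + 2)/2 for all 1 <= i <= n, n <= 6
checked = all(tau(n, n - i) == (i * i - i + 2) // 2
              for n in range(2, 7) for i in range(1, n + 1))
```

The same exhaustive approach, with the extra conditions of the $\sigma$ definition, can be used to check the $\sigma$-sequence lists tabulated above.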
The patterns of 0s and 1s are of the following forms \[ 01\underbrace{\dots}_{k - 1}1\underbrace{\dots}_{k - 1}, \quad 001\underbrace{\dots}_{k - 2}11\underbrace{\dots}_{k - 2} \quad \text{or} \quad 0001\underbrace{\dots}_{k - 3}111\underbrace{\dots}_{k - 3}. \] We shall call these patterns \textit{01-groups}. We aim to show that each 0 or 1 in a $\sigma$-sequence occurs in a unique 01-group. In the following, suppose that $a_1 a_2 \dots a_n$ is a $\sigma$-sequence with $a_1 = 0$ and $a_n \neq 0$. \begin{lemma} \label{01_group_from_zero} There is some $1 \leq \alpha \leq 3$ such that $a_i = 0$ and $a_{i + (k + 1)} = 1$ for $1 \leq i \leq \alpha$, and $a_{\alpha + 1} = 1$. \end{lemma} \begin{proof} We let $\alpha$ be the largest number such that $a_i = 0$ for $1 \leq i \leq \alpha$. From \ref{sigma-is-three-seqs} we see that $\alpha \leq 3$. Hence, combining \ref{sigma-at-least-zero}, \ref{sigma-descending} and our definition of $\alpha$, we see that $0 \leq a_{\alpha + 1} < 2$ and $a_{\alpha + 1} \neq 0$, hence we must have $a_{\alpha + 1} = 1$. Finally, we may apply \ref{sigma-one-location} and the fact that $a_i = 0$ for $1 \leq i \leq \alpha$ to show that $a_{i + (k + 1)} = 1$. \end{proof} \begin{corollary} Every 0 in a $\sigma$-sequence is in a unique 01-group. \end{corollary} \begin{proof} Consider a $\sigma$-sequence with $a_i = 0$ for some $i$. Using \ref{sigma-rotate} we may consider a rotation of this sequence which moves this 0 from position $i$ to position 1, and then, if necessary, rotate further (moving this 0 to position 2 or 3) until $a_n \neq 0$. Then we may apply the previous lemma. \end{proof} \begin{lemma} Every 1 in a $\sigma$-sequence is in a unique 01-group. \end{lemma} \begin{proof} Applying \ref{sigma-one-location-two}, we have two possibilities if $a_i = 1$. In the first possibility, $a_{i - 1} = 0$. In this case, $a_{i - 1 + (k + 1)} = a_{i + k} = 1$ by \ref{sigma-one-location}. Hence, the second possibility that $a_{i + k} = 0$ is mutually exclusive with the first.
In the first possibility, we may use Lemma~\ref{01_group_from_zero} to find the 01-group from $a_{i - 1}$ which contains $a_i$, and in the second possibility we may do the same but from $a_{i + k}$ instead of $a_{i - 1}$. \end{proof} Let $\sigma(i, n)$ where $n = 2k + 1$ be the number of $\sigma$-sequences of length $n$ with $a_1 = i$. \begin{lemma} $\sigma(k, n) = 2$. \end{lemma} \begin{proof} If $a_1 = k$ in a $\sigma$-sequence, by \ref{sigma-rotate} we may consider a rotation such that $a_k = k$. Repeatedly applying \ref{sigma-descending} we may show that for $1 \leq i \leq k$ we have $a_i = i$. For $2 \leq i \leq k$, we have $a_i > 1$, and hence we have a block of $k - 1$ numbers in our sequence not in a 01-group. Hence, the only possible 01-group in the sequence is $01\underbrace{\dots}_{k - 1}1\underbrace{\dots}_{k - 1}$. As $a_1 = 1$, and each occurrence of 1 is in a 01-group, we must have this 01-group in our sequence and no other 01-group may be in this sequence. We may now consider rotating our $\sigma$-sequence again so that $a_1 = 0$. Now we may apply \ref{sigma-one-location} and \ref{sigma-descending} to show $a_i = i - 1$ for $2 \leq i \leq k + 1$ and $a_{i + (k + 1)} = i$ for $1 \leq i \leq k$, and this is the only $\sigma$-sequence containing $k$ up to rotation. Finally, we may rotate this sequence in two ways to make $a_1 = k$, hence $\sigma(k, n) = 2$. \end{proof} \begin{lemma} $\sigma(0, n) \geq 3$. \end{lemma} \begin{proof} For $k \geq 3$ we consider the $\sigma$-sequence with $a_1 = a_2 = a_3 = 0$, $a_4 = a_{k + 2} = a_{k + 3} = a_{k + 4} = 1$ and all other $a_i$ filled in using \ref{sigma-descending}. This sequence may be rotated to give $a_1 = 0$ in three different ways, hence in this case $\sigma(0, n) \geq 3$. For $k = 2$, we consider the $\sigma$-sequences 00111, 01110 and 01212 to see $\sigma(0, n) \geq 3$. \end{proof} \begin{lemma} $\sigma(i, n) < \sigma(i - 1, n)$ for $1 < i \leq k$.
\end{lemma} \begin{proof} Consider the map $\phi$ which takes a $\sigma$-sequence $a_1 a_2 \dots a_n$ to $a_n a_1 a_2 \dots a_{n - 1}$. If $a_1 = i$, then $a_n = i - 1$ by \ref{sigma-descending}, hence $\phi$ is an injective map from $\sigma$-sequences starting with $i$ to those starting with $i - 1$. Hence, to show $\sigma(i, n) < \sigma(i - 1, n)$ we need only find a $\sigma$-sequence with $a_1 = i - 1$ and $a_2 \neq i$. For $i \leq k - 1$, the sequence with $a_1 = 0, a_2 = a_{k + 2} = 1, a_{i + 1} = 0, a_{i + 2} = a_{i + k + 2} = 1$ and all other $a_j$ satisfying $a_j = a_{j - 1} + 1$ is a $\sigma$-sequence with $a_i = i - 1$ and $a_{i + 1} = 0 \neq i$. Hence, we can take a rotation of this by \ref{sigma-rotate} to find a $\sigma$-sequence with $a_1 = i - 1$ and $a_2 \neq i$. Now, for $i = k$, we consider the sequence $a_1 = 0, a_2 = a_{k + 2} = 1, a_{k - 1} = 0, a_k = 1$ and $a_n = 1$, and all other $a_j$ satisfying $a_j = a_{j - 1} + 1$. We see this is a $\sigma$-sequence with $a_{n - 1} = i - 1$ and $a_n = 1 \neq i$. Hence, again we can take a rotation of this by \ref{sigma-rotate} to find a $\sigma$-sequence with $a_1 = i - 1$ and $a_2 \neq i$. \end{proof} \section{Word Graphs} To facilitate our discussion of the G\'{o}mez graphs we first introduce the notion of a \textit{word graph} and word graph \textit{families}, which form a natural generalisation of the construction of the G\'{o}mez and Faber-Moore-Chen graphs (found in \cite{gomez} and \cite{faber-moore-chen} respectively). To define a word graph, fix some number $n$, the \textit{word length}, some set $\Pi_n \subseteq S_n$, the \textit{rules}, and some $m > n$, the \textit{alphabet size}. We define the word graph $G_m = \langle V, E \rangle$ as follows. 
Fix some arbitrary set $B$ such that $|B| = m$, let $V = \{ x_1 x_2 \dots x_n | x_i \in B, x_i = x_j \Leftrightarrow i = j \}$, that is the vertices of $G_m$ are the words of length $n$ on $B$ all of whose letters are distinct, and we form the directed adjacencies of $G_m$ by the following rules \[ x_1 x_2 \dots x_n \rightarrow \begin{cases} x_2 x_3 \dots x_n y, & y \in B \setminus \{ x_1, x_2, \dots, x_n \}, \\ x_{\pi(1)} x_{\pi(2)} \dots x_{\pi(n)}, & \pi \in \Pi_n. \end{cases} \] We define the word graph family of $\Pi_n$ to be $\{ G_{n + 1}, G_{n + 2}, \dots \}$ and denote it by $\mywgp{\Pi_n}$. For the following, let $\Pi_n$ be an arbitrary rule set and $G_m \in \mywgp{\Pi_n}$. We will refer to the rules of the form $x_1 x_2 \dots x_n \rightarrow x_2 x_3 \dots x_n y$ for $y \not\in \{ x_i \}$ as \textit{alphabet changing} and rules of the form $x_1 x_2 \dots x_n \rightarrow x_{\pi(1)} x_{\pi(2)} \dots x_{\pi(n)}$ as \textit{alphabet fixing}. For a vertex $v = x_1 x_2 \dots x_n$, we shall define $\alpha$ by $\alpha(v) = \{ x_1, x_2, \dots, x_n \}$ and refer to $\alpha(v)$ as the \textit{alphabet} of $v$. \begin{lemma} \label{all_at_least_n} For all $m \geq 2n$ we have $\mydiam{G_m} \geq n$. \end{lemma} \begin{proof} Letting $B = \{ x_1, x_2, \dots, x_n, y_1, y_2, \dots, y_n \}$ we consider any path from $u = x_1 x_2 \dots x_n$ to $v = y_1 y_2 \dots y_n$. As each rule of $G_m$ introduces at most one new letter which is not in $\alpha(u)$, and each letter of $\alpha(v)$ is not in $\alpha(u)$, we must have at least $|\alpha(v)| = n$ rules in a path from $u$ to $v$. \end{proof} \begin{lemma} \label{path_of_double_length} For all $m \geq 3n$ we have $\mydiam{G_m} \leq 2n$. \end{lemma} \begin{proof} Consider $u = x_1 x_2 \dots x_n, v = y_1 y_2 \dots y_n \in \myvert{G_m}$, and let $\{ z_1, z_2, \dots, z_n \} \subseteq B \setminus (\alpha(u) \cup \alpha(v))$. 
Letting $w = z_1 z_2 \dots z_n$, we can create a path of length $n$ from $u$ to $w$ by using the alphabet changing rule to append $z_i$ at the $i$\textsuperscript{th} step in the path. This is always possible as $\alpha(u) \cap \alpha(w) = \emptyset$. We then may form another such path of length $n$ from $w$ to $v$ by the same logic. Concatenating these two paths gives us a path of length $2n$ from $u$ to $v$. \end{proof} \begin{lemma} \label{eventual_diameter} For all $m \geq 4n$ we have $\mydiam{G_m} = \mydiam{G_{4n}}$. \end{lemma} \begin{proof} Take arbitrary $u, v \in \myvert{G_m}$. Lemma~\ref{path_of_double_length} tells us that $\mydist{u}{v} \leq 2n$, hence if we consider a shortest path connecting $u$ and $v$ we know it is of length at most $2n$. On such a path, whenever we encounter alphabet changing rules, denote by $z_i$ the element introduced by the $i$\textsuperscript{th} alphabet changing rule. Now let $B' = \alpha(u) \cup \alpha(v) \cup \{ z_i \}$, and note that we have $|B'| \leq 4n$. Hence, we see that the shortest path connecting $u$ and $v$ is in the subgraph $H$ induced by the vertices $\{ x_1 x_2 \dots x_n | x_i \in B' \} \subseteq \myvert{G_m}$, which is trivially isomorphic to a subgraph of $G_{4n}$. As $G_{4n}$ is also a subgraph of $G_m$, the result immediately follows. \end{proof} From Lemma~\ref{eventual_diameter}, for a given $\Pi_n$ we call $\mydiam{G_{4n}}$ the \textit{eventual diameter} of $\mywgp{\Pi_n}$, and note from Lemma~\ref{all_at_least_n} that the eventual diameter is at least $n$. \begin{proposition} A family of word graphs, $\mywgp{\Pi_n}$, is asymptotically close to the Moore bound if, and only if, its eventual diameter is $n$. \end{proposition} \begin{proof} Let $G_m \in \mywgp{\Pi_n}$ for $m \geq 4n$. Suppose the eventual diameter of $\mywgp{\Pi_n}$ is $n + \varepsilon$. The degree of $G_m$ is given by $|\Pi_n| + (m - n)$, hence letting $\alpha = |\Pi_n| - n$ we have $\mydeg{G_m} = m + \alpha$.
Finally, we may count the size of $G_m$ as follows \[ |\myvert{G_m}| = n! \binom{m}{n} = \frac{m!}{(m - n)!} = m^n + \mybigo{m^{n - 1}}. \] Now we recall the Moore bound for a directed graph of degree $d$ and diameter $k$ is given by $\mymoore{d}{k} = d^k + d^{k - 1} + \dots + 1 = d^k + \mybigo{d^{k - 1}}$. Hence, letting $d_m = \mydeg{G_m}$ and $k_m = \mydiam{G_m}$, we have \begin{align*} \lim_{m \to \infty} \left\{ \frac{|\myvert{G_m}|}{\mymoore{d_m}{k_m}} \right\} &= \lim_{m \to \infty} \left\{ \frac{m^n + \mybigo{m^{n - 1}}} {\mymoore{m + \alpha}{n + \varepsilon}} \right\} \\ &= \lim_{m \to \infty} \left\{ \frac{m^n + \mybigo{m^{n - 1}}} { (m + \alpha)^{n + \varepsilon} + \mybigo{m^{n + \varepsilon - 1}} } \right\} \\ &= \lim_{m \to \infty} \left\{ \frac{m^n + \mybigo{m^{n - 1}}} { m^{n + \varepsilon} + \mybigo{m^{n + \varepsilon - 1}} } \right\} \\ &= \begin{cases} 1, & \text{if $\varepsilon = 0$}, \\ 0, & \text{otherwise.} \end{cases} \qedhere \end{align*} \end{proof} Hence, we now introduce the restriction that a rule set $\Pi_n$ is \textit{admissible} if the eventual diameter of $\mywgp{\Pi_n}$ is $n$. For the rest of this section, we will only consider admissible sets $\Pi_n$. We also now introduce the further restriction that, for all $\pi \in \Pi_n$, $\pi(i) \leq i + 1$. Informally this means that the alphabet fixing rules cannot ``shift'' any letter to the left more than one space at a time. We call this \textit{shift restriction} and note that the G\'{o}mez and Faber-Moore-Chen graphs are shift restricted word graphs. For the remainder of this section, we will only consider shift restricted word graphs. We now show that the G\'{o}mez graphs are largest possible shift restricted word graphs for given degree and diameter. Let $\Pi_n$ be admissible and shift restricted, and let $G_m \in \mywgp{\Pi_n}$, where $m > n$. For each vertex $v \in \myvert{G_m}$ and letter $x \in B$ we introduce the function $p_x(v)$ which is the position of the letter $x$ in $v$. 
The function is defined by $p_{x_i}(x_1 x_2 \dots x_n) = i$ and $p_y(x_1 x_2 \dots x_n) = 0$ where $y \not\in \{ x_1, x_2, \dots, x_n \}$. \begin{lemma} \label{shift_one} For $u, v \in V$ with $u \rightarrow v$ and $y \in B$, we have $p_y(v) \geq p_y(u) - 1$. \end{lemma} \begin{proof} If $p_y(u) = 0$ then the result is immediate as $p_y(v') \geq 0$ for all $v' \in V$. Hence, suppose $u = x_1 x_2 \dots x_n$ and $y = x_i$. If $v = x_2 x_3 \dots x_n y$, then $p_{x_i}(v) = p_{x_i}(u) - 1$. If $v = x_{\pi(1)} x_{\pi(2)} \dots x_{\pi(n)}$ and $\pi(j) = i$ then $p_{x_i}(v) = j \geq \pi(j) - 1 = p_{x_i}(u) - 1$. \end{proof} \begin{corollary} \label{path_len_pos} For any $u, v \in V$ and $y \in B$, all paths connecting $u$ to $v$ have length at least $p_y(u) - p_y(v)$ (if $p_y(u) = 0$ and $p_y(v) > 0$ this becomes $n + 1 - p_y(v)$). \end{corollary} \begin{lemma} \label{at_least_diam_n} $\mydiam{G_m} \geq n$. \end{lemma} \begin{proof} Let $u = x_1 x_2 \dots x_n$ and $v = y x_1 x_2 \dots x_{n - 1}$. We have $p_{x_n}(u) = n$ and $p_{x_n}(v) = 0$, hence any path connecting $u$ and $v$ is at least length $n$. \end{proof} \begin{proposition} $\mydiam{G_m} = n$. \end{proposition} \begin{proof} By our assumption of the eventual diameter being $n$, we need only show this for $m < 4n$. Hence, consider $u, v \in \myvert{G_m}$ where $m < 4n$. Let $\phi : G_m \hookrightarrow G_{4n}$ be the inclusion from $G_m$ to $G_{4n}$, and consider a path of length at most $n$ from $u' = \phi(u)$ to $v' = \phi(v)$ in $G_{4n}$. Letting $B' = \alpha(u') \cup \alpha(v')$, we can see that any vertex $w \in G_{4n}$ satisfying $\alpha(w) \subseteq B'$ lies in the image of $\phi$. Suppose that on a path from $u'$ to $v'$ we introduce a letter $y \not\in B'$ via an alphabet changing rule, and call the vertex after this rule $w$. We have $p_y(w) = n$ and $p_y(v') = 0$, hence the remainder of the path is at least length $n$, but this contradicts that the path is of length at most $n$.
Therefore all vertices $w$ between $u'$ and $v'$ satisfy $\alpha(w) \subseteq B'$. Hence, we may use $\phi^{-1}$ to find a path connecting $u$ and $v$ of length at most $n$ in $G_m$. This shows $\mydiam{G_m} \leq n$, applying Lemma~\ref{at_least_diam_n} gives us the result. \end{proof} \begin{corollary} If $\Pi_n$ and $\mathrm{T}_n$ are admissible and shift restricted, and $|\Pi_n| < |\mathrm{T}_n|$, then each $G_m \in \mywgp{\Pi_n}$ and $H_m \in \mywgp{\mathrm{T}_n}$ have the same diameter and are the same size for all $m$, but $\mydeg{G_m} < \mydeg{H_m}$. \end{corollary} Hence we now may make the definition that an admissible shift restricted rule set $\Pi_n$ is \textit{optimal} if there exists no rule set $\mathrm{T}_n$ with $|\mathrm{T}_n| < |\Pi_n|$ which is also both admissible and shift restricted. \begin{lemma} \label{contains_cycle} For each $1 \leq i \leq n$ there exists some $\pi \in \Pi_n$ which contains an $i$-cycle. \end{lemma} \begin{remark} The proof of this lemma obscures that this is a fairly natural observation. We illustrate below an example path used in the lemma, noting that the key idea is that the letter $x_n$ has to ``jump'' the block of $y_i$s by exactly $k + 1$ spaces. At the point of the jump, we must have a permutation $\pi \in \Pi_n$, and the jump is a $(k + 1)$-cycle in that permutation. 
\[ \newcommand{\f}[1]{\textcolor{red}{#1}} \newcommand{\g}[1]{\textcolor{blue}{#1}} \begin{matrix} x_1 & x_2 & x_3 & x_4 & x_5 & x_6 & x_7 & x_8 & \f{x_9} \\ x_2 & x_3 & x_4 & x_5 & x_6 & x_7 & x_8 & \f{x_9} & \g{y_1} \\ x_3 & x_4 & x_5 & x_6 & x_7 & x_8 & \f{x_9} & \g{y_1} & \g{y_2} \\ x_4 & x_5 & x_6 & x_7 & x_8 & \f{x_9} & \g{y_1} & \g{y_2} & \g{y_3} \\ x_5 & x_6 & x_7 & x_8 & \f{x_9} & \g{y_1} & \g{y_2} & \g{y_3} & x_1 \\ x_6 & x_7 & x_8 & \f{x_9} & \g{y_1} & \g{y_2} & \g{y_3} & x_1 & x_2 \\ x_7 & x_8 & \f{x_9} & \g{y_1} & \g{y_2} & \g{y_3} & x_1 & x_2 & x_3 \\ x_7 & x_8 & \g{y_1} & \g{y_2} & \g{y_3} & \f{x_9} & x_1 & x_2 & x_3 \\ x_8 & \g{y_1} & \g{y_2} & \g{y_3} & \f{x_9} & x_1 & x_2 & x_3 & x_4 \\ \g{y_1} & \g{y_2} & \g{y_3} & \f{x_9} & x_1 & x_2 & x_3 & x_4 & x_5 \\ \end{matrix} \] \end{remark} \begin{proof} For $k \geq 1$ we show the existence of a permutation containing a $(k + 1)$-cycle, noting that when $k + 1 = n - 1$ the remaining fixed element of the permutation provides the missing ``1-cycle''. Let $u = x_1 x_2 \dots x_n$ and $v = y_1 y_2 \dots y_k x_n x_1 x_2 \dots x_{n - (k + 1)}$. As $p_{y_1}(u) = 0$ and $p_{y_1}(v) = 1$, the shortest path that connects $u$ and $v$ is of length $n$. Hence, let $u = u_0 \rightarrow u_1 \rightarrow \dots \rightarrow u_n = v$ be such a path. A trivial induction and Lemma~\ref{shift_one} shows for $1 \leq j \leq k$ we have $p_{y_j}(u_{i + j}) = n - i$. A similar induction starting from $u_0$ shows that $p_{x_n}(u_i) \geq n - i$ and an induction working backwards from $u_n$ shows that $p_{x_n}(u_i) \leq n - i + k + 1$. As eventually we have $p_{x_n}(u_n) = k + 1 > 0$ we know there exists $c$ such that $p_{x_n}(u_{c - 1}) = n - c + 1$ and $p_{x_n}(u_c) > n - c$. Now consider the first such $c$. 
For each $1 \leq j \leq k$ we cannot have $p_{x_n}(u_c) = n - c + j$, as we have that $p_{y_j}(u_c) = n - c + j$, and we cannot have some $j > k + 1$ such that $p_{x_n}(u_c) = n - c + j$, as we have $p_{x_n}(u_c) \leq n - c + k + 1$. The only possibility this leaves is that $p_{x_n}(u_c) = n - c + k + 1$. This can only happen if the edge connecting $u_{c - 1}$ and $u_c$ is alphabet fixing, corresponding to some rule $\pi$. We see that in $\pi$, each $y_j$ went from position $n - c + j + 1$ to position $n - c + j$, and $x_n$ went from position $n - c + 1$ to position $n - c + k + 1$; hence, letting $\alpha = n - c + 1$, we have the $(k + 1)$-cycle $((\alpha + k) \ (\alpha + k - 1) \ \dots \ \alpha)$. \end{proof} \begin{corollary} The G\'{o}mez graphs are optimal. \end{corollary} \noindent \textit{(The reader unfamiliar with the definition of the G\'omez graphs will find the definition at the beginning of Section~\ref{intro_to_gomez}).} \begin{proof} For $n = 2k + 1$, the set $\Pi_n$ which defines the G\'{o}mez graphs contains exactly one cycle of each length $1 \leq i \leq n$. For $n = 2k$, the set $\Pi_n$ which defines the G\'{o}mez graphs contains exactly one cycle of each length $1 \leq i \leq n$, $i \neq k$, and two cycles of length $k$. As each permutation is a permutation on $n = 2k$ elements, it is not possible to remove a permutation from $\Pi_n$ by eliminating only one $k$-cycle. \end{proof} Altogether, this shows that if we want to try to create new word graphs which are larger than the G\'{o}mez graphs for a given degree and diameter, then we will either have to consider non-admissible $\Pi_n$ to find small examples, which would be limited to $m < 2n$, or consider word graphs which are not shift restricted. We will now proceed in this section by establishing other properties shared by word graphs. In particular, we are interested in when they are Cayley, and shall provide a table of which values of $n$ and $m$ correspond to Cayley graphs.
We note that it is possible that other values of $n$ and $m$ can correspond to Cayley graphs also, but this can only happen if the graph $G_m \in \mywgp{\Pi_n}$ admits an automorphism outside of $S_m$. \begin{lemma} \label{symmetric_group_lemma} There is some group $H \leq \myaut{G_m}$ with $H \cong S_m$. \end{lemma} \begin{proof} We construct $H$ by taking all $\phi \in S_m$ acting naturally on $B$ and defining $\phi' : \myvert{G_m} \to \myvert{G_m}$ by $\phi'(x_1 x_2 \dots x_n) = \phi(x_1) \phi(x_2) \dots \phi(x_n)$. \end{proof} In light of this lemma, we quote a result available in \cite{regular-action, regular-action-2, regular-action-3} and used in \cite{when-cayley}, and apply it to classify when word graphs are Cayley. The following is an exhaustive table of values of $n$ and $m$ such that there is a subgroup of $S_m$ acting regularly on the tuples of length $n$, and the subgroups which have this action. \begin{table}[H] \centering \begin{tabular}{|l|l|l|} \hline $n$ & $m$ & Group \\ \hline $k$ & $k$ & $S_k$ \\ $k$ & $k + 1$ & $S_{k + 1}$ \\ $k$ & $k + 2$ & $A_{k + 2}$ \\ 2 & $q$ & Finite near-field \\ 3 & $q + 1$ & $\mypsl{2}{q}, \mypdeltal{2}{q} $ \\ 4 & 11 & $M_{11}$ \\ 5 & 12 & $M_{12}$ \\ \hline \end{tabular} \end{table} \begin{corollary} For $n, m$ in this table, the graph $G_m \in \mywgp{\Pi_n}$ is Cayley. \end{corollary} \begin{corollary} If $\myaut{G_m} \cong S_m$, then the graph $G_m \in \mywgp{\Pi_n}$ is Cayley if, and only if, $n$ and $m$ are in the given table. \end{corollary} Hence, we now conclude this section by establishing a test to determine whether a given family of word graphs satisfies $\myaut{G_m} \cong S_m$. First we shall need some definitions. Let $\Gamma_n$ be the Cayley graph $\mycayley{\Pi_n}{S_n}$. \begin{lemma} Letting $H$ be the subgraph of $G_m$ induced by the vertices $v$ with $\alpha(v) = \{ x_1, x_2, \dots, x_n \} \subset B$, we have $H \cong \Gamma_n$. \end{lemma} \begin{proof} This is simply a relabelling.
\end{proof} We will now refer to the graph $\Gamma_n$ as the \textit{alphabet fixing subgraph} of $G$, noting that it is unique up to isomorphism regardless of the choice of $\{ x_i \}$. Now we make two further definitions. We shall call a word graph $G$ \textit{alphabet stable} if there exists no automorphism $\phi \in \myaut{G}$ such that there exist some $u, v \in \myvert{G}$ with $\alpha(u) = \alpha(v)$ but $\alpha(\phi(u)) \neq \alpha(\phi(v))$. In other words, a word graph is alphabet stable if, and only if, every automorphism preserves whether arcs are alphabet changing or alphabet fixing. Second, we shall call a family of word graphs \textit{subregular} if the alphabet fixing subgraph $\Gamma_n$ of $G_m$ is regular, i.e. $\myaut{\Gamma_n} \cong S_n$. In the following let $G_m$ be a word graph which is alphabet stable and subregular. We now aim to show that $\myaut{G_m} \cong S_m$. \begin{lemma} \label{fixes_alpha_subs} If $\phi \in \myaut{G_m}$ fixes a vertex $u$, then $\phi$ fixes all $v$ such that $\alpha(u) = \alpha(v)$. \end{lemma} \begin{proof} Let $V = \{ v \in \myvert{G_m} | \alpha(v) = \alpha(u) \}$. Consider $\psi = \phi|_V$, the restriction of $\phi$ to the vertices of $V$. For any $v \in V$ we have, by alphabet stability, $\alpha(\psi(v)) = \alpha(\phi(v)) = \alpha(\phi(u)) = \alpha(u)$, hence we have $\psi(v) \in V$. As $\phi$ is an automorphism, $\psi$ is injective, and therefore a bijection from $V$ to itself as $V$ is finite. Hence $\psi$ is an automorphism of the subgraph induced by the vertices of $V$, which is the alphabet fixing subgraph $\Gamma_n$. As $G_m$ is subregular, any automorphism of $\Gamma_n$ which fixes a vertex must fix all of $\Gamma_n$. Therefore, as $\psi(u) = u$ we must have that $\psi$ is the identity on $V$.
\end{proof} \begin{lemma} \label{inductive_step} If $\phi \in \myaut{G_m}$ and $X, Y, Z \subset B$ with the following properties \begin{itemize} \item $X = \{ x_1, x_2, z_1, z_2, \dots, z_{n - 2} \}$, \item $Y = \{ y_1, y_2, z_1, z_2, \dots, z_{n - 2} \}$, \item $Z = \{ x_2, y_2, z_1, z_2, \dots, z_{n - 2} \}$, \item $\phi$ fixes all $v \in \myvert{G_m}$ with $\alpha(v) = X$ or $\alpha(v) = Y$, \end{itemize} then $\phi$ fixes all $v \in \myvert{G_m}$ with $\alpha(v) = Z$. \end{lemma} \begin{proof} Let $u = x_1 z_1 z_2 \dots z_{n - 2} x_2, v = y_1 z_1 z_2 \dots z_{n - 2} y_2$ and suppose we have $w, w' \in \myvert{G_m}$ such that $u \rightarrow w, v \rightarrow w'$ and $\alpha(w) = \alpha(w')$. As $x_2 \not\in Y$ and $y_2 \not\in X$, we must have that both $u \rightarrow w$ and $v \rightarrow w'$ are alphabet changing rules. Therefore, $x_1 \not\in \alpha(w), x_2 \in \alpha(w), y_1 \not\in \alpha(w')$ and $y_2 \in \alpha(w')$. Hence we have $\alpha(w) \supseteq (X \cap Y) \cup \{ x_2, y_2 \}$, but $|\alpha(w)| = |(X \cap Y) \cup \{ x_2, y_2 \}|$, and hence we have equality. We now see the rule $u \rightarrow w$ must introduce the letter $y_2$, and $w = z_1 z_2 \dots z_{n - 2} x_2 y_2$. Similarly, $w' = z_1 z_2 \dots z_{n - 2} y_2 x_2$. By our assumptions on $\phi$ we have $\phi(u) = u$, $\phi(v) = v$, and $\alpha(\phi(w)) = \alpha(\phi(w'))$ as $\alpha(w) = \alpha(w')$ and $G_m$ is alphabet stable. Hence we have $u \rightarrow \phi(w)$ and $v \rightarrow \phi(w')$ with $\alpha(\phi(w)) = \alpha(\phi(w'))$, so $\phi(w) = w$ and $\phi(w') = w'$. Now applying Lemma~\ref{fixes_alpha_subs} we get the desired result. \end{proof} \begin{lemma} \label{fixer_is_identity} The only $\phi \in \myaut{G_m}$ which fixes a vertex $u \in \myvert{G_m}$ and all $v \in \myvert{G_m}$ such that $u \rightarrow v$ is the identity. \end{lemma} \begin{proof} We may label $u$ as $x_1 x_2 \dots x_n$ taking $B$ to be $\{ x_1, x_2, \dots, x_n, y_1, y_2, \dots, y_{m - n} \}$.
For a vertex $v$, define $f(v) = |\{ x_1, x_2, \dots, x_n \} \cap \alpha(v)|$. We show by induction for $n \geq k \geq 0$ that $\phi$ fixes all $v$ such that $f(v) = k$. For $k = n$, take any $v$ with $f(v) = n$. We have $\alpha(v) = \alpha(u)$, and $u$ is fixed by $\phi$. Hence by Lemma~\ref{fixes_alpha_subs} $\phi$ fixes $v$ also. For $k = n - 1$, we have $f(v) = n - 1$. First we consider $\alpha(v) = \{ x_2, x_3, \dots, x_n, y \}$. In this case the vertex $v' = x_2 x_3 \dots x_n y$ satisfies $u \rightarrow v'$ and so $\phi(v') = v'$. Hence, as $\alpha(v) = \alpha(v')$ we use Lemma~\ref{fixes_alpha_subs} again to show that $\phi(v) = v$. For other $v$ with $f(v) = n - 1$, without loss of generality we may assume $\alpha(v) = \{ x_1, x_2, \dots, x_{n - 1}, y \}$. Applying Lemma~\ref{inductive_step} to the sets $\{ x_1, x_2, \dots, x_n \}$ and $\{ x_2, x_3, \dots, x_n, y \}$ we see that $v$ is fixed. For $k = c$ given the inductive hypothesis for $k = c + 1$, let $v \in \myvert{G_m}$ such that \[ \alpha(v) = \{ x_1, \dots, x_c, y_1, \dots, y_{n - c} \}. \] By applying the inductive hypothesis and Lemma~\ref{inductive_step} to the sets \[ \{ x_1, \dots, x_{c + 1}, y_1, \dots, y_{n - c - 1} \} \quad \text{and} \quad \{ x_1, \dots, x_{c + 1}, y_2, \dots, y_{n - c} \} \] we get the desired result. \end{proof} \begin{proposition} $\myaut{G_m} \cong S_m$. \end{proposition} \begin{proof} Let $H \leq \myaut{G_m}$ be as defined in Lemma~\ref{symmetric_group_lemma}. Suppose $\phi \in \myaut{G_m}$. Consider some $u \in \myvert{G_m}$, and define $\psi \in H$ such that $\psi(\phi(u)) = u$ and for all $v \in \myvert{G_m}$ with $u\rightarrow v$ via an alphabet changing rule we have $\psi(\phi(v)) = v$. Note that $\psi$ exists because alphabet stability ensures that this process corresponds to defining a single permutation of $B$. We now consider the automorphism $\psi \circ \phi$, which by Lemma~\ref{fixer_is_identity} must be the identity.
Hence $\psi = \phi^{-1}$ and $\phi \in H$. \end{proof} Now that we have seen that alphabet stability and subregularity are sufficient conditions to guarantee $\myaut{G_m} \cong S_m$, we devote the remainder of this section to creating tests to determine when a family of word graphs is alphabet stable and subregular. Our tests will only concern the counting of certain paths in the alphabet fixing subgraph. In the following we consider the word graph $G_m \in \mywgp{\Pi_n}$ with alphabet fixing subgraph $\Gamma_n$. \begin{lemma} \label{unique_n_path} If $u, v \in \myvert{G_m}$ such that $u \rightarrow v$ and $\alpha(u) \neq \alpha(v)$, then there is a unique path of length at most $n$ from $v$ to $u$, and this path has length exactly $n$. \end{lemma} \begin{proof} Without loss of generality we may take $u = x_1 x_2 \dots x_n$ and $v = x_2 x_3 \dots x_n y$. Considering a path $v = u_0 \rightarrow u_1 \rightarrow \dots \rightarrow u_k = u$ with $k \leq n$, we may repeatedly apply Corollary~\ref{path_len_pos} whilst considering $p_{x_i}(u_{i - 1})$ and $p_{x_i}(u_k)$ to deduce that $u_{i - 1} \rightarrow u_i$ must be the alphabet changing rule which introduces $x_i$. This forces $k = n$ and determines the path uniquely. \end{proof} Now suppose for all $u, v \in \myvert{\Gamma_n}$ with $u \rightarrow v$ we have either more than one path from $v$ to $u$ of length $n$, or we have a path of length less than $n$ from $v$ to $u$. \begin{lemma} \label{no_exchanges} There is no $\phi \in \myaut{G_m}$ and pair $u, v$ as above with $\alpha(\phi(u)) \neq \alpha(\phi(v))$. \end{lemma} \begin{proof} If such a $\phi$ exists then $\phi(u) \rightarrow \phi(v)$ by an alphabet changing rule, so by Lemma~\ref{unique_n_path} there is a unique path of length at most $n$ connecting $\phi(v)$ to $\phi(u)$. However, $\phi$ maps the paths from $v$ to $u$ to distinct paths from $\phi(v)$ to $\phi(u)$, and by our supposition there are either more than one path of length $n$ or a path of length less than $n$ from $v$ to $u$, a contradiction. \end{proof} \begin{proposition} $G_m$ is alphabet stable. \end{proposition} \begin{proof} Suppose $G_m$ is not alphabet stable. Let $\phi \in \myaut{G_m}$ and $u, v \in \myvert{G_m}$ such that $\alpha(u) = \alpha(v)$ and $\alpha(\phi(u)) \neq \alpha(\phi(v))$.
Consider a path from $u$ to $v$ of length at most $n$, say $u = u_0 \rightarrow \dots \rightarrow u_k = v$. Lemma~\ref{unique_n_path} shows that $\alpha(u) = \alpha(u_i)$ for each $i$. Now consider the path $\phi(u_0) \rightarrow \dots \rightarrow \phi(u_k)$. As $\alpha(\phi(u_0)) = \alpha(\phi(u)) \neq \alpha(\phi(v)) = \alpha(\phi(u_k))$ there must be some $c$ such that $\alpha(\phi(u_c)) \neq \alpha(\phi(u_{c + 1}))$. Hence, we have $u_c \rightarrow u_{c + 1}$ and $\alpha(u_c) = \alpha(u_{c + 1})$, but $\alpha(\phi(u_c)) \neq \alpha(\phi(u_{c + 1}))$, contradicting Lemma~\ref{no_exchanges}. \end{proof} \begin{lemma} If $\myaut{\Gamma_n}$ is not regular, there is an automorphism $\phi \in \myaut{\Gamma_n}$ such that for some $u, v \in \myvert{\Gamma_n}$ with $u \rightarrow v$ we have $\phi(u) = u$ but $\phi(v) \neq v$. \end{lemma} \begin{proof} Let $H < \myaut{\Gamma_n}$ be a regular subgroup, and let $\phi \in \myaut{\Gamma_n} \setminus H$. Consider $u \in \myvert{\Gamma_n}$ and let $\psi \in H$ be the automorphism such that $\psi(\phi(u)) = u$. Let $\phi' = \psi \circ \phi$, so $\phi'$ fixes $u$. As $\phi \not\in H$, $\phi'$ is not the identity (otherwise $\phi = \psi^{-1} \in H$). Hence there is some $v \in \myvert{\Gamma_n}$ such that $\phi'(v) \neq v$. Considering a path from $u$ to $v$, we must encounter a pair of consecutive vertices $u' \rightarrow v'$ on the path such that $\phi'(u') = u'$ and $\phi'(v') \neq v'$. \end{proof} \begin{corollary} If for all $u \in \myvert{\Gamma_n}$ and distinct $v, w \in \myvert{\Gamma_n}$ with $u \rightarrow v$ and $u \rightarrow w$ there exists some $k$ such that the number of paths of length $k$ from $v$ to $u$ is different from the number of paths of length $k$ from $w$ to $u$, then $\myaut{\Gamma_n}$ is regular. \end{corollary} We now combine these results and state our test. Let $G_m \in \mywgp{\Pi_n}$ be a word graph with alphabet fixing subgraph $\Gamma_n$. Let $u \in \myvert{\Gamma_n}$ be an arbitrary fixed vertex of $\Gamma_n$ and let $\{ v_i \}$ be the set of vertices such that $u \rightarrow v_i$.
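The path counts these tests rely on can be computed mechanically. The following is a minimal sketch (the helper name and the toy digraph are our own illustration, not objects from this paper): the number of directed paths of a given length between two vertices is found by dynamic programming over adjacency lists, equivalently an entry of a power of the adjacency matrix.

```python
# Sketch: count directed walks of length k from a to b in a digraph.
# The helper and the toy digraph below are hypothetical illustrations.

def count_paths(adj, a, b, k):
    """Number of directed walks of length k from a to b, by dynamic
    programming (equivalent to reading off an entry of the k-th power
    of the adjacency matrix)."""
    counts = {a: 1}
    for _ in range(k):
        new = {}
        for u, c in counts.items():
            for w in adj[u]:
                new[w] = new.get(w, 0) + c
        counts = new
    return counts.get(b, 0)

# Toy digraph: a directed 4-cycle 0 -> 1 -> 2 -> 3 -> 0 with a chord 2 -> 0.
adj = {0: [1], 1: [2], 2: [3, 0], 3: [0]}
print(count_paths(adj, 1, 0, 2))  # the single path 1 -> 2 -> 0
```

In practice one runs such a count from each out-neighbour $v_i$ back to $u$ for the relevant lengths.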
\begin{proposition} If the following conditions are satisfied, then $\myaut{G_m} \cong S_m$. \begin{itemize} \item for each pair of distinct $v_i, v_j$ there is some $k$ such that the number of paths of length $k$ from $v_i$ to $u$ is different from the number of paths of length $k$ from $v_j$ to $u$; \item each $v_i$ has either a path of length less than $n$ to $u$, or more than one path of length $n$ to $u$. \end{itemize} \end{proposition} \section{Introduction to G\'{o}mez Graphs} \label{intro_to_gomez} In our account of G\'{o}mez graphs, we shall use a notation modified from that of the original paper, more appropriate to our purposes. We note that this paper will only deal with the G\'{o}mez graphs corresponding to the graphs $\mydg{k}{k}$ and $\mydg{k}{k + 1}$. The technique used herein does not work for all $\mydg{k}{k'}$ where $k' \geq k$, as we shall show with explicit examples, and it runs into difficulty when pursued for the case $\mydg{k}{k + 2}$. Hence, we only deal with the cases which provide the extreme examples in degree-diameter, as opposed to dealing with the entire family. We begin by giving a definition of the alphabet fixing subgraphs $\Gamma_n$ of the G\'{o}mez graphs. For any $n$, define $k$ so that either $n = 2k + 1$ or $n = 2k$, and let $B$ be any set such that $|B| = n$. We define the graph $\Gamma_n = \langle V, E \rangle$ as follows. The set $V$ of vertices is given by $V = \{ x_1 x_2 \dots x_n \mid x_i \in B, x_i = x_j \Leftrightarrow i = j \}$, that is, $V$ is the set of all words of length $n$ on the alphabet $B$ with distinct letters, and the set $E$ is given by the directed adjacencies \[ x_1 x_2 \dots x_n \rightarrow \begin{cases} x_2 x_3 \dots x_{k - i} x_1 x_{k - i + 2} x_{k - i + 3} \dots x_n x_{k - i + 1}, & \text{for $0 \leq i < k$}, \\ x_2 x_3 \dots x_n x_1, & \text{for $i = k$}. \end{cases} \] Informally, each of these rules splits the word into a left and right half and rotates each half by one.
The size of the left half is not allowed to exceed that of the right, and we also allow an empty left half. We now give the example adjacencies for the cases $n = 6$ and $n = 7$ with the left and right halves coloured for clarity. \begin{table}[H] \centering \begin{tabular}{c|c} $n = 6$ & $n = 7$ \\ \hline $ x_1 x_2 x_3 x_4 x_5 x_6 \rightarrow \begin{cases} \textcolor{blue}{x_2 x_3 x_1}\textcolor{red}{x_5 x_6 x_4} \\ \textcolor{blue}{x_2 x_1}\textcolor{red}{x_4 x_5 x_6 x_3} \\ \textcolor{blue}{x_1}\textcolor{red}{x_3 x_4 x_5 x_6 x_2} \\ \textcolor{red}{x_2 x_3 x_4 x_5 x_6 x_1} \end{cases}$ & $ x_1 x_2 x_3 x_4 x_5 x_6 x_7 \rightarrow \begin{cases} \textcolor{blue}{x_2 x_3 x_1}\textcolor{red}{x_5 x_6 x_7 x_4} \\ \textcolor{blue}{x_2 x_1}\textcolor{red}{x_4 x_5 x_6 x_7 x_3} \\ \textcolor{blue}{x_1}\textcolor{red}{x_3 x_4 x_5 x_6 x_7 x_2} \\ \textcolor{red}{x_2 x_3 x_4 x_5 x_6 x_7 x_1} \end{cases}$ \\ \end{tabular} \end{table} Now we introduce terminology and a visual representation of these rules which we will make use of throughout our proof. First, we note that the graph $\Gamma_n$ has $k + 1$ rules; we shall call these rules $\pi_i$ for $0 \leq i \leq k$, where for $0 \leq i < k$ rule $\pi_i$ is given by $\pi_i(x_1 x_2 \dots x_n) = x_2 x_3 \dots x_{k - i} x_1 x_{k - i + 2} x_{k - i + 3} \dots x_n x_{k - i + 1}$, and $\pi_k(x_1 x_2 \dots x_n) = x_2 x_3 \dots x_n x_1$. In this notation, we now show our visual representation of the rules in the case $n = 8$.
\begin{center} \begin{tikzpicture} \foreach \x in {1,...,8} { \draw (\x, 0) node {}; \draw (\x, 1) node {}; \draw (\x, 3) node {}; \draw (\x, 4) node {}; \draw (\x, 6) node {}; \draw (\x, 7) node {}; \draw (\x, 9) node {}; \draw (\x, 10) node {}; \draw (\x, 12) node {}; \draw (\x, 13) node {}; } \draw [->, gray] (1, 13) to (4, 12); \draw [->, gray] (2, 13) to (1, 12); \draw [->, gray] (3, 13) to (2, 12); \draw [->, gray] (4, 13) to (3, 12); \draw [->, gray] (5, 13) to (8, 12); \draw [->, gray] (6, 13) to (5, 12); \draw [->, gray] (7, 13) to (6, 12); \draw [->, gray] (8, 13) to (7, 12); \foreach \x in {1,...,8} { \node[fill=none, draw=none] at (\x, 13.5) {$x_\x$}; } \node[fill=none, draw=none] at (1, 11.5) {$x_2$}; \node[fill=none, draw=none] at (2, 11.5) {$x_3$}; \node[fill=none, draw=none] at (3, 11.5) {$x_4$}; \node[fill=none, draw=none] at (4, 11.5) {$x_1$}; \node[fill=none, draw=none] at (5, 11.5) {$x_6$}; \node[fill=none, draw=none] at (6, 11.5) {$x_7$}; \node[fill=none, draw=none] at (7, 11.5) {$x_8$}; \node[fill=none, draw=none] at (8, 11.5) {$x_5$}; \node[fill=none, draw=none] at (0, 12.5) {$\pi_0$}; \draw [->, gray] (1, 10) to (3, 9); \draw [->, gray] (2, 10) to (1, 9); \draw [->, gray] (3, 10) to (2, 9); \draw [->, gray] (4, 10) to (8, 9); \draw [->, gray] (5, 10) to (4, 9); \draw [->, gray] (6, 10) to (5, 9); \draw [->, gray] (7, 10) to (6, 9); \draw [->, gray] (8, 10) to (7, 9); \foreach \x in {1,...,8} { \node[fill=none, draw=none] at (\x, 10.5) {$x_\x$}; } \node[fill=none, draw=none] at (1, 8.5) {$x_2$}; \node[fill=none, draw=none] at (2, 8.5) {$x_3$}; \node[fill=none, draw=none] at (3, 8.5) {$x_1$}; \node[fill=none, draw=none] at (4, 8.5) {$x_5$}; \node[fill=none, draw=none] at (5, 8.5) {$x_6$}; \node[fill=none, draw=none] at (6, 8.5) {$x_7$}; \node[fill=none, draw=none] at (7, 8.5) {$x_8$}; \node[fill=none, draw=none] at (8, 8.5) {$x_4$}; \node[fill=none, draw=none] at (0, 9.5) {$\pi_1$}; \draw [->, gray] (1, 7) to (2, 6); \draw [->, gray] (2, 7) to 
(1, 6); \draw [->, gray] (3, 7) to (8, 6); \draw [->, gray] (4, 7) to (3, 6); \draw [->, gray] (5, 7) to (4, 6); \draw [->, gray] (6, 7) to (5, 6); \draw [->, gray] (7, 7) to (6, 6); \draw [->, gray] (8, 7) to (7, 6); \foreach \x in {1,...,8} { \node[fill=none, draw=none] at (\x, 7.5) {$x_\x$}; } \node[fill=none, draw=none] at (1, 5.5) {$x_2$}; \node[fill=none, draw=none] at (2, 5.5) {$x_1$}; \node[fill=none, draw=none] at (3, 5.5) {$x_4$}; \node[fill=none, draw=none] at (4, 5.5) {$x_5$}; \node[fill=none, draw=none] at (5, 5.5) {$x_6$}; \node[fill=none, draw=none] at (6, 5.5) {$x_7$}; \node[fill=none, draw=none] at (7, 5.5) {$x_8$}; \node[fill=none, draw=none] at (8, 5.5) {$x_3$}; \node[fill=none, draw=none] at (0, 6.5) {$\pi_2$}; \draw [->, gray] (1, 4) to (1, 3); \draw [->, gray] (2, 4) to (8, 3); \draw [->, gray] (3, 4) to (2, 3); \draw [->, gray] (4, 4) to (3, 3); \draw [->, gray] (5, 4) to (4, 3); \draw [->, gray] (6, 4) to (5, 3); \draw [->, gray] (7, 4) to (6, 3); \draw [->, gray] (8, 4) to (7, 3); \foreach \x in {1,...,8} { \node[fill=none, draw=none] at (\x, 4.5) {$x_\x$}; } \node[fill=none, draw=none] at (1, 2.5) {$x_1$}; \node[fill=none, draw=none] at (2, 2.5) {$x_3$}; \node[fill=none, draw=none] at (3, 2.5) {$x_4$}; \node[fill=none, draw=none] at (4, 2.5) {$x_5$}; \node[fill=none, draw=none] at (5, 2.5) {$x_6$}; \node[fill=none, draw=none] at (6, 2.5) {$x_7$}; \node[fill=none, draw=none] at (7, 2.5) {$x_8$}; \node[fill=none, draw=none] at (8, 2.5) {$x_2$}; \node[fill=none, draw=none] at (0, 3.5) {$\pi_3$}; \draw [->, gray] (1, 1) to (8, 0); \draw [->, gray] (2, 1) to (1, 0); \draw [->, gray] (3, 1) to (2, 0); \draw [->, gray] (4, 1) to (3, 0); \draw [->, gray] (5, 1) to (4, 0); \draw [->, gray] (6, 1) to (5, 0); \draw [->, gray] (7, 1) to (6, 0); \draw [->, gray] (8, 1) to (7, 0); \foreach \x in {1,...,8} { \node[fill=none, draw=none] at (\x, 1.5) {$x_\x$}; } \node[fill=none, draw=none] at (1, -0.5) {$x_2$}; \node[fill=none, draw=none] at (2, -0.5) 
{$x_3$}; \node[fill=none, draw=none] at (3, -0.5) {$x_4$}; \node[fill=none, draw=none] at (4, -0.5) {$x_5$}; \node[fill=none, draw=none] at (5, -0.5) {$x_6$}; \node[fill=none, draw=none] at (6, -0.5) {$x_7$}; \node[fill=none, draw=none] at (7, -0.5) {$x_8$}; \node[fill=none, draw=none] at (8, -0.5) {$x_1$}; \node[fill=none, draw=none] at (0, 0.5) {$\pi_4$}; \end{tikzpicture} \end{center} Within this visual representation, we label the following features \begin{table}[H] \centering \begin{tabular}{c|c|c} Diagram & Red & Blue \\ \hline \hline \begin{tikzpicture} \foreach \x in {1,...,6} { \draw (\x, 1) node {}; \draw (\x, 0) node {}; } \draw [->, blue] (2, 1) to (1, 0); \draw [->, blue] (4, 1) to (3, 0); \draw [->, blue] (5, 1) to (4, 0); \draw [->, blue] (6, 1) to (5, 0); \draw [->, red] (3, 1) to (6, 0); \draw [->, red] (1, 1) to (2, 0); \end{tikzpicture} & forward arrows & backward arrows \\ \hline \begin{tikzpicture} \foreach \x in {1,...,6} { \draw (\x, 1) node {}; \draw (\x, 0) node {}; } \draw [->, gray] (2, 1) to (1, 0); \draw [->, gray] (4, 1) to (3, 0); \draw [->, gray] (5, 1) to (4, 0); \draw [->, gray] (6, 1) to (5, 0); \draw [->, blue] (3, 1) to (6, 0); \draw [->, red] (1, 1) to (2, 0); \end{tikzpicture} & left arrow & right arrow \\ \end{tabular} \end{table} \begin{lemma} \label{num_arrows} The number of right arrows in a path of length $m$ is $m$, and the number of left arrows in a path of length $m$ is less than or equal to $m$. \end{lemma} \begin{proof} Each of the rules $\pi_i$ for $0 \leq i \leq k$ contains exactly one right arrow, and either one or zero left arrows. \end{proof} We represent the composition of rules as in the following diagram and call it a \textit{path}. The following diagram shows the path $\pi_3 \pi_0 \pi_2$ when $n = 8$. 
\begin{center} \begin{tikzpicture} \foreach \x in {1,...,8} { \draw (\x, 1) node {}; \draw (\x, 2) node {}; \draw (\x, 3) node {}; \draw (\x, 4) node {}; \draw (\x, 5) node {}; \draw (\x, 6) node {}; } \draw [->, gray] (3, 6) to (2, 5); \draw [->, gray] (4, 6) to (3, 5); \draw [->, gray] (5, 6) to (4, 5); \draw [->, gray] (6, 6) to (5, 5); \draw [->, gray] (7, 6) to (6, 5); \draw [->, gray] (8, 6) to (7, 5); \draw [->, gray] (2, 6) to (8, 5); \draw [->, gray] (1, 6) to (1, 5); \draw [->, gray] (2, 4) to (1, 3); \draw [->, gray] (3, 4) to (2, 3); \draw [->, gray] (4, 4) to (3, 3); \draw [->, gray] (6, 4) to (5, 3); \draw [->, gray] (7, 4) to (6, 3); \draw [->, gray] (8, 4) to (7, 3); \draw [->, gray] (5, 4) to (8, 3); \draw [->, gray] (1, 4) to (4, 3); \draw [->, gray] (2, 2) to (1, 1); \draw [->, gray] (4, 2) to (3, 1); \draw [->, gray] (5, 2) to (4, 1); \draw [->, gray] (6, 2) to (5, 1); \draw [->, gray] (7, 2) to (6, 1); \draw [->, gray] (8, 2) to (7, 1); \draw [->, gray] (3, 2) to (8, 1); \draw [->, gray] (1, 2) to (2, 1); \foreach \x in {1,...,8} { \node[fill=none, draw=none] at (\x, 6.5) {$x_\x$}; } \node[fill=none, draw=none] at (1, 4.5) {$x_1$}; \node[fill=none, draw=none] at (2, 4.5) {$x_3$}; \node[fill=none, draw=none] at (3, 4.5) {$x_4$}; \node[fill=none, draw=none] at (4, 4.5) {$x_5$}; \node[fill=none, draw=none] at (5, 4.5) {$x_6$}; \node[fill=none, draw=none] at (6, 4.5) {$x_7$}; \node[fill=none, draw=none] at (7, 4.5) {$x_8$}; \node[fill=none, draw=none] at (8, 4.5) {$x_2$}; \node[fill=none, draw=none] at (1, 2.5) {$x_3$}; \node[fill=none, draw=none] at (2, 2.5) {$x_4$}; \node[fill=none, draw=none] at (3, 2.5) {$x_5$}; \node[fill=none, draw=none] at (4, 2.5) {$x_1$}; \node[fill=none, draw=none] at (5, 2.5) {$x_7$}; \node[fill=none, draw=none] at (6, 2.5) {$x_8$}; \node[fill=none, draw=none] at (7, 2.5) {$x_2$}; \node[fill=none, draw=none] at (8, 2.5) {$x_6$}; \node[fill=none, draw=none] at (1, 0.5) {$x_4$}; \node[fill=none, draw=none] at (2, 0.5) 
{$x_3$}; \node[fill=none, draw=none] at (3, 0.5) {$x_1$}; \node[fill=none, draw=none] at (4, 0.5) {$x_7$}; \node[fill=none, draw=none] at (5, 0.5) {$x_8$}; \node[fill=none, draw=none] at (6, 0.5) {$x_2$}; \node[fill=none, draw=none] at (7, 0.5) {$x_6$}; \node[fill=none, draw=none] at (8, 0.5) {$x_5$}; \node[fill=none, draw=none] at (0, 5.5) {$\pi_3$}; \node[fill=none, draw=none] at (0, 3.5) {$\pi_0$}; \node[fill=none, draw=none] at (0, 1.5) {$\pi_2$}; \end{tikzpicture} \end{center} In subsequent diagrams, we may drop the explicit labelling of letters to present the same path in a more succinct manner as in the example below. \begin{center} \begin{tikzpicture} \foreach \x in {1,...,8} { \draw (\x, 1) node {}; \draw (\x, 2) node {}; \draw (\x, 3) node {}; \draw (\x, 4) node {}; } \draw [->, gray] (3, 4) to (2, 3); \draw [->, gray] (4, 4) to (3, 3); \draw [->, gray] (5, 4) to (4, 3); \draw [->, gray] (6, 4) to (5, 3); \draw [->, gray] (7, 4) to (6, 3); \draw [->, gray] (8, 4) to (7, 3); \draw [->, gray] (2, 4) to (8, 3); \draw [->, gray] (1, 4) to (1, 3); \draw [->, gray] (2, 3) to (1, 2); \draw [->, gray] (3, 3) to (2, 2); \draw [->, gray] (4, 3) to (3, 2); \draw [->, gray] (6, 3) to (5, 2); \draw [->, gray] (7, 3) to (6, 2); \draw [->, gray] (8, 3) to (7, 2); \draw [->, gray] (5, 3) to (8, 2); \draw [->, gray] (1, 3) to (4, 2); \draw [->, gray] (2, 2) to (1, 1); \draw [->, gray] (4, 2) to (3, 1); \draw [->, gray] (5, 2) to (4, 1); \draw [->, gray] (6, 2) to (5, 1); \draw [->, gray] (7, 2) to (6, 1); \draw [->, gray] (8, 2) to (7, 1); \draw [->, gray] (3, 2) to (8, 1); \draw [->, gray] (1, 2) to (2, 1); \node[fill=none, draw=none] at (0, 3.5) {$\pi_3$}; \node[fill=none, draw=none] at (0, 2.5) {$\pi_0$}; \node[fill=none, draw=none] at (0, 1.5) {$\pi_2$}; \end{tikzpicture} \end{center} We will refer to the \textit{trail} from position $i$ in a path to mean the concatenation of consecutive arrows in our diagram starting from the arrow at position $i$. 
In the following example we have highlighted the trail starting at position 2. \begin{center} \begin{tikzpicture} \foreach \x in {1,...,8} { \draw (\x, 1) node {}; \draw (\x, 2) node {}; \draw (\x, 3) node {}; \draw (\x, 4) node {}; } \draw [->, gray] (3, 4) to (2, 3); \draw [->, gray] (4, 4) to (3, 3); \draw [->, gray] (5, 4) to (4, 3); \draw [->, gray] (6, 4) to (5, 3); \draw [->, gray] (7, 4) to (6, 3); \draw [->, gray] (8, 4) to (7, 3); \draw [->, red] (2, 4) to (8, 3); \draw [->, gray] (1, 4) to (1, 3); \draw [->, gray] (2, 3) to (1, 2); \draw [->, gray] (3, 3) to (2, 2); \draw [->, gray] (4, 3) to (3, 2); \draw [->, gray] (6, 3) to (5, 2); \draw [->, gray] (7, 3) to (6, 2); \draw [->, red] (8, 3) to (7, 2); \draw [->, gray] (5, 3) to (8, 2); \draw [->, gray] (1, 3) to (4, 2); \draw [->, gray] (2, 2) to (1, 1); \draw [->, gray] (4, 2) to (3, 1); \draw [->, gray] (5, 2) to (4, 1); \draw [->, gray] (6, 2) to (5, 1); \draw [->, red] (7, 2) to (6, 1); \draw [->, gray] (8, 2) to (7, 1); \draw [->, gray] (3, 2) to (8, 1); \draw [->, gray] (1, 2) to (2, 1); \node[fill=none, draw=none] at (0, 3.5) {$\pi_3$}; \node[fill=none, draw=none] at (0, 2.5) {$\pi_0$}; \node[fill=none, draw=none] at (0, 1.5) {$\pi_2$}; \end{tikzpicture} \end{center} We will call a trail \textit{closed} if it begins and ends at the same position. Here we illustrate a closed trail in blue, and a non-closed trail in red. 
\begin{center} \begin{tikzpicture} \foreach \x in {1,...,8} { \draw (\x, 1) node {}; \draw (\x, 2) node {}; \draw (\x, 3) node {}; \draw (\x, 4) node {}; \draw (\x, 5) node {}; } \draw [->, gray] (1, 5) to (1, 4); \draw [->, gray] (2, 5) to (8, 4); \draw [->, blue] (3, 5) to (2, 4); \draw [->, gray] (4, 5) to (3, 4); \draw [->, gray] (5, 5) to (4, 4); \draw [->, red] (6, 5) to (5, 4); \draw [->, gray] (7, 5) to (6, 4); \draw [->, gray] (8, 5) to (7, 4); \draw [->, gray] (1, 4) to (8, 3); \draw [->, blue] (2, 4) to (1, 3); \draw [->, gray] (3, 4) to (2, 3); \draw [->, gray] (4, 4) to (3, 3); \draw [->, red] (5, 4) to (4, 3); \draw [->, gray] (6, 4) to (5, 3); \draw [->, gray] (7, 4) to (6, 3); \draw [->, gray] (8, 4) to (7, 3); \draw [->, blue] (1, 3) to (4, 2); \draw [->, gray] (2, 3) to (1, 2); \draw [->, gray] (3, 3) to (2, 2); \draw [->, red] (4, 3) to (3, 2); \draw [->, gray] (5, 3) to (8, 2); \draw [->, gray] (6, 3) to (5, 2); \draw [->, gray] (7, 3) to (6, 2); \draw [->, gray] (8, 3) to (7, 2); \draw [->, gray] (1, 2) to (2, 1); \draw [->, gray] (2, 2) to (1, 1); \draw [->, red] (3, 2) to (8, 1); \draw [->, blue] (4, 2) to (3, 1); \draw [->, gray] (5, 2) to (4, 1); \draw [->, gray] (6, 2) to (5, 1); \draw [->, gray] (7, 2) to (6, 1); \draw [->, gray] (8, 2) to (7, 1); \draw [-, dashed, gray] (3, 5) to (3, 1); \draw [-, dashed, gray] (6, 5) to (6, 1); \node[fill=none, draw=none] at (0, 4.5) {$\pi_3$}; \node[fill=none, draw=none] at (0, 3.5) {$\pi_4$}; \node[fill=none, draw=none] at (0, 2.5) {$\pi_0$}; \node[fill=none, draw=none] at (0, 1.5) {$\pi_2$}; \end{tikzpicture} \end{center} We will call a path \textit{closed} if the trails starting at each position in the path are closed. With the terminology established, we now briefly describe our motivation. In order to prove the G\'{o}mez graphs are subregular and alphabet stable, we aim to count paths of lengths $n$ and $n - 1$ from each neighbour of an arbitrary vertex $v \in \Gamma_n$ back to $v$. 
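The rules $\pi_i$ and these path counts lend themselves to direct computation. The following is a brute-force sketch (the function names are ours, and this enumeration is only an illustrative sanity check against the definition of $\Gamma_n$ above, not the method of the proof):

```python
from itertools import product

def rules(n):
    """The k + 1 rules pi_0, ..., pi_k of Gamma_n, where n = 2k or
    n = 2k + 1: split off a left half of size k - i and rotate each
    half left by one (pi_k has an empty left half)."""
    k = n // 2
    def make(i):
        m = k - i  # size of the left half under rule pi_i
        def pi(w):
            left, right = w[:m], w[m:]
            return left[1:] + left[:1] + right[1:] + right[:1]
        return pi
    return [make(i) for i in range(k + 1)]

def paths_back(n, length):
    """For each out-neighbour w of the word u = (0, ..., n - 1),
    brute-force count the paths of the given length from w back to u."""
    u = tuple(range(n))
    rs = rules(n)
    counts = {}
    for first in rs:
        w = first(u)
        total = 0
        for seq in product(rs, repeat=length):
            v = w
            for r in seq:
                v = r(v)
            if v == u:
                total += 1
        counts[w] = total
    return counts

# Applying pi_0 to 012345 (n = 6, so k = 3) rotates the halves 012 and 345:
print(rules(6)[0]((0, 1, 2, 3, 4, 5)))  # (1, 2, 0, 4, 5, 3)
```

For $n = 6$ this reproduces the adjacencies tabulated earlier, and the neighbour reached by $\pi_3$ has at least one path of length $5$ back to $u$, namely applying $\pi_3$ five more times.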
All of these paths in $\Gamma_n$ correspond to cycles of lengths $n$ and $n + 1$ in $\Gamma_n$, which correspond exactly to the closed paths we have just defined. Hence, we now aim to count closed paths of lengths $n$ and $n + 1$, considering what the first rule is on those paths. For a path $p = p_1 p_2 \dots p_n$ we shall call $p^i = p_i p_{i + 1} \dots p_n p_1 \dots p_{i - 1}$ the $i$\textsuperscript{th} \textit{rotation} of $p$ (so $p^1 = p$). \begin{lemma} \label{rotate} There is a bijection between the closed trails of a path $p$ and the closed trails of each of its rotations $p^i$. \end{lemma} \begin{proof} We demonstrate such a bijection between the closed trails of some path $p$ and its rotation $p^2$, noting that the result follows from a trivial induction. Let $p$ be a path. First we show that the trail at $i$ is closed in $p$ if, and only if, the trail at $p_1(i)$ is closed in $p^2$. \begin{align*} p(i) = i \quad &\Leftrightarrow \quad (p_2 p_3 \dots p_n)(p_1(i)) = i \\ &\Leftrightarrow \quad (p_2 p_3 \dots p_n p_1)(p_1(i)) = p_1(i) \quad \Leftrightarrow \quad p^2(p_1(i)) = p_1(i). \end{align*} Hence, as $p_1$ is a bijection, we have a bijection between the closed trails of $p$ and $p^2$. \end{proof} In light of Lemma~\ref{rotate}, we shall identify a closed trail starting at $i$ in a path $p$ with the closed trail starting at $(p_1 p_2 \dots p_{j - 1})(i)$ in $p^j$, referring to them as the same trail. Here we illustrate an example of a closed trail and its rotations.
\begin{center} \begin{tikzpicture} \foreach \x in {1,...,7} { \draw (\x, 1) node {}; \draw (\x, 2) node {}; \draw (\x, 3) node {}; \draw (\x, 4) node {}; \draw (\x, 5) node {}; \draw (\x, 6) node {}; \draw (\x, 7) node {}; \draw (\x, 8) node {}; \draw (\x, 9) node {}; \draw (\x, 10) node {}; \draw (\x, 11) node {}; \draw (\x, 12) node {}; \draw (\x, 13) node {}; \draw (\x, 14) node {}; \draw (\x, 15) node {}; \draw (\x, 16) node {}; \draw (\x, 17) node {}; \draw (\x, 18) node {}; \draw (\x, 19) node {}; \draw (\x, 20) node {}; } \draw [->, gray] (1, 20) to (7, 19); \draw [->, gray] (2, 20) to (1, 19); \draw [->, gray] (3, 20) to (2, 19); \draw [->, gray] (4, 20) to (3, 19); \draw [->, gray] (5, 20) to (4, 19); \draw [->, blue] (6, 20) to (5, 19); \draw [->, gray] (7, 20) to (6, 19); \draw [->, gray] (1, 19) to (2, 18); \draw [->, gray] (2, 19) to (1, 18); \draw [->, gray] (3, 19) to (7, 18); \draw [->, gray] (4, 19) to (3, 18); \draw [->, blue] (5, 19) to (4, 18); \draw [->, gray] (6, 19) to (5, 18); \draw [->, gray] (7, 19) to (6, 18); \draw [->, gray] (1, 18) to (3, 17); \draw [->, gray] (2, 18) to (1, 17); \draw [->, gray] (3, 18) to (2, 17); \draw [->, blue] (4, 18) to (7, 17); \draw [->, gray] (5, 18) to (4, 17); \draw [->, gray] (6, 18) to (5, 17); \draw [->, gray] (7, 18) to (6, 17); \draw [->, gray] (1, 17) to (1, 16); \draw [->, gray] (2, 17) to (7, 16); \draw [->, gray] (3, 17) to (2, 16); \draw [->, gray] (4, 17) to (3, 16); \draw [->, gray] (5, 17) to (4, 16); \draw [->, gray] (6, 17) to (5, 16); \draw [->, blue] (7, 17) to (6, 16); \node[fill=none, draw=none] at (0, 19.5) {$\pi_3$}; \node[fill=none, draw=none] at (0, 18.5) {$\pi_1$}; \node[fill=none, draw=none] at (0, 17.5) {$\pi_0$}; \node[fill=none, draw=none] at (0, 16.5) {$\pi_2$}; \node[fill=none, draw=none] at (-1, 18) {$p = p^1$}; \draw [-, dashed, gray] (6, 20) to (6, 16); \draw [->, gray] (1, 15) to (2, 14); \draw [->, gray] (2, 15) to (1, 14); \draw [->, gray] (3, 15) to (7, 14); \draw [->, 
gray] (4, 15) to (3, 14); \draw [->, blue] (5, 15) to (4, 14); \draw [->, gray] (6, 15) to (5, 14); \draw [->, gray] (7, 15) to (6, 14); \draw [->, gray] (1, 14) to (3, 13); \draw [->, gray] (2, 14) to (1, 13); \draw [->, gray] (3, 14) to (2, 13); \draw [->, blue] (4, 14) to (7, 13); \draw [->, gray] (5, 14) to (4, 13); \draw [->, gray] (6, 14) to (5, 13); \draw [->, gray] (7, 14) to (6, 13); \draw [->, gray] (1, 13) to (1, 12); \draw [->, gray] (2, 13) to (7, 12); \draw [->, gray] (3, 13) to (2, 12); \draw [->, gray] (4, 13) to (3, 12); \draw [->, gray] (5, 13) to (4, 12); \draw [->, gray] (6, 13) to (5, 12); \draw [->, blue] (7, 13) to (6, 12); \draw [->, gray] (1, 12) to (7, 11); \draw [->, gray] (2, 12) to (1, 11); \draw [->, gray] (3, 12) to (2, 11); \draw [->, gray] (4, 12) to (3, 11); \draw [->, gray] (5, 12) to (4, 11); \draw [->, blue] (6, 12) to (5, 11); \draw [->, gray] (7, 12) to (6, 11); \node[fill=none, draw=none] at (0, 14.5) {$\pi_1$}; \node[fill=none, draw=none] at (0, 13.5) {$\pi_0$}; \node[fill=none, draw=none] at (0, 12.5) {$\pi_2$}; \node[fill=none, draw=none] at (0, 11.5) {$\pi_3$}; \node[fill=none, draw=none] at (-1, 13) {$p^2$}; \draw [-, dashed, gray] (5, 15) to (5, 11); \draw [->, gray] (1, 10) to (3, 9); \draw [->, gray] (2, 10) to (1, 9); \draw [->, gray] (3, 10) to (2, 9); \draw [->, blue] (4, 10) to (7, 9); \draw [->, gray] (5, 10) to (4, 9); \draw [->, gray] (6, 10) to (5, 9); \draw [->, gray] (7, 10) to (6, 9); \draw [->, gray] (1, 9) to (1, 8); \draw [->, gray] (2, 9) to (7, 8); \draw [->, gray] (3, 9) to (2, 8); \draw [->, gray] (4, 9) to (3, 8); \draw [->, gray] (5, 9) to (4, 8); \draw [->, gray] (6, 9) to (5, 8); \draw [->, blue] (7, 9) to (6, 8); \draw [->, gray] (1, 8) to (7, 7); \draw [->, gray] (2, 8) to (1, 7); \draw [->, gray] (3, 8) to (2, 7); \draw [->, gray] (4, 8) to (3, 7); \draw [->, gray] (5, 8) to (4, 7); \draw [->, blue] (6, 8) to (5, 7); \draw [->, gray] (7, 8) to (6, 7); \draw [->, gray] (1, 7) to (2, 6); \draw 
[->, gray] (2, 7) to (1, 6); \draw [->, gray] (3, 7) to (7, 6); \draw [->, gray] (4, 7) to (3, 6); \draw [->, blue] (5, 7) to (4, 6); \draw [->, gray] (6, 7) to (5, 6); \draw [->, gray] (7, 7) to (6, 6); \node[fill=none, draw=none] at (0, 9.5) {$\pi_0$}; \node[fill=none, draw=none] at (0, 8.5) {$\pi_2$}; \node[fill=none, draw=none] at (0, 7.5) {$\pi_3$}; \node[fill=none, draw=none] at (0, 6.5) {$\pi_1$}; \node[fill=none, draw=none] at (-1, 8) {$p^3$}; \draw [-, dashed, gray] (4, 10) to (4, 6); \draw [->, gray] (1, 5) to (1, 4); \draw [->, gray] (2, 5) to (7, 4); \draw [->, gray] (3, 5) to (2, 4); \draw [->, gray] (4, 5) to (3, 4); \draw [->, gray] (5, 5) to (4, 4); \draw [->, gray] (6, 5) to (5, 4); \draw [->, blue] (7, 5) to (6, 4); \draw [->, gray] (1, 4) to (7, 3); \draw [->, gray] (2, 4) to (1, 3); \draw [->, gray] (3, 4) to (2, 3); \draw [->, gray] (4, 4) to (3, 3); \draw [->, gray] (5, 4) to (4, 3); \draw [->, blue] (6, 4) to (5, 3); \draw [->, gray] (7, 4) to (6, 3); \draw [->, gray] (1, 3) to (2, 2); \draw [->, gray] (2, 3) to (1, 2); \draw [->, gray] (3, 3) to (7, 2); \draw [->, gray] (4, 3) to (3, 2); \draw [->, blue] (5, 3) to (4, 2); \draw [->, gray] (6, 3) to (5, 2); \draw [->, gray] (7, 3) to (6, 2); \draw [->, gray] (1, 2) to (3, 1); \draw [->, gray] (2, 2) to (1, 1); \draw [->, gray] (3, 2) to (2, 1); \draw [->, blue] (4, 2) to (7, 1); \draw [->, gray] (5, 2) to (4, 1); \draw [->, gray] (6, 2) to (5, 1); \draw [->, gray] (7, 2) to (6, 1); \node[fill=none, draw=none] at (0, 4.5) {$\pi_2$}; \node[fill=none, draw=none] at (0, 3.5) {$\pi_3$}; \node[fill=none, draw=none] at (0, 2.5) {$\pi_1$}; \node[fill=none, draw=none] at (0, 1.5) {$\pi_0$}; \node[fill=none, draw=none] at (-1, 3) {$p^4$}; \draw [-, dashed, gray] (7, 5) to (7, 1); \end{tikzpicture} \end{center} \begin{corollary} A path $p$ is closed if and only if each of its rotations $p^i$ is closed. 
\end{corollary} \begin{lemma} \label{at_least_two} Any closed trail in a path of length $n + 1$ must contain at least two forward arrows. \end{lemma} \begin{proof} For any closed trail we see that the distance traversed by backwards arrows is equal to the distance traversed by forwards arrows. If there are no forwards arrows, this obviously cannot happen. If there is only one forwards arrow, then there must be $n$ backwards arrows, hence the forwards arrow must correspond to travelling forwards $n$ places. However, the furthest that can be travelled forwards occurs in rule $\pi_k$, which travels forwards by $n - 1$ spaces. Hence, this cannot occur, and so any closed trail must contain at least two forward arrows. \end{proof} \begin{lemma} \label{left_arrows_suck} Any closed trail of length $n + 1$ whose only forward arrows are left arrows contains at least three left arrows. \end{lemma} \begin{proof} The furthest that can be travelled forwards by a left arrow occurs in the rule $\pi_0$, in which we travel forwards $k - 1$ spaces. If we have a closed trail which contains exactly two left arrows, then it travels a distance of $n - 1$ with backwards arrows. Hence, we must have $n - 1 \leq 2(k - 1) = 2k - 2 \leq n - 2$, a contradiction. Combined with Lemma~\ref{at_least_two}, this shows that such a trail contains at least three left arrows. \end{proof} \begin{lemma} \label{right_arrows_rule} Any closed trail of length $n + 1$ whose only forward arrows are right arrows contains exactly two right arrows. \end{lemma} \begin{proof} Consider a closed trail whose only forward arrows are right arrows. As the trail is closed, the sum of the forward arrows equals the sum of the backward arrows. Suppose there are three or more right arrows in the trail; then the sum of the forward arrows is at least $3(k - 1)$ and the sum of the backwards arrows is at most $n - 2 < 3(k - 1)$. Hence there must be fewer than three right arrows, and so by Lemma~\ref{at_least_two} there are two.
\end{proof} \begin{lemma} \label{at_most_three} In a closed path of length $n + 1$ there are at most three trails containing two right arrows. \end{lemma} \begin{proof} Suppose we have a closed path of length $n + 1$ which contains at least four trails containing two right arrows. By Lemma~\ref{num_arrows} we now have $(n + 1) - 8 = n - 7$ unaccounted for right arrows, and at most $n + 1$ unaccounted for left arrows. Lemma~\ref{at_least_two} tells us that we need to use at least two forward arrows per remaining trail in the path. If each left arrow is in a trail with a right arrow, then such a trail requires only two arrows, otherwise it requires three or more. Hence, in order to minimise the number of required arrows, we may assume as many left arrows as possible are paired with right arrows. In this manner, we assume all $n - 7$ remaining right arrows are paired with left arrows. This leaves at most $(n + 1) - (n - 7) = 8$ unaccounted for left arrows. We have now accounted for $(n - 7) + 4 = n - 3$ of $n$ trails, leaving three unaccounted for. Lemma~\ref{left_arrows_suck} tells us that each of the three remaining trails requires at least three left arrows, but we only have eight unaccounted for left arrows, a contradiction. \end{proof} \begin{lemma} \label{right_arrow_first} In a path of length $n + 1$, if the trail starting with the right arrow of $p_1$ contains no further right arrows, it contains the left arrow of $p_{n + 1}$. \end{lemma} \begin{proof} After $p_1$, the trail is at position $n$. As the trail contains no further right arrows, and every left arrow begins at position 1, each $p_{1 + i}$ maps the trail from $n - i$ to $n - i - 1$ via a backwards arrow, provided that $n - i > 1$. Hence, the trail reaches position 1 after $p_n$. The arrow of $p_{n + 1}$ at position 1 cannot be a right arrow by assumption, so the trail contains the left arrow of $p_{n + 1}$.
\end{proof} If in a path $p$ of length $m$ there is some $i$ and $j$ such that $p_i = \pi_j$ and $p_{i + 1} = \pi_{j + 1}$, then we call the left arrow of $p_i$ and the right arrow of $p_{i + 1}$ \textit{paired} and refer to them together as a \textit{pair}. As a special case, we allow $i = m$ and use $p_1$ instead of $p_{i + 1}$. \begin{lemma} \label{right_left_pair} If a closed trail in a path of length $n + 1$ contains both right and left arrows, then it contains one pair and no other forward arrows. \end{lemma} \begin{proof} Suppose $p$ is a path with such a trail. Let $q$ be a rotation of $p$ which puts a right arrow of the trail in position $q_1$, so that after $q_1$ the trail is at position $n$. As the trail contains a left arrow, and every left arrow begins at position 1, at some point we have some $i$ such that $(q_2 q_3 \dots q_i)(n) = 1$. As the most we can travel backwards in each $q_j$ is one space, the soonest this can happen is $i = n$, which occurs if and only if each of $q_2, \dots, q_n$ contributes a backwards arrow to the trail. If this does not happen, then the trail never reaches position 1, and so no $q_j$ can contribute a left arrow to the trail. Hence, this is the only possibility. If the position the trail starts at is $k - \alpha + 1$ then we know $q_1(k - \alpha + 1) = n$, so $q_1 = \pi_\alpha$, and for the trail to close we need $q_{n + 1}(1) = k - \alpha + 1 = k - (\alpha - 1)$, hence $q_{n + 1} = \pi_{\alpha - 1}$. Hence the trail contains one pair, formed by the left arrow of $q_{n + 1}$ and the right arrow of $q_1$, and all other arrows are backwards arrows. \end{proof} \begin{lemma} \label{left_right_ident} If all right arrows in a path $p$ of length $n + 1$ are either in closed trails or paired, then all pairs in $p$ are in distinct closed trails. \end{lemma} \begin{proof} Consider an arbitrary right arrow in $p$, and the rotation $q$ of $p$ which brings this right arrow to $q_1$. If the trail from this right arrow enters a forwards arrow before $q_{n + 1}$, the forwards arrow must be an unpaired right arrow, and so the trail is closed. Otherwise, by Lemma~\ref{right_arrow_first} the trail enters a left arrow at $q_{n + 1}$, which must be the pair of the right arrow of $q_1$, and the trail is closed.
\end{proof} We now note from what we have shown that if a closed trail contains a right arrow then either it contains exactly two right arrows or it contains one pair. Hence, we are in a position to easily deal with closed trails containing at least one right arrow. In order to settle the case of closed trails composed entirely of left arrows, we now require a further definition to continue our discussion. We shall say that within a permutation in a path, a trail is on the \textit{left side} if it is in a cycle containing a left arrow, and the \textit{right side} otherwise. We shall say that a trail changes sides, from right to left or from left to right, between two consecutive permutations in the obvious manner. Below is a diagram to clarify the terminology, with arrows on the left drawn in red and arrows on the right drawn in blue. \begin{center} \begin{tikzpicture} \foreach \x in {1,...,7} { \draw (\x, 1) node {}; \draw (\x, 2) node {}; \draw (\x, 3) node {}; \draw (\x, 4) node {}; \draw (\x, 5) node {}; } \draw [->, blue] (1, 5) to (7, 4); \draw [->, blue] (2, 5) to (1, 4); \draw [->, blue] (3, 5) to (2, 4); \draw [->, blue] (4, 5) to (3, 4); \draw [->, blue] (5, 5) to (4, 4); \draw [->, blue] (6, 5) to (5, 4); \draw [->, blue] (7, 5) to (6, 4); \draw [->, red] (1, 4) to (2, 3); \draw [->, red] (2, 4) to (1, 3); \draw [->, blue] (3, 4) to (7, 3); \draw [->, blue] (4, 4) to (3, 3); \draw [->, blue] (5, 4) to (4, 3); \draw [->, blue] (6, 4) to (5, 3); \draw [->, blue] (7, 4) to (6, 3); \draw [->, red] (1, 3) to (3, 2); \draw [->, red] (2, 3) to (1, 2); \draw [->, red] (3, 3) to (2, 2); \draw [->, blue] (4, 3) to (7, 2); \draw [->, blue] (5, 3) to (4, 2); \draw [->, blue] (6, 3) to (5, 2); \draw [->, blue] (7, 3) to (6, 2); \draw [->, red] (1, 2) to (1, 1); \draw [->, blue] (2, 2) to (7, 1); \draw [->, blue] (3, 2) to (2, 1); \draw [->, blue] (4, 2) to (3, 1); \draw [->, blue] (5, 2) to (4, 1); \draw [->, blue] (6, 2) to (5, 1); \draw [->, blue] (7, 2) to
(6, 1); \node[fill=none, draw=none] at (0, 4.5) {$\pi_3$}; \node[fill=none, draw=none] at (0, 3.5) {$\pi_1$}; \node[fill=none, draw=none] at (0, 2.5) {$\pi_0$}; \node[fill=none, draw=none] at (0, 1.5) {$\pi_2$}; \node[fill=none, draw=none] at (-1, 3) {$p$}; \end{tikzpicture} \end{center} For the remainder of this section, we will consider paths with the property that, letting $p$ be our path, $p_i = \pi_j$ and $p_{i + 1} = \pi_k$ implies $j \geq k - 1$ (note that this restriction applies to the last entry of $p$, considering $p_1$ instead of $p_{i + 1}$). \begin{lemma} If $p$ is closed, any trail which changes sides contains a pair. \end{lemma} \begin{proof} For any closed trail, the number of times the trail changes from left to right must equal the number of times the trail changes from right to left. Hence, if a trail changes sides at all we know at some point the trail must change sides from left to right. It is only possible for a trail to change sides from left to right between a pair of consecutive rules of the form $p_i = \pi_j$ and $p_{i + 1} = \pi_k$ where $k > j$. Given our assumption on our path $p$, we see this happens only if $k = j + 1$, in which case there is only one trail which changes sides from left to right, corresponding to the left arrow of $p_i$ connecting to the right arrow of $p_{i + 1}$. Hence any trail which changes sides must contain a pair. \end{proof} \begin{corollary} \label{always_left} If $p$ is closed, any trail which contains only left arrows is always on the left side. \end{corollary} Now we shall define the \textit{closure} of a path $p$ to be $p$ concatenated with itself the smallest number of times necessary to form a closed path. As $p$ composes to a permutation, and every permutation has finite order, we know that the closure exists. \begin{lemma} If $p$ is a path in which every trail with a right arrow is closed, all trails with only left arrows are always on the left side.
\end{lemma} \begin{proof} Letting $q$ be the closure of $p$ we may use Corollary~\ref{always_left} to see that all trails of $q$ with only left arrows are always on the left side. The closed trails of $q$ which contain only left arrows correspond to the trails of $p$ which contain only left arrows. This is because any trail containing a right arrow in $q$ corresponds to one of the closed trails of $p$. Now, we note that the property of being on the left or right side in a path only depends on the position of the trail at each $i$\textsuperscript{th} rule in the path, and these positions are the same in $p$ and $q$. \end{proof} \begin{lemma} If $p$ is a path with all trails containing right arrows closed, and the trails starting at positions $a_1, a_2, \dots, a_k$ are all the trails containing only left arrows, and there are $m$ left arrows in total in these trails, then $p$ maps $a_i$ to $a_{i - m}$ (subscripts taken modulo $k$). \end{lemma} \begin{proof} This is provable by a trivial induction. Firstly, as all trails other than those starting at each $a_i$ are closed, we see that $p$ maps each $a_i$ to some $a_j$. Now, as all the trails are always on the left side, only two things may happen at each rule. Either all trails are mapped backward by backward arrows, and thus their left-right ordering is preserved, or the trail on the far left is mapped by a left arrow and becomes the trail on the far right. The latter case happens exactly $m$ times. Hence, the left-right ordering of the trails starting at $a_1, a_2, \dots, a_k$ is cycled $m$ times. \end{proof} \begin{corollary} \label{theyre_a_transposition} If a path $p$ has all trails with right arrows closed, and contains exactly two trails whose only forward arrows are left arrows, and those trails together contain an even number of left arrows, then the path $p$ is closed.
\end{corollary} \begin{proof} Letting the trails starting at positions $a_1$ and $a_2$ be those containing only left forward arrows, we may apply the previous lemma to show that $a_1$ is mapped to $a_{1 - m}$ and $a_2$ is mapped to $a_{2 - m}$; as the number $m$ of left arrows is even and subscripts are taken modulo 2, these are $a_1$ and $a_2$ respectively, hence the trails at $a_1$ and $a_2$ are closed. \end{proof} \section{The Odd Case} We now begin to count the closed paths of length $n + 1$ in the case where $n = 2k + 1$. Throughout this section, we consider the path $p = p_1 p_2 \dots p_{n + 1}$ which is assumed to be closed. \begin{lemma} \label{mirror_zeroes} If $p_1 = \pi_0$, then $p_{k + 1} = \pi_0$, and the trail beginning at $k + 1$ contains two right arrows. \end{lemma} \begin{proof} We consider the trail starting with the right arrow of $p_1$. As this trail is closed, by Lemma~\ref{right_arrows_rule} and Lemma~\ref{right_left_pair} we see that it contains one more forward arrow which is either a right or a left arrow. Hence, this trail maps backwards $n - 1$ spaces, the right arrow at $p_1$ maps forward $k$ spaces, and therefore the other forwards arrow maps forward $(n - 1) - k = k$ spaces. The most any left arrow maps forward is $k - 1$ spaces, hence the other arrow in the trail is a right arrow. The only rule with a right arrow which maps forward $k$ spaces is $\pi_0$. To see $p_{k + 1} = \pi_0$, we simply follow the backwards arrows after $p_1$. \end{proof} \begin{corollary} \label{at_most_six} There are at most six occurrences of the rule $\pi_0$ in the path $p$. \end{corollary} \begin{proof} This is the result of the combination of Lemma~\ref{mirror_zeroes} and Lemma~\ref{at_most_three}. \end{proof} \begin{lemma} \label{decreasing} If $p_1 = \pi_i$ for some $i \geq 1$, then $p_{n + 1} = \pi_{i - 1}$. \end{lemma} \begin{proof} Consider the trail starting with the right arrow of $p_1$. If this trail contains a left arrow we apply Lemma~\ref{right_left_pair} and are done.
Otherwise, Lemma~\ref{right_arrows_rule} shows that there is exactly one other right arrow in the trail. The distance mapped backward in the trail is $n - 1$, and the distance mapped forward by the right arrow in $p_1$ is $k + i$, hence the distance mapped forward by the other right arrow is $(n - 1) - (k + i) = k - i$, but all right arrows map forward at least $k$ spaces. \end{proof} \begin{corollary} \label{useful_decreasing} If $p_i = \pi_j$ for $j \neq 0$, then $p_{i - 1} = \pi_{j - 1}$. \end{corollary} \begin{proposition} \label{theyre_tau_sequences} The path $p$ is closed if, and only if, \[ p_i = \begin{cases} \pi_{a_i} & \text{for $1 \leq i \leq k + 1$}, \\ \pi_{a_j} & \text{for $i = j + (k + 1)$, $1 \leq j \leq k + 1$}, \end{cases} \] where $a_1, a_2, \dots, a_{k + 1}$ is a $\tau$-sequence. \end{proposition} \begin{proof} To show the implication, we first note that the combination of Lemma~\ref{mirror_zeroes} and Corollary~\ref{useful_decreasing} shows that $p_i = p_{i + (k + 1)}$. Hence, we may now define the sequence $a_1 a_2 \dots a_{k + 1}$ such that $p_i = \pi_{a_i}$, and therefore $p_{i + (k + 1)} = \pi_{a_i}$. By Corollary~\ref{at_most_six}, we see that 0 may occur at most three times in $\langle a_i \rangle$. Finally, again from applying Corollary~\ref{useful_decreasing} and considering rotations of $p$, we see that $\langle a_i \rangle$ is a $\tau$-sequence. Now, to show the reverse implication, let $\langle a_i \rangle$ be an arbitrary $\tau$-sequence and define the corresponding path $p$. As shown in Lemma~\ref{mirror_zeroes}, if some $a_i = 0$, this corresponds to a closed trail with the two right arrows in $p_i$ and $p_{i + (k + 1)}$. Otherwise, if $a_i \neq 0$, then $a_{i - 1} = a_i - 1$, and the right arrow of $p_i$ is paired with the left arrow of $p_{i - 1}$. Therefore, all right arrows of $p$ are either in closed trails with right arrows only or are paired, hence by Lemma~\ref{left_right_ident} each pair corresponds to a distinct closed trail.
Again by Lemma~\ref{mirror_zeroes}, each pair of zeroes introduces exactly one closed trail. In a $\tau$-sequence, we will have either one, two or three zeroes, and hence one, two or three pairs of zeroes in our path. We now consider each case separately, and count the number of closed trails containing right arrows. \begin{itemize} \item If we have one pair of zeroes in our path, these account for one closed trail. The remaining $(n + 1) - 2 = n - 1$ right arrows in our path each correspond to a pair in a distinct closed trail. Hence we have found a total of $n$ closed trails, and so $p$ is closed. \item If we have two pairs of zeroes in our path, these account for two closed trails. The remaining $(n + 1) - 4 = n - 3$ right arrows in our path each correspond to a pair in a distinct closed trail. Hence, we have found $n - 1$ closed trails in our path. However, this means the remaining trail has to start and end in the same position, and so our path must be closed. \item If we have three pairs of zeroes in our path, these account for three closed trails. The remaining $(n + 1) - 6 = n - 5$ right arrows in our path correspond to distinct pairs in our trail. Hence, we have found $n - 2$ closed trails in our path. The remaining trails in our path use only left arrows. As the $\tau$-sequence in question has more than one zero, all other numbers in the sequence are less than $k$, hence each corresponding rule in our path contains both a right and a left arrow, so there are six left arrows in these two trails, and we may apply Corollary~\ref{theyre_a_transposition} to show all trails in the path are closed. \end{itemize} \end{proof} \begin{theorem} For $n = 2k + 1$, the G\'{o}mez graphs $G_m \in \mywgp{\Pi_n}$ satisfy $\myaut{G_m} \cong S_m$. \end{theorem} \begin{proof} We have shown that the paths of length $n + 1$ which lead from a vertex back to itself correspond to $\tau$-sequences.
Hence, for the vertex $e \in \myvert{\Gamma_n}$ the number of paths from each $\pi_i \in \Pi_n$ to $e$ of length $n$ is distinct for each $i$. This shows that the G\'{o}mez graphs are subregular. Finally, we notice there is one path of length $n$ from $\pi_k$ to $e$, but as $\pi_k$ is simply an $n$-cycle we have $\pi_k^n = e$, hence there is also a path of length $n - 1$ from $\pi_k$ to $e$. Hence, each $\pi_i$ satisfies either $\mydist{\pi_i}{e} < n$ or there is more than one path of length $n$ from $\pi_i$ to $e$. Hence the G\'{o}mez graphs are alphabet stable. \end{proof} \section{The Even Case} In this section, we deal with the case where $n = 2k$ and $k > 1$. We will consider an arbitrary closed path $p = p_1 p_2 \dots p_{n + 1}$. \begin{lemma} \label{even_zero_structure} If $p_1 = \pi_0$, then $p_{k + 2} = \pi_1$, and the closed trail starting at $k + 1$ contains two right arrows. \end{lemma} \begin{proof} The closed trail starting at $k + 1$ is the trail starting with the right arrow of $p_1$. Hence, if we assume there are no further right arrows in this trail, we may apply Lemma~\ref{right_arrow_first} to show that $p_{n + 1}$ contains a left arrow mapping $1$ to $k + 1$. However, no such left arrow exists. Therefore we may use Lemma~\ref{right_arrows_rule} to show the trail contains two right arrows. The second right arrow must have size $(n - 1) - (k - 1) = k$ so the number of spaces mapped forwards equals those mapped backwards. Hence, the other right arrow is from rule $\pi_1$, and this is in the trail after being mapped backwards $k + 1$ positions after $\pi_0$, hence occurs in position $p_{k + 2}$. \end{proof} \begin{corollary} The rule $\pi_0$ occurs at most three times in $p$. \end{corollary} \begin{proof} This is an immediate consequence of Lemma~\ref{even_zero_structure} and Lemma~\ref{left_arrows_suck}. \end{proof} \begin{lemma} If $p_1 = \pi_1$, then either $p_{k + 1} = \pi_0$ or $p_{n + 1} = \pi_0$.
\end{lemma} \begin{proof} Consider the trail starting with the right arrow of $p_1$. If this trail contains another right arrow, then Lemma~\ref{right_arrows_rule} shows it contains exactly two right arrows, and following previous logic the other right arrow is $\pi_0$ at position $p_{k + 1}$. Otherwise we may apply Lemma~\ref{right_left_pair} to get that $p_{n + 1} = \pi_0$. \end{proof} \begin{lemma} \label{even_decreasing} If $p_1 = \pi_i$ for some $i \geq 2$, then $p_{n + 1} = \pi_{i - 1}$. \end{lemma} \begin{proof} Here we follow the same reasoning as Lemma~\ref{decreasing}. \end{proof} \begin{lemma} \label{non_exist_len_n} If $p$ is a closed path of length $n$, no rule $\pi_i$ where $0 < i < k$ may occur in $p$. \end{lemma} \begin{proof} Suppose $p$ contains some $\pi_i$ with $0 < i < k$. Rotate $p$ as necessary so that $p_1 = \pi_i$. Consider the trail starting with the right arrow of $\pi_i$. The right arrow of $\pi_i$ maps $(k - 1) + i$ spaces forward. If this trail contains another right arrow, the total distance mapped forward in the trail is at least $2(k - 1) + i > n - 2$. The total distance mapped backward in the trail, however, is at most $n - 2$. Therefore, this trail must contain a left arrow. This means that there is some $j$ such that $(p_2 p_3 \dots p_j)(n) = 1$. The smallest $j$ this can occur for is $j = n$, but this leaves no further rule to contain a left arrow in the trail. Therefore, no such path can exist. \end{proof} \begin{proposition} The path $p$ is closed if, and only if, $p_i = \pi_{a_i}$ for some $\sigma$-sequence $\langle a_i \rangle$. \end{proposition} \begin{proof} We follow the same approach as the proof of Proposition~\ref{theyre_tau_sequences}. \end{proof} \begin{theorem} For $n = 2k$, the G\'{o}mez graphs $G_m \in \mywgp{\Pi_n}$ satisfy $\myaut{G_m} \cong S_m$. \end{theorem} \begin{proof} We have shown that the paths of length $n + 1$ from each vertex in $\Gamma_n$ back to itself correspond to $\sigma$-sequences.
Hence, for the vertex $e \in \myvert{\Gamma_n}$ the number of paths from each $\pi_i \in \Pi_n$ to $e$ of length $n$ is distinct for each $i$, with the possible exception of $\pi_0$. Further, we always have at least two paths of length $n$ from each $\pi_i$ to $e$, hence we have alphabet stability. Now to show subregularity, we consider paths of length $n - 1$ from each $\pi_i$ to $e$. Lemma~\ref{non_exist_len_n} shows that only $\pi_0$ and $\pi_k$ may have a path of length $n - 1$ to $e$, and, as $\pi_0$ is two disjoint $k$-cycles, $\pi_0^n = e$ gives us a path of length $n - 1$ from $\pi_0$ to $e$. Therefore, if $\Gamma_n$ is not subregular there is an automorphism which exchanges $\pi_0$ and $\pi_k$. However, there are only two paths of length $n$ from $\pi_k$ to $e$, and at least three paths of length $n$ from $\pi_0$ to $e$. Hence we have established subregularity. \end{proof} \section{Problems in Other Cases} The above argument may seem somewhat unsatisfying as it begins with the observation of some reasonably general facts of G\'omez graphs but then only goes on to make arguments of path counting in the cases $\mydg{k}{k}$ and $\mydg{k}{k+1}$. However, we will now give an example to show that, though perhaps the counting argument could be generalised to count all similar paths in G\'omez graphs, this would not serve our purpose of classifying when G\'omez graphs have full automorphism group $S_m$. We include here a computed table showing numbers of closed paths of length $k + 1$ starting with each $\pi_i$, written in the order $\pi_0, \pi_1$, etc. Note that the cases we have addressed are $k = k'$ and $k = k' + 1$.
\begin{table}[H] \centering \begin{tabular}{cr|c|c|c|c|c} & & \multicolumn{5}{c}{$k$} \\ & & 2 & 3 & 4 & 5 & 6 \\ \hline \multirow{3}{*}{$k'$}& 1 & 2,2 & 4,5,5 & 8,11,15,11 & 16,23,37,37,23 & 32,47,83,100,83,47 \\ &3 & - & 1,2 & 2,5,3 & 4,12,12,12 & 8,27,35,44,33 \\ &5 & - & - & - & 1,2,4 & 2,5,13,8 \\ \end{tabular} \end{table} \noindent From this table we see that the first difficulty we encounter with the case $k = k' + 2$ occurs for $k' = 3$, where there are twelve closed paths of length $6$ starting from each of $\pi_1, \pi_2$ and $\pi_3$. This problem cannot be resolved with the methods used to address the cases $k = k'$ and $k = k' + 1$. In addition, this table highlights the interesting case of $k' = 1$. In this case, we see the number of closed paths of length $k + 1$ starting with each $\pi_i$ is equal to those starting from $\pi_{k - i}$ (taking $\pi_k = \pi_0$ in the special case); for example, the $k = 6$ entry reads $32, 47, 83, 100, 83, 47$. We shall provide an informal proof of this symmetry to demonstrate that this difficulty cannot possibly be overcome, so this style of proof cannot work for the case $k' = 1$. We consider the case $k = 8$ and $k' = 1$.
In this case, we have the rules $\pi_0, \pi_1, \pi_2, \pi_3, \pi_4, \pi_5, \pi_6$ and $\pi_7$ as follows: \begin{center} \begin{tikzpicture}[scale=0.6] \foreach \x in {1,...,8} { \draw (\x, 0) node {}; \draw (\x, 1) node {}; \draw (\x, 2) node {}; \draw (\x, 3) node {}; \draw (\x, 4) node {}; \draw (\x, 5) node {}; \draw (\x, 6) node {}; \draw (\x, 7) node {}; } \foreach \x in {11,...,18} { \draw (\x, 0) node {}; \draw (\x, 1) node {}; \draw (\x, 2) node {}; \draw (\x, 3) node {}; \draw (\x, 4) node {}; \draw (\x, 5) node {}; \draw (\x, 6) node {}; \draw (\x, 7) node {}; } \draw [->, gray] (1, 7) to (4, 6); \draw [->, gray] (2, 7) to (1, 6); \draw [->, gray] (3, 7) to (2, 6); \draw [->, gray] (4, 7) to (3, 6); \draw [->, gray] (5, 7) to (8, 6); \draw [->, gray] (6, 7) to (5, 6); \draw [->, gray] (7, 7) to (6, 6); \draw [->, gray] (8, 7) to (7, 6); \node[fill=none, draw=none] at (0, 6.5) {$\pi_0$}; \draw [->, gray] (1, 5) to (3, 4); \draw [->, gray] (2, 5) to (1, 4); \draw [->, gray] (3, 5) to (2, 4); \draw [->, gray] (4, 5) to (8, 4); \draw [->, gray] (5, 5) to (4, 4); \draw [->, gray] (6, 5) to (5, 4); \draw [->, gray] (7, 5) to (6, 4); \draw [->, gray] (8, 5) to (7, 4); \node[fill=none, draw=none] at (0, 4.5) {$\pi_1$}; \draw [->, gray] (1, 3) to (2, 2); \draw [->, gray] (2, 3) to (1, 2); \draw [->, gray] (3, 3) to (8, 2); \draw [->, gray] (4, 3) to (3, 2); \draw [->, gray] (5, 3) to (4, 2); \draw [->, gray] (6, 3) to (5, 2); \draw [->, gray] (7, 3) to (6, 2); \draw [->, gray] (8, 3) to (7, 2); \node[fill=none, draw=none] at (0, 2.5) {$\pi_2$}; \draw [->, gray] (1, 1) to (1, 0); \draw [->, gray] (2, 1) to (8, 0); \draw [->, gray] (3, 1) to (2, 0); \draw [->, gray] (4, 1) to (3, 0); \draw [->, gray] (5, 1) to (4, 0); \draw [->, gray] (6, 1) to (5, 0); \draw [->, gray] (7, 1) to (6, 0); \draw [->, gray] (8, 1) to (7, 0); \node[fill=none, draw=none] at (0, 0.5) {$\pi_3$}; \draw [->, gray] (11, 7) to (18, 6); \draw [->, gray] (12, 7) to (11, 6); \draw [->, gray] (13, 
7) to (12, 6); \draw [->, gray] (14, 7) to (13, 6); \draw [->, gray] (15, 7) to (14, 6); \draw [->, gray] (16, 7) to (15, 6); \draw [->, gray] (17, 7) to (16, 6); \draw [->, gray] (18, 7) to (17, 6); \node[fill=none, draw=none] at (10, 6.5) {$\pi_4$}; \draw [->, gray] (11, 5) to (15, 4); \draw [->, gray] (12, 5) to (11, 4); \draw [->, gray] (13, 5) to (12, 4); \draw [->, gray] (14, 5) to (13, 4); \draw [->, gray] (15, 5) to (14, 4); \draw [->, gray] (16, 5) to (18, 4); \draw [->, gray] (17, 5) to (16, 4); \draw [->, gray] (18, 5) to (17, 4); \node[fill=none, draw=none] at (10, 4.5) {$\pi_7$}; \draw [->, gray] (11, 3) to (16, 2); \draw [->, gray] (12, 3) to (11, 2); \draw [->, gray] (13, 3) to (12, 2); \draw [->, gray] (14, 3) to (13, 2); \draw [->, gray] (15, 3) to (14, 2); \draw [->, gray] (16, 3) to (15, 2); \draw [->, gray] (17, 3) to (18, 2); \draw [->, gray] (18, 3) to (17, 2); \node[fill=none, draw=none] at (10, 2.5) {$\pi_6$}; \draw [->, gray] (11, 1) to (17, 0); \draw [->, gray] (12, 1) to (11, 0); \draw [->, gray] (13, 1) to (12, 0); \draw [->, gray] (14, 1) to (13, 0); \draw [->, gray] (15, 1) to (14, 0); \draw [->, gray] (16, 1) to (15, 0); \draw [->, gray] (17, 1) to (16, 0); \draw [->, gray] (18, 1) to (18, 0); \node[fill=none, draw=none] at (10, 0.5) {$\pi_5$}; \end{tikzpicture} \end{center} \noindent We consider a diagram of the closed path $\pi_2 \pi_3 \pi_7 \pi_7 \pi_0 \pi_1 \pi_2 \pi_3 \pi_2$ and note that if we rotate it by 180 degrees and reverse the arrows we get another closed path. 
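Though it forms no part of the formal argument, this claim, and the path count symmetry noted above, are easily machine-checked. In the sketch below, the eight rules are transcribed from the diagrams above as $0$-indexed permutations, and ``rotate by 180 degrees and reverse the arrows'' is modelled, under our reading of the diagrams, as reversing the sequence of rules and replacing each rule by the conjugate of its inverse under $x \mapsto (n + 1) - x$.

```python
from collections import Counter

n = 8  # the case k = 8, k' = 1

# The rules pi_0, ..., pi_7, transcribed (0-indexed) from the diagrams above;
# pi[i][x] is the image of position x under rule pi_i.
pi = [
    (3, 0, 1, 2, 7, 4, 5, 6),  # pi_0 = (1 4 3 2)(5 8 7 6)
    (2, 0, 1, 7, 3, 4, 5, 6),  # pi_1 = (1 3 2)(4 8 7 6 5)
    (1, 0, 7, 2, 3, 4, 5, 6),  # pi_2 = (1 2)(3 8 7 6 5 4)
    (0, 7, 1, 2, 3, 4, 5, 6),  # pi_3 = (2 8 7 6 5 4 3)
    (7, 0, 1, 2, 3, 4, 5, 6),  # pi_4 = (1 8 7 6 5 4 3 2)
    (6, 0, 1, 2, 3, 4, 5, 7),  # pi_5 = (1 7 6 5 4 3 2)
    (5, 0, 1, 2, 3, 4, 7, 6),  # pi_6 = (1 6 5 4 3 2)(7 8)
    (4, 0, 1, 2, 3, 7, 5, 6),  # pi_7 = (1 5 4 3 2)(6 8 7)
]
identity = tuple(range(n))

def compose(path):
    """The permutation obtained by applying the rules of `path` left to right."""
    out = []
    for start in range(n):
        x = start
        for g in path:
            x = g[x]
        out.append(x)
    return tuple(out)

def inverse(g):
    inv = [0] * n
    for x, y in enumerate(g):
        inv[y] = x
    return tuple(inv)

def transform(g):
    """Reverse the arrows of a rule (invert it) and rotate the diagram by
    180 degrees (conjugate by x -> (n - 1) - x in 0-indexed positions)."""
    inv = inverse(g)
    return tuple((n - 1) - inv[(n - 1) - x] for x in range(n))

# The closed path pi_2 pi_3 pi_7 pi_7 pi_0 pi_1 pi_2 pi_3 pi_2 ...
p = [pi[i] for i in (2, 3, 7, 7, 0, 1, 2, 3, 2)]
assert compose(p) == identity

# ... rotated and reversed is again closed, and reads
# pi_6 pi_5 pi_6 pi_7 pi_0 pi_1 pi_1 pi_5 pi_6.
q = [transform(g) for g in reversed(p)]
assert compose(q) == identity
assert q == [pi[i] for i in (6, 5, 6, 7, 0, 1, 1, 5, 6)]

# Count the closed paths of length n + 1 beginning with each rule: the path
# g_1 g_2 ... g_9 is closed with g_1 = pi_i exactly when g_2 ... g_9 composes
# to the inverse of pi_i, so a dynamic program counting length-8 products of
# the rules over S_8 suffices.
dist = Counter({identity: 1})
for _ in range(n):
    nxt = Counter()
    for g, c in dist.items():
        for r in pi:
            nxt[tuple(r[y] for y in g)] += c  # append r to the product
    dist = nxt
counts = [dist[inverse(g)] for g in pi]

# The counts for pi_i and pi_{(k - i) mod k} agree, as claimed above.
assert all(counts[i] == counts[(n - i) % n] for i in range(n))
print(counts)
```

The final check confirms that $\pi_i$ and $\pi_{k - i}$ begin equally many closed paths of length $k + 1$, matching the symmetry observed in the table.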
\begin{center} \begin{tikzpicture}[scale=0.6] \foreach \x in {1,...,8} { \draw (\x, 0) node {}; \draw (\x, 1) node {}; \draw (\x, 2) node {}; \draw (\x, 3) node {}; \draw (\x, 4) node {}; \draw (\x, 5) node {}; \draw (\x, 6) node {}; \draw (\x, 7) node {}; \draw (\x, 8) node {}; \draw (\x, 9) node {}; } \foreach \x in {11,...,18} { \draw (\x, 0) node {}; \draw (\x, 1) node {}; \draw (\x, 2) node {}; \draw (\x, 3) node {}; \draw (\x, 4) node {}; \draw (\x, 5) node {}; \draw (\x, 6) node {}; \draw (\x, 7) node {}; \draw (\x, 8) node {}; \draw (\x, 9) node {}; } \draw [->, gray] (1, 9) to (2, 8); \draw [->, gray] (2, 9) to (1, 8); \draw [->, gray] (3, 9) to (8, 8); \draw [->, gray] (4, 9) to (3, 8); \draw [->, gray] (5, 9) to (4, 8); \draw [->, gray] (6, 9) to (5, 8); \draw [->, gray] (7, 9) to (6, 8); \draw [->, gray] (8, 9) to (7, 8); \node[fill=none, draw=none] at (0, 8.5) {\textcolor{red}{$\pi_2$}}; \draw [->, gray] (1, 8) to (1, 7); \draw [->, gray] (2, 8) to (8, 7); \draw [->, gray] (3, 8) to (2, 7); \draw [->, gray] (4, 8) to (3, 7); \draw [->, gray] (5, 8) to (4, 7); \draw [->, gray] (6, 8) to (5, 7); \draw [->, gray] (7, 8) to (6, 7); \draw [->, gray] (8, 8) to (7, 7); \node[fill=none, draw=none] at (0, 7.5) {$\pi_3$}; \draw [->, gray] (1, 7) to (5, 6); \draw [->, gray] (2, 7) to (1, 6); \draw [->, gray] (3, 7) to (2, 6); \draw [->, gray] (4, 7) to (3, 6); \draw [->, gray] (5, 7) to (4, 6); \draw [->, gray] (6, 7) to (8, 6); \draw [->, gray] (7, 7) to (6, 6); \draw [->, gray] (8, 7) to (7, 6); \node[fill=none, draw=none] at (0, 6.5) {$\pi_7$}; \draw [->, gray] (1, 6) to (5, 5); \draw [->, gray] (2, 6) to (1, 5); \draw [->, gray] (3, 6) to (2, 5); \draw [->, gray] (4, 6) to (3, 5); \draw [->, gray] (5, 6) to (4, 5); \draw [->, gray] (6, 6) to (8, 5); \draw [->, gray] (7, 6) to (6, 5); \draw [->, gray] (8, 6) to (7, 5); \node[fill=none, draw=none] at (0, 5.5) {$\pi_7$}; \draw [->, gray] (1, 5) to (4, 4); \draw [->, gray] (2, 5) to (1, 4); \draw [->, gray] (3, 
5) to (2, 4); \draw [->, gray] (4, 5) to (3, 4); \draw [->, gray] (5, 5) to (8, 4); \draw [->, gray] (6, 5) to (5, 4); \draw [->, gray] (7, 5) to (6, 4); \draw [->, gray] (8, 5) to (7, 4); \node[fill=none, draw=none] at (0, 4.5) {$\pi_0$}; \draw [->, gray] (1, 4) to (3, 3); \draw [->, gray] (2, 4) to (1, 3); \draw [->, gray] (3, 4) to (2, 3); \draw [->, gray] (4, 4) to (8, 3); \draw [->, gray] (5, 4) to (4, 3); \draw [->, gray] (6, 4) to (5, 3); \draw [->, gray] (7, 4) to (6, 3); \draw [->, gray] (8, 4) to (7, 3); \node[fill=none, draw=none] at (0, 3.5) {$\pi_1$}; \draw [->, gray] (1, 3) to (2, 2); \draw [->, gray] (2, 3) to (1, 2); \draw [->, gray] (3, 3) to (8, 2); \draw [->, gray] (4, 3) to (3, 2); \draw [->, gray] (5, 3) to (4, 2); \draw [->, gray] (6, 3) to (5, 2); \draw [->, gray] (7, 3) to (6, 2); \draw [->, gray] (8, 3) to (7, 2); \node[fill=none, draw=none] at (0, 2.5) {$\pi_2$}; \draw [->, gray] (1, 2) to (1, 1); \draw [->, gray] (2, 2) to (8, 1); \draw [->, gray] (3, 2) to (2, 1); \draw [->, gray] (4, 2) to (3, 1); \draw [->, gray] (5, 2) to (4, 1); \draw [->, gray] (6, 2) to (5, 1); \draw [->, gray] (7, 2) to (6, 1); \draw [->, gray] (8, 2) to (7, 1); \node[fill=none, draw=none] at (0, 1.5) {$\pi_3$}; \draw [->, gray] (1, 1) to (2, 0); \draw [->, gray] (2, 1) to (1, 0); \draw [->, gray] (3, 1) to (8, 0); \draw [->, gray] (4, 1) to (3, 0); \draw [->, gray] (5, 1) to (4, 0); \draw [->, gray] (6, 1) to (5, 0); \draw [->, gray] (7, 1) to (6, 0); \draw [->, gray] (8, 1) to (7, 0); \node[fill=none, draw=none] at (0, 0.5) {$\pi_2$}; \draw [->, gray] (11, 9) to (16, 8); \draw [->, gray] (12, 9) to (11, 8); \draw [->, gray] (13, 9) to (12, 8); \draw [->, gray] (14, 9) to (13, 8); \draw [->, gray] (15, 9) to (14, 8); \draw [->, gray] (16, 9) to (15, 8); \draw [->, gray] (17, 9) to (18, 8); \draw [->, gray] (18, 9) to (17, 8); \node[fill=none, draw=none] at (10, 8.5) {$\pi_6$}; \draw [->, gray] (11, 8) to (17, 7); \draw [->, gray] (12, 8) to (11, 7); \draw [->, 
gray] (13, 8) to (12, 7); \draw [->, gray] (14, 8) to (13, 7); \draw [->, gray] (15, 8) to (14, 7); \draw [->, gray] (16, 8) to (15, 7); \draw [->, gray] (17, 8) to (16, 7); \draw [->, gray] (18, 8) to (18, 7); \node[fill=none, draw=none] at (10, 7.5) {$\pi_5$}; \draw [->, gray] (11, 7) to (16, 6); \draw [->, gray] (12, 7) to (11, 6); \draw [->, gray] (13, 7) to (12, 6); \draw [->, gray] (14, 7) to (13, 6); \draw [->, gray] (15, 7) to (14, 6); \draw [->, gray] (16, 7) to (15, 6); \draw [->, gray] (17, 7) to (18, 6); \draw [->, gray] (18, 7) to (17, 6); \node[fill=none, draw=none] at (10, 6.5) {$\pi_6$}; \draw [->, gray] (11, 6) to (15, 5); \draw [->, gray] (12, 6) to (11, 5); \draw [->, gray] (13, 6) to (12, 5); \draw [->, gray] (14, 6) to (13, 5); \draw [->, gray] (15, 6) to (14, 5); \draw [->, gray] (16, 6) to (18, 5); \draw [->, gray] (17, 6) to (16, 5); \draw [->, gray] (18, 6) to (17, 5); \node[fill=none, draw=none] at (10, 5.5) {$\pi_7$}; \draw [->, gray] (11, 5) to (14, 4); \draw [->, gray] (12, 5) to (11, 4); \draw [->, gray] (13, 5) to (12, 4); \draw [->, gray] (14, 5) to (13, 4); \draw [->, gray] (15, 5) to (18, 4); \draw [->, gray] (16, 5) to (15, 4); \draw [->, gray] (17, 5) to (16, 4); \draw [->, gray] (18, 5) to (17, 4); \node[fill=none, draw=none] at (10, 4.5) {$\pi_0$}; \draw [->, gray] (11, 4) to (13, 3); \draw [->, gray] (12, 4) to (11, 3); \draw [->, gray] (13, 4) to (12, 3); \draw [->, gray] (14, 4) to (18, 3); \draw [->, gray] (15, 4) to (14, 3); \draw [->, gray] (16, 4) to (15, 3); \draw [->, gray] (17, 4) to (16, 3); \draw [->, gray] (18, 4) to (17, 3); \node[fill=none, draw=none] at (10, 3.5) {$\pi_1$}; \draw [->, gray] (11, 3) to (13, 2); \draw [->, gray] (12, 3) to (11, 2); \draw [->, gray] (13, 3) to (12, 2); \draw [->, gray] (14, 3) to (18, 2); \draw [->, gray] (15, 3) to (14, 2); \draw [->, gray] (16, 3) to (15, 2); \draw [->, gray] (17, 3) to (16, 2); \draw [->, gray] (18, 3) to (17, 2); \node[fill=none, draw=none] at (10, 2.5) 
{$\pi_1$}; \draw [->, gray] (11, 2) to (17, 1); \draw [->, gray] (12, 2) to (11, 1); \draw [->, gray] (13, 2) to (12, 1); \draw [->, gray] (14, 2) to (13, 1); \draw [->, gray] (15, 2) to (14, 1); \draw [->, gray] (16, 2) to (15, 1); \draw [->, gray] (17, 2) to (16, 1); \draw [->, gray] (18, 2) to (18, 1); \node[fill=none, draw=none] at (10, 1.5) {$\pi_5$}; \draw [->, gray] (11, 1) to (16, 0); \draw [->, gray] (12, 1) to (11, 0); \draw [->, gray] (13, 1) to (12, 0); \draw [->, gray] (14, 1) to (13, 0); \draw [->, gray] (15, 1) to (14, 0); \draw [->, gray] (16, 1) to (15, 0); \draw [->, gray] (17, 1) to (18, 0); \draw [->, gray] (18, 1) to (17, 0); \node[fill=none, draw=none] at (10, 0.5) {\textcolor{red}{$\pi_6$}}; \end{tikzpicture} \end{center} \noindent Hence from the closed path \[ p \triangleq \pi_2 \pi_3 \pi_7 \pi_7 \pi_0 \pi_1 \pi_2 \pi_3 \pi_2 \] we form the closed path \[ q \triangleq \pi_6 \pi_5 \pi_6 \pi_7 \pi_0 \pi_1 \pi_1 \pi_5 \pi_6. \] As we have highlighted in the above example, closed paths beginning with $\pi_2$ are in bijective correspondence with closed paths ending in $\pi_6$. Finally, if we consider the rotation of $q$ which places the highlighted $\pi_6$ at the beginning, we have found a bijective correspondence between closed paths of a given length beginning with $\pi_2$ and those beginning with $\pi_6$. Hence, we cannot consider any length of path to differentiate these two rules under automorphisms of $\Gamma$. \section{Conclusion} In the context of the degree-diameter problem in the directed case, the possibility of finding graphs larger than the G\'omez graphs for a given degree and diameter remains open (and, indeed, it appears highly likely that larger examples do exist). Hence the optimality result for the G\'omez graphs primarily serves to demonstrate the limitations of this particular method of construction, and that the construction of larger graphs will likely require altogether new ideas.
Further, whilst we have shown an optimality result for the G\'{o}mez graphs, we have not shown that they are unique with this property. The choice in the G\'{o}mez graphs of placing the smaller cycle on the left and the larger cycle on the right is not necessarily required. Therefore, an interesting question would be to determine which of the potential optimal constructions work. Beyond this, if other constructions work, it is possible that they could have larger automorphism groups. In order to raise further questions, we now define \textit{G\'omez-like} graphs. Suppose a set of permutations $\Pi \subseteq S_n$ contains at least one permutation containing a cycle of each length up to $n$, and $\Pi$ is as small as possible with this property (i.e. $\Pi$ meets our optimality condition from earlier). If the word graph family $\mywgp{\Pi}$ has diameter $n$, then we shall call these graphs G\'omez-like. An obvious first question regarding G\'omez-like graphs is what conditions such a set $\Pi$ needs to fulfil to be admissible. With regards to the classification of the automorphism groups of G\'omez graphs, the classification achieved here misses a number of important cases. In rough order of importance, these are \begin{enumerate}[label=\textnormal{(\roman*)}] \item the automorphism groups of undirected G\'omez graphs, \item the automorphism groups of the graphs $\mydg{k}{k'}$ for $k \geq k' + 2$, \item the automorphism groups of $\mydg{k}{1}$, \item the automorphism groups of G\'omez-like graphs. \end{enumerate} A particular question considered by the author was whether a set of permutations $\Pi$ is admissible for shift restricted word graphs if, and only if, the corresponding alphabet fixing subgraph $\Gamma$ is $n$-reachable. The reason for this question is that both the Faber-Moore-Chen graphs and the G\'omez graphs have this property, and further a simple argument shows that admissibility implies a weaker but similar property, as we now show.
\begin{lemma} If $\Pi \subseteq S_n$ is an admissible set of permutations, letting $k < n$ and $m = n - k$, then for any $\tau \in S_n$ with \[ \tau(i) = \begin{cases} i - m, & \text{if $m < i \leq n$}, \\ j, & \text{otherwise}, \\ \end{cases} \] there are some $\pi_1, \pi_2, \dots, \pi_m \in \Pi$ such that $\pi_1 \pi_2 \dots \pi_m = \tau$. \end{lemma} \begin{proof} The argument is similar to that of Lemma~\ref{contains_cycle}. We simply consider a path from $x_1 x_2 \dots x_n$ to $y_1 y_2 \dots y_k x_{\tau(k + 1)} x_{\tau(k + 2)} \dots x_{\tau(n)}$. Consider the case $n = 9$, $k = 3$, $m = 6$, shown here \[ \newcommand{\f}[1]{\textcolor{red}{#1}} \newcommand{\g}[1]{\textcolor{blue}{#1}} \begin{matrix} & x_1 & x_2 & x_3 &\f{x_4}&\f{x_5}&\f{x_6}&\f{x_7}&\f{x_8}&\f{x_9}\\ & x_2 & x_3 &\f{x_4}&\f{x_5}&\f{x_6}&\f{x_7}&\f{x_8}&\f{x_9}&\g{y_1}\\ & x_3 &\f{x_4}&\f{x_5}&\f{x_6}&\f{x_7}&\f{x_8}&\f{x_9}&\g{y_1}&\g{y_2}\\ p_1&\f{x_4}&\f{x_5}&\f{x_6}&\f{x_7}&\f{x_8}&\f{x_9}&\g{y_1}&\g{y_2}&\g{y_3}\\ & x & x & x & x & x &\g{y_1}&\g{y_2}&\g{y_3}& x \\ & x & x & x & x &\g{y_1}&\g{y_2}&\g{y_3}& x & x \\ & x & x & x &\g{y_1}&\g{y_2}&\g{y_3}& x & x & x \\ & x & x &\g{y_1}&\g{y_2}&\g{y_3}& x & x & x & x \\ & x &\g{y_1}&\g{y_2}&\g{y_3}& x & x & x & x & x \\ p_2&\g{y_1}&\g{y_2}&\g{y_3}& x & x & x & x & x & x \\ \end{matrix} \] Considering the path from $p_1$ to $p_2$, we find that each permutation in this path must be in $\Pi$, and that the permutation of $x_4, x_5, \dots, x_9$ is arbitrary. \end{proof} \begin{corollary} If $\tau \in S_n$ such that there exists some $k < n$ with \[ \tau(i) = \begin{cases} i' < k & \text{if $i < k$} \\ i' \geq k & \text{if $i \geq k$}, \end{cases} \] then there exist $\pi_1, \pi_2, \dots, \pi_n \in \Pi$ such that $\tau = \pi_1 \pi_2 \dots \pi_n$. \end{corollary} \begin{proof} This is simply the composition of two permutations of the form shown to exist in the previous lemma. \end{proof} Hence, if $\Pi$ is admissible, we can easily see ``a lot'' of permutations must be $n$-reachable.
This, taken in conjunction with the fact that the known optimal admissible $\Pi$ are $n$-reachable, suggests that $n$-reachability may be a necessary requirement. \bibliographystyle{plain}
\section{Other Metrics} \label{app:other_metrics} In this section, we define other metrics of interest. In particular, we consider an adversarial setting (e.g., in the case of bio-warfare) where, if our observers are known, the adversary would select the worst location for the source. First, we consider the minimum success probability, which is \begin{eqnarray*} \hat{\mathcal P}_{s}(\mathcal{O}) &\stackrel{{\rm def}}{=}& 1- \max_{E_i} \left( \P(\hat{s} \neq s^* | s^*\in E_i) \right) \end{eqnarray*} \noindent where $\{E_i\}$ are the equivalence classes with respect to $\mathcal{O}$. Note that in an adversarial setting, we would not consider any prior; rather, we would select $\hat s \in \mbox{argmax}_{E_i} \P(\hat{s} \neq s^* | s^*\in E_i) $ uniformly at random; given any non-uniform distribution, the adversary could place the source at the location with lowest probability. For the same reasons, we may also wish to consider the \emph{maximum distance} between the true and the estimated source as a metric. \begin{eqnarray*} \max (d(s^*, \hat{s})) &\stackrel{{\rm def}}{=}& \max_{s \in V} \Delta_s = \max_i \Delta_i, \end{eqnarray*} \noindent where $\Delta_s$ (similarly $\Delta_i$) denotes the diameter of equivalence class $[s]_\mathcal{O}$ (similarly $E_i$). Note that, in particular, this is independent of any prior. Another natural consideration, which interpolates between expected and worst-case metrics, is the \emph{expected maximum distance} between the true and the estimated source. This captures the case where there is a prior $Q$ on the source and we are able to identify the equivalence class of $s^*$, but make the \emph{worst-case} estimation $\hat s$ within that class. \begin{eqnarray*} \mathrm E[\max(d(s^*, \hat{s}))] &\stackrel{{\rm def}}{=}& \sum_{s \in V} \P(s^* = s) (\max_{u \in [s]_\mathcal{O}} d(s,u)) \\ &=& \sum_{s \in V} Q(s) \Delta_s = \sum_{i} Q(E_i) \Delta_i . 
\end{eqnarray*} \section{Double Resolving Sets} \label{app:DRS_CHW} The problem of \emph{minimizing} the required number of observers in order to perfectly identify the source in the zero-variance setting has been studied \cite{ChenHW14}; an observer set $\mathcal{O}$ such that $\mathcal P_{s}(\mathcal{O}) = 1$ is called a Doubly Resolving Set (DRS). While the original formulation of the DRS problem is slightly different, this version follows straightforwardly from our observations in Section~\ref{sec:low-variance}. \begin{definition}[Double Resolving Set] Given a network $\mathcal G$, $S \subseteq V$ is said to be a Double Resolving Set of $\mathcal G$ if for any $x, y \in V$ there exist $u, v \in S$ s.t. $d(x, u) - d(x, v) \neq d(y, u)-d(y, v)$. \end{definition} \noindent Finding a Doubly Resolving Set of minimum size is known to be NP-hard~\cite{kratica}. An approximation algorithm, based on a greedy minimization of an \emph{entropy} function, has been studied. Note that this function has no connection to true information-theoretic entropy. \begin{definition}[Entropy \cite{ChenHW14}] Let $\mathcal{G}$ be a network and let $\mathcal{O} \subseteq V$ with $|\mathcal{O}|=k$ be a set of observers. The entropy of $\mathcal{O}$ is \[H_\mathcal{O} = \log_2(\prod_{[u]_\mathcal{O} \subseteq V}|[u]_\mathcal{O}|! ).\] \end{definition} \noindent Note that $H_\mathcal{O}$ is minimized if and only if each equivalence class consists of only one node, and hence if and only if $\mathcal P_{s}=1$. However, despite the fact that $H_\mathcal{O}$ is minimized when $\mathcal P_{s}$ is maximized and that both act on the same set of equivalence classes for a given $\mathcal{O}$, the greedy processes that minimize $H_\mathcal{O}$ and maximize $\mathcal P_{s}$ are not the same. This can be seen by rewriting both objective functions in the following way. Let $[c_1, \ldots, c_q]$ be the sequence of equivalence class sizes. 
Then $H_\mathcal{O}$ can be written as $H_\mathcal{O}([c_1, \ldots, c_q]) = \sum_{i=1}^q \sum_{j=2}^{c_i} \log(j) = \sum_{i=2}^{\max_j c_j} \log(i) \#\{c_j \geq i\}.$ Analogously, we have the following equality for the success probability $\mathcal P_{s}([c_1, \ldots , c_q])$: $n (1-\mathcal P_{s}([c_1, \ldots, c_q])) = n - q = \sum_{i=2}^{\max_j c_j} \#\{c_j \geq i\}.$ Hence, though similar in spirit, a greedy minimization of $H_\mathcal{O}$ is not related to a greedy optimization of $\mathcal P_{s}$ (or $\mathrm E[d(s^*, \hat s)]$). \section{Alternate objective functions}\label{app:alternate-obj} \begin{table}[H] \begin{center} \begin{tabular}{l c c c} \hline \multicolumn{4}{c}{Random Geometric Graph, $N=100$, $r=0.2$}\\ \hline \\ & $\frac{\mathcal P_{s}(\Phi_{dist})-\mathcal P_{s}(\Phi)}{\mathcal P_{s}(\Phi)}$ & $\frac{\mathrm E_d(\Phi_{dist})-\mathrm E_d(\Phi)}{\mathrm E_d(\Phi_{dist}) + 1}$ & $\frac{\mathcal P_{s}(\Phi_{ent})-\mathcal P_{s}(\Phi)}{\mathcal P_{s}(\Phi)}$ \\ \\ \hline $k = 2$ & -0.205 & -0.101 & -0.033\\ $k = 4$ & -0.014 & 0.003 & -0.007\\ $k = 8$ & -0.003 & 0.002 & -0.003\\ \hline \\ \hline \multicolumn{4}{c}{Barab\'asi--Albert Graph, $N=100$, $m=3$}\\ \hline \\ & $\frac{\mathcal P_{s}(\Phi_{dist})-\mathcal P_{s}(\Phi)}{\mathcal P_{s}(\Phi)}$ & $\frac{\mathrm E_d(\Phi_{dist})-\mathrm E_d(\Phi)}{\mathrm E_d(\Phi_{dist}) + 1}$ & $\frac{\mathcal P_{s}(\Phi_{ent})-\mathcal P_{s}(\Phi)}{\mathcal P_{s}(\Phi)}$ \\ \\ \hline $k = 2$ & -0.168 & -0.023 & -0.037\\ $k = 4$ & -0.039 & -0.025 & -0.028\\ $k = 8$ & -0.004 & 0.003 & 0.005\\ \end{tabular}\caption{Comparison of \textsc{lv-Obs} ($\Phi$) with the greedy algorithms that minimize the entropy function of \cite{ChenHW14} ($\Phi_{ent}$) and the expected distance ($\Phi_{dist}$)}\label{table} \end{center} \end{table} Here we compare Algorithm \ref{algo:budget}, denoted in this section as $\Phi$, with two other greedy algorithms that allocate the budget for observers according to different objective functions: 
\begin{enumerate} \item $\Phi_{ent}$ minimizes the entropy function $H_{\mathcal{O}}$ \cite{ChenHW14} (see Section \ref{app:DRS_CHW}); \item $\Phi_{dist}$ minimizes the expected distance (see Equation \eqref{eq:exp_dist}). \end{enumerate} We considered different topologies and different budgets $k$ for the observers. The results are given in the form of (averaged) relative differences in Table \ref{table}. The standard error of measurement is not reported for the sake of readability, but it was checked to be small: approximately $10^{-2}$ for $k=2$ and $(\mathcal P_{s}(\Phi_{dist})-\mathcal P_{s}(\Phi))/\mathcal P_{s}(\Phi)$; on the order of $10^{-3}$ or smaller in all the other cases. Note that, since the expected distance can be $0$, we add $1$ to the denominator when comparing $\mathrm E_d(\Phi_{dist})$ and $\mathrm E_d(\Phi)$. The results achieved by these algorithms are, on average, worse than those of Algorithm \ref{algo:budget} ($\Phi$), independently of the graph topology. The only exception is the minimization of the expected distance when $k$ is very small. \section{Hardness of Budgeted Observer Placement}\label{app:hardness} \begin{theorem} Given a network $\mathcal G=(V,E)$ and a budget $k$, finding an observer set $\mathcal{O}$ which maximizes $\mathcal P_{s}$ is NP-hard. \end{theorem} \begin{proof} We prove that budgeted observer placement is NP-hard with a reduction from the DRS problem (see Section~\ref{app:DRS_CHW}): given a polynomial-time algorithm for the budgeted observer placement problem, we show that we can solve the DRS problem in polynomial time. Assume that we have a polynomial-time algorithm $\mathcal A$ that takes as input a network $\mathcal G = (V,E)$ and a budget $k$, and outputs a set $\mathcal{O} \subseteq V$ of size $k$ such that $\mathcal P_{s}$ is maximized. 
Recall from Section~\ref{sec:low-variance} that given a network $\mathcal G$ and a set $\mathcal{O}$, the probability $\mathcal P_{s}$ can be calculated in time $O(n)$ where $n = |V|$ (it is enough to compute the $n$ distance vectors with respect to $\mathcal{O}$ and any reference observer $o_1\in \mathcal{O}$). Hence, we can construct the following algorithm for the DRS problem. \begin{algorithm}[H] \begin{algorithmic} \caption{Finds a minimum-cardinality \emph{DRS} given an algorithm that computes the $k$-node set maximizing $\mathcal P_{s}$.} \Require{Network $\mathcal G=(V, E)$} \For{$k = 1, \ldots, |V|$} \State $\mathcal{O} := \mathcal A(\mathcal G,k)$ \State $P := \mathcal P_{s}(\mathcal{O})$ \If{$P = 1$} \State \Return $k$ \EndIf \EndFor \end{algorithmic} \end{algorithm} Since the full set $V$ always resolves the network, the program is well defined (i.e., it always returns \emph{some} $k$). Moreover, it returns precisely the minimum budget $k$ required in order to attain $\mathcal P_{s} = 1$. Lastly, it is clear that the runtime is at most $O(n(p_\mathcal A(n) + n))$ where $p_\mathcal A(n)$ is the running time of algorithm $\mathcal A$. Hence, we have a polynomial-time algorithm for the DRS problem. \end{proof} \section{High-Variance Source Estimation} \label{app:estimator} Denote by $T_\mathcal{O}$ the observed infection process. 
If the infection delays are Gaussian, $\mathcal G$ is a tree and no prior information about the source position is available, the maximum likelihood (ML) estimator is defined as $\hat{s} \in \displaystyle\arg \max_{\substack{s\in V}} \P(s|T_\mathcal{O})$, which has a tractable closed form \cite{Pinto12}.\footnote{Note that the model of \cite{Pinto12} additionally assumed infected observers knew the neighbor that infected them; this assumption is not required for our work.} In particular, given a set of observers $\mathcal{O}=\{o_1, o_2, \ldots, o_k\} \subseteq V$, the vector of observed infection delays $\tau = [t_2-t_1, \ldots, t_k-t_1] \in \mathbb R^{k-1}$ is distributed as $\mathcal{N}(\mathbf{d}_{s, \mathcal{O}}, \mathbf{\Lambda}_\mathcal{O})$ where $\mathbf{d}_{s, \mathcal{O}}$ is the distance vector of Definition~\ref{distance_vector} and the covariance matrix $\mathbf{\Lambda}_\mathcal{O}$ is \begin{equation}\label{Lambda} \boldsymbol\Lambda_{\mathcal{O}, (k,i)}=\sigma^2 \left\{\begin{matrix} \sum_{(u,v) \in \mathcal{P}(o_1,o_{k+1})} w_{uv}^2 & k=i\\ \sum_{(u,v) \in \mathcal{P}(o_1,o_{k+1})\cap \mathcal{P}(o_1,o_{i+1})} w_{uv}^2& k \neq i, \end{matrix}\right. \end{equation} with $\mathcal{P}(x,y)$ denoting the set of edges in the unique path between node $x$ and node $y$. Hence the ML estimator is \begin{equation}\label{ML_tree} \begin{split} \hat{s}&\in\displaystyle\arg \max_{\substack{s\in V}}\frac{\exp \Big(-\frac{1}{2}(\tau-\mathbf{d}_{s,\mathcal{O}})^\top{\mathbf{\Lambda}_\mathcal{O}}^{-1} (\tau-\mathbf{d}_{s,\mathcal{O}})\Big)}{|\mathbf{\Lambda}_\mathcal{O}|^{1/2}}\\ &=\displaystyle\arg \max_{\substack{s\in V}} \Big[\mathbf{d}_{s,\mathcal{O}}^{\top}{\mathbf{\Lambda}_\mathcal{O}}^{-1} (\tau-\frac{1}{2}\mathbf{d}_{s,\mathcal{O}})\Big]. \end{split} \end{equation} On non-tree networks, the multiplicity of paths linking any two nodes makes source estimation more challenging. 
As claimed in \cite{Pinto12}, the same estimator can be used as an approximation of the ML estimator for a non-tree network by assuming that the diffusion happens only through a BFS (\textit{Breadth-First-Search}) tree rooted at the (unknown) source. In this case the paths which appear in the definition of the covariance matrix $\mathbf{\Lambda}_{\mathcal{O}}$ are computed on the BFS tree rooted at the candidate source under consideration. Hence $\mathbf{\Lambda}_{\mathcal{O}}$ depends on the candidate source and the ML estimator is \begin{equation}\label{eq:estimator} \hat{s}_{\mathrm{bfs}} \in\displaystyle\arg \max_{\substack{s\in V}}\frac{\exp \Big(-\frac{1}{2}(\tau-\mathbf{d}_{s,\mathcal{O}})^\top{\mathbf{\Lambda}^s_\mathcal{O}}^{-1} (\tau-\mathbf{d}_{s,\mathcal{O}})\Big)}{|\mathbf{\Lambda}^s_\mathcal{O}|^{1/2}}. \end{equation} In this work, we adopt \eqref{eq:estimator} as the source estimator in the noisy case. In fact, even if our edge delays are truncated Gaussians, under the hypothesis of sparse observations, we can apply the Central Limit Theorem (CLT) to approximate the sum of the edge delays with Gaussian random variables: if all edges have the same weight, we can apply the CLT for i.i.d. random variables; if this is not the case, we can apply Lyapunov's version of the CLT.\footnote{The Lyapunov condition with $\delta=1$ is easily verified for a sequence of independent and uniformly bounded random variables (see Example $27.4$ in \cite{Billi} for more details). 
} \section{Additional Figures}\label{app:figures} \begin{figure}[H] \begin{center} \subfigure[\textsc{lv-Obs}]{\includegraphics[width=.45\columnwidth]{ff_LN}} \subfigure[\textsc{hv-Obs}, $L/\tilde{\Delta}=0.5$]{\includegraphics[width=.45\columnwidth]{ff_L19}} \caption{ The observer placements of \textsc{lv-Obs} and \textsc{hv-Obs} with $L/\tilde{\Delta}=0.5$ and $k=5\%$ on the F \& F network are very different; \textsc{lv-Obs} includes leaves, whereas \textsc{hv-Obs} spaces observers more closely.}\label{fig:ex_ff} \end{center} \end{figure} \begin{figure}[H] \subfigure[CR, 5\% observers]{\includegraphics[width=0.47\columnwidth]{cal_unif_success_63-eps-converted-to.pdf}} \subfigure[F \& F, 5\% observers]{\includegraphics[width=0.47\columnwidth]{ff_success_6_unif-eps-converted-to.pdf}}\\ \subfigure[FB, 5\% observers]{\includegraphics[width=0.47\columnwidth]{fb_success_59_unif-eps-converted-to.pdf}} \caption{Success probability $\mathcal P_{s}$ as variance is increased on a uniform transmission model (Section \ref{sec:exp_variance_models}).} \label{fig:alt_model} \end{figure} \begin{figure} \subfigure[California, 5\% observers]{\includegraphics[width=0.47\columnwidth]{cal_trunc_hops_63-eps-converted-to.pdf}} \subfigure[California, 5\% observers]{\includegraphics[width=0.47\columnwidth]{cal_trunc_dist_63-eps-converted-to.pdf}}\\ \subfigure[Friends \& Families, 5\% observers]{\includegraphics[width=0.47\columnwidth]{ff_hops_6_trunc-eps-converted-to.pdf}} \subfigure[Friends \& Families, 5\% observers]{\includegraphics[width=0.47\columnwidth]{ff_dist_6_trunc-eps-converted-to.pdf}}\\ \subfigure[Facebook, 5\% observers]{\includegraphics[width=0.47\columnwidth]{hops_59-eps-converted-to.pdf}} \subfigure[Facebook, 5\% observers]{\includegraphics[width=0.47\columnwidth]{weighted_59-eps-converted-to.pdf}} \caption{Expected distance in number of edges (left column) and in weighted path length (right column) for the datasets and setting of Section~\ref{sec:high-variance}}\label{fig:distance} 
\end{figure} \section{Conclusion \& Future Work} \label{sec:conclusion} In this work, we have taken a principled approach towards budgeted observer placement for source localization. We are the first to have observed a dichotomy between the low- and high-variance regimes, and we developed complementary approaches for both. We have evaluated our approaches against state-of-the-art and alternative heuristics and found that our algorithms perform favourably. One natural extension would account for two stages of observation; in the first stage, as in this work, we select a set of observers to monitor the network. In the next stage, once an epidemic begins, we deploy additional observers in the relevant region of the network. This would pave the way for other types of \emph{adaptive} models, including ones where we not only \emph{observe} a node but can act to \emph{immunize} it, or in which we can \emph{move} the observers as required. \section{The High-Variance Regime} \label{sec:high-variance} When the variance is not guaranteed to be low, as defined in Section~\ref{sec:low-variance}, computing the success probability (or other metrics of interest) analytically is unfortunately not possible, except for very simple graphs, like the path in the example of Figure \ref{fig:ex_transition}. Moreover, the estimation of the source is more challenging because the observed infection delay $t_i - t_j$ can be misleading, especially if the corresponding observers $o_i$ and $o_j$ are \emph{far} from the source. Take, for example, a path of length $L$ where the two leaves are the only two observers and all edges have weight $1$. Figure~\ref{fig:variance} shows how the success probability $\mathcal P_{s}$ decays faster for increasing values of $L$. Building on this observation, we propose a strategy for observer placement that enforces a controlled distance from a general source node to the observer set. 
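The decay in Figure~\ref{fig:variance} is easy to reproduce in simulation. The following Monte Carlo sketch (our own illustrative code, not part of the paper's implementation; it uses the truncated-Gaussian delay model of Section~\ref{subsec:noise_model}) estimates $\mathcal P_{s}$ on a unit-weight path with observers at the two leaves:

```python
import random

def truncated_delay(w, sigma):
    """Edge delay X_uv: N(w, sigma*w) conditioned on [w/2, 3w/2] (rejection)."""
    if sigma == 0:
        return float(w)
    while True:
        d = random.gauss(w, sigma * w)
        if 0.5 * w <= d <= 1.5 * w:
            return d

def success_prob_on_path(L, sigma, trials=1000):
    """Monte Carlo estimate of P_s on a path 0..L, observers at both leaves."""
    hits = 0
    for _ in range(trials):
        s = random.randrange(L + 1)                                # true source
        t0 = sum(truncated_delay(1, sigma) for _ in range(s))      # arrival at leaf 0
        tL = sum(truncated_delay(1, sigma) for _ in range(L - s))  # arrival at leaf L
        # Without noise, t0 - tL = 2s - L; invert and round to estimate s.
        s_hat = min(max(round((t0 - tL + L) / 2), 0), L)
        hits += (s_hat == s)
    return hits / trials
```

With $\sigma = 0$ the estimate is exact for every source; as $\sigma$ or $L$ grows, the variance of $t_0 - t_L$ accumulates along the path and $\mathcal P_{s}$ decays.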
\begin{figure} \begin{center} \subfigure[]{\includegraphics[width=0.45\columnwidth]{noise_accum-eps-converted-to.pdf}\label{fig:variance}} \hspace{.5cm} \subfigure[]{\includegraphics[width=0.3\columnwidth]{no-unique-path-eps-converted-to.pdf}\label{fig:no-unique-path}} \caption{(a): Success probability $\mathcal P_{s}$ on a path of length $L$ for increasing variance $\sigma$. (b): Counterexample for the converse of Lemma~\ref{lem:shortest-path-resolv}; for each pair of observers in $\mathcal{O}$, $u$ is not contained in the shortest path between them, yet $\mathcal{O}$ is a \emph{DRS}.} \end{center} \end{figure} \subsection{Diffusion Model and Source Estimation} \label{subsec:noise_model} For every edge $(u, v)$ the infection delay $X_{uv}$ is distributed as a truncated Gaussian random variable with parameters ${(w_{uv}, \sigma w_{uv}, [\sfrac{w_{uv}}{2}, \sfrac{3 w_{uv}}{2}])}$. More precisely, if ${Y_{uv} \sim \mathcal{N}(w_{uv}, \sigma w_{uv})}$ is a Gaussian random variable, $X_{uv}$ is obtained by conditioning $Y_{uv}$ on ${Y_{uv} \in [\sfrac{w_{uv}}{2}, \sfrac{3 w_{uv}}{2}]}$. This delay distribution has two advantages over that of \cite{Pinto12}, in which ${X_{uv} \sim \mathcal{N}(w_{uv}, \sigma w_{uv})}$. First, the model admits only strictly positive infection delays. Second, different values of the standard deviation $\sigma$ result in different regimes for the propagation, making our model very versatile. When $\sigma = 0$, $X_{uv}$ boils down to a deterministic value equal to the edge weight $w_{uv}$; when $\sigma$ is large, the distribution of $X_{uv}$ becomes closer to uniform ${U([\sfrac{w_{uv}}{2}, \sfrac{3 w_{uv}}{2}])}$. Finally, when $\sigma$ is strictly positive but small, ${X_{uv} \approx \mathcal{N}(w_{uv}, \sigma w_{uv})}$. In Appendix~\ref{app:estimator}, we explain how an approximated maximum likelihood estimator for the source can be derived in this setting. 
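To make the estimator of \eqref{ML_tree} concrete, here is a small sketch (our own code; `ml_source_on_path` is a hypothetical helper, not from the paper) for the special case of a unit-weight path with three observers, where $\tau \in \mathbb R^2$ and the $2 \times 2$ covariance of \eqref{Lambda} can be inverted explicitly:

```python
def ml_source_on_path(n, obs, tau, sigma):
    """Maximize the Gaussian log-likelihood over candidate sources on a
    unit-weight path 0..n-1 with observers obs=(o1,o2,o3) and observed
    delay differences tau=(t2-t1, t3-t1)."""
    o1, o2, o3 = obs

    def shared_edges(a, b):
        # number of edges shared by the paths P(o1,a) and P(o1,b) on a path graph
        lo = max(min(o1, a), min(o1, b))
        hi = min(max(o1, a), max(o1, b))
        return max(0, hi - lo)

    # Covariance entries; on a tree they do not depend on the candidate s.
    L11 = sigma ** 2 * abs(o2 - o1)
    L22 = sigma ** 2 * abs(o3 - o1)
    L12 = sigma ** 2 * shared_edges(o2, o3)
    det = L11 * L22 - L12 * L12

    best, best_score = None, float("-inf")
    for s in range(n):
        r1 = tau[0] - (abs(s - o2) - abs(s - o1))  # residual, first component
        r2 = tau[1] - (abs(s - o3) - abs(s - o1))
        # quadratic form r^T Lambda^{-1} r via the explicit 2x2 inverse
        q = (L22 * r1 * r1 - 2 * L12 * r1 * r2 + L11 * r2 * r2) / det
        if -0.5 * q > best_score:
            best, best_score = s, -0.5 * q
    return best
```

Since $\mathbf{\Lambda}_\mathcal{O}$ does not depend on the candidate $s$ on a tree, the determinant factor is constant and only the quadratic form needs to be compared.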
\subsection{Observer Placement} First, we formalize why distances between observers are important: If $o_i, o_j$ are two observers and the source is ${s^* \in \mathcal{P}(o_i, o_j)}$, then \begin{equation}\label{eq:path_variance} \mbox{var}(t_i - t_j) \approx \sigma^2 \left[ \sum_{(u,v) \in \mathcal P(o_i, o_j)}w_{uv}^2 \right] \end{equation} where $\mathcal P(x,y)$ denotes the shortest path from $x$ to $y$, written as a sequence of edges. Although we cannot control $\sigma$, we can control the \emph{path length} between observers.\footnote{A relevant but orthogonal line of work would study how to control the parameter $\sigma$ by, e.g., immunizations, quarantines, or other preventative measures and is outside the scope of our work.} We make use of the following sufficient condition for a set to be a $\emph{DRS}$, i.e., for an observer set to guarantee optimal source detection. \begin{lemma}\label{lem:shortest-path-resolv} Let $\mathcal G=(V, E)$ be a network, $\mathcal{O} \subseteq V$. If for every $u\in V$ there exist $o_1, o_2 \in \mathcal{O}$ such that there is a unique shortest path $\mathcal{P}(o_1, o_2)$ between $o_1$ and $o_2$ and $u \in \mathcal{P}(o_1, o_2)$, then $\mathcal{O}$ is a \emph{DRS} for $G$. \end{lemma} \begin{proof} Let $u, v \in V \backslash \mathcal{O}$. We will prove that there exist $o_1, o_2 \in \mathcal{O}$ such that the pair $(u, v)$ is resolved by $(o_1, o_2)$, i.e., $ d(v, o_1)-d(u,o_1) \neq d(v, o_2)- d(u, o_2)$. Let $o_1, o_2 \in \mathcal{O}$ such that $u$ appears in the unique shortest path $\mathcal{P}(o_1, o_2)$ and $o_3, o_4 \in \mathcal{O}$ such that $v$ appears in the unique shortest path $\mathcal{P}(o_3, o_4)$. If $v \in \mathcal{P}(o_1, o_2)$ or $u \in \mathcal{P}(o_3, o_4)$ then $u$ and $v$ are resolved by, respectively, $(o_1, o_2)$ or $(o_3, o_4)$. Take $v \notin \mathcal{P}(o_1, o_2)$ and $u \notin \mathcal{P}(o_3, o_4)$. In this case, $\{o_1, o_2\} \neq \{o_3, o_4\}$. 
Let us suppose without loss of generality that $o_1 \notin \{o_3, o_4\}$. We look only at the case where $(o_1, o_2)$ does not resolve $(u, v)$ and prove that the pair is indeed resolved by two vertices in $\mathcal{O}$. Since $(o_1, o_2)$ does not resolve $(u, v)$, there exists $c\in \mathbb R$ such that $d(v, o_1)-d(u,o_1) = c = d(v, o_2)- d(u, o_2)$. Since the unique shortest path between $o_1$ and $o_2$ goes through $u$, we have that $c>0$. We prove that either $(o_1, o_3)$ or $(o_1, o_4)$ resolves $(u, v)$. If this were not the case, we would have the following equalities: \begin{align*} c &= d(v, o_1)-d(u,o_1) = d(v, o_3)- d(u, o_3)\\ c &= d(v, o_1)-d(u,o_1) = d(v, o_4)- d(u, o_4). \end{align*} Since $c>0$, $d(v, o_3) > d(u, o_3)$ and $d(v, o_4) > d(u, o_4)$, giving a contradiction with $v$ (and not $u$) being on the shortest path $\mathcal{P}(o_3, o_4)$. We conclude that $(u, v)$ are resolved by either $(o_1, o_3)$ or $(o_1, o_4)$. \end{proof} The converse of this lemma is not true: If $\mathcal{O}$ double resolves $\mathcal G$, it is not even true that for every node $u$ there must exist $o_1, o_2 \in \mathcal{O}$ such that $u$ is contained in \emph{some} shortest path between $o_1$ and $o_2$ (see the example in Figure~\ref{fig:no-unique-path}). \textbf{Path covering strategy.} We take Lemma~\ref{lem:shortest-path-resolv} as a basis for deriving a \emph{path covering} strategy for observer placement. In practice, the condition about the \emph{uniqueness} of the shortest path is too strong and excludes many potentially useful observer nodes\footnote{Experimentally we see that in many practical situations two shortest paths differ only by a few nodes and the majority of nodes on the path are resolved by the two extreme nodes.}. This is why we relax the condition of Lemma~\ref{lem:shortest-path-resolv} and, when the shortest path is not unique, we select one arbitrarily. 
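In small networks, the double-resolving condition of Section~\ref{app:DRS_CHW} can also be checked by brute force from the pairwise distances; a minimal sketch (our own code, $O(n^2 k^2)$ time):

```python
def is_drs(dist, observers):
    """Brute-force check of the Double Resolving Set definition: every pair
    (x, y) must have observers u, v with d(x,u)-d(x,v) != d(y,u)-d(y,v).
    `dist` maps each node to a dict of distances to all nodes."""
    nodes = list(dist)
    for i, x in enumerate(nodes):
        for y in nodes[i + 1:]:
            if all(dist[x][u] - dist[x][v] == dist[y][u] - dist[y][v]
                   for u in observers for v in observers):
                return False  # the pair (x, y) is unresolved
    return True
```

For instance, on a path the two endpoints form a DRS, whereas on a star two leaves do not (the centre and a third leaf are unresolved).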
Let $S\subseteq V$ be a set of observers and $L$ a positive integer: we call $P_{L}(S)$ the set of nodes that lie on a shortest path of length at most $L$ between any two observers in the set $S$. Given a budget $k$ and a positive integer $L$, we denote by $S^*_{k, L}$ the set of $k$ vertices that maximize the cardinality of $P_{L}(S)$. We call $L$ the \emph{length constraint} for the observer placement because we consider an observer to be \emph{useful} for source localization only if it is within distance $L$ from another observer. $S^*_{k, L}$ can be approximated greedily as in Algorithm~\ref{algo:path_covering}.\footnote{The running time of Algorithm \ref{algo:path_covering} is $O(n^2k^2)$; however, as with the low-variance case, this is highly parallelizable and hence tractable even for large networks.} \begin{algorithm} \begin{algorithmic} \caption{(\textsc{hv-Obs}): Observer placement for the high-variance setting.}\label{algo:path_covering} \Require{Network $\mathcal G=(V, E)$, budget $k$, length constraint $L$} \State $n \leftarrow |V|$ \For {$v \in V$} \State $\mathcal{O}_v \leftarrow v$ \While{$|P_{L}(\mathcal{O}_v)| \neq n$ \textbf{and} $|\mathcal{O}_v| < k$} \State $u \leftarrow\mbox{argmax}_{z \in V \backslash \mathcal{O}_v} [|P_{L}(\mathcal{O}_v \cup \{z\})| - |P_{L}(\mathcal{O}_v)|]$ \State $\mathcal{O}_v \leftarrow \mathcal{O}_v \cup \{u\}$. \EndWhile \EndFor \State \Return $\mbox{argmax}_{v \in V} |P_{L}(\mathcal{O}_v)|$ \end{algorithmic} \end{algorithm} We will refer to the observer placement produced by Algorithm~\ref{algo:path_covering} as \textsc{hv-Obs}$(L)$ to emphasize that it is designed for the high-variance case. \textbf{Comparison with Algorithm~\ref{algo:budget}.} Note that taking $L$ equal to the maximum weighted distance $\Delta$ does not make Algorithm~\ref{algo:path_covering} equivalent to Algorithm~\ref{algo:budget}, i.e., we do not obtain \textsc{lv-Obs}. 
To see how the two algorithms could give different results, take a cycle of odd length $d$ with a leaf node $\ell$ added as a neighbor to an arbitrary node $v$, and assume that the algorithm starts with the initial set $\{v\}$. At the first step, the two algorithms will make the same choice, choosing one of the two nodes that is at distance $(d-1)/2$ from $v$. At the second step, however, $\textsc{lv-Obs}$ will add $\ell$ (a DRS contains all leaves \cite{ChenHW14}), whereas Algorithm~\ref{algo:path_covering} will add a node on the cycle. This observation is key to our results because it explains why Algorithm \ref{algo:path_covering} results in a more uniform (and hence \textit{variance-resistant}) observer placement with respect to \textsc{lv-Obs}. \textsc{hv-Obs} operates a trade-off between the average distance to the observers and the maximization of $\mathcal P_{s}$. \textbf{Choice of the $L$ parameter.} How should one set $L$? Needless to say, the optimal $L$ depends on the network topology and on the available budget: Clearly, for a larger budget a smaller $L$ is preferred. The cardinality of $P_{L}(\mathcal{O})$ is a good proxy for the performance of $\mathcal{O}$. The value $|P_{L}|$ is increasing in $L$ and reaches its maximum for $L$ equal to the maximum weighted distance ($L=\Delta$). For small $L$, $|P_{L}(\textsc{hv-Obs})| < |P_{\Delta}(\textsc{lv-Obs})|$, but for $L$ large enough this is no longer the case. See Figure~\ref{fig:covered} for an example. Our empirical results suggest that $L$ should be chosen as the maximum for which $|P_{L}(\textsc{hv-Obs})| \leq |P_{\Delta}(\textsc{lv-Obs})|$. The key property of \textsc{hv-Obs} with respect to \textsc{lv-Obs} is that observers are spread more \emph{uniformly} without \emph{losing} too much in terms of success probability $\mathcal P_{s}$: Figure~\ref{fig:covered-prob-err} shows $|P_{L}(\textsc{hv-Obs})|$ and $\mathcal P_{s}$ as a function of $L$. 
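For concreteness, $P_{L}(S)$ and the inner greedy loop of Algorithm~\ref{algo:path_covering} can be sketched as follows (our own code with hypothetical helper names; unweighted distances, using the fact that a node $u$ lies on a shortest $o_1$--$o_2$ path iff $d(o_1,u)+d(u,o_2)=d(o_1,o_2)$):

```python
from itertools import combinations

def covered(dist, S, L):
    """P_L(S): nodes lying on a shortest path of length at most L
    between two observers in S.  `dist` is a dict-of-dicts of distances."""
    nodes = set(S)
    for o1, o2 in combinations(S, 2):
        if dist[o1][o2] <= L:
            nodes |= {u for u in dist
                      if dist[o1][u] + dist[u][o2] == dist[o1][o2]}
    return nodes

def hv_obs_from(dist, k, L, start):
    """Inner greedy loop of the hv-Obs algorithm from one start node."""
    S = [start]
    while len(covered(dist, S, L)) < len(dist) and len(S) < k:
        # add the node with the largest marginal gain in |P_L|
        S.append(max((u for u in dist if u not in S),
                     key=lambda u: len(covered(dist, S + [u], L))))
    return S
```

On a path of five nodes with budget $k=2$ and $L=4$, starting from one endpoint, the greedy step picks the opposite endpoint and covers the whole path.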
\textsc{lv-Obs} and \textsc{hv-Obs} can give drastically different observers (see Appendix~\ref{app:figures} for an example). \begin{figure} \begin{center} \includegraphics[width=0.5\columnwidth]{cal_covered_25-eps-converted-to.pdf} \caption{Fraction of nodes in $P_{L}(\cdot)$ for the California dataset with $2\%$ of observers.}\label{fig:covered} \end{center} \end{figure} \begin{figure} \centering \subfigure[CR]{\includegraphics[width=0.47\columnwidth]{cal_covered-eps-converted-to.pdf}} \subfigure[CR]{\includegraphics[width=0.47\columnwidth]{cal_success-eps-converted-to.pdf}} \subfigure[F \& F]{\includegraphics[width=0.47\columnwidth]{ff_covered-eps-converted-to.pdf}} \subfigure[F \& F]{\includegraphics[width=0.47\columnwidth]{ff_success-eps-converted-to.pdf}} \caption{Fraction of nodes in $P_{L}(\textsc{hv-Obs})$ and success probability as a function of $L/\Delta$ for the CR and the F \& F datasets comparing with the zero-variance setting.} \label{fig:covered-prob-err} \end{figure} \section{Empirical results} \label{sec:exp_main} We purposely run our experiments on three very different real-world networks that, in addition to being relevant examples of networks for epidemic spread, display different characteristics in terms of size, diameter, clustering coefficient and average degree (see Table~\ref{tab:graphs}), enabling us to test the performance of our methods on various topologies. \subsection{Datasets} \noindent The three networks we consider are: \begin{itemize} \item[-] Friends \& Families (F \& F). This is a dataset containing phone calls, SMS exchanges, and Bluetooth proximity among a community living in the proximity of a university campus~\cite{aharony2011}. We select the largest connected component of individuals who took part in the experiment during its whole duration. The edges are weighted according to the number of phone calls, SMSs, and Bluetooth contacts. \item[-] Facebook-like Message Exchange (FB)~\cite{opsahl2009}. 
As the individuals included in this dataset were living on the same university campus, the number of messages exchanged is likely to be a good measure of in-person interaction. We selected links on which at least one message was sent in both directions and individuals that had more than $1$ contact. \item[-] California Road Network (CR)~\cite{cal_data}. In order to obtain a single connected component and remove points that effectively represent the same location, we collapsed the points falling within a distance of $2$ km. Moreover, we iteratively deleted all leaves.\footnote{The roads that cross the state border are not completely tracked in this dataset and terminate with a leaf. Some other leaves might represent remote locations, not necessarily close to the borders, but their influence on the epidemic should in any case be very low.} The diameter of this network is very large compared with that of the other two networks. The edges are weighted according to a rescaled version of the real distance (measured in km). \end{itemize} \noindent In all three networks, edges are given (non-unit) integer \textit{weights}, which is realistic in many applications as the expected transmission delays are known only up to some level of precision. Integer weights do \emph{not} simplify the estimation of the source; in fact, this makes it \emph{more} difficult to distinguish between vertices. For example, if the edges of the CR network were weighted according to the Euclidean distance between the two endpoints, \textsc{lv-Obs} would use only a very small portion of the budget and the comparison would not be meaningful. 
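The iterative leaf deletion used to preprocess the CR network amounts to repeatedly removing degree-$1$ nodes; a short sketch (our own code, adjacency sets keyed by node):

```python
def prune_leaves(adj):
    """Iteratively delete degree-1 nodes (the CR preprocessing step);
    `adj` maps each node to the set of its neighbours."""
    adj = {u: set(vs) for u, vs in adj.items()}  # work on a copy
    stack = [u for u in adj if len(adj[u]) == 1]
    while stack:
        u = stack.pop()
        if u not in adj or len(adj[u]) != 1:
            continue
        (v,) = adj.pop(u)                        # u's only neighbour
        adj[v].discard(u)
        if len(adj[v]) == 1:                     # v may have become a leaf
            stack.append(v)
    return adj
```

On a triangle with a pendant path attached, only the triangle survives: pruning one leaf can expose the next, so the deletion must indeed be iterated rather than done in a single pass.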
\begin{table*}[t] \centering \begin{tabular}{l| ccccccccc } \hline & $|V|$ & $|E|$ & $\min(w_{uv})$ & $\mathrm{avg}(w_{uv})$ & $\max(w_{uv})$ & Avg Degree & Diameter & Avg Dist & Avg Clust.\\ \hline Friends \& Families &120 & 563 & 4 & 5.58 & 7 & 9.38 & 6 & 17.5 & 0.67\\ Facebook Messages &1020 & 6205 & 1 & 2.97 & 5 & 12.16 & 5 & 6.69 & 0.09\\ California Roads &1259 & 1801 & 1 & 1.71 & 9 & 2.86 & 66 & 55.3 & 0.2\\ \hline \end{tabular} \caption{Statistics for the networks examined.}\label{tab:graphs} \end{table*} \begin{figure*} \centering \subfigure[CR, 2\% observers]{\includegraphics[width=0.6\columnwidth]{cal_trunc_success_25-eps-converted-to.pdf}} \subfigure[CR, 5\% observers]{\includegraphics[width=0.6\columnwidth]{cal_trunc_success_63-eps-converted-to.pdf}} \subfigure[CR, 9\% observers]{\includegraphics[width=0.6\columnwidth]{cal_trunc_success_118-eps-converted-to.pdf}}\\ \subfigure[FB, 5\% observers]{\includegraphics[width=0.6\columnwidth]{success_59-eps-converted-to.pdf}} \subfigure[F \& F, 5\% observers]{\includegraphics[width=0.6\columnwidth]{ff_success_6_trunc-eps-converted-to.pdf}} \subfigure[F \& F, 10\% observers]{\includegraphics[width=0.6\columnwidth]{ff_success_11_trunc-eps-converted-to.pdf}} \caption{Success probability $\mathcal P_{s}$ as variance $\sigma$ is increased.} \label{fig:CR} \label{fig:ff} \end{figure*} \subsection{Comparison against Benchmarks} \label{sec:exp_bench} \noindent We compare \textsc{hv-Obs} against the following benchmarks: \begin{enumerate} \item \textsc{lv-Obs}: This is our solution for the low-variance case (see Section~\ref{sec:low-variance}). \item \textsc{BC} (Betweenness Centrality): This is a popular method for placing observers for source-localization (see, e.g., \cite{Louni14} and \cite{Seo12}, where it emerges as the best heuristic for observer placement among those tested). 
It consists of the $k$ nodes having the largest BC, which is defined, for all $u \in V$, as $$BC(u) = \sum_{x, y \in V, x \neq y} \frac{\sigma_{x,y}(u)}{\sigma_{x,y}}$$ where $\sigma_{x,y}$ is the number of shortest paths between $x$ and $y$ and $\sigma_{x,y}(u)$ is the number of those paths that pass through $u$. \item Coverage-rate (\textsc{Coverage})~\cite{Zhang2016}: This approach maximizes the number of nodes that have an observer as a neighbor, i.e., $$\mathcal{C}(\mathcal{O}) = |\cup_{o \in \mathcal{O}} N_{o}|/n,$$ where $N_{o}$ denotes the set of neighbors of $o$ and $n=|V|$. It has been shown to outperform several heuristics with a diffusion model and an estimation setting that are very similar to ours. \item \textsc{k-Median}: This is the optimal placement for the closely-related problem of maximizing the detectability of a flow \cite{Berry06}. The \textsc{k-Median} placement is the set of $k$ nodes $\mathcal{O}$ such that \[\mathcal{O} = \mbox{argmin}_{|\mathcal{O}|=k} \sum_{s \in V} (\min_{o \in \mathcal{O}} d(s, o)).\] Determining the \textsc{k-Medians} of a network is NP-hard \cite{Kariv79}; we therefore use a greedy heuristic. \end{enumerate} \subsection{Experimental Results}\label{sec:results} We estimate $\mathcal P_{s}$ and $\mathrm E[d(s^*, \hat{s})]$ for different values of the variance $\sigma$. We generate epidemics by using each node in turn as the source. For the FB and CR datasets, we run $5$ simulations per node and variance level; and for the F \& F dataset, as the network is smaller, we run $20$ simulations per node and variance level. For the FB and CR datasets, we estimate the source based on the first $20$ observations only: Given the large size of the network, it would be unrealistic to wait for the whole network to be infected before running the algorithm. The results for $\mathcal P_{s}$ are displayed in Figure~\ref{fig:CR}. 
An approximation of the value $\sigma_1$, above which \textsc{hv-Obs} outperforms \textsc{lv-Obs}, is marked with a vertical line. For the expected distance (weighted and in hops), see Appendix~\ref{app:figures}. We first take as budget for the observers the minimum budget for which $\mathcal P_{s}($\textsc{lv-Obs}$) = 1$. This corresponds to $k \sim 9 \%$ for the F \& F dataset, $k \sim 9 \%$ for the CR network and $k \sim 5\%$ for the FB dataset. This is the setting in which we expect the improvement of \textsc{hv-Obs} over \textsc{lv-Obs} to be especially strong: For smaller values of $k$, we expect \textsc{lv-Obs} to be nearly optimal even in the high-variance regime, because we do not have enough budget to counteract both the topological \emph{indistinguishability} among nodes (what \textsc{lv-Obs} is designed for) and the accumulation of variance (what \textsc{hv-Obs} is designed for). For the F \& F and the CR networks, we also experiment with smaller percentages of observers and consistently find an improvement of \textsc{hv-Obs} over \textsc{lv-Obs} in the high-variance regime: Below a certain amount of variance $\sigma_1$, \textsc{lv-Obs} performs better than \textsc{hv-Obs} for any choice of the parameter $L$, whereas above $\sigma_1$ a calibrated choice of $L$ leads to a significant improvement. Such $L$ stays constant for all $\sigma>\sigma_1$, i.e., with the notation of Figure~\ref{fig:scheme} we have $\sigma_1=\sigma_F$. For the FB dataset instead, probably due to its low diameter relative to the number of nodes, we observe that \textsc{hv-Obs} does not improve on \textsc{lv-Obs} for any value of $L$. Both \textsc{lv-Obs} and \textsc{hv-Obs} systematically outperform the baseline heuristics for observer placement that we described in Section~\ref{sec:exp_bench}. For the CR dataset, the performance of Betweenness Centrality is particularly poor and the results are not shown.
The Coverage Rate heuristic outperforms Betweenness Centrality on all three networks (confirming the findings of Zhang et al.~\cite{Zhang2016}) but is consistently less effective than \textsc{k-Median} and our methods. \subsection{Robustness} \label{sec:exp_variance_models} To measure the robustness of our approach, we consider an alternate transmission model and measure whether, without making any changes, our observer placement still performs well. For every edge $uv \in E$ with weight $w_{uv}$, we take $X_{uv} \sim \mbox{Unif}([(1-\varepsilon)w_{uv}, (1+\varepsilon)w_{uv}])$. We find comparable results (see Appendix~\ref{app:figures}); they suggest that our observer placement does not depend on the exact transmission model and that the variance of the transmission delays is really a key factor for a good observer placement. \section{Introduction} Regardless of whether a network comprises computers, individuals or cities, in many applications we want to detect whenever any anomalous or malicious activity spreads across the network and, in particular, where the activity originated.\footnote{In effect, we wish to answer questions such as \emph{what was the origin of a worm in a computer network?}, \emph{who was the instigator of a false rumor in a social network?} and \emph{can we identify patient zero of a virulent disease?} } We call the spread of such activity an \emph{epidemic} and the originator the \emph{source}. Clearly, monitoring all nodes is not feasible due to cost and overhead constraints: The number of nodes in the network may be prohibitively large and some of them may be unable or unwilling to provide information about their state. Thus, studies have focused on how to estimate the source based on information from a few nodes (called \emph{observers}). Given a set of observers, many models and estimators for source localization have been developed \cite{Pinto12, Louni14, Zhang2016}.
However, the \emph{selection} of observers has not yet received a satisfactory answer: Most state-of-the-art methods are based on common centrality heuristics (e.g., degree- or betweenness-centrality) or on more advanced heuristic approaches that do not directly optimize source localization (see \cite{Zhang2016} for a survey) or are limited to simple networks such as trees (e.g., \cite{celis15}). Moreover, such methods consider only the structure of the network when placing observers. However, depending on the particular epidemic, the expected transmission delay between two nodes, and its variance, can differ widely. We show that different transmission models require different observer placements; this is illustrated in Figure~\ref{fig:scheme}: as the variance of the transmission delays changes, the optimal set of observers also changes (see also Figure~\ref{fig:ex_transition} for a concrete example). \begin{figure}[H] \begin{center} {\includegraphics[width=0.6\columnwidth]{general.pdf}} \caption{Transmission variance $\sigma$ and optimal observer placement. For ${\sigma \in (0, \sigma_0)}$ the transmission delays are effectively deterministic. For ${\sigma \in (\sigma_0, \sigma_1)}$ the variance $\sigma$ affects the accuracy of source localization but the optimal observer placement is still $\mathcal{O}_0$. For larger $\sigma$, the optimal observer placement may change, possibly multiple times ($\mathcal{O}_k$ denotes the optimal placement for ${\sigma \in (\sigma_k, \sigma_{k+1})}$) up to $\sigma=\sigma_F$. For $\sigma > \sigma_F$ the optimal placement remains the same ($\mathcal{O}_F$).}\label{fig:scheme} \end{center} \end{figure} The difficulties faced in finding the optimal observers are two-fold. First, computing the likelihood of a node being the source conditional on the available observations can be computationally prohibitive \cite{Shah, Pinto12}; evaluating the probability of detection given a set of observers is, in general, even harder.
Second, the optimal selection of a limited number of observers is NP-hard, even when the transmission times are deterministic. We take a principled approach that begins with considering deterministic transmission delays, and we build on this intuition in order to develop heuristics for both the low-variance and high-variance regimes. \subsection{\bf Model and Problem Statement} \textbf{Our Transmission Model.} We assume that the contact network $\mathcal G = (V,E, w)$ is known and is \emph{weighted}. The weight $w_{uv} \in \mathbb R_{+}$ of edge $uv \in E$ is the mean of the \emph{transmission delay} encoded by the random variable $X_{uv}$; this is the time it would take for $u$ to infect $v$.\footnote{For ease of presentation we assume the graph is undirected and $w_{uv}=w_{vu}$; however our definitions and approach extend straightforwardly to the directed case.} This transmission model is both natural and versatile as it comprises deterministic transmissions (i.e., if $X_{uv}=w_{uv} \in \mathbb{R}_{+}$ a.s. for all edges $uv \in E$), which we call \emph{zero-variance}, and arbitrary \emph{random} independent transmission models. It naturally captures the SI epidemic model adopted, e.g., in~\cite{Pinto12, luo2012} and related SIR/SIS/SEIR models (see~\cite{Krishnasamy2014} and the discussion in~\cite{Zhang2016}). We study, in particular, how the \emph{amount} of randomness (i.e., the variance of $X_{uv}$) in the transmission delays affects the choice of observers for source localization. Towards this, we are the first to separately analyze two different regimes for the amount of randomness in transmission delays: \emph{low-variance} and \emph{high-variance}. A dichotomy exists between the two, and our approach for observer placement differs accordingly.
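To make the transmission model concrete: once the delays $X_{uv}$ are sampled, the infection times are given by first-passage percolation, i.e., shortest paths under the sampled delays, which a Dijkstra-style sweep computes. The following is a minimal illustrative sketch (our own names; the truncation at a small positive value is an assumption standing in for the truncated Gaussians used in our experiments):

```python
import heapq
import random

def simulate_epidemic(adj, source, sigma=0.0, seed=0):
    """First-passage SI spread: each edge uv delays by a positive random
    X_uv with mean w_uv; sigma=0 recovers the zero-variance case."""
    rng = random.Random(seed)
    # Sample one delay per undirected edge, truncated to stay positive.
    delay = {}
    for u in adj:
        for v, w in adj[u]:
            e = (min(u, v), max(u, v))
            if e not in delay:
                delay[e] = max(1e-6, rng.gauss(w, sigma))
    # Dijkstra over the sampled delays gives each node's infection time.
    t = {source: 0.0}
    pq = [(0.0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > t.get(u, float("inf")):
            continue
        for v, _ in adj[u]:
            e = (min(u, v), max(u, v))
            nd = d + delay[e]
            if nd < t.get(v, float("inf")):
                t[v] = nd
                heapq.heappush(pq, (nd, v))
    return t

# Toy path 0-1-2 with weights 2 and 3.
adj = {0: [(1, 2.0)], 1: [(0, 2.0), (2, 3.0)], 2: [(1, 3.0)]}
t = simulate_epidemic(adj, source=0, sigma=0.0)
print(t)  # zero variance: infection times equal weighted distances
```

In the zero-variance case this reproduces the weighted distances exactly, which is the starting point of the low-variance analysis below.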
\textbf{Our Source Estimation.} We assume that there is a single source that initiates the epidemic\footnote{ Our results can be extended to the case of multiple sources following the recent work by~\cite{Zhang-res} on a related problem. } and let $\mathcal{O} \subseteq V$ (which we will select) be the set of {observer} nodes. We assume we know the time at which each observer is infected, and refer to this vector of {infection times} as $T_{\mathcal{O}}$. This is a standard (see, e.g.,~\cite{Netrapalli2011}) and realistic assumption (for example, clinics keep records of patients and carefully record outbreaks, so they can provide such information). To identify the source, we use this (and only this) information. We use maximum likelihood estimation (MLE) to produce an estimate $\hat s$ of the true unknown source $s^*$ as in~\cite{Pinto12},\footnote{This approach is common (see, e.g.,~\cite{Shah, dong2013}), although the exact form of the estimator depends on the model and assumptions.} i.e., \begin{equation*} \hat{s}\in \mbox{argmax}_{\substack{s \in V}} \P(T_{\mathcal{O}}|s^*=s)\P(s^*=s). \end{equation*} We assume the prior on $s^*$ is uniform unless otherwise specified (i.e., $\P(s^*=s)=1/n$ for all nodes $s \in V$ where $n = |V|$). \textbf{Our Observer Placement.} We assume that we are given a \emph{budget} $k$ on the number of observers we can use, and that we must select our observers \emph{once and for all}. In order to select the \emph{best set of observers $\mathcal{O}$ of size $k$}, we must first define our metric of interest. We consider the two metrics proposed by~\cite{celis15}, although variations (including worst-case versions) exist~\cite{celis15}: \begin{enumerate} \setlength\itemsep{0em} \item the \emph{success probability} $\mathcal P_{s}=\P(\hat{s} = {s^*})$, and \item the \emph{expected distance} between the estimated source and the real source, i.e., $\mathrm E[d(s^*, \hat{s})]$ with $d$ denoting the distance between two nodes in the network.
\end{enumerate} \noindent The two metrics might require different sets of observers~\cite{celis15}; however, we show experimentally that maximizing $\mathcal P_{s}$ is a good proxy for minimizing $\mathrm E[d(s^*, \hat{s})]$ (see Section~\ref{sec:low-variance}). Hence, due to space constraints, we focus on the maximization of the former. \subsection{\bf Main Contributions} \textbf{Low-Variance Regime.} When the variance in the transmission delays is \emph{low} (see Section~\ref{sec:low-variance}), we prove that the set of optimal observers is equal to the optimal set for the zero-variance regime. In the zero- and low-variance regimes, both the probability of success $\mathcal P_{s}$ and the expected distance $\mathrm E[d(s^*, \hat{s})]$ can be explicitly computed. Despite this seeming simplicity, the problem remains NP-hard. We tackle the problem by using its connection with the well-studied related Double Resolving Set (\emph{DRS}) problem~\cite{Caceres07} that minimizes the number of observers for perfect detection.\footnote{This minimum number is, in many cases, still prohibitively large, and can be as much as $n-1$, hence we cannot use this approach directly.} This connection inspires our algorithm, which greedily improves $\mathcal P_{s}$ by selecting one observer at a time, moving toward a \emph{DRS}, until the budget is exhausted. \textbf{High-Variance Regime.} When the noise in the transmission delays is \emph{high} (see Section~\ref{sec:high-variance}), it is no longer negligible and poses an additional challenge to source localization; in effect, the accumulation of noise from node to node as the epidemic spreads might no longer enable us to distinguish between two potential sources, especially when they are both \emph{far} from all observers.
Hence, we must \emph{strengthen} the requirements for observer placement in order to ensure that the nodes can be distinguished by observers that are \emph{near} to them; this nearness is a function of the noise, the budget $k$, and the network topology. We define a novel objective function that both maximizes the success probability and imposes a \emph{uniform} spread of observers in the network. Taking inspiration from the low-variance regime, we design an algorithm that greedily maximizes this new objective (see Section~\ref{sec:high-variance}). \begin{figure}[H] \begin{center} \subfigure[]{\includegraphics[width=0.48\columnwidth]{graph_2.pdf}} \subfigure[]{\includegraphics[width=0.48\columnwidth]{transition_2-eps-converted-to.pdf}} \caption{Optimal observers for Gaussian transmission delays with variance $\sigma^2$. (a): different observer placements; (b): their performance in terms of probability of success ($\mathcal{P}_s$) for $w=20$ and $30$ edges.}\label{fig:ex_transition} \end{center} \end{figure} \textbf{Empirical Results.} Our methods perform favourably against state-of-the-art approaches in both the low- and high-variance regimes (see Section~\ref{sec:exp_bench}). In Appendix~\ref{app:alternate-obj}, for the low-variance regime, we further compare them against two other natural objective functions; we show that our approach performs best. Moreover, in the empirical results the dichotomy between the low- and high-variance regimes becomes apparent. \section*{Acknowledgements} B. Spinelli was partially supported by the Bill \& Melinda Gates Foundation, under Grant No. OPP1070273. \bibliographystyle{abbrv} \small{ \section{The Low-Variance Regime} \label{sec:low-variance} In this section, we focus on the low-variance regime. We start by introducing the setting and the definitions we adopt. \subsection{Preliminaries} Let $\mathcal G = (V, E)$ be an undirected weighted network.
Assuming $u$ is infected, the weight $w_{uv}$ of edge $uv \in E$ represents the expected time it takes for $u$ to infect $v$. As the network is undirected, we assume $w_{uv} = w_{vu}$ for all $uv \in E$. We assume that the epidemic is initiated by a single unknown source $s^*$ at an unknown time $t^*$. If a node $u$ gets infected at time $t_u$, a non-infected neighbor $v$ of $u$ will become infected at time $t_v = t_u + X_{uv}$, where $X_{uv}$ is a random variable with $\mathrm E[X_{uv}]=w_{uv}$. Because the starting time $t^*$ is unknown, a \emph{single} observation is not \emph{per se} informative, which adds a significant difficulty to the problem; instead, we must use the collection of \emph{differences} between observed infection times. If the variance is zero or if it is low compared to the edge weights, network distances are a good proxy for time delays (see Proposition~\ref{Prop:p_err}). We refer to this setting as a \emph{low-variance} regime, as opposed to the \emph{high-variance} regime in which time delays are highly noisy and network distances no longer work as a proxy for time delays. \textbf{Distance vectors and equivalence between nodes.} We start with a few definitions. Our setting is similar to~\cite{celis15}. \begin{definition}[Equivalence]\label{def:equiv} Let $\mathcal{G}=(V,E)$ and let $\mathcal{O} \subseteq V$ with $|\mathcal{O}|=k \geq 2$ be a set of observers on $\mathcal G$. A node $u$ is said to be equivalent to a node $v$ (which we write $u\sim v$) if and only if, for every $o_i, o_j \in \mathcal{O}$, \begin{equation} \label{eq:distinguish} d(u, o_i) -d(u, o_j) = d(v, o_i) - d(v, o_j), \end{equation} where $d(x, y)$ is the (weighted) distance between $x$ and $y$. \end{definition} \noindent The relation $\sim$ is reflexive, symmetric, and transitive, hence it defines an \emph{equivalence relation}.
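Checking Definition~\ref{def:equiv} is a direct computation on pairwise distances; a small illustrative sketch (the names and toy examples are ours):

```python
from itertools import combinations

def equivalent(d, u, v, observers):
    """u ~ v iff d(u,oi) - d(u,oj) == d(v,oi) - d(v,oj) for all observer pairs.
    `d` maps (node, observer) pairs to the weighted distance."""
    return all(d[u, oi] - d[u, oj] == d[v, oi] - d[v, oj]
               for oi, oj in combinations(observers, 2))

# Path 0-1-2-3-4 (unit weights), observers at both ends: every node is
# distinguishable, since d(u,0) - d(u,4) = 2u - 4 is injective in u.
d = {(u, o): abs(u - o) for u in range(5) for o in (0, 4)}
print(equivalent(d, 1, 3, [0, 4]))  # False: 1 and 3 are distinguishable

# Star with center 0 and leaves 1, 2, 3, observers {0, 1}: the two
# non-observer leaves are interchangeable from the observers' viewpoint.
star = {(u, o): (0 if u == o else (1 if (u == 0 or o == 0) else 2))
        for u in range(4) for o in (0, 1)}
print(equivalent(star, 2, 3, [0, 1]))  # True: same distance differences
```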
Therefore, a set of observers $\mathcal{O}$ partitions $V$ into \emph{equivalence classes} (an example is given in Figure~\ref{fig:example_graph}). We denote by $q$ the number of equivalence classes and we let $[u]_\mathcal{O}$ be the class of $u$, i.e., the set of all nodes that are equivalent to $u$. \begin{figure} \begin{centering} {\includegraphics[width=.5\columnwidth]{equiv-classes.pdf}} \caption{An unweighted network with two observer nodes $o_1$ and $o_2$. Different shapes represent different equivalence classes, i.e., groups of nodes which are not distinguishable from the point of view of the observers (solid red). In this example there are $q=5$ equivalence classes.} \label{fig:example_graph} \end{centering} \end{figure} When the variance is zero, given an observer set, we can \emph{distinguish} $u$ from $v$ if Equation~\eqref{eq:distinguish} does \textit{not} hold for $u,v$ and a pair of observers $o_i,o_j$, i.e., if $[u]_\mathcal{O} \neq [v]_\mathcal{O}$. The problem of finding the minimum-size set of nodes $S$, such that for every $u,v$ there exist $s_i,s_j \in S$ for which $d(u, s_i) -d(u, s_j) \neq d(v, s_i) - d(v, s_j)$, is known as the \emph{Double Resolving Set (DRS) Problem}~\cite{Caceres07}. Our problem differs from \emph{DRS} because we focus on the more realistic context in which, due to limited resources, we want to allocate a \emph{finite budget} in order to maximize the detection probability (as opposed to minimizing the number of observers for perfect detection, which is, in many cases, still prohibitively large). However, the connection between our problem and \emph{DRS} paves the way for a principled approach. We now define a \emph{distance vector} associated with a candidate source, which, as we will prove in Lemma~\ref{lemma:equiv_DRS}, mathematically captures equivalence in a manner that is easy to work with.
\begin{definition}[Distance Vector]\label{distance_vector} Let $\mathcal G = (V,E)$ and let $\mathcal{O} \subseteq V$ with $|\mathcal{O}|=k \geq 2$ be a set of observers on $\mathcal G$. For each candidate source $s$, the distance vector is $\mathbf{d}_{s, \mathcal{O}} \in \mathbb{R}^{k-1}$ with entries $d(s, o_{i+1}) - d(s, o_1)$ for $1 \leq i \leq k-1$. \end{definition} The following lemma, similar in spirit to Lemma 3.1 in~\cite{ChenHW14}, shows that the equality between {distance vectors} of different candidate sources does not depend on the choice of the \textit{reference observer} $o_1$. \begin{lemma}\label{lemma:equiv_DRS} Let $\mathcal G = (V,E)$ and $\mathcal{O} \subseteq V$ with $|\mathcal{O}|=k \geq 2$ and let $u, v \in V$. Then, $[u]_\mathcal{O}$=$[v]_\mathcal{O}$ if and only if $\mathbf{d}_{u, \mathcal{O}} = \mathbf{d}_{v, \mathcal{O}}$, independently of the choice of the reference observer $o_1$ in Definition~\ref{distance_vector}. \end{lemma} \textbf{Estimating the source in the low-variance setting.} We are now ready to describe how we can estimate the source, and define the probability of correct detection in the zero- and low-variance setting, i.e., when $X_{uv} = w_{uv}$ a.s.~for every edge $(u, v)$. For every observer $o_i \in \mathcal{O}$, denote by $t_i$ the time at which $o_i$ gets infected. In the zero-variance setting, the vector of observed infection times of nodes $o_2, \ldots, o_k$ with respect to observer $o_1$, i.e., $\tau \stackrel{{\rm def}}{=} (t_2-t_1, \ldots, t_k-t_1)$, is exactly the distance vector of the \emph{unknown} source $s^*$. Then, if $[u]_\mathcal{O} \neq [v]_\mathcal{O}$ for every $u,v \in V$, the source can always be correctly identified. We will see in Proposition~\ref{Prop:p_err} that this is true also in a more general \textit{low-variance} framework where we are always able to identify the equivalence class to which the real source belongs.
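In code, the zero-variance estimator reduces to matching the observed delay vector $\tau$ against the distance vectors of Definition~\ref{distance_vector} (an illustrative sketch with our own names, not the authors' implementation):

```python
def distance_vector(d, s, observers):
    """Entries d(s, o_{i+1}) - d(s, o_1); o_1 is the reference observer."""
    o1 = observers[0]
    return tuple(d[s, o] - d[s, o1] for o in observers[1:])

def estimate_source_zero_variance(d, nodes, observers, infection_times):
    """Return the candidates whose distance vector matches the observed
    delay vector tau = (t_2 - t_1, ..., t_k - t_1)."""
    t1 = infection_times[observers[0]]
    tau = tuple(infection_times[o] - t1 for o in observers[1:])
    return [s for s in nodes if distance_vector(d, s, observers) == tau]

# Path 0-1-2-3-4, unit weights, observers 0 and 4; source 1 starts at t* = 7.
d = {(u, o): abs(u - o) for u in range(5) for o in (0, 4)}
times = {0: 7 + 1, 4: 7 + 3}  # observed infection times of the observers
print(estimate_source_zero_variance(d, range(5), [0, 4], times))  # [1]
```

Note that the unknown starting time $t^*$ cancels in $\tau$, which is precisely why differences of infection times are used.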
We assume a prior probability distribution on the location of the source to be given, i.e., $Q(u) \stackrel{{\rm def}}{=} \P(s^* = u)$. As we cannot distinguish between vertices inside $[s^*]_\mathcal{O}$ (otherwise they would not be in the same equivalence class), we let our estimated source $\hat s$ be chosen at random from the conditional probability $Q|_{[s^*]_\mathcal{O}}(u)\stackrel{{\rm def}}{=}\P(s^*=u|u \in [s^*]_\mathcal{O})$. Hence the success probability is \vspace{-1mm} \begin{equation}\label{eq:probab_success} \begin{split} \mathcal P_{s}(\mathcal{O}) &\stackrel{{\rm def}}{=} \sum_{s \in V} \P(\hat{s} = s | s^*=s)\P(s^* = s) \\ &= \sum_{s \in V} Q|_{[s]_\mathcal{O}}(s)Q(s) = \sum_{s \in V} \frac{Q(s)}{Q([s]_\mathcal{O})}Q(s), \end{split} \end{equation} \vspace{-1mm} \noindent and is $1$ if all equivalence classes are singletons. In the experimental results in Section~\ref{sec:exp_main} we also look at another relevant metric for the source localization problem, the \textit{expected distance} (weighted or in hops) between the true and estimated source: \begin{equation}\label{eq:exp_dist} \begin{split} \mathrm E[d(s^*, \hat{s})] &\stackrel{{\rm def}}{=} \sum_{s \in V} \P(s^* = s) \sum_{u \in [s]_\mathcal{O}} \P(\hat s = u | s^* = s) d(s,u) \\ &= \sum_{s \in V} \sum_{u \in [s]_\mathcal{O}} \frac{Q(s) Q(u)}{Q([s]_\mathcal{O})} \cdot d(s,u) . \end{split} \end{equation} \noindent Alternative metrics, including worst-case metrics, also exist \cite{celis15} (see Appendix~\ref{app:other_metrics} for some examples). \subsection{Setting} For ease of exposition, we focus on the case in which the prior distribution on the position of the source is uniform, hence $Q(u)=1/n$ for all $u\in V$.\footnote{Our algorithms and observations can be generalized using Equation~\eqref{eq:probab_success} instead of the simpler formula that we now derive for the uniform case.} \begin{proposition} \label{Prop:p_err} Let $\mathcal G=(V,E)$ be a network of size $n$ and $\mathcal{O} \subseteq V$.
Call $\delta=\min_{u, v: \mathbf{d}_{u, \mathcal{O}} \neq \mathbf{d}_{v, \mathcal{O}}}\|\mathbf{d}_{u, \mathcal{O}} - \mathbf{d}_{v, \mathcal{O}}\|_\infty$. Assume a uniform prior $Q(u) = 1/n$ for all $u\in V$ and call $D$ the maximum distance in hops in any shortest path between any node and any observer. \begin{enumerate} \item In the zero-variance case, ${\mathcal P_{s}(\mathcal{O})= q/n}$, where $q$ is the number of equivalence classes for $\mathcal{O}$; \item If the transmissions are such that, for each $uv \in E$, ${X_{uv} \in [w_{uv} - \varepsilon, w_{uv}+\varepsilon]}$, and if we denote by $\mathcal P_{s}^\varepsilon(\mathcal{O})$ the probability of success and define ${\varepsilon_0=\sup\{\varepsilon>0: \mathcal P_{s}^\varepsilon(\mathcal{O}) = \mathcal P_{s}^0(\mathcal{O})\}}$, then $\varepsilon_0 \geq \sfrac{\delta}{2D}$. \end{enumerate} \end{proposition} \begin{proof} \begin{enumerate} \item By definition, $$\mathcal P_{s}(\mathcal{O})=\sum_{[u]_\mathcal{O}}\P(\hat{s} = s^*|s^* \in [u])\P(s^* \in [u]).$$ Hence, $$ \mathcal P_{s}(\mathcal{O}) = \sum_{[u]_\mathcal{O}} \frac{1}{|[u]_\mathcal{O}|} \cdot \frac{|[u]_\mathcal{O}|}{n} = \frac 1 n \sum_{[u]_\mathcal{O} } 1 = \frac{q}{n}.$$ \item Recall that, for $u, v \in V$, $[u]_\mathcal{O} \neq [v]_\mathcal{O}$ if and only if $\mathbf{d}_{u, \mathcal{O}} \neq \mathbf{d}_{v, \mathcal{O}}$. Since $\mathbf{d}_{u, \mathcal{O}} \neq \mathbf{d}_{v, \mathcal{O}}$ implies $\|\mathbf{d}_{u, \mathcal{O}} - \mathbf{d}_{v, \mathcal{O}}\|_\infty \geq \delta$, if $\varepsilon <\sfrac{\delta}{2 D}$, no estimation error is possible between $u, v \in V$ such that $\mathbf{d}_{u, \mathcal{O}} \neq \mathbf{d}_{v, \mathcal{O}}$. Hence $\varepsilon_0 \geq \sfrac{\delta}{2D}$. \end{enumerate} \end{proof} Note that here $\varepsilon_0$ plays the role of $\sigma_0$ in Figure~\ref{fig:scheme}. Indeed, for $\varepsilon<\varepsilon_0$ the variance of the transmission delays does not affect the accuracy of source localization.
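Under a uniform prior, Proposition~\ref{Prop:p_err}(1) turns the computation of $\mathcal P_{s}$ into counting distinct distance vectors; a quick illustrative check (our own toy example):

```python
def success_probability_uniform(d, nodes, observers):
    """Zero-variance P_s = q/n: q = number of distinct distance vectors
    (equivalence classes, by Lemma 1), n candidates, uniform prior."""
    nodes = list(nodes)
    o1 = observers[0]
    vectors = {tuple(d[s, o] - d[s, o1] for o in observers[1:]) for s in nodes}
    return len(vectors) / len(nodes)

# Path 0-1-2-3-4 (unit weights). Observers {0, 4} separate every node,
# while {0, 2} cannot tell nodes 2, 3, 4 apart (q = 3).
d = {(u, o): abs(u - o) for u in range(5) for o in range(5)}
print(success_probability_uniform(d, range(5), [0, 4]))  # 1.0
print(success_probability_uniform(d, range(5), [0, 2]))  # 0.6
```

With observers $\{0,2\}$, nodes $2,3,4$ all lie on the same side of both observers and share one distance vector, so $q=3$ and $\mathcal P_{s}=3/5$.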
If additional conditions on the weights or on the network topology are imposed, more refined bounds for $\varepsilon_0$ can be derived. For example, in a \textit{tree} with integer weights, due to the uniqueness of the path between any two vertices, the minimum distance (in the infinity norm) between two distance vectors is $2$. Hence, in this case, an accumulated variance of less than $1$ can be tolerated and we have $\varepsilon_0 \geq 1/D$. For the remainder of this section, we will assume ${\varepsilon < \delta/(2D)}$, which we call the low-variance case. Independently of the topology of the network $\mathcal G$, the success probability $\mathcal P_{s}$, as well as other possible metrics of interest, can be computed exactly in polynomial time (see, e.g., Equations~\eqref{eq:probab_success} and~\eqref{eq:exp_dist}). In fact, due to Lemma~\ref{lemma:equiv_DRS}, it is enough to compute the distance vector of Definition~\ref{distance_vector} for all the nodes. Nonetheless, if we have a budget $k \geq 2$ of nodes that we can choose as observers, finding the configuration that maximizes $\mathcal P_{s}$ is an NP-hard problem. This is a direct consequence of the hardness result of Chen et al.~\cite{ChenHW14}. \begin{theorem} Let $k \geq 2$ be the budget on the number of nodes we can select as observers. Finding $\mathcal{O} \subseteq V$ such that $\mathcal{O} \in \mbox{argmax}_{|\mathcal{O}|=k}\mathcal P_{s}(\mathcal{O})$ is NP-hard. \end{theorem} \noindent The proof follows straightforwardly with a reduction from the \emph{DRS} problem (see Appendix~\ref{app:hardness}). \subsection{Observer Placement} Our first main contribution in this paper is a solution to the budgeted observer-placement problem.
Our approach, presented in Algorithm~\ref{algo:budget}, is specifically designed for the source localization problem and has a simple greedy structure: for every node $v \in V$, initialize $\mathcal{O}\leftarrow \{v\}$ and iteratively add to $\mathcal{O}$ the node $u$ that maximizes the gain with respect to the success probability until we either run out of budget or reach $\mathcal P_{s}=1$. Proposition~\ref{Prop:p_err} ensures that greedily maximizing the success probability is equivalent to greedily maximizing the number $q$ of equivalence classes. When adding an element to the observer set, the partition in equivalence classes can be updated in linear time, hence the total running time of our algorithm is $O(kn^3)$. Despite bypassing the NP-hardness of the problem, this might not be sufficiently fast for very large graphs. However, the procedure is highly parallelizable and well suited, e.g., for Map-Reduce (see, for example, the main \textbf{for} loop and the $\mathbf{argmax}$ in the \textbf{while} loop). \begin{algorithm} \begin{algorithmic} \caption{(\textsc{lv-Obs}): Observer placement for the low-variance setting.}\label{algo:budget} \Require{Network $G$, budget $k$} \For {$v \in V$} \State $\mathcal{O}_v \leftarrow \{v\}$ \While{$\mathcal P_{s}(\mathcal{O}_v) \neq 1$ \textbf{and} $|\mathcal{O}_v| < k$} \State $u \leftarrow\mbox{argmax}_{z \in V \backslash \mathcal{O}_v} [\mathcal P_{s}(\mathcal{O}_v \cup \{z\}) - \mathcal P_{s}(\mathcal{O}_v)]$ \State $\mathcal{O}_v \leftarrow \mathcal{O}_v \cup \{u\}$ \EndWhile \EndFor \State \Return $\mbox{argmax}_{v \in V} \mathcal P_{s}(\mathcal{O}_v)$ \end{algorithmic} \end{algorithm} The observer placement obtained through Algorithm~\ref{algo:budget} will be denoted \textsc{lv-Obs} to emphasize the fact that it has been designed for the case in which the variance is absent or very small (\textsc{lv} stands for \emph{low-variance} regime).
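A compact rendition of Algorithm~\ref{algo:budget} follows (illustrative Python with our own names; by Proposition~\ref{Prop:p_err}, under a uniform prior maximizing the gain in $\mathcal P_{s}$ is the same as maximizing the gain in the number $q$ of classes, so the sketch greedily maximizes $q$):

```python
def num_classes(d, nodes, observers):
    """q = number of equivalence classes = number of distinct distance vectors."""
    if len(observers) < 2:
        return 1
    o1 = observers[0]
    return len({tuple(d[s, o] - d[s, o1] for o in observers[1:]) for s in nodes})

def lv_obs(d, nodes, k):
    """Greedy budgeted placement: for each seed node, add the observer with
    the largest gain in q until P_s = 1 (q = n) or the budget is spent."""
    nodes = list(nodes)
    n = len(nodes)
    best = []
    for seed in nodes:
        obs = [seed]
        while num_classes(d, nodes, obs) < n and len(obs) < k:
            obs.append(max((z for z in nodes if z not in obs),
                           key=lambda z: num_classes(d, nodes, obs + [z])))
        if num_classes(d, nodes, obs) > num_classes(d, nodes, best):
            best = obs
    return best

# Path 0-1-2-3-4: two observers at the endpoints suffice for perfect detection.
d = {(u, o): abs(u - o) for u in range(5) for o in range(5)}
obs = lv_obs(d, range(5), k=2)
print(num_classes(d, range(5), obs))  # 5 classes, i.e., P_s = q/n = 1
```

This sketch recomputes $q$ from scratch at every step for clarity; the $O(kn^3)$ bound quoted above relies on updating the partition incrementally instead.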
\subsection{Performance} As budgeted observer placement (even in the zero-variance setting) is NP-hard, there is no optimal algorithm to compare against. Instead, we evaluate the performance of our algorithm against a set of natural benchmarks that have been shown to perform well in other works \cite{Seo12, Berry06, Zhang2016} (see Section~\ref{sec:exp_bench} for a discussion of these benchmarks and Figure~\ref{fig:CR} for the results). We further compare against two other natural heuristics that also optimize an objective function greedily. The first is an adapted version of the approximation algorithm for the \emph{DRS} problem proposed by Chen et al.~\cite{ChenHW14} and described in Appendix~\ref{app:DRS_CHW}. By stopping the greedy process after it selects $k$ nodes, we can naturally adapt this approximation algorithm into a heuristic for the budgeted version. The second is a direct minimization of the expected error distance obtained by Equation~\eqref{eq:exp_dist} with $Q(u)=1/n$ for all $u\in V$. Comparing all three approaches, our algorithm outperforms the other two (see Appendix~\ref{app:alternate-obj} for details). \section{Related Work}\label{sec:related_work} The problem of source localization has been widely studied in recent years; we survey the works that are most relevant to ours and refer the reader to the survey by Jiang et al.~\cite{jiang-survey} for a more complete review of the different approaches to source localization. \textbf{Transmission delays.} Many different transmission models for epidemics have been studied~\cite{Lelarge2009} and also considered in the context of source localization.
Although discrete-time transmission delays are common~\cite{luo2013, prakash, Altarelli2014}, in order to better approximate realistic settings, much work (including ours) adopts continuous-time transmission models with varying distributions for the transmission delays; e.g., exponential~\cite{Shah, luo2012} (primarily to capture percolation models such as SI/SIS/SIR) or Gaussian~\cite{Pinto12,Louni14, Louni15, Zhang2016} (to capture delays that are concentrated symmetrically around a value). In line with the latter class of works, we use \emph{truncated} Gaussian variables, which gives us the advantage of ensuring that the model admits only strictly positive infection delays. \textbf{Source localization.} Many approaches~\cite{zheng2015, prakash, sundareisan2015hidden}, beginning with the seminal work by Shah and Zaman~\cite{Shah}, rely on knowing the state of the \emph{entire} network at a fixed point in time $t$; this is often called a \emph{complete observation} of the epidemic. These models use maximum likelihood estimation (MLE) to estimate the source. The results of~\cite{Shah} have been extended in many ways, for example to the case of multiple sources~\cite{luo2012} or to obtain a \emph{local} source estimator~\cite{dong2013}. An alternate line of work considers a complete observation of the epidemic, except that the observed states are \emph{noisy}, i.e., potentially inaccurate~\cite{zhu2013, sundareisan2015hidden}. As assuming the knowledge of the state of all the nodes is often not realistic, \emph{partial observation} settings have also been studied. In such a setting, only a subset of nodes $\mathcal O$ reveal their state. In this line of work, the observers are mainly \emph{given}, either arbitrarily or via a random process, and the problem of \emph{selecting} observers is not addressed.
For example, when a fraction $x$ of nodes are randomly selected, Lokhov et al.~\cite{lokhov2014} propose an algorithm that relies on the knowledge of the state (S, I or R) of a fraction of the nodes in the graph at a given moment in time. This approach, however, crucially relies on the assumption that the starting time of the epidemic is known, which is often not realistic~\cite{jiang-survey, Pinto12}. When the nodes are independently selected to be observers, an approach to source estimation based on the notion of \emph{Jordan center} was proposed~\cite{luo2013} and has since been used in other work for source estimation, especially with regard to a \emph{game theoretic} version of epidemics~\cite{Fanti2015}. This line of work does not assume infection times are known, which we believe is, in many cases, an unnecessary limitation. Indeed, by using infection times we can achieve exact source localization in the zero-variance setting with sufficiently many observers~\cite{ChenHW14}, whereas this is not true otherwise. \textbf{Observer placement.} Natural heuristics for observer placement (e.g., using high-degree vertices or optimizing for distance centrality) were first evaluated under the additional assumption that infected nodes know which neighbor infected them~\cite{Pinto12}. Later, Louni et al.~\cite{Louni14} proposed, for a similar model, to place the observers using a Betweenness-Centrality criterion (which we use as a benchmark, see Section~\ref{sec:exp_bench}), and extended it to noisy observations~\cite{Louni15}. These and other heuristic approaches for observer placement are evaluated empirically by Seo et al.~\cite{Seo12}; they reach the conclusion that, among the placements they evaluate, the Betweenness-Centrality criterion performs best. In their work, the source is estimated by ranking candidates according to their distance to the set of observers, without using the time at which the observers became infected.
Once again, this approach is inherently limited by the fact that it does not make use of the time of infection. The problem of \emph{minimizing} the number of observers required to detect the precise source (as opposed to \emph{maximizing} the performance given a \emph{budget} of observers) has been considered in the zero-variance setting. For trees, given the time at which the epidemic starts, the minimization problem was solved by Zejnilovic et al.~\cite{Zejn13}. Without assuming a tree topology and a known starting time, approximation algorithms have been developed towards this end~\cite{ChenHW14} (still in a zero-variance setting). However, in a network of size $n$, the number of observers required, even if minimized, can be up to $n-1$; hence, a budgeted setting is of greater practical interest. For trees, the budgeted placement of observers was solved by using techniques different from ours~\cite{celis15}. However, these techniques heavily rely on the tree structure of the network and do not seem to be extensible to other topologies. In a very recent work, Zhang et al.~\cite{Zhang2016} consider selecting a fixed number of observers using several heuristics, such as Betweenness-Centrality, Degree-Centrality and Closeness-Centrality, and they show that none of these methods is satisfactory. They introduce a new heuristic for the choice of observers, called \emph{Coverage-Rate}, which is linked to the total number of nodes neighboring observers, and show that an approximate optimization of this metric yields better performance.
Connecting the budgeted placement problem to the un-budgeted minimization problem, we provably outperform their approach in low-variance settings.\footnote{For example, on cycles of odd length $d$ with a budget $k=2$ in the low-variance setting, any two nodes at distance more than $2$ are equivalent with respect to the coverage rate, but the placement is optimal only if the observers are at distance $(d-1)/2$; our approach selects this optimal placement.} Moreover, the effect of the variance in the transmission delays is neglected by Zhang et al., leaving open the question of whether their approach works in general. We nonetheless consider Coverage-Rate as one of our baselines. \textbf{Other related work.} Finally, a closely related problem is that of \emph{outbreak detection}, i.e., detecting the \emph{existence} of an epidemic in a timely manner. In this context, observer placement is a well-studied problem. The optimal solution for timely detection is to place observers at the \emph{k-Median} nodes~\cite{Berry06}; we use this as a benchmark in Section~\ref{sec:exp_bench}. Furthermore, the optimization of several alternate metrics of interest (e.g., the percentage of infected population at the time of detection) is studied in~\cite{leskovec2007,Krause08}.
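For concreteness, the k-Median objective can be stated compactly. The sketch below is our own illustration (not the algorithm of the cited work): it assumes precomputed shortest-path distances and uses exhaustive search, which is feasible only for small graphs.

```python
from itertools import combinations

def k_median_placement(dist, nodes, k):
    """Return a set of k observers minimizing the sum, over all nodes,
    of the hop distance to the closest observer (the k-Median objective).
    `dist[u][v]` holds precomputed shortest-path distances."""
    best, best_cost = None, float("inf")
    for obs in combinations(nodes, k):
        cost = sum(min(dist[u][o] for o in obs) for u in nodes)
        if cost < best_cost:
            best, best_cost = set(obs), cost
    return best, best_cost

# Toy instance: a path graph 0-1-2-3-4, where hop distance is |u - v|.
nodes = list(range(5))
dist = {u: {v: abs(u - v) for v in nodes} for u in nodes}
```

On this path graph, a budget of one observer places it at the middle node, the 1-median, with total distance 6; larger budgets can only lower the cost.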
\section{Introduction} \label{sec:intro} \def\arabic{section}.\arabic{equation}{\arabic{section}.\arabic{equation}} \setcounter{equation}{0} During the last few years a series of theoretical works that combine the Chiral Perturbation Theory (CHPT) expansion \cite{wein,gl} with unitarity in a consistent way \cite{npa,nd,iamdoba,iamprl,hannah,jamin,mixing}, order by order, have brought out the close relationship between the leading order CHPT amplitudes (driven by the chiral structure and the actual value of the order parameter for the spontaneous breakdown of chiral symmetry, $f_\pi$) and the presence, mass and width, together with other properties, of the lightest scalar resonances. These results, concerning the $\sigma$ resonance, have also been confirmed by the solution of the Roy equations \cite{roygas}. Also recently, but from an experimental point of view, charmed decays have emerged as a powerful source for the study of the properties of the scalar resonances, offering experiments with high statistics, which is of foremost importance given the traditional lack of high-statistics experiments in the scalar sector. In this respect, the studies of the Fermilab E791 Collaboration concerning the decays $D^+\rightarrow \pi^-\pi^+\pi^+$ \cite{prld3pi}, 1686 events, and $D^+\rightarrow K^-\pi^+\pi^+$ \cite{prldk2pi}, 15090 events, were the first to point out statistically significant contributions of the $\sigma$ and $\kappa$ resonances, respectively. Other Collaborations on $D$ decays also report with high statistical significance the existence of a $\sigma$ resonance from the $D^0\rightarrow K^0_s\pi^+\pi^-$ decays, like the CLEO Col. \cite{cleo}, $5299\pm 73$ events, the Belle Col. \cite{belle}, 57800 events, and the BaBar Col. \cite{babar}, 81396 events. Thus, we can conclude that nowadays, with the new data from charmed decays, the existence of the $\sigma$ should be taken for granted.
Despite these positive facts concerning the charmed decays, from the theoretical point of view several questions arise regarding the analyses followed in the previous references, particularly in the S-waves. We concentrate in this work on the results of the E791 Collaboration with respect to the $D^+_s\rightarrow \pi^-\pi^+\pi^+$ \cite{prlds3pi}, $D^+\rightarrow \pi^-\pi^+\pi^+$ \cite{prld3pi} and $D^+\rightarrow K^-\pi^+\pi^+$ \cite{prldk2pi} decays. Although the final state is a three-body one, we would expect that, at least at the low energy tail of the invariant mass of the neutral $\pi^+\pi^-$ subsystem, where the $\sigma$ is observed, the interactions of this two-pion subsystem with the other pion should be quite soft due to the largeness of the $D^+$ mass. In this way, the movement of the phase of the $\sigma$ resonance across the Dalitz plot should be given, according to the Watson final-state theorem, by the isospin ($I$) 0 S-wave $\pi\pi$ phase shifts. However, the Breit-Wigner (BW) employed for the $\sigma$ meson in ref.\cite{prld3pi}, as is well known, does not fulfill this property, as we discuss below in more detail. Analogous comments also apply to the $\kappa$ resonance in the $K^-\pi^+\pi^+$ decay \cite{prldk2pi}, for the $K^-\pi^+$ subsystem and the $K\pi$ $I=1/2$ S-wave phase shifts. In addition, controversial properties were also reported by this collaboration regarding the $f_0(980)$ and the $K^*_0(1430)$ resonances. We will develop in this work alternative parameterizations, based on CHPT plus unitarity (the chiral unitary approach \cite{review,kn,nn}), to those employed by the E791 Collaboration, which avoid the mentioned problems with the S-waves while still containing the $\sigma$ and $\kappa$ resonance poles.
We will also discuss, based on the presence of Adler zeros, why large destructive non-resonant contributions (called backgrounds by theorists) that completely distort the peak shapes of the $\sigma$ and $\kappa$ resonances are present in $\pi\pi$ and $K\pi$ S-wave scattering at low energies, respectively. We will also offer a reason why these backgrounds are not present in charmed decays (an argument also applicable to $B$ decays): such zeros occur only at a specific energy in scattering, and no analogous zero is enforced in the decay amplitudes. As a result, we will put on sounder theoretical grounds the important findings of the E791 Collaboration about the relevant role played by the $\sigma$ and $\kappa$ mesons.\footnote{Also called $f_0(600)$ and $K^*_0(800)$ in the PDG \cite{pdg04}, respectively.} Let us stress that in this work we focus on the solution of the previous problems associated with the S-waves, but we do not attempt to account for full three-body unitarity, as this is beyond the scope of this work. The content of this paper is as follows. After this brief introduction, we consider the $D^+\rightarrow \pi^-\pi^+\pi^+$, $D^+_s\rightarrow \pi^-\pi^+\pi^+$ and $D^+\rightarrow K^-\pi^+\pi^+$ decays in sections \ref{sec:3pi}, \ref{sec:ds3pi} and \ref{sec:dk2pi}, respectively. We review the formalism and main findings of refs.\cite{prld3pi,prlds3pi,prldk2pi} regarding these decays, in order, and develop our formalism to take into account final state interactions (FSI) in section \ref{sec:3pi}, applying it to the previous three decays. We also discuss why one should expect clear, although broad, peak structures associated with the $\sigma$ and $\kappa$ resonances in these decays. We end with section \ref{sec:con} pointing out the most relevant results of our study.
\section{FSI in the $D^+\rightarrow \pi^- \pi^+\pi^+$ Decay} \label{sec:3pi} \def\arabic{section}.\arabic{equation}{\arabic{section}.\arabic{equation}} \setcounter{equation}{0} The study of the Dalitz plot of the decay $D^+\rightarrow \pi^- \pi^+\pi^+$, with a sample of 1686 candidates, was performed in ref.\cite{prld3pi} within the Fermilab E791 Collaboration. The estimated signal to background ratio is 2:1, hence we are left with 1124 events, and in the study of this decay we will normalize our results to this number of events. Previous studies with lower statistics, from the E687 Collaboration, can also be found in this reference. In order to introduce the controversial situation regarding the amplitude employed for describing data in ref.\cite{prld3pi}, let us briefly discuss the fitting process followed in ref.\cite{prld3pi}. In this reference, when the decay amplitude is modeled as the coherent sum of well established resonances \cite{pdg04}, following the isobar model, the non-resonant term is dominant while the quality of the fit is poor. Indeed, the $\chi^2$ per degree of freedom, $\chi^2/\nu$, turns out to be 254/162 \cite{prld3pi}. The main discrepancies between data and the parameterized amplitude come, particularly, from the low energy region, below 0.5 GeV$^2$. In order to improve the quality of the fit of the Dalitz plot, the authors of ref.\cite{prld3pi} included another $\pi^+\pi^-$ resonance, corresponding to the $\sigma$, whose mass and width were allowed to float in the fit. The quality of the fit substantially improves, with $\chi^2/\nu=138/162$, and yields values of $478\pm 24$ and $324\pm 42$ MeV for the mass and width of the $\sigma$, respectively. However, the phase of the Breit-Wigner (BW) employed to describe the $\sigma$ resonance does not follow the elastic S-wave $I=0$ $\pi\pi$ phase shifts.
This is shown in Fig.\ref{fig:3pitaylor}a by the dashed line, in comparison with the data points corresponding to the elastic S-wave $I=0$ $\pi\pi$ phase shifts from several experimental references. Indeed, the errors show the variation in the values from one reference to another, see ref.\cite{nd} for more details. The discrepancy is manifest, especially close to threshold, where the variation of the phase for the BW is much faster than that corresponding to data, even though at the energies shown the $\pi\pi$ interaction is elastic, since the $K\bar{K}$ threshold lies above 1 GeV and the $4\pi$ channel only starts contributing in a significant way above around 1.3-1.4 GeV. Denoting by 1 the $\pi^-$ and by 2 and 3 the two equally charged positive pions, the amplitude employed in ref.\cite{prld3pi} can be written as: \begin{equation} {\cal A}=a_0 e^{i\delta_0} {\cal N}_0+\sum_{n=1}^N a_n e^{i\delta_n} {\cal A}_n(s_{12},s_{13}) {\cal N}_n~. \label{expprld3pi} \end{equation} We describe in detail the ingredients present in the previous equation, since we found several errata and omissions in ref.\cite{prld3pi} that are necessary in order to reproduce their amplitudes. In Eq.(\ref{expprld3pi}) the first term is the non-resonant contribution and the other ones originate from the exchange of resonances. Every resonant contribution is Bose symmetrized for the equally charged pions, ${\cal A}_n={\cal A}_n[(12)3]+{\cal A}_n[(13)2]$, as usual. The parentheses around 12 indicate that particles 1 and 2 form the resonant state, and analogously for (13). The coefficients $a_n$ and $\delta_n$, for $n\geq 0$, are real constants that float in the fit; the $a_n$ are magnitudes and the $\delta_n$ phases. Finally, the ${\cal N}_n$, $n\geq 0$, are normalization factors\footnote{Omitted in the description of the amplitudes given in ref.\cite{prld3pi}, but used in their results.} given by: \begin{equation} {\cal N}_n=1/\left(\int ds_{12} ds_{13} |{\cal A}_n(s_{12},s_{13})|^2\right)^{1/2}~.
\label{normalization} \end{equation} The ${\cal A}_n[(12)3](s_{12},s_{13})$ amplitudes for $n>1$ correspond to the product \begin{equation} {\cal A}_n[(12)3](s_{12},s_{13})=BW_n(s_{12}) F_D^{(J)}(s_{12}) F_n^{(J)}(s_{12}) {\cal M}_n^{(J)}[(12)3]~. \end{equation} The Breit-Wigner propagator is given by \begin{eqnarray} BW_n(s_{12})&=&\left(s_{12}-m_n^2+i m_n \Gamma_n(s_{12})\right)^{-1}~,\nonumber\\ \Gamma_n(s_{12})&=& \Gamma_n(m_n^2)\frac{m_n}{\sqrt{s_{12}}} \left(\frac{p_1}{\widetilde{p}_1}\right)^{2 J+1} \frac{F_n^{(J)}(p_1)^2}{F_n^{(J)}(\widetilde{p}_1)^2}~, \label{bwpropa} \end{eqnarray} with $m_n$ and $\Gamma_n(s_{12})$ the mass and ``energy dependent'' width of the (12) resonance and $p_1$ the three-momentum of particle 1 in the (12) rest frame. Furthermore, $\widetilde{p}_1$ is $p_1$ evaluated at the resonance mass. On the other hand, the ${\cal M}_n^{(J)}$ are angular-dependent factors and read: \begin{eqnarray} {\cal M}_n^{(0)}[(12)3]&=&1 ~,\nonumber\\ {\cal M}_n^{(1)}[(12)3]&=&-2 p_1 p_3 \cos\theta_{13}~,\nonumber\\ {\cal M}_n^{(2)}[(12)3]&=&\frac{4}{3} (p_1 p_3)^2(3 \cos^2 \theta_{13}-1)~, \label{angularmn} \end{eqnarray} where $p_1$ and $p_3$ are the three-momenta of particles 1 and 3 and $\theta_{13}$ their relative angle, all of them referred to the (12) rest frame. The $F_D^{(J)}$ and $F_n^{(J)}$ are Blatt-Weisskopf penetration factors that depend on the spin $J$ of the resonant state, and are given by: \begin{eqnarray} F_D^{(0)}&=&1~,\nonumber\\ F_D^{(1)}&=&1/\sqrt{\left(1+(r q_3)^2\right)}~,\nonumber\\ F_D^{(2)}&=&1/\sqrt{\left(9+3 (r q_3)^2+(r q_3)^4 \right)} ~,\nonumber\\ F_n^{(0)}&=&1~,\nonumber\\ F_n^{(1)}&=&1/\sqrt{\left(1+(r p_1)^2\right)}~,\nonumber\\ F_n^{(2)}&=&1/\sqrt{\left(9+3 (r p_1)^2+(r p_1)^4 \right)} ~, \end{eqnarray} with $q_3$ the three-momentum of particle 3 in the $D^+$ rest frame.
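As an orientation for the scalar case ($J=0$, where all Blatt-Weisskopf factors equal 1), the energy dependence of the BW phase in Eq.(\ref{bwpropa}) can be evaluated numerically. The sketch below is our own illustration, taking the E791 central values for the $\sigma$ mass and width as inputs:

```python
import math

M_PI = 0.13957  # charged pion mass (GeV)

def p_cm(s):
    """Pion three-momentum in the two-pion rest frame (GeV)."""
    return math.sqrt(s / 4.0 - M_PI ** 2)

def bw_phase_deg(s, m, gamma0):
    """Phase of the J=0 Breit-Wigner of Eq.(bwpropa) in degrees, with the
    energy-dependent width Gamma(s) = Gamma0 * (m/sqrt(s)) * (p/p_tilde)."""
    gamma = gamma0 * (m / math.sqrt(s)) * (p_cm(s) / p_cm(m ** 2))
    return math.degrees(math.atan2(m * gamma, m ** 2 - s))

m_sig, g_sig = 0.478, 0.324  # E791 sigma mass and width (GeV)
# The phase rises from ~0 at the pi pi threshold through 90 degrees at
# s = m^2, much faster than the measured I=0 S-wave pi pi phase shifts.
```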
The set of resonances exchanged in Eq.(\ref{expprld3pi}) contains the $\rho^0(770)$, $f_0(980)$, $f_2(1270)$, $f_0(1370)$, $\rho^0(1450)$ and, in the parameterization that reproduces faithfully the experimental data, the $\sigma$. As discussed above, the most controversial aspects present in Eq.(\ref{expprld3pi}) involve the $I=0$ S-wave $\pi\pi$ partial wave. \begin{figure}[ht] \psfrag{degrees}{\small Phase shifts (degrees)} \psfrag{G}{\small GeV} \psfrag{absolute}{\small (Absolute value)$^2$} \psfrag{Kpi}{\small $K\pi$ $I=1/2$ S-wave} \centerline{\epsfig{file=3pi.eps,height=3.5in,width=7.0in,angle=0}} \vspace{0.2cm} \caption[pilf]{\protect \small S-wave $I=0$ $\pi\pi$ phase shifts (left panel) and modulus square of this partial wave (right panel), normalized such that the residue of the $\sigma$ pole is one. The data points are the elastic S-wave $I=0$ $\pi\pi$ phase shifts from several experimental references and the errors show the variation in the values from one set to another, as explained in detail in ref.\cite{nd}. The dashed lines correspond to the BW of the $\sigma$ resonance employed by the E791 Collaboration \cite{prld3pi}. The thick solid lines are the full results of ref.\cite{npa} when keeping only the $\pi\pi$ channel while the thin ones correspond to the pure $\sigma$ pole contribution of Eq.(\ref{3pitaylor}). The phase shifts and absolute value from the contribution of the $\sigma$ pole plus the first non-resonant term in Eq.(\ref{3pitaylor}) are shown by the dotted lines. Finally, the dashed-dotted lines are the results from the Laurent expansion of Eq.(\ref{3pitaylor}) keeping all the terms shown. \label{fig:3pitaylor}} \end{figure} Now, we want to show that we are able to reproduce the amplitude given in Eq.(\ref{expprld3pi}) as employed in ref.\cite{prld3pi}, but using the S-wave $I=0$ $\pi\pi$, $K\bar{K}$ coupled channel partial waves derived in ref.\cite{npa}.
These T-matrices were obtained from Chiral Perturbation Theory (CHPT) at leading order \cite{wein,gl} together with a unitarization scheme compatible with the chiral expansion (the chiral unitary approach). Furthermore, these matrices are not only able to reproduce the scattering data of the $I=0,$ 1 S-wave amplitudes up to around 1.2 GeV, but have also been successfully tested by now in a vast number of production processes that pick up large corrections by FSI from these partial waves, see e.g. \cite{gama,fi,jpsi,bdecays}. The S-wave $I=0$ T-matrices from ref.\cite{npa} contain two poles in the appropriate Riemann sheets, corresponding to the $\sigma$ and $f_0(980)$ resonances. Their pole positions are around $448-i\,224$ and $990-i 13$ MeV, respectively. Let us remark that the given poles are obtained in the full T-matrices derived in ref.\cite{npa}, which are not a mere sum of pole contributions. These resonances originate from the self-interactions between the pseudoscalars and disappear in the large $N_c$ limit \cite{nd,ramonet}; this is why they are said to be of dynamical origin. Due to the already mentioned discrepancy between the phase shifts and the phase motion in energy of the BW for the $\sigma$, let us perform a Laurent expansion around the $\sigma$ pole position of the $I=0$ $\pi\pi$ S-wave amplitude, \begin{equation} t_{11}=\frac{\gamma_0^2}{s-s_\sigma}+\gamma_1+\gamma_2(s-s_\sigma)+\ldots~, \label{3pitaylor} \end{equation} with the pole position and residua obtained from ref.\cite{npa} in the elastic case, with the values: \begin{eqnarray} s_\sigma&=&(0.47-i 0.22)^2 \hbox{~GeV}^2~,~\gamma_0^2=5.3+i 7.7 \hbox{~GeV}^2~,\nonumber\\ \gamma_1&=&-8.1+i 36.9 ~,~\gamma_2=1.1+i0.1 \hbox{~GeV}^{-2}~. \label{residua_s} \end{eqnarray} In Fig.\ref{fig:3pitaylor} we show in the left and right panels the phase and normalized absolute value of this partial wave, respectively.
The thinner solid lines correspond to the pole contribution in Eq.(\ref{3pitaylor}). We see in Fig.\ref{fig:3pitaylor}a that the phase of this pole contribution does not vanish at threshold, but it runs parallel to the experimental phase shifts, so that the difference with respect to them remains constant with energy up to the rise in the last points, due to the closeness of the $f_0(980)$ and the opening of the $K\bar{K}$ channel. Thus, the phase of the $\sigma$ pole contribution from Eq.(\ref{3pitaylor}) does follow the motion of the experimental S-wave $I=0$ $\pi\pi$ phase shifts, in contrast with the BW phase, indicated by the dashed line, employed in ref.\cite{prld3pi}. It is also worth stressing that the phase of the pure $\sigma$ pole contribution starts at $-58$ degrees at threshold; this is the reason why its value is not $+90$ degrees at the mass of the $\sigma$, $466$ MeV, that value being reached much later, around 1 GeV. For the same reason, for $s\rightarrow \infty$ one gets only $+122$ degrees from this pole contribution instead of the usual $+180$ degrees. The agreement between the experimental phase shifts and the expansion (\ref{3pitaylor}) is reached rather quickly. The dotted lines in Fig.\ref{fig:3pitaylor} correspond to keeping also the $\gamma_1$ term in Eq.(\ref{3pitaylor}), while the dashed-dotted ones correspond to keeping all the terms shown. The thick solid lines are the full results from ref.\cite{npa} in the elastic case, removing the $K\bar{K}$ channel. In Fig.\ref{fig:3pitaylor}b we consider the absolute value of the partial wave under consideration, but normalized such that the residue at the pole position is one, so that the series in Eq.(\ref{3pitaylor}) is divided by $\gamma_0^2$. The meaning of each line is the same as already explained; let us also notice that this figure starts at 0 GeV, below threshold.
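The threshold value of the pole-contribution phase quoted above can be checked directly from the pole position and residue in Eq.(\ref{residua_s}); the following is our own numerical illustration:

```python
import cmath, math

M_PI = 0.13957                  # charged pion mass (GeV)
s_sigma = (0.47 - 0.22j) ** 2   # sigma pole position (GeV^2), Eq. (residua_s)
gamma0_sq = 5.3 + 7.7j          # residue gamma_0^2 (GeV^2)

def pole_term(s):
    """Pure sigma-pole contribution to t_11 in the Laurent series."""
    return gamma0_sq / (s - s_sigma)

s_thr = 4.0 * M_PI ** 2         # pi pi threshold (GeV^2)
phase_thr = math.degrees(cmath.phase(pole_term(s_thr)))
# phase_thr comes out close to -58 degrees (the small offset reflects the
# rounded inputs): the pole contribution does not vanish at threshold.
```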
We want to stress three important facts: i) The markedly different behaviour of the $\sigma$ BW employed by the E791 Collaboration and of the absolute value of the partial wave of ref.\cite{npa}. ii) The first peak in the BW. This peak results because the BW formula (\ref{bwpropa}) for the $\sigma$ resonance generates an unphysical pole in the physical sheet below threshold, corresponding to a bound state at around 0.2 GeV. iii) The non-resonant contributions from the terms proportional to $\gamma_1$ and $\gamma_2$ in Eq.(\ref{3pitaylor}) are as big as the pole contribution, and we see that the final shape of the absolute value of the amplitude is completely distorted as compared with the pure pole contribution. Indeed, the full results from ref.\cite{npa} show a zero at around 98 MeV. This is the Adler zero due to chiral symmetry, such that in the chiral limit the pseudoscalar interactions vanish at $s=0$. At leading order in CHPT \cite{gl}, this Adler zero sits as well at 98 MeV, very close to the position of the dip in the dotted and dashed-dotted lines and in good agreement with the zero in the total amplitude. In fact, because of the presence of this Adler zero, one can understand why the background turns out to be so big. If there is a pole that strongly affects the low energy region, as in our case with the $\sigma$ resonance, a large background is then necessary to cancel the pole contribution so that an Adler zero can occur.
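The leading-order position of this Adler zero follows from the LO CHPT $I=0$ S-wave amplitude, proportional to $2s-m_\pi^2$ (Weinberg), which vanishes at $s=m_\pi^2/2$; a quick numerical check (our own illustration):

```python
import math

M_PI = 139.57  # charged pion mass (MeV)

# LO CHPT I=0 S-wave pi pi amplitude ~ (2s - m_pi^2)/(32*pi*f_pi^2),
# so the Adler zero sits at s = m_pi^2 / 2, independently of f_pi.
sqrt_s_adler = M_PI / math.sqrt(2.0)
# sqrt_s_adler is about 98.7 MeV, matching the ~98 MeV dip quoted above.
```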
\begin{table} \begin{center} \begin{tabular}{|lrrr|} \hline Resonance & $a_n$ & $\delta_n$ & Fraction \\ & & (radians) & \\ \hline NR & 0.57 & 0.30 & 12$\%$ \\ $\sigma\pi^+$ & 0.10 & 3.40 &79$\%$\\ $\rho^0(770)\pi^+$ & 1 (fixed) & 0 (fixed) & 35$\%$\\ $f_0(980)\pi^+$ & 0.47 & 2.90 & 8$\%$\\ $f_0(1370)\pi^+$ & 0.24 & 2.09 & 2$\%$\\ $f_2(1270)\pi^+$ & 0.79 & 1.05 & 22$\%$\\ $\rho^0(1450)\pi^+$ & 0.20 & 5.30 & 1$\%$\\ \hline $\chi^2/\nu$ & 3/152& & \\ \hline \end{tabular} \caption{\small Results of the reproduction of the parameterization of ref.\cite{prld3pi} summarized in Eq.(\ref{expprld3pi}), replacing the BW of the $\sigma$ by its pole contribution, Eq.(\ref{purespole}). For every resonance we list the resulting magnitude $a_n$ in the second column, the relative phase $\delta_n$ in radians in the third column and the fraction of this decay mode in the fourth one. \label{tab:spole}} \end{center} \end{table} Before making use of the full results of ref.\cite{npa}, let us substitute in Eq.(\ref{expprld3pi}) the BW contribution given in Eq.(\ref{bwpropa}), as employed in ref.\cite{prld3pi}, by the pure $\sigma$ pole contribution, \begin{equation} \frac{a_1 e^{i\delta_1}}{s-s_\sigma}~, \label{purespole} \end{equation} located at the position given in Eq.(\ref{residua_s}). This pole contribution, as discussed above and shown in Fig.\ref{fig:3pitaylor}a, has a phase motion in agreement with that of the S-wave $I=0$ phase shifts. We keep the rest of the terms in Eq.(\ref{expprld3pi}) and fit the $a_i$ and $\delta_i$ so as to reproduce the results from the parameterization employed by E791, with the values for the parameters given by their fit including the $\sigma$. As in ref.\cite{prld3pi}, the magnitude and phase of the $\rho(770)$ vector resonance, $a_n$ and $\delta_n$ respectively, are fixed. In that fit, apart from the $a_i$ and $\delta_i$, the authors also leave as free parameters the mass and width of the $\sigma$.
Notice that we do not include any Blatt-Weisskopf factors in Eq.(\ref{purespole}). In order to reproduce the results of ref.\cite{prld3pi} we fit a Dalitz plot with $20\times 20$ bins (a binning standard in these E791 analyses), normalized to the total number of events once the background is subtracted, namely 1124 events. This Dalitz plot is generated from the parameterization employed in ref.\cite{prld3pi} for the signal only, corresponding to Eq.(\ref{expprld3pi}). The resulting fit is very good, with a low $\chi^2/\nu=3/152$, and the obtained magnitudes and phases are given in Table \ref{tab:spole}. On the other hand, we show in Fig.\ref{fig:3pi_proy} the $s_{12}$ projection by the dashed line, while the results from the parameterization of ref.\cite{prld3pi} correspond to the points. We then conclude that we are able to reproduce the results of the E791 Collaboration, while keeping the constraint that the phase of the $\sigma$ contribution follows the $I=0$ S-wave $\pi\pi$ phase shifts, in a rather straightforward manner. An important difference of our fit in Table \ref{tab:spole} with respect to the second fit of \cite{prld3pi} is the clear dominance of the $\sigma$ pole in our case, 89$\%$, as compared with the previous reference where its fraction, though also dominant, is lower, around 1/2. \begin{figure}[ht] \psfrag{degrees}{\small Events/0.04 (GeV$^2$)} \psfrag{MeV}{\small GeV$^2$} \centerline{\epsfig{file=3pi_proy_sd.ps,height=6.5in,width=3.0in,angle=-90}} \vspace{0.2cm} \caption[pilf]{\protect \small $m^2(\pi\pi)$ projections with ``data'' points from Eq.(\ref{expprld3pi}) of ref.\cite{prld3pi}. Our results are given by the dashed line, with the pole contribution, Eq.(\ref{purespole}), instead of the BW for the $\sigma$ of ref.\cite{prld3pi}. The solid line corresponds to treating the S-wave FSI following Eq.(\ref{unisum3}). \label{fig:3pi_proy}} \end{figure} Now, let us consider the full results of ref.\cite{npa}.
In this reference the partial waves are written as a product of two matrices (for the general case of coupled channels) as \begin{equation} T= \left[I+N\cdot g\right]^{-1}\cdot N~, \label{npatmatrix} \end{equation} where $\pi\pi$ is channel 1 and $K\bar{K}$ is channel 2, in such a way that e.g. $T_{11}$ is the $I=0$ elastic $\pi\pi$ S-wave amplitude and so on. The diagonal matrix $g(s)$ corresponds to the unitarity bubbles of every channel and the matrix $N$ is determined by matching the previous expression with the chiral series at leading order, although the structure of Eq.(\ref{npatmatrix}) is valid to any order. At lowest order $N$ is the matrix of leading order CHPT amplitudes \cite{npa}. Now, the important point for us is that, given a general vertex $N_{ij}$, projected in a certain partial wave and connecting channels $i$ and $j$, the summation of the unitarity bubbles just amounts to multiplying the matrix $N$ by the inverse of the matrix $\left[I+N\cdot g\right]$, as follows from Eq.(\ref{npatmatrix}). Thus, if the final channel $\ell$ is produced from the initial channel $k$ we have: \begin{equation} \label{unisum} T_{\ell k}=\sum_{j}\left[I+N\cdot g\right]^{-1}_{\ell j} N_{jk} ~, \end{equation} where $j$ runs over all the possible intermediate states, which are also produced from channel $k$ with the original amplitude $N_{jk}$. Now we want to study a completely analogous situation, where channel $\ell$ is produced from a given production process starting with the $D^+$ meson. Then, if we denote by $\xi_j$ the original amplitude for producing channel $j$, the resummation of the unitarity bubbles must be given in exactly the same way as in Eq.(\ref{unisum}),\footnote{The analytical properties of $N_{ij}$ and $\xi_j$, in general terms, are the same, that is, they are analytical functions except for the presence of cuts and poles.} so that: \begin{equation} \label{unisum2} \xi_\ell \rightarrow \sum_j \left[I+N\cdot g\right]^{-1}_{\ell j} \xi_{j} ~.
\end{equation} In ref.\cite{npa} the $\pi\pi$ and $K\bar{K}$ channels were considered for the study of the meson-meson $I=0,~1$ S-waves. The $\sigma$ and $f_0(980)$ appear as poles in the second Riemann sheet of the resulting partial waves. Thus, instead of considering Eq.(\ref{expprld3pi}), which includes explicitly the BW's associated with these resonances \cite{prld3pi}, we apply Eq.(\ref{unisum2}) to correct for final state interactions. In this way, if we denote $D=\left[I+N\cdot g\right]$, we will have: \begin{equation} \label{unisum3} \left[D^{-1}_{11}(s_{12})+D^{-1}_{11}(s_{13})\right] a_{\pi\pi} e^{i\delta_{\pi\pi}}+ \left[D^{-1}_{12}(s_{12})+D^{-1}_{12}(s_{13})\right]a_{K\bar{K}}e^{i\delta_{K\bar{K}}}~, \end{equation} in place of the contributions of the $\sigma$ and $f_0(980)$. Notice that we do not include in (\ref{unisum3}) any form factor through Blatt-Weisskopf terms $F^{(0)}_n$ and $F_D^{(0)}$ for the two scalar resonances. The rest of the contributions, NR plus the ones from the $f_0(1370)$ and the vector and tensor resonances in Table \ref{tab:spole}, are the same as in ref.\cite{prld3pi}, although their associated parameters $a_n$ and $\delta_n$ are fitted again. As before, we fit a Dalitz plot with $20\times 20$ bins generated from ref.\cite{prld3pi} and normalized to the same number of total events. The resulting $\chi^2/\nu=2/152$ is of the same good quality as before, indicating an accurate reproduction of the amplitudes of ref.\cite{prld3pi}. In Table \ref{tab:sdmatrix} we show the values of the parameters that we have fitted and in Fig.\ref{fig:3pi_proy} we give by the solid line the energy projections of any neutral $\pi\pi$ subsystem.
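The structure of Eqs.(\ref{npatmatrix}) and (\ref{unisum2}) automatically implements Watson's theorem in the elastic case: the whole phase of the corrected production amplitude comes from $D^{-1}$, hence it coincides with the phase of the scattering partial wave. A toy single-channel sketch (our own illustration, with arbitrary values assumed for $N$, $g$ and $\xi$):

```python
import cmath

# Toy single-channel example: N real (like a LO CHPT vertex below
# inelastic thresholds), g the unitarity bubble with Im g < 0, and
# xi a real production vertex for the D decay.
N, g, xi = 2.3, -0.11 - 0.35j, 0.7

D = 1.0 + N * g        # D = I + N.g, reduced to one channel
T = N / D              # scattering amplitude, Eq. (npatmatrix)
xi_fsi = xi / D        # FSI-corrected production amplitude, Eq. (unisum2)

# Watson's theorem: both amplitudes carry the same phase,
# since N and xi are real and the phase comes entirely from 1/D.
```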
\begin{table}[ht] \begin{center} \begin{tabular}{|lrrr|} \hline Resonance & $a_n$ & $\delta_n$ & Fraction \\ & & (radians) & \\ \hline NR & 0.70 & $-0.43$ & 17$\%$ \\ $(\pi\pi)\pi^+$ & 0.31 & 1.27 & 102$\%$\\ $(K\bar{K})\pi^+$ & 0.11 & $-0.39$ & 6$\%$\\ $\rho^0(770)\pi^+$ & 1 (fixed) & 0 (fixed) & 36$\%$\\ $f_0(1370)\pi^+$ & 0.31 & 1.99 & 3$\%$\\ $f_2(1270)\pi^+$ & 0.77 & 1.01 & 21$\%$\\ $\rho^0(1450)\pi^+$ & 0.20 & 5.48 & 1$\%$\\ \hline $\chi^2/\nu$ & 2/152& & \\ \hline \end{tabular} \caption{\small Results of the reproduction of the parameterization of ref.\cite{prld3pi} summarized in Eq.(\ref{expprld3pi}), employing Eq.(\ref{unisum3}). For each intermediate state we list the resulting magnitude $a_n$ in the second column, the relative phase $\delta_n$ in radians in the third column and the fraction of this decay mode in the fourth one. \label{tab:sdmatrix}} \end{center} \end{table} As a result, we see that we are able to reproduce the signal function of ref.\cite{prld3pi} rather accurately and, at the same time, to establish that the scattering amplitudes which drive the final state interaction corrections in $D^+\rightarrow \pi^- \pi^+\pi^+$ are the same as those determined from scattering data and other production processes. The same conclusion is obtained in ref.\cite{focus}. This reference performs a K-matrix fit to data from $D^+$ and $D_s^+$ decays to $3\pi$, although the resulting fits have significantly lower confidence levels than those of the E791 Collaboration that we reproduce. Regarding the $\sigma$ resonance, we see that it exhibits the same behaviour in the $D$ decays as in the rest of known processes, although it is excluded in ref.\cite{focus}.
It is worth mentioning that the K-matrix employed in ref.\cite{focus} does not meet the chiral requirement of a soft expansion at low energies and, in particular, it does not fulfill the constraints imposed by the chiral power counting, which requires the $\pi\pi$ scattering amplitude to start at order $p^2$, instead of order $p^0$ as in \cite{focus}. This raises serious doubts about the applicability of the results from the K-matrix employed in ref.\cite{focus} regarding the existence of the $\sigma$ resonance, since meaningful structures are required in order to extrapolate the T-matrix in the complex energy plane away from the physical real axis, where it has been tested. Before ending this section, let us discuss why the $\sigma$ resonance is more visible in the $D$ decays than in $\pi\pi$ scattering. One important point is the presence of the huge background (those terms proportional to $\gamma_1$, $\gamma_2$, etc. in Eq.(\ref{3pitaylor})) in the $I=0$ S-wave $\pi\pi$ partial wave, as discussed at the beginning of this section. There we found out that this background is required in order to preserve the Adler zero in the presence of a light $\sigma$ resonance, so that a cancellation can occur close to threshold. In this way, the standard pole-like structure of a resonance, e.g. that of the $\rho$ resonance, is completely destroyed. The question now is why this background does not appear in $D$ decays (nor in $B$ decays \cite{bdecays}). The main point was already discussed in the second entry of ref.\cite{bdecays}, and arguments in this direction were also put forward in ref.\cite{bugg}. The $D$ or $B$ mesons can be identified with pseudoscalar sources coupled directly to a pion, while the other two pions can be thought to couple just to a scalar source.
As a direct application of the CHPT power counting one realizes that this coupling is not suppressed by any power of momentum or quark mass, and there is then a priori no reason why the $\sigma$ meson should be screened by such large backgrounds as occur in the scattering case. Indeed, this is a result from Eq.(\ref{unisum3}), and can be seen by just making a Laurent expansion of the $D^{-1}$ matrix around the $\sigma$ pole and then checking the absence of a significant background, contrary to the scattering case. Certainly the scalar form factor of two pseudoscalars is not renormalization group invariant, but this just amounts to a global quark-mass multiplicative factor and does not induce any energy dependence that could distort the pole structure of the resonance, contrary to scattering, where the Adler zero occurs at a specific energy. \section{FSI in the $D^+_s\rightarrow \pi^- \pi^+\pi^+$ Decay} \label{sec:ds3pi} \def\arabic{section}.\arabic{equation}{\arabic{section}.\arabic{equation}} \setcounter{equation}{0} In this section we consider the decay $D^+_s\rightarrow \pi^- \pi^+\pi^+$, where the $f_0(980)$ resonance plays a central role, as clearly seen in Fig.\ref{fig:ds_proy}. The ``data'' points correspond to the signal function of the E791 Collaboration, given by the formula analogous to Eq.(\ref{expprld3pi}) but applied to this other decay. The set of intermediate resonances considered in ref.\cite{prlds3pi} contains the $f_0(980)$, $\rho^0(770)$, $f_2(1270)$, $f_0(1370)$ and the $\rho^0(1450)$. In the fit performed in ref.\cite{prlds3pi} the mass and couplings of the $f_0(980)$ were allowed to float, with an energy dependent width given by: \begin{equation} \Gamma_{f_0}(s)=g_\pi \sqrt{s/4-m_\pi^2}+g_K \frac{1}{2}\left(\sqrt{s/4-m_{K^+}^2}+\sqrt{s/4-m_{K^0}^2}\right)~, \end{equation} to be substituted in the expression for $BW_n(s_{12})$ in Eq.(\ref{bwpropa}).
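As a side remark, below the $K\bar K$ threshold the kaon term of $\Gamma_{f_0}(s)$ becomes imaginary, which is the usual analytic continuation of a Flatt\'e-like width. A minimal numerical sketch of the formula above (the couplings are placeholder values chosen for illustration, not the E791 fit results):

```python
import cmath

M_PI = 0.1396   # charged pion mass (GeV)
M_KP = 0.4937   # K+ mass (GeV)
M_K0 = 0.4977   # K0 mass (GeV)

def gamma_f0(s, g_pi, g_k):
    """Energy-dependent f0(980) width Gamma_f0(s) of the text.
    cmath.sqrt keeps the kaon term well defined below the K Kbar
    threshold, where it becomes imaginary (analytic continuation)."""
    q_pi = cmath.sqrt(s / 4 - M_PI ** 2)
    q_k = 0.5 * (cmath.sqrt(s / 4 - M_KP ** 2) + cmath.sqrt(s / 4 - M_K0 ** 2))
    return g_pi * q_pi + g_k * q_k

# Hypothetical couplings, for illustration only (not the E791 fit values).
g_pi, g_k = 0.2, 0.1
below = gamma_f0(0.90 ** 2, g_pi, g_k)   # below the K Kbar threshold
above = gamma_f0(1.05 ** 2, g_pi, g_k)   # above the K Kbar threshold

print(below.imag != 0.0, above.imag == 0.0)
```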
The striking fact is the small value of the coupling $g_K$ in the fit of ref.\cite{prlds3pi}, compatible with zero, whereas the coupling $g_\pi$ is much larger. This result is very puzzling because it is well known that the $f_0(980)$ has a great affinity to couple to strangeness sources, as seen e.g.\ in $\phi$ decays \cite{fi} or in the $J/\Psi\rightarrow \phi \pi\pi$ decay \cite{jpsi}. We show that we can reproduce the amplitude for the signal of ref.\cite{prlds3pi} making use of the amplitudes of ref.\cite{npa}, where the $f_0(980)$ has more standard properties regarding its couplings. We employ Eq.(\ref{expprld3pi}) of ref.\cite{prlds3pi} but, as in the previous section, we remove the $f_0(980)$ BW contribution of \cite{prlds3pi} and take into account the FSI by employing Eq.(\ref{unisum3}), in terms of the $I=0$ S-wave $\pi\pi$ and $K\bar{K}$ states. In order to reproduce the amplitudes of ref.\cite{prlds3pi} we proceed as in section \ref{sec:3pi}. For that we fit a Dalitz plot with $20\times 20$ bins, normalized to 625 events. The Dalitz plot is generated from Eq.(\ref{expprld3pi}) with the parameters given in ref.\cite{prlds3pi}. We have performed two types of fits, given in Table \ref{tab:ds3pi}. In one of them the mass and width of the $f_0(1370)$ are fixed to the values of ref.\cite{prlds3pi}, while in the other, as in that reference, we let their values float. The former corresponds to the second and third columns of Table \ref{tab:ds3pi} and the latter to the last two columns. Furthermore, in Fig.\ref{fig:ds_proy} we show the energy projection of any neutral $\pi\pi$ subsystem; the solid line refers to the former fit and the dashed line to the latter. We see that the reproduction of the signal function of ref.\cite{prlds3pi} is very good in both cases.
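The binned fit just described can be sketched schematically. The following fragment uses a toy one-dimensional intensity standing in for the $20\times 20$ Dalitz-plot binning; the function shapes and the Poisson-error choice are illustrative assumptions, not the actual amplitudes:

```python
# Sketch of a binned chi2 fit: bin the model intensity, normalize it to the
# total number of events (625 here), and compare bin by bin with the "data"
# histogram, taking Poisson variances from the data bin contents.
# Toy 1-D intensities stand in for the real Dalitz-plot densities.

def bin_and_normalize(intensity, n_bins, n_events):
    raw = [intensity(k / n_bins) for k in range(n_bins)]
    norm = n_events / sum(raw)
    return [norm * r for r in raw]

def chi2(data_bins, model_bins):
    # Poisson variance approximated by the "data" bin contents.
    return sum((d - m) ** 2 / d for d, m in zip(data_bins, model_bins) if d > 0)

signal = bin_and_normalize(lambda x: 1.0 + x, 20, 625)        # "data"
model = bin_and_normalize(lambda x: 1.0 + 0.98 * x, 20, 625)  # trial fit

c2 = chi2(signal, model)
print(c2 >= 0.0)
```

In the actual fits the free parameters ($a_n$, $\delta_n$, and optionally resonance masses and widths) are varied until this $\chi^2$ is minimal.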
In connection with the aforementioned unexpected values of the couplings of the $f_0(980)$ as determined in ref.\cite{prlds3pi}, we refer the interested reader to ref.\cite{mixing} for a detailed study of the masses and couplings of the lightest scalar resonances ($\sigma$, $\kappa$, $f_0(980)$ and $a_0(980)$), showing that they obey rather accurately a standard SU(3) analysis and constitute the lightest scalar nonet. Other interesting studies of the $f_0(980)$ in $D_s$ and $D$ decays are refs.\cite{navarra,nielsen}. \begin{table}[H] \begin{center} \begin{tabular}{|l|rr||rr|} \hline Resonance & $a_n$ & $\delta_n$ & $a_n$ & $\delta_n$ \\ & Fraction & (radians) & Fraction & (radians) \\ \hline NR & 0.40 & 0.16 & 0.40 & $-0.24$ \\ & 13$\%$ & & 14$\%$ & \\ $(\pi\pi)\pi^+$ & 0.28 & 2.36 & 0.25 & 2.23\\ & 6$\%$ & & 5$\%$& \\ $(K\bar{K})\pi^+$ & 1(fixed) & 0(fixed)& 1(fixed) & 0(fixed)\\ & 78$\%$ & & 84$\%$& \\ $\rho^0(770)\pi^+$ & 0.24 & 0.14 & 0.22 & 0.19 \\ & 4$\%$ & & 4$\%$& \\ $f_0(1370)\pi^+$ & 0.60 & 1.68 & 0.57& 2.01\\ & 28$\%$ & &28$\%$ & \\ $f_2(1270)\pi^+$ & 0.50 & 0.39 &0.48 &0.41 \\ & 20$\%$ & &19$\%$ & \\ $\rho^0(1450)\pi^+$ & 0.25 & 0.67 &0.24 & 0.87\\ & 5$\%$ & & 5$\%$& \\ \hline $\chi^2/\nu$ & 11/142 & & 8.5/140& \\ \hline \end{tabular} \caption{\small Results of the reproduction of the parameterization of ref.\cite{prlds3pi} summarized in Eq.(\ref{expprld3pi}) for the $D^+_s\rightarrow \pi^-\pi^+\pi^+$. For each intermediate state we list the resulting magnitude $a_n$ in the second column, the relative phase $\delta_n$ in radians in the third column and the fraction of this decay mode just below the value of every $a_n$; analogously for the fourth and fifth columns. The second and third columns correspond to the fit with the mass and width of the $f_0(1370)$ fixed to the values of ref.\cite{prlds3pi}, while in the fourth and fifth columns these values are allowed to float in the fit.
In the latter case, $m_{f_0(1370)}=1.46$ GeV and $\Gamma_{f_0(1370)}=0.16$ GeV. \label{tab:ds3pi}} \end{center} \end{table} \begin{figure}[ht] \psfrag{degrees}{\small Events/0.07 (GeV$^2$)} \psfrag{MeV}{\small GeV$^2$} \centerline{\epsfig{file=ds_proy.ps,height=6.5in,width=3.0in,angle=-90}} \vspace{0.2cm} \caption[pilf]{\protect \small $m^2(\pi\pi)$ projections for the ``data'' \cite{prlds3pi} and our results, solid and dashed lines, with the FSI given by Eq.(\ref{unisum2}). In the solid line the mass and width of the $f_0(1370)$ are fixed to the values of ref.\cite{prlds3pi}, second and third columns of Table \ref{tab:ds3pi}. In the dashed line the mass and width of the $f_0(1370)$ are allowed to float in the fit, fourth and fifth columns of Table \ref{tab:ds3pi}. \label{fig:ds_proy}} \end{figure} \section{FSI in the $D^+\rightarrow K^- \pi^+\pi^+$ Decay} \label{sec:dk2pi} \def\arabic{section}.\arabic{equation}{\arabic{section}.\arabic{equation}} \setcounter{equation}{0} The study of the high statistics Dalitz plot of the decay $D^+\rightarrow K^- \pi^+\pi^+$, with a sample of 15090 events and an estimated background of 6$\%$, was performed in ref.\cite{prldk2pi} by the E791 Collaboration at Fermilab. In order to introduce the controversial situation regarding the amplitude employed to describe the data in ref.\cite{prldk2pi}, let us briefly discuss the fitting process followed in this reference. When the amplitude, following the isobar model, is written as the coherent sum of well established resonances, as quoted by the Particle Data Group \cite{pdg04}, the description of the data is poor. Indeed, the $\chi^2$ per degree of freedom found in ref.\cite{prldk2pi} is $\chi^2/\nu=167/63$. The main discrepancies between the data and the parameterized amplitude come, particularly, from the low energy region, below 0.6 GeV$^2$, and from higher energies at around 2.5 GeV$^2$.
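To gauge how poor $\chi^2/\nu=167/63$ is compared with the improved fits discussed next, one can use the large-$\nu$ normal approximation, in which $\chi^2/\nu$ fluctuates around 1 with standard deviation $\sqrt{2/\nu}$. A small sketch of this rough diagnostic (not an exact $p$-value):

```python
import math

def reduced_chi2_pull(chi2, ndf):
    """How many standard deviations the reduced chi2 lies from 1,
    using the large-ndf normal approximation sigma = sqrt(2/ndf).
    A rough diagnostic only, not an exact p-value."""
    return (chi2 / ndf - 1.0) / math.sqrt(2.0 / ndf)

# E791 fits of the text: without vs with the kappa resonance.
pull_without_kappa = reduced_chi2_pull(167.0, 63)
pull_with_kappa = reduced_chi2_pull(46.0, 63)

print(round(pull_without_kappa, 1), round(pull_with_kappa, 1))  # 9.3 -1.5
```

A nine-standard-deviation excess signals a clearly inadequate model, while the fit including the $\kappa$ lands well within the expected range.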
In order to improve the quality of the fit to the Dalitz plot, the authors of ref.\cite{prldk2pi} allowed the mass and width of the scalar resonance $K^*_0(1430)$ to float in the fit and also included Gaussian scalar form factors \`a la Tornqvist, with the meson radii as additional free parameters. Despite these additions, the $\chi^2/\nu$ is still around 2. Finally, the authors of ref.\cite{prldk2pi}, together with the previous additions, also included the $\kappa$ resonance, referred to as $K^*_0(800)$ in the last edition of the PDG \cite{pdg04} but qualified as controversial. The quality of the fit improves substantially, with $\chi^2/\nu=46/63$. However, the situation is far from being settled since: i) the width of the well known $K^*_0(1430)$, allowed to float as discussed above, turned out to be a factor of two lower than its clearly determined value in scattering experiments and other production processes; ii) the Breit-Wigner employed to describe the $\kappa$ resonance does not follow the elastic S-wave $I=1/2$ $K\pi$ phase shifts. This is shown in Fig.\ref{fig:kpitaylor}, where the points are the elastic S-wave $I=1/2$ $K\pi$ phase shifts from \cite{estabrooks} and the dashed line corresponds to the phase of the relativistic BW of the $\kappa$. The discrepancy is manifest, especially close to threshold, where the variation of the BW phase is much faster than that of the data. Let us remember that S-wave $I=1/2$ $K\pi$ scattering is elastic below around 1.3 GeV, since both the $K\eta$ and $K\pi\pi$ channels are negligible below that energy, as clearly shown both experimentally \cite{estabrooks,aston} and theoretically \cite{jamin}; hence only the elastic $K\pi$ channel matters in this energy region and the discrepancy cannot be due to inelastic effects.
Denoting by 1 the $K^-$ and by 2 and 3 the two identical pions, the amplitude {\it employed} in ref.\cite{prldk2pi} can be written as: \begin{equation} {\cal A}=a_0 e^{i\delta_0} {\cal N}_0+\sum_{n=1}^N a_n e^{i\delta_n} {\cal A}_n(s_{12},s_{13}) {\cal N}_n~. \label{expprldk2pi} \end{equation} The different ingredients contained in the previous equation were discussed in detail in section \ref{sec:3pi}. Here we only mention that in ref.\cite{prldk2pi} the BW expression (\ref{bwpropa}) has the opposite sign. The set of resonances exchanged in Eq.(\ref{expprldk2pi}) contains the $K^*_1(892)$, $K^*_0(1430)$, $K_2^*(1430)$, $K^*_1(1680)$ and, in the parameterization that reproduces the experimental data faithfully, the $\kappa$ or $K^*_0(800)$. As discussed above, the most controversial aspects present in Eq.(\ref{expprldk2pi}) involve the $I=1/2$ S-wave $K\pi$ partial wave. Now we want to show that we are able to reproduce the amplitude given in Eq.(\ref{expprldk2pi}), as employed in ref.\cite{prldk2pi}, but using the S-wave $I=1/2$ $K\pi$, $K\eta$ and $K\eta'$ coupled channel partial waves derived in ref.\cite{jamin}. These T-matrices are obtained from Chiral Perturbation Theory (CHPT) \cite{wein,gl} at next-to-leading order, supplemented with the exchange of explicit resonance fields in a chirally invariant manner \cite{rafael}, plus a unitarization scheme compatible with the chiral expansion (the chiral unitary approach). Furthermore, additional constraints from large $N_c$ QCD are considered as well in order to restrict the number of free parameters; for further details we refer to refs.\cite{jamin,jaminff}. These T-matrices provide an accurate reproduction of the $K\pi$ S-wave amplitude and of the $I=1/2$ and $I=3/2$ S-wave phase shifts \cite{estabrooks,aston} up to around 2 GeV.
Later on, they were employed to calculate through dispersion relations the strangeness-changing scalar form factors \cite{jaminff}, the light quark masses within QCD scalar sum rules \cite{jaminms} and a crucial counterterm needed in present precise studies of $K_{\ell 3}$ decays \cite{jaminckm}. The S-wave $I=1/2$ T-matrices from ref.\cite{jamin} contain three poles in the appropriate Riemann sheets, corresponding to the $\kappa$, the $K^*_0(1430)$ and the $K^*_0(1950)$. Their pole positions are around $708-i\, 305$, $1450-i\,142$ and $1910-i\,27$ MeV, respectively.\footnote{These pole positions vary slightly depending on the fit taken from ref.\cite{jamin}, although we have presented the values from the so-called preferred fit, which behaves in the most appropriate way when considering scalar form factors and QCD sum rules.} Let us concentrate on the first two resonances, which are those relevant for the $D^+$ decay into $K^-\pi^+\pi^+$, since the maximum value of the total center of mass energy of the subsystems (12) or (13) is $m_{D^+}-m_{\pi^+}= 1.73$ GeV, clearly below the influence of the $K^*_0(1950)$ pole. Let us remark that the given pole positions are obtained as a result of the full T-matrices derived in ref.\cite{jamin}, which are not just a sum of pole contributions, although these can dominate in some energy regions. The value for the width of the well established and clearly seen $K^*_0(1430)$ resonance is perfectly compatible with that quoted by the PDG \cite{pdg04}, $294\pm 23$ MeV,\footnote{Remember that twice minus the imaginary part of the pole position corresponds to the width of a resonance. This is indeed a possible and unambiguous definition of the width of a resonance.} while that from the fit of the E791 Collaboration \cite{prldk2pi}, $175\pm 12 \pm 12$ MeV, is almost a factor of two lower.
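The width quoted in the footnote follows directly from the pole position, $\Gamma=-2\,\mbox{Im}\sqrt{s_{\rm pole}}$. Applying this definition to the pole positions listed above reproduces the numbers discussed in the text:

```python
# Pole positions sqrt(s_pole) = M - i*Gamma/2 quoted in the text (MeV),
# from the T-matrices of the chiral unitary approach.
poles = {
    "kappa": complex(708, -305),
    "K*0(1430)": complex(1450, -142),
    "K*0(1950)": complex(1910, -27),
}

# Width = twice minus the imaginary part of the pole position.
widths = {name: -2.0 * p.imag for name, p in poles.items()}

print(widths["kappa"], widths["K*0(1430)"])  # 610.0 284.0
```

The resulting 610 MeV for the $\kappa$ and 284 MeV for the $K^*_0(1430)$ match the values discussed in the text, the latter being compatible with the PDG range $294\pm 23$ MeV.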
In addition, the mass obtained in ref.\cite{prldk2pi} for the $K^*_0(1430)$ is out of the range given in the PDG as well. We also mention that the width of the $\kappa$ given by the E791 Collaboration, around 400 MeV, is substantially lower than the value from the pole position of ref.\cite{jamin}, around 600 MeV. \begin{figure}[ht] \psfrag{degrees}{\small Phase shifts (degrees)} \psfrag{gev}{\small GeV} \psfrag{abs}{\small (Absolute value)$^2$} \psfrag{Kpi}{\small $K\pi$ $I=1/2$ S-wave} \centerline{\epsfig{file=i12dk.eps,height=3.5in,width=7.0in,angle=0}} \vspace{0.2cm} \caption[pilf]{\protect \small S-wave $I=1/2$ $K\pi$ phase shifts (left panel) and modulus square of this partial wave, normalized such that the residue at the $\kappa$ pole is one (right panel). The data points are from ref.\cite{estabrooks}. The dashed lines correspond to the BW for the $\kappa$ resonance employed in ref.\cite{prldk2pi}. The solid lines correspond to the pure $\kappa$ pole contribution from Eq.(\ref{kpitaylor}). The results from the contribution of the $\kappa$ pole plus the first non-resonant term in Eq.(\ref{kpitaylor}) are shown by the dotted lines. Finally, the dashed-dotted lines are the results from the Laurent expansion of Eq.(\ref{kpitaylor}) keeping all the terms shown. \label{fig:kpitaylor}} \end{figure} Due to the already mentioned discrepancy between the phase motion of the BW for the $\kappa$ from the E791 Collaboration and the experimental phase shifts, let us perform a Laurent expansion of the $I=1/2$ $K\pi$ S-wave amplitude around the $\kappa$ pole position, \begin{equation} t_{11}=\frac{\gamma_0^2}{s-s_\kappa}+\gamma_1+\gamma_2(s-s_\kappa)+\ldots~, \label{kpitaylor} \end{equation} with the pole position and residues obtained from ref.\cite{jamin}: \begin{eqnarray} s_\kappa&=&(0.71-i\, 0.31)^2 \hbox{~GeV}^2~,~\gamma_0^2=19.0+i\, 5.9 \hbox{~GeV}^2~,\nonumber\\ \gamma_1&=&12.5+i\, 43.1 ~,~\gamma_2=0.8+i\,7.3 \hbox{~GeV}^{-2}~.
\label{taylork} \end{eqnarray} In Fig.\ref{fig:kpitaylor} we show in the left and right panels the phase and the normalized absolute value of this partial wave, respectively. The thinner solid lines correspond to the pure pole contribution in Eq.(\ref{kpitaylor}). We see in Fig.\ref{fig:kpitaylor}a that the phase of this pole contribution does not vanish at threshold, but runs parallel to the experimental phase shifts, so that the difference with respect to them remains constant with energy up to the rise in the last points, due to the closeness of the $K^*_0(1430)$ and the opening of the $K\eta'$ channel. Thus the phase of the pure pole contribution from Eq.(\ref{kpitaylor}) does follow the motion of the experimental S-wave $I=1/2$ $K\pi$ phase shifts, in contrast with the BW phase indicated by the dashed line. It is also worth realizing that the phase of the pure $\kappa$ pole contribution starts at $-90$ degrees at threshold, and this is the reason why it does not reach $+90$ degrees at the mass of the $\kappa$ resonance, $708$ MeV, but only at much higher energies. For the same reason, for $s\rightarrow \infty$ one gets only $+90$ degrees from this pole contribution. The agreement between the experimental phase shifts and the expansion (\ref{kpitaylor}) is reached rather quickly. The dotted lines correspond to keeping also the $\gamma_1$ term, while the dashed-dotted ones correspond to keeping all the terms shown in this equation. The thick solid lines are the full results of ref.\cite{jamin}. In Fig.\ref{fig:kpitaylor}b we consider the absolute value of the partial wave under consideration, normalized such that the residue at the pole position is one, so that we divide the series in Eq.(\ref{kpitaylor}) by $\gamma_0^2$. The meaning of each line is the same as already explained; notice that this figure starts at around 0.3 GeV, below threshold.
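One can check numerically, from the values in Eq.(\ref{taylork}), that near the $K\pi$ threshold the constant background $\gamma_1$ is as large as the pole term itself, which is what permits the near cancellation discussed in the text. A minimal sketch (meson masses rounded to their PDG values):

```python
import cmath

# Laurent-expansion parameters of Eqs.(kpitaylor)/(taylork), GeV units.
s_kappa = (0.71 - 0.31j) ** 2
gamma0_sq = 19.0 + 5.9j
gamma1 = 12.5 + 43.1j

# K pi threshold: s_th = (m_K + m_pi)^2.
s_th = (0.4937 + 0.1396) ** 2

pole_term = gamma0_sq / (s_th - s_kappa)

# Near threshold the "background" gamma1 is as large as the pole term,
# allowing a destructive interference that produces the Adler zero.
ratio = abs(gamma1) / abs(pole_term)
print(round(ratio, 2))
```

The ratio comes out very close to one, confirming that the non-resonant terms are of the same size as the $\kappa$ pole contribution in this region.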
We want to stress three important facts: i) the very different behaviour of the $\kappa$ BW employed by the E791 Collaboration and the absolute value of the partial wave of ref.\cite{jamin}; ii) the first peak in the BW. This peak results because the BW formula (\ref{bwpropa}) for the $\kappa$ resonance generates an unphysical pole on the physical sheet below threshold, at $0.46 \pm i\, 0.11$ GeV, although it has a negligible impact above threshold; iii) the non-resonant contributions proportional to $\gamma_1$ and $\gamma_2$ in Eq.(\ref{kpitaylor}) are as big as the pole contribution, and we see that the final shape of the absolute value of the amplitude is completely distorted as compared with the pure pole contribution. Indeed, they almost cancel each other at around 0.48 GeV, giving rise to a zero in the full amplitude, as clearly seen in the figure. This is an Adler zero due to chiral symmetry, such that in the chiral limit the pseudoscalar interactions vanish at $s=0$. At leading order in CHPT \cite{gl}, this Adler zero sits at 0.48 GeV, very close to the position of the dip in the dotted and dashed-dotted lines and even closer to the zero in the total amplitude, which also occurs at around 0.48 GeV. In fact, because of the presence of this Adler zero one can understand why the background turns out to be so big, in the same way as explained in section \ref{sec:3pi} for the $\sigma$ case. That is, if there is a pole that largely affects the low energy region, then a large background is necessary to cancel the pole contribution so that the Adler zero can occur. Before making use of the full results of ref.\cite{jamin}, let us replace in Eq.(\ref{expprldk2pi}) the BW contribution given in Eq.(\ref{bwpropa}), as employed in ref.\cite{prldk2pi}, by the $\kappa$ pole contribution, \begin{equation} \frac{a_1 e^{i\delta_1}}{s-s_\kappa}~, \label{purekpole} \end{equation} located at the position given in Eq.(\ref{taylork}).
This pole contribution, as discussed above and shown in Fig.\ref{fig:kpitaylor}a, has a phase motion in agreement with that of the S-wave $I=1/2$ phase shifts. We keep the rest of the terms in Eq.(\ref{expprldk2pi}) and fit the $a_i$ and $\delta_i$ so as to reproduce the results from the parameterization employed by E791, Eq.(\ref{expprldk2pi}), with the values for the parameters given by their fit C. As in ref.\cite{prldk2pi}, the magnitude and phase of the $K^*_1(892)$ vector resonance, $a_n$ and $\delta_n$, are fixed to 1 and 0, respectively. In that fit, apart from the $a_i$ and $\delta_i$, the authors also leave as free parameters the masses and widths of the $\kappa$ and $K^*_0(1430)$, as well as the meson radii that appear in the form factors. Notice that we include neither Blatt-Weisskopf factors nor Gaussian form factors in Eq.(\ref{purekpole}). In order to reproduce the results of ref.\cite{prldk2pi} we fit a Dalitz plot with 20$\times $20 bins, normalized to the total number of events once the background is subtracted, namely 28369 events (the charge conjugate decay is also included). This Dalitz plot is generated from the parameterization employed in ref.\cite{prldk2pi} for the signal function, corresponding to Eq.(\ref{expprldk2pi}). The resulting fit is very good, with a low $\chi^2/\nu=6.5/132$. The values of the resulting magnitudes and phases are given in Table \ref{tab:kpole}. On the other hand, we show in Fig.\ref{fig:i12_kpole_proy} the $s_{12}$ projection by the solid line, while the results from the parameterization of ref.\cite{prldk2pi} correspond to the points. The dashed line is the so-called high projection ($s_{12}>s_{13}$) and the dotted line corresponds to the low projection ($s_{12}<s_{13}$). We then conclude that we are able to reproduce the results of E791 while, at the same time, the phase of the $\kappa$ contribution follows the $I=1/2$ S-wave $K\pi$ phase shifts.
\begin{table}[ht] \begin{center} \begin{tabular}{|lrrr|} \hline Resonance & $a_n$ & $\delta_n$ & Fraction \\ & & (radians) & \\ \hline NR & 2.10 & $-5.95$ & 53.3$\%$ \\ $\kappa\pi^+$ & 0.30 & $-0.63$ & 56.7$\%$\\ $K^*_0(1430)\pi^+$ & 1.00 & 0.92 & 12.2$\%$\\ $K^*_1(892)\pi^+$ & 1 (fixed) & 0 (fixed) & 12.1$\%$\\ $K^*_2(1430)\pi^+$ & 0.17 & $-0.76$ & 0.4$\%$\\ $K^*_1(1680)\pi^+$ & 0.45 & 0.58 & 2.5$\%$\\ \hline $\chi^2/\nu$ & 6.5/132 & & \\ \hline \end{tabular} \caption{\small Results of the reproduction of the parameterization of ref.\cite{prldk2pi} summarized in Eq.(\ref{expprldk2pi}), replacing the BW of the $\kappa$ with its pole contribution, Eq.(\ref{purekpole}). For each resonance we list the resulting magnitude $a_n$ in the second column, the relative phase $\delta_n$ in radians in the third column and the fraction of this decay mode in the fourth one. \label{tab:kpole}} \end{center} \end{table} \begin{figure}[ht] \psfrag{degrees}{\small Events/0.04 (GeV$^2$)} \psfrag{MeV}{\small GeV$^2$} \centerline{\epsfig{file=i12_kpole_proy.ps,height=6.5in,width=3.0in,angle=-90}} \vspace{0.2cm} \caption[pilf]{\protect \small $m^2(K\pi)$ projections for the ``data'' \cite{prldk2pi} and our results with the pole contribution in Eq.(\ref{expprldk2pi}) instead of the BW for the $\kappa$ of ref.\cite{prldk2pi}. The solid, dashed and dotted lines correspond to our results for the projections $m^2(K\pi)$, $m^2(K\pi)_{low}$ and $m^2(K\pi)_{high}$, respectively. \label{fig:i12_kpole_proy}} \end{figure} We notice in Table \ref{tab:kpole} that the so-called NR contribution plays a more important role: it has a fraction of around $50\%$, while in fit C of the E791 Collaboration it amounts to just 13$\%$. Now, let us consider the full results of ref.\cite{jamin} in order to take into account the FSI employing Eq.(\ref{unisum3}).
In ref.\cite{jamin} the partial waves are written as a product of two matrices (for the general coupled channel case) as $T= \left[I+N\cdot g\right]^{-1}\cdot N$, such that e.g.\ $T_{11}$ is the $I=1/2$ elastic $K\pi$ S-wave. The diagonal $g(s)$ matrix corresponds to the unitarity bubbles of each channel and the matrix $N$ is determined by matching the previous expression with the chiral series of CHPT plus resonances in the $U(3)$ case; for more details see ref.\cite{jamin}. In this reference the $K\pi$, $K\eta$ and $K\eta'$ channels are considered, hence we rewrite Eq.(\ref{unisum3}) for this particular case as, \begin{eqnarray} \label{unisum3b} &&\left[D^{-1}_{11}(s_{12})+D^{-1}_{11}(s_{13})\right]a_{K\pi}\, e^{i\delta_{K\pi}} +\left[D^{-1}_{12}(s_{12})+D^{-1}_{12}(s_{13})\right]a_{K\eta}\,e^{i\delta_{K\eta}}\nonumber\\ &&+\left[D^{-1}_{13}(s_{12})+D^{-1}_{13}(s_{13})\right]a_{K\eta'}\,e^{i\delta_{K\eta'}}~. \end{eqnarray} Let us stress that, since the $\kappa$ and $K^*_0(1430)$ appear as poles in the second Riemann sheet of the partial waves of ref.\cite{jamin}, they are already taken into account when considering Eq.(\ref{unisum3b}). Notice that for the two scalar resonances we include in Eq.(\ref{unisum3b}) neither the form factors introduced in Eq.(\ref{expprldk2pi}) through the Blatt-Weisskopf terms $F^{(0)}_n$ and $F_D^{(0)}$ nor Gaussian form factors. Furthermore, the $K^*_0(1430)$ resonance of ref.\cite{jamin} has a mass and a width in complete agreement with the values in the PDG \cite{pdg04}, which are clearly determined from the study of $K\pi$ scattering \cite{estabrooks,aston}. The rest of the contributions present in Eq.(\ref{expprldk2pi}), namely the NR one plus those from the vector and tensor resonances in Table \ref{tab:kpole}, are kept, although the parameters $a_n$ and $\delta_n$ are fitted again so as to reproduce the results of the signal parameterization of the E791 Collaboration.
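The construction $T=\left[I+N\cdot g\right]^{-1}\cdot N$ can be illustrated with a small numerical sketch. The kernel $N$ and loop functions $g$ below are toy numbers for a hypothetical two-channel system, not the fitted quantities of ref.\cite{jamin}; the check uses the algebraically equivalent identity $T=N-N\,g\,T$, obtained by multiplying out $(I+Ng)T=N$:

```python
# Coupled-channel construction T = (I + N g)^{-1} N for a toy 2x2 system.
# We verify the equivalent identity T = N - N g T.

def mat_mul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_add(a, b, sign=1):
    n = len(a)
    return [[a[i][j] + sign * b[i][j] for j in range(n)] for i in range(n)]

def inv2(m):  # inverse of a 2x2 complex matrix
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    return [[m[1][1] / det, -m[0][1] / det],
            [-m[1][0] / det, m[0][0] / det]]

I = [[1.0, 0.0], [0.0, 1.0]]
N = [[2.0 + 0.0j, 0.5], [0.5, 1.2]]           # symmetric interaction kernel (toy)
g = [[-0.1 + 0.3j, 0.0], [0.0, -0.2 + 0.1j]]  # diagonal unitarity loop functions (toy)

T = mat_mul(inv2(mat_add(I, mat_mul(N, g))), N)
rhs = mat_add(N, mat_mul(mat_mul(N, g), T), sign=-1)

err = max(abs(T[i][j] - rhs[i][j]) for i in range(2) for j in range(2))
print(err < 1e-12)
```

The imaginary parts of $g$ are what generate the unitarity cuts; in the actual approach $N$ is matched to the chiral series, which the toy numbers above do not attempt.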
As in the previous cases, we fit a Dalitz plot with $20\times 20$ bins generated from ref.\cite{prldk2pi} and normalized to the total number of events. The resulting $\chi^2/\nu=127/128$ is acceptable, indicating a rather good reproduction of the amplitudes of ref.\cite{prldk2pi}. In Table \ref{tab:kdmatrix} we show the values of the parameters that we have fitted and in Fig.\ref{fig:kpi_dmatrix_proy} we show the energy projections of any neutral $K\pi$ subsystem. \begin{figure}[ht] \psfrag{degrees}{\small Events/0.04 (GeV$^2$)} \psfrag{MeV}{\small GeV$^2$} \centerline{\epsfig{file=kpi_dmatrix_eta_proy.ps,height=6.5in,width=3.0in,angle=-90}} \vspace{0.2cm} \caption[pilf]{\protect \small $m^2(K\pi)$ projections for the ``data'' \cite{prldk2pi} and our results taking into account Eq.(\ref{unisum3b}). The solid, dashed and dotted lines correspond to our results for the projections $m^2(K\pi)$, $m^2(K\pi)_{low}$ and $m^2(K\pi)_{high}$, respectively. The dashed-dotted line refers to the fit when the $(K\eta)\pi^+$ channel is excluded, Eq.(\ref{unisum4}). \label{fig:kpi_dmatrix_proy}} \end{figure} \begin{table}[H] \begin{center} \begin{tabular}{|lrrr|} \hline Resonance & $a_n$ & $\delta_n$ & Fraction \\ & & (radians) & \\ \hline NR & 1.60 & 0.10 & 29.6$\%$ \\ $(K\pi)\pi^+$ & 1.66 & 4.10 & 31.8$\%$\\ $(K\eta)\pi^+$ & 0.86 & 2.63 & 2.0$\%$\\ $(K\eta')\pi^+$ & 2.33 & $-1.54$ & 9.8$\%$\\ $K^*_1(892)\pi^+$ & 1 (fixed) & 0 (fixed) & 11.6$\%$\\ $K^*_2(1430)\pi^+$ & 0.11 & $-0.62$ & 0.2$\%$\\ $K^*_1(1680)\pi^+$ & 0.72 & 0.80 & 5.9$\%$\\ \hline $\chi^2/\nu$ & 127/128 & & \\ \hline \end{tabular} \caption{\small Results of the reproduction of the parameterization of ref.\cite{prldk2pi} summarized in Eq.(\ref{expprldk2pi}), employing Eq.(\ref{unisum3b}). For each intermediate state we list the resulting magnitude $a_n$ in the second column, the relative phase $\delta_n$ in radians in the third column and the fraction of this decay mode in the fourth one.
For this fit we also leave as free parameters the poorly measured mass and width of the $K^*_1(1680)$; their values are given in the text. \label{tab:kdmatrix}} \end{center} \end{table} As a result we see that we are able to reproduce the signal function of ref.\cite{prldk2pi} rather accurately and, at the same time, we are able to establish that the scattering amplitudes driving the final state interaction corrections in $D^+\rightarrow K^- \pi^+\pi^+$ are compatible with those determined from scattering data \cite{estabrooks,aston}. In particular, the $\kappa$ resonance is present in both, and the mass and width of the $K^*_0(1430)$ do not differ between the two cases either. On the other hand, in the previous fit we have allowed the mass and width of the $K^*_1(1680)$ to float, since they are poorly measured. In the PDG \cite{pdg04} the reported values range from 1.7 to 1.8 GeV for the mass and from 0.17 to 0.40 GeV for the width. The values that we obtain in the fit to the parameterization of ref.\cite{prldk2pi}, which employs the central values given in the PDG, are $m=1.7$ GeV and $\Gamma=0.17$ GeV. Although we have already included the $K\eta$ channel in the fit of Table \ref{tab:kdmatrix}, since this channel has little effect on $K\pi$ scattering, we check the stability of this fit by removing the $K\eta$ channel. Then, instead of Eq.(\ref{unisum3b}) we now consider, \begin{equation} \label{unisum4} \left[D^{-1}_{11}(s_{12})+D^{-1}_{11}(s_{13})\right]a_{K\pi} e^{i\delta_{K\pi}} +\left[D^{-1}_{13}(s_{12})+D^{-1}_{13}(s_{13})\right]a_{K\eta'}e^{i\delta_{K\eta'}}~. \end{equation} The resulting $\chi^2/\nu=144/130$ is a bit larger than the previous one, and the values of the parameters in the fit are rather similar, so that we refrain from presenting them. Nonetheless, we show the event projection for this case in Fig.\ref{fig:kpi_dmatrix_proy} by the dashed-dotted line.
On the other hand, the resulting mass and width of the $K^*_1(1680)$ are 1.7 GeV and 0.13 GeV, respectively. Thus, we conclude that our results are rather stable under the presence or removal of the $K\eta$ channel, although the resulting $\chi^2$ is somewhat lower when this channel is also considered. Finally, the same reasons advocated at the end of section \ref{sec:3pi} to explain why the $\sigma$ meson is clearly visible in $D^+$ decays to three pions can also be applied here to the $\kappa$ in the $D^+\rightarrow K^-\pi^+\pi^+$ decay.\footnote{As explained at the end of section \ref{sec:3pi}, but applied now to our case, if we make a Laurent expansion of $D^{-1}$ in Eq.(\ref{unisum3b}) around the $\kappa$ pole one checks that there is no significant background and $D^{-1}$ is driven by the $\kappa$ pole contribution from threshold up to about 1 GeV, where the influence of the $K\eta'$ channel and that of the $K^*_0(1430)$ resonance starts. Thus, the $\kappa$ resonance pole structure is not distorted, in contrast to what happens in scattering.} \section{Conclusions} \label{sec:con} \def\arabic{section}.\arabic{equation}{\arabic{section}.\arabic{equation}} \setcounter{equation}{0} We have considered in detail the FSI in the $D^+\rightarrow \pi^-\pi^+\pi^+$, $D^+_s\rightarrow \pi^-\pi^+\pi^+$ and $D^+\rightarrow K^- \pi^+\pi^+$ decays driven by the S-waves. The Dalitz plots associated with these decays were studied originally in refs.\cite{prld3pi,prlds3pi,prldk2pi}, showing for the first time statistically significant evidence for the existence of the $\sigma$ and $\kappa$ resonances. Here we have paid special attention to those aspects that remained controversial after those works, with the aim of improving the theoretical basis of the parameterizations employed in these references.
In particular, we have shown that the meson-meson S-waves with $I=0,~ 1/2$ that drive the corresponding FSI in the previous decays are compatible with the amplitudes determined from studies of scattering data \cite{npa,nd,jamin}, also tested in many other production processes, e.g.\ \cite{gama,fi,jpsi,bdecays}. In particular, we have shown that the phase motion of the low energy FSI in the $D^+\rightarrow \pi^-\pi^+\pi^+$ and $D^+\rightarrow K^- \pi^+\pi^+$ decays follows the elastic $I=0$ $\pi\pi$ and $I=1/2$ $K\pi$ S-wave phase shifts, respectively, and that this is compatible with the presence of the $\sigma$ and $\kappa$ resonances. We have seen that the reason for the disagreement between the phases associated with these resonances in the studies of refs.\cite{prld3pi,prldk2pi} and the experimental phase shifts is the employment by these authors of Breit-Wigner propagators. Once the BW propagator is substituted by the pure pole contributions from Eqs.(\ref{3pitaylor}) and (\ref{kpitaylor}), for the $\sigma$ and $\kappa$ respectively, the agreement is restored. Indeed, we have also shown that these pole contributions are not affected by significant backgrounds in the expansion of $D^{-1}$, see Eqs.(\ref{unisum3}) and (\ref{unisum3b}), while huge destructive backgrounds are present in scattering. In this way the pole contributions are not distorted in the studied $D_s$ and $D$ decays, while they are in scattering, as shown in Figs.\ref{fig:3pitaylor} and \ref{fig:kpitaylor}. Furthermore, we have also considered the full results for the S-wave FSI from refs.\cite{npa,jamin} through Eqs.(\ref{unisum3}) and (\ref{unisum3b}). In this way we have taken into account simultaneously the interplay between the $\sigma$ and the $f_0(980)$ for $I=0$, Eq.(\ref{unisum3}), and that between the $\kappa$ and the $K^*_0(1430)$ for $I=1/2$, Eq.(\ref{unisum3b}).
This is also important since in ref.\cite{prlds3pi} the couplings of the $f_0(980)$ have astonishing properties, while in ref.\cite{prldk2pi} the mass of the $K^*_0(1430)$ resonance is out of the range given by the PDG \cite{pdg04} and its width is smaller by almost a factor of 2. Thus, our results show that one can also understand these FSI with ``standard'' $f_0(980)$ and $K^*_0(1430)$ properties. \vspace{1cm} \noindent {\bf Acknowledgments} I would like to thank Ignacio Bediaga for fruitful and stimulating discussions, together with a long exchange of emails. I would also like to thank Isabel Guillam\'on, who participated in an early stage of this research. I also thank Alberto C. dos Rios and Carla G\"obel for kindly sharing their work with me. Financial support from the CICYT (Spain) Grants No. FPA2002-03265 and FPA2004-03470 and from the EU-RTN Programme ``EURIDICE'', Contract No. HPRN-CT-2002-00311, is acknowledged. \medskip
\section{The ALICE Collaboration} \begingroup \small \begin{flushleft} J.~Adam\Irefn{org40}\And D.~Adamov\'{a}\Irefn{org86}\And M.M.~Aggarwal\Irefn{org90}\And G.~Aglieri Rinella\Irefn{org36}\And M.~Agnello\Irefn{org32}\textsuperscript{,}\Irefn{org112}\And N.~Agrawal\Irefn{org49}\And Z.~Ahammed\Irefn{org135}\And S.~Ahmad\Irefn{org19}\And S.U.~Ahn\Irefn{org70}\And S.~Aiola\Irefn{org139}\And A.~Akindinov\Irefn{org60}\And S.N.~Alam\Irefn{org135}\And D.S.D.~Albuquerque\Irefn{org123}\And D.~Aleksandrov\Irefn{org82}\And B.~Alessandro\Irefn{org112}\And D.~Alexandre\Irefn{org103}\And R.~Alfaro Molina\Irefn{org66}\And A.~Alici\Irefn{org12}\textsuperscript{,}\Irefn{org106}\And A.~Alkin\Irefn{org3}\And J.~Alme\Irefn{org18}\textsuperscript{,}\Irefn{org38}\And T.~Alt\Irefn{org43}\And S.~Altinpinar\Irefn{org18}\And I.~Altsybeev\Irefn{org134}\And C.~Alves Garcia Prado\Irefn{org122}\And M.~An\Irefn{org7}\And C.~Andrei\Irefn{org80}\And H.A.~Andrews\Irefn{org103}\And A.~Andronic\Irefn{org99}\And V.~Anguelov\Irefn{org96}\And T.~Anti\v{c}i\'{c}\Irefn{org100}\And F.~Antinori\Irefn{org109}\And P.~Antonioli\Irefn{org106}\And L.~Aphecetche\Irefn{org115}\And H.~Appelsh\"{a}user\Irefn{org55}\And S.~Arcelli\Irefn{org27}\And R.~Arnaldi\Irefn{org112}\And O.W.~Arnold\Irefn{org37}\textsuperscript{,}\Irefn{org95}\And I.C.~Arsene\Irefn{org22}\And M.~Arslandok\Irefn{org55}\And B.~Audurier\Irefn{org115}\And A.~Augustinus\Irefn{org36}\And R.~Averbeck\Irefn{org99}\And M.D.~Azmi\Irefn{org19}\And A.~Badal\`{a}\Irefn{org108}\And Y.W.~Baek\Irefn{org69}\And S.~Bagnasco\Irefn{org112}\And R.~Bailhache\Irefn{org55}\And R.~Bala\Irefn{org93}\And S.~Balasubramanian\Irefn{org139}\And A.~Baldisseri\Irefn{org15}\And R.C.~Baral\Irefn{org63}\And A.M.~Barbano\Irefn{org26}\And R.~Barbera\Irefn{org28}\And F.~Barile\Irefn{org33}\And G.G.~Barnaf\"{o}ldi\Irefn{org138}\And L.S.~Barnby\Irefn{org103}\textsuperscript{,}\Irefn{org36}\And V.~Barret\Irefn{org72}\And P.~Bartalini\Irefn{org7}\And K.~Barth\Irefn{org36}\And 
J.~Bartke\Irefn{org119}\Aref{0}\And E.~Bartsch\Irefn{org55}\And M.~Basile\Irefn{org27}\And N.~Bastid\Irefn{org72}\And S.~Basu\Irefn{org135}\And B.~Bathen\Irefn{org56}\And G.~Batigne\Irefn{org115}\And A.~Batista Camejo\Irefn{org72}\And B.~Batyunya\Irefn{org68}\And P.C.~Batzing\Irefn{org22}\And I.G.~Bearden\Irefn{org83}\And H.~Beck\Irefn{org55}\textsuperscript{,}\Irefn{org96}\And C.~Bedda\Irefn{org112}\And N.K.~Behera\Irefn{org52}\And I.~Belikov\Irefn{org57}\And F.~Bellini\Irefn{org27}\And H.~Bello Martinez\Irefn{org2}\And R.~Bellwied\Irefn{org124}\And R.~Belmont\Irefn{org137}\And E.~Belmont-Moreno\Irefn{org66}\And L.G.E.~Beltran\Irefn{org121}\And V.~Belyaev\Irefn{org77}\And G.~Bencedi\Irefn{org138}\And S.~Beole\Irefn{org26}\And I.~Berceanu\Irefn{org80}\And A.~Bercuci\Irefn{org80}\And Y.~Berdnikov\Irefn{org88}\And D.~Berenyi\Irefn{org138}\And R.A.~Bertens\Irefn{org59}\And D.~Berzano\Irefn{org36}\And L.~Betev\Irefn{org36}\And A.~Bhasin\Irefn{org93}\And I.R.~Bhat\Irefn{org93}\And A.K.~Bhati\Irefn{org90}\And B.~Bhattacharjee\Irefn{org45}\And J.~Bhom\Irefn{org119}\And L.~Bianchi\Irefn{org124}\And N.~Bianchi\Irefn{org74}\And C.~Bianchin\Irefn{org137}\And J.~Biel\v{c}\'{\i}k\Irefn{org40}\And J.~Biel\v{c}\'{\i}kov\'{a}\Irefn{org86}\And A.~Bilandzic\Irefn{org83}\textsuperscript{,}\Irefn{org37}\textsuperscript{,}\Irefn{org95}\And G.~Biro\Irefn{org138}\And R.~Biswas\Irefn{org4}\And S.~Biswas\Irefn{org4}\textsuperscript{,}\Irefn{org81}\And S.~Bjelogrlic\Irefn{org59}\And J.T.~Blair\Irefn{org120}\And D.~Blau\Irefn{org82}\And C.~Blume\Irefn{org55}\And F.~Bock\Irefn{org76}\textsuperscript{,}\Irefn{org96}\And A.~Bogdanov\Irefn{org77}\And H.~B{\o}ggild\Irefn{org83}\And L.~Boldizs\'{a}r\Irefn{org138}\And M.~Bombara\Irefn{org41}\And M.~Bonora\Irefn{org36}\And J.~Book\Irefn{org55}\And H.~Borel\Irefn{org15}\And A.~Borissov\Irefn{org98}\And M.~Borri\Irefn{org126}\textsuperscript{,}\Irefn{org85}\And F.~Boss\'u\Irefn{org67}\And E.~Botta\Irefn{org26}\And C.~Bourjau\Irefn{org83}\And 
P.~Braun-Munzinger\Irefn{org99}\And M.~Bregant\Irefn{org122}\And T.~Breitner\Irefn{org54}\And T.A.~Broker\Irefn{org55}\And T.A.~Browning\Irefn{org97}\And M.~Broz\Irefn{org40}\And E.J.~Brucken\Irefn{org47}\And E.~Bruna\Irefn{org112}\And G.E.~Bruno\Irefn{org33}\And D.~Budnikov\Irefn{org101}\And H.~Buesching\Irefn{org55}\And S.~Bufalino\Irefn{org32}\textsuperscript{,}\Irefn{org36}\And P.~Buncic\Irefn{org36}\And O.~Busch\Irefn{org130}\And Z.~Buthelezi\Irefn{org67}\And J.B.~Butt\Irefn{org16}\And J.T.~Buxton\Irefn{org20}\And J.~Cabala\Irefn{org117}\And D.~Caffarri\Irefn{org36}\And X.~Cai\Irefn{org7}\And H.~Caines\Irefn{org139}\And L.~Calero Diaz\Irefn{org74}\And A.~Caliva\Irefn{org59}\And E.~Calvo Villar\Irefn{org104}\And P.~Camerini\Irefn{org25}\And F.~Carena\Irefn{org36}\And W.~Carena\Irefn{org36}\And F.~Carnesecchi\Irefn{org27}\And J.~Castillo Castellanos\Irefn{org15}\And A.J.~Castro\Irefn{org127}\And E.A.R.~Casula\Irefn{org24}\And C.~Ceballos Sanchez\Irefn{org9}\And J.~Cepila\Irefn{org40}\And P.~Cerello\Irefn{org112}\And J.~Cerkala\Irefn{org117}\And B.~Chang\Irefn{org125}\And S.~Chapeland\Irefn{org36}\And M.~Chartier\Irefn{org126}\And J.L.~Charvet\Irefn{org15}\And S.~Chattopadhyay\Irefn{org135}\And S.~Chattopadhyay\Irefn{org102}\And A.~Chauvin\Irefn{org95}\textsuperscript{,}\Irefn{org37}\And V.~Chelnokov\Irefn{org3}\And M.~Cherney\Irefn{org89}\And C.~Cheshkov\Irefn{org132}\And B.~Cheynis\Irefn{org132}\And V.~Chibante Barroso\Irefn{org36}\And D.D.~Chinellato\Irefn{org123}\And S.~Cho\Irefn{org52}\And P.~Chochula\Irefn{org36}\And K.~Choi\Irefn{org98}\And M.~Chojnacki\Irefn{org83}\And S.~Choudhury\Irefn{org135}\And P.~Christakoglou\Irefn{org84}\And C.H.~Christensen\Irefn{org83}\And P.~Christiansen\Irefn{org34}\And T.~Chujo\Irefn{org130}\And S.U.~Chung\Irefn{org98}\And C.~Cicalo\Irefn{org107}\And L.~Cifarelli\Irefn{org12}\textsuperscript{,}\Irefn{org27}\And F.~Cindolo\Irefn{org106}\And J.~Cleymans\Irefn{org92}\And F.~Colamaria\Irefn{org33}\And 
D.~Colella\Irefn{org61}\textsuperscript{,}\Irefn{org36}\And A.~Collu\Irefn{org76}\And M.~Colocci\Irefn{org27}\And G.~Conesa Balbastre\Irefn{org73}\And Z.~Conesa del Valle\Irefn{org53}\And M.E.~Connors\Aref{idp1803184}\textsuperscript{,}\Irefn{org139}\And J.G.~Contreras\Irefn{org40}\And T.M.~Cormier\Irefn{org87}\And Y.~Corrales Morales\Irefn{org26}\textsuperscript{,}\Irefn{org112}\And I.~Cort\'{e}s Maldonado\Irefn{org2}\And P.~Cortese\Irefn{org31}\And M.R.~Cosentino\Irefn{org122}\And F.~Costa\Irefn{org36}\And J.~Crkovska\Irefn{org53}\And P.~Crochet\Irefn{org72}\And R.~Cruz Albino\Irefn{org11}\And E.~Cuautle\Irefn{org65}\And L.~Cunqueiro\Irefn{org56}\textsuperscript{,}\Irefn{org36}\And T.~Dahms\Irefn{org95}\textsuperscript{,}\Irefn{org37}\And A.~Dainese\Irefn{org109}\And M.C.~Danisch\Irefn{org96}\And A.~Danu\Irefn{org64}\And D.~Das\Irefn{org102}\And I.~Das\Irefn{org102}\And S.~Das\Irefn{org4}\And A.~Dash\Irefn{org81}\And S.~Dash\Irefn{org49}\And S.~De\Irefn{org122}\And A.~De Caro\Irefn{org12}\textsuperscript{,}\Irefn{org30}\And G.~de Cataldo\Irefn{org105}\And C.~de Conti\Irefn{org122}\And J.~de Cuveland\Irefn{org43}\And A.~De Falco\Irefn{org24}\And D.~De Gruttola\Irefn{org12}\textsuperscript{,}\Irefn{org30}\And N.~De Marco\Irefn{org112}\And S.~De Pasquale\Irefn{org30}\And R.D.~De Souza\Irefn{org123}\And A.~Deisting\Irefn{org96}\textsuperscript{,}\Irefn{org99}\And A.~Deloff\Irefn{org79}\And E.~D\'{e}nes\Irefn{org138}\Aref{0}\And C.~Deplano\Irefn{org84}\And P.~Dhankher\Irefn{org49}\And D.~Di Bari\Irefn{org33}\And A.~Di Mauro\Irefn{org36}\And P.~Di Nezza\Irefn{org74}\And B.~Di Ruzza\Irefn{org109}\And M.A.~Diaz Corchero\Irefn{org10}\And T.~Dietel\Irefn{org92}\And P.~Dillenseger\Irefn{org55}\And R.~Divi\`{a}\Irefn{org36}\And {\O}.~Djuvsland\Irefn{org18}\And A.~Dobrin\Irefn{org84}\textsuperscript{,}\Irefn{org64}\And D.~Domenicis Gimenez\Irefn{org122}\And B.~D\"{o}nigus\Irefn{org55}\And O.~Dordic\Irefn{org22}\And T.~Drozhzhova\Irefn{org55}\And A.K.~Dubey\Irefn{org135}\And 
A.~Dubla\Irefn{org59}\And L.~Ducroux\Irefn{org132}\And P.~Dupieux\Irefn{org72}\And R.J.~Ehlers\Irefn{org139}\And D.~Elia\Irefn{org105}\And E.~Endress\Irefn{org104}\And H.~Engel\Irefn{org54}\And E.~Epple\Irefn{org139}\And B.~Erazmus\Irefn{org115}\And I.~Erdemir\Irefn{org55}\And F.~Erhardt\Irefn{org131}\And B.~Espagnon\Irefn{org53}\And M.~Estienne\Irefn{org115}\And S.~Esumi\Irefn{org130}\And J.~Eum\Irefn{org98}\And D.~Evans\Irefn{org103}\And S.~Evdokimov\Irefn{org113}\And G.~Eyyubova\Irefn{org40}\And L.~Fabbietti\Irefn{org95}\textsuperscript{,}\Irefn{org37}\And D.~Fabris\Irefn{org109}\And J.~Faivre\Irefn{org73}\And A.~Fantoni\Irefn{org74}\And M.~Fasel\Irefn{org76}\And L.~Feldkamp\Irefn{org56}\And A.~Feliciello\Irefn{org112}\And G.~Feofilov\Irefn{org134}\And J.~Ferencei\Irefn{org86}\And A.~Fern\'{a}ndez T\'{e}llez\Irefn{org2}\And E.G.~Ferreiro\Irefn{org17}\And A.~Ferretti\Irefn{org26}\And A.~Festanti\Irefn{org29}\And V.J.G.~Feuillard\Irefn{org15}\textsuperscript{,}\Irefn{org72}\And J.~Figiel\Irefn{org119}\And M.A.S.~Figueredo\Irefn{org126}\textsuperscript{,}\Irefn{org122}\And S.~Filchagin\Irefn{org101}\And D.~Finogeev\Irefn{org58}\And F.M.~Fionda\Irefn{org24}\And E.M.~Fiore\Irefn{org33}\And M.~Floris\Irefn{org36}\And S.~Foertsch\Irefn{org67}\And P.~Foka\Irefn{org99}\And S.~Fokin\Irefn{org82}\And E.~Fragiacomo\Irefn{org111}\And A.~Francescon\Irefn{org36}\And A.~Francisco\Irefn{org115}\And U.~Frankenfeld\Irefn{org99}\And G.G.~Fronze\Irefn{org26}\And U.~Fuchs\Irefn{org36}\And C.~Furget\Irefn{org73}\And A.~Furs\Irefn{org58}\And M.~Fusco Girard\Irefn{org30}\And J.J.~Gaardh{\o}je\Irefn{org83}\And M.~Gagliardi\Irefn{org26}\And A.M.~Gago\Irefn{org104}\And K.~Gajdosova\Irefn{org83}\And M.~Gallio\Irefn{org26}\And C.D.~Galvan\Irefn{org121}\And D.R.~Gangadharan\Irefn{org76}\And P.~Ganoti\Irefn{org91}\And C.~Gao\Irefn{org7}\And C.~Garabatos\Irefn{org99}\And E.~Garcia-Solis\Irefn{org13}\And K.~Garg\Irefn{org28}\And C.~Gargiulo\Irefn{org36}\And 
P.~Gasik\Irefn{org95}\textsuperscript{,}\Irefn{org37}\And E.F.~Gauger\Irefn{org120}\And M.~Germain\Irefn{org115}\And M.~Gheata\Irefn{org36}\textsuperscript{,}\Irefn{org64}\And P.~Ghosh\Irefn{org135}\And S.K.~Ghosh\Irefn{org4}\And P.~Gianotti\Irefn{org74}\And P.~Giubellino\Irefn{org112}\textsuperscript{,}\Irefn{org36}\And P.~Giubilato\Irefn{org29}\And E.~Gladysz-Dziadus\Irefn{org119}\And P.~Gl\"{a}ssel\Irefn{org96}\And D.M.~Gom\'{e}z Coral\Irefn{org66}\And A.~Gomez Ramirez\Irefn{org54}\And A.S.~Gonzalez\Irefn{org36}\And V.~Gonzalez\Irefn{org10}\And P.~Gonz\'{a}lez-Zamora\Irefn{org10}\And S.~Gorbunov\Irefn{org43}\And L.~G\"{o}rlich\Irefn{org119}\And S.~Gotovac\Irefn{org118}\And V.~Grabski\Irefn{org66}\And O.A.~Grachov\Irefn{org139}\And L.K.~Graczykowski\Irefn{org136}\And K.L.~Graham\Irefn{org103}\And A.~Grelli\Irefn{org59}\And A.~Grigoras\Irefn{org36}\And C.~Grigoras\Irefn{org36}\And V.~Grigoriev\Irefn{org77}\And A.~Grigoryan\Irefn{org1}\And S.~Grigoryan\Irefn{org68}\And B.~Grinyov\Irefn{org3}\And N.~Grion\Irefn{org111}\And J.M.~Gronefeld\Irefn{org99}\And J.F.~Grosse-Oetringhaus\Irefn{org36}\And R.~Grosso\Irefn{org99}\And L.~Gruber\Irefn{org114}\And F.~Guber\Irefn{org58}\And R.~Guernane\Irefn{org73}\And B.~Guerzoni\Irefn{org27}\And K.~Gulbrandsen\Irefn{org83}\And T.~Gunji\Irefn{org129}\And A.~Gupta\Irefn{org93}\And R.~Gupta\Irefn{org93}\And R.~Haake\Irefn{org56}\textsuperscript{,}\Irefn{org36}\And C.~Hadjidakis\Irefn{org53}\And M.~Haiduc\Irefn{org64}\And H.~Hamagaki\Irefn{org129}\And G.~Hamar\Irefn{org138}\And J.C.~Hamon\Irefn{org57}\And J.W.~Harris\Irefn{org139}\And A.~Harton\Irefn{org13}\And D.~Hatzifotiadou\Irefn{org106}\And S.~Hayashi\Irefn{org129}\And S.T.~Heckel\Irefn{org55}\And E.~Hellb\"{a}r\Irefn{org55}\And H.~Helstrup\Irefn{org38}\And A.~Herghelegiu\Irefn{org80}\And G.~Herrera Corral\Irefn{org11}\And F.~Herrmann\Irefn{org56}\And B.A.~Hess\Irefn{org35}\And K.F.~Hetland\Irefn{org38}\And H.~Hillemanns\Irefn{org36}\And B.~Hippolyte\Irefn{org57}\And 
D.~Horak\Irefn{org40}\And R.~Hosokawa\Irefn{org130}\And P.~Hristov\Irefn{org36}\And C.~Hughes\Irefn{org127}\And T.J.~Humanic\Irefn{org20}\And N.~Hussain\Irefn{org45}\And T.~Hussain\Irefn{org19}\And D.~Hutter\Irefn{org43}\And D.S.~Hwang\Irefn{org21}\And R.~Ilkaev\Irefn{org101}\And M.~Inaba\Irefn{org130}\And E.~Incani\Irefn{org24}\And M.~Ippolitov\Irefn{org77}\textsuperscript{,}\Irefn{org82}\And M.~Irfan\Irefn{org19}\And V.~Isakov\Irefn{org58}\And M.~Ivanov\Irefn{org99}\textsuperscript{,}\Irefn{org36}\And V.~Ivanov\Irefn{org88}\And V.~Izucheev\Irefn{org113}\And B.~Jacak\Irefn{org76}\And N.~Jacazio\Irefn{org27}\And P.M.~Jacobs\Irefn{org76}\And M.B.~Jadhav\Irefn{org49}\And S.~Jadlovska\Irefn{org117}\And J.~Jadlovsky\Irefn{org117}\textsuperscript{,}\Irefn{org61}\And C.~Jahnke\Irefn{org122}\And M.J.~Jakubowska\Irefn{org136}\And M.A.~Janik\Irefn{org136}\And P.H.S.Y.~Jayarathna\Irefn{org124}\And C.~Jena\Irefn{org29}\And S.~Jena\Irefn{org124}\And R.T.~Jimenez Bustamante\Irefn{org99}\And P.G.~Jones\Irefn{org103}\And A.~Jusko\Irefn{org103}\And P.~Kalinak\Irefn{org61}\And A.~Kalweit\Irefn{org36}\And J.H.~Kang\Irefn{org140}\And V.~Kaplin\Irefn{org77}\And S.~Kar\Irefn{org135}\And A.~Karasu Uysal\Irefn{org71}\And O.~Karavichev\Irefn{org58}\And T.~Karavicheva\Irefn{org58}\And L.~Karayan\Irefn{org96}\textsuperscript{,}\Irefn{org99}\And E.~Karpechev\Irefn{org58}\And U.~Kebschull\Irefn{org54}\And R.~Keidel\Irefn{org141}\And D.L.D.~Keijdener\Irefn{org59}\And M.~Keil\Irefn{org36}\And M. 
Mohisin~Khan\Aref{idp3208048}\textsuperscript{,}\Irefn{org19}\And P.~Khan\Irefn{org102}\And S.A.~Khan\Irefn{org135}\And A.~Khanzadeev\Irefn{org88}\And Y.~Kharlov\Irefn{org113}\And A.~Khatun\Irefn{org19}\And B.~Kileng\Irefn{org38}\And D.W.~Kim\Irefn{org44}\And D.J.~Kim\Irefn{org125}\And D.~Kim\Irefn{org140}\And H.~Kim\Irefn{org140}\And J.S.~Kim\Irefn{org44}\And J.~Kim\Irefn{org96}\And M.~Kim\Irefn{org140}\And S.~Kim\Irefn{org21}\And T.~Kim\Irefn{org140}\And S.~Kirsch\Irefn{org43}\And I.~Kisel\Irefn{org43}\And S.~Kiselev\Irefn{org60}\And A.~Kisiel\Irefn{org136}\And G.~Kiss\Irefn{org138}\And J.L.~Klay\Irefn{org6}\And C.~Klein\Irefn{org55}\And J.~Klein\Irefn{org36}\And C.~Klein-B\"{o}sing\Irefn{org56}\And S.~Klewin\Irefn{org96}\And A.~Kluge\Irefn{org36}\And M.L.~Knichel\Irefn{org96}\And A.G.~Knospe\Irefn{org120}\textsuperscript{,}\Irefn{org124}\And C.~Kobdaj\Irefn{org116}\And M.~Kofarago\Irefn{org36}\And T.~Kollegger\Irefn{org99}\And A.~Kolojvari\Irefn{org134}\And V.~Kondratiev\Irefn{org134}\And N.~Kondratyeva\Irefn{org77}\And E.~Kondratyuk\Irefn{org113}\And A.~Konevskikh\Irefn{org58}\And M.~Kopcik\Irefn{org117}\And M.~Kour\Irefn{org93}\And C.~Kouzinopoulos\Irefn{org36}\And O.~Kovalenko\Irefn{org79}\And V.~Kovalenko\Irefn{org134}\And M.~Kowalski\Irefn{org119}\And G.~Koyithatta Meethaleveedu\Irefn{org49}\And I.~Kr\'{a}lik\Irefn{org61}\And A.~Krav\v{c}\'{a}kov\'{a}\Irefn{org41}\And M.~Krivda\Irefn{org61}\textsuperscript{,}\Irefn{org103}\And F.~Krizek\Irefn{org86}\And E.~Kryshen\Irefn{org88}\textsuperscript{,}\Irefn{org36}\And M.~Krzewicki\Irefn{org43}\And A.M.~Kubera\Irefn{org20}\And V.~Ku\v{c}era\Irefn{org86}\And C.~Kuhn\Irefn{org57}\And P.G.~Kuijer\Irefn{org84}\And A.~Kumar\Irefn{org93}\And J.~Kumar\Irefn{org49}\And L.~Kumar\Irefn{org90}\And S.~Kumar\Irefn{org49}\And P.~Kurashvili\Irefn{org79}\And A.~Kurepin\Irefn{org58}\And A.B.~Kurepin\Irefn{org58}\And A.~Kuryakin\Irefn{org101}\And M.J.~Kweon\Irefn{org52}\And Y.~Kwon\Irefn{org140}\And S.L.~La 
Pointe\Irefn{org43}\textsuperscript{,}\Irefn{org112}\And P.~La Rocca\Irefn{org28}\And P.~Ladron de Guevara\Irefn{org11}\And C.~Lagana Fernandes\Irefn{org122}\And I.~Lakomov\Irefn{org36}\And R.~Langoy\Irefn{org42}\And K.~Lapidus\Irefn{org37}\textsuperscript{,}\Irefn{org139}\And C.~Lara\Irefn{org54}\And A.~Lardeux\Irefn{org15}\And A.~Lattuca\Irefn{org26}\And E.~Laudi\Irefn{org36}\And R.~Lea\Irefn{org25}\And L.~Leardini\Irefn{org96}\And S.~Lee\Irefn{org140}\And F.~Lehas\Irefn{org84}\And S.~Lehner\Irefn{org114}\And R.C.~Lemmon\Irefn{org85}\And V.~Lenti\Irefn{org105}\And E.~Leogrande\Irefn{org59}\And I.~Le\'{o}n Monz\'{o}n\Irefn{org121}\And H.~Le\'{o}n Vargas\Irefn{org66}\And M.~Leoncino\Irefn{org26}\And P.~L\'{e}vai\Irefn{org138}\And S.~Li\Irefn{org7}\textsuperscript{,}\Irefn{org72}\And X.~Li\Irefn{org14}\And J.~Lien\Irefn{org42}\And R.~Lietava\Irefn{org103}\And S.~Lindal\Irefn{org22}\And V.~Lindenstruth\Irefn{org43}\And C.~Lippmann\Irefn{org99}\And M.A.~Lisa\Irefn{org20}\And H.M.~Ljunggren\Irefn{org34}\And D.F.~Lodato\Irefn{org59}\And P.I.~Loenne\Irefn{org18}\And V.~Loginov\Irefn{org77}\And C.~Loizides\Irefn{org76}\And X.~Lopez\Irefn{org72}\And E.~L\'{o}pez Torres\Irefn{org9}\And A.~Lowe\Irefn{org138}\And P.~Luettig\Irefn{org55}\And M.~Lunardon\Irefn{org29}\And G.~Luparello\Irefn{org25}\And M.~Lupi\Irefn{org36}\And T.H.~Lutz\Irefn{org139}\And A.~Maevskaya\Irefn{org58}\And M.~Mager\Irefn{org36}\And S.~Mahajan\Irefn{org93}\And S.M.~Mahmood\Irefn{org22}\And A.~Maire\Irefn{org57}\And R.D.~Majka\Irefn{org139}\And M.~Malaev\Irefn{org88}\And I.~Maldonado Cervantes\Irefn{org65}\And L.~Malinina\Aref{idp3929568}\textsuperscript{,}\Irefn{org68}\And D.~Mal'Kevich\Irefn{org60}\And P.~Malzacher\Irefn{org99}\And A.~Mamonov\Irefn{org101}\And V.~Manko\Irefn{org82}\And F.~Manso\Irefn{org72}\And V.~Manzari\Irefn{org36}\textsuperscript{,}\Irefn{org105}\And Y.~Mao\Irefn{org7}\And M.~Marchisone\Irefn{org67}\textsuperscript{,}\Irefn{org128}\textsuperscript{,}\Irefn{org26}\And 
J.~Mare\v{s}\Irefn{org62}\And G.V.~Margagliotti\Irefn{org25}\And A.~Margotti\Irefn{org106}\And J.~Margutti\Irefn{org59}\And A.~Mar\'{\i}n\Irefn{org99}\And C.~Markert\Irefn{org120}\And M.~Marquard\Irefn{org55}\And N.A.~Martin\Irefn{org99}\And P.~Martinengo\Irefn{org36}\And M.I.~Mart\'{\i}nez\Irefn{org2}\And G.~Mart\'{\i}nez Garc\'{\i}a\Irefn{org115}\And M.~Martinez Pedreira\Irefn{org36}\And A.~Mas\Irefn{org122}\And S.~Masciocchi\Irefn{org99}\And M.~Masera\Irefn{org26}\And A.~Masoni\Irefn{org107}\And A.~Mastroserio\Irefn{org33}\And A.~Matyja\Irefn{org119}\And C.~Mayer\Irefn{org119}\And J.~Mazer\Irefn{org127}\And M.~Mazzilli\Irefn{org33}\And M.A.~Mazzoni\Irefn{org110}\And D.~Mcdonald\Irefn{org124}\And F.~Meddi\Irefn{org23}\And Y.~Melikyan\Irefn{org77}\And A.~Menchaca-Rocha\Irefn{org66}\And E.~Meninno\Irefn{org30}\And J.~Mercado P\'erez\Irefn{org96}\And M.~Meres\Irefn{org39}\And S.~Mhlanga\Irefn{org92}\And Y.~Miake\Irefn{org130}\And M.M.~Mieskolainen\Irefn{org47}\And K.~Mikhaylov\Irefn{org60}\textsuperscript{,}\Irefn{org68}\And L.~Milano\Irefn{org76}\textsuperscript{,}\Irefn{org36}\And J.~Milosevic\Irefn{org22}\And A.~Mischke\Irefn{org59}\And A.N.~Mishra\Irefn{org50}\And T.~Mishra\Irefn{org63}\And D.~Mi\'{s}kowiec\Irefn{org99}\And J.~Mitra\Irefn{org135}\And C.M.~Mitu\Irefn{org64}\And N.~Mohammadi\Irefn{org59}\And B.~Mohanty\Irefn{org81}\And L.~Molnar\Irefn{org57}\And L.~Monta\~{n}o Zetina\Irefn{org11}\And E.~Montes\Irefn{org10}\And D.A.~Moreira De Godoy\Irefn{org56}\And L.A.P.~Moreno\Irefn{org2}\And S.~Moretto\Irefn{org29}\And A.~Morreale\Irefn{org115}\And A.~Morsch\Irefn{org36}\And V.~Muccifora\Irefn{org74}\And E.~Mudnic\Irefn{org118}\And D.~M{\"u}hlheim\Irefn{org56}\And S.~Muhuri\Irefn{org135}\And M.~Mukherjee\Irefn{org135}\And J.D.~Mulligan\Irefn{org139}\And M.G.~Munhoz\Irefn{org122}\And K.~M\"{u}nning\Irefn{org46}\And R.H.~Munzer\Irefn{org95}\textsuperscript{,}\Irefn{org37}\textsuperscript{,}\Irefn{org55}\And H.~Murakami\Irefn{org129}\And S.~Murray\Irefn{org67}\And 
L.~Musa\Irefn{org36}\And J.~Musinsky\Irefn{org61}\And B.~Naik\Irefn{org49}\And R.~Nair\Irefn{org79}\And B.K.~Nandi\Irefn{org49}\And R.~Nania\Irefn{org106}\And E.~Nappi\Irefn{org105}\And M.U.~Naru\Irefn{org16}\And H.~Natal da Luz\Irefn{org122}\And C.~Nattrass\Irefn{org127}\And S.R.~Navarro\Irefn{org2}\And K.~Nayak\Irefn{org81}\And R.~Nayak\Irefn{org49}\And T.K.~Nayak\Irefn{org135}\And S.~Nazarenko\Irefn{org101}\And A.~Nedosekin\Irefn{org60}\And R.A.~Negrao De Oliveira\Irefn{org36}\And L.~Nellen\Irefn{org65}\And F.~Ng\Irefn{org124}\And M.~Nicassio\Irefn{org99}\And M.~Niculescu\Irefn{org64}\And J.~Niedziela\Irefn{org36}\And B.S.~Nielsen\Irefn{org83}\And S.~Nikolaev\Irefn{org82}\And S.~Nikulin\Irefn{org82}\And V.~Nikulin\Irefn{org88}\And F.~Noferini\Irefn{org106}\textsuperscript{,}\Irefn{org12}\And P.~Nomokonov\Irefn{org68}\And G.~Nooren\Irefn{org59}\And J.C.C.~Noris\Irefn{org2}\And J.~Norman\Irefn{org126}\And A.~Nyanin\Irefn{org82}\And J.~Nystrand\Irefn{org18}\And H.~Oeschler\Irefn{org96}\And S.~Oh\Irefn{org139}\And S.K.~Oh\Irefn{org69}\And A.~Ohlson\Irefn{org36}\And A.~Okatan\Irefn{org71}\And T.~Okubo\Irefn{org48}\And J.~Oleniacz\Irefn{org136}\And A.C.~Oliveira Da Silva\Irefn{org122}\And M.H.~Oliver\Irefn{org139}\And J.~Onderwaater\Irefn{org99}\And C.~Oppedisano\Irefn{org112}\And R.~Orava\Irefn{org47}\And M.~Oravec\Irefn{org117}\And A.~Ortiz Velasquez\Irefn{org65}\And A.~Oskarsson\Irefn{org34}\And J.~Otwinowski\Irefn{org119}\And K.~Oyama\Irefn{org96}\textsuperscript{,}\Irefn{org78}\And M.~Ozdemir\Irefn{org55}\And Y.~Pachmayer\Irefn{org96}\And D.~Pagano\Irefn{org133}\And P.~Pagano\Irefn{org30}\And G.~Pai\'{c}\Irefn{org65}\And S.K.~Pal\Irefn{org135}\And P.~Palni\Irefn{org7}\And J.~Pan\Irefn{org137}\And A.K.~Pandey\Irefn{org49}\And V.~Papikyan\Irefn{org1}\And G.S.~Pappalardo\Irefn{org108}\And P.~Pareek\Irefn{org50}\And W.J.~Park\Irefn{org99}\And S.~Parmar\Irefn{org90}\And A.~Passfeld\Irefn{org56}\And V.~Paticchio\Irefn{org105}\And R.N.~Patra\Irefn{org135}\And 
B.~Paul\Irefn{org112}\And H.~Pei\Irefn{org7}\And T.~Peitzmann\Irefn{org59}\And X.~Peng\Irefn{org7}\And H.~Pereira Da Costa\Irefn{org15}\And D.~Peresunko\Irefn{org82}\textsuperscript{,}\Irefn{org77}\And E.~Perez Lezama\Irefn{org55}\And V.~Peskov\Irefn{org55}\And Y.~Pestov\Irefn{org5}\And V.~Petr\'{a}\v{c}ek\Irefn{org40}\And V.~Petrov\Irefn{org113}\And M.~Petrovici\Irefn{org80}\And C.~Petta\Irefn{org28}\And S.~Piano\Irefn{org111}\And M.~Pikna\Irefn{org39}\And P.~Pillot\Irefn{org115}\And L.O.D.L.~Pimentel\Irefn{org83}\And O.~Pinazza\Irefn{org106}\textsuperscript{,}\Irefn{org36}\And L.~Pinsky\Irefn{org124}\And D.B.~Piyarathna\Irefn{org124}\And M.~P\l osko\'{n}\Irefn{org76}\And M.~Planinic\Irefn{org131}\And J.~Pluta\Irefn{org136}\And S.~Pochybova\Irefn{org138}\And P.L.M.~Podesta-Lerma\Irefn{org121}\And M.G.~Poghosyan\Irefn{org87}\And B.~Polichtchouk\Irefn{org113}\And N.~Poljak\Irefn{org131}\And W.~Poonsawat\Irefn{org116}\And A.~Pop\Irefn{org80}\And H.~Poppenborg\Irefn{org56}\And S.~Porteboeuf-Houssais\Irefn{org72}\And J.~Porter\Irefn{org76}\And J.~Pospisil\Irefn{org86}\And S.K.~Prasad\Irefn{org4}\And R.~Preghenella\Irefn{org106}\textsuperscript{,}\Irefn{org36}\And F.~Prino\Irefn{org112}\And C.A.~Pruneau\Irefn{org137}\And I.~Pshenichnov\Irefn{org58}\And M.~Puccio\Irefn{org26}\And G.~Puddu\Irefn{org24}\And P.~Pujahari\Irefn{org137}\And V.~Punin\Irefn{org101}\And J.~Putschke\Irefn{org137}\And H.~Qvigstad\Irefn{org22}\And A.~Rachevski\Irefn{org111}\And S.~Raha\Irefn{org4}\And S.~Rajput\Irefn{org93}\And J.~Rak\Irefn{org125}\And A.~Rakotozafindrabe\Irefn{org15}\And L.~Ramello\Irefn{org31}\And F.~Rami\Irefn{org57}\And R.~Raniwala\Irefn{org94}\And S.~Raniwala\Irefn{org94}\And S.S.~R\"{a}s\"{a}nen\Irefn{org47}\And B.T.~Rascanu\Irefn{org55}\And D.~Rathee\Irefn{org90}\And I.~Ravasenga\Irefn{org26}\And K.F.~Read\Irefn{org127}\textsuperscript{,}\Irefn{org87}\And K.~Redlich\Irefn{org79}\And R.J.~Reed\Irefn{org137}\And A.~Rehman\Irefn{org18}\And P.~Reichelt\Irefn{org55}\And 
F.~Reidt\Irefn{org36}\textsuperscript{,}\Irefn{org96}\And X.~Ren\Irefn{org7}\And R.~Renfordt\Irefn{org55}\And A.R.~Reolon\Irefn{org74}\And A.~Reshetin\Irefn{org58}\And K.~Reygers\Irefn{org96}\And V.~Riabov\Irefn{org88}\And R.A.~Ricci\Irefn{org75}\And T.~Richert\Irefn{org34}\And M.~Richter\Irefn{org22}\And P.~Riedler\Irefn{org36}\And W.~Riegler\Irefn{org36}\And F.~Riggi\Irefn{org28}\And C.~Ristea\Irefn{org64}\And M.~Rodr\'{i}guez Cahuantzi\Irefn{org2}\And A.~Rodriguez Manso\Irefn{org84}\And K.~R{\o}ed\Irefn{org22}\And E.~Rogochaya\Irefn{org68}\And D.~Rohr\Irefn{org43}\And D.~R\"ohrich\Irefn{org18}\And F.~Ronchetti\Irefn{org36}\textsuperscript{,}\Irefn{org74}\And L.~Ronflette\Irefn{org115}\And P.~Rosnet\Irefn{org72}\And A.~Rossi\Irefn{org29}\And F.~Roukoutakis\Irefn{org91}\And A.~Roy\Irefn{org50}\And C.~Roy\Irefn{org57}\And P.~Roy\Irefn{org102}\And A.J.~Rubio Montero\Irefn{org10}\And R.~Rui\Irefn{org25}\And R.~Russo\Irefn{org26}\And E.~Ryabinkin\Irefn{org82}\And Y.~Ryabov\Irefn{org88}\And A.~Rybicki\Irefn{org119}\And S.~Saarinen\Irefn{org47}\And S.~Sadhu\Irefn{org135}\And S.~Sadovsky\Irefn{org113}\And K.~\v{S}afa\v{r}\'{\i}k\Irefn{org36}\And B.~Sahlmuller\Irefn{org55}\And P.~Sahoo\Irefn{org50}\And R.~Sahoo\Irefn{org50}\And S.~Sahoo\Irefn{org63}\And P.K.~Sahu\Irefn{org63}\And J.~Saini\Irefn{org135}\And S.~Sakai\Irefn{org74}\And M.A.~Saleh\Irefn{org137}\And J.~Salzwedel\Irefn{org20}\And S.~Sambyal\Irefn{org93}\And V.~Samsonov\Irefn{org88}\textsuperscript{,}\Irefn{org77}\And L.~\v{S}\'{a}ndor\Irefn{org61}\And A.~Sandoval\Irefn{org66}\And M.~Sano\Irefn{org130}\And D.~Sarkar\Irefn{org135}\And N.~Sarkar\Irefn{org135}\And P.~Sarma\Irefn{org45}\And E.~Scapparone\Irefn{org106}\And F.~Scarlassara\Irefn{org29}\And C.~Schiaua\Irefn{org80}\And R.~Schicker\Irefn{org96}\And C.~Schmidt\Irefn{org99}\And H.R.~Schmidt\Irefn{org35}\And M.~Schmidt\Irefn{org35}\And S.~Schuchmann\Irefn{org55}\textsuperscript{,}\Irefn{org96}\And J.~Schukraft\Irefn{org36}\And 
Y.~Schutz\Irefn{org36}\textsuperscript{,}\Irefn{org115}\And K.~Schwarz\Irefn{org99}\And K.~Schweda\Irefn{org99}\And G.~Scioli\Irefn{org27}\And E.~Scomparin\Irefn{org112}\And R.~Scott\Irefn{org127}\And M.~\v{S}ef\v{c}\'ik\Irefn{org41}\And J.E.~Seger\Irefn{org89}\And Y.~Sekiguchi\Irefn{org129}\And D.~Sekihata\Irefn{org48}\And I.~Selyuzhenkov\Irefn{org99}\And K.~Senosi\Irefn{org67}\And S.~Senyukov\Irefn{org3}\textsuperscript{,}\Irefn{org36}\And E.~Serradilla\Irefn{org10}\textsuperscript{,}\Irefn{org66}\And A.~Sevcenco\Irefn{org64}\And A.~Shabanov\Irefn{org58}\And A.~Shabetai\Irefn{org115}\And O.~Shadura\Irefn{org3}\And R.~Shahoyan\Irefn{org36}\And A.~Shangaraev\Irefn{org113}\And A.~Sharma\Irefn{org93}\And M.~Sharma\Irefn{org93}\And M.~Sharma\Irefn{org93}\And N.~Sharma\Irefn{org127}\And A.I.~Sheikh\Irefn{org135}\And K.~Shigaki\Irefn{org48}\And Q.~Shou\Irefn{org7}\And K.~Shtejer\Irefn{org9}\textsuperscript{,}\Irefn{org26}\And Y.~Sibiriak\Irefn{org82}\And S.~Siddhanta\Irefn{org107}\And K.M.~Sielewicz\Irefn{org36}\And T.~Siemiarczuk\Irefn{org79}\And D.~Silvermyr\Irefn{org34}\And C.~Silvestre\Irefn{org73}\And G.~Simatovic\Irefn{org131}\And G.~Simonetti\Irefn{org36}\And R.~Singaraju\Irefn{org135}\And R.~Singh\Irefn{org81}\And V.~Singhal\Irefn{org135}\And T.~Sinha\Irefn{org102}\And B.~Sitar\Irefn{org39}\And M.~Sitta\Irefn{org31}\And T.B.~Skaali\Irefn{org22}\And M.~Slupecki\Irefn{org125}\And N.~Smirnov\Irefn{org139}\And R.J.M.~Snellings\Irefn{org59}\And T.W.~Snellman\Irefn{org125}\And J.~Song\Irefn{org98}\And M.~Song\Irefn{org140}\And Z.~Song\Irefn{org7}\And F.~Soramel\Irefn{org29}\And S.~Sorensen\Irefn{org127}\And F.~Sozzi\Irefn{org99}\And E.~Spiriti\Irefn{org74}\And I.~Sputowska\Irefn{org119}\And M.~Spyropoulou-Stassinaki\Irefn{org91}\And J.~Stachel\Irefn{org96}\And I.~Stan\Irefn{org64}\And P.~Stankus\Irefn{org87}\And E.~Stenlund\Irefn{org34}\And G.~Steyn\Irefn{org67}\And J.H.~Stiller\Irefn{org96}\And D.~Stocco\Irefn{org115}\And P.~Strmen\Irefn{org39}\And 
A.A.P.~Suaide\Irefn{org122}\And T.~Sugitate\Irefn{org48}\And C.~Suire\Irefn{org53}\And M.~Suleymanov\Irefn{org16}\And M.~Suljic\Irefn{org25}\Aref{0}\And R.~Sultanov\Irefn{org60}\And M.~\v{S}umbera\Irefn{org86}\And S.~Sumowidagdo\Irefn{org51}\And S.~Swain\Irefn{org63}\And A.~Szabo\Irefn{org39}\And I.~Szarka\Irefn{org39}\And A.~Szczepankiewicz\Irefn{org136}\And M.~Szymanski\Irefn{org136}\And U.~Tabassam\Irefn{org16}\And J.~Takahashi\Irefn{org123}\And G.J.~Tambave\Irefn{org18}\And N.~Tanaka\Irefn{org130}\And M.~Tarhini\Irefn{org53}\And M.~Tariq\Irefn{org19}\And M.G.~Tarzila\Irefn{org80}\And A.~Tauro\Irefn{org36}\And G.~Tejeda Mu\~{n}oz\Irefn{org2}\And A.~Telesca\Irefn{org36}\And K.~Terasaki\Irefn{org129}\And C.~Terrevoli\Irefn{org29}\And B.~Teyssier\Irefn{org132}\And J.~Th\"{a}der\Irefn{org76}\And D.~Thakur\Irefn{org50}\And D.~Thomas\Irefn{org120}\And R.~Tieulent\Irefn{org132}\And A.~Tikhonov\Irefn{org58}\And A.R.~Timmins\Irefn{org124}\And A.~Toia\Irefn{org55}\And S.~Trogolo\Irefn{org26}\And G.~Trombetta\Irefn{org33}\And V.~Trubnikov\Irefn{org3}\And W.H.~Trzaska\Irefn{org125}\And T.~Tsuji\Irefn{org129}\And A.~Tumkin\Irefn{org101}\And R.~Turrisi\Irefn{org109}\And T.S.~Tveter\Irefn{org22}\And K.~Ullaland\Irefn{org18}\And A.~Uras\Irefn{org132}\And G.L.~Usai\Irefn{org24}\And A.~Utrobicic\Irefn{org131}\And M.~Vala\Irefn{org61}\And L.~Valencia Palomo\Irefn{org72}\And J.~Van Der Maarel\Irefn{org59}\And J.W.~Van Hoorne\Irefn{org36}\textsuperscript{,}\Irefn{org114}\And M.~van Leeuwen\Irefn{org59}\And T.~Vanat\Irefn{org86}\And P.~Vande Vyvre\Irefn{org36}\And D.~Varga\Irefn{org138}\And A.~Vargas\Irefn{org2}\And M.~Vargyas\Irefn{org125}\And R.~Varma\Irefn{org49}\And M.~Vasileiou\Irefn{org91}\And A.~Vasiliev\Irefn{org82}\And A.~Vauthier\Irefn{org73}\And O.~V\'azquez Doce\Irefn{org95}\textsuperscript{,}\Irefn{org37}\And V.~Vechernin\Irefn{org134}\And A.M.~Veen\Irefn{org59}\And A.~Velure\Irefn{org18}\And E.~Vercellin\Irefn{org26}\And S.~Vergara Lim\'on\Irefn{org2}\And 
R.~Vernet\Irefn{org8}\And L.~Vickovic\Irefn{org118}\And J.~Viinikainen\Irefn{org125}\And Z.~Vilakazi\Irefn{org128}\And O.~Villalobos Baillie\Irefn{org103}\And A.~Villatoro Tello\Irefn{org2}\And A.~Vinogradov\Irefn{org82}\And L.~Vinogradov\Irefn{org134}\And T.~Virgili\Irefn{org30}\And V.~Vislavicius\Irefn{org34}\And Y.P.~Viyogi\Irefn{org135}\And A.~Vodopyanov\Irefn{org68}\And M.A.~V\"{o}lkl\Irefn{org96}\And K.~Voloshin\Irefn{org60}\And S.A.~Voloshin\Irefn{org137}\And G.~Volpe\Irefn{org33}\textsuperscript{,}\Irefn{org138}\And B.~von Haller\Irefn{org36}\And I.~Vorobyev\Irefn{org95}\textsuperscript{,}\Irefn{org37}\And D.~Vranic\Irefn{org99}\textsuperscript{,}\Irefn{org36}\And J.~Vrl\'{a}kov\'{a}\Irefn{org41}\And B.~Vulpescu\Irefn{org72}\And B.~Wagner\Irefn{org18}\And J.~Wagner\Irefn{org99}\And H.~Wang\Irefn{org59}\And M.~Wang\Irefn{org7}\And D.~Watanabe\Irefn{org130}\And Y.~Watanabe\Irefn{org129}\And M.~Weber\Irefn{org36}\textsuperscript{,}\Irefn{org114}\And S.G.~Weber\Irefn{org99}\And D.F.~Weiser\Irefn{org96}\And J.P.~Wessels\Irefn{org56}\And U.~Westerhoff\Irefn{org56}\And A.M.~Whitehead\Irefn{org92}\And J.~Wiechula\Irefn{org35}\And J.~Wikne\Irefn{org22}\And G.~Wilk\Irefn{org79}\And J.~Wilkinson\Irefn{org96}\And G.A.~Willems\Irefn{org56}\And M.C.S.~Williams\Irefn{org106}\And B.~Windelband\Irefn{org96}\And M.~Winn\Irefn{org96}\And S.~Yalcin\Irefn{org71}\And P.~Yang\Irefn{org7}\And S.~Yano\Irefn{org48}\And Z.~Yin\Irefn{org7}\And H.~Yokoyama\Irefn{org130}\And I.-K.~Yoo\Irefn{org98}\And J.H.~Yoon\Irefn{org52}\And V.~Yurchenko\Irefn{org3}\And A.~Zaborowska\Irefn{org136}\And V.~Zaccolo\Irefn{org83}\And A.~Zaman\Irefn{org16}\And C.~Zampolli\Irefn{org106}\textsuperscript{,}\Irefn{org36}\And H.J.C.~Zanoli\Irefn{org122}\And S.~Zaporozhets\Irefn{org68}\And N.~Zardoshti\Irefn{org103}\And A.~Zarochentsev\Irefn{org134}\And P.~Z\'{a}vada\Irefn{org62}\And N.~Zaviyalov\Irefn{org101}\And H.~Zbroszczyk\Irefn{org136}\And I.S.~Zgura\Irefn{org64}\And M.~Zhalov\Irefn{org88}\And 
H.~Zhang\Irefn{org18}\textsuperscript{,}\Irefn{org7}\And X.~Zhang\Irefn{org76}\textsuperscript{,}\Irefn{org7}\And Y.~Zhang\Irefn{org7}\And C.~Zhang\Irefn{org59}\And Z.~Zhang\Irefn{org7}\And C.~Zhao\Irefn{org22}\And N.~Zhigareva\Irefn{org60}\And D.~Zhou\Irefn{org7}\And Y.~Zhou\Irefn{org83}\And Z.~Zhou\Irefn{org18}\And H.~Zhu\Irefn{org18}\textsuperscript{,}\Irefn{org7}\And J.~Zhu\Irefn{org7}\textsuperscript{,}\Irefn{org115}\And A.~Zichichi\Irefn{org27}\textsuperscript{,}\Irefn{org12}\And A.~Zimmermann\Irefn{org96}\And M.B.~Zimmermann\Irefn{org56}\textsuperscript{,}\Irefn{org36}\And G.~Zinovjev\Irefn{org3}\And M.~Zyzak\Irefn{org43} \renewcommand\labelenumi{\textsuperscript{\theenumi}~} \section*{Affiliation notes} \renewcommand\theenumi{\roman{enumi}} \begin{Authlist} \item \Adef{0}Deceased \item \Adef{idp1803184}{Also at: Georgia State University, Atlanta, Georgia, United States} \item \Adef{idp3208048}{Also at: Department of Applied Physics, Aligarh Muslim University, Aligarh, India} \item \Adef{idp3929568}{Also at: M.V. Lomonosov Moscow State University, D.V. Skobeltsyn Institute of Nuclear Physics, Moscow, Russia} \end{Authlist} \section*{Collaboration Institutes} \renewcommand\theenumi{\arabic{enumi}~} \begin{Authlist} \item \Idef{org1}A.I.
Alikhanyan National Science Laboratory (Yerevan Physics Institute) Foundation, Yerevan, Armenia \item \Idef{org2}Benem\'{e}rita Universidad Aut\'{o}noma de Puebla, Puebla, Mexico \item \Idef{org3}Bogolyubov Institute for Theoretical Physics, Kiev, Ukraine \item \Idef{org4}Bose Institute, Department of Physics and Centre for Astroparticle Physics and Space Science (CAPSS), Kolkata, India \item \Idef{org5}Budker Institute for Nuclear Physics, Novosibirsk, Russia \item \Idef{org6}California Polytechnic State University, San Luis Obispo, California, United States \item \Idef{org7}Central China Normal University, Wuhan, China \item \Idef{org8}Centre de Calcul de l'IN2P3, Villeurbanne, France \item \Idef{org9}Centro de Aplicaciones Tecnol\'{o}gicas y Desarrollo Nuclear (CEADEN), Havana, Cuba \item \Idef{org10}Centro de Investigaciones Energ\'{e}ticas Medioambientales y Tecnol\'{o}gicas (CIEMAT), Madrid, Spain \item \Idef{org11}Centro de Investigaci\'{o}n y de Estudios Avanzados (CINVESTAV), Mexico City and M\'{e}rida, Mexico \item \Idef{org12}Centro Fermi - Museo Storico della Fisica e Centro Studi e Ricerche ``Enrico Fermi'', Rome, Italy \item \Idef{org13}Chicago State University, Chicago, Illinois, USA \item \Idef{org14}China Institute of Atomic Energy, Beijing, China \item \Idef{org15}Commissariat \`{a} l'Energie Atomique, IRFU, Saclay, France \item \Idef{org16}COMSATS Institute of Information Technology (CIIT), Islamabad, Pakistan \item \Idef{org17}Departamento de F\'{\i}sica de Part\'{\i}culas and IGFAE, Universidad de Santiago de Compostela, Santiago de Compostela, Spain \item \Idef{org18}Department of Physics and Technology, University of Bergen, Bergen, Norway \item \Idef{org19}Department of Physics, Aligarh Muslim University, Aligarh, India \item \Idef{org20}Department of Physics, Ohio State University, Columbus, Ohio, United States \item \Idef{org21}Department of Physics, Sejong University, Seoul, South Korea \item \Idef{org22}Department of Physics, University 
of Oslo, Oslo, Norway \item \Idef{org23}Dipartimento di Fisica dell'Universit\`{a} 'La Sapienza' and Sezione INFN Rome, Italy \item \Idef{org24}Dipartimento di Fisica dell'Universit\`{a} and Sezione INFN, Cagliari, Italy \item \Idef{org25}Dipartimento di Fisica dell'Universit\`{a} and Sezione INFN, Trieste, Italy \item \Idef{org26}Dipartimento di Fisica dell'Universit\`{a} and Sezione INFN, Turin, Italy \item \Idef{org27}Dipartimento di Fisica e Astronomia dell'Universit\`{a} and Sezione INFN, Bologna, Italy \item \Idef{org28}Dipartimento di Fisica e Astronomia dell'Universit\`{a} and Sezione INFN, Catania, Italy \item \Idef{org29}Dipartimento di Fisica e Astronomia dell'Universit\`{a} and Sezione INFN, Padova, Italy \item \Idef{org30}Dipartimento di Fisica `E.R.~Caianiello' dell'Universit\`{a} and Gruppo Collegato INFN, Salerno, Italy \item \Idef{org31}Dipartimento di Scienze e Innovazione Tecnologica dell'Universit\`{a} del Piemonte Orientale and Gruppo Collegato INFN, Alessandria, Italy \item \Idef{org32}Dipartimento DISAT del Politecnico and Sezione INFN, Turin, Italy \item \Idef{org33}Dipartimento Interateneo di Fisica `M.~Merlin' and Sezione INFN, Bari, Italy \item \Idef{org34}Division of Experimental High Energy Physics, University of Lund, Lund, Sweden \item \Idef{org35}Eberhard Karls Universit\"{a}t T\"{u}bingen, T\"{u}bingen, Germany \item \Idef{org36}European Organization for Nuclear Research (CERN), Geneva, Switzerland \item \Idef{org37}Excellence Cluster Universe, Technische Universit\"{a}t M\"{u}nchen, Munich, Germany \item \Idef{org38}Faculty of Engineering, Bergen University College, Bergen, Norway \item \Idef{org39}Faculty of Mathematics, Physics and Informatics, Comenius University, Bratislava, Slovakia \item \Idef{org40}Faculty of Nuclear Sciences and Physical Engineering, Czech Technical University in Prague, Prague, Czech Republic \item \Idef{org41}Faculty of Science, P.J.~\v{S}af\'{a}rik University, Ko\v{s}ice, Slovakia \item 
\Idef{org42}Faculty of Technology, Buskerud and Vestfold University College, Vestfold, Norway \item \Idef{org43}Frankfurt Institute for Advanced Studies, Johann Wolfgang Goethe-Universit\"{a}t Frankfurt, Frankfurt, Germany \item \Idef{org44}Gangneung-Wonju National University, Gangneung, South Korea \item \Idef{org45}Gauhati University, Department of Physics, Guwahati, India \item \Idef{org46}Helmholtz-Institut f\"{u}r Strahlen- und Kernphysik, Rheinische Friedrich-Wilhelms-Universit\"{a}t Bonn, Bonn, Germany \item \Idef{org47}Helsinki Institute of Physics (HIP), Helsinki, Finland \item \Idef{org48}Hiroshima University, Hiroshima, Japan \item \Idef{org49}Indian Institute of Technology Bombay (IIT), Mumbai, India \item \Idef{org50}Indian Institute of Technology Indore, Indore (IITI), India \item \Idef{org51}Indonesian Institute of Sciences, Jakarta, Indonesia \item \Idef{org52}Inha University, Incheon, South Korea \item \Idef{org53}Institut de Physique Nucl\'eaire d'Orsay (IPNO), Universit\'e Paris-Sud, CNRS-IN2P3, Orsay, France \item \Idef{org54}Institut f\"{u}r Informatik, Johann Wolfgang Goethe-Universit\"{a}t Frankfurt, Frankfurt, Germany \item \Idef{org55}Institut f\"{u}r Kernphysik, Johann Wolfgang Goethe-Universit\"{a}t Frankfurt, Frankfurt, Germany \item \Idef{org56}Institut f\"{u}r Kernphysik, Westf\"{a}lische Wilhelms-Universit\"{a}t M\"{u}nster, M\"{u}nster, Germany \item \Idef{org57}Institut Pluridisciplinaire Hubert Curien (IPHC), Universit\'{e} de Strasbourg, CNRS-IN2P3, Strasbourg, France \item \Idef{org58}Institute for Nuclear Research, Academy of Sciences, Moscow, Russia \item \Idef{org59}Institute for Subatomic Physics of Utrecht University, Utrecht, Netherlands \item \Idef{org60}Institute for Theoretical and Experimental Physics, Moscow, Russia \item \Idef{org61}Institute of Experimental Physics, Slovak Academy of Sciences, Ko\v{s}ice, Slovakia \item \Idef{org62}Institute of Physics, Academy of Sciences of the Czech Republic, Prague, Czech 
Republic \item \Idef{org63}Institute of Physics, Bhubaneswar, India \item \Idef{org64}Institute of Space Science (ISS), Bucharest, Romania \item \Idef{org65}Instituto de Ciencias Nucleares, Universidad Nacional Aut\'{o}noma de M\'{e}xico, Mexico City, Mexico \item \Idef{org66}Instituto de F\'{\i}sica, Universidad Nacional Aut\'{o}noma de M\'{e}xico, Mexico City, Mexico \item \Idef{org67}iThemba LABS, National Research Foundation, Somerset West, South Africa \item \Idef{org68}Joint Institute for Nuclear Research (JINR), Dubna, Russia \item \Idef{org69}Konkuk University, Seoul, South Korea \item \Idef{org70}Korea Institute of Science and Technology Information, Daejeon, South Korea \item \Idef{org71}KTO Karatay University, Konya, Turkey \item \Idef{org72}Laboratoire de Physique Corpusculaire (LPC), Clermont Universit\'{e}, Universit\'{e} Blaise Pascal, CNRS--IN2P3, Clermont-Ferrand, France \item \Idef{org73}Laboratoire de Physique Subatomique et de Cosmologie, Universit\'{e} Grenoble-Alpes, CNRS-IN2P3, Grenoble, France \item \Idef{org74}Laboratori Nazionali di Frascati, INFN, Frascati, Italy \item \Idef{org75}Laboratori Nazionali di Legnaro, INFN, Legnaro, Italy \item \Idef{org76}Lawrence Berkeley National Laboratory, Berkeley, California, United States \item \Idef{org77}Moscow Engineering Physics Institute, Moscow, Russia \item \Idef{org78}Nagasaki Institute of Applied Science, Nagasaki, Japan \item \Idef{org79}National Centre for Nuclear Studies, Warsaw, Poland \item \Idef{org80}National Institute for Physics and Nuclear Engineering, Bucharest, Romania \item \Idef{org81}National Institute of Science Education and Research, Bhubaneswar, India \item \Idef{org82}National Research Centre Kurchatov Institute, Moscow, Russia \item \Idef{org83}Niels Bohr Institute, University of Copenhagen, Copenhagen, Denmark \item \Idef{org84}Nikhef, Nationaal instituut voor subatomaire fysica, Amsterdam, Netherlands \item \Idef{org85}Nuclear Physics Group, STFC Daresbury Laboratory, 
Daresbury, United Kingdom \item \Idef{org86}Nuclear Physics Institute, Academy of Sciences of the Czech Republic, \v{R}e\v{z} u Prahy, Czech Republic \item \Idef{org87}Oak Ridge National Laboratory, Oak Ridge, Tennessee, United States \item \Idef{org88}Petersburg Nuclear Physics Institute, Gatchina, Russia \item \Idef{org89}Physics Department, Creighton University, Omaha, Nebraska, United States \item \Idef{org90}Physics Department, Panjab University, Chandigarh, India \item \Idef{org91}Physics Department, University of Athens, Athens, Greece \item \Idef{org92}Physics Department, University of Cape Town, Cape Town, South Africa \item \Idef{org93}Physics Department, University of Jammu, Jammu, India \item \Idef{org94}Physics Department, University of Rajasthan, Jaipur, India \item \Idef{org95}Physik Department, Technische Universit\"{a}t M\"{u}nchen, Munich, Germany \item \Idef{org96}Physikalisches Institut, Ruprecht-Karls-Universit\"{a}t Heidelberg, Heidelberg, Germany \item \Idef{org97}Purdue University, West Lafayette, Indiana, United States \item \Idef{org98}Pusan National University, Pusan, South Korea \item \Idef{org99}Research Division and ExtreMe Matter Institute EMMI, GSI Helmholtzzentrum f\"ur Schwerionenforschung, Darmstadt, Germany \item \Idef{org100}Rudjer Bo\v{s}kovi\'{c} Institute, Zagreb, Croatia \item \Idef{org101}Russian Federal Nuclear Center (VNIIEF), Sarov, Russia \item \Idef{org102}Saha Institute of Nuclear Physics, Kolkata, India \item \Idef{org103}School of Physics and Astronomy, University of Birmingham, Birmingham, United Kingdom \item \Idef{org104}Secci\'{o}n F\'{\i}sica, Departamento de Ciencias, Pontificia Universidad Cat\'{o}lica del Per\'{u}, Lima, Peru \item \Idef{org105}Sezione INFN, Bari, Italy \item \Idef{org106}Sezione INFN, Bologna, Italy \item \Idef{org107}Sezione INFN, Cagliari, Italy \item \Idef{org108}Sezione INFN, Catania, Italy \item \Idef{org109}Sezione INFN, Padova, Italy \item \Idef{org110}Sezione INFN, Rome, Italy \item 
\Idef{org111}Sezione INFN, Trieste, Italy \item \Idef{org112}Sezione INFN, Turin, Italy \item \Idef{org113}SSC IHEP of NRC Kurchatov institute, Protvino, Russia \item \Idef{org114}Stefan Meyer Institut f\"{u}r Subatomare Physik (SMI), Vienna, Austria \item \Idef{org115}SUBATECH, Ecole des Mines de Nantes, Universit\'{e} de Nantes, CNRS-IN2P3, Nantes, France \item \Idef{org116}Suranaree University of Technology, Nakhon Ratchasima, Thailand \item \Idef{org117}Technical University of Ko\v{s}ice, Ko\v{s}ice, Slovakia \item \Idef{org118}Technical University of Split FESB, Split, Croatia \item \Idef{org119}The Henryk Niewodniczanski Institute of Nuclear Physics, Polish Academy of Sciences, Cracow, Poland \item \Idef{org120}The University of Texas at Austin, Physics Department, Austin, Texas, USA \item \Idef{org121}Universidad Aut\'{o}noma de Sinaloa, Culiac\'{a}n, Mexico \item \Idef{org122}Universidade de S\~{a}o Paulo (USP), S\~{a}o Paulo, Brazil \item \Idef{org123}Universidade Estadual de Campinas (UNICAMP), Campinas, Brazil \item \Idef{org124}University of Houston, Houston, Texas, United States \item \Idef{org125}University of Jyv\"{a}skyl\"{a}, Jyv\"{a}skyl\"{a}, Finland \item \Idef{org126}University of Liverpool, Liverpool, United Kingdom \item \Idef{org127}University of Tennessee, Knoxville, Tennessee, United States \item \Idef{org128}University of the Witwatersrand, Johannesburg, South Africa \item \Idef{org129}University of Tokyo, Tokyo, Japan \item \Idef{org130}University of Tsukuba, Tsukuba, Japan \item \Idef{org131}University of Zagreb, Zagreb, Croatia \item \Idef{org132}Universit\'{e} de Lyon, Universit\'{e} Lyon 1, CNRS/IN2P3, IPN-Lyon, Villeurbanne, France \item \Idef{org133}Universit\`{a} di Brescia, Brescia, Italy \item \Idef{org134}V.~Fock Institute for Physics, St. Petersburg State University, St. 
Petersburg, Russia \item \Idef{org135}Variable Energy Cyclotron Centre, Kolkata, India \item \Idef{org136}Warsaw University of Technology, Warsaw, Poland \item \Idef{org137}Wayne State University, Detroit, Michigan, United States \item \Idef{org138}Wigner Research Centre for Physics, Hungarian Academy of Sciences, Budapest, Hungary \item \Idef{org139}Yale University, New Haven, Connecticut, United States \item \Idef{org140}Yonsei University, Seoul, South Korea \item \Idef{org141}Zentrum f\"{u}r Technologietransfer und Telekommunikation (ZTT), Fachhochschule Worms, Worms, Germany \end{Authlist} \endgroup \section*{Acknowledgements} \input{acknowledgements.tex} \end{acknowledgement} \bibliographystyle{utphys} \section{Methods} A detailed description of the ALICE detector and of its performance can be found in~\cite{Aamodt:2008zz,Abelev:2014ffa}. We briefly outline the main detectors utilized for this analysis. The V0 detectors are two scintillator hodoscopes employed for triggering, background suppression and event-class determination. They are placed on either side of the interaction region at $z~=~3.3$~m and $z~=~-0.9$~m, covering the pseudorapidity regions $2.8~<~\eta~<~5.1$ and $-3.7~<~\eta~<~-1.7$, respectively. Vertex reconstruction, central-barrel tracking and charged-hadron identification are performed with the Inner Tracking System (ITS) and the Time-Projection Chamber (TPC), which are located inside a solenoidal magnet providing a 0.5~T magnetic field. The ITS is composed of six cylindrical layers of high-resolution silicon tracking detectors. The innermost layers consist of two arrays of hybrid Silicon Pixel Detectors (SPD) located at average radii 3.9 and 7.6~cm from the beam axis and covering \etaless{2.0} and \etaless{1.4}, respectively. The TPC is a large cylindrical drift detector of radial and longitudinal size of about $85~<~r~<~250$~cm and $-250~<~z~<~250$~cm, respectively. 
It provides charged-hadron identification information via ionisation energy loss in the fill gas. The data were collected in 2010 using a minimum-bias trigger requiring a hit in either the V0 scintillators or the SPD detector, in coincidence with the arrival of proton bunches from both directions. The contamination from beam-induced background is removed offline by using the timing information and correlations in the V0 and SPD detectors, as discussed in detail in~\cite{Abelev:2014ffa}. Events used for the data analysis are further required to have a reconstructed vertex within $\left|z\right|~<$~10~cm. Events containing more than one distinct vertex are tagged as pileup and are discarded. The remaining pileup fraction is estimated to be negligible, ranging from about $10^{-4}$ to $10^{-2}$ for the lowest and highest multiplicity classes, respectively. A total of about 100 million events has been utilised for the analysis. The mean pseudorapidity densities of primary charged particles \average{\ensuremath{{\rm d}N_{\rm ch}/{\rm d}\eta}\xspace} are measured at midrapidity, \etaless{0.5}, for each event class using the technique described in~\cite{ALICE:2012xs}. The \average{\ensuremath{{\rm d}N_{\rm ch}/{\rm d}\eta}\xspace} values, corrected for acceptance and efficiency, as well as for contamination from secondary particles and combinatorial background, are listed in Table~\ref{tab:multi}. The relative RMS width of the corresponding multiplicity distributions ranges from 68\% to 30\% for the lowest and highest multiplicity classes, respectively. The corresponding fractions of the INEL$>$0\xspace cross-section are also summarized in Table~\ref{tab:multi}.
Strange \ensuremath{{\rm K}^{0}_{S}}\xspace, \ensuremath{\Lambda}\xspace and \ensuremath{\overline{\Lambda}}\xspace and multi-strange \ensuremath{\Xi^{-}}\xspace, \ensuremath{\overline{\Xi}^{+}}\xspace, \ensuremath{\Omega^{-}}\xspace and \ensuremath{\overline{\Omega}^{+}}\xspace candidates are reconstructed via topological selection criteria and invariant-mass analysis of their characteristic weak decays~\cite{Agashe:2014kda}: \begin{center} \centering \addtolength{\tabcolsep}{-3pt} \begin{tabular*}{\linewidth}{rcll} \footnotesize \ensuremath{{\rm K}^{0}_{S}}\xspace & $\rightarrow$ & \ensuremath{{\pi}^{+}}\xspace + \ensuremath{{\pi}^{-}}\xspace & {\footnotesize B.R. = (69.20 $\pm$ 0.05) \%} \\ \footnotesize \ensuremath{\Lambda}(\ensuremath{\overline{\Lambda}}) & $\rightarrow$ & \ensuremath{\rm p}(\ensuremath{\overline{\rm p}}) + \ensuremath{{\pi}^{-}}(\ensuremath{{\pi}^{+}}) & {\footnotesize B.R. = (63.9 $\pm$ 0.5) \%} \\ \footnotesize \ensuremath{\Xi^{-}}(\ensuremath{\overline{\Xi}^{+}}) & $\rightarrow$ & \ensuremath{\Lambda}(\ensuremath{\overline{\Lambda}}) + \ensuremath{{\pi}^{-}}(\ensuremath{{\pi}^{+}}) & {\footnotesize B.R. = (99.887 $\pm$ 0.035) \%} \\ \footnotesize \ensuremath{\Omega^{-}}(\ensuremath{\overline{\Omega}^{+}}) & $\rightarrow$ & \ensuremath{\Lambda}(\ensuremath{\overline{\Lambda}}) + \ensuremath{{\rm K}^{-}}(\ensuremath{{\rm K}^{+}}) & {\footnotesize B.R. = (67.8 $\pm$ 0.7) \%} \\ \end{tabular*} \addtolength{\tabcolsep}{+3pt} \end{center} Details on the analysis technique are described in~\cite{Abelev:2013haa,Aamodt:2011zza,Abelev:2012jp}. The results are corrected for detector acceptance and reconstruction efficiency calculated using events from the PYTHIA6 (tune Perugia~0) MC generator~\cite{Skands:2010ak} with particle transport performed via a GEANT3~\cite{Brun:1994aa} simulation of the ALICE detector. 
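Per candidate, the invariant-mass analysis amounts to combining the daughter momenta under the respective mass hypotheses. A minimal sketch for a \ensuremath{{\rm K}^{0}_{S}}\xspace $\rightarrow$ \ensuremath{{\pi}^{+}}\xspace\ensuremath{{\pi}^{-}}\xspace candidate (the back-to-back rest-frame momenta below are an illustrative input, not ALICE data):

```python
import math

M_PI = 0.13957  # charged-pion mass in GeV/c^2

def inv_mass(p1, p2, m1, m2):
    """Invariant mass of a two-prong candidate from the daughter 3-momenta (GeV/c)."""
    e1 = math.sqrt(m1 ** 2 + sum(p ** 2 for p in p1))
    e2 = math.sqrt(m2 ** 2 + sum(p ** 2 for p in p2))
    px, py, pz = (a + b for a, b in zip(p1, p2))
    return math.sqrt((e1 + e2) ** 2 - (px ** 2 + py ** 2 + pz ** 2))

# back-to-back daughters carrying the K0s two-body decay momentum p* ~ 0.206 GeV/c:
p_star = 0.206
m = inv_mass((p_star, 0.0, 0.0), (-p_star, 0.0, 0.0), M_PI, M_PI)
# m comes out close to the K0s mass of ~0.4976 GeV/c^2
```

In the actual analysis, candidates are retained in a window around the nominal mass after the topological selections, and the residual combinatorial background is subtracted from the invariant-mass distribution.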
The contamination to \ensuremath{\Lambda}\xspace (\ensuremath{\overline{\Lambda}}\xspace) yields from weak decays of charged and neutral \ensuremath{\Xi}\xspace baryons (feed-down) is subtracted using a data-driven approach~\cite{Abelev:2013haa}. The study of systematic uncertainties follows the analysis described in~\cite{Abelev:2013haa,Aamodt:2011zza,Abelev:2012jp}. Contributions common to all event classes (\ensuremath{N_{\rm ch}}\xspace-independent) are estimated and removed to determine the remaining uncertainties which are uncorrelated across different multiplicity intervals. The main sources of systematic uncertainty and their corresponding values are summarized in Table~\ref{tab:sys}. The results on pion and proton production have been obtained following the analysis method discussed in~\cite{Adam:2015qaa}. \begin{sidewaystable*}[p] \begin{minipage}{\linewidth} \centering \caption{Event multiplicity classes, their corresponding fraction of the INEL$>$0\xspace cross-section ($\sigma/\sigma_{\rm INEL>0}$) and their corresponding \average{\ensuremath{{\rm d}N_{\rm ch}/{\rm d}\eta}\xspace} at midrapidity (\etaless{0.5}). The value of \average{\ensuremath{{\rm d}N_{\rm ch}/{\rm d}\eta}\xspace} in the inclusive (INEL$>$0\xspace) class is 5.96 $\pm$ 0.23. The uncertainties are the quadratic sum of statistical and systematic contributions and represent standard deviations. 
} \label{tab:multi} \begin{tabularx}{\textwidth}{l*{10}{Y}} \toprule Class name & I & II & III & IV & V & VI & VII & VIII & IX & X \\ \midrule $\sigma/\sigma_{\rm INEL>0}$ & 0--0.95\% & 0.95--4.7\% & 4.7--9.5\% & 9.5--14\% & 14--19\% & 19--28\% & 28--38\% & 38--48\% & 48--68\% & 68--100\% \\ \average{\ensuremath{{\rm d}N_{\rm ch}/{\rm d}\eta}\xspace} & 21.3$\pm$0.6 & 16.5$\pm$0.5 & 13.5$\pm$0.4 & 11.5$\pm$0.3 & 10.1$\pm$0.3 & 8.45$\pm$0.25 & 6.72$\pm$0.21 & 5.40$\pm$0.17 & 3.90$\pm$0.14 & 2.26$\pm$0.12 \\ \bottomrule \end{tabularx} \end{minipage} \vspace{2cm} \begin{minipage}{\linewidth} \caption{Main sources and values of the relative systematic uncertainties (standard deviations expressed in \%) of the \ensuremath{p_{\rm T}}\xspace-differential yields. The values are reported for low, intermediate and high \ensuremath{p_{\rm T}}\xspace. The sums of the contributions common to all event classes are listed separately as \ensuremath{N_{\rm ch}}\xspace-independent systematics.} \label{tab:sys} \begin{tabularx}{\textwidth}{l*{3}{Y}*{3}{Y}*{3}{Y}*{3}{Y}} \toprule Hadron & \multicolumn{3}{c}{\ensuremath{{\rm K}^{0}_{S}}\xspace} & \multicolumn{3}{c}{\ensuremath{\Lambda}\xspace(\ensuremath{\overline{\Lambda}}\xspace)} & \multicolumn{3}{c}{\ensuremath{\Xi^{-}}\xspace(\ensuremath{\overline{\Xi}^{+}}\xspace)} & \multicolumn{3}{c}{\ensuremath{\Omega^{-}}\xspace(\ensuremath{\overline{\Omega}^{+}}\xspace)} \\ \ensuremath{p_{\rm T}}\xspace (GeV/$c$\xspace) & 0.05 & 6.2 & 11.0 & 0.5 & 3.7 & 7.2 & 0.8 & 2.1 & 5.8 & 1.2 & 2.8 & 4.7 \\ \cmidrule(lr){2-4} \cmidrule(lr){5-7} \cmidrule(lr){8-10} \cmidrule(lr){11-13} Material budget & 4.0 & 4.0 & 4.0 & 4.0 & 4.0 & 4.0 & 4.0 & 4.0 & 4.0 & 4.0 & 4.0 & 4.0 \\ Transport code & \multicolumn{3}{c}{negligible} & 1.0 & 1.0 & 1.0 & 1.0 & 1.0 & 1.0 & 1.0 & 1.0 & 1.0 \\ Track selection & 1.0 & 5.0 & 0.8 & 0.2 & 5.9 & 4.3 & 0.4 & 0.3 & 2.2 & 0.8 & 0.6 & 4.1 \\ Topological selection & 2.6 & 1.1 & 2.3 & 0.8 & 0.6 & 3.2 & 3.1 & 2.0 & 4.0 & 5.0 & 5.6 & 
8.1 \\ Particle identification & 0.1 & 0.1 & 0.1 & 0.2 & 0.2 & 3.0 & 1.0 & 0.2 & 1.2 & 1.1 & 1.7 & 3.2 \\ Efficiency determination & 2.0 & 2.0 & 2.0 & 2.0 & 2.0 & 2.0 & 2.0 & 2.0 & 2.0 & 2.0 & 2.0 & 2.0 \\ Signal extraction & 1.5 & 1.2 & 3.6 & 0.6 & 0.7 & 3.0 & 1.5 & 0.2 & 1.0 & 3.2 & 2.5 & 2.3 \\ Proper lifetime & 1.3 & 0.1 & 0.2 & 0.3 & 2.3 & 0.1 & 0.9 & 0.1 & 0.1 & 2.2 & 0.7 & 0.7 \\ Competing decay rejection & negl. & 0.7 & 1.3 & negl. & 1.0 & 6.2 & \multicolumn{3}{c}{not applicable} & 0.2 & 4.2 & 5.2 \\ Feed-down correction & \multicolumn{3}{c}{not applicable} & 3.3 & 2.1 & 4.3 & \multicolumn{3}{c}{negligible} & \multicolumn{3}{c}{negligible} \\ \cmidrule(lr){2-4} \cmidrule(lr){5-7} \cmidrule(lr){8-10} \cmidrule(lr){11-13} Total & 5.6 & 6.9 & 6.4 & 5.8 & 8.2 & 11.2 & 5.9 & 5.0 & 6.7 & 7.9 & 9.0 & 12.1 \\ Common (\ensuremath{N_{\rm ch}}\xspace-independent) & 5.0 & 5.9 & 4.4 & 5.4 & 7.8 & 9.9 & 5.2 & 4.5 & 6.2 & 7.3 & 8.7 & 11.6 \\ \bottomrule \end{tabularx} \end{minipage} \end{sidewaystable*}
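The totals quoted in Table~\ref{tab:sys} are obtained by adding the per-source contributions in quadrature. A minimal cross-check for the \ensuremath{{\rm K}^{0}_{S}}\xspace column at $\ensuremath{p_{\rm T}}\xspace = 0.05$~GeV/$c$ (values transcribed from the table; ``negligible'' and ``not applicable'' entries dropped):

```python
import math

def quad_sum(components):
    """Total relative uncertainty (%) as the sum in quadrature of the components."""
    return math.sqrt(sum(c ** 2 for c in components))

# K0s at pT = 0.05 GeV/c: material budget, track and topological selection,
# particle identification, efficiency, signal extraction, proper lifetime
k0s_low_pt = [4.0, 1.0, 2.6, 0.1, 2.0, 1.5, 1.3]
total = quad_sum(k0s_low_pt)  # ~5.6 %, matching the quoted total
```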
\section{Motivation and relation to previous work} The framework of \emph{Sesqui-Pushout (SqPO) rewriting} has been introduced relatively recently in~\cite{Corradini_2006} as a novel alternative to the pre-existing algebraic graph transformation frameworks known as \emph{Double-Pushout (DPO)}~\cite{Ehrig1973,Ehrig1991,DBLP:conf/gg/CorradiniMREHL97,lack2005adhesive} and \emph{Single-Pushout (SPO) rewriting}~\cite{Kennaway,Loewe_1993,Loewe_2014}. In the setting of the rewriting of graph-like structures, the distinguishing feature of the aforementioned DPO-type rewriting is that the deletion of vertices with incident edges is only possible if the incident edges are explicitly deleted via the application of the rewriting rule. In contrast, in both the SqPO and the SPO rewriting setups, \emph{``deletion in unknown context''} is implementable. Thus for practical applications of rewriting, in particular in view of the modeling of stochastic rewriting systems, the S(q)PO rewriting semantics provide an important additional option for the practitioners, and will thus in particular complement the existing DPO-type associative rewriting and rule algebra framework as introduced in~\cite{bdg2016,bp2018}. Referring the interested readers to~\cite{Loewe_2015} for a recent review and further conceptual details of the three approaches, suffice it here to quote that SqPO and SPO rewriting via linear rules\footnote{While non-linear rules in SqPO rewriting have interesting applications in their own right (permitting e.g.\ the cloning and fusing of vertices in graphs), this most general case is left for future work.} (defined as monic spans) and along monomorphic matches effectively encode the same semantics of rewriting. 
We chose (by the preceding argument without loss of expressivity) to develop the theory of associative rewriting within the SqPO rather than the SPO setting, since the SqPO framework bears certain close technical similarities to the DPO rewriting framework, which proved crucial in finding a strategy for the highly intricate proofs of the concurrency and associativity theorems presented in this paper. While it is well-known (see e.g.\ Section~5.1 of~\cite{Corradini_2006}) that DPO- and SqPO-type semantics coincide for certain special classes of linear rules (essentially rules that do not delete vertices), and while these cases might provide some valuable cross-checks of technical results to the experts, SqPO-type semantics is in its full generality a considerably more intricate variant of semantics due to its inherent ``mixing'' of pushouts with final pullback complements. It should further be noted that we must impose a set of additional assumptions on the underlying adhesive categories (see Assumption~\ref{ass:SqPO}) in order to ensure certain technical properties necessary for our concurrency and associativity theorems to hold. To the best of our knowledge, apart from some partial results in the direction of developing a concurrency theorem for SqPO-type rewriting in~\cite{Corradini_2006,Loewe_2015,Corradini2018}, prior to this work neither of the aforementioned theorems had been available in the SqPO framework.\\ Associativity of SqPO rewriting theories plays a pivotal role in our development of a novel form of concurrent semantics for these theories, the so-called \emph{SqPO-type rule algebras}. Previous work on associative DPO-type rewriting theories~\cite{bdg2016,bdgh2016,bp2018} (see also~\cite{bp2019-ext}) has led to a category-theoretical understanding of associativity that may be suitably extended to the SqPO setting. 
In contrast to the traditional and well-established formalisms of concurrency theory for rewriting systems (see e.g.~\cite{DBLP:conf/gg/1997handbook,ehrig2004adhesive,EHRIG:2014ma,Corradini2018} for DPO-type semantics and~\cite{Corradini_2006,Corradini2018} for a notion of parallel independence and a Local Church-Rosser theorem for SqPO-rewriting of graphs), wherein the focus of the analysis is mostly on \emph{derivation traces} and their sequential independence and parallelism properties, the focus of our rule-algebraic approach differs significantly: we propose instead to put \emph{sequential compositions of linear rules} at the center of the analysis (rather than the derivation traces), and moreover to employ a vector-space based semantics in order to encode the non-determinism of such rule compositions. It is for this reason that the concurrency theorem plays a quintessential role in our rule algebra framework, in that it encodes the relationship between sequential compositions of linear rules and derivation traces, which in turn gives rise to the so-called \emph{canonical representations} of the rule algebras (see Section~\ref{sec:ACDrd}). This approach in particular permits one to uncover certain combinatorial properties of rewriting systems that would otherwise not be accessible. While undoubtedly not a standard technique in the realm of theoretical computer science, certain special examples of rule algebras are ubiquitous in many areas of applied mathematics and theoretical physics. The most famous such example concerns the so-called \emph{Heisenberg-Weyl algebra} (see e.g.\ \cite{blasiak2005boson,blasiak2010combinatorial,blasiak2011combinatorial}), which is well-known to possess a representation in terms of the formal multiplication operator $\hat{x}$ and the differentiation operator $\partial_x$ on formal power series in the formal variable $x$, with $\hat{x}\,x^n:=x^{n+1}$ and $\partial_x$ acting as the derivative.
Referring the interested readers to Example~\ref{ex:repInt} (see also~\cite{bp2018,bdg2019}) for the precise details, it transpires that the monomials $x^n$ (for $n$ a non-negative integer) are found to be in one-to-one correspondence with \emph{graph states} associated to $n$-vertex discrete graphs, while $\hat{x}$ and $\partial_x$ may be understood as the \emph{canonical representations} of the discrete graph rewriting rules of creation and deletion of vertices. It will thus come as no surprise that considering more general rewriting rules than those of discrete graphs will lead to a very substantial generalization of these traditional results and techniques.\\ From the very beginning of the development of the rule algebra framework~\cite{bdg2016}, one of our main motivations has been the study of stochastic rewriting systems, whence of continuous-time Markov chains (CTMCs) based upon DPO- or SqPO-type rewriting rules. While previously in particular applications of stochastic SqPO-type rewriting systems have played a role predominantly in highly specialized settings such as e.g.\ the formulation of the biochemical reaction system framework known as \emph{\textsc{Kappa}}~\cite{Danos_2010,danos2004formal,danos2008rule,danos2012graphs}, our novel approach of formulating such systems in terms of associative unital rule algebras may very well open this versatile modeling technique to many other areas of applied research. In conjunction with our previously developed DPO-type framework in~\cite{bp2018}, one could argue that our \emph{stochastic mechanics frameworks} are in a certain sense a \emph{universal construction}, in that once a semantics for associative unital rewriting is provided, the steps necessary to obtain the associated CTMCs are clearly formalized. 
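The Heisenberg-Weyl representation recalled above can be checked concretely by truncating the polynomial space to degree $< N$ and letting $\hat{x}$ and $\partial_x$ act on coefficient vectors; the canonical commutation relation $[\partial_x, \hat{x}] = \mathbb{1}$ then holds exactly away from the truncation boundary. A minimal sketch (plain Python; the truncation order $N$ is our illustrative choice):

```python
N = 6  # truncate to polynomials of degree < N, basis {1, x, ..., x^{N-1}}

def xhat(c):
    """Multiplication operator: x . x^n = x^{n+1} (top coefficient truncated)."""
    return [0.0] + c[:-1]

def ddx(c):
    """Derivative operator: d/dx x^n = n x^{n-1}."""
    return [(n + 1) * c[n + 1] for n in range(N - 1)] + [0.0]

def commutator(f, g, c):
    """Apply [f, g] = f.g - g.f to a coefficient vector c."""
    return [a - b for a, b in zip(f(g(c)), g(f(c)))]

# [d/dx, x] acts as the identity on polynomials of degree < N - 1
c = [1.0, 2.0, 3.0, 0.0, 0.0, 0.0]  # the polynomial 1 + 2x + 3x^2
```

In the rule-algebraic reading, the basis vector for $x^n$ plays the role of the graph state of the $n$-vertex discrete graph, and `xhat` and `ddx` are the canonical representations of the vertex-creation and vertex-deletion rules.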
It is interesting to compare the traditional approaches to stochastic rewriting systems with CTMC semantics such as~\cite{Heckel_2004,Heckel2012} in the DPO- and~\cite{Heckel2005} in the SPO-settings to our present reformulation in terms of rule algebras. The former approaches yet again tend to focus on derivation traces of stochastic rewriting systems, while our rule-algebraic approach aims to extract dynamical information from stochastic rewriting systems via analysis of certain combinatorial relationships (so-called nested commutators) of the infinitesimal generator of the CTMC with the (pattern-counting) observables of the system. It is via these relations that one may in certain cases obtain \emph{exact closed-form solutions} for such dynamical data (see e.g.\ Section~\ref{sec:appEx}). It would nevertheless be an intriguing avenue for future research to understand better the finer points of the ``traditional'' stochastic rewriting frameworks (which also feature sophisticated developments in terms of probabilistic model-checking and various types of stochastic logics), and furthermore whether or not rule-algebraic techniques might be of interest also in more general stochastic rewriting semantics such as probabilistic (timed) graph transformations~\cite{Krause2012,Maximova2018}.\\ \noindent\textbf{Structure of the paper:}~In Section~\ref{sec:Adh}, some category-theoretical background material is provided. The key results of associativity and concurrency of SqPO rewriting are presented in Section~\ref{ec:SqPO}, followed by the construction of SqPO-type rule algebras in Section~\ref{sec:ACDrd}. The second part of the paper contains the stochastic mechanics framework (Section~\ref{sec:SM}) as well as a practical application example (Section~\ref{sec:appEx}). Technical proofs are situated in the Appendix. 
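Before proceeding, the CTMC semantics discussed above can be illustrated on the simplest stochastic rewriting system of this kind: discrete-graph vertex creation at a constant rate and vertex deletion at a per-vertex rate. A minimal Gillespie-type simulation sketch of the vertex-count jump process (the rates, horizon and seed below are illustrative assumptions):

```python
import random

def simulate(k_plus, k_minus, t_max, n0=0, seed=1):
    """Trajectory of the vertex count N_t of a discrete graph under SqPO rewriting.

    Creation fires at total rate k_plus (a single match of the empty pattern),
    deletion at total rate k_minus * N (one match per existing vertex).
    """
    rng = random.Random(seed)
    t, n, traj = 0.0, n0, [(0.0, n0)]
    while t < t_max:
        rate = k_plus + k_minus * n      # total activity of the generator
        t += rng.expovariate(rate)       # exponential waiting time
        if rng.random() < k_plus / rate:
            n += 1                       # apply the creation rule
        else:
            n -= 1                       # apply the deletion rule
        traj.append((t, n))
    return traj

traj = simulate(k_plus=5.0, k_minus=1.0, t_max=200.0)
```

At late times the vertex count fluctuates around the well-known Poisson stationary mean $\kappa_+/\kappa_-$, which is one of the closed-form results recoverable via the nested-commutator technique of Section~\ref{sec:SM}.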
\section{Background: adhesive categories and final pullback complements} \label{sec:Adh} We recall some of the elementary definitions and properties related to the notions of adhesive categories, upon which our framework will rely. \begin{definition}[\cite{lack2005adhesive}]\label{def:adhCats} A category $\bfC$ is said to be \textbf{\emph{adhesive}} if \begin{enumerate} \item $\bfC$ has pushouts along monomorphisms, \item $\bfC$ has pullbacks, \item pushouts along monomorphisms are van Kampen (VK) squares. \end{enumerate} The last property entails that in a commutative cube as in~\eqref{eq:diags} on the left where the bottom square is a pushout, this square is a VK square if and only if whenever the back and right vertical faces are pullbacks, then the top square is a pushout if and only if the front and left vertical squares are pullbacks. \end{definition} \begin{equation}\label{eq:diags} \vcenter{\hbox{\includegraphics[scale=0.7]{images/VKcube.pdf}}}\qquad \quad \vcenter{\hbox{\includegraphics[scale=0.8]{images/effectiveUnionsIllustration.pdf}}}\qquad \quad \vcenter{\hbox{\includegraphics[scale=0.8]{images/FPCillustration.pdf}}} \end{equation} We will be exclusively interested in categories that satisfy certain finiteness properties (in order to ensure finiteness of the sets of matches for rule applications and compositions, see Section~\ref{ec:SqPO}): \begin{definition}[Finitary categories~\cite{Braatz:2010aa}] A category $\bfC$ is said to be \emph{finitary} if every object $X\in \obj{\bfC}$ has only finitely many subobjects (i.e.\ if there only exist finitely many monomorphisms $Y\rightarrow X$ up to isomorphism for every $X\in \obj{\bfC}$). For every adhesive category $\bfC$, the restriction to finite objects of $\bfC$ defines a full subcategory $\bfC_{fin}$ called the \emph{finitary restriction} of $\bfC$. 
\end{definition} \begin{theorem}[Finitary restrictions; \cite{Braatz:2010aa}, Thm.~4.6] The \emph{finitary restriction} $\bfC_{fin}$ of any adhesive category $\bfC$ is a \emph{finitary adhesive category}. \end{theorem} Adhesive categories have been introduced and advocated in~\cite{lack2005adhesive} as a framework for rewriting due to their numerous useful properties, some of which are listed in Appendix~\ref{app:lemList} for the reader's convenience. One of the central concepts in the theory of SqPO rewriting is the following: \begin{definition}[Final Pullback Complement (FPC); \cite{Corradini_2006,Loewe_2015}] Let $\bfC$ be a category. Given a commutative diagram as in~\eqref{eq:diags} on the right, a pair of morphisms $(d,b)$ is a \emph{final pullback complement (FPC)} of a pair $(c,a)$ if (i) $(a,b)$ is a pullback of $(c,d)$ (i.e.\ if the square marked $(B)$ is a pullback square), and (ii) for each collection of morphisms $(x, y, z, w)$ as in~\eqref{eq:diags} on the right, where $(x,y)$ is pullback of $(c, z)$ and where $a\circ w=x$, there exists a unique morphism $w^{*}$ with $d\circ w^{*}=z$ and $w^{*}\circ y=b\circ w$. \end{definition} For our associative rewriting framework, it will be crucial to work with a category in which (i) FPCs are guaranteed to exist when constructing them for composable pairs of monomorphisms, and (ii) monomorphisms are stable under FPCs, i.e.\ FPCs of pairs of monomorphisms are given by pairs of monomorphisms. 
This property is satisfied by adhesive categories (cf.\ Lemma~\ref{lem:FPCfacts} of Appendix~\ref{app:lemList}). To the best of our knowledge, however, the question of which more general types of categories possess this property has not yet been investigated to the same level of generality as the analogous classification problems in the case of DPO rewriting, even though there exists a large body of work on classes of categories that admit SqPO constructions~\cite{Corradini_2006,lowe2015single,Loewe_2015,Corradini_2015,Loewe_2018}. Within these classes, according to~\cite{Cockett_2003,Corradini_2015}, the existence of FPCs is guaranteed for categories that possess a so-called $\cM$-partial map classifier. However, it appears to be an open question whether the statement of Lemma~\ref{lem:FPCfacts} on stability of monomorphisms under FPCs may be generalized to the setting of $\cM$-adhesive categories, where $\cM$ is a class of monomorphisms. Relegating such questions to future work, we refer to Lemma~\ref{lem:GraphFPC} of Appendix~\ref{app:lemList} for a well-known instantiation of a suitable categorical setting from the SqPO literature, namely the category $\mathbf{FinGraph}$ of finite directed multigraphs, which also serves to illustrate the FPC construction. \begin{asmptn}\label{ass:SqPO} $\bfC$ is an adhesive category in which all FPCs along monomorphisms exist, and in which monomorphisms are stable under FPCs. \end{asmptn} \section{Sesqui-Pushout rewriting} \label{ec:SqPO} We will now develop a framework for \emph{Sesqui-Pushout (SqPO) rewriting} in the setting of a category $\bfC$ satisfying Assumption~\ref{ass:SqPO}, in close analogy to the framework of associative Double-Pushout (DPO) rewriting introduced in~\cite{bp2018,bp2019-ext}. 
Unlike in the general setting of SqPO rewriting, we will thus be able to not only prove a \emph{concurrency theorem} (Section~\ref{sec:SqPOconc}), but also an \emph{associativity property} of the SqPO-type rule composition (Section~\ref{sec:SqPOassoc}). \subsection{Concurrent composition and concurrency theorem}\label{sec:SqPOconc} For reasons that will become more transparent when introducing the SqPO-type rule algebra framework starting from Section~\ref{sec:ACDrd}, we opt for a non-standard convention of reading spans of monomorphisms ``from right to left'' (rather than the traditional ``left to right''), which is why we will speak of ``input'' and ``output'' of rules rather than ``left-'' and ``right hand sides'' to avoid confusion. \begin{definition}[SqPO-type rewriting; compare \cite{Corradini_2006}, Def.~4] \label{def:SqPOr} Let $\bfC$ be an adhesive category satisfying Assumption~\ref{ass:SqPO}. Denote by $\Lin{\bfC}$ the set of (isomorphism classes\footnote{Two productions $O\leftarrow K\rightarrow I$ and $O'\leftarrow K'\rightarrow I'$ are defined to be isomorphic if there exist isomorphisms $I\rightarrow I'$, $K\rightarrow K'$ and $O\rightarrow O'$ that make the obvious diagram commute; we will not distinguish between isomorphic productions. As natural in this category-theoretical setting, the constructions presented in the following are understood as defined up to such isomorphisms.} of) so-called \emph{linear productions}, defined as the set of spans of monomorphisms, \begin{equation} \Lin{\bfC}:=\{p\equiv (O\xleftarrow{o}K\xrightarrow{i}I)\mid o,i\in \mono{\bfC}\}\diagup_{\cong}\,. \end{equation} Given an object $X\in \obj{\bfC}$ and a linear production $p\in \Lin{\bfC}$, we denote the \emph{set of SqPO-admissible matches} $\sqMatch{p}{X}$ as the set of monomorphisms $m:I\rightarrow X$. 
Then the diagram below is constructed by taking the \emph{final pullback complement} marked $\mathsf{FPC}$ followed by taking the pushout marked $\mathsf{PO}$: \begin{equation}\label{eq:DPOr}\gdef\mycdScale{0.85} \begin{mycd} O \ar[d,"{m^{*}}"'] & K \ar[l,"o"']\ar[r,"i"]\ar[d,"k"'] \ar[dl,phantom, "{\mathsf{PO}}"]\ar[dr,phantom,"{\mathsf{FPC}}"] & I \ar[d,"m"] \\ {X'} & {\overline{K}} \ar[l,"o'"]\ar[r,"i'"'] & X\\ \end{mycd} \end{equation} We write $p_m(X):=X'$ for the object ``produced'' by the above diagram. The process is called \emph{(SqPO-) derivation} of $X$ along production $p$ and admissible match $m$, and denoted $p_m(X)\xLeftarrow[p,m]{{\tiny SqPO}} X$. \end{definition} Next, a notion of sequential composition of productions is introduced: \begin{definition}[SqPO-type concurrent composition]\label{def:SqPOcomp} Let $p_1,p_2\in \Lin{\bfC}$ be two linear productions. Then an overlap of the output object $O_1$ of $p_1$ with the input object $I_2$ of $p_2$, encoded as a span \[ {\color{h1color}\mathbf{m}}=(I_2{\color{h1color}\xleftarrow{m_2} M_{21}\xrightarrow{m_1}}O_1) \] with $m_1,m_2\in \mono{\bfC}$, is called an \emph{SqPO-admissible match of $p_2$ into $p_1$}, denoted $\mathbf{m}\in \sqMatch{p_2}{p_1}$, if the square marked $\mathsf{POC}$ in~\eqref{eq:SqPOccomp} is constructible as a pushout complement (with the cospan $I_2\xrightarrow{n_2}N_{21}\xleftarrow{n_1}O_1$ obtained by taking the pushout marked ${\color{h1color}\mathsf{PO}}$). 
In this case, the remaining parts of the diagram are formed by taking the final pullback complement marked $\mathsf{FPC}$ and the pushouts marked $\mathsf{PO}$: \begin{equation}\label{eq:SqPOccomp} \begin{mycd} O_2\ar[d,"n_2^{*}"'] & K_2 \ar[l,"o_2"']\ar[r,"i_2"]\ar[d,"k_2"'] \ar[dl,phantom,"\mathsf{PO}"] & I_2 \ar[dr,h1color,bend right,"n_2"]\ar[dl,phantom,"\mathsf{FPC}"] & {\color{h1color}M_{21}} \ar[l,h1color,"m_2"']\ar[r,h1color,"m_1"]\ar[d,h1color,phantom,"\mathsf{PO}"] & O_1 \ar[dl,h1color,bend left,"n_1"']\ar[dr,phantom,"\mathsf{POC}"]& K_1 \ar[l,"o_1"']\ar[r,"i_1"]\ar[d,"k_1"] \ar[dr,phantom,"\mathsf{PO}"] & I_1\ar[d,"n_1^{*}"]\\ {\color{h2color}O_{21}} & \overline{K}_2\ar[l,"o_2'"']\ar[rr,"i_2'"] & & {\color{h1color}N_{21}} & & \overline{K}_1\ar[ll,"o_1'"']\ar[r,"i_1'"] & {\color{h2color}I_{21}}\\ &&& {\color{h2color} K_{21}} \ar[u,phantom,h2color,"\mathsf{PB}"] \ar[ull,bend left=10,dotted,h2color,"i_2''"'] \ar[ulll,bend left=10,h2color,near end,"o_{21}=o_2'\circ i_2''"] \ar[urr,bend right=10,dotted,h2color,"o_1''"] \ar[urrr,bend right=10,h2color,near end,"i_{21}=o_1''\circ i_1'"'] &&& \end{mycd} \end{equation} If $\mathbf{m}\in \sqMatch{p_2}{p_1}$, we write $\sqComp{p_2}{\mathbf{m}}{p_1}\in \Lin{\bfC}$ for the \emph{composite} of $p_2$ with $p_1$ along the admissible match $\mathbf{m}$, defined as \begin{equation}\label{eq:defSqPOcomp} \begin{aligned} \sqComp{p_2}{\mathbf{m}}{p_1}&\equiv {\color{h2color}(O_{21}\xleftarrow{o_{21}}K_{21}\xrightarrow{i_{21}}I_{21})} \,. 
\end{aligned} \end{equation} \end{definition} Due to stability of monomorphisms under pushouts, pullbacks and FPCs in the setting of a category satisfying Assumption~\ref{ass:SqPO}, all morphisms in Definitions~\ref{def:SqPOr} and~\ref{def:SqPOcomp} are guaranteed to be monomorphisms, whence in particular the span $\sqComp{p_2}{\mathbf{m}}{p_1}$ is a span of monomorphisms and thus indeed an element of $\Lin{\bfC}$.\\ At first sight, it might appear puzzling that in the definition of the SqPO-type rule composition, the right-hand part of~\eqref{eq:SqPOccomp} involves a \emph{pushout complement} (marked $\mathsf{POC}$), while the left-hand part of the diagram in~\eqref{eq:SqPOccomp} features a \emph{final pullback complement} (marked $\mathsf{FPC}$). Intuitively, considering the case of graph rewriting for concreteness, in a given sequential application of two productions, the application of the first production may lead to implicit edge deletions, but the second production cannot causally interact with edges deleted by the first production. In contrast, the second production may very well implicitly delete edges present in the output object of the first production, which explains the presence of the FPC in the defining equation~\eqref{eq:SqPOccomp}. We refer the interested reader to~\cite{bdgh2016} for further intuitions attainable in terms of so-called rule diagrams for presenting rule compositions. The justification for Definition~\ref{def:SqPOcomp} in the general case is provided via the following \emph{concurrency theorem}. Even though the ``synthesis'' part of this theorem has been anticipated, at least in certain specialized settings, in~\cite{Loewe_2015} (where it is also remarked that a full concurrency theorem for SqPO rewriting might be attainable), the following result appears to be new. 
\begin{theorem}[SqPO-type Concurrency Theorem]\label{thm:SqPOconcur} Let $\bfC$ be an adhesive category satisfying Assumption~\ref{ass:SqPO}. Let $p_1,p_2\in \Lin{\bfC}$ be two linear rules and $X_0\in \obj{\bfC}$ an object. \begin{itemize} \item \textbf{Synthesis:} Given a two-step sequence of SqPO derivations \[ X_2\xLeftarrow[p_2,m_2]{{\tiny SqPO}} X_1\xLeftarrow[p_1,m_1]{{\tiny SqPO}}X_0\,, \] with $X_1:=p_{1_{m_1}}(X_0)$ and $X_2:=p_{2_{m_2}}(X_1)$, there exists an SqPO-composite rule $q=\sqComp{p_2}{\mathbf{n}}{p_1}$ for a unique $\mathbf{n}\in \sqRMatch{p_2}{p_1}$, and a unique SqPO-admissible match $n\in \sqMatch{q}{X_0}$, such that \[ q_n(X_0)\xLeftarrow[q,n]{{\tiny SqPO}} X_0\qquad \text{and}\qquad q_n(X_0)\cong X_2\,. \] \item \textbf{Analysis:} Given an SqPO-admissible match $\mathbf{n}\in \sqRMatch{p_2}{p_1}$ of $p_2$ into $p_1$ and an SqPO-admissible match $n\in \sqMatch{q}{X_0}$ of the SqPO-composite $q=\sqComp{p_2}{\mathbf{n}}{p_1}$ into $X_0$, there exists a unique pair of SqPO-admissible matches $m_1\in \sqMatch{p_1}{X_0}$ and $m_2\in \sqMatch{p_2}{X_1}$ with $X_1:=p_{1_{m_1}}(X_0)$ and $X_2:=p_{2_{m_2}}(X_1)$ such that \[ X_2\xLeftarrow[p_2,m_2]{{\tiny SqPO}} X_1 \xLeftarrow[p_1,m_1]{{\tiny SqPO}} X_0\qquad \text{and}\qquad X_2\cong q_n(X_0)\,. \] \end{itemize} \begin{proof} See Appendix~\ref{app:SqPOconcur}. \end{proof} \end{theorem} \subsection{Composition and associativity}\label{sec:SqPOassoc} The following theorem establishes that, in analogy to the DPO rewriting setting of~\cite{bp2018}, the sesqui-pushout variant of rule composition also possesses a form of associativity property. \begin{theorem}[SqPO-type associativity theorem]\label{thm:SqPOassoc} Let $\bfC$ be an adhesive category satisfying Assumption~\ref{ass:SqPO}. 
Then the SqPO-composition operation $\sqComp{.}{.}{.}$ on linear productions of $\bfC$ is \emph{associative} in the following sense: given linear productions $p_1,p_2,p_3\in \Lin{\bfC}$, there exists a bijective correspondence between pairs of SqPO-admissible matches $(\mathbf{m}_{21},\mathbf{m}_{3(21)})$ and $(\mathbf{m}_{32},\mathbf{m}_{(32)1})$ such that \begin{equation}\label{eq:THMassoc} \sqComp{p_3}{\mathbf{m}_{3(21)}}{\left(\sqComp{p_2}{\mathbf{m}_{21}}{p_1}\right)}\; \cong \; \sqComp{\left(\sqComp{p_3}{\mathbf{m}_{32}}{p_2}\right)}{\mathbf{m}_{(32)1}}{p_1}\,. \end{equation} \begin{proof} Intuitively, the associativity property in the SqPO case manifests itself in a form entirely analogous to the DPO case~\cite{bp2018}, whereby the data provided along the path highlighted in orange below allows one to uniquely compute the data provided along the path highlighted in blue and vice versa (with both sets of overlaps computing the same ``triple composite'' production, encoded as the composition of the three spans in the bottom front row): \begin{equation} \vcenter{\hbox{\includegraphics[scale=0.4,page=1]{images/newAssocProof-final.pdf}}} \end{equation} See Appendix~\ref{app:SqPOassoc} for the precise technical details of the proof. \end{proof} \end{theorem} We invite the interested reader to compare the SqPO-type constructions presented here against those contained in the extended journal version~\cite{bp2019-ext} of~\cite{bp2018} for the DPO framework, as this may lend some intuition for the otherwise very abstract nature of the proofs. 
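Although the proof is necessarily abstract, the associativity property may be sanity-checked computationally in the simplest special case of \emph{discrete} graph rewriting rules, i.e.\ rules whose objects are finite sets of vertices without edges. The following Python sketch is our own illustration, and its encodings are assumptions of the sketch rather than part of the formal development: a rule $\bullet^{\uplus\:p}\leftarrow\varnothing\rightarrow\bullet^{\uplus\:q}$ is encoded as the integer pair $(p,q)$, and an overlap of the $q$-vertex input of one rule with the $r$-vertex output of another is a partial injection between the corresponding vertex sets, counted here by brute force:

```python
from itertools import combinations, permutations
from collections import defaultdict

def num_partial_injections(q, r, k):
    """Brute-force count of injective partial maps with a k-element
    domain from a q-element set into an r-element set."""
    return sum(1 for _ in combinations(range(q), k)
                 for _ in permutations(range(r), k))

def compose(a, b):
    """Rule algebra product of linear combinations of discrete rules.

    A basis rule (p vertices <- empty -> q vertices) is encoded as the
    key (p, q); values are real coefficients. compose(a, b) composes
    each rule of `a` (playing the role of p_2) with each rule of `b`
    (playing the role of p_1), summing over all overlaps."""
    result = defaultdict(float)
    for (p, q), ca in a.items():
        for (r, s), cb in b.items():
            for k in range(min(q, r) + 1):
                result[(p + r - k, q + s - k)] += \
                    ca * cb * num_partial_injections(q, r, k)
    return dict(result)

# Associativity check on sample linear combinations of discrete rules:
x = {(2, 1): 1.0}
y = {(1, 2): 3.0, (0, 1): 1.0}
z = {(2, 2): 2.0}
assert compose(compose(x, y), z) == compose(x, compose(y, z))
print("associativity verified on the sample elements")
```

Such a check is of course no substitute for the theorem, but it makes the bijective correspondence of overlaps tangible in a setting where all matches can be enumerated explicitly.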
\section{From associativity to SqPO-type rule algebras} \label{sec:ACDrd} For the rule algebra constructions, we will require an additional structure: \begin{definition}[Initial objects] An object $\varnothing\in \obj{\bfC}$ of some category $\bfC$ is said to be a \emph{strict initial object} if for every object $X\in \obj{\bfC}$, there exists a unique morphism $\varnothing\rightarrow X$, and if any morphism $X\rightarrow \varnothing$ must be an isomorphism. \end{definition} For example, the category $\mathbf{Graph}$ and its finitary restriction $\mathbf{FinGraph}$ possess a strict initial object (the empty graph). For the experts, it appears worthwhile noting the following result: \begin{lemma}[Extensive categories; \cite{lack2005adhesive}, Lem.~4.1] An adhesive category $\bfC$ is an \emph{extensive category}\footnote{For the purposes of this paper, it suffices to consider the ``if'' direction as a definition of extensivity, since the relevant structure to our constructions is that of having a strict initial object (see e.g.~\cite{lack2005adhesive} for the precise definition of extensivity).} if and only if it possesses a strict initial object. \end{lemma} \begin{asmptn}[Prerequisites for SqPO-type rule algebras]\label{ass:RAsqpo} We assume that $\bfC$ is an adhesive category satisfying Assumption~\ref{ass:SqPO}, and which is in addition \emph{finitary} and possesses a strict initial object $\varnothing\in \obj{\bfC}$. 
\end{asmptn} \begin{definition}[SqPO-type rule algebras] Let $\delta:\Lin{\bfC}\rightarrow \cR_{\bfC}$ be defined as an isomorphism from $\Lin{\bfC}$ to the basis of a free $\bR$-vector space $\cR_{\bfC}\equiv(\cR_{\bfC},+,\cdot)$, such that\footnote{Recall that for a set $A$, the notation $span_{\bR}(\{e(a)\mid a\in A\})$ entails to ``take the $\bR$-span over basis vectors $e(a)$ indexed by elements of $A$'', i.e.\ elements of the resulting $\bR$-vector space are (finite) linear combinations of the basis vectors $e(a)$ with real coefficients .} \begin{equation} \cR_{\bfC}:=span_{\bR}(\{\delta(p)\mid p\in \Lin{\bfC}\})\,. \end{equation} In order to clearly distinguish between elements of $\Lin{\bfC}$ and basis vectors of $\cR_{\bfC}$, we introduce the notation \begin{equation} (\grule{O}{p}{I}):=\delta\left(O\xleftarrow{o}K\xrightarrow{i}I\right)\,. \end{equation} Define the \emph{SqPO rule algebra product} $\odot_{\cR_{\bfC}}$ on a category $\bfC$ that satisfies Assumption~\ref{ass:RAsqpo} as the binary operation \begin{equation} \odot_{\cR_{\bfC}}:\cR_{\bfC}\times \cR_{\bfC}\rightarrow \cR_{\bfC}:(R_2,R_1)\mapsto R_2\odot_{\cR_{\bfC}} R_1\,, \end{equation} where for two basis vectors $R_i=\delta(p_i)$ encoding the linear rules $p_i\in Lin(\bfC)$ ($i=1,2$), \begin{equation}\label{eq:defRcompSqPO} R_2\odot_{\cR_{\bfC}}R_1 :=\sum_{\mathbf{m}\in \sqRMatch{p_2}{p_1}}\delta\left(\sqComp{p_2}{\mathbf{m}}{p_1}\right)\,. \end{equation} The definition is extended to arbitrary (finite) linear combinations of basis vectors by bilinearity, whence for $p_i,p_j\in \Lin{\bfC}$ and $\alpha_i,\beta_j\in \bR$, \begin{equation} \left(\sum_i \alpha_i\cdot\delta(p_i)\right)\odot_{\cR_{\bfC}}\left(\sum_j\beta_j\cdot \delta(p_j)\right):=\sum_{i,j}(\alpha_i\cdot\beta_j)\cdot \left(\delta(p_i)\odot_{\cR_{\bfC}}\delta(p_j)\right)\,. 
\end{equation} We call $\cR^{sq}_{\bfC}\equiv(\cR_{\bfC},\odot_{\cR_{\bfC}})$ the \textbf{\emph{SqPO-type rule algebra}} over the finitary adhesive and extensive category $\bfC$. \end{definition} The rule algebra product $R_2\odot_{\cR_{\bfC}}R_1$ for $R_j=\delta(r_j)$ ($j=1,2$) thus encodes the non-determinism in the SqPO-type sequential composition of the linear rule $r_2$ with $r_1$ in terms of the ``sum over all possible compositions''. As the following example illustrates, since $\delta$ is defined to map from \emph{isomorphism classes} of linear rules to basis vectors of $\cR_{\bfC}$, and since two distinct matches may lead to isomorphic composite rules, $R_2\odot_{\cR_{\bfC}}R_1$ typically evaluates to a linear combination of basis vectors $\delta(r)$ with integer coefficients: \begin{example}\label{ex:ruleAlgComp} Let $\bfC=\mathbf{FinGraph}$ be the category of finite directed multigraphs, with $\varnothing$ the empty graph. Then with $\odot\equiv \odot_{\cR_{\bfC}}$, we find for example \begin{equation} \begin{aligned} &\delta(\varnothing\hookleftarrow \varnothing\hookrightarrow \OneVertG[])\odot \delta(\OneVertG[]\;\;\OneVertG[]\hookleftarrow \varnothing\hookrightarrow\varnothing)\\ &\qquad=\sum_{\substack{\mathbf{m}\in \{ (\OneVertG[]\hookleftarrow \varnothing\hookrightarrow \OneVertG[]\;\;\OneVertG[]), (\OneVertG[blue]\hookleftarrow \OneVertG[blue]\hookrightarrow \OneVertG[blue]\;\;\OneVertG[]),\\ \qquad(\OneVertG[blue]\hookleftarrow \OneVertG[blue]\hookrightarrow \OneVertG[]\;\;\OneVertG[blue]) \}}}\delta\left(\sqComp{(\varnothing\hookleftarrow \varnothing\hookrightarrow \OneVertG[])}{\mathbf{m}}{(\OneVertG[]\;\;\OneVertG[]\hookleftarrow \varnothing\hookrightarrow\varnothing)}\right)\\ &\qquad = \delta(\OneVertG[]\;\;\OneVertG[]\hookleftarrow \varnothing\hookrightarrow \OneVertG[]) +2\delta(\OneVertG[]\hookleftarrow \varnothing\hookrightarrow \varnothing)\,. 
\end{aligned} \end{equation} The result of the composition thus captures the combinatorial insight that there are two contributions that evaluate to an isomorphic rule algebra element. More generally, one finds the following structure of compositions of rule algebra elements based upon ``discrete'' graph rewriting rules: letting $\bullet^{\uplus\:n}$ denote the $n$-vertex graph without edges (for $n\geq 0$), one finds (for $p,q,r,s\geq 0$) \begin{equation}\label{eq:combinatorial} \begin{aligned} &\delta(\bullet^{\uplus\:p}\hookleftarrow \varnothing\hookrightarrow \bullet^{\uplus\:q}) \odot\delta(\bullet^{\uplus\:r}\hookleftarrow \varnothing\hookrightarrow \bullet^{\uplus\:s})\\ &\qquad =\sum_{k=0}^{min(q,r)}k!\binom{q}{k}\binom{r}{k} \delta(\bullet^{\uplus\:(p+r-k)}\hookleftarrow \varnothing\hookrightarrow \bullet^{\uplus\:(q+s-k)})\,. \end{aligned} \end{equation} This result is further interpreted in Example~\ref{ex:repInt}. \end{example} \begin{theorem}[Properties of $\cR^{sq}_{\bfC}$] For every category $\bfC$ satisfying Assumption~\ref{ass:RAsqpo}, the associated SqPO-type rule algebra $\cR^{sq}_{\bfC}\equiv(\cR_{\bfC},\odot_{\cR_{\bfC}})$ is an \emph{associative unital algebra}, with unit element $R_{\varnothing}:=(\grule{\varnothing}{}{\varnothing})$. (Proof: Appendix~\ref{app:SqPOraProps}) \end{theorem} For the unital and associative SqPO-type rule algebras, one may provide a notion of \emph{representations} in analogy to the DPO-type case (compare~\cite{bdg2016,bp2018}): \begin{definition}[Canonical representation of $\cR^{sq}_{\bfC}$]\label{def:canRepSqPO} Let $\bfC$ be a category satisfying Assumption~\ref{ass:RAsqpo}, with a strict initial object $\varnothing\in ob(\bfC)$, and let $\cR^{sq}_{\bfC}$ be its associated rule algebra of SqPO type. Denote by $\hat{\bfC}$ the free $\bR$-vector space spanned by basis vectors $\ket{X}$ indexed by isomorphism classes of objects, \begin{equation} \hat{\bfC}:= span_{\bR}\left(\left\{\left. 
\ket{X} \right\vert X\in \obj{\bfC}_{\cong} \right\}\right)\equiv (\hat{\bfC},+,\cdot)\,. \end{equation} Then the \emph{canonical representation} $\rho^{sq}_{\bfC}:\cR^{sq}_{\bfC}\rightarrow End_{\bR}(\hat{\bfC})$ of $\cR^{sq}_{\bfC}$ is defined as a morphism from the SqPO-type rule algebra $\cR^{sq}_{\bfC}$ to endomorphisms of $\hat{\bfC}$, with \begin{equation}\label{eq:canRepSqPO} \rho^{sq}_{\bfC}(\delta(p))\ket{X}:=\begin{cases} \sum_{m\in \sqMatch{p}{X}}\ket{p_m(X)}\quad &\text{if }\sqMatch{p}{X}\neq \emptyset\\ 0_{\hat{\bfC}}&\text{otherwise,} \end{cases} \end{equation} and extended to arbitrary elements of $\cR^{sq}_{\bfC}$ and of $\hat{\bfC}$ by linearity. \end{definition} \begin{example}\label{ex:repInt} Extending Example~\ref{ex:ruleAlgComp}, letting $\rho\equiv \rho^{sq}_{\mathbf{FinGraph}}$, note first that by definition for all (isomorphism classes of) finite multigraphs $G\in \obj{\mathbf{FinGraph}}_{\cong}$, $\ket{G}=\rho(\delta(G\hookleftarrow \varnothing\hookrightarrow \varnothing))\ket{\varnothing}$. With \begin{equation}\label{eq:HWrep} \hat{D}:=\rho(\delta(\varnothing\hookleftarrow \varnothing\hookrightarrow \OneVertG[]))\,,\; \hat{X}:=\rho(\delta(\OneVertG[]\hookleftarrow \varnothing\hookrightarrow \varnothing))\,,\; \ket{n}:=\ket{\bullet^{\uplus\:n}}\; (n\geq0)\,, \end{equation} as a consequence of~\eqref{eq:combinatorial} of Example~\ref{ex:ruleAlgComp} one may verify that \begin{equation}\label{eq:HWreP2} \hat{D}\ket{0}=0_{\widehat{\mathbf{FinGraph}}}\,,\; \hat{D}\ket{n}=n\ket{n-1}\; (n>0)\,,\; \hat{X}\ket{n}=\ket{n+1}\,. \end{equation} In other words, the data of~\eqref{eq:HWrep} and~\eqref{eq:HWreP2} furnishes a representation of the famous \emph{Heisenberg-Weyl algebra} that is of fundamental importance in combinatorics and physics (see e.g.\ \cite{blasiak2005boson,blasiak2010combinatorial,blasiak2011combinatorial}). 
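The relations~\eqref{eq:HWreP2} can also be checked numerically. The following sketch (our own illustration) realizes $\hat{D}$ and $\hat{X}$ as matrices on the truncated basis $\ket{0},\dotsc,\ket{N}$ and verifies both the action on basis vectors and the Heisenberg-Weyl commutation relation $[\hat{D},\hat{X}]=\mathbb{1}$, which is an immediate consequence of~\eqref{eq:HWreP2} and holds exactly away from the truncation boundary:

```python
import numpy as np

N = 8  # truncation: basis |0>, ..., |N>

D = np.zeros((N + 1, N + 1))  # vertex deletion: D|n> = n |n-1>
X = np.zeros((N + 1, N + 1))  # vertex creation: X|n> = |n+1>
for n in range(1, N + 1):
    D[n - 1, n] = n
for n in range(N):
    X[n + 1, n] = 1

# Away from the truncation boundary, [D, X] acts as the identity,
# i.e. the Heisenberg-Weyl relation [D, X] = 1.
comm = D @ X - X @ D
assert np.allclose(comm[:N, :N], np.eye(N))

# Spot-check D|n> = n|n-1> and X|n> = |n+1> on a sample basis vector:
ket = lambda n: np.eye(N + 1)[n]
assert np.allclose(D @ ket(3), 3 * ket(2))
assert np.allclose(X @ ket(3), ket(4))
print("Heisenberg-Weyl relations verified on the truncated basis")
```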
An alternative such representation is given by the linear operators $\hat{x}$ (multiplication by $x$) and $\partial_x$ (derivation by $x$) acting on the $\bR$-vector space spanned by monomials $x^n$, which reproduces equations isomorphic to~\eqref{eq:HWrep} and~\eqref{eq:HWreP2}, with $\partial_x x^n=n x^{n-1}$ and $\hat{x} x^n=x^{n+1}$. However, the action of $\hat{D}$ and $\hat{X}$ is of course defined on \emph{all} states $\ket{G}$ with $G\in \obj{\mathbf{FinGraph}}$, so that we may e.g.\ compute the following ``derivative of a graph'': \begin{equation} \hat{D}\ket{\tP{% \node[vertices] (a) at (1,1) {}; \node[vertices] (b) at (1.7,1) {}; \node[vertices] (c) at (2.4,1) {}; \draw (a) edge[dirEdge] (b); \draw (b) edge[dirEdge] (c);}}= 2\ket{\tP{% \node[vertices] (a) at (1,1) {}; \node[vertices] (b) at (1.7,1) {}; \draw (a) edge[dirEdge] (b);}} +\ket{\tP{% \node[vertices] (a) at (1,1) {}; \node[vertices] (b) at (1.7,1) {};}} \end{equation} \end{example} The following theorem states that $\rho_C^{sq}$ as given in Definition~\ref{def:canRepSqPO} is indeed a homomorphism (and thus qualifies as a representation of $\cR^{sq}_{\bfC}$). \begin{restatable}[SqPO-type canonical representation]{theorem}{canRepSqpo} For a category $\bfC$ satisfying Assumption~\ref{ass:RAsqpo}, $\rho^{sq}_{\bfC}: \cR^{sq}_{\bfC} \rightarrow End(\hat{\bfC})$ of Definition~\ref{def:canRepSqPO} is a homomorphism of unital associative algebras. (Proof: Appendix~\ref{app:SqPOcanrep}) \end{restatable} \section{Applications of SqPO-type rule algebras to stochastic mechanics} \label{sec:SM} In practical applications of stochastic rewriting systems, the type of rewriting semantics presents one of the key design choices. 
For example, when implementing a stochastic graph rewriting system, choosing DPO- vs.\ SqPO-type rewriting entails two entirely different semantics for vertex deletion rules: in the former case, a vertex may only be deleted if all of its incident edges are explicitly deleted as well, while in the latter case no such restriction applies (i.e.\ an application of a vertex deletion rule ``automatically'' leads to the deletion of all incident edges). Evidently, such fundamentally different behavior at the level of rewriting rules also has a strong influence on the dynamical behavior of the associated stochastic rewriting systems, whence it is of considerable practical interest to have a universal implementation of such systems available in both formalisms. We begin by specializing the general definition of continuous-time Markov chains (see e.g.\ \cite{norris,Anderson_1991}) to the setting of SqPO-type rewriting systems, in close analogy to~\cite{bdg2016,bdp2017,bp2018}. \begin{definition}[Continuous-time Markov Chains (CTMCs); compare \cite{bp2018}, Def.~7.1]\label{def:CTMCs} Let $\bfC$ be a category satisfying Assumption~\ref{ass:RAsqpo}, and which in addition possesses a \emph{countable} set of isomorphism classes of objects $\obj{\bfC}_{\cong}$. Let $\hat{\bfC}$ denote the free $\bR$-vector space introduced in Definition~\ref{def:canRepSqPO}. We define the space $Prob(\bfC)$ as the \emph{space of sub-probability distributions} in the following sense: \begin{equation} Prob(\bfC):=\left.\left\{ \ket{\Psi}=\!\!\!\!\!\!\sum_{o\in \obj{\bfC}_{\cong}}\!\!\!\!\!\!\psi_o \ket{o} \right\vert \forall o\in \obj{\bfC}_{\cong}: \psi_o\in \bR_{\geq0} \land\!\!\!\!\!\! \sum_{o\in \obj{\bfC}_{\cong}}\!\!\!\!\!\!\psi_o\leq 1 \right\} \end{equation} Let $Stoch(\bfC):=End_{\bR}(Prob(\bfC))$ be the space of endomorphisms of $Prob(\bfC)$, with elements referred to as \emph{sub-stochastic operators}. 
Then a \textbf{\emph{continuous-time Markov chain (CTMC)}} is specified in terms of a tuple of data $(\ket{\Psi(0)},H)$, where $\ket{\Psi(0)}\in Prob(\bfC)$ is the \emph{initial state}, and where $H\in End_{\bR}(\cS_{\bfC})$ is the \emph{infinitesimal generator} or \emph{Hamiltonian} of the CTMC (with $\cS_{\bfC}$ the space of real-valued sequences indexed by elements of $\obj{\bfC}_{\cong}$ and with finite coefficients). $H$ is required to be an infinitesimal (sub-) stochastic operator, which entails that for $H\equiv (h_{o,o'})_{o,o'\in \obj{\bfC}_{\cong}}$ and for all $o,o'\in \obj{\bfC}_{\cong}$, \begin{equation}\label{def:Hprops} (i)\; h_{o,o}\leq 0\,,\; (ii) \forall o\neq o':\; h_{o,o'}\geq 0\,,\; (iii)\; \sum_{o'} h_{o,o'}=0\,. \end{equation} Then this data encodes the \emph{evolution semi-group} $\cE:\bR_{\geq 0}\rightarrow Stoch(\bfC)$ as the (point-wise minimal non-negative) solution of the \emph{Kolmogorov backwards} or \emph{master equation}: \begin{equation} \tfrac{d}{dt}\cE(t)=H\cE(t)\,,\; \cE(0)=\mathbb{1}_{Stoch(\bfC)} \Rightarrow \;\forall t,t'\in \bR_{\geq 0}: \cE(t)\cE(t')=\cE(t+t') \end{equation} Consequently, the \emph{time-dependent state} $\ket{\Psi(t)}$ of the system is given by \begin{equation} \forall t\in \bR_{\geq 0}:\quad \ket{\Psi(t)}=\cE(t)\ket{\Psi(0)}\,. \end{equation} \end{definition} An important technical aspect of the above definition of CTMCs is the definition of the relevant space of (sub-)probability distributions in interaction with the definition of the infinitesimal generator $H$ and of the space $\cS_{\bfC}$. Some remarks on this interaction and a short explanation of the relevant mathematical concepts are provided in Appendix~\ref{app:StochMechProof}. 
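A minimal concrete instance of Definition~\ref{def:CTMCs} (our own illustration, on a finite state space, where none of the convergence subtleties arise) is a three-state chain; the sketch below verifies the infinitesimal stochastic properties~\eqref{def:Hprops} and checks that the evolution semi-group preserves total probability. As an assumption of the sketch, we adopt the convention that states $\ket{\Psi}$ are column vectors acted upon from the left, under which probability conservation amounts to vanishing column sums of $H$:

```python
import numpy as np

# A three-state CTMC: states 0 <-> 1 <-> 2, unit transition rates.
rates = {(0, 1): 1.0, (1, 0): 1.0, (1, 2): 1.0, (2, 1): 1.0}
H = np.zeros((3, 3))
for (src, tgt), r in rates.items():
    H[tgt, src] += r   # off-diagonal entry: rate of jumping src -> tgt
    H[src, src] -= r   # diagonal entry: minus the total outflow rate

assert np.all(np.diag(H) <= 0)               # non-positive diagonal
assert np.all(H - np.diag(np.diag(H)) >= 0)  # non-negative off-diagonal
assert np.allclose(H.sum(axis=0), 0)         # column sums vanish

def expm(A, terms=60):
    """Truncated Taylor series for the matrix exponential (adequate
    for the small, well-conditioned generator used here)."""
    out, term = np.eye(A.shape[0]), np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k
        out += term
    return out

psi0 = np.array([1.0, 0.0, 0.0])     # initial state |Psi(0)>: state 0
psi_t = expm(H * 2.0) @ psi0         # |Psi(2)> = E(2)|Psi(0)>
assert np.isclose(psi_t.sum(), 1.0)  # total probability is conserved
assert np.all(psi_t >= 0)
print(psi_t)
```

For the countably infinite state spaces arising from rewriting systems, the analogous statements require the analytical care discussed in the appendix; the finite example merely illustrates the defining conditions.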
Our main approach to studying CTMCs based on rewriting systems will consist in analyzing the dynamical statistical behavior of so-called observables: \begin{definition}[Observables; \cite{bp2018}, Def.~7.1]\label{def:obs} Let $\cO_{\bfC}\subset End_{\bR}(\cS_{\bfC})$ denote the space of \emph{observables}, defined as the space of \emph{diagonal operators}\footnote{Depending on the concrete case, the eigenvalue $\omega_O(X)$ in $O\ket{X}=\omega_O(X)\ket{X}$ may e.g.\ coincide with the number of occurrences of a pattern in the object $X$ (see also Appendix~\ref{sec:appProofSMF}).}, \begin{equation} \cO_{\bfC}:=\{O\in End_{\bR}(\cS_{\bfC})\mid \forall X\in \obj{\bfC}_{\cong}:\; O\ket{X}=\omega_O(X)\ket{X}\,,\; \omega_O(X)\in \bR\}\,. \end{equation} We furthermore define the so-called \emph{projection operation} $\bra{}:\cS_{\bfC}\rightarrow \bR$ by linear extension of the definition of $\bra{}$ on basis vectors of $\hat{\bfC}$, \begin{equation} \forall X\in \obj{\bfC}_{\cong}:\quad \braket{}{X}:=1_{\bR}\,. \end{equation} These definitions induce a notion of \emph{correlators} of observables (also referred to as (mixed) moments), defined for $O_1,\dotsc,O_n\in \cO_{\bfC}$ and $\ket{\Psi}\in Prob(\bfC)$ as \begin{equation} \langle O_1,\dotsc,O_n\rangle_{\ket{\Psi}}:=\bra{}O_1\cdots O_n\ket{\Psi} =\sum_{X\in \obj{\bfC}_{\cong}}\psi_X\cdot\omega_{O_1}(X)\cdots \omega_{O_n}(X)\,. \end{equation} \end{definition} The precise relationship between the notions of CTMCs and SqPO-type rewriting rules as encoded in the corresponding SqPO-type rule algebra formalism is established in the form of the following theorem, in which in particular the notion of observables is quite different in nature from its DPO-type analogue (compare Thm.~7.12 of~\cite{bp2018}). 
This result is the first-of-its-kind \emph{universal} definition of SqPO-type stochastic rewriting systems with ``mass-action semantics'' (where activities of productions are proportional to their number of admissible matches in a given system state). \begin{restatable}[SqPO-type stochastic mechanics framework]{theorem}{thmStochMechSqPO}\label{thm:smfSqPO} Let $\bfC$ be a category satisfying Assumption~\ref{ass:RAsqpo}. Let $\{(\grule{O_j}{p_j}{I_j})\in \cR^{sq}_{\bfC}\}_{j\in \cJ}$ be a (finite) set of rule algebra elements, and $\{\kappa_j\in \bR_{\geq 0}\}_{j\in \cJ}$ a collection of non-zero parameters (called \emph{base rates}). Then one may construct the Hamiltonian $H$ of the associated CTMC from this data according to \begin{equation} H:=\hat{H}+\bar{H}\,,\quad \hat{H}:=\sum_{j\in \cJ}\kappa_j\cdot \rho_{\bfC}^{sq}\left(\grule{O_j}{p_j}{I_j}\right)\,,\quad \bar{H}:=-\sum_{j\in \cJ}\kappa_j\cdot \bO_{I_j}^{sq}\,. \end{equation} Here, the notation $\bO_M^{sq}$ for arbitrary objects $M\in \obj{\bfC}$ denotes the \emph{observables} (sometimes referred to as \emph{motif counting observables}) for the resulting CTMC of SqPO-type, with \begin{equation} \bO_M^{sq}:=\rho_{\bfC}^{sq}\left(\delta\left( M\xleftarrow{id_M}M\xrightarrow{id_M}M \right)\right)\,. \end{equation} We furthermore have the \emph{SqPO-type jump-closure property}, whereby for all $(\grule{O}{p}{I})\in \cR^{sq}_{\bfC}$ \begin{equation}\label{eq:ojcSqPO} \bra{}\rho^{sq}_{\bfC}(\grule{O}{p}{I})=\bra{}\bO_I^{sq}\,. \end{equation} \end{restatable} \begin{proof} See Appendix~\ref{app:StochMechProof}. \end{proof} \section{Application example: a dynamical random graph model} \label{sec:appEx} In order to illustrate our novel SqPO-type stochastic mechanics framework, let us consider a dynamical system evolving on the space of finite directed multigraphs. 
\begin{example} Let $\mathbf{FinGraph}$ be the finitary restriction of the category $\mathbf{Graph}$ (see also Lemma~\ref{lem:GraphFPC}), and denote by $\varnothing\in \mathbf{FinGraph}$ the strict initial object (the empty graph). We define a stochastic SqPO rewriting system based upon rules encoding \emph{vertex creation/deletion} ($v_{\pm}$) and \emph{edge creation/deletion} ($e_{\pm}$): \begin{equation} \begin{aligned} v_{+}&:=(\OneVertG\leftarrow \varnothing\rightarrow \varnothing) &\qquad v_{-}&:=(\varnothing\leftarrow \varnothing\rightarrow \OneVertG)\\ e_{+}&:=(\TwoVertDirEdgeG[] \leftarrow \TwoVertG[]\rightarrow \TwoVertG[]) &\qquad e_{-}&:=(\TwoVertG[] \leftarrow \TwoVertG[]\rightarrow \TwoVertDirEdgeG[]) \end{aligned} \end{equation} Together with a choice of \emph{base rates} $\nu_{\pm},\varepsilon_{\pm}\in \bR_{\geq0}$ and an initial state $\ket{\Psi(0)}\in Prob(\mathbf{FinGraph})$, this data defines a stochastic rewriting system with Hamiltonian $H:=\hat{H}+\bar{H}$, \begin{equation} \begin{aligned} \hat{H}&=\nu_{+}V_{+}+\nu_{-}V_{-}+\varepsilon_{+}E_{+}+\varepsilon_{-}E_{-}\\ \bar{H}&=-\nu_{+}\bO_{\varnothing}-\nu_{-}\bO_{\OneVertG} -\varepsilon_{+}\bO_{\OneVertG\; \OneVertG}-\varepsilon_{-}\bO_{\TwoVertDirEdgeG[]}\,, \end{aligned} \end{equation} where $V_{\pm}:=\rho^{sq}_{\mathbf{FinGraph}}(\delta(v_{\pm}))$ and $E_{\pm}:=\rho^{sq}_{\mathbf{FinGraph}}(\delta(e_{\pm}))$. \end{example} Despite the apparent simplicity of this model (which might be seen as a paradigmatic example of a \emph{random graph model}), the explicit analysis via the stochastic mechanics framework will uncover a highly non-trivial interaction of the dynamics of the vertex- and of the edge-counting observables. Intuitively, since in SqPO-rewriting no conditions are posed upon vertices that are to be deleted, the model is expected to possess a vertex dynamics that is the one of a so-called \emph{birth-death process}. 
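The birth-death intuition for the vertex dynamics can be probed directly with a small Gillespie-type simulation sketch (the rate values, time horizon and random seed below are arbitrary choices of mine, not taken from the source): vertices are created at constant rate $\nu_+$ (rule $v_+$) and each vertex is deleted at per-capita rate $\nu_-$ (rule $v_-$), so the stationary vertex count should fluctuate around the mean $\nu_+/\nu_-$.

```python
import random

def gillespie_birth_death(nu_plus, nu_minus, t_max, n0=0, seed=1):
    """Simulate vertex creation at rate nu_plus and per-vertex deletion
    at rate nu_minus; return the time-averaged vertex count."""
    rng = random.Random(seed)
    t, n = 0.0, n0
    weighted_sum = 0.0
    while t < t_max:
        total_rate = nu_plus + nu_minus * n
        dt = min(rng.expovariate(total_rate), t_max - t)
        weighted_sum += n * dt   # accumulate n weighted by holding time
        t += dt
        if t >= t_max:
            break
        if rng.random() < nu_plus / total_rate:
            n += 1               # vertex creation (rule v_+)
        else:
            n -= 1               # vertex deletion (rule v_-)
    return weighted_sum / t_max

avg = gillespie_birth_death(nu_plus=2.0, nu_minus=1.0, t_max=5000.0)
# the long-run average should be close to nu_plus/nu_minus = 2
```

For a long enough horizon the time average settles near $\nu_+/\nu_- = 2$, in line with the Poisson stationary distribution derived analytically below.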
If it were not for the vertex deletions, one would find a similar dynamics for the edge-counting observables (compare e.g.\ the DPO-type rewriting model considered in~\cite{bp2018}). However, since deletion of vertices deletes all incident edges, the dynamics of the edge-counting observable is rendered considerably more complicated, and in particular much less evident to foresee by heuristic arguments.\\ In order to compute the dynamics of the vertex counting observable $O_V:=\bO_{\OneVertG}$, we follow the approach of \emph{exponential moment-generating functions} put forward in~\cite{bdg2016,bdp2017,bdg2019} and define \begin{equation} M_V(t;\lambda):=\bra{}e^{\lambda O_V}\ket{\Psi(t)}\,, \end{equation} with $\lambda$ a formal variable. $M_V(t;\lambda)$ encodes the moments of the observable $O_V$, in that taking the $n$-th derivative of $M_V(t;\lambda)$ w.r.t.\ $\lambda$ followed by setting $\lambda\to0$ yields the $n$-th moment of $O_V$. Note that we must assume the \emph{finiteness} of all statistical moments as standard in the probability theory literature in order for $M_V(t;\lambda)$ to be well-posed, a property that we will in the case at hand indeed derive explicitly. Referring the interested readers to~\cite{bp2019-ext} for further details, suffice it here to recall the following variant of the BCH formula (see e.g.\ \cite{hall2015lieGroups}, Prop.~3.35), for $\lambda$ a formal variable and $A,B$ two composable linear operators, \begin{equation} e^{\lambda A}Be^{-\lambda A}=e^{ad_{\lambda A}}B =\sum_{n\geq 0}\frac{\lambda^n}{n!} ad_{ A}^{\circ n}(B)\,,\quad ad_A(B):=AB-BA\equiv [A,B]\,, \end{equation} with the convention that $ad_A^{\circ 0}(B):=B$. The operation $[.,.]$ is typically referred to as the \emph{commutator}. 
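The adjoint-action variant of the BCH formula can be sanity-checked on a small matrix example in which the series terminates after finitely many terms (a sketch of mine, assuming the standard nilpotent $2\times 2$ raising/lowering matrices; not part of the source):

```python
import sympy as sp

lam = sp.symbols('lambda')
A = sp.Matrix([[0, 1], [0, 0]])   # nilpotent: A**2 = 0, so exp(lam*A) is exact
B = sp.Matrix([[0, 0], [1, 0]])

def ad(X, Y):
    """Adjoint action ad_X(Y) = [X, Y] = X*Y - Y*X."""
    return X * Y - Y * X

# Left-hand side of the BCH formula: e^{lam A} B e^{-lam A}
lhs = (lam * A).exp() * B * (-lam * A).exp()

# Right-hand side: sum_n lam^n/n! ad_A^n(B); here ad_A^3(B) = 0,
# so three terms suffice
rhs = B + lam * ad(A, B) + sp.Rational(1, 2) * lam**2 * ad(A, ad(A, B))

assert sp.simplify(lhs - rhs) == sp.zeros(2, 2)
```

In the derivations that follow, the same mechanism applies with $A = O_V$ and $B$ one of the rule representations; the series likewise collapses because the iterated commutators $ad_{O_V}^{\circ n}(V_{\pm})$ reproduce $\pm^n V_{\pm}$.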
We may then derive the \emph{formal evolution equation} for $M_V(t;\lambda)$: \begin{equation} \begin{aligned} \tfrac{\partial}{\partial t}M_V(t;\lambda)&=\bra{}e^{\lambda O_V}H\ket{\Psi(t)}=\bra{}\left(e^{\lambda O_V}H e^{-\lambda O_V}\right)e^{\lambda O_V}\ket{\Psi(t)}\\ &=\bra{}\left(e^{ad_{\lambda O_V}}H\right)e^{\lambda O_V}\ket{\Psi(t)}\,. \end{aligned} \end{equation} Since by definition $\bra{}H=0$, it remains to compute the adjoint action $ad_{O_V}(H)$ of $O_V$ on $H$: \begin{equation} \begin{aligned} ad_{O_V}(H)&=\nu_{+}[O_V,V_{+}]+\nu_{-}[O_V,V_{-}]+\varepsilon_{+}[O_V,E_{+}]+\varepsilon_{-}[O_V,E_{-}]\\ &=\nu_{+}V_{+}-\nu_{-}V_{-}\,. \end{aligned} \end{equation} Here, the result that $[O_V,E_{\pm}]=0$ has a very simple intuitive meaning: in applications of the linear rules $e_{\pm}$, the number of vertices remains unchanged, whence the vanishing of the commutator. Combining these results with the SqPO-type \emph{jump-closure property} (cf.\ Theorem~\ref{thm:smfSqPO}), we finally arrive at the following \emph{formal evolution equation} for $M_V(t;\lambda)$: \begin{equation}\label{eq:FEQ} \begin{aligned} \tfrac{\partial}{\partial t}M_V(t;\lambda)&= \nu_{+}\left(e^{\lambda}-1\right)\bra{}V_{+}e^{\lambda O_V}\ket{\Psi(t)} +\nu_{-}\left(e^{-\lambda}-1\right)\bra{}V_{-}e^{\lambda O_V}\ket{\Psi(t)}\\ &\overset{\eqref{eq:ojcSqPO}}{=} \nu_{+}\left(e^{\lambda}-1\right)\bra{}e^{\lambda O_V}\ket{\Psi(t)} +\nu_{-}\left(e^{-\lambda}-1\right)\bra{}O_Ve^{\lambda O_V}\ket{\Psi(t)}\\ &=\left( \nu_{+}\left(e^{\lambda}-1\right)+\nu_{-}\left(e^{-\lambda}-1\right)\tfrac{\partial}{\partial \lambda} \right)M_V(t;\lambda)\,. \end{aligned} \end{equation} Supposing for simplicity an initial state $\ket{\Psi(0)}=\ket{G_0}$ (for $G_0\in \obj{\mathbf{FinGraph}}$ some graph with $N_V$ vertices and $N_E$ edges), we find that $M_V(0;\lambda)=\exp(\lambda N_V)$.
The resulting initial value problem may be solved in closed-form via \emph{semi-linear normal-ordering} techniques known from the combinatorics literature~\cite{Dattoli:1997iz,blasiak2005boson,blasiak2011combinatorial,bdp2017} (see also~\cite{bp2019-ext,bdg2019}), and we obtain (for $t\geq 0$) \begin{equation}\label{eq:MVsol} M_V(t;\lambda)=\exp\left({\frac{\nu_{+}}{\nu_{-}}(e^{\lambda}-1)(1-e^{-\nu_{-}t})}\right)\left(1+(e^{\lambda}-1)e^{-\nu_{-}t} \right)^{N_V}\,. \end{equation} In the limit $t\to\infty$, the moment-generating function becomes that of a \emph{Poisson-distribution} (of parameter $\nu_{+}/\nu_{-}$), thus confirming the aforementioned intuition that the vertex-counting observable has the dynamical behavior of a so-called \emph{birth-death process} (see e.g.\ \cite{bdp2017}).\\ Let us consider next the dynamics of the edge-counting observable $O_E:=\bO_{\TwoVertDirEdgeG[]}$, where for brevity we will only consider the evolution of the mean edge count. The calculation of the evolution equation for the expectation value of $O_E$ simplifies to the analogue of the so-called \emph{Ehrenfest equation}, \begin{equation} \begin{aligned} \tfrac{\partial}{\partial t}\bra{}O_E\ket{\Psi(t)}&= \bra{}O_E \,H\ket{\Psi(t)}=\bra{}\big(H\, O_E+[O_E,H]\big)\ket{\Psi(t)}\,. \end{aligned} \end{equation} Recalling that $\bra{}H=0$, it remains to compute the commutator $[O_E,H]$: \begin{equation} \begin{aligned} [O_E,H]&=\nu_{+}[O_E,V_{+}]+\nu_{-}[O_E,V_{-}]+\varepsilon_{+}[O_E,E_{+}]+\varepsilon_{-}[O_E,E_{-}]\\ &=\nu_{+}\cdot 0 -\nu_{-}(E_{-}^{0,1}+E_{-}^{1,0})+\varepsilon_{+}E_{+}-\varepsilon_{-}E_{-}\\ E_{-}^{0,1}&=\rho_{\mathbf{FinGraph}}^{sq}\left(\delta\left(\OneVertGL[]{b}\leftarrow\OneVertGL[]{b}\rightarrow \TwoVertDirEdgeGLb{}{a}{b}\right)\right)\\ E_{-}^{1,0}&=\rho_{\mathbf{FinGraph}}^{sq}\left(\delta\left(\OneVertGL[]{a}\leftarrow\OneVertGL[]{a}\rightarrow \TwoVertDirEdgeGLb{}{a}{b}\right)\right)\,.
\end{aligned} \end{equation} This calculation is a representative example of various effects that may occur in rule-algebraic commutation relations: we find a zero commutator $[O_E,V_{+}]$, indicating the fact that application of the vertex creation rule $V_{+}$ does not influence the edge count. The commutators $[O_E,E_{\pm}]=\pm E_{\pm}$ encode that application of the edge creation/deletion rules leads to positive/negative contributions to the edge count. Finally, the contribution of the commutator $[O_E,V_{-}]=-E_{-}^{0,1}-E_{-}^{1,0}$ is given by the representations of two rule algebra elements not originally present in the Hamiltonian $H$, with the structure of the underlying linear rules indicated by the labels $a$ and $b$ on the vertices (as customary in the rewriting literature). It then remains to apply the jump-closure property (Theorem~\ref{thm:smfSqPO}) together with the identity $\bO_{\OneVertG\; \OneVertG}=O_V(O_V-1)$ in order to obtain the \emph{evolution equation} \begin{equation}\label{eq:evoEmean} \tfrac{\partial}{\partial t}\bra{}O_E\ket{\Psi(t)} =\varepsilon_{+}\bra{}O_V(O_V-1)\ket{\Psi(t)}-(\varepsilon_{-}+2\nu_{-})\bra{}O_E\ket{\Psi(t)}\,. \end{equation} Together with an initial condition such as e.g.\ $\ket{\Psi(0)}=\ket{G_0}$ for some (finite) directed graph $G_0$ with $N_V$ vertices and $N_E$ edges, and computing the closed-form expression for the first contribution in~\eqref{eq:evoEmean} from our previous solution~\eqref{eq:MVsol} (as $\partial_{\lambda}(\partial_{\lambda}-1)M_V(t;\lambda)$ followed by setting $\lambda\to0$), the initial value problem for the mean edge count evolution may be easily solved in closed form via the use of a computer algebra software such as \textsc{Maple}, \textsc{Mathematica} or \textsc{Sage}. 
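The closed-form expression for $M_V(t;\lambda)$ and its consequences can be checked symbolically. The sketch below (Python/sympy; the concrete choice $N_V=3$ and all symbol names are mine, made purely for the check) verifies that the stated solution satisfies the formal evolution equation, matches the initial condition $M_V(0;\lambda)=e^{\lambda N_V}$, tends to a Poisson moment-generating function as $t\to\infty$, and yields the steady-state factorial moment $(\nu_+/\nu_-)^2$ entering the mean edge-count equation:

```python
import sympy as sp

t, lam = sp.symbols('t lambda', real=True)
nup, num = sp.symbols('nu_p nu_m', positive=True)
eps_p, eps_m = sp.symbols('epsilon_p epsilon_m', positive=True)
NV = 3  # concrete initial vertex count, chosen only for the check

# Closed-form solution for the moment-generating function M_V(t; lam)
M = sp.exp((nup/num) * (sp.exp(lam) - 1) * (1 - sp.exp(-num*t))) \
    * (1 + (sp.exp(lam) - 1) * sp.exp(-num*t))**NV

# (i) It solves d/dt M = (nu_+ (e^lam - 1) + nu_- (e^-lam - 1) d/dlam) M
residual = sp.diff(M, t) - (nup*(sp.exp(lam) - 1)*M
                            + num*(sp.exp(-lam) - 1)*sp.diff(M, lam))
assert sp.simplify(residual) == 0

# (ii) Initial condition M_V(0; lam) = exp(lam * N_V)
assert sp.simplify(M.subs(t, 0) - sp.exp(lam*NV)) == 0

# (iii) t -> oo: Poisson MGF of parameter nu_+/nu_-
assert sp.simplify(sp.limit(M, t, sp.oo)
                   - sp.exp((nup/num)*(sp.exp(lam) - 1))) == 0

# (iv) Steady-state factorial moment <O_V(O_V - 1)> -> (nu_+/nu_-)^2,
# obtained as d/dlam (d/dlam - 1) M at lam = 0, t -> oo; feeding it into
# the mean edge-count equation gives the steady-state mean edge count
fact_moment = (sp.diff(M, lam, 2) - sp.diff(M, lam)).subs(lam, 0)
fm_inf = sp.limit(fact_moment, t, sp.oo)
edge_limit = eps_p * fm_inf / (eps_m + 2*num)
assert sp.simplify(edge_limit
                   - nup**2*eps_p/(num**2*(2*num + eps_m))) == 0
```

Step (iv) sets $\partial_t\bra{}O_E\ket{\Psi(t)}=0$ in the evolution equation above and substitutes the limiting factorial moment, reproducing the steady-state value stated in the text.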
It is also straightforward to verify that for an arbitrary initial state $\ket{\Psi(0)}=\ket{G_0}$, the limit value of the mean edge count for $t\to\infty$ reads \begin{equation} \lim\limits_{t\to \infty}\bra{}O_E\ket{\Psi(t)}=\tfrac{\nu_{+}^2\varepsilon_{+}}{\nu_{-}^2(2\nu_{-}+\varepsilon_{-})}\,. \end{equation} Since the rates $\nu_{\pm}$ and $\varepsilon_{\pm}$ are free parameters, the above result entails that in this model one may freely adjust the limit value of the average vertex count as encoded in~\eqref{eq:FEQ} (whence $\nu_{+}/\nu_{-}$) as well as the limit value of the average edge count via suitable choices of the parameters $\varepsilon_{\pm}$. For illustration, we present some plots of the mean edge count evolution for the case $\ket{\Psi(0)}=\ket{\varnothing}$ and various choices of parameters in Figure~\ref{fig:meanEdgeEvo}. \begin{figure} \centering \includegraphics[width=0.9\textwidth]{./images/meanEdgePlot.pdf} \caption{Time-evolution of $\bra{}O_E\ket{\Psi(t)}$ for $\ket{\Psi(0)}=\ket{\varnothing}$.\label{fig:meanEdgeEvo}} \end{figure} \section{Conclusion and Outlook} Extending our previous work on Double-Pushout (DPO) rewriting theories as presented in~\cite{bdg2016,bp2018,bdg2019} to the important alternative setting of Sesqui-Pushout (SqPO) rewriting, we provide a number of original results in the form of \emph{concurrency} and \emph{associativity} theorems for SqPO rewriting theories on adhesive categories. % These fundamental results in turn permit us to formulate so-called \emph{SqPO-type rule algebras}, which play a central role in our novel \emph{universal stochastic mechanics framework}. % We strongly believe that these contributions will provide fruitful grounds for further developments both in theory and practice of rewriting beyond the specialists' communities, especially in view of static analysis techniques~\cite{b2019c}. \bibliographystyle{eptcs}
\section{Introduction} \label{sub:intro} The recent advances in information and communication technology (ICT) have promoted the evolution of conventional computer-aided industry to \emph{smart industry} featuring data-driven decision making \cite{Lade:IEEEIS2018}. During this paradigm shift, Internet of Things (IoT) plays an important role in connecting the physical industrial environment to the cyberspace of computing systems, consequently forming a Cyber-Physical System (CPS). IoT can support a wide diversity of industrial applications such as manufacturing, logistics, food industry and utilities. IoT aims to improve operation efficiency and production throughput, reduce machine downtime and enhance product quality. In particular, IoT has the following features: 1) decentralization of IoT systems, 2) diversity of IoT devices and systems, 3) heterogeneity of IoT data and 4) network complexity. These features give rise to challenges including the heterogeneity of IoT systems, poor interoperability, resource constraints of IoT devices, and privacy and security vulnerabilities. The advent of blockchain technologies brings opportunities to overcome the above challenges of IoT. A blockchain is essentially a distributed \emph{ledger} spreading over the whole distributed system. With decentralized consensus, blockchains enable a transaction to occur and be validated in a mutually-distrusting distributed system without the intervention of a trusted third party. Unlike incumbent transaction-management systems, where a centralized agency needs to validate each transaction, blockchains achieve \emph{decentralized} validation of transactions, thereby greatly saving cost and mitigating the performance bottleneck at the central agency. Moreover, each transaction saved in a blockchain is essentially \emph{immutable} since each node in the network keeps all the committed transactions in the blockchain.
Meanwhile, cryptographic mechanisms (such as asymmetric encryption algorithms, digital signatures and hash functions) guarantee the integrity of the data blocks in a blockchain. Therefore, blockchains can ensure the non-repudiation of transactions. In addition, each transaction in a blockchain is traceable by every user thanks to the attached historic timestamp. Blockchain is thus essentially a perfect complement to IoT, offering improved interoperability, privacy, security, reliability and scalability. In this paper, we investigate a new paradigm of integrating blockchain with IoT. We name this synthesis of blockchain and IoT Blockchain of Things (BCoT). In particular, BCoT has the following merits: \begin{itemize} \item \emph{Interoperability} across IoT devices, IoT systems and industrial sectors, where interoperability is the ability to interact with physical systems and exchange information between IoT systems. It can be achieved through the \emph{blockchain-composite layer} built on top of an overlay peer-to-peer (P2P) network with uniform access across different IoT systems. \item \emph{Traceability} of IoT data, where traceability is the capability of tracing and verifying the spatial and temporal information of a data block saved in the blockchain. Each data block saved in a blockchain carries a historic timestamp, consequently assuring data traceability. \item \emph{Reliability} of IoT data is the quality of IoT data being trustworthy. It can be ensured by the integrity enforced by cryptographic mechanisms including asymmetric encryption algorithms, hash functions and digital signatures, all of which are inherent in blockchains. \item \emph{Autonomic interactions} of IoT systems refer to the capability of IoT systems to interact with each other without the intervention of a trusted third party. This autonomy can be achieved by \emph{smart contracts} enabled by blockchains.
In particular, contract clauses embedded in smart contracts will be executed automatically when a certain condition is satisfied (\textit{e.g.}, the user breaching the contract will be punished with a fine automatically). \end{itemize} Though BCoT can benefit IoT, there are also a number of challenges to be addressed before the potentials of BCoT can be fully unleashed. Therefore, this paper aims to present an in-depth survey on the state-of-the-art advances, challenges and open research issues in BCoT. \subsection{Comparison between this paper and existing surveys} There are several published papers discussing the convergence of blockchain with IoT. For example, the work of \cite{Dorri:PercomWorkshp2017} presents a smart home application of using blockchains for IoT. Zhang and Wen \cite{Zhang:P2PNA2017} proposed a business model to support P2P trading based on smart contracts and blockchains. However, these studies are too specific to a certain scenario of incorporating blockchain with IoT (\textit{e.g.}, a smart home application). Recently, several surveys on the convergence of blockchain with IoT have been published. In particular, \cite{Conoscenti:AICCSA16} gives a systematic literature review on blockchain for IoT with the categorization of a number of use cases. The work of \cite{BANERJEE:2017} presents a survey on IoT security and investigates the potentials of blockchain technologies as the solutions. Reyna \textit{et al.} \cite{REYNA:2018} investigated the possibility and research issues of integrating blockchain with IoT. The work of \cite{Fernandez-Carames:Access2018} presents a review on integrating blockchain with IoT in the application aspect. Ref. \cite{MSAli:CST2018} attempted to give a comprehensive survey on application of blockchain in IoT. The work of \cite{Panarello:Sensors18} gives a categorization of applications of blockchain for IoT. 
However, most of the existing surveys suffer from the following limitations: 1) there is no general architecture proposed for BCoT; 2) there is no study explicitly discussing blockchain for 5G beyond networks for IoT (however, this topic is of great importance for the development of IoT); 3) other important issues like life cycle of smart contracts are missing in most of the existing surveys. \subsection{Contributions} In view of prior work, we aim to (i) provide a conceptual introduction on IoT and blockchain technologies, (ii) present in-depth analysis on the potentials of incorporating blockchains into IoT and (iii) give insightful discussions of technical challenges enabling BCoT. In summary, the main contributions of this paper are highlighted as follows: \begin{enumerate} \item A brief introduction on IoT is first given and then accompanied by a summary of key characteristics of IoT. Meanwhile, research challenges of IoT are outlined. \item An overview of key blockchain technologies is then given with a summary of key characteristics of blockchains and a taxonomy of the incumbent blockchain systems. \item The core part of this paper is focused on the convergence of blockchain and IoT. In this respect, the opportunities of integrating blockchain with IoT are first discussed. An architecture of BCoT is then proposed and illustrated. \item The 5G-beyond networks play an important role in constructing the infrastructure for BCoT. Research issues about blockchain for 5G-beyond networks in IoT are also discussed. \item Furthermore, this paper summarizes the applications of BCoT and outlines the open research issues in BCoT. \end{enumerate} The remainder of the paper is organized as follows. Section \ref{sec:IIoT} first presents an overview on IoT. Section \ref{sec:blockchain} then gives the introduction of blockchain technology. The convergence of blockchain and IoT is discussed in Section \ref{sec:IBoT}. 
Section \ref{sec:5G} discusses the research issues about blockchain for 5G-beyond networks. Section \ref{sec:IBoTapp} next summarizes the applications of BCoT. Open research issues are discussed in Section \ref{sec:chall-ibot}. Finally, the paper is concluded in Section \ref{sec:conc}. \section{Internet of Things} \label{sec:IIoT} In this section, we briefly introduce Internet of Things (IoT) in Section \ref{subsec:intro-IIoT} and summarize the challenges of IoT in Section \ref{subsec:challenges-IIoT}. \subsection{Introduction to Internet of Things} \label{subsec:intro-IIoT} Today's industry is experiencing a paradigm shift from conventional computer-aided industry to \emph{smart industry}, driven by recent advances in Internet of Things (IoT) and Big Data Analytics (BDA). During this evolution, IoT plays a critical role in bridging the gap between the physical industrial environment and the cyberspace of computing systems, while BDA helps to extract hidden value from massive IoT data so as to make intelligent decisions. IoT is essentially a network of smart objects (\textit{i.e.}, things) providing various industrial services. A typical IoT system consists of the following layered sub-systems (from bottom to top) as shown in Fig. \ref{fig:IIoT}: \begin{itemize} \item \emph{Perception Layer}: There is a wide diversity of IoT devices including sensors, actuators, controllers, bar code/Quick Response Code (QR Code) tags, RFID tags, smart meters and other wireless/wired devices. These devices can sense and collect data from the physical environment. Meanwhile, some of them (like actuators and controllers) can also act on the environment. \item \emph{Communication Layer}: Various wireless/wired devices such as sensors, RFIDs, actuators, controllers and other tags can then connect with IoT gateways, WiFi Access Points (APs), small base stations (BSs) and macro BSs to form an industrial network.
The network connection is enabled by a diverse set of communication protocols such as Bluetooth, Near Field Communications (NFC), Low-power Wireless Personal Area Networks (6LoWPAN), Wireless Highway Addressable Remote Transducer (WirelessHART) \cite{Petersen:IEEE2011}, Low Power Wide Area Network (LPWAN) technologies including Sigfox, LoRa and Narrowband IoT (NB-IoT), and industrial Ethernet \cite{MEKKI2018}. \item \emph{Industrial Applications}: IoT can be widely used to support a number of industrial applications. Typical industrial applications include manufacturing, supply chains, the food industry, smart grids, health care and the internet of vehicles. \begin{figure}[t] \centering \includegraphics[width=8.9cm]{IIoT.pdf} \caption{Internet of Things (IoT) consists of perception layer, communication layer and industrial applications} \label{fig:IIoT} \end{figure} \end{itemize} \subsection{Challenges of Internet of Things} \label{subsec:challenges-IIoT} In this paper, we mainly focus on Industrial IoT; we denote Industrial IoT by IoT hereafter without loss of generality. IoT ensures the connection of various \emph{things} (smart objects) equipped with electronic or mechanical sensors, actuators and software systems, which can sense and collect information from the physical environment and then act on it. The unique features of IoT pose a number of research challenges in the following aspects. \begin{figure*}[t] \centering \includegraphics[width=18cm]{blockchain.pdf} \caption{Blockchain consists of a number of consecutively-connected blocks and the detailed view represents a Merkle tree structure (where TX represents a transaction)} \label{fig:blockchain} \end{figure*} \begin{itemize} \item \emph{Heterogeneity} of IoT systems manifests in heterogeneous IoT devices, heterogeneous communication protocols and heterogeneous IoT data types (\textit{i.e.}, structured, semi-structured and unstructured).
This heterogeneity is also the root of other challenges such as poor interoperability and privacy and security vulnerabilities (explained below). \item \emph{Complexity} of networks. A number of communication/network protocols coexist in IoT. Typical network protocols include NFC, Bluetooth, 6LoWPAN, WirelessHART, Sigfox, LoRa and NB-IoT, all of which offer different network services. For example, 6LoWPAN and WirelessHART typically have short communication coverage (\textit{e.g.}, less than 100 meters) while LPWAN technologies can provide coverage from 1 km to 10 km \cite{MChen:IEEEAccess2017,Khutsoane:IECON2017,hndai:EIS19}. \item \emph{Poor interoperability}. Interoperability is the capability of IoT systems (both hardware and software) to exchange and make use of information and to collaborate with each other. Due to the decentralization and heterogeneity of IoT systems, it is challenging to exchange data between different industrial sectors, strategic centers and IoT systems. As a result, the interoperability of IoT is difficult to achieve. \item \emph{Resource constraints of IoT devices}. IoT devices such as sensors, actuators, RFID tags and smart meters suffer from limited resources including computing resources, storage and battery power. For example, passive RFID tags have no battery power and can only harvest energy from RFID readers or from the ambient environment \cite{XLu:IEEEWC2018}. Moreover, these resource constraints also make IoT devices vulnerable to malicious attacks. \item \emph{Privacy vulnerability}. Privacy means guaranteeing the appropriate usage of IoT data with no disclosure of private user information without user consent. It is challenging to preserve data privacy in IoT due to the complexity, decentralization and heterogeneity of IoT systems.
Moreover, it has become a trend to integrate IoT with cloud computing, since cloud computing can empower IoT with extra computing and storage capabilities. However, uploading confidential IoT data to third-party cloud servers may also compromise the privacy of IoT data \cite{JZhou:ComMag2017}. \item \emph{Security vulnerability}. The decentralization and heterogeneity of IoT systems also make it difficult to ensure the security of IoT, even though security is extremely important for an enterprise. Typical solutions such as authentication, authorization and communication encryption may not be appropriate for IoT due to the difficulty of implementing such security countermeasures in resource-constrained IoT systems. Moreover, IoT systems are also vulnerable to malicious attacks when security firmware updates fail to be applied in time \cite{Roman:CN2013}. \end{itemize} \emph{Discussion.} Some intrinsic limitations of IoT can be overcome via recent ICT advances. For example, ambient backscatter assisted communications \cite{XLu:IEEEWC2018} can help IoT nodes obtain extra energy from the ambient environment. Meanwhile, mobile edge computing can extend the capability of IoT nodes by offloading computationally-intensive tasks to edge servers \cite{JHe:IoTJ18}. Moreover, recent advances in blockchain technologies offer potential solutions to challenges such as poor interoperability and privacy and security vulnerabilities. In addition, blockchain is also beneficial in mitigating the heterogeneity of IoT systems. We will discuss these opportunities brought by blockchain to IoT in Section \ref{subsec:opp} after a briefing on blockchain technologies in Section \ref{sec:blockchain}.
\section{Blockchain Technologies} \label{sec:blockchain} In this section, we first give an overview of blockchain technologies in Section \ref{subsec:overview-blockchain}, then summarize the key blockchain characteristics in Section \ref{subsec:keychar} and present a taxonomy of blockchain platforms in Section \ref{subsec:taxonomy}. \subsection{Overview of Blockchain Technologies} \label{subsec:overview-blockchain} \subsubsection{Blockchain} A blockchain is essentially a distributed \emph{ledger} spreading over the whole blockchain system \cite{zibin2016blockchain}. Fig. \ref{fig:blockchain} shows an exemplary blockchain consisting of a number of consecutively-connected blocks. Each block (with the exception of the first block) in a blockchain points to its immediately-previous block (called the parent block) via an \emph{inverse} reference that is essentially the \emph{hash} value of the parent block. For example, block $i$ contains the hash of block $i-1$ as shown in Fig. \ref{fig:blockchain}. The first block of a blockchain is called the \emph{genesis} block and has no parent block. In particular, a block consists of the following information: 1) the block version (indicating the validation rules to follow), 2) the hash of the parent block, 3) a timestamp recording the current time in seconds, 4) a nonce starting from 0 and increasing with every hash calculation, 5) the number of transactions, and 6) the Merkle root (\textit{i.e.}, the hash value of the root of a Merkle tree built by concatenating and hashing the hash values of all the transactions in the block) as shown in the detailed view of Fig. \ref{fig:blockchain}. A blockchain grows continuously as transactions are executed. When a new block is generated, all the nodes in the network participate in the block validation. A validated block is automatically appended to the end of the blockchain via the inverse reference pointing to its parent block.
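The two structural ingredients just described, the hash chain of block headers and the Merkle tree of transactions, can be sketched in a few lines of Python (a simplified illustration using SHA-256; real systems such as Bitcoin hash serialized binary headers and use double SHA-256, so the field layout below is invented):

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(transactions):
    """Pairwise-hash leaves upward; duplicate the last node on odd levels."""
    level = [sha256(tx) for tx in transactions]
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])
        level = [sha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

def block_hash(parent_hash, root, nonce=0):
    """Simplified block header: parent hash + Merkle root + nonce."""
    return sha256(parent_hash + root + str(nonce).encode())

txs = [b"TX1", b"TX2", b"TX3", b"TX4"]
genesis = block_hash(b"\x00" * 32, merkle_root(txs))
block_1 = block_hash(genesis, merkle_root([b"TX5", b"TX6"]))

# Any falsification of a transaction changes the Merkle root ...
assert merkle_root(txs) != merkle_root([b"TX1-x", b"TX2", b"TX3", b"TX4"])
# ... and since block_1 commits to `genesis`, altering the parent block
# would change genesis's hash and thereby invalidate block_1 as well.
```

This makes the tamper-evidence argument of the next paragraph concrete: a change anywhere in the transaction set or in an earlier block propagates into every later hash.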
In this manner, any unauthorized alteration of a previously-generated block can be easily detected, since the hash value of the tampered block is significantly different from that of the unchanged block. Moreover, since the blockchain is distributed throughout the whole network, the tampering behavior can also be easily detected by other nodes in the network. \emph{Data integrity guarantee in blockchain.} Blockchains leverage cryptographic techniques to guarantee data integrity. In particular, there are two mechanisms in blockchains to ensure data integrity: 1) \emph{an ordered linked-list structure of blocks}, in which each newly-appended block must include the hash value of the preceding block. In this manner, a falsification of any previous block will invalidate all subsequent blocks. 2) \emph{Merkle tree structure}, in which each block contains the root hash of a Merkle tree over all the transactions. Each non-leaf node is essentially the hash value of the two concatenated values of its two children; hence a Merkle tree is typically a binary tree. In this way, any falsification of a transaction will lead to a new hash value in the layer above, consequently resulting in a falsified root hash. As a result, any falsification can be easily detected. \subsubsection{Consensus algorithms} \label{subsec:consensus} One of the advantages of blockchain technologies is to validate the trustfulness of a block in a decentralized, trustless environment without the need for a trusted third-party authority. In a distributed environment, it is challenging to reach a \emph{consensus} on a newly-generated block, as the consensus may be biased in favor of malicious nodes. This trustfulness validation in a decentralized environment can be achieved by \emph{consensus} algorithms. Typical consensus algorithms include proof of work (PoW), proof of stake (PoS) and practical Byzantine fault tolerance (PBFT) \cite{castro1999practical}. Take PoW as an example.
The creation of a new block requires the solution of a computationally-difficult problem. This computationally-difficult problem (\textit{aka} a puzzle) can nevertheless be verified without difficulty \cite{LI:FGCS2017}. Each node in the distributed peer-to-peer (P2P) network can participate in the validation procedure. The first node that solves the puzzle can append the validated block to the blockchain; this node is also called a \emph{miner}. It then broadcasts the validation results across the whole blockchain system, so that other nodes can verify the results and update their local copies of the blockchain. A small bonus is then given to this node as compensation for solving the puzzle. \emph{Discrepancy solution.} In a distributed system, multiple nodes may validate blocks nearly at the same time. Meanwhile, network latency can also result in bifurcated (forked) chains. To solve the discrepancy, most existing blockchain systems maintain the longest chain as the valid chain, because the longest chain embodies the most accumulated work and is therefore the hardest for adversaries to compromise. Shorter chains are automatically discarded (\textit{i.e.}, the blue dash-line box shown in Fig. \ref{fig:blockchain}) and future validation work continues on the longest chain. \emph{Trustfulness of PoW.} The trustfulness of PoW rests on the assumption that a majority of blockchain nodes is trustful. Generally, 51\% of the computational capability is regarded as the threshold for PoW to remain tolerant of malicious attacks. Incentive mechanisms encourage miners to stay honest rather than attempt to compromise the system. Meanwhile, solving the puzzle typically requires extensive computing power, and the probability that a miner solves the puzzle is proportional to the miner's computational capability and resources \cite{MConti:CST2018}.
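The asymmetry between solving and verifying the PoW puzzle can be sketched as a brute-force search for a nonce whose block hash falls below a difficulty target (a toy illustration of mine; real systems compare the hash against a 256-bit numeric target and use binary headers):

```python
import hashlib

def mine(header: bytes, difficulty: int):
    """Find a nonce so that SHA-256(header || nonce) starts with
    `difficulty` zero hex digits: hard to produce, trivial to verify."""
    nonce = 0
    target = "0" * difficulty
    while True:
        digest = hashlib.sha256(header + str(nonce).encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest
        nonce += 1

nonce, digest = mine(b"block-header-demo", difficulty=4)

# Verification requires only a single hash evaluation:
check = hashlib.sha256(b"block-header-demo" + str(nonce).encode()).hexdigest()
assert check == digest and check.startswith("0000")
```

With 4 leading zero hex digits the search takes on the order of $16^4 \approx 65{,}000$ hash evaluations on average, while verification is one evaluation; real networks tune the difficulty so that mining takes minutes across the whole network.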
PoW schemes require extensive computation to solve the puzzle, thereby resulting in extensive energy consumption. Unlike PoW, PoS requires a proof of ownership to validate the trustfulness of a block, under the assumption that users with more cryptocurrency (\textit{i.e.}, more stake) have a greater interest in keeping the system trustworthy than those with less. In PBFT, each node has an equal right to vote for the consensus and sends its voting state to the other nodes; the consensus is reached after multiple rounds of voting. We roughly categorize typical consensus algorithms into two types: 1) probabilistic consensus algorithms and 2) deterministic consensus algorithms. Table \ref{tab:consensus} gives the taxonomy. Probabilistic consensus algorithms, including PoW, PoS and delegated proof of stake (DPOS), typically first save the validated block to the chain and then seek the consensus of all the nodes, while deterministic consensus algorithms first consent to the block and then save the validated block to the chain. Moreover, probabilistic consensus algorithms often result in multiple bifurcated chains, and the discrepancy is solved by choosing the longest chain. In contrast, deterministic consensus algorithms solve the discrepancy through multiple rounds of communications in the overlay network.
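The two arbitration styles just described can be caricatured in a short sketch. Both function names below are illustrative, not real library calls; the longest-chain rule assumes every block carries equal work, and the PBFT bound is the classical $n \geq 3f + 1$ requirement:

```python
def resolve_fork(chains):
    # Probabilistic consensus (e.g., PoW): when forks coexist, keep the
    # longest chain; shorter branches are deserted.
    return max(chains, key=len)

def pbft_tolerates(n_nodes, n_faulty):
    # Deterministic consensus (e.g., PBFT): f faulty voters are tolerated
    # only if n >= 3f + 1, i.e., fewer than 1/3 of the nodes are faulty.
    return 3 * n_faulty + 1 <= n_nodes

main_branch = ["B0", "B1", "B2", "B3"]
forked_branch = ["B0", "B1", "B2'"]          # deserted shorter fork
assert resolve_fork([main_branch, forked_branch]) == main_branch

assert pbft_tolerates(4, 1)                  # 4 nodes tolerate 1 fault
assert not pbft_tolerates(6, 2)              # 6 nodes cannot tolerate 2
```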
\begin{table}[t] \caption{Taxonomy of typical consensus algorithms} \centering \renewcommand{\arraystretch}{1.5} \begin{tabular}{m{1.2cm}|m{3.2cm}|m{3.2cm}} \hline & \textbf{Probabilistic Consensus} & \textbf{Deterministic Consensus} \\ \hline \hline Consensus procedure & Saving first and then consenting & Consenting first and then saving\\ \hline Bifurcation (fork) & Yes & No \\ \hline Arbitration mechanism & Choosing the longest chain when there are multiple forked chains & Voting to solve discrepancy through multiple communication rounds \\ \hline Adversary tolerance & $<$ 50\% computing power or stake & $<$ 1/3 of voting nodes \\ \hline Complexity & High computational complexity & High network complexity \\ \hline Examples & PoW, PoS, DPOS & PBFT and PBFT variants, Tendermint\\ \hline \end{tabular} \label{tab:consensus} \end{table} There have been many attempts to improve incumbent consensus algorithms, such as Ripple \cite{chase2018analysis}, Algorand \cite{Gilad:SOSP2017}, Tendermint, proof of authority (PoA) \cite{FRYu:Access18} and proof of elapsed time (PoET) \cite{Dinh:SIGMOD2017}. Instead of choosing a single consensus algorithm, there is also a trend toward integrating multiple consensus algorithms to fulfill the requirements of different applications. \subsubsection{Working flow of blockchains} We next illustrate how a blockchain works with the money-transfer example shown in Fig. \ref{fig:blockchain-working}. Alice wants to transfer an amount of money to Bob. She first initiates the transaction at a computer through her Bitcoin wallet (\textit{i.e.}, Step \circled{\small 1}). The transaction includes information such as the sender's wallet, the receiver's address and the amount of money. The transaction is signed with Alice's private key and can thereafter be accessed and verified by other users via Alice's public key.
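The signing step just described can be sketched with a toy public-key signature. This is a deliberately insecure textbook RSA example with a tiny key, used only to show the sign-with-private/verify-with-public pattern; Bitcoin itself uses ECDSA over secp256k1:

```python
import hashlib

# Toy RSA key pair (textbook parameters; offers no real security).
p, q = 61, 53
n = p * q                           # public modulus (3233)
e = 17                              # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent (Python 3.8+)

def sign(message, priv):
    # Hash the message, then "sign" the digest with the private exponent.
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(h, priv, n)

def verify(message, signature, pub):
    # Anyone holding the public key can check the signature.
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, pub, n) == h

tx = b"alice -> bob: 1 BTC"
sig = sign(tx, d)                   # Alice signs with her private key
assert verify(tx, sig, e)           # others verify with her public key
assert not verify(b"alice -> bob: 100 BTC", sig, e)  # tampering fails
```

Because only Alice holds the private exponent, a valid signature also gives non-repudiation: she cannot later deny having initiated the transaction.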
Then the computer broadcasts the initiated transaction to other computers (or nodes) in the P2P network (\textit{i.e.}, Step \circled{\small 2}). Next, once a miner successfully solves the puzzle, the validated transaction is appended to the end of the chain of transactions, consequently forming a new block in the blockchain (\textit{i.e.}, Step \circled{\small 3}). Finally, every node saves a replica of the updated blockchain when the validated transaction is appended to it (\textit{i.e.}, Step \circled{\small 4}). \subsection{Key Characteristics of Blockchain} \label{subsec:keychar} In summary, blockchain technologies have the following key characteristics. \begin{itemize} \item \emph{Decentralization.} In traditional transaction management systems, transaction validation is conducted through a trusted agency (\textit{e.g.}, a bank or government). This centralized manner inevitably results in extra cost, performance bottlenecks and single-point failures (SPF) at centralized service providers. In contrast, blockchain allows a transaction to be validated between two peers without the authentication, jurisdiction or intervention of a central agency, thereby reducing the service cost, mitigating performance bottlenecks and lowering the SPF risk. \item \emph{Immutability.} A blockchain consists of a consecutively-linked chain of blocks, in which each link is essentially an inverse hash pointer to the previous block. Any modification of a previous block invalidates all the subsequently-generated blocks. Meanwhile, the root hash of the Merkle tree summarizes the hashes of all the committed transactions; any (even tiny) change to any transaction generates a new Merkle root. Therefore, any falsification can be easily detected. The integration of inverse hash pointers and the Merkle tree guarantees data integrity.
\begin{figure}[t] \centering \includegraphics[width=8.8cm]{workings-blockchain.pdf} \caption{Working flow of blockchains} \label{fig:blockchain-working} \end{figure} \item \emph{Non-repudiation.} Recall that the private key is used to sign the transaction, which can then be accessed and verified by others via the corresponding public key. Therefore, a cryptographically-signed transaction cannot be denied by the transaction initiator. \item \emph{Transparency.} In most public blockchain systems (like Bitcoin and Ethereum), every user can access and interact with the blockchain network with an equal right. Moreover, every new transaction is validated and saved in the blockchain, and is consequently available to every user. Therefore, the blockchain data is essentially transparent to every user, who can access and verify the committed transactions in the blockchain. \item \emph{Pseudonymity.} Despite the transparency of blockchain data, blockchain systems can preserve a certain level of privacy by making blockchain addresses anonymous. For example, the work of \cite{Zyskind:IEEESPW15} presents an application of blockchain to preserve the privacy of personal data. However, blockchain can only preserve privacy to a certain level, since blockchain addresses are essentially traceable by inference \cite{MSAli:CST2018}. For example, it is shown in \cite{Chawathe2019} that the analysis of blockchain data can help to detect fraud and illegal transactions. Therefore, blockchain preserves pseudonymity rather than full privacy. \item \emph{Traceability.} Each transaction saved in the blockchain is attached with a timestamp recording when the transaction occurred. Therefore, users can easily verify and trace the origins of historical data items by analyzing the blockchain data with the corresponding timestamps.
\end{itemize} \subsection{Smart Contract} \label{subsec:smart-contract} Smart contracts are a great advance for blockchain technology \cite{ream2016upgrading}. In the 1990s, smart contracts were proposed as a computerized transaction protocol that executes the contractual terms of an agreement \cite{szabo1997idea}. Contractual clauses embedded in smart contracts are enforced automatically when a certain condition is satisfied (\textit{e.g.}, a party who breaches the contract is punished automatically). \begin{figure}[t] \centering \includegraphics[width=8.8cm]{lifecycle.pdf} \caption{Life cycle of smart contracts consisting of four consecutive phases: Creation, Deployment, Execution and Completion} \label{fig:contract} \end{figure} Blockchains enable smart contracts: smart contracts are essentially implemented on top of blockchains. The approved contractual clauses are converted into executable computer programs, and the logical connections between contractual clauses are preserved in the form of logical flows in the programs (\textit{e.g.}, \texttt{if-else-if} statements). The execution of each contract statement is recorded as an immutable transaction stored in the blockchain. Smart contracts guarantee appropriate access control and contract enforcement. In particular, developers can assign access permissions to each function in the contract. Contract enforcement ensures that contract execution is deterministic: once any condition in a smart contract is satisfied, the triggered statement automatically executes the corresponding function in a predictable manner. For example, Alice and Bob agree on the penalty for violating the contract. If Bob breaches the contract, the corresponding penalty (as specified in the contract) is automatically paid from Bob's deposit. The whole life cycle of smart contracts consists of four consecutive phases~\cite{ZZheng:FGCS20} as illustrated in Fig.
\ref{fig:contract}: \begin{enumerate} \item \textit{Creation} of smart contracts. The involved parties first negotiate the obligations, rights and prohibitions in the contract. After multiple rounds of discussion and negotiation, an agreement can be reached. Lawyers or counselors help the parties draft an initial contractual agreement. Software engineers then convert this agreement, written in natural language, into a smart contract written in computer languages, including declarative languages and logic-based rule languages \cite{Idelberger:2016}. Similar to the development of computer software, the procedure of smart contract conversion is composed of design, implementation and validation (\textit{i.e.}, testing). It is worth mentioning that the creation of smart contracts is an iterative process involving multiple rounds of negotiation and iteration, as well as multiple parties, such as stakeholders, lawyers and software engineers. \item \textit{Deployment} of smart contracts. The validated smart contracts can then be deployed to platforms on top of blockchains. Contracts stored on blockchains cannot be modified due to the immutability of blockchains; any amendment requires the creation of a new contract. Once the smart contracts are deployed on blockchains, all the parties can access the contracts through the blockchains. Moreover, the digital assets of both parties involved in the smart contract are locked by freezing the corresponding digital wallets \cite{Sillaber2017}. For example, coin transfers (either incoming or outgoing) on the wallets relevant to the contract are blocked. Meanwhile, the parties can be identified by their digital wallets. \item \textit{Execution} of smart contracts. After the deployment of smart contracts, the contractual clauses are monitored and evaluated.
Once the contractual conditions are met (\textit{e.g.}, product reception), the contractual procedures (or functions) are automatically executed. It is worth noting that a smart contract consists of a number of declarative statements with logical connections. When a condition is triggered, the corresponding statement is automatically executed, and the resulting transaction is then executed and validated by miners in the blockchains \cite{koulu2016blockchains}. The committed transactions and the updated states are thereafter stored on the blockchains. \item \textit{Completion} of smart contracts. After a smart contract has been executed, the new states of all involved parties are updated. Accordingly, the transactions during the execution of the smart contract as well as the updated states are stored in blockchains. Meanwhile, the digital assets are transferred from one party to another (\textit{e.g.}, money transfer from the buyer to the supplier), and the digital assets of the involved parties are consequently unlocked. The smart contract has then completed its whole life cycle. \end{enumerate} It is worth mentioning that during the deployment, execution and completion of a smart contract, a sequence of transactions is executed (each corresponding to a statement in the smart contract) and stored in the blockchain. Therefore, all three phases need to write data to the blockchain, as shown in Fig. \ref{fig:contract}.
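The deterministic trigger-action logic of the execution phase, such as the Alice-and-Bob penalty clause mentioned earlier, can be roughly illustrated with a sketch. Names and amounts are made up; real smart contracts run on-chain (e.g., as Solidity programs on Ethereum), whereas this only mimics the enforcement logic:

```python
class PenaltyContract:
    """Toy emulation of a deposit/penalty clause. Each executed
    statement is appended to a log, mirroring how each contract
    statement becomes a transaction recorded on the blockchain."""

    def __init__(self, party_a, party_b, deposit, penalty):
        # Deployment: Bob's deposit is locked inside the contract.
        self.balances = {party_a: 0, party_b: deposit}
        self.party_a, self.party_b = party_a, party_b
        self.penalty = penalty
        self.log = []

    def report_breach(self, breaching_party):
        # Execution: once the breach condition is satisfied, the penalty
        # transfer runs automatically and predictably.
        if breaching_party == self.party_b:
            self.balances[self.party_b] -= self.penalty
            self.balances[self.party_a] += self.penalty
            self.log.append(
                f"penalty {self.penalty}: {self.party_b} -> {self.party_a}")

contract = PenaltyContract("alice", "bob", deposit=100, penalty=30)
contract.report_breach("bob")
assert contract.balances == {"alice": 30, "bob": 70}
```

The key property being modeled is that no party can prevent or alter the penalty transfer once the triggering condition holds; on a real blockchain this guarantee comes from the miners validating the contract's transactions, not from trust in either party.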
\subsection{Taxonomy of Blockchain Systems} \label{subsec:taxonomy} \begin{table}[t] \caption{Comparisons of Blockchain systems} \centering \renewcommand{\arraystretch}{1.5} \begin{tabular}{m{1.7cm} |m{1.5cm} | m{1.5cm} | m{2.2cm} } \hline &\bf{Public} & \bf{Private} & \bf{Consortium} \\ \hline\hline Decentralization & Decentralized & Centralized & Partially Decentralized\\ Immutability & Immutable & Alterable & Partially Immutable\\ Non-repudiation & Non-refusable & Refusable & Partially Refusable\\ Transparency & Transparent & Opaque & Partially Transparent \\ Traceability & Traceable & Traceable & Partially Traceable \\ \hline Scalability & Poor & Superior & Good \\ \hline Flexibility & Poor & Superior & Good \\ \hline Permission & Permissionless & Permissioned & Permissioned \\ \hline Consensus & PoW, PoS & PBFT, PoA, PoET & Ripple \\ \hline Examples & Bitcoin \cite{nakamoto2008bitcoin}, Ethereum \cite{Ethereum} & GemOS \cite{GemOS}, Multichain \cite{MultiChain} & Hyperledger \cite{hybperledge:2015}, Ethereum \cite{Ethereum} \\ \hline \end{tabular} \label{tab:comp-blockchains} \end{table} We classify blockchain systems into three types: 1) public blockchains, 2) private blockchains and 3) consortium (or community) blockchains \cite{Xu:ICSA2017}. Most digital currencies such as BTC (\textit{i.e.}, the ticker symbol of the Bitcoin cryptocurrency) and ETH (\textit{i.e.}, the ticker symbol of the Ethereum cryptocurrency) are implemented on public blockchains, thereby being accessible by anyone in the P2P network. In contrast, private blockchains are managed or controlled by a single organization, while consortium blockchains sit between public and private blockchains. Table \ref{tab:comp-blockchains} presents a comparison of the three types of blockchains. In particular, we summarize the comparison among public, private and consortium blockchains in the following aspects.
\begin{itemize} \item \emph{Key characteristics.} Public blockchains are fully decentralized, while private and consortium blockchains are partially decentralized or fully controlled by a single group or multiple groups. Moreover, it is nearly impossible to tamper with transactions in public blockchains, as every node keeps a replica of the blockchain (containing all the transactions), while the dominant organization or the multiple parties of private and consortium blockchains can modify the blockchain. Similarly, public blockchains can fully ensure the non-repudiation, transparency and traceability of transactions, while private and consortium blockchains cannot, or can only partially, ensure these properties. \item \emph{Scalability.} Although public blockchains can guarantee decentralization, immutability, transparency, non-repudiation and traceability, these merits come at the cost of a low transaction-validation rate, high latency and extra storage space consumption, consequently limiting the scalability of public blockchains. Compared with public blockchains, private and consortium blockchains have better scalability, since the blockchains are fully controlled by a single group or multiple organizations and the consensus can be easily reached. \item \emph{Flexibility.} Similarly, public blockchains have less flexibility than private and consortium blockchains, since the configurations of private and consortium blockchains are more adjustable. \item \emph{Permission.} Permission refers to the consent or authorization to access the blockchains. Public blockchains allow public participation and are thereby permissionless. In contrast, private and consortium blockchains can allow one or more users to access and interact with the blockchains at different permission levels. For example, some users can only read the blockchain data while others can both read the data and initiate transactions.
\item \emph{Consensus.} Public blockchains usually use PoW and PoS as consensus algorithms, which are Byzantine-fault tolerant but result in extensive resource consumption. Private blockchains can easily achieve consensus among authenticated users; typical consensus algorithms used for private blockchains include PBFT, PoA and PoET. Moreover, consortium blockchains are a hybrid of public and private blockchains. In particular, Ripple \cite{chase2018analysis} is a variant of PBFT typically used for consortium blockchains. \begin{figure*}[t] \centering \subfigure[Blockchain-composite layer]{ \includegraphics[width=8.7cm]{IBoT.pdf} \label{fig:ibot-layer}} \subfigure[P2P overlay network and blockchain node architecture]{ \includegraphics[width=8.7cm]{IBoT-deployment.pdf} \label{fig:ibot-deployment}} \caption{Overview of BCoT architecture} \label{fig:ibot} \end{figure*} \item \emph{Exemplary platforms.} Bitcoin \cite{nakamoto2008bitcoin} and Ethereum \cite{Ethereum} are two typical public blockchain platforms, mainly used for digital currency. With regard to private blockchains, GemOS \cite{GemOS} is a private blockchain platform for healthcare and supply chains, and MultiChain \cite{MultiChain} is an open-source platform for implementing private blockchains. As for consortium blockchains, Hyperledger \cite{hybperledge:2015} is developing business consortium blockchain frameworks, and Ethereum also provides tools for building consortium blockchains \cite{consortium}. \end{itemize} \section{Convergence of Blockchain and IoT} \label{sec:IBoT} In this section, we first discuss the opportunities of integrating blockchain with IoT in Section \ref{subsec:opp}. We then present the architecture of the integration of blockchain and IoT (namely BCoT) in Section \ref{subsec:architecture}. We next discuss the deployment issues of BCoT in Section \ref{subsec:deployment}.
\subsection{Opportunities of integrating blockchain with IoT} \label{subsec:opp} As summarized in Section \ref{subsec:challenges-IIoT}, IoT systems face many challenges, such as the heterogeneity of IoT systems, poor interoperability, resource constraints of IoT devices, and privacy and security vulnerabilities. Blockchain technologies can complement IoT systems with enhanced interoperability and improved privacy and security. Moreover, blockchain can also enhance the reliability and scalability of IoT systems \cite{REYNA:2018}. In short, we name this integration of blockchain with IoT as BCoT. BCoT has the following potential benefits in contrast to incumbent IoT systems. \begin{itemize} \item \emph{Enhanced interoperability} of IoT systems. Blockchain can essentially improve the interoperability of IoT systems by transforming IoT data and storing it in blockchains. During this procedure, heterogeneous types of IoT data are converted, processed, extracted, compressed and finally stored in blockchains. Moreover, this interoperability is also reflected in the ability to easily pass through different types of fragmented networks, since blockchains are established on top of a P2P overlay network that supports universal internet access. \item \emph{Improved security} of IoT systems. On the one hand, IoT data can be secured by blockchains, since it is stored as blockchain transactions that are encrypted and digitally signed with cryptographic keys (\textit{e.g.}, via the elliptic curve digital signature algorithm \cite{Johnson:2001}). On the other hand, the integration of IoT systems with blockchain technologies (like smart contracts) can help to improve the security of IoT systems, for example by automatically updating IoT device firmware to remedy vulnerable breaches \cite{christidis2016blockchains}. \item \emph{Traceability} and \emph{reliability} of IoT data. Blockchain data can be identified and verified anywhere and anytime.
Meanwhile, all the historical transactions stored in the blockchains are \emph{traceable}. For example, the work of \cite{QLu:IEEESoftware2017} develops a blockchain-based product traceability system, which provides suppliers and retailers with traceability services. In this manner, the quality and originality of the products can be inspected and verified. Moreover, the immutability of blockchains also assures the reliability of IoT data, since it is nearly impossible to alter or falsify any transactions stored in blockchains. \item \emph{Autonomic} interactions of IoT systems. Blockchain technologies can allow IoT devices or subsystems to interact with each other automatically. For example, the work of \cite{zhang2015iot} proposes distributed autonomous corporations (DACs) to automate transactions, in which no traditional roles like governments or companies are involved in the payment. Being implemented by smart contracts, DACs can work automatically without human intervention, consequently saving cost. \end{itemize} \subsection{Architecture of Blockchain of Things} \label{subsec:architecture} We propose the architecture of BCoT shown in Fig. \ref{fig:ibot}. In this architecture, the blockchain-composite layer serves as a middleware between IoT and industrial applications. This design has two merits: 1) offering an abstraction from the lower layers in IoT and 2) providing users with blockchain-based services. On the one hand, the blockchain-composite layer hides the heterogeneity of the lower layers (like the perception layer and communication layer in IoT). On the other hand, the blockchain-composite layer offers a number of blockchain-based services, which are essentially application programming interfaces (APIs) that support various industrial applications. As a result, the difficulty of developing industrial applications is also lowered thanks to the abstraction achieved by the blockchain-composite layer.
\begin{figure*}[t] \centering \includegraphics[width=14.8cm]{IoTEdgeCloud.pdf} \caption{Deployment scenario of BCoT} \label{fig:IoTEdgeCloud} \end{figure*} In particular, the blockchain-composite layer consists of five sub-layers, as shown in Fig. \ref{fig:ibot-layer} (from bottom to top): \begin{enumerate} \item \emph{Data sub-layer} collects the IoT data from the lower layers (\textit{e.g.}, the perception layer) and wraps the encrypted data with digital signatures via asymmetric cryptographic algorithms and hash functions. The consecutively-connected data blocks then form the blockchain after distributed validation. Different blockchain platforms may choose different cryptographic algorithms and hash functions; for example, the Bitcoin blockchain chooses SHA-256 as the hash function and the elliptic curve digital signature algorithm (ECDSA) as the signature algorithm. \item \emph{Network sub-layer} is essentially a P2P overlay network running on top of the communication layer. The overlay network consists of either virtual or physical links connecting nodes in the underlying communication networks (\textit{i.e.}, wired/wireless communication networks). A node simply broadcasts a block of transactions to its connected peers. Upon receiving the block of transactions, other peers verify it locally; if it is valid, the block is further propagated to other nodes through the overlay network. \item \emph{Consensus sub-layer} is mainly responsible for the distributed consensus on the trustfulness of a block. The consensus can be achieved by various consensus algorithms, such as PoW, PoS, PBFT and DPOS (as explained in Section \ref{subsec:consensus}). It is worth mentioning that block propagation mechanisms (such as relay-network propagation and advertisement-based propagation \cite{LI:FGCS2017}) are a prerequisite for the distributed consensus protocols.
\item \emph{Incentive sub-layer} is responsible for the following tasks: 1) issuing digital currency, 2) distributing digital currency, 3) designing reward mechanisms (especially for miners) and 4) handling transaction costs. In particular, it is important to design an appropriate monetary policy for the digital currency (\textit{i.e.}, money creation and distribution) and to distribute rewards to participants who contribute to the distributed consensus (\textit{i.e.}, mining). \item \emph{Service sub-layer} provides users with blockchain-based services for various industrial sectors, including manufacturing, logistics, supply chains, the food industry and utilities. Blockchain as a service (BaaS) can be achieved by smart contracts, which are automatically triggered when a specific event occurs. For example, a payment contract is automatically executed when a product is well received by a consumer. \end{enumerate} It is worth mentioning that the network sub-layer established on top of the communication layer is an abstraction of the underlying communication networks, consequently offering universal network access across different networks, as shown in Fig. \ref{fig:ibot-deployment}. Fig. \ref{fig:ibot-deployment} also shows the architecture of a blockchain node, which essentially includes the blockchain data and other elements in the data sub-layer. \subsection{Deployment of BCoT} \label{subsec:deployment} The realistic deployment of BCoT is of great importance. However, due to the constraints of IoT devices, it is challenging to store the whole blockchain at IoT devices. In particular, there are two modes of storing the blockchain data \cite{REYNA:2018}: i) \emph{full storage}, in which the entire blockchain is stored, and ii) \emph{partial storage}, in which only a subset of the data blocks is stored locally. Accordingly, we name the nodes with full storage of the blockchain data \emph{full nodes} and the nodes with partial storage \emph{lightweight nodes}.
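How a lightweight node can validate a transaction while storing only hash values can be sketched with a simplified Merkle-proof (SPV-style) check: the light node keeps the Merkle root, and a full node supplies the sibling hashes along the path to the root. The helper names are illustrative, and real Bitcoin SPV uses double SHA-256 over transaction IDs inside block headers:

```python
import hashlib

def H(x):
    return hashlib.sha256(x).digest()

def merkle_root(txs):
    # Bottom-up Merkle tree over transaction hashes (duplicate last if odd).
    level = [H(tx) for tx in txs]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(txs, index):
    # Collect sibling hashes from the leaf up to the root (full-node side).
    level, proof = [H(tx) for tx in txs], []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sib = index ^ 1
        proof.append((level[sib], sib < index))  # (hash, sibling-is-left?)
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify_proof(tx, proof, root):
    # Light-node side: recompute the root from the leaf and the proof only.
    h = H(tx)
    for sibling, is_left in proof:
        h = H(sibling + h) if is_left else H(h + sibling)
    return h == root

txs = [b"tx0", b"tx1", b"tx2", b"tx3"]
root = merkle_root(txs)           # all a light node needs to store
proof = merkle_proof(txs, 2)      # supplied on demand by a full node
assert verify_proof(b"tx2", proof, root)
assert not verify_proof(b"bogus", proof, root)
```

The proof size grows only logarithmically with the number of transactions, which is why this check fits on resource-constrained IoT devices while the full transaction set stays on cloud or edge servers.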
In practice, a full node can be a cloud server or an edge server with adequate computing resources, since it requires a large storage space to save the entire blockchain (\textit{e.g.}, the whole Bitcoin blockchain occupied nearly 185 GB at the end of September 2018 according to a statistics report\footnote{https://www.statista.com/statistics/647523/worldwide-bitcoin-blockchain-size/}) and strong computing capability for solving consensus puzzles (\textit{i.e.}, mining). On the other hand, resource-constrained IoT devices (\textit{e.g.}, sensors, IoT objects) can be lightweight nodes that validate the trustfulness of a transaction without downloading or saving the whole blockchain (\textit{i.e.}, only saving partial blockchain data such as hash values). It is worth mentioning that lightweight nodes rely heavily on full nodes. Fig. \ref{fig:IoTEdgeCloud} presents a possible deployment scenario of BCoT, in which cloud servers and edge servers may store the whole (or partial) blockchain data, while IoT devices may only save partial blockchain data. In addition to the deployment of BCoT, there are also several possible interaction manners between IoT and blockchain \cite{MSAli:CST2018}: (i) direct interaction between IoT and blockchain, in which IoT devices directly access blockchain data saved at edge servers co-located with IoT gateways, macro base stations (MBS) or small BSs; (ii) direct interaction between IoT nodes, in which IoT nodes directly exchange or access partial blockchain data via D2D links; and (iii) hybrid interaction of cloud and edge servers with IoT devices, in which IoT devices interact with blockchain data through edge/cloud servers. There are several initiatives addressing the configuration and initialization of blockchains at edge servers or IoT devices. For example, Raspnode\footnote{http://raspnode.com/} is a project mainly for installing Bitcoin and other blockchains on Raspberry Pi micro computers.
EthArmbian\footnote{http://raspnode.com/} offers a customized Ubuntu Linux image for ARM devices, each of which can serve as an Ethereum node. Despite these initiatives, most IoT devices are still lightweight nodes due to their limited storage. \section{Blockchain for 5G Beyond in IoT} \label{sec:5G} \begin{figure*}[t] \centering \includegraphics[width=14.8cm]{5Gbeyond.pdf} \caption{Blockchain for 5G Beyond Networks in IoT} \label{fig:5Gbeyond} \end{figure*} Although blockchain technology is promising for IoT, there are still many research issues to be addressed before blockchain can be integrated with IoT, especially in next-generation networks (\textit{i.e.}, 5G-beyond or 6G networks), which play a critical role in constructing the infrastructure for blockchains. Fig. \ref{fig:5Gbeyond} illustrates the potential brought by blockchain to 5G-beyond networks from the perspectives of communications, network management and computing management. We explain them in detail as follows. \subsection{Blockchain for communications} The growing demands of mobile data traffic are driving more efficient resource management in fifth generation (5G) communication systems. For example, radio spectrum is one of the most important resources \cite{MASSARO:2017}, and radio spectrum management typically includes spectrum auction and spectrum sharing. A recent speech \cite{FCC:whitepaper18} by Federal Communications Commission (FCC) commissioner J. Rosenworcel suggests that blockchain technology could be used to achieve dynamic and secure spectrum management in 5G and 5G-beyond (\emph{aka} 6G) communication systems \cite{Saracco:6G18,Gatherer:6G18}. The benefits of using blockchains for 5G-beyond networks lie in secure and traceable transaction management without the necessity of a central intermediary, consequently saving management cost. Ref.
\cite{Seppo:2018} gives several use cases to illustrate that blockchain technology can benefit radio spectrum sharing in terms of trustfulness, consensus and cost reduction. Moreover, Kotobi and Bilen \cite{Kotobi:VTM2108} put forth a blockchain-based protocol to secure spectrum sharing between primary users and cognitive users in wireless communication systems. In addition, blockchain may potentially help to share link conditions among multiple IoT nodes while preserving privacy, consequently improving spectral efficiency via traffic optimization \cite{Kure:IoTJ19}. Beyond radio spectrum management, blockchains also have the potential to provide users with improved mobile services. For example, 5G networks typically consist of a number of fragmented heterogeneous networks. Blockchains built on top of the network layer can help to integrate different networks with the provision of seamless access between them. Moreover, smart contracts can automate the procedure of provisions and agreements between network operators and subscribers while greatly reducing operational cost \cite{Huawei:whitepaper17}. The work of \cite{Sheng:ICBC2018} also shows that a blockchain-based system can help operating nodes to improve their operational and service capabilities. In the future, the synthesis of blockchains and big data analytics can help service providers to extract valuable insights from the transactions of subscribers and offer better services to users \cite{hndai:BDAWireless2019}. \subsection{Blockchain for network management} Recently, software-defined networking (SDN) technology has brought flexibility and scalability to distributed IoT \cite{Bera:IoTJ2017}. However, it is shown in \cite{Kalkan:ComMag2017} that the centralization of SDN can also result in a single point of failure. Moreover, incumbent SDN devices (such as gateways) are also incapable of conducting computation-intensive analysis of data traffic.
The integration of blockchain technology with SDN can overcome these disadvantages of SDN. For example, the work of \cite{Sharma:ComMag2017} proposes a secure blockchain-based SDN framework for IoT, in which a blockchain-based scheme updates the flow rule table in a secure way without the necessity of an intermediary. In addition, blockchain can also help to secure the network management of network function virtualization (NFV). In particular, it is shown in \cite{Alvarenga:NOMS18} that the integration of blockchain with NFV can ensure that the configuration of NFV is immutable, auditable, non-repudiable, consistent and anonymous; a prototype of the proposed architecture was also developed and implemented in that work. In addition to SDN and NFV, the advent of network slicing technologies \cite{Afolabi:CST2018} brings agility and flexibility to networks to support different functional and performance requirements. As mentioned in Section \ref{sec:IBoT}, different industrial sectors have diverse application demands on blockchains. For example, a single blockchain is typically used in digital-currency-like applications, while an enterprise may maintain several blockchains to serve different purposes. In particular, in \cite{Esposito:IEEECloudComp2016}, four isolated blockchains are dedicated to enterprise resource planning (ERP), product lifecycle management (PLM), manufacturing execution systems (MES) and customer relationship management (CRM), respectively. Network slicing can essentially offer a solution to the diverse demands of blockchain applications in mobile edge computing. For example, a network instance can be created for the provision of a specific blockchain service on top of network slicing and network virtualization. However, it is necessary to optimize and allocate both network and computing resources to fulfill the diverse demands in the composite environment of mobile edge computing and cloud computing.
Moreover, the integration of blockchain and network slicing technologies can also support reliable content sharing in content-centric networks (CCNs) \cite{Ortega:VTM2018} and privacy preservation in data sharing in 5G networks \cite{Fan:IETCom2018}. \subsection{Blockchain for computing management} Due to the resource constraints of IoT devices, massive IoT data has typically been uploaded to remote cloud servers for further processing. However, the pure cloud-based computing paradigm also causes network traffic bottlenecks, long latency, context unawareness and privacy exposure \cite{CChen:IEEENet2018}, thereby limiting the scalability of IoT. Recently, mobile edge computing (MEC) \cite{Abbas:IoTJ2018} has become a crucial complement to cloud computing by offloading computing tasks from distant cloud servers to MEC servers typically installed at IoT gateways, WiFi APs, and macro and small base stations, which are close to users. In this manner, context-aware, latency-critical and less computation-intensive tasks can be migrated from remote cloud servers to local MEC servers, thereby improving responsiveness, privacy preservation and context awareness. Although blockchain technology has been applied in a variety of fields due to its capability of establishing trust in a decentralized fashion, there are still a number of issues to be solved before MEC can be used in BCoT \cite{ZXiong:ComMag2018}. In contrast to cloud servers with strong computing capability and extensive storage space, mobile edge servers usually have much weaker capabilities. Moreover, mobile edge servers are heterogeneous in terms of computing capability, main memory, storage space and network connection. As a result, mobile edge servers cannot accommodate the computational demands alone. For example, a mobile edge server may not be able to solve the consensus puzzle in blockchains, while a cloud server can serve this purpose.
Therefore, it is worthwhile to investigate the orchestration of mobile edge computing and cloud computing for the provision of blockchain services \cite{MLiu:TVT18}. \subsection{Orchestration of cloud and edge computing with blockchain} In the orchestration of cloud and edge computing with blockchain, there are several challenges, including computational task offloading and incentivizing resource sharing. Offloading computational tasks to edge servers can significantly reduce delay. Therefore, it is crucial to conduct edge-cloud interoperation \cite{Yang:NetMag2017}. Nevertheless, it can cause a performance bottleneck and a single point of failure if all the nodes offload their tasks to the same MEC server. The work of \cite{YDai:IoTJ19} presents an offloading method with consideration of load balancing among multiple MEC servers. Meanwhile, it is worthwhile to investigate how to incentivize both edge servers and cloud servers. For example, \cite{ZZhou:TVT19} presents a contract-matching approach to allocate computational resources and assign tasks while incentivizing edge servers and cloud servers effectively. Moreover, it is challenging to design an optimal offloading solution that jointly considers spectrum, computation and energy consumption. The work of \cite{YYang:IoTJ18} essentially provides a solution to optimize the offloading energy consumption with consideration of feasible modulation schemes and task scheduling. However, most existing studies assume that a task is executed either entirely at an edge server or entirely in the cloud. In realistic applications, a task can be partitioned into multiple sub-tasks with dependencies among them, and each sub-task can be executed at either the edge server or the cloud server. It is worthwhile to investigate task partitioning with consideration of sub-task dependencies in blockchains in the future.
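The edge-versus-cloud placement of dependent sub-tasks discussed above can be sketched with a toy greedy policy. The cost model, relative speeds and task figures below are illustrative assumptions, not a scheme from any of the cited works:

```python
# Minimal sketch: place each sub-task of a dependency DAG on the edge
# or the cloud, greedily picking the cheaper estimated latency in
# topological order. All constants are illustrative assumptions.

EDGE_SPEED, CLOUD_SPEED = 1.0, 4.0   # relative compute speeds
CLOUD_RTT = 0.2                      # extra round-trip per cloud task (s)

def place_subtasks(subtasks, deps):
    """subtasks: {name: work_units}; deps: {name: [prerequisites]}.
    Returns {name: 'edge' or 'cloud'}, chosen greedily once all
    prerequisites of a sub-task have been placed."""
    placement, done = {}, set()
    while len(done) < len(subtasks):
        for name, work in subtasks.items():
            if name in done or any(d not in done for d in deps.get(name, [])):
                continue
            edge_cost = work / EDGE_SPEED
            cloud_cost = work / CLOUD_SPEED + CLOUD_RTT
            placement[name] = "edge" if edge_cost <= cloud_cost else "cloud"
            done.add(name)
    return placement

# A tiny pipeline: sensing feeds filtering, which feeds mining.
tasks = {"sense": 0.1, "filter": 0.4, "mine": 8.0}
deps = {"filter": ["sense"], "mine": ["filter"]}
print(place_subtasks(tasks, deps))
```

Under these numbers, the lightweight sensing step stays at the edge while the heavier filtering and consensus-mining steps are offloaded; a real scheduler would also account for transfer sizes, queuing and energy, as the cited works do.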
\section{Applications of Blockchain of Things} \label{sec:IBoTapp} There is a growing trend in applying blockchain in IoT since blockchain technologies can help to overcome the challenges of IoT. We now provide an overview of the applications of BCoT. It is worth mentioning that there is a wide diversity of blockchain applications (ranging from smart manufacturing to the internet of vehicles and unmanned aerial vehicles). In this paper, we mainly focus on the industrial applications of BCoT. We roughly categorize the applications of BCoT into six types, as shown in Fig. \ref{fig:applications}. \begin{figure}[t] \centering \includegraphics[width=9.1cm]{applications.pdf} \caption{Applications of Blockchain of Things} \label{fig:applications} \end{figure} \subsection{Smart manufacturing} The manufacturing industry is experiencing an upgrade from automated manufacturing to ``smart manufacturing'' \cite{Kusiak:IJPR2018}. Big data analytics on manufacturing data plays an important role in this upgrading process. Massive data is generated during every phase of the product life cycle, consisting of product design, raw-material supply, manufacturing, distribution, retail and after-sales service. However, manufacturing data is highly fragmented, making data aggregation and data analytics difficult. BCoT can address this interoperability issue by interconnecting IoT systems via P2P networks and allowing data sharing across industrial sectors. For example, several distributed blockchains can be constructed, each serving one or more sectors. BCoT can also improve the security of smart manufacturing. One of the major bottlenecks limiting the upgrading of factories is that IoT systems have been maintained in a centralized way. For example, IoT firmware needs to be upgraded regularly to remedy security breaches.
However, most firmware updates are downloaded from a central server and then manually installed on IoT devices, which is expensive and inefficient in distributed IoT. The work of \cite{christidis2016blockchains} presents an automatic firmware-upgrading solution based on smart contracts and blockchains. In particular, smart contracts describing the firmware-upgrading policies (\textit{e.g.}, when and where to upgrade firmware) are deployed across the whole industrial network. Devices can then download firmware and verify it against the hashes recorded on-chain as the smart contracts are automatically executed. As a result, the security-maintenance cost can be greatly reduced. In addition, a decentralized blockchain-based automatic production platform was proposed in \cite{JWan:TII19} to offer better security and privacy protection than conventional centralized architectures. Furthermore, a blockchain-based mobile crowdsensing system was proposed to solve the incentive issue with data quality assurance in smart manufacturing \cite{JHuang:TII20}. \subsection{Supply chain management} A product often consists of multiple parts provided by different manufacturers across countries. However, some forged (or low-quality) parts may seep into the supply chain, and it is quite expensive to apply anti-fraud technologies to every part of a product. The integration of blockchain and IoT can solve this problem. In particular, every part is associated with a unique ID upon creation, and an immutable timestamp is attached to this ID. The identification of every part can then be saved in a blockchain, which is tamper-resistant and traceable. For example, the work of \cite{Konstantinidis:2018} shows that the part ownership of a product can be authenticated through a blockchain-based system.
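The part-ID and timestamp chaining described above can be illustrated with a minimal tamper-evident ledger. This is a deliberate simplification (no consensus or networking layer), and the record fields are hypothetical:

```python
# Tamper-evident provenance sketch: each part record carries the hash of
# the previous record, mimicking how a blockchain chains part IDs and
# timestamps. Simplified: a single local chain, no consensus layer.
import hashlib
import json

def add_record(chain, part_id, event, timestamp):
    """Append a hash-linked record for a part life-cycle event."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"part_id": part_id, "event": event,
            "timestamp": timestamp, "prev_hash": prev_hash}
    body["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append(body)

def verify(chain):
    """Recompute every hash; any tampering breaks the chain."""
    prev = "0" * 64
    for rec in chain:
        body = {k: v for k, v in rec.items() if k != "hash"}
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if body["prev_hash"] != prev or recomputed != rec["hash"]:
            return False
        prev = rec["hash"]
    return True

chain = []
add_record(chain, "PART-001", "manufactured", "2020-01-02T08:00Z")
add_record(chain, "PART-001", "shipped", "2020-01-05T11:30Z")
print(verify(chain))           # True
chain[0]["event"] = "forged"   # any retroactive edit is detected
print(verify(chain))           # False
```

Replicating such a chain across supply-chain participants, with a consensus protocol deciding which records are appended, is what the blockchain-based systems cited above provide.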
Moreover, the work of \cite{Kim:ISAFM2018} presents a traceability ontology integrating IoT and blockchain technologies based on the Ethereum blockchain platform. The proposed framework has been demonstrated to guarantee the data provenance of the supply chain. On the other hand, BCoT can also be used to reduce the costs of after-sale services in supply chain management. The work of \cite{tapscott2017blockchain} presents a use case of motor insurance, in which the settlement of claims can be automated via smart contracts based on blockchains, thereby improving efficiency and reducing claim-processing time. Moreover, it is shown in \cite{Kshetri:2018} that integrating blockchain with IoT can help to reduce cost, increase speed and mitigate risk in supply chain management. Furthermore, a blockchain-based machine learning platform \cite{ZLi:TII19} was proposed to secure data sharing among different enterprises in order to improve the quality of customer service. \subsection{Food industry} BCoT can enhance the visibility of the product life cycle, especially in the food industry. In particular, the traceability of food products is a necessity to ensure food safety. However, it is challenging for incumbent IoT to guarantee food traceability across the whole food supply chain \cite{DTse:IEEM2017}. For example, a food company may be provisioned by a number of suppliers. Traceability requires digitizing the information on raw materials from their sources through every sector of food manufacturing. In this procedure, blockchain technologies can ensure the traceability and provenance of food-industry data. There are several proposals in this direction. For example, the work of \cite{FTian:2016} proposed using RFID and blockchain technology to establish a supply chain platform from agriculture to food production in China. This system has been demonstrated to guarantee the traceability of food supply-chain data.
Meanwhile, the work of \cite{Sander:BFJ2018} shows that blockchain technologies can help to improve food safety by making food products traceable. Moreover, it is shown in \cite{Rafael:ICCSA18} that integrating blockchain into the food supply chain allows customers to track the whole process of food production. The authors also gave a use case of using blockchain for the organic coffee industry in Colombia. Furthermore, \cite{QLin:Access19} proposes a food safety traceability system based on blockchain and Electronic Product Code (EPC) IoT tags. In particular, this system can prevent data tampering and privacy exposure via smart contracts. A prototype of the proposed architecture has been implemented to demonstrate its effectiveness. \subsection{Smart grid} The appearance of distributed renewable energy resources is reshaping the role of energy consumers from pure consumers to \emph{prosumers}, who can also generate energy (\textit{e.g.}, from renewable energy resources) in addition to consuming it \cite{ZHANG:AE2018}. Energy prosumers who have extra energy can sell it to other consumers. We name the energy trading between a prosumer and a consumer (\textit{i.e.}, peers) P2P energy trading. However, it is challenging to ensure secure and trusted energy trading between two parties in a distributed environment. The appearance of blockchain technology brings opportunities to secure P2P energy trading. Some recent studies propose using blockchain technologies to tackle these challenges. For example, the work in \cite{ZLi:TII2018} developed a secure energy-trading system based on consortium blockchains. This system can greatly reduce trading costs by replacing a central broker with the distributed consensus of blockchains. Moreover, Aitzhan and Svetinovic \cite{Aitzhan:TDSC2018} developed a decentralized energy-trading system based on blockchain technology.
This system demonstrated its effectiveness in protecting confidential energy-trading transactions in decentralized smart grid systems. Furthermore, the work of \cite{Claudia:Sensors18} proposed a blockchain-based mechanism to provide secure and transparent energy demand-side management on the smart grid. \subsection{Health care} Health care has become one of the major socio-economic problems due to the aging population, which also poses new challenges to traditional health-care services because of limited hospital resources. Recent advances in wearable health-care devices, as well as big data analytics (BDA) on health-care data, bring opportunities to promote remote health-care services at home or in clinics. As a result, the burden on hospital resources can potentially be relieved \cite{Wang:IEEENet16}. For example, senior citizens staying at home can wear health-care devices on their bodies. These wearable devices continuously measure and collect health-care data including heart rate, blood sugar and blood pressure readings. Doctors and health-care teams can access the health-care data at any time and anywhere via health-care networks. However, accessing health-care data also brings privacy and security concerns. The vulnerability of health-care devices and the heterogeneity of health-care networks pose challenges in preserving the privacy and ensuring the security of health-care data. Incorporating blockchains into health-care networks can potentially overcome these challenges. For example, the work of \cite{Esposito:IEEECloudComp2018} shows that using blockchain technology can protect health-care data stored in cloud servers. Meanwhile, Griggs \textit{et al.} \cite{Griggs2018} developed a blockchain-based system to assure private health-care data management.
In particular, the health-care data generated by medical sensors can be automatically collected and transmitted to the system via executing smart contracts, consequently supporting real-time patient monitoring. During the whole procedure, privacy is preserved via the underlying blockchains. Moreover, the work of \cite{Bhuiyan:BBD2018} proposed a blockchain-based solution to manage individual health-care data and support data sharing across different hospitals, medical centers, insurance companies and patients. During the whole process, the privacy and security of health-care data can be assured. Furthermore, Sun \textit{et al.} \cite{YSun:ICCCN18} put forth an attribute-based signature scheme for decentralized health-care blockchain systems. On one hand, this scheme can verify the authenticity of health-care data and the identity of the health-care data owner. On the other hand, it can also preserve the privacy of the health-care data owner. The recent work \cite{Rahman:Access18} presents an in-home therapy-management framework integrating IoT and a blockchain-based MEC scheme to provide secrecy and anonymity assurance. The experimental results on a prototype demonstrate the effectiveness of the proposed system. In addition, blockchain also enables tracing patients who are infected by contagious viruses such as Severe Acute Respiratory Syndrome (SARS), Middle East Respiratory Syndrome (MERS) and the Wuhan novel coronavirus~\cite{Wu:Nature2020}. In particular, infected or suspected patients wearing IoT devices can have their trajectories tracked so that countermeasures such as quarantine can be taken, while the privacy of patients is protected via blockchains.
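As a greatly simplified stand-in for the signature schemes surveyed above, the sketch below tags each health-care reading with an HMAC so that a verifier holding the shared key can check authenticity and detect tampering. Real deployments would use asymmetric or attribute-based signatures; the key and readings here are hypothetical:

```python
# Simplified authenticity check for health-care readings: a wearable
# tags each reading with an HMAC; a hospital holding the shared key
# can verify that the record is genuine and untampered.
# (Stand-in for the signature schemes in the cited works; the key
# and record fields are hypothetical.)
import hashlib
import hmac
import json

DEVICE_KEY = b"per-device-secret"   # assumed provisioned at enrollment

def sign_reading(reading):
    """Compute an HMAC-SHA256 tag over the canonicalized reading."""
    payload = json.dumps(reading, sort_keys=True).encode()
    return hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()

def verify_reading(reading, tag):
    """Constant-time comparison against a freshly computed tag."""
    return hmac.compare_digest(sign_reading(reading), tag)

reading = {"patient": "anon-17", "metric": "heart_rate", "value": 72}
tag = sign_reading(reading)
print(verify_reading(reading, tag))   # True: genuine record
reading["value"] = 120                # tampered record
print(verify_reading(reading, tag))   # False: tag no longer matches
```

Unlike this shared-key sketch, the attribute-based scheme of \cite{YSun:ICCCN18} additionally hides the signer's identity while still proving data ownership, which a plain HMAC cannot do.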
\begin{table}[t] \centering \caption{Comparison of applications of blockchain of things} \label{tab:applications} \renewcommand{\arraystretch}{1.5} \begin{tabular}{m{2.8cm}m{5.5cm}} \hline \textbf{Application} & \textbf{Benefits} \\ \hline \multirow{3}{2.8cm}{Smart manufacturing \cite{christidis2016blockchains,Kusiak:IJPR2018,JWan:TII19}} & \checkmark Improving interoperability \\ & \checkmark Automating P2P business trading \\ & \checkmark Reducing cost for trusted third party \\ \hline \multirow{3}{2.8cm}{Supply chain management \cite{Konstantinidis:2018,Kim:ISAFM2018,tapscott2017blockchain,Kshetri:2018,ZLi:TII19}} & \checkmark Assuring data provenance \\ & \checkmark Reducing the costs in after-sale services \\ & \checkmark Mitigating the supply chain risk \\ \hline \multirow{2}{2.8cm}{Food industry \cite{DTse:IEEM2017,FTian:2016,Sander:BFJ2018,Rafael:ICCSA18,QLin:Access19}} & \checkmark Improving data traceability \\ & \checkmark Enhancing food safety \\ \hline \multirow{3}{2.8cm}{Smart grid \cite{ZHANG:AE2018,ZLi:TII2018,Aitzhan:TDSC2018,Claudia:Sensors18}} & \checkmark Securing energy trading \\ & \checkmark Improving transparency \\ & \checkmark Preserving privacy \\ \hline \multirow{3}{2.8cm}{Health care \cite{Wang:IEEENet16,Esposito:IEEECloudComp2018,Griggs2018,Bhuiyan:BBD2018,YSun:ICCCN18,Rahman:Access18,Wu:Nature2020}} & \checkmark Assuring security \\ & \checkmark Preserving privacy \\ & \checkmark Verifying authenticity \\ \hline \multirow{3}{2.8cm}{IoV and UAVs \cite{ZYang:IOTJ2018,HLiu:IEEENet2018,JKang:TII2017,JKang:IoTJ19,YDai:WCMag2019,YZeng:ComMag2016,kimchi2017unmanned,WANG:AC2016,Cheng:ComMag2018,Kapitonov:REDUAS2017,kumar2018unmanned}} & \checkmark Assuring trustworthiness of messages \\ & \checkmark Securing energy-trading in electric vehicles \\ & \checkmark Guaranteeing mutual-confidence among UAVs \\ \hline \end{tabular} \end{table}% \subsection{Internet of vehicles and unmanned aerial vehicles} Internet of vehicles (IoV) essentially
integrates vehicle-to-vehicle networks, vehicle-to-roadside networks, vehicle-to-infrastructure networks and vehicle-to-pedestrian networks. The decentralization, heterogeneity and non-trustworthiness of IoV pose challenges in securing message transmission and transaction execution. Integrating blockchain with IoV can tackle these challenges. For example, the work of \cite{ZYang:IOTJ2018} developed a trust-management platform for IoV on top of blockchains. In particular, the trustworthiness of messages can be validated via PoW/PoS consensus executed by roadside units (RSUs). Moreover, blockchain technologies can be used to protect both the energy and information interactions between electric vehicles \cite{HLiu:IEEENet2018} and hybrid electric vehicles in smart grids \cite{JKang:TII2017,JKang:IoTJ19}. In the future, incorporating artificial intelligence, mobile edge computing and blockchain can further optimize resource allocation in IoV \cite{YDai:WCMag2019}. Recently, unmanned aerial vehicle (UAV) communication networks have been used to compensate for the insufficient coverage of wireless communication networks \cite{YZeng:ComMag2016}. Meanwhile, UAVs can also be used to deliver product items \cite{kimchi2017unmanned} and acquire real-time traffic flow data \cite{WANG:AC2016}. Moreover, the recent study of \cite{Cheng:ComMag2018} also shows that UAVs can be used to support content-centric networking and mobile edge computing. However, it is challenging to assure trustworthiness in decentralized, untrusted UAV networks and to restrain misbehaving UAVs \cite{BLi:IoTJ19}. The integration of blockchain technology with UAV networks can guarantee mutual confidence among UAVs. The work of \cite{Kapitonov:REDUAS2017} developed an autonomous platform based on the Ethereum blockchain to provide trust management for UAVs.
Moreover, IBM \cite{kumar2018unmanned} recently applied for a patent to develop a blockchain-based system to preserve privacy and assure security of UAV data. In particular, blocks in the blockchain store information related to UAVs, including model type, manufacturer and proximity to restricted regions. Consequently, the misbehavior of UAVs can be detected and identified in time. \emph{Summary.} Table \ref{tab:applications} summarizes major BCoT applications. In particular, it is shown in Table \ref{tab:applications} that incorporating blockchain with IoT can bring a number of benefits to the aforementioned applications. In summary, BCoT has merits such as reducing the cost of trusted third parties, assuring security, improving data traceability, verifying data authenticity and preserving privacy. \section{Open research issues of Blockchain of Things} \label{sec:chall-ibot} Although the convergence of blockchain and IoT brings a number of opportunities for upgrading the industry, there are many challenges to be addressed before the potential of BCoT can be fully unleashed. In this section, we identify several major challenges in incorporating blockchain into IoT and discuss the potential solutions. Fig. \ref{fig:futuredir} summarizes the open research issues for blockchain of things. \begin{figure*}[t] \centering \includegraphics[width=16.8cm]{futuredir.pdf} \caption{Open research issues for blockchain of things} \label{fig:futuredir} \end{figure*} \subsection{Resource constraints} Most IoT devices are resource-constrained. For example, sensors, RFID tags and smart meters have inferior computing capability, limited storage space, low battery power and poor network connection capability. However, the decentralized consensus algorithms of blockchains often require extensive computing power and energy consumption. For example, PoW in Bitcoin is shown to have high energy consumption \cite{REYNA:2018}.
Therefore, consensus mechanisms with huge energy consumption may not be feasible for low-power IoT devices. On the other hand, the bulky size of blockchain data also makes it infeasible to fully deploy blockchains across IoT. For example, the Bitcoin blockchain had reached nearly 185 GB by the end of September 2018; it is impossible to fully store the whole blockchain at each IoT device. Meanwhile, the massive IoT data generated in a near-real-time manner exacerbates this situation. Moreover, blockchains are mainly designed for scenarios with stable network connections, an assumption that often fails in IoT, which suffers from the poor network connections of IoT devices and network instability due to the failure of nodes (\textit{e.g.}, battery depletion). \emph{Potential solutions.} Incorporating MEC and cloud computing technologies into BCoT may potentially overcome the resource constraints of IoT devices. For example, cloud servers or some MEC servers may serve as full nodes that store the whole blockchain and participate in most blockchain operations, such as initiating and validating transactions (\textit{i.e.}, mining), while IoT devices may serve as lightweight nodes that only store partial blockchain data (or even only hash values of blockchain data) and undertake less computation-intensive tasks, such as initiating transactions \cite{YDai:TVT18}. The orchestration of MEC and cloud computing thus becomes an important issue for resource allocation in BCoT \cite{Tran:ComMag17}. \subsection{Security vulnerability} Although incorporating blockchain technologies into IoT can improve the security of IoT via the encryption and digital signatures brought by blockchains, security is still a major concern for BCoT due to the vulnerabilities of both IoT systems and blockchain systems.
On one hand, there is a growing trend of deploying wireless networks in industrial environments due to the feasibility and scalability of wireless communication systems. However, the open wireless medium also leaves IoT vulnerable to security breaches such as passive eavesdropping \cite{li2016analytical}, jamming and replay attacks \cite{JLin:2017}. Moreover, due to the resource constraints of IoT devices, conventional heavyweight encryption algorithms may not be feasible for IoT \cite{YYang:IoTJ2017}. In addition, it is also challenging to manage the keys (which are crucial to encryption algorithms) in a distributed environment. Meanwhile, blockchain systems have their own security vulnerabilities, such as program defects in smart contracts \cite{LI:FGCS2017}. In particular, it is shown in \cite{apostolaki2017hijacking} that malicious users can exploit the Border Gateway Protocol (BGP) routing scheme to hijack blockchain messages, thereby resulting in higher delays in block broadcasting. The work of \cite{ADHAMI2018} also shows that a Decentralized Autonomous Organization (DAO) attack stole \$50 million worth of Ether by leveraging a vulnerability of smart contracts. \emph{Potential solutions.} Security vulnerabilities of BCoT can be remedied via either the security enhancement of IoT systems or loophole repairing in blockchains. For example, a cooperative jamming scheme \cite{LHu:IoTJ18} was explored to improve the security of IoT systems without requiring extra hardware for existing IoT nodes. Meanwhile, \cite{WHu:IoTJ19} exploits key generation based on the reciprocity and randomness of wireless channels in Long Range (LoRa) IoT networks. From the perspective of repairing blockchain loopholes, there have also been some advances. In particular, the recent work of \cite{Apostolaki:2018} proposes a secure relay network for blockchains, namely SABRE, which can protect blockchains from BGP routing attacks.
Regarding DAO attacks, Corda and Stellar trade the expressiveness of smart contracts for their verifiability \cite{Dinh:TKDE2018} so as to avoid such attacks. \subsection{Privacy leakage} Blockchain technologies have some mechanisms to preserve a degree of privacy for the transaction records saved in blockchains. For example, transactions are made in Bitcoin via pseudonymous addresses instead of users' real identities, thereby ensuring a certain anonymity. Moreover, one-time accounts can be generated in Bitcoin to strengthen the anonymity of users. However, these protection schemes are not robust enough. For example, it is shown in \cite{MConti:CST2018} that user pseudonyms can be cracked by learning from and linking the multiple transactions associated with one common user. In addition, the full storage of transaction data on a blockchain can also lead to potential privacy leakage, as indicated in \cite{DORRI:FGCS2019}. \emph{Potential solutions.} Recently, coin-mixing schemes have been proposed to confuse attackers so that they cannot infer the exact number of real coins spent in a transaction. However, a recent study \cite{moser2018empirical} demonstrates the weakness of coin-mixing schemes via extensive realistic experiments based on Monero\footnote{A private digital currency platform (https://getmonero.org/)}. Moreover, the actual transaction can be deduced by leveraging the vulnerability of coin-mixing schemes. The work of \cite{DORRI:FGCS2019} presents a memory-optimized and flexible blockchain data-storage scheme, which can somewhat reduce the privacy-leakage risk. \subsection{Incentive mechanism in BCoT} An appropriate incentive mechanism is a benign stimulus to blockchain systems. For example, a number of Bitcoins (BTC) are rewarded to the miner who first solves the computationally difficult puzzle. Meanwhile, a transaction in Ethereum is charged a given fee (\textit{i.e.}, gas) to pay the miners for the execution of contracts.
Therefore, there are two issues in designing incentive mechanisms for blockchains: 1) the reward for proving (or mining) a block and 2) the compensation for processing a transaction (or a contract). However, it is challenging to design a proper incentive mechanism for BCoT that fulfills the requirements of different applications. Take digital currency platforms as an example, where miners are sensitive to the price of the digital currency. For instance, the BTC reward for a generated block is halved every 210,000 blocks \cite{saito2018make}. The reward decrement will discourage miners from contributing to the solution of the puzzle, consequently driving them to migrate to other blockchain platforms. Designing a proper reward and currency-issuance mechanism is therefore necessary to ensure the stability of blockchain systems. \emph{Potential solutions.} On the other hand, reputation and honesty are an impetus to users in private or consortium blockchain systems. Therefore, going beyond digital currency, reputation credits can be used as incentives in scenarios such as personal reputation systems \cite{yasin2016online}, the sharing economy \cite{bogner2016decentralised}, data provenance \cite{Liang:CCGRID2017} and the medication supply chain \cite{Darryl:ACT2017}. The recent work \cite{huang2019repchain} presents RepChain, which exploits the reputation of each node to build the incentive mechanism. \subsection{Difficulty in BDA in BCoT} There is a surge of IoT data generated in a near-real-time fashion. IoT data exhibits massive volume, heterogeneity and huge business value. Big data analytics (BDA) on IoT data can extract hidden values and enable intelligent decisions. However, it is challenging to apply conventional BDA schemes in BCoT for the following reasons: \begin{itemize} \item \emph{Conventional BDA schemes cannot be applied to IoT devices due to resource limitations}.
Since IoT devices have inferior computing capability, complicated BDA schemes cannot be deployed at IoT devices directly. Moreover, the bulky size of blockchain data also makes local storage of blockchain data at IoT devices infeasible. Although cloud computing can address these issues, uploading the data to remote cloud servers can also result in privacy breaches and long latency \cite{wang2015cloud}. \item \emph{It is difficult to conduct data analytics on anonymous blockchain data.} Blockchain technologies can protect data privacy via encryption and digital signatures on data records. However, data analytics often requires decrypting the data first. Since the decryption process is often time-consuming, data analytics becomes inefficient \cite{xkxiao:icde2018}. It is challenging to design data-analytics schemes that operate on blockchain data without decryption. \end{itemize} \emph{Potential solutions.} MEC serves as a crucial complement to cloud computing by offloading computing tasks from distant cloud servers to MEC servers in proximity to users. As a result, MEC can improve responsiveness, privacy preservation and context awareness in contrast to cloud computing. Therefore, offloading BDA tasks to MEC servers can potentially solve the privacy-leakage and long-latency issues of cloud computing with blockchain \cite{Dai:2019}. Regarding data analytics on anonymous blockchain data, there are some recent advances: 1) complex-network-based community detection \cite{Remy:2018} to identify multiple addresses associated with an identical user, 2) feature extraction from transaction patterns of Bitcoin blockchain data to identify payment relationships \cite{Tasca:JRF18}, and 3) analysis of user accounts and operation codes on Ethereum to detect Ponzi fraud behavior \cite{Chen:2018}. \subsection{Scalability of BCoT} The scalability of incumbent blockchains also limits the wide usage of blockchains in large-scale IoT.
The scalability of blockchains can be measured by the \emph{throughput} of transactions per second against the number of IoT nodes and the number of concurrent workloads \cite{Dinh:SIGMOD2017,Dinh:TKDE2018}. Many blockchain systems suffer from poor throughput. For example, it is shown in \cite{croman2016scaling} that Bitcoin can only process seven transactions per second. In contrast, VISA can process nearly 2,000 transactions per second and PayPal has a throughput of 170 transactions per second \cite{Vermeulen:2017,albrecht2018dynamics}. Ref. \cite{Conoscenti:AICCSA16} shows that the Bitcoin blockchain may not be suitable for IoT due to its poor scalability. In summary, incumbent blockchain systems may not be suitable for applications with a large volume of transactions, especially IoT. \emph{Potential solutions.} There are two possible directions for improving the scalability of blockchains in IoT: 1) designing more scalable consensus algorithms and 2) constructing private or consortium blockchains for IoT. Regarding 1), we can choose a consensus-localization strategy to improve transaction throughput. Meanwhile, we may adopt new blockchain structures such as the directed acyclic graph (DAG) \cite{lewenberg2015inclusive} to allow non-conflicting blocks from side chains to be assembled with the main chain, consequently reducing the cost of resolving bifurcation. In addition, we may consider integrating PoW with PBFT to improve the throughput of PoW, similar to the sharding protocol proposed in \cite{Luu:2016}, in which a less computationally intensive puzzle is first solved in PoW and consensus is then reached in multiple small groups. Regarding 2), transactions in private and consortium blockchains can be processed much faster than in public blockchains due to the fully controlled systems and the limited number of permitted users. Meanwhile, consensus can also be reached more easily in private and consortium blockchains.
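The throughput figures quoted above follow from a back-of-the-envelope model: transactions per second are bounded by the block size divided by the average transaction size and the block interval. The Bitcoin-like numbers below are illustrative:

```python
# Back-of-the-envelope throughput bound for a block-based blockchain:
# max tx/s = block size / average tx size / block interval.
# Numbers are rough, illustrative Bitcoin-like figures.

def max_tps(block_size_bytes, avg_tx_bytes, block_interval_s):
    return block_size_bytes / avg_tx_bytes / block_interval_s

# ~1 MB blocks, ~250-byte transactions, one block every ~600 s
print(round(max_tps(1_000_000, 250, 600), 1))   # roughly 6.7 tx/s
```

This makes concrete why raising throughput requires changing one of the three factors, which is exactly what larger blocks, shorter intervals, sharding and DAG structures attempt in different ways.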
Moreover, fully controlled blockchains also fulfill the requirement that an enterprise have control over different strategic systems, \textit{e.g.}, ERP, MES, PLM and CRM systems \cite{Esposito:IEEECloudComp2016,Dinh:TKDE2018}. Though there are some attempts such as GemOS \cite{GemOS}, Multichain \cite{MultiChain} and Hyperledger \cite{hybperledge:2015}, more mature private and consortium blockchain platforms serving specific industrial sectors are still expected in the future. \section{Conclusion} \label{sec:conc} Incumbent Internet of Things (IoT) systems face a number of challenges including heterogeneity, poor interoperability, resource constraints, and privacy and security vulnerabilities. The recent appearance of blockchain technologies offers a solution to these issues through enhanced interoperability, privacy, security, traceability and reliability. In this paper, we investigate the integration of blockchain with IoT. We name this synthesis of blockchain and IoT BCoT. We provide a comprehensive survey on BCoT. In particular, we first briefly introduce the Internet of Things and blockchain technologies. We then discuss the opportunities of BCoT and depict the architecture of BCoT. We next outline the research issues in blockchain for next-generation networks. We further discuss the potential applications of BCoT and outline open research directions in BCoT. \section*{Acknowledgement} This work was supported by the National Key Research and Development Program (2016YFB1000101), the National Natural Science Foundation of China (61722214 and U1811462), Macao Science and Technology Development Fund under Grant No. 0026/2018/A1, and the Program for Guangdong Introducing Innovative and Entrepreneurial Teams (2016ZT06D211). In addition, this project has also received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sk\l{}odowska-Curie grant agreement No 824019.
The authors would like to thank Gordon K.-T. Hon for his constructive comments.
\section{Introduction} The construction of confidence sets is one of the fundamental problems of statistical inference, along with parameter estimation and hypothesis testing. Consider a model $\{P_f: f\in\mathcal{F}\}$, indexed by a family of functions $\mathcal{F}$, and observe data (of sample size $n$) from the true distribution $P_{f_0}$, where $f_0\in\mathcal{F}$. For most applications, having a single point estimate $\hat{f}_n$ of the true parameter $f_0$ is not enough, and one desires to evaluate its performance in terms of a loss function, that is, to know how far it lies from $f_0$. Producing from the data a random set $C_n\subset\mathcal{F}$ that contains $f_0$ with a prescribed high probability $1-\alpha$ achieves this aim. In this work, we investigate the existence of \emph{adaptive honest confidence sets}. Since $f_0$ is unknown, we must insist that $C_n$ possesses the previous property not just for $f_0$, but for all $f\in\mathcal{F}$: we say that the confidence set $C_n$ is \emph{honest} if, at least for all sufficiently large $n$, $$ \inf_{f\in\mathcal{F}}P_f(f\in C_n) \geq 1-\alpha.$$ Furthermore, we desire the diameter of the set $C_n$ to shrink in $n$ as quickly as possible; however, typically the precise speed of this shrinkage depends on aspects of the unknown density $f_0$ such as its regularity, and so we find ourselves in an adaptation problem. We work in a \emph{density estimation model}: consider observations $X_1,\dots,X_n$ independent and identically distributed (i.i.d.) from a probability measure $P_{f_0}$ with probability density $f_0$. The sample space of the $X_i$'s will either be the $d-$dimensional torus $\mathbb{T}^d$ or $\mathbb{R}^d$. We then study procedures in a representative `two-class adaptation problem', where $f_0$ belongs to one of two classes $\mathcal{F}(r)$ and $\mathcal{F}(s)$ (to be precisely defined below), indexed by regularity parameters $r<s$, such that $\mathcal{F}(s)\subset\mathcal{F}(r)$.
An adaptive honest confidence set $C_n$ should satisfy the above honest coverage condition, and also have a diameter that shrinks at the minimax estimation rate of whichever class $f_0$ belongs to (typically the rate is faster for the smaller class $\mathcal{F}(s)$). The construction of such a confidence set involves assessing the accuracy with which one can estimate $f_0$, which turns out to be more challenging than point estimation, as qualitative aspects of the parameter need to be identified. This problem has primarily been studied for $L_p$ or related distances \cite{lowNonparametricConfidenceIntervals1997, juditskyNonparametricConfidenceSet2003, caiAdaptiveConfidenceBalls2006,robinsAdaptiveNonparametricConfidence2006, hoffmannAdaptiveInferenceConfidence2011, bullAdaptiveConfidenceSets2013, carpentierHonestAdaptiveConfidence2013}. In $L_2$ loss, adaptive honest confidence sets exist only if the regularity parameters of interest lie in some `small' interval. More troublesome is the case of pointwise or $L_\infty$ loss, where no such procedures exist. This contrasts starkly with the situation of adaptive estimation, where (perhaps at the cost of a logarithmic factor) it is possible to construct estimators which adapt to any regularity parameter (\cite{lepskiiProblemAdaptiveEstimation1991, donohoDensityEstimationWavelet1996}). Informally, these negative results come from the fact that, in $L_2$ loss, a related testing problem is easier (admits a faster convergence rate) than estimation, whereas for $L_\infty$ loss, the testing and estimation problems are equally difficult (\cite{hoffmannAdaptiveInferenceConfidence2011, bullAdaptiveConfidenceSets2013}). This distinction highlights how the existence of adaptive honest confidence sets depends on the geometry induced by the loss function (see \cite[Chapter 8]{gineMathematicalFoundationsInfiniteDimensional2015} for an overview of these results).
Arising from the ideas of Optimal Transport \cite{monge1781memoire, Kant42}, Wasserstein distances $W_p,\ p\geq1$, between probability measures have recently been studied in a wide array of fields such as optimization, machine learning, and statistics. For $p\geq1$, the $p-$Wasserstein distance between $\mu$ and $\nu$, probability measures on a metric space $\left(\mathcal{X}, d\right)$, is defined as \[ W_p(\nu,\mu) \coloneqq \underset{\pi\in\Pi\left(\nu,\mu\right)}{\inf}\left(\int_{\mathcal{X}\times\mathcal{X}} d(x,y)^p d\pi(x,y)\right)^{1/p},\] with the infimum ranging over the set $\Pi\left(\nu,\mu\right)$ of measures on $\mathcal{X}\times\mathcal{X}$ with given marginals $\nu$ and $\mu$. It quantifies the minimal cost, as measured by the metric $d$, to morph the distribution $\mu$ into $\nu$. For measures $P_f$ and $P_g$ dominated by a common measure and with densities $f$ and $g$, this also entails a distance between those densities, with $W_p(f,g)\coloneqq W_p\left(P_f,P_g\right)$. Not only do these distances possess desirable theoretical properties (\cite{villaniOptimalTransportOld2009}), as they take into account the geometry of the underlying sample space, but recent numerical developments (\cite{COTFNT}) have led to increased use in practical applications. They therefore now play a prominent role in statistics (see, for example, the review \cite{PanaretosZemel}). The convergence of the empirical distribution in $W_p$-distance is a well-studied problem (it stretches back to \cite{Dud69}, with definitive results on limit theorems for the $\mathbb{R}$ sample space in \cite{MR1698999}; for state-of-the-art results, see \cite{fournierRateConvergenceWasserstein2015, weedSharpAsymptoticFinitesample2019}). In dimensions $d\geq3$, the convergence rate of the empirical distribution (without further structural assumptions) is $n^{-1/d}$, demonstrating that convergence in $W_p$ suffers from the curse of dimensionality. 
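As a quick numerical illustration (not part of the paper's formal development), in dimension $d=1$ the $W_1$ distance admits a closed form via quantile functions, which \texttt{scipy.stats.wasserstein\_distance} implements; the distributions and sample sizes below are arbitrary choices:

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)

# W_1 between two one-dimensional empirical distributions; in 1D,
# W_p reduces to an L_p distance between quantile functions, and
# scipy implements the p = 1 case.
x = rng.normal(loc=0.0, scale=1.0, size=5000)
y = rng.normal(loc=0.5, scale=1.0, size=5000)
d_shift = wasserstein_distance(x, y)  # close to the mean shift 0.5

# The empirical measure of an n-sample converges to the truth in W_1;
# here the true N(0, 1) is proxied by a much larger reference sample.
ref = rng.normal(size=200_000)
d_small = wasserstein_distance(rng.normal(size=100), ref)
d_large = wasserstein_distance(rng.normal(size=10_000), ref)
print(d_shift, d_small, d_large)
```

In $d=1$ the empirical rate is close to parametric; the curse of dimensionality discussed above only bites for $d\geq3$, where no such closed form is available.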
When measures have densities, as is the case in density estimation, \cite{weedEstimationSmoothDensities2019} prove that, for certain classes of densities, $W_p$ compares with Besov norms of smoothness $-1$, a classical result for the $W_1$ distance due to the Kantorovich-Rubinstein duality formula. The convergence rates they obtain for regular densities using this comparison result, which lie closer to the parametric rate $n^{-1/2}$, highlight the importance of regularity of the signal in high-dimensional settings: to some extent, the curse of dimensionality can be mitigated by smoothness. In addition, these rates are faster than the standard $s$-smooth nonparametric convergence rate $n^{-\frac{s}{2s+d}}$ for $L_p$ loss, $1\leq p<\infty$, reflecting the fact that Wasserstein distances are weaker than $L_p$ distances. In this paper, we obtain similar quantitative improvements for testing separation rates of nonparametric statistical hypotheses. From this, on the bounded sample space $\mathbb{T}^d$ we deduce new qualitative phenomena regarding the existence and non-existence of adaptive honest confidence sets when using the loss functions $W_p$, $1\leq p\leq 2$. Surprisingly, in dimensions $d\leq 4$ we construct confidence sets that can adapt to \emph{any} set of regularities. This contrasts significantly with the fundamental limitations of adaptive confidence sets in $L_p$. In higher dimensions $d>4$, adaptation is still possible for regularities belonging to a certain interval, which is wider than in the $L_p$ case. The reason for this phenomenon is that while both the testing and estimation rates are faster than for $L_p$, the testing rate accelerates more, leaving `more space' for adaptation to occur than in the analogous problem for $L_p$ loss. As for densities on an unbounded sample space such as $\mathbb{R}^d$, the same phenomenon occurs, though we currently only have results for the $W_1$ distance. The paper is organized as follows. 
Section \ref{Section: Main Results} formalizes our problem on the potential existence of adaptive honest confidence sets, and states our main results. The construction of such sets, whenever possible, and non-existence results are presented in Section \ref{Section: Existence Results} for the bounded sample space $\mathbb{T}^d$ and Section \ref{Section: Rd} for the unbounded sample space $\mathbb{R}^d$. Proofs are deferred to Appendices \ref{Section: additional torus proofs} and \ref{Section: Rd proofs}. \section{Main Results}\label{Section: Main Results} \subsection{Setting and Definitions} Initially, we assume that $f_0$ is a density on the $d$-dimensional torus, $\mathbb{T}^d$, which may be identified with $(0,1]^d$. Our results also apply to the case of the unit cube $[0,1]^d$ (and hence any bounded rectangular subset of $\mathbb{R}^d$), which is the focus of \cite{weedEstimationSmoothDensities2019}; see Section \ref{Subsection: unit cube} below. For our loss function, we take the distance $W_2$; as described in Remark \ref{Remark: choice of W_2}, this distance dominates $W_p$ for $1\leq p<2$, in particular the important case of $W_1$. Later, we consider the situation where $f_0$ is a density on the whole of $\mathbb{R}^d$; while a study for $W_p,p>1$ is beyond the scope of the present work, we obtain some definitive results for the loss function $W_1$ in Section \ref{Section: Rd}. \subsubsection{Parameter Spaces} Here we define the classes of probability densities on $\mathbb{T}^d$ we consider; definitions for $\mathbb{R}^d$ are similar but deferred to Section \ref{Section: Rd}. Let $ \left\{ \phi \equiv 1, \psi_{lk} : l\geq0, 0\leq k<2^{ld}\right\}$ be an $S$-regular periodised Daubechies wavelet basis of $L_2(\mathbb{T}^d)$; see Appendix \ref{Section: Wavelet Appendix} for further details. We denote by $\langle f,g \rangle = \int_{\mathbb{T}^d} fg$ the usual inner product on $L_2$. 
For any $f\in L_p(\mathbb{T}^d), 1\leq p<\infty$, the wavelet expansion \begin{equation}\label{Eq: wavelet expansion} f = \langle f, 1 \rangle + \sum_{l\geq0}\sum_{k=0}^{2^{ld}-1} \langle f,\psi_{lk}\rangle \psi_{lk} \end{equation} converges in $L_p$, and if $f$ is continuous then the expansion converges uniformly on $\mathbb{T}^d$. We write $K_j(f)$ for the projection of $f$ onto the first $j$ resolution levels, i.e. \begin{equation}\label{Eq: j level projection definition} K_j(f) = \langle f,1 \rangle + \sum_{l<j}\sum_{k=0}^{2^{ld}-1}\langle f,\psi_{lk} \rangle \psi_{lk}. \end{equation} To define the parameter classes, we use the scale of \emph{Besov spaces,} $B^s_{pq},1\leq p,q\leq\infty, s\geq0$ as defined in Appendix \ref{Section: Wavelet Appendix}. The index $s$ should be interpreted as a smoothness or regularity parameter. Using the definition of the Besov norm (\ref{Eq: Besov norm definition}) and the embedding $\ell_q\subset\ell_{\infty}$, for $f\in B^{s}_{pq}(\mathbb{T}^d)$ we have that \begin{equation}\label{Eq: Besov wavelet coefficient bound} \left\| \langle f,\psi_{l\cdot} \rangle\right\|_p \leq \|f\|_{B^s_{pq}}2^{-l\left(s + \frac{d}{2} - \frac{d}{p}\right)}. \end{equation} Thus $f\in B^s_{pq}$ if its wavelet coefficients decay sufficiently fast as $l$ grows, as measured by $s$. The use of subsets of Besov spaces as parameter spaces in nonparametric statistics is well-established, and the scale contains several of the regularity classes usually considered in such settings: for example, the Sobolev spaces ($H^s = B^s_{22}$) and the H\"{o}lder spaces (for $s\not\in\mathbb{N}, C^s=B^s_{\infty\infty}$, and for $s\in\mathbb{N},C^s\subsetneq B^s_{\infty\infty}$). See \cite[Section 4.3]{gineMathematicalFoundationsInfiniteDimensional2015} for further discussion on this subject. In standard loss functions such as $L_2$ or $L_{\infty}$, it is typically assumed that $f$ lies in some norm-ball in $B^s_{pq}$, for some choice of $s,p,q$. 
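To get a feel for the coefficient decay bound (\ref{Eq: Besov wavelet coefficient bound}), one can inspect the detail coefficients of a smooth function numerically. The sketch below uses PyWavelets on equispaced samples as a discrete surrogate for the true inner products $\langle f,\psi_{lk}\rangle$ (the wavelet, grid size and test function are illustrative choices, and the discrete coefficients differ from the continuous ones by a normalisation):

```python
import numpy as np
import pywt

# Equispaced samples of a smooth 1-periodic function on [0, 1).
x = np.linspace(0.0, 1.0, 1024, endpoint=False)
f = 1.0 + 0.5 * np.sin(2 * np.pi * x)

# Multilevel decomposition with a Daubechies wavelet; the
# 'periodization' mode mimics the periodised basis on the torus.
coeffs = pywt.wavedec(f, 'db4', mode='periodization', level=6)

# coeffs[0] is the coarse approximation; coeffs[1], ..., coeffs[-1]
# are the detail coefficients from the coarsest to the finest level.
sup_detail = [float(np.max(np.abs(c))) for c in coeffs[1:]]
print(sup_detail)  # decays across levels for a smooth f
```

The rapid decay of the level-wise suprema is the discrete counterpart of the bound $2^{-l(s+d/2-d/p)}$ with $d=1$ and large $s$.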
Here we slightly restrict the function class, insisting that the densities under consideration are bounded and bounded away from 0. In particular, the lower bound condition facilitates the faster minimax estimation rates of Proposition \ref{Prop: minimax estimation rates}; it is shown in \cite{weedEstimationSmoothDensities2019} that removing this condition results in slower rates for most parameter configurations. \begin{defn} Let $1\leq p,q\leq \infty$, $s\geq0$, $B\geq 1$, $M\geq 1\geq m>0$. Define the function class \begin{equation} \mathcal{F}_{s,p,q}(B;m,M) = \left\{ f\in B^s_{pq}: \int_{\mathbb{T}^d} f=1,\quad \|f\|_{B^s_{pq}}\leq B,\quad m\leq f\leq M \,\mathrm{a.e.} \right\}; \end{equation} Note that we always have $1\in\mathcal{F}_{s,p,q}(B;m,M)$, and so the class is non-empty. Henceforth we fix $p=2$ and consider $q,B,m,M$ to be given. Define $$ \mathcal{F}(s) := \mathcal{F}_{s,2,q}(B;m,M). $$ \end{defn} For large $s$ and smaller values of $B\geq1$, the condition $f\leq M$ is superfluous. However, the imposition of the uniform lower bound $f\geq m>0$ means that $\mathcal{F}(s)$ is a strict subset of the more typical parameter space $\{f\in B^s_{2q}:f\geq0,\int f = 1,\|f\|_{B^s_{2q}}\leq B\}$. Also, it is clear from the definition (\ref{Eq: Besov norm definition}) that the continuous embedding $B^s_{pq}\subset B^r_{pq}$ holds with operator norm 1, so $\mathcal{F}(s) \subset \mathcal{F}(r)$ for $r\leq s$. \subsubsection{Notation} For a probability density $f$, let $P_f$ and $E_f$ denote respectively the probability and expectation when $X_1,\ldots, X_n \overset{\mathrm{i.i.d.}}{\sim}f$. For real numbers $a,b$, we write $a\wedge b=\min(a,b)$ and $a\vee b = \max(a,b)$. Given sequences $(a_n)$ and $(b_n)$, we write $a_n\lesssim b_n$ if there exists a constant $C>0$ that is independent of $n$ such that for all $n$, $a_n \leq Cb_n$; we also write $a_n\simeq b_n$ if $a_n \lesssim b_n$ and $b_n \lesssim a_n$. 
Given any subset $A$ of a metric space $(\mathcal{A},d)$, we write $|A|_d$ for the $d$-diameter of $A$, defined by $$ |A|_d := \sup_{x,y\in A} d(x,y).$$ Given a subset $B\subset \mathcal{A}$ and a point $a\in \mathcal{A}$, we define the distance of $a$ to $B$ as $$ d(a,B) := \inf_{b\in B}d(a,b).$$ \subsection{Description of the Problem}\label{subsection: problem description} Suppose initially that $f\in\mathcal{F}(r)$ for some given $r\geq0$. We wish to construct a confidence set $C_n$ for the unknown density $f$; informally, we would like $C_n$ to contain $f$ with (some chosen) high probability. Specifically, given $\alpha\in(0,1)$, we require any confidence set $C_n = C_n(\alpha,X_1,\ldots,X_n)$ to have \emph{honest coverage} at level $1-\alpha$ over the class $\mathcal{F}(r)$, that is, there exists $n_0\in\mathbb{N}$ such that for all $n\geq n_0$, \begin{equation}\label{Eq: honesty definition} \inf_{f\in\mathcal{F}(r)} P_f(f\in C_n) \geq 1-\alpha. \end{equation} The `honesty' refers to the uniformity over $\mathcal{F}(r)$. We remark that in the minimax paradigm, one must necessarily insist on honesty, since the true density $f_0$ is unknown: `dishonest' adaptive confidence sets exist (see \cite[Corollary 8.3.10]{gineMathematicalFoundationsInfiniteDimensional2015}), but the index $n_0$ from which coverage is valid depends on the unknown $f$, so such procedures produce questionable guarantees in practice. It is clear that the smaller the set $C_n$, the more informative it is; otherwise one could just take $C_n$ to be the whole parameter space $\mathcal{F}(r)$. Thus we desire the $W_2$-diameter of our set $C_n$ to shrink as quickly as possible in $n$.
Suppose $C_n$ satisfies the honest coverage condition (\ref{Eq: honesty definition}) for some $\alpha\in(0,1)$, and let $r_n$ be a positive sequence such that for some $\beta>\alpha$ and every $n\geq n_0$, we have \begin{equation}\label{Eq: O_P minimax estimation rate} \inf_{\tilde{f}_n}\sup_{f\in\mathcal{F}(r)}P_f\big(W_2(\tilde{f}_n,f)\geq r_n\big)\geq \beta. \end{equation} Here, the infimum is taken over all \emph{estimators} (i.e. measurable functions) $\tilde{f}_n = \tilde{f}_n(X_1,\ldots,X_n)$. Then by Lemma 2 in \cite{robinsAdaptiveNonparametricConfidence2006}, the $W_2$-diameter of $C_n$ satisfies, for $n\geq n_0$, $$ \sup_{f\in\mathcal{F}(r)}P_f\left(|C_n|_{W_2}\geq r_n\right) \geq \beta - \alpha;$$ in particular, its diameter cannot shrink faster than $r_n$ with high probability. We define the \emph{minimax estimation rate} (in probability) over $\mathcal{F}(s)$, denoted $r_n^*(s)$, to be the `slowest' sequence (i.e. the largest such sequence up to a multiplicative prefactor) $r_n$ such that (\ref{Eq: O_P minimax estimation rate}) is satisfied for some $\beta>0$ and some $n_0\geq1$. Usually this rate depends on the smoothness parameter $s$. \begin{remark}\label{Remark: expectation vs O_P rates} The term `minimax estimation rate' is often reserved for any sequence $\bar{r}_n$ such that $$ \inf_{\tilde{f}_n}\sup_{f\in\mathcal{F}(r)} E_f W_2(\tilde{f}_n,f) \simeq \bar{r}_n. $$ By Markov's inequality, we have that $r_n^* \lesssim \bar{r}_n$. In fact, as shown by Proposition \ref{Prop: minimax estimation rates} below, in this problem the rates $r_n^*$ and $\bar{r}_n$ coincide (possibly up to a logarithmic factor when $d=2$). \end{remark} In general, it is unrealistic to assume that the regularity $r$ is known. Thus we find ourselves in an adaptation problem, where we wish to construct procedures that do not depend on the unknown smoothness $r$, but which result in (near-)optimal performance for a range of values of $r$. 
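The Markov-inequality step in Remark \ref{Remark: expectation vs O_P rates} can be spelled out as a short derivation (a sketch with a generic constant $C$): if $\tilde{f}_n$ attains $\sup_{f\in\mathcal{F}(r)}E_f W_2(\tilde{f}_n,f)\leq C\bar{r}_n$, then for any $\lambda>0$,

```latex
\sup_{f\in\mathcal{F}(r)} P_f\big(W_2(\tilde{f}_n, f) \geq \lambda \bar{r}_n\big)
  \leq \frac{\sup_{f\in\mathcal{F}(r)} E_f\, W_2(\tilde{f}_n, f)}{\lambda \bar{r}_n}
  \leq \frac{C}{\lambda},
```

so taking $\lambda > C/\beta$ shows that (\ref{Eq: O_P minimax estimation rate}) fails for the sequence $\lambda\bar{r}_n$; hence any $r_n$ satisfying (\ref{Eq: O_P minimax estimation rate}) obeys $r_n\lesssim\bar{r}_n$, i.e. $r_n^*\lesssim\bar{r}_n$.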
In order to highlight the main ideas, let us consider the two class adaptation problem, where for some fixed $s>r\geq0$ we consider the model $\mathcal{F}(r)$, but also seek optimal performance over the smoother subclass $\mathcal{F}(s) \subset \mathcal{F}(r)$. We discuss after Theorem \ref{Thm: d > 2 confidence sets} how one might construct confidence sets adapting to a continuous window of smoothnesses $[r,R]$ or even all $r\geq0$ simultaneously. \begin{defn}\label{Def: optimal adaptive confidence set} We say that $C_n = C_n(\alpha,\alpha', X_1,\ldots,X_n)$ is a \emph{near-optimal adaptive $W_2$ confidence set over $\mathcal{F}(s)\cup\mathcal{F}(r)$}, $s>r$, if it satisfies the following properties, for given $\alpha,\alpha'\in(0,1)$: \begin{enumerate}[(i)] \item \textbf{Honest Coverage:} for all $n$ sufficiently large, \begin{equation}\label{Eq: acs def, coverage} \inf_{f\in\mathcal{F}(r)} P_f(f\in C_n) \geq 1-\alpha; \end{equation} \item \textbf{Diameter Shrinkage:} there exists a constant $K = K(\alpha')>0$ such that \begin{equation}\label{Eq: acs def, slow shrinkage} \sup_{f\in\mathcal{F}(r)} P_f(|C_n|_{W_2} > KR_n(r)) \leq \alpha' \end{equation} and \begin{equation}\label{Eq: acs def, fast shrinkage} \sup_{f\in\mathcal{F}(s)} P_f(|C_n|_{W_2} > KR_n(s)) \leq \alpha' , \end{equation} for $n$ large enough, where the rate sequences $R_n(r)$ and $R_n(s)$ satisfy $$ R_n(r) \leq a_nr_n^*(r) \quad \text{and} \quad R_n(s) \leq a_nr_n^*(s), $$ for $r_n^*(r)$ and $r_n^*(s)$ the minimax rates of estimation over $\mathcal{F}(r)$ and $\mathcal{F}(s)$ respectively and $a_n$ some power of $\log{n}$. \end{enumerate} \end{defn} Typically, for optimal adaptive confidence sets one insists that the rates $R_n(r),R_n(s)$ in (\ref{Eq: acs def, slow shrinkage}) and (\ref{Eq: acs def, fast shrinkage}) are equal up to constants to the minimax estimation rates $r_n^*(r), r_n^*(s)$. 
Our definition of `near-optimal' allows for $R_n(t)$ to equal $r_n^*(t), t=r,s$, up to a logarithmic factor in $n$, and is thus a slight relaxation. Admitting this relaxation does not alter the (existence and) non-existence results of \cite{bullAdaptiveConfidenceSets2013}, \cite{carpentierHonestAdaptiveConfidence2013}, \cite{hoffmannAdaptiveInferenceConfidence2011}, \cite{gineMathematicalFoundationsInfiniteDimensional2015}, since these results are due to a polynomial discrepancy between minimax estimation and testing rates; see Section \ref{Subsection: nonexistence} below. We only consider the problem of adaptation in the smoothness parameter and do not address the question of adaptation to other parameters in the definition of the class $\mathcal{F}(s)$, such as the Besov norm bound $B$. See Remark \ref{Remark: adapting over other parameters} below for a discussion of this issue. \subsection{Adaptive $W_2$ Confidence Sets on $\mathbb{T}^d$} Our first theorem exhaustively classifies the parameter configurations for which adaptive honest confidence sets exist for $W_2$ loss; in the cases where such confidence sets do exist, an explicit construction is given in Theorem \ref{Thm: d > 2 confidence sets} below. \begin{theorem}\label{Thm: existence and nonexistence of conf sets} Fix $1\leq q\leq\infty$, $B\geq1$, $M\geq1\geq m>0$. Consider the two class adaptation problem for confidence sets as defined by (\ref{Eq: acs def, coverage})-(\ref{Eq: acs def, fast shrinkage}). \begin{enumerate}[(i)] \item Let $d\leq4$ and $s>r\geq0$. Then for any $\alpha,\alpha'>0$, there exists a near-optimal adaptive $W_2$ confidence set. \item Let $d>4$ and $0\leq r<s\leq \frac{2d-4}{d-4}r+\frac{d}{d-4}$. Then for any $\alpha,\alpha'>0$, there exists a near-optimal adaptive $W_2$ confidence set. \item Let $d>4$ and $0\leq r<s$ with $s> \frac{2d-4}{d-4}r+\frac{d}{d-4}$. Then for any $\alpha,\alpha'>0$ such that $2\alpha + \alpha'<1$, no near-optimal adaptive $W_2$ confidence set exists. 
\end{enumerate} \end{theorem} \begin{remark}\label{Remark: choice of W_2} We have focussed on the particular choice of $W_2$; by Jensen's inequality, this distance dominates $W_p$ for $1\leq p<2$. Since the minimax estimation rates in these problems are independent of $p$ (c.f. Proposition \ref{Prop: minimax estimation rates}), this means that the above existence results hold for $W_p,1\leq p\leq 2$, in particular for the important case of $W_1$. Moreover, in the case of $W_1$, one may remove the lower bound condition in the definition of $\mathcal{F}(s)$; see Remark \ref{Remark: classes not bounded below for W_1} below. \end{remark} Theorem \ref{Thm: existence and nonexistence of conf sets} says that in low dimensions, $d\leq 4$, there exists a confidence set which adapts optimally in $W_2$-diameter to \emph{any} two smoothnesses $s>r\geq0$. As the construction does not depend on $s$, in fact adaptation occurs simultaneously for all $s\geq r$ (strictly speaking, $r\leq s\leq S$ where $S$ is the regularity of the wavelet basis used), where $r$ is a chosen `baseline' smoothness. Contrast this to the case of $L_p$ loss, $2\leq p\leq \infty$: for $p<\infty$, in any dimension, there exists a (near-)optimal adaptive confidence set if and only if $s\leq \frac{p}{p-1}r$ (\cite{bullAdaptiveConfidenceSets2013}, \cite{carpentierHonestAdaptiveConfidence2013}); for $L_{\infty}$ loss, adaptive confidence sets do not exist for any choice of $s>r\geq0$ (\cite{lowNonparametricConfidenceIntervals1997}, \cite{hoffmannAdaptiveInferenceConfidence2011}). See \cite[Section 8.3]{gineMathematicalFoundationsInfiniteDimensional2015} for a complete account of the $L_2$ and $L_{\infty}$ theory. In higher dimensions $d>4$, Theorem \ref{Thm: existence and nonexistence of conf sets} gives a `window' of smoothnesses for which adaptation occurs, in a similar vein to the case of $L_p,p<\infty$. 
However, for the $W_2$ loss the window is significantly wider; moreover, regardless of how small we choose $r\geq0$, this window has width at least $\frac{d}{d-4}$, whereas for $L_p, 2\leq p<\infty$, the window is of width $\frac{r}{p-1}\leq r$, which will be very narrow for small values of $r$. These results are related to the fact that $W_2$ is a weaker loss function than $L_p$: specifically, Proposition \ref{Prop: W-Besov comparison} and (\ref{Eq: W_2 - log Sobolev comparison}) show that on the class $\mathcal{F}(s)$, $W_2$ is comparable to a Sobolev (or Besov) norm of smoothness -1. In very low dimensions $d=1,2$, the estimation rate is independent of the smoothness parameter $s$, meaning that any confidence set satisfying (\ref{Eq: acs def, slow shrinkage}) automatically satisfies the faster shrinkage condition (\ref{Eq: acs def, fast shrinkage}) (with a possibly enlarged constant $K$). In low dimensions $d=3,4$, one finds a very fast minimax testing separation rate, which can be as fast as the parametric rate of estimation $n^{-1/2}$ (this is implied by the above existence results and Lemma \ref{Lemma: confidence set impossibility} below). Even in higher dimensions, there is a substantial acceleration in the testing separation rate as compared to $L_2$ loss. Meanwhile, although there is also some acceleration in the estimation rates, the effect is not so pronounced. This explains the wider window of adaptation seen in Theorem \ref{Thm: existence and nonexistence of conf sets} for $W_2$ loss, as compared to $L_p$ loss: the greater discrepancy between testing and estimation rates gives more room for adaptation to take place. Theorem \ref{Thm: existence and nonexistence of conf sets} is proved in Section \ref{Section: Existence Results}; we outline the arguments now. 
For the existence result, we use the method of constructing confidence sets via risk estimation as in \cite{juditskyNonparametricConfidenceSet2003}, \cite{caiAdaptiveConfidenceBalls2006}, \cite{robinsAdaptiveNonparametricConfidence2006}; see \cite[Section 6.4]{gineMathematicalFoundationsInfiniteDimensional2015} for a concise summary of these ideas. These methods require the loss function under consideration to be a Hilbert space norm. Accordingly, we upper bound $W_2$ by a suitable Sobolev-type norm for which one can perform risk estimation with fast convergence rates; moreover, the estimation rates for this dominating norm differ from those for $W_2$ by only a logarithmic factor. In particular, the notions of near-optimal adaptive confidence sets for these two loss functions are equivalent. The non-existence result is obtained using a testing argument as in \cite{hoffmannAdaptiveInferenceConfidence2011}, \cite{bullAdaptiveConfidenceSets2013} and others, together with a lower bound for the minimax separation rate in a related testing problem. Moreover, the precise characterisation of the separation rate identifies a certain small subset of $\mathcal{F}(r)$ consisting of `problematic' densities which, once removed, permit the existence of confidence sets (with honesty relative to a smaller set of densities), as in the previous two references. We discuss the existence of these more general confidence sets after Theorem \ref{Thm: testing rate}. These theoretical results and constructions extend more generally to the study of adaptive honest confidence sets with negative Sobolev norm distances, and we discuss them in Section \ref{subsection: Sobolev norm theory extension}. For $p>2$, \cite{carpentierHonestAdaptiveConfidence2013} develops a construction of adaptive $L_p-$confidence sets whose radii are selected via testing.
Though an extension of these ideas to $W_p$-confidence sets should be possible, we do not pursue it here as the methodology greatly differs from the one used in the present paper. \subsection{Adaptive $W_1$ Confidence Sets on $\mathbb{R}^d$} The case of densities on $\mathbb{R}^d$ is also of great interest; there are several situations in which it is unrealistic to assume compact support of the density $f$. Accordingly, let $X_1,\ldots,X_n$ be an i.i.d. sample drawn from some unknown density $f$ on $\mathbb{R}^d$. We take the Wasserstein-1 distance $W_1$ to be our loss function. We generalise our methods from the case of $\mathbb{T}^d$ to produce adaptive confidence sets for $f$ which adapt over similar function classes $\mathcal{G}(s)$, defined in \eqref{Eq: function class definition Rd} below and involving a constant $L$ which uniformly bounds the exponential moments of the densities in $\mathcal{G}(s)$. The discussion following Theorem \ref{Thm: existence and nonexistence of conf sets} is relevant in this context as well: in particular, since the confidence sets constructed in cases (i) and (ii) do not depend on $s$, adaptation in fact takes place for the full range of possible values of $s$ (i.e. $s\geq r$ when $d\leq 4$ and $s$ in some given window when $d>4$). \begin{theorem}\label{Thm: existence and nonexistence of conf sets - Rd} Fix $1\leq q\leq\infty$, $B\geq1$, $M\geq1\geq m>0$. Consider the two class adaptation problem for confidence sets defined by (\ref{Eq: acs def, coverage})-(\ref{Eq: acs def, fast shrinkage}), with the function classes $\mathcal{F}$ replaced by $\mathcal{G}$ and $W_1$ in place of $W_2$. \begin{enumerate}[(i)] \item Let $d\leq4$ and $s>r\geq0$. Then for any $\alpha,\alpha'>0$, there exists a near-optimal adaptive $W_1$ confidence set. \item Let $d>4$ and $0\leq r<s\leq \frac{2d-4}{d-4}r+\frac{d}{d-4}$. Then for any $\alpha,\alpha'>0$, there exists a near-optimal adaptive $W_1$ confidence set.
\item\label{item: last case} Let $d>4$, $L$ be large enough and $0\leq r<s$ with $s> \frac{2d-4}{d-4}r+\frac{d}{d-4}$. Then for any $\alpha,\alpha'>0$ such that $2\alpha + \alpha'<1$, no near-optimal adaptive $W_1$ confidence set exists. \end{enumerate} \end{theorem} The bound $L$ on exponential moments in \eqref{Eq: function class definition Rd} is a technical condition which allows us to construct adaptive estimators and confidence sets via the method of risk minimization (see Section \ref{Section: Rd}). We are naturally interested in the existence of confidence sets for large $L$, i.e. on larger classes of densities. Moreover, small values of $L$ may lead to empty classes (see the discussion after Definition \ref{def: class on Rd} below) for which the theory of confidence sets is superfluous. \subsection{Extension to negative Sobolev norm distances}\label{subsection: Sobolev norm theory extension} To better understand the phenomena in Theorems \ref{Thm: existence and nonexistence of conf sets} and \ref{Thm: existence and nonexistence of conf sets - Rd}, it is illuminating to consider negative order Sobolev norm loss, $H^{-t}=B^{-t}_{22},t>0$ (see Appendix \ref{Section: Wavelet Appendix} for definitions), since the $W_2$ distance is dominated by such a norm (see (\ref{Eq: W_2 - log Sobolev comparison}) below). One finds that the minimax estimation rate for $t\geq d/2$ is (up to a log factor) $n^{-1/2}$, so no meaningful adaptation is required and one constructs a confidence set which `adapts' over all smoothnesses as in Proposition \ref{Prop: confidence set, d=1,2} below. When $t<d/2$, computations analogous to those in Section \ref{Section: Existence Results} show that the gap between testing and estimation rates is wider for larger $t$, enabling adaptation over a larger window of regularities (see Remark \ref{Remark: weak Sobolev norms} below).
Here, one finds a continuous transition as $t$ increases from 0 (which is the $L_2$ case) to $d/2$, at which point confidence sets can adapt to any two smoothnesses. However, the specific geometry of the parameter space induced by the loss function is crucial, rather than how weak the loss function is \emph{per se}: if instead we consider $B^{-t}_{\infty\infty}$ loss, when $t<d/2$ the minimax estimation and testing rates can be shown to coincide; meanwhile, the estimation rate is independent of the smoothness parameter when $t\geq d/2$. So in the case of $B^{-t}_{\infty\infty}$ loss, when $t<d/2$ no adaptive confidence sets exist for \emph{any} two smoothnesses by Lemma \ref{Lemma: confidence set impossibility} below, but for $t\geq d/2$ they trivially exist. Whenever they exist, the construction of confidence sets in Section \ref{Section: Existence Results} below extends easily to the case of negative order Sobolev norms $H^{-t}, t>0$, and other Besov norms using norm embeddings as in \cite[Section 4.3]{gineMathematicalFoundationsInfiniteDimensional2015}; see Remark \ref{Remark: weak Sobolev norms} below. \section{Extension of the Theory to $\mathbb{R}^d$}\label{Section: Rd} Having provided a fairly complete resolution of the problem of adaptive $W_2$ confidence sets when the sample space is $\mathbb{T}^d$, we extend our results to the case of the unbounded sample space $\mathbb{R}^d$ with $W_1$ loss. The key tool is the Kantorovich-Rubinstein duality formula (\cite{kantorovich1958space}) \begin{equation}\label{Eq: K-R duality, 2} W_1(f,g) = \sup_{h\in\mathrm{Lip}_1(\mathbb{R}^d)} \int_{\mathbb{R}^d} h(x)(f(x) - g(x))\,\,\mathrm{d} x, \end{equation} where $\mathrm{Lip}_1(\mathbb{R}^d)$ is the set of 1-Lipschitz functions on $\mathbb{R}^d$. 
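The smoothness window in Theorem \ref{Thm: existence and nonexistence of conf sets - Rd} can be checked by direct arithmetic (a sketch of our own, not part of the formal development): the boundary $s=\frac{2d-4}{d-4}r+\frac{d}{d-4}$ is precisely where the testing rate $n^{-\frac{r+1}{2r+d/2}}$ of Theorem \ref{Thm: testing rate Rd} matches the estimation rate $n^{-\frac{s+1}{2s+d}}$.

```latex
% Heuristically, adaptation over \mathcal{G}(r)\cup\mathcal{G}(s) is possible
% when the testing exponent dominates the target estimation exponent:
\frac{r+1}{2r+d/2}\;\geq\;\frac{s+1}{2s+d}
\;\Longleftrightarrow\;(r+1)(2s+d)\;\geq\;(s+1)\Big(2r+\frac{d}{2}\Big)
\;\Longleftrightarrow\;s\Big(2-\frac{d}{2}\Big)\;\geq\;r(2-d)-\frac{d}{2}.
% For d \leq 4 the last inequality holds for every s > r \geq 0, matching
% case (i); for d > 4 the coefficient 2 - d/2 is negative, and dividing by
% it flips the inequality:
s\;\leq\;\frac{r(2-d)-d/2}{2-d/2}\;=\;\frac{2d-4}{d-4}\,r+\frac{d}{d-4}
\qquad(d>4),
% which is exactly the window separating cases (ii) and (iii).
```

This is only a rate-level heuristic; the actual constructions and lower bounds are carried out in Section \ref{Section: Rd}.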
We also briefly discuss what happens when using $W_p$ loss for $p>1$ in Section \ref{Subsection: W_p,p>1 on Rd}; the unbounded sample space introduces complications which preclude a direct generalisation of the ideas from Section \ref{Section: Existence Results}. In this section, it is assumed that we observe $X_1,\ldots,X_n\overset{\mathrm{i.i.d.}}{\sim}f_0$ for some density $f_0$ on $\mathbb{R}^d$, and we wish to perform inference on $f_0$ using $W_1$ as the loss function. \subsection{Parameter Spaces} We use an $S$-regular tensor product wavelet basis of $L^2(\mathbb{R}^d)$ of the form $$ \left\{\phi_k,\psi_{lk}: k\in\mathbb{Z}^d,l\geq0\right\}$$ as introduced in Appendix \ref{Section: Wavelet Appendix} (we index the $\psi_{lk}$ using only $k,l$ by a slight abuse of notation). We write $K_j(f)$ for the projection of $f$ onto the first $j$ resolution layers, as in (\ref{Eq: j level projection definition}). Besov norms on $\mathbb{R}^d$, also defined in Appendix \ref{Section: Wavelet Appendix}, are defined analogously to those on $\mathbb{T}^d$, and the relation (\ref{Eq: Besov wavelet coefficient bound}) holds. Our goal is to construct an adaptive confidence set for the true density $f_0$ using risk estimation, where the adaptation occurs with respect to the smoothness parameter $s$. We shall consider functions in $B^s_{2q}$. Unlike our previous classes $\mathcal{F}(s)$ on $\mathbb{T}^d$, we need not assume that our densities are bounded away from zero, or something analogous such as sufficiently slow decay in the tails. However, in order to deal with the unboundedness of the sample space $\mathbb{R}^d$, we must impose a moment condition. For $\alpha,\beta>0$, define the $\alpha,\beta$-exponential moment of a density $f$ as \begin{equation}\label{Eq: exponential moment definition} \mathcal{E}_{\alpha,\beta}(f) := \int_{\mathbb{R}^d} \exp{(\beta\|x\|^{\alpha})}f(x)\,\,\mathrm{d} x = E_f\left(e^{\beta \|X\|^{\alpha}}\right). 
\end{equation} \begin{defn}\label{def: class on Rd} Let $1\leq p,q\leq\infty,s\geq0,B\geq1, M>0,\alpha,\beta>0$ and $L\geq1$. Define the function class \begin{equation}\label{Eq: function class definition Rd} \mathcal{G}_{s,p,q}(B,M;\alpha,\beta,L) = \left\{f\in B^{s}_{pq}(\mathbb{R}^d): \int_{\mathbb{R}^d}f =1,\quad \|f\|_{B^s_{pq}}\leq B,\quad 0\leq f\leq M \,\text{a.e.},\quad \mathcal{E}_{\alpha,\beta}(f) \leq L \right\}. \end{equation} Henceforth, we fix $p=2$ and consider $q,B,M,\alpha,\beta,L$ to be given. Define $$ \mathcal{G}(s) := \mathcal{G}_{s,2,q}(B,M;\alpha,\beta,L). $$ \end{defn} Observe that for $M$ close to 0 and $L$ close to 1, the class $\mathcal{G}(s)$ is empty. We therefore assume in the sequel that $L$ is sufficiently large (depending on $M,B$) for $\mathcal{G}(s)$ to be non-empty. The focus on $p=2$ is quite natural in view of the material developed in the previous section, which relies on risk estimation to compute the diameter of confidence sets. Combining the exponential moment condition and the bound on the $B^s_{2q}$-norm, we prove in Lemma \ref{lemma: class inclusion} that densities in $\mathcal{G}(s)$ also have their $B^s_{1q}$-norm bounded by a constant depending on the class parameters. \subsection{Estimation Upper Bounds for $W_1$}\label{Subsection: Rd estimation} As before, we should insist on our estimator $\tilde{f}_n$ having total mass 1 almost surely. Indeed, the fact that $\tilde{f}_n$ has total mass 1 is vital to the proof of Proposition \ref{Prop: W1 loss decomposition} below. However, we note that there is no intrinsic requirement in (\ref{Eq: K-R duality, 2}) that $f$ and $g$ should be nonnegative, and so we will allow our estimators to take negative values. If a genuine density is required, one can just take the positive part of the estimator and renormalise. The following proposition gives an upper bound on the $W_1$ distance which is convenient for wavelet estimators.
\begin{prop}\label{Prop: W1 loss decomposition} For any probability density $f$ with a finite first moment and any estimator $\tilde{f}_n$ of $f$ which has a finite first moment almost surely, we have that \begin{equation}\label{Eq: W1 loss decomposition} W_1(\tilde{f}_n,f) \lesssim \sum_{k\in\mathbb{Z}^d} \|k\||\langle f-\tilde{f}_n,\phi_k \rangle| + \sum_{l\geq0} 2^{-l\left(\frac{d}{2} + 1\right)} \sum_{k\in\mathbb{Z}^d} |\langle f-\tilde{f}_n, \psi_{lk}\rangle|, \end{equation} where the constant depends only on the wavelet basis. \end{prop} \begin{remark} Let $\hat{f}_n$ be some estimator of $f$, not necessarily with total mass 1. We obtain an estimator which integrates to 1 almost surely, which we call $\tilde{f}_n$, by renormalising the first wavelet layer of $\hat{f}_n$, that is, renormalising $\hat{f}_0 := K_0(\hat{f}_n)$. Then we set $$ \tilde{f}_n = \frac{\hat{f}_0}{\int \hat{f}_0(x)\,\,\mathrm{d} x} + \sum_{l\geq0}\sum_{k\in\mathbb{Z}^d}\langle \hat{f}_n,\psi_{lk}\rangle \psi_{lk}.$$ Note that while one can perform this procedure for any estimator $\hat{f}_n$, it is particularly simple for wavelet-based estimators. Assuming $L_1$-consistency of $\hat{f}_n$, $\hat{f}_0\to K_0(f)$ and thus $K_0(\tilde{f}_n)\to K_0(f)$ in $L_1$. Moreover, for the wavelet estimators we use below, this convergence occurs very fast, at the rate $n^{-\frac{S}{2S+d}}$, where $S$ is the regularity of the wavelet basis. Thus it suffices to consider the un-normalised estimator $\hat{f}_n$ in the decomposition (\ref{Eq: W1 loss decomposition}) whenever $s\leq S-1$, which we do in the sequel. \end{remark} We first establish an upper bound for the estimation rate over the class $\mathcal{G}(s)$. 
\begin{theorem}\label{Thm: Rd estimation upper bound} For any $s\geq0$, there exists an estimator $\hat{f}_n$ such that for all sufficiently large $n$, $$ \sup_{f\in\mathcal{G}(s)} E_f\,W_1(\hat{f}_n,f) \lesssim \begin{cases} (\log{n})^{\frac{\gamma d}{2}+1}n^{-1/2}, \quad &d=2, \\ (\log{n})^{\frac{\gamma d}{2}}n^{-\frac{s+1}{2s+d}}, \quad &d\geq3. \end{cases} $$ where $\gamma$ is a constant depending only on $\alpha$ and $\beta$, and the implicit constant depends on the parameters of the class $\mathcal{G}(s)$ and the wavelet basis. For $d=1$, the empirical measure $P_n$ satisfies \[ \sup_{f\in\mathcal{G}(s)} E_f\,W_1(P_n,P_f) \lesssim n^{-1/2}.\] \end{theorem} \begin{remark} These rates are sharp up to a logarithmic factor so long as $L$ is sufficiently large: one uses a reduction to a multiple testing problem as in the proof of the lower bounds in Proposition \ref{Prop: minimax estimation rates}, and then uses an analogous collection of well-separated densities defined on some common compact set. For large enough $L$, the compact support ensures that these densities have suitable exponential moments and so belong to $\mathcal{G}(s)$. \end{remark} \begin{remark} An inspection of the proof reveals that in fact it suffices to assume a suitable polynomial moment, depending on $s$; however, for convenience we assume an exponential moment which works for all $s\geq0$. \end{remark} The proofs of Proposition \ref{Prop: W1 loss decomposition} and Theorem \ref{Thm: Rd estimation upper bound} are given in Appendix \ref{Section: Rd proofs}. The estimator $\hat{f}_n$ is simply a wavelet projection estimator which is zero outside of a growing compact set; the risk outside of this set is controlled using the moment assumption. As in the case of $\mathbb{T}^d$, we require an adaptive estimator. \begin{theorem}\label{Thm: Rd adaptive estimator} Let $d\geq2$, and let $\gamma>0$ be as in Theorem \ref{Thm: Rd estimation upper bound}.
Then there exists an estimator $\hat{f}_n$ of $f$ such that for all $n\geq n_0(B)$ and all $s\geq0$, $$ \sup_{f\in\mathcal{G}(s)} E_f\, W_1(\hat{f}_n,f) \lesssim (\log{n})^{\frac{\gamma d}{2}}\left(\frac{n}{\log{n}}\right)^{-\frac{s+1}{2s+d}}, $$ where the constant depends on the parameters of the class $\mathcal{G}(s)$ and the wavelet basis. \end{theorem} The definition of $\hat{f}_n$ and proof of Theorem \ref{Thm: Rd adaptive estimator} are given in Appendix \ref{Section: Rd proofs}. \subsection{Construction of Confidence Sets} Let us now concretely state the two-class adaptation problem we wish to solve. Fix two smoothnesses $s>r\geq0$ and consider the model $\mathcal{G}(r)\cup\mathcal{G}(s) = \mathcal{G}(r)$, where the equality holds because the classes are nested for $s>r$. Given $\alpha\in(0,1)$, we seek a confidence set $C_n$ which has honest coverage at level $1-\alpha$, that is, for all $n$ sufficiently large, \begin{equation}\label{Eq: Rd honest coverage} \inf_{f\in\mathcal{G}(r)}P_f(f\in C_n) \geq 1-\alpha, \end{equation} as well as the two diameter shrinkage conditions: for all $\alpha'>0$ there exists a constant $K=K(\alpha')>0$ such that \begin{align} \sup_{f\in\mathcal{G}(r)} P_f(|C_n|_{W_1}>KR_n(r)) &\leq \alpha', \label{Eq: Rd diameter shrinkage slow} \\ \sup_{f\in\mathcal{G}(s)} P_f(|C_n|_{W_1}>KR_n(s)) &\leq \alpha', \label{Eq: Rd diameter shrinkage fast} \end{align} where $R_n(r)$ and $R_n(s)$ equal the convergence rates in Theorem \ref{Thm: Rd estimation upper bound} up to a poly-logarithmic factor. As discussed previously, the $d=1$ and $d=2$ cases are straightforward given the existence of the estimator from Theorem \ref{Thm: Rd adaptive estimator}, since here the convergence rates do not depend on the smoothness $r$. We thus restrict our attention to the case $d\geq3$. Let $X_1,\ldots,X_{2n}$ be an i.i.d. sample from the unknown $f\in\mathcal{G}(r)$.
We split the sample as before into two equal halves, $\mathcal{S}^1$ and $\mathcal{S}^2$, and denote by $P^{(i)},E^{(i)}$ probabilities and expectations taken over $\mathcal{S}^i$. We wish to construct a confidence set via risk estimation, centred at the estimator $\hat{f}_n$ which we compute using $\mathcal{S}^1$. Proposition \ref{Prop: W1 loss decomposition} provides a natural upper bound for $W_1(f,\hat{f}_n)$, the square of which we then decompose into several terms. Define the thresholds $\kappa_{-1n} = \kappa_{0n} \simeq (\log{n})^{\gamma}, \kappa_{ln} = 2^l\kappa_{0n}$ for $\gamma$ chosen as in Theorem \ref{Thm: Rd estimation upper bound}. Applying the Cauchy-Schwarz inequality several times, we obtain the bound \begin{align}\label{Eq: Rd square loss decomposition} W_1(f,\hat{f}_n)^2 \leq& 3\Bigg( (\log{n})^{\gamma(d+2)}\left[\sum_{\|k\|_{\infty}\leq \kappa_{-1n}}\langle f-\hat{f}_n,\phi_k\rangle^2 + j\sum_{l<j}2^{-2l}\sum_{\|k\|_{\infty}\leq\kappa_{ln}}\langle f-\hat{f}_n,\psi_{lk}\rangle^2 \right] \nonumber \\ &\ldots + \left[\sum_{l\geq j}2^{-l\left(\frac{d}{2}+1\right)}\sum_{\|k\|_{\infty}\leq\kappa_{ln}}|\langle f-\hat{f}_n,\psi_{lk}\rangle|\right]^2 \nonumber \\ &\ldots + \left[\sum_{\|k\|_{\infty}> \kappa_{-1n}}\|k\||\langle f,\phi_k\rangle| + \sum_{l\geq0}2^{-l\left(\frac{d}{2}+1\right)}\sum_{\|k\|_{\infty}>\kappa_{ln}}|\langle f, \psi_{lk}\rangle| \right]^2 \Bigg). \end{align} The final term is controlled using the moment assumption on $f\in\mathcal{G}(r)$; indeed, from the proof of Theorem \ref{Thm: Rd estimation upper bound} we have that for all $f\in\mathcal{G}(r)$, this term is bounded above by \begin{equation}\label{Eq: Rd remainder term definition} \Delta_{n} := C(d)L^2(\log{n})^{2\gamma}n^{-1}, \end{equation} where $C(d)$ is a constant depending only on $d$ and the wavelet basis. We next consider the remaining terms in (\ref{Eq: Rd square loss decomposition}).
We introduce pseudo-distances $\tilde{W}^{(n,j)}(f,g)$ defined as \begin{equation}\label{Eq: W_1 proxy distances definition} \tilde{W}^{(n,j)}(f,g) = \left[\sum_{\|k\|_{\infty}\leq \kappa_{-1n}}\langle f-g,\phi_k\rangle^2 + j\sum_{l<j}2^{-2l}\sum_{\|k\|_{\infty}\leq\kappa_{ln}}\langle f-g,\psi_{lk}\rangle^2 \right]^{1/2} + \sum_{l\geq j}2^{-l\left(\frac{d}{2}+1\right)}\sum_{\|k\|_{\infty}\leq\kappa_{ln}}|\langle f-g,\psi_{lk}\rangle|. \end{equation} Observe that for $f,g\in\mathcal{G}(r)$, $$ W_1(f,g) \leq \sqrt{3(\log{n})^{\gamma(d+2)}}\cdot \tilde{W}^{(n,j)}(f,g) + \sqrt{3\Delta_n};$$ this is true uniformly over $r\geq0$. Since $\sqrt{\Delta_n}$ converges (up to a logarithmic factor) at the parametric rate, this means that any diameter shrinkage condition with respect to $\tilde{W}^{(n,j)}$ provides an analogous shrinkage condition for $W_1$, with only a slightly worse rate. Moreover, the first part of $\tilde{W}^{(n,j)}(f,g)$ is well-suited to estimation using a $U$-statistic. To this end, define the $U$-statistic \begin{align} \label{Eq: Rd U-stat definition} V_{n,j} = V_{n,j}(\hat{f}_n) :=& \frac{2}{n(n-1)}\sum_{i<i',i,i'\in\mathcal{S}^2} \Bigg[ \sum_{\|k\|_{\infty}\leq \kappa_{-1n}}\left(\phi_k(X_i) - \langle \hat{f}_n,\phi_k\rangle \right)\left(\phi_k(X_{i'}) - \langle \hat{f}_n,\phi_k\rangle \right) \nonumber \\ &\ldots+ j\sum_{l<j}2^{-2l}\sum_{\|k\|_{\infty}\leq\kappa_{ln}}\left(\psi_{lk}(X_i) - \langle \hat{f}_n,\psi_{lk}\rangle \right)\left(\psi_{lk}(X_{i'}) - \langle \hat{f}_n,\psi_{lk}\rangle \right) \Bigg]. \end{align} Clearly we have that $E^{(2)}_f V_{n,j}$ is equal to the square of the first term in (\ref{Eq: W_1 proxy distances definition}). Analogously to Lemma \ref{Lemma: U-stat variance bound}, one shows that $V_{n,j}$ has small variance. 
\begin{lemma}\label{Lemma: Rd U-stat variance bound} For $f\in L_{\infty}(\mathbb{R}^d)$, we have that, for some constant $C_d$ depending only on $d$ and the wavelet basis, \begin{align*} \mathrm{Var}^{(2)}_f(V_{n,j}) &\leq \frac{C_d}{2}\left(\frac{j^2\|f\|_{\infty}^2(\log{n})^{\gamma d}}{n(n-1)}\sum_{l<j}2^{l(d-4)} + \frac{\|f\|_{\infty}}{n}\left[\sum_{\|k\|_{\infty}\leq \kappa_{-1n}}\langle f-\hat{f}_n,\phi_k\rangle^2 + j^2\sum_{l<j}2^{-4l}\sum_{\|k\|_{\infty}\leq\kappa_{ln}}\langle f-\hat{f}_n,\psi_{lk}\rangle^2\right]\right) \\ &\leq C_d\left(\frac{j^2\|f\|_{\infty}^2(\log{n})^{\gamma d}}{n(n-1)}\sum_{l<j}2^{l(d-4)} + \tilde{W}^{(n,j)}(f,\hat{f}_n)^2 \right) \\ &=: \lambda_{j,n}^2(f). \end{align*} \end{lemma} For the second part of $\tilde{W}^{(n,j)}(f,\hat{f}_n)$, we use the concentration arguments from the proof of Theorem \ref{Thm: Rd adaptive estimator} to show that this term is suitably small with high probability uniformly over $f\in\mathcal{G}(r)$. Given a sequence $(j_n)$, we write $\tilde{W}^{(n)}$ for $\tilde{W}^{(n,j_n)}$, and $V_{j_n}$ for $V_{n,j_n}$. \begin{theorem}\label{Thm: Rd adaptive confidence sets} Let $d\geq3$. Fix $B\geq1,M>0,\alpha,\beta,L>0,1\leq q\leq \infty,$ and $s>r\geq0$. Let $\gamma\geq1$ be as in Theorem \ref{Thm: Rd estimation upper bound}. If $d>4$, assume additionally that $s\leq\frac{2d-4}{d-4}r + \frac{d}{d-4}$. Fix $\alpha\in(0,1)$. 
Consider the confidence set based on a sample of size $2n$ given by \begin{equation}\label{Eq: Rd conf set definition} C_n = \left\{g\in\mathcal{G}(r): \tilde{W}^{(n)}(g,\hat{f}_n) \leq C(d)\sqrt{z_{\alpha}\lambda_{n,j_n}(g)+V_{j_n}+ G_{j_n}}\right\} \end{equation} where $\hat{f}_n$ is computed on $\mathcal{S}^1$, $V_{j_n}$ is computed on $\mathcal{S}^2$, $C(d)$ is a constant depending on $d$ and the wavelet basis, and: \begin{itemize} \item $\lambda_{n,j_n}(g)$ is as in Lemma \ref{Lemma: Rd U-stat variance bound}; \item $j_n$ is such that $2^{j_n}\simeq\left(\frac{n}{\log{n}}\right)^{\frac{1}{2r+d/2}}$; \item $G_{j_n} = (\log{n})^{\gamma d + 1}2^{-2j_n(r+1)}$; \item $z_{\alpha} = (\alpha/2)^{-1/2}$. \end{itemize} Then for all $n\geq n_0(B)$, $C_n$ satisfies (\ref{Eq: Rd honest coverage}), as well as (\ref{Eq: Rd diameter shrinkage slow}) and (\ref{Eq: Rd diameter shrinkage fast}) for a suitable constant $K>0$ with the rates $$ R_n(r) = (\log{n})^{\gamma(d+1)}\left(\frac{n}{\log{n}}\right)^{-\frac{r+1}{2r+d}}, \quad R_n(s) = (\log{n})^{\gamma(d+1)}\left(\frac{n}{\log{n}}\right)^{-\frac{s+1}{2s+d}}. $$ In particular, $C_n$ is a near-optimal adaptive $W_1$ confidence set over $\mathcal{G}(s)\cup\mathcal{G}(r)$. \end{theorem} The proof is almost identical to that of Theorem \ref{Thm: d > 2 confidence sets}; a more detailed argument can be found in Appendix \ref{Section: Rd proofs}. In particular, this proves statements (i) and (ii) of Theorem \ref{Thm: existence and nonexistence of conf sets - Rd}. \subsection{Non-Existence of Confidence Sets} We now turn to the non-existence result (iii) in Theorem \ref{Thm: existence and nonexistence of conf sets - Rd}, a consequence of Lemma \ref{Lemma: confidence set impossibility} (which holds in a general decision-theoretic framework).
We therefore require a lower bound on the minimax separation rate in the testing problem \begin{equation}\label{Eq: testing problem Rd} H_0: f\in\mathcal{G}(s) \quad \mathrm{vs.} \quad H_1: f\in\tilde{\mathcal{G}}(r,\rho), \end{equation} where the separated alternative $\tilde{\mathcal{G}}(r,\rho)$ is defined analogously to before: $$ \tilde{\mathcal{G}}(r,\rho) := \left\{f\in\mathcal{G}(r): W_1(f,\mathcal{G}(s))\geq\rho\right\}. $$ \begin{theorem}\label{Thm: testing rate Rd} Assume that $d>4$ and $s>r\geq0$. Let $\rho_n^*$ be the minimax rate of testing for the problem (\ref{Eq: testing problem Rd}). Then, for $L$ sufficiently large in (\ref{Eq: function class definition Rd}), there exist a constant $c>0$ depending on the parameters of the class $\mathcal{G}(s)$ and the wavelet basis, and $n_0 = n_0(B,M)$ such that for all $n\geq n_0$, $$\rho_n^* \geq c n^{-\frac{r+1}{2r+d/2}}.$$ Also, \eqref{lower condition tests} holds for any $\beta<1$.\\ \end{theorem} The proof is given in Appendix \ref{Section: Rd proofs}, and is similar to the proof of Theorem \ref{Thm: testing rate}. As before, this implies statement (iii) of Theorem \ref{Thm: existence and nonexistence of conf sets - Rd}. \subsection{The Case of $W_p,p>1$}\label{Subsection: W_p,p>1 on Rd} We briefly explain why the above techniques do not extend to other Wasserstein distances $W_p$, for $p>1$. On $\mathbb{R}^d$, one may bound the $W_1$-distance using the Kantorovich-Rubinstein duality formula (\ref{Eq: K-R duality, 2}). However, the generalisation of this formula on $\mathbb{T}^d$ for $W_p,p>1$, Proposition \ref{Prop: W-Besov comparison}, relies crucially on the compactness of $\mathbb{T}^d$ and the fact that the densities to which it applies are bounded uniformly away from zero, say by $m>0$; moreover, the constant in the upper bound is inversely proportional to a power of $m$.
To apply this result, we would certainly have to consider only positive densities, use Proposition \ref{Prop: W-Besov comparison} on some compact set, and then use a moment condition to control the terms outside of this set. However, a stronger moment condition leads to a smaller lower bound $m$ over large compact sets; this tension cannot be resolved without a polynomial contribution to the convergence rates. Given that the $W_1$ estimation rates on $\mathbb{R}^d$ in Theorem \ref{Thm: Rd estimation upper bound} match the rates from the compact case up to logarithmic factors, and that such rates for $W_p$ on $\mathbb{T}^d$ do not depend on $p$, we conjecture that this method does not lead to sharp upper bounds. \section{Proofs for Section \ref{Section: Rd}}\label{Section: Rd proofs} \begin{proof}[Proof of Proposition \ref{Prop: W1 loss decomposition}] As $f$ and $\tilde{f}_n$ have the same total mass, we may without loss of generality take the supremum over functions $h\in\mathrm{Lip}_1(\mathbb{R}^d)$ for which $h(0)=0$; observe that $x\mapsto\|x\|$ is an envelope for this function class. Since both $f$ and $\tilde{f}_n$ have finite first moments (almost surely), the wavelet expansion of any $h$ in this class converges in $L_1(f)$ and $L_1(\tilde{f}_n)$ and so $$ \int_{\mathbb{R}^d}h(f-\tilde{f}_n) = \sum_{k\in\mathbb{Z}^d}\langle h,\phi_k\rangle \langle f-\tilde{f}_n, \phi_k\rangle + \sum_{l\geq0}\sum_{k\in\mathbb{Z}^d} \langle h,\psi_{lk} \rangle \langle f-\tilde{f}_n,\psi_{lk} \rangle. $$ As the father wavelets $\phi_k$ are compactly supported in some interval about $k$, $$ |\langle h,\phi_k \rangle| \lesssim |h(k)| \leq \|k\| $$ for some constant depending on the wavelet basis.
Moreover, $h-K(h) = \sum_{l\geq0}\sum_{k\in\mathbb{Z}^d}\langle h,\psi_{lk}\rangle\psi_{lk}$ is in a $B^1_{\infty\infty}$-ball of radius depending only on the wavelet basis, and so by (\ref{Eq: Besov wavelet coefficient bound}), $$ \sup_{k\in\mathbb{Z}^d}|\langle h,\psi_{lk}\rangle| \lesssim 2^{-l\left(\frac{d}{2}+1\right)}. $$ Plugging these uniform estimates for the wavelet coefficients of $h$ into the first equation gives the result. \end{proof} \begin{proof}[Proof of Theorem \ref{Thm: Rd estimation upper bound}] When $d=1$, the empirical measure achieves the stated rate (\cite{fournierRateConvergenceWasserstein2015}). Thus we assume $d\geq2$. The estimator we use is $$ \hat{f}_n := \sum_{\|k\|_{\infty}\leq \kappa_{-1n}} \hat{f}_{-1k}\phi_k + \sum_{l\leq l_n(s)}\sum_{\|k\|_{\infty}\leq\kappa_{ln}}\hat{f}_{lk}\psi_{lk}, $$ where $\hat{f}_{lk}$ are empirical wavelet coefficients and the cutoffs $\kappa_{ln}, l_n(s)$ are chosen such that $$ 2^{l_n(s)} \simeq n^{\frac{1}{2s+d}}, \kappa_{-1n} = \kappa_{0n} \simeq (\log{n})^{\gamma}, \kappa_{ln} = 2^l\kappa_{0n},$$ where $\gamma$ is to be chosen below. We then use the decomposition in Proposition \ref{Prop: W1 loss decomposition}, which we further split to obtain six terms: \begin{align*} W_1(f,\hat{f}_n) \lesssim& \sum_{\|k\|_{\infty}\leq \kappa_{-1n}} \|k\| |\hat{f}_{-1k}-f_{-1k}| + \sum_{\|k\|_{\infty}> \kappa_{-1n}} \|k\||f_{-1k}| \\ &\ldots + \sum_{l< l_n(s)} 2^{-l\left(\frac{d}{2}+1\right)}\sum_{\|k\|_{\infty}\leq\kappa_{ln}}|\hat{f}_{lk}-f_{lk}| + \sum_{l< l_n(s)} 2^{-l\left(\frac{d}{2}+1\right)}\sum_{\|k\|_{\infty}>\kappa_{ln}}|f_{lk}| \\ &\ldots + \sum_{l\geq l_n(s)} 2^{-l\left(\frac{d}{2}+1\right)}\sum_{\|k\|_{\infty}\leq\kappa_{ln}}|f_{lk}| + \sum_{l\geq l_n(s)} 2^{-l\left(\frac{d}{2}+1\right)}\sum_{\|k\|_{\infty}>\kappa_{ln}}|f_{lk}| \\ &=: I + II + III + IV + V + VI. \end{align*} We first consider the bias terms $II,IV,VI$. 
For term $II$, we have that $$ \sum_{\|k\|_{\infty}> \kappa_{-1n}}\|k\||f_{-1k}| \leq \int_{\mathbb{R}^d}\sum_{\|k\|_{\infty}> \kappa_{-1n}}\|k\||\phi_k(x)|f(x)\,\,\mathrm{d} x.$$ Since each $\phi_k$ is compactly supported in some interval about $k$, and $\sum_{k\in\mathbb{Z}^d}|\phi_k|$ is uniformly bounded on $\mathbb{R}^d$, we have that $$ \sum_{\|k\|_{\infty}> \kappa_{-1n}}\|k\||\phi_k(x)| \lesssim \|x\| $$ for some constant depending on the wavelet basis. Moreover, the integrand is supported for all large enough $n$ in $([-\kappa_{-1n}/2,\kappa_{-1n}/2]^d)^c =: D_n$. Thus, for $n$ large enough, \begin{equation}\label{eq: bound bias mother wavelet 1} II \lesssim \int_{D_n}\|x\|f(x)\,\,\mathrm{d} x \leq \mathcal{E}_{\alpha,\beta}(f) \kappa_{-1n}\exp\left(-\beta\left(\frac{\kappa_{-1n}}{2}\right)^{\alpha}\right). \end{equation} Since $\sum_{k\in\mathbb{Z}^d}|\psi_{lk}| $ is uniformly bounded by a constant depending on the wavelet basis times $2^{ld/2}$, we analogously have \begin{equation}\label{eq: bound bias wavelet 2} \sum_{\|k\|_{\infty}>\kappa_{ln}} |f_{lk}| \lesssim 2^{ld/2}\mathcal{E}_{\alpha,\beta}(f)\exp\left(-\beta\left(\frac{\kappa_{0n}}{2}\right)^{\alpha}\right). \end{equation} Thus $$ IV + VI \lesssim \mathcal{E}_{\alpha,\beta}(f)\exp\left(-\beta\left(\frac{\kappa_{0n}}{2}\right)^{\alpha}\right). $$ Choosing $\gamma>0$ sufficiently large depending on $\alpha,\beta$, these terms converge faster than $n^{-1/2}$. Next, we deal with the final bias term $V$. By Cauchy-Schwarz and the fact that $\|f\|_{B^s_{2q}}\leq B$, $$ \sum_{\|k\|_{\infty}\leq\kappa_{ln}} |f_{lk}| \leq \sqrt{\kappa_{ln}^d}\|f_{l\cdot}\|_2 \lesssim (\log{n})^{\gamma d/2}2^{l\left(\frac{d}{2} - s\right)},$$ and so $$ V \lesssim \sum_{l\geq l_n(s)} 2^{-l(s+1)}(\log{n})^{\gamma d/2} \simeq (\log{n})^{\gamma d/2}2^{-l_n(s)(s+1)}$$ which is of the correct order by the definition of $l_n(s)$. 
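The requirement above that $\gamma$ be sufficiently large depending on $\alpha,\beta$ can be made quantitative; the following computation is a sketch of our own, with the constant $\beta\,2^{-\alpha}$ kept explicit.

```latex
% With \kappa_{0n} \simeq (\log n)^{\gamma}, the bias terms II, IV, VI are
% bounded (up to polylogarithmic prefactors) by
\exp\Big(-\beta\big(\tfrac{\kappa_{0n}}{2}\big)^{\alpha}\Big)
\;\simeq\;\exp\big(-\beta\,2^{-\alpha}(\log n)^{\gamma\alpha}\big).
% This is at most n^{-1/2} = e^{-(\log n)/2} if and only if
\beta\,2^{-\alpha}(\log n)^{\gamma\alpha}\;\geq\;\tfrac{1}{2}\log n,
% which holds for all n large enough whenever \gamma\alpha > 1, so any
\gamma\;>\;\frac{1}{\alpha}
% suffices: the decay is then faster than any power of n, and the
% polylogarithmic prefactors (e.g. \kappa_{-1n} in the bound on II) are
% absorbed.
```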
To bound the stochastic terms $I$ and $III$, we use the expectation bound Lemma \ref{Lemma: H-J inequality}, whose proof generalises naturally to the case of $\mathbb{R}^d$. We have for all $l\geq-1$ such that $2^{ld}\leq n$ and $k\in\mathbb{Z}^d$ that $$ E_f|\hat{f}_{lk} - f_{lk}| \lesssim n^{-1/2},$$ for some constant depending on $M$ and the wavelet basis. So $$ E_f(I) \lesssim (\kappa_{-1n})^{d+1}n^{-1/2} $$ and $$ E_f(III) \lesssim (\log{n})^{\gamma d}n^{-1/2}\sum_{l< l_n(s)} 2^{l\left(\frac{d}{2}-1\right)}.$$ When $d=2$, the sum contributes an extra $\log{n}$ factor as in the statement. For $d\geq3$, the final term of the sum dominates, and so $$ E_f(III) \lesssim (\log{n})^{\gamma d/2}n^{-\frac{s+1}{2s+d}} $$ as stated. \end{proof} \begin{proof}[Proof of Theorem \ref{Thm: Rd adaptive estimator}] Define the thresholds $\kappa_{-1n} = \kappa_{0n} \simeq (\log{n})^{\gamma}, \kappa_{ln} = 2^l\kappa_{0n}$ for $\gamma$ chosen as in Theorem \ref{Thm: Rd estimation upper bound}. As before, let $l_{\max}$ be such that $2^{l_{\max}} \simeq (n/\log{n})^{1/d}$; for $0\leq l\leq l_{\max}$, define the thresholds $\tau_l$ via $$ \tau_l^2 = \tau^2\kappa_{ln}^d\frac{\log{n}}{n}, $$ where $\tau>0$ is to be chosen below. For any sequence $(a_k)_{k\in\mathbb{Z}^d}$, set $ \|a_{\cdot}\|_{2,\kappa_{ln}} := \left( \sum_{\|k\|_{\infty}\leq\kappa_{ln}} a_k^2 \right)^{1/2}. $ The thresholded estimator is then defined as \begin{equation}\label{Eq: Rd thresholded estimator definition} \hat{f}_n = \sum_{\|k\|_{\infty}\leq \kappa_{-1n}}\hat{f}_{-1k}\phi_k + \sum_{l=0}^{l_{\max}}\mathrm{1}_{\left\{ \|\hat{f}_{l\cdot}\|_{2,\kappa_{ln}}> \tau_l \right\}}\sum_{\|k\|_{\infty}\leq\kappa_{ln}} \hat{f}_{lk}\psi_{lk}. 
\end{equation} We perform a decomposition of the risk similar to that in the previous proof: \begin{align*} W_1(f,\hat{f}_n) \lesssim& \sum_{\|k\|_{\infty}\leq \kappa_{-1n}} \|k\| |\hat{f}_{-1k}-f_{-1k}| + \sum_{\|k\|_{\infty}> \kappa_{-1n}} \|k\||f_{-1k}| \\ &\ldots + \sum_{l\leq l_{\max}} 2^{-l\left(\frac{d}{2}+1\right)}\sum_{\|k\|_{\infty}\leq\kappa_{ln}}\left|\mathrm{1}_{\left\{ \|\hat{f}_{l\cdot}\|_{2,\kappa_{ln}}> \tau_l \right\}}\hat{f}_{lk}-f_{lk}\right| + \sum_{l\leq l_{\max}} 2^{-l\left(\frac{d}{2}+1\right)}\sum_{\|k\|_{\infty}>\kappa_{ln}}|f_{lk}| \\ &\ldots + \sum_{l>l_{\max}} 2^{-l\left(\frac{d}{2}+1\right)}\sum_{\|k\|_{\infty}\leq\kappa_{ln}}|f_{lk}| + \sum_{l> l_{\max}} 2^{-l\left(\frac{d}{2}+1\right)}\sum_{\|k\|_{\infty}>\kappa_{ln}}|f_{lk}| \\ &=: I + II + III + IV + V + VI. \end{align*} We treat terms $I,II,IV$ and $VI$ exactly as before. Term $V$ is also dealt with as in the previous proof, noting that for all $n$ sufficiently large, $2^{l_{\max}}>n^{1/(2s+d)}$. It remains to deal with term $III$; by Cauchy-Schwarz and the definition of $\kappa_{ln}$, we have that $$ III \lesssim (\log{n})^{\frac{\gamma d}{2}}\sum_{l=0}^{l_{\max}}2^{-l}\left\| \mathrm{1}_{\left\{ \|\hat{f}_{l\cdot}\|_{2,\kappa_{ln}}> \tau_l \right\}}\hat{f}_{l\cdot} - f_{l\cdot} \right\|_{2,\kappa_{ln}},$$ where the constant depends on $d$. By splitting this sum into two parts at $l_n(s)$, $2^{l_n(s)} \simeq B^{1/s}(n/\log{n})^{1/(2s+d)}$, one can bound it exactly as in the proof of Theorem \ref{Thm: Thresholded estimator}. \end{proof} \begin{proof}[Proof of Theorem \ref{Thm: Rd adaptive confidence sets}] We first establish coverage. Define the thresholds $\kappa_{ln}$ as in the previous proof. Given $f\in\mathcal{G}(s)$, as in the proof of Theorems \ref{Thm: Thresholded estimator} and \ref{Thm: Rd adaptive estimator}, on an event of probability tending to 1, for all $l$ such that $l_n(r)\leq l\leq l_{\max}$, $\langle \hat{f}_n,\psi_{l\cdot}\rangle \equiv 0$.
Note that $l_{\max}>j_n >l_n(s)>l_n(r)$. So on this event, by Cauchy-Schwarz, \begin{align*} \left(\sum_{l\geq j_n}2^{-l\left(\frac{d}{2}+1\right)}\sum_{\|k\|_{\infty}\leq\kappa_{ln}}|\langle f-\hat{f}_n,\psi_{lk}\rangle|\right)^2 &\lesssim (\log{n})^{\gamma d}\left(\sum_{l\geq j_n}2^{-l}\|\langle f,\psi_{l\cdot}\rangle\|_2\right)^2 \\ &\lesssim (\log{n})^{\gamma d}B2^{-2j_n(r+1)} \\ &\leq G_{j_n} \end{align*} for all $n$ sufficiently large, i.e. this quantity is $O_P(G_{j_n})$. The other term in $\tilde{W}^{(n)}(f,\hat{f}_n)^2$ is precisely $E^{(2)}_f V_{j_n}$; by Chebyshev's inequality we obtain condition (\ref{Eq: Rd honest coverage}). It remains to confirm the diameter conditions (\ref{Eq: Rd diameter shrinkage slow}) and (\ref{Eq: Rd diameter shrinkage fast}) with the rates $R_n(r),R_n(s)$ as given in the statement of the result. As the remainder term $\sqrt{\Delta_n}$ converges up to a logarithmic factor at the rate $n^{-1/2}$, it is dominated by $\tilde{W}^{(n)}$ for diameter considerations. As observed previously, we may instead prove the diameter conditions for the $\tilde{W}^{(n)}$ distance with the augmented rates $$ \bar{R}_n(r) = (\log{n})^{\gamma d/2}\left(\frac{n}{\log{n}}\right)^{-\frac{r+1}{2r+d}}, \quad \bar{R}_n(s) = (\log{n})^{\gamma d/2}\left(\frac{n}{\log{n}}\right)^{-\frac{s+1}{2s+d}}. $$ By the same argument as in the proof of Theorem \ref{Thm: d > 2 confidence sets}, the $\tilde{W}^{(n)}$-diameter of $C_n$ is bounded by a constant multiple of $$ (\log{n})^{\gamma d/4 + 1/2}n^{-1/2}\left(\sum_{l<j_n}2^{l(d-4)}\right)^{1/4} + \sqrt{V_{j_n}} + \sqrt{G_{j_n}} + n^{-1/2}. $$ The final term is dominated by the first, and (using the condition on $s$ when $d>4$) $\sqrt{G_{j_n}} = O(\bar{R}_n(s)) = o(\bar{R}_n(r))$. One checks the first term is of the correct order as in Theorem \ref{Thm: d > 2 confidence sets}.
Finally, since $\mathrm{Var}^{(2)}_f(V_{j_n})\to0$, we have that $$V_{j_n} = O_{P_f}\left(E_f V_{j_n}\right);$$ as in the proof of Theorem \ref{Thm: Rd adaptive estimator}, this expectation is of order $\bar{R}_n(r)$ or $\bar{R}_n(s)$ when $f$ belongs to $\mathcal{G}(r)$ or $\mathcal{G}(s)$ respectively. \end{proof} \begin{proof}[Proof of Theorem \ref{Thm: testing rate Rd}] For some $\alpha'>\alpha$, $D>0$ and $\alpha(x)=\alpha'e^{-1/\left(\norm{x}_2-D\right)}\mathds{1}_{B(0,D)^c}(x)$, the density $f$ defined by \[f(x)\propto e^{-\beta\norm{x}_2^{\alpha(x)}}\] is such that $\mathbb{E}_f\Big[e^{\beta\norm{X}^\alpha}\Big]<+\infty$. Then, for $\sigma>0$, if $X\sim P_f$, $\sigma X$ has density $g: x\mapsto \sigma^{-d}f(\sigma^{-1} x)$ satisfying \[ \mathbb{E}_g\Big[e^{\beta\norm{X}^\alpha}\Big] = \mathbb{E}_f\Big[e^{\sigma^\alpha\beta\norm{X}^\alpha}\Big]<+\infty.\] Then, we verify that $f\in H_2^m(\mathbb{R}^d)\subset B_{2\infty}^{m}(\mathbb{R}^d)\subset B_{2q}^{s}(\mathbb{R}^d)$, for any $m\in \mathbb{N}$ and $s<m$. Also, $\norm{g}_p=\sigma^{-d(1-1/p)}\norm{f}_p$ and the modulus of continuity of $g$ satisfies, for $t>0$ and an integer $N>s$, \begin{align*} \omega_N(g,t,2) &\coloneqq \underset{0\leq \norm{h}\leq t}{\sup} \norm{\sum_{k=0}^N \binom{N}{k} (-1)^{N-k} \sigma^{-d} f(\sigma^{-1} \cdot + k\sigma^{-1} h)}_2 \\ &= \sigma^{-d} \underset{0\leq \norm{h}\leq \sigma^{-1} t}{\sup} \norm{\sum_{k=0}^N \binom{N}{k} (-1)^{N-k} f(\sigma^{-1} \cdot + k h)}_2\\ &= \sigma^{-d/2} \omega_N(f,\sigma^{-1} t,2). \end{align*} Therefore, with $\left|f\right|_{B^s_{pq}}\coloneqq \left[ \int_0^{\infty} \Big|\frac{\omega_N(f,t,p)}{t^s}\Big|^q\frac{dt}{t}\right]^{1/q}$, we have \begin{equation}\label{eqn: dilatation_besovnorm} \norm{g}_{B^{s}_{pq}} = \norm{g}_p + \left|g\right|_{B^s_{pq}} = \sigma^{-d(1-1/p)}\norm{f}_p + \sigma^{-d(1-1/p)-s} \left|f\right|_{B^s_{pq}},\end{equation} so that $\norm{g}_{B_{2q}^s}\leq B$ for $\sigma$ large enough.
Also, since $f\in L_\infty(\mathbb{R}^d)$, $g\leq M$ for $\sigma$ large enough. So, for some large $L$, $g\in \mathcal{G}(s)$. For some sequence $L_n\to\infty$, and any $\omega\in \left\{-1;1\right\}^{\mathbb{Z}\cap\left[0,2^{L_n}\right)^d\times \mathcal{I}}$, we define for some $\epsilon>0$, \[ f^n_\omega = g + \epsilon2^{-L_n(r+d/2)}\sum_{k\in\mathbb{Z}\cap\left[0,2^{L_n}\right)^d, \iota\in\mathcal{I}} \omega_{k,\iota} \Psi^{\iota}_{L_nk}.\] Assuming that the scaling and mother wavelet functions are compactly supported (as assumed in Appendix \ref{Section: Wavelet Appendix}), the $\Psi^{\iota}_{L_nk}$, for $k\in\mathbb{Z}\cap\left[0,2^{L_n}\right)^d, \iota\in\mathcal{I}$, are supported on a compact set $K$ independent of $n$. Then, since \begin{align*} \norm{f^n_\omega}_{B^{r}_{2q}} &\leq \norm{g}_{B^{r}_{2q}} + \epsilon2^{-L_n(r+d/2)} \norm{\sum_{k\in\mathbb{Z}\cap\left[0,2^{L_n}\right)^d, \iota\in\mathcal{I}} \omega_{k,\iota} \Psi^{\iota}_{L_nk}}_{B^{r}_{2q}}\\ &\leq \norm{g}_{B^{r}_{2q}} + C\epsilon, \end{align*} for some $C>0$ depending on $d$ only, reasoning as for \eqref{eqn: dilatation_besovnorm}, and since $B_{2q}^s\subset B_{2q}^r$, $f^n_\omega$ lies in the $\norm{\cdot}_{B_{2q}^r}$-Besov ball of radius $B$ for $\epsilon$ small enough and $\sigma$ large enough. Moreover, since the mother wavelets $\Psi^{\iota}_{L_nk}$ have zero mean, $\int_{\mathbb{R}^d}f^n_\omega(t)dt=1$, and since \[ \norm{2^{-L_n(r+d/2)}\sum_{k\in\mathbb{Z}\cap\left[0,2^{L_n}\right)^d, \iota\in\mathcal{I}} \left|\Psi^{\iota}_{L_nk}\right|}_\infty \lesssim2^{-rL_n}\] and $g$ is bounded below by a positive constant on $K$, we have $0< f^n_\omega \leq M$ for $n, \sigma$ large enough (or $\epsilon$ small enough if $r=0$). Hence $f^n_\omega$ is indeed a density function. For these to belong to the alternative hypothesis, it remains to check that they are well separated from the null hypothesis. 
For any $h\in \mathcal{G}(s)$, the reverse triangle inequality gives \begin{align*} W_1\left(f^n_\omega, h\right)&\gtrsim \norm{f^n_\omega - h}_{B^{-1}_{1\infty}}\\ &\gtrsim 2^{-L_n(d/2+1)} \sum_{k\in\mathbb{Z}\cap\left[0,2^{L_n}\right)^d, \iota\in\mathcal{I}}\left|\langle f^n_\omega - h, \Psi^{\iota}_{L_nk}\rangle\right|\\ &\geq 2^{-L_n(d/2+1)}\left| \sum_{k, \iota}|\langle f^n_\omega -g , \Psi^{\iota}_{L_nk}\rangle| - \sum_{k, \iota}|\langle h-g, \Psi^{\iota}_{L_nk}\rangle|\right|\\ &= 2^{-L_n(d/2+1)} \left[ C 2^{-L_n(r-d/2)}- C'2^{-L_n(s-d/2)}\right]\\ &\gtrsim2^{-L_n(1+r)}, \end{align*} for constants independent of $n$. Above, we used that $s>r$ and that, for any $s>0$, $\mathcal{G}(s)\subset \left\{f: \norm{f}_{B^{s}_{1q}}\leq B'\right\}$ for some $B'>0$ according to Lemma \ref{lemma: class inclusion}. The last inequality holds for $n$ large enough. Therefore, if $L_n^*$ is such that $2^{-L_n^*(1+r)}\asymp \xi_n$, it is possible to take $L_n>L_n^*$ such that $\rho_n\leq C'2^{-L_n(1+r)} = o(\xi_n)$, so that, for any $\omega$, $f^n_\omega \in \tilde{\mathcal{G}}(s,\rho_n)$. For $N_n=2^{2^{dL_n}\left(2^d-1\right)}$, let us index $\left\{-1;1\right\}^{\mathbb{Z}\cap\left[0,2^{L_n}\right)^d\times \mathcal{I}}=\left\{\omega^{(m)}:\ m=1,\dots, N_n\right\}$ and denote $P_m=P_{f_{\omega^{(m)}}}$. Then, \[ \underset{n}{\lim\inf} \ \underset{\Psi_n}{\inf}\Big[\underset{f\in H_0}{\sup} \mathbb{E}_f \left[\Psi_n\right] + \underset{f\in H_1(r_n)}{\sup} \mathbb{E}_f \left[1-\Psi_n\right] \Big] \geq 1-\frac{1}{2}\sqrt{\chi^2\left(Q^{\otimes n}, P_0^{\otimes n}\right)},\] where $Q=N_n^{-1}\sum_{m=1}^{N_n} P_m$ and $P_0$ has density $g\in H_0$. 
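To control this chi-square divergence, recall the standard second-moment expansion used in Ingster-style lower bounds: expanding the square of the mixture gives
\[
\chi^2\left(Q^{\otimes n}, P_0^{\otimes n}\right) = \frac{1}{N_n^2}\sum_{m,q=1}^{N_n}\int \frac{dP_m^{\otimes n}}{dP_0^{\otimes n}}\,\frac{dP_q^{\otimes n}}{dP_0^{\otimes n}}\,dP_0^{\otimes n} - 1,
\]
which reduces the problem to computing the pairwise likelihood-ratio integrals.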
Then, for any $1\leq m,q\leq N_n$, one has by properties of the wavelet basis, denoting $\nu_m=f_{\omega^{(m)}}-g$, \begin{align*} &\int \frac{dP_m^{\otimes n}}{dP_0^{\otimes n}}\frac{dP_q^{\otimes n}}{dP_0^{\otimes n}}dP_0^{\otimes n} \\ &= \prod_{i=1}^n \int_{\mathbb{R}^d} \left[g(x_i) + \epsilon2^{-L_n(r+d/2)}\sum_{k, \iota} \omega^{(m)}_{k,\iota} \Psi^{\iota}_{L_nk}(x_i)\right]\\ &\qquad \left[g(x_i) + \epsilon2^{-L_n(r+d/2)}\sum_{k, \iota} \omega^{(q)}_{k,\iota} \Psi^{\iota}_{L_nk}(x_i)\right] g^{-1}(x_i)dx_i\\ &= \left(1+ \int_{\mathbb{R}^d} \frac{\nu_m(x)\nu_q(x)}{g(x)} dx \right)^n. \end{align*} For $\sigma$ large enough, $g$ is constant on the compact support of $\nu_m$ and $\nu_q$, equal to $g(0)$. Hence, following the same arguments as above, \[ \chi^2\left(Q^{\otimes n}, P_0^{\otimes n}\right) \leq (\cosh{\gamma_n})^{2^{dL_n}\left(2^d-1\right)} -1, \] where $\gamma_n=n\epsilon^2g(0)^{-1}2^{-L_n(2r+d)}$, and for any $\delta>0$, $\chi^2\left(Q^{\otimes n}, P_0^{\otimes n}\right) \leq \delta^2$ for $n$ large enough. This concludes the proof. \end{proof} \begin{lemma}\label{lemma: class inclusion} Let $B\geq1,M>0,\alpha>0,\beta>0,L>0,1\leq q\leq \infty,$ and $s\geq0$. Then, there exists a constant $B'$, depending on the class parameters, the wavelet basis and the dimension $d$, such that \[ \mathcal{G}_{s,2,q}(B,M;\alpha,\beta,L) \subset \mathcal{G}_{s,1,q}(B',M;\alpha,\beta,L). \] \end{lemma} \begin{proof} Let $f\in \mathcal{G}_{s,2,q}(B,M;\alpha,\beta,L)$. All we have to prove is that \[\norm{f}_{B_{1q}^{s}}=\norm{\langle f, \phi_{\cdot}\rangle}_1+ \left(\sum_{l\geq 0} \big[2^{l(s-d/2)} \norm{\langle f, \psi_{l\cdot}\rangle}_1 \big]^q\right)^{1/q}\leq B',\] for some $B'$ as in the lemma. Let $\kappa>0$. 
Then, \[ \norm{\langle f, \phi_{\cdot}\rangle}_1 = \sum_{\norm{k}_\infty\leq \kappa} \left|\langle f,\phi_k\rangle\right|+\sum_{\norm{k}_\infty> \kappa} \left|\langle f,\phi_k\rangle\right|.\] For the second term, the same arguments as those used to obtain \eqref{eq: bound bias mother wavelet 1} give that it is bounded by $\mathcal{E}_{\alpha,\beta}(f) \exp\left(-\beta\left(\frac{\kappa}{2}\right)^{\alpha}\right)$, up to a constant depending on the wavelet basis. The first term is controlled via the Cauchy-Schwarz inequality \[\sum_{\norm{k}_\infty\leq \kappa} \left|\langle f,\phi_k\rangle\right| \lesssim (2\kappa+1)^d \left(\sum_{\norm{k}_\infty\leq \kappa} \left|\langle f,\phi_k\rangle\right|^2\right)^{1/2}\leq (2\kappa+1)^d \norm{\langle f, \phi_{\cdot}\rangle}_2,\] for a constant depending on $d$ only. Next consider, for $l\geq0$, $\norm{\langle f, \psi_{l\cdot}\rangle}_1$. As before, letting $\kappa_l=2^{l/2}$, we have \[ \norm{\langle f, \psi_{l\cdot}\rangle}_1 = \sum_{\norm{k}_\infty\leq \kappa_l} \left|\langle f,\psi_{lk}\rangle\right|+\sum_{\norm{k}_\infty> \kappa_l} \left|\langle f,\psi_{lk}\rangle\right|.\] Arguing as with \eqref{eq: bound bias wavelet 2}, the second term is bounded by $2^{ld/2} \mathcal{E}_{\alpha,\beta}(f) \exp\left(-\beta\left(\frac{\kappa_l}{2}\right)^{\alpha}\right)$, up to a constant depending on the wavelet basis. The first term is controlled as above. 
Then, using the $l^q$ triangle inequality, \begin{align*} \left(\sum_{l\geq 0} \big[2^{l(s-d/2)} \norm{\langle f, \psi_{l\cdot}\rangle}_1 \big]^q\right)^{1/q} &\lesssim \left(\sum_{l\geq 0} 2^{ql(s-d/2)}\Big[ 2^{ld/2} \mathcal{E}_{\alpha,\beta}(f) \exp\left(-\beta\left(\frac{\kappa_l}{2}\right)^{\alpha}\right) + (2\kappa_l+1)^d \norm{\langle f, \psi_{l\cdot}\rangle}_2 \Big]^q\right)^{1/q} \\ &\lesssim \left(\sum_{l\geq 0} \Big[2^{ls} \mathcal{E}_{\alpha,\beta}(f) \exp\left(-2^{-\alpha}\beta2^{l\alpha/2}\right) \Big]^q\right)^{1/q} + \left(\sum_{l\geq 0}\Big[ 2^{ls} \norm{\langle f, \psi_{l\cdot}\rangle}_2 \Big]^q\right)^{1/q}, \end{align*} for constants depending on the wavelet basis and $d$. The first term is upper bounded by \[\mathcal{E}_{\alpha,\beta}(f) \left(\sum_{l\geq 0} 2^{qls}\exp\left(-q2^{-\alpha}\beta2^{l\alpha/2}\right) \right)^{1/q}\lesssim \mathcal{E}_{\alpha,\beta}(f),\] as the series converges. In the end, following our assumptions on $\norm{f}_{B_{2q}^{s}}$, \begin{align*} \norm{f}_{B_{1q}^{s}} &\lesssim (2\kappa+1)^d \norm{\langle f, \phi_{\cdot}\rangle}_2 + \left(\sum_{l\geq 0}\Big[ 2^{ls} \norm{\langle f, \psi_{l\cdot}\rangle}_2 \Big]^q\right)^{1/q} + \mathcal{E}_{\alpha,\beta}(f) \exp\left(-\beta\left(\frac{\kappa}{2}\right)^{\alpha}\right) + \mathcal{E}_{\alpha,\beta}(f)\\ &\lesssim B + \mathcal{E}_{\alpha,\beta}(f) \leq B+L, \end{align*} where the constants depend on the wavelet basis, $d$, the arbitrary $\kappa>0$ we took, $s$, $\alpha$, $\beta$ and $q$. \end{proof} \section{Proof of Theorem \ref{Thm: existence and nonexistence of conf sets}}\label{Section: Existence Results} \subsection{A Hilbert Norm Upper Bound for $W_2$} We wish to construct confidence sets by performing risk estimation. The inner product structure of Hilbert space norms makes them particularly amenable to risk estimation, and so we seek some Hilbert norm which upper bounds the $W_2$ distance. 
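To make this motivation concrete: for any norm of the form $\norm{g}_{H}^2=\sum_{\lambda}c_{\lambda}\langle g,\psi_{\lambda}\rangle^2$ with fixed weights $c_{\lambda}>0$, the squared loss of an estimator $\tilde{f}_n$ is a quadratic functional of $f$,
\[
\|f-\tilde{f}_n\|_{H}^2 = \sum_{\lambda}c_{\lambda}\left(E_f\psi_{\lambda}(X) - \langle \tilde{f}_n,\psi_{\lambda}\rangle\right)^2,
\]
and each linear functional $E_f\psi_{\lambda}(X)=\langle f,\psi_{\lambda}\rangle$ admits an unbiased sample estimate; it is this structure which makes accurate risk estimation by $U$-statistics possible.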
For this, we introduce the \emph{logarithmic Sobolev norm} (\cite[Section 4.4]{gineMathematicalFoundationsInfiniteDimensional2015}; see \cite{castilloNonparametricBernsteinMises2013}, \cite{castilloBernsteinvonMisesPhenomenon2014} for another statistical application of such norms). \begin{defn} Define the $H^{-1,\delta}$ norm of $f\in L_2(\mathbb{T}^d)$ as $$ \|f\|_{H^{-1,\delta}} = |\langle f,1 \rangle| + \left(\sum_{l\geq0} 2^{-2l}\max(l,1)^{2\delta}\|\langle f, \psi_{l\cdot}\rangle\|_{2}^2\right)^{1/2}.$$ Note the similarity to the definition of the $B^{-1}_{22} = H^{-1}$ norm given by (\ref{Eq: Besov norm definition}); indeed, when $\delta=0$ the two norms coincide with the Sobolev norm of regularity -1. We refer to this as a `logarithmic' Sobolev space because the parameter $\delta$ measures the smoothness of $f$ on a logarithmic scale. \end{defn} We require the following comparison inequality from \cite{weedEstimationSmoothDensities2019}. \begin{prop}[Theorem 3, \cite{weedEstimationSmoothDensities2019}]\label{Prop: W-Besov comparison} Let $1\leq p<\infty$. Let $f,g$ be two densities in $L_p(\mathbb{T}^d)$, and assume that for almost every $x\in\mathbb{T}^d$, $M\geq\max(f(x), g(x))\geq m>0$, for real numbers $M$ and $m$. Then \begin{equation}\label{Eq: W-B estimate} M^{-1/p'}\|f-g\|_{B^{-1}_{p\infty}} \lesssim W_p(f,g) \lesssim m^{-1/p'}\|f-g\|_{B^{-1}_{p1}}, \end{equation} where $\frac{1}{p}+\frac{1}{p'}=1$, and the constants depend only on $d,p$ and the wavelet basis. Moreover, when $p=1$, one may choose $m=0$. \end{prop} This result is an extension of the celebrated Kantorovich-Rubinstein duality formula, which states that for two \emph{probability measures} $\mu,\nu$ on $\mathbb{T}^d$, \begin{equation}\label{Eq: K-R Duality} W_1(\mu,\nu) = \sup_{h\in\mathrm{Lip}_1(\mathbb{T}^d)} \int h\,\,\mathrm{d}(\mu-\nu), \end{equation} where the supremum is taken over all functions $h:\mathbb{T}^d\to\mathbb{R}$ with Lipschitz constant bounded by 1. 
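As a simple illustration of (\ref{Eq: K-R Duality}), consider two Dirac masses $\delta_x,\delta_y$ on $\mathbb{T}^d$: the $1$-Lipschitz function $h(\cdot)=d_{\mathbb{T}^d}(\cdot,y)$ attains the supremum, and
\[
W_1(\delta_x,\delta_y) = \sup_{h\in\mathrm{Lip}_1(\mathbb{T}^d)}\left(h(x)-h(y)\right) = d_{\mathbb{T}^d}(x,y),
\]
recovering the intuitive cost of transporting a unit mass from $x$ to $y$.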
We may relate this to (\ref{Eq: W-B estimate}) using the sequence of norm-continuous embeddings (\cite[Section 4.3]{gineMathematicalFoundationsInfiniteDimensional2015}) $$ B^{-1}_{11}\subset \left(B^1_{\infty\infty}\right)^* \subset BL(\mathbb{T}^d)^* \subset \left(B^1_{\infty 1}\right)^* \subset B^{-1}_{1\infty},$$ where $BL(\mathbb{T}^d)$ is the space of bounded Lipschitz functions on $\mathbb{T}^d$ (note that any Lipschitz function on $\mathbb{T}^d$ is bounded, so $BL(\mathbb{T}^d)$ coincides with the space of Lipschitz functions on $\mathbb{T}^d$). However, in order to generalise this to $W_p,p>1$, one must impose that the probability measures have densities which are bounded and bounded away from zero; indeed, for densities not bounded below, no norm provides a similar comparison to $W_p$ (\cite[Theorem 7]{weedEstimationSmoothDensities2019}), and convergence rates are slower than those in Proposition \ref{Prop: minimax estimation rates}. Thus the restriction from the usual choices of Besov norm-balls to the classes $\mathcal{F}(s),s\geq0$ is necessary. A simple application of the Cauchy-Schwarz inequality confirms that $H^{-1,\delta} \subset B^{-1}_{21}$ as soon as $\delta>1/2$. Thus in conjunction with the upper bound in Proposition \ref{Prop: W-Besov comparison}, we have that, for $r\geq0,$ $f\in\mathcal{F}(r)$ and $\tilde{f}_n$ any estimator of $f$, \begin{equation}\label{Eq: W_2 - log Sobolev comparison} W_2(f,\tilde{f}_n) \lesssim \|f-\tilde{f}_n\|_{B^{-1}_{21}} \lesssim \|f-\tilde{f}_n\|_{H^{-1,\delta}}, \end{equation} where the first constant depends on the parameters of the class $\mathcal{F}(r)$, but the second constant depends only on the wavelet basis and $d$. \begin{remark}\label{Remark: classes not bounded below for W_1} When using $W_1$ loss, one may consider the class $\mathcal{F}(s)$ with the choice $m=0$, i.e. densities are not required to be bounded away from zero. 
Then the $H^{-1,\delta}$ norm still provides an upper bound for $W_1$ for densities in $\mathcal{F}(s)$ due to the upper bound in (\ref{Eq: W-B estimate}) and the sequence of continuous embeddings $ H^{-1,\delta} \subset B^{-1}_{21} \subset B^{-1}_{11},$ where the second embedding follows from Jensen's inequality (with operator norm 1). \end{remark} For the remainder of this section, we work in $H^{-1,\delta}$ risk; as soon as $\delta>1/2$, this provides a Hilbert norm upper bound for the $W_2$ risk. In particular, any coverage guarantee for an $H^{-1,\delta}$ ball is automatically inherited by the $W_2$ ball with the same centre and radius scaled by the embedding constant from (\ref{Eq: W_2 - log Sobolev comparison}). Of course, by constructing confidence sets for a stronger loss function, we risk losing the ability to attain near-optimal diameter shrinkage; we shall see, however, that this is not the case. \subsection{Construction of Confidence Sets} We first give the minimax estimation rates for the problem under consideration. These are important for two reasons: firstly, they provide the benchmark for the `size' of an optimal confidence set. Secondly, our confidence sets are centred at a suitable estimator of $f$, which must perform well for the resulting confidence set to also have good performance. In the density estimation problem, the estimation rates for $W_2$ loss are as follows: \begin{prop}\label{Prop: minimax estimation rates} Let $s\geq0$ and let $r_n^*(s)$ denote the minimax rate of estimation over $\mathcal{F}(s)$. Then $$ r_n^*(s) \lesssim \begin{cases} n^{-1/2}, \quad &d=1,\\ n^{-1/2}\log{n}, \quad &d=2,\\ n^{-\frac{s+1}{2s+d}}, \quad &d\geq3, \end{cases}$$ where the constant depends on the parameters of the class $\mathcal{F}(s)$ and the wavelet basis. 
Moreover, for any $s\geq0$, $$ r_n^*(s) \gtrsim \begin{cases} n^{-1/2}, \quad &d=1,2\\ n^{-\frac{s+1}{2s+d}}, \quad &d\geq3, \end{cases}$$ where the minimax risk is taken over all estimators $\tilde{f}_n$ based on a sample of size $n$. \end{prop} The upper bounds follow from Theorem 1 in \cite{weedEstimationSmoothDensities2019} and Remark \ref{Remark: expectation vs O_P rates}. The lower bounds are proved as in Theorem 6.3.9 in \cite{gineMathematicalFoundationsInfiniteDimensional2015}, where one ensures the existence of a suitable $W_2$-separated set using the lower bound in Proposition \ref{Prop: W-Besov comparison}. See also Theorem 2 in \cite{weedEstimationSmoothDensities2019}. We centre our confidence sets at an estimator $\hat{f}_n$ of $f$ which has near-optimal convergence over the classes $\mathcal{F}(s)$ and $\mathcal{F}(r)$. The theory of adaptive estimation is relatively complete, and in the vast majority of cases it is possible to construct adaptive estimators which converge at the minimax estimation rate (perhaps up to a logarithmic factor) over a wide range of smoothnesses; we mention only the classical references \cite{lepskiiProblemAdaptiveEstimation1991} and \cite{donohoDensityEstimationWavelet1996}. The consideration of Wasserstein loss adds a minor complication to the usual case of `norm-type' loss functions. The Wasserstein distance $W_p(f,\tilde{f}_n)$ is only well-defined if $\tilde{f}_n$ is also a density, and thus we ought to insist that any estimator we define is indeed a density almost surely. To achieve this, given any wavelet-based estimator of the form $$ \tilde{f}_n = \tilde{f}_{-1} + \sum_{l\geq0}\sum_{k=0}^{2^{ld}-1}\tilde{f}_{lk}\psi_{lk} $$ where $\tilde{f}_{lk}$ are the wavelet coefficients of the estimator, we insist that $\tilde{f}_{-1} = 1$. This ensures that $\int_{\mathbb{T}^d}\tilde{f}_n = 1$. The problem of non-negativity is more subtle. 
In \cite{weedEstimationSmoothDensities2019}, it was addressed by projecting $\tilde{f}_n$ onto the class of densities $\mathcal{F}(r)$ with respect to the $B^{-1}_{p1}$ norm, where $r$ is the smallest regularity to which we want to adapt. Then, for $f\in\mathcal{F}(r)$, denoting the projection of $\tilde{f}_n$ by $\tilde{f}^D_n$, $$ W_p(f,\tilde{f}^D_n) \lesssim \|f-\tilde{f}^D_n\|_{B^{-1}_{p1}} \leq \|f-\tilde{f}_n\|_{B^{-1}_{p1}} + \inf_{g\in\mathcal{F}(r)}\|\tilde{f}_n-g\|_{B^{-1}_{p1}} \leq 2\|f-\tilde{f}_n\|_{B^{-1}_{p1}},$$ and so it suffices to analyse the performance of $\tilde{f}_n$ in $B^{-1}_{p1}$ loss. However, this projection step makes the estimator essentially intractable. Instead, we use the well-known $L_{\infty}$ consistency of the adaptive estimators considered below (c.f. \cite{donohoDensityEstimationWavelet1996}, for example) together with the fact that the densities in $\mathcal{F}(r)$ are uniformly bounded away from 0 to conclude that for sufficiently large $n$, with high probability $\tilde{f}_n$ is in fact a probability density. Whenever $\tilde{f}_n$ fails to be non-negative, we simply replace it with an arbitrary choice of density (e.g. uniform); as $n\to\infty$, this event occurs with vanishing probability. \begin{theorem}\label{Thm: Thresholded estimator} Let $d\geq2$. Then there exists an estimator $\hat{f}_n$ of $f$ such that for all $n\geq n_0(B)$ and all $s\geq0$, $$ \sup_{f\in\mathcal{F}(s)} E_f\|f-\hat{f}_n\|_{H^{-1,\delta}}^2 \lesssim (\log{n})^{2\delta}\left(\frac{n}{\log{n}}\right)^{-\frac{2(s+1)}{2s+d}},$$ where the constant depends on $B,d$ and the wavelet basis. \end{theorem} The definition of $\hat{f}_n$ and proof of Theorem \ref{Thm: Thresholded estimator} can be found in Appendix \ref{Section: additional torus proofs}, and follows from the classical ideas of \cite{donohoDensityEstimationWavelet1996}. Next, we introduce a $U$-statistic to perform risk estimation. 
Recall that given any estimator $\tilde{f}_n$ of $f$ such that $\langle \tilde{f}_n, 1\rangle = 1$, the $H^{-1,\delta}$ loss can be expressed as $$ \|f-\tilde{f}_n\|_{H^{-1,\delta}}^2 = \sum_{l\geq0}2^{-2l}(l\vee1)^{2\delta}\sum_{k=0}^{2^{ld} -1}\langle f-\tilde{f}_n,\psi_{lk}\rangle^2. $$ To estimate this loss, we use the approach of sample splitting. Suppose we have a sample of size $2n$ which we divide into two subsamples $$ \mathcal{S}^1=(X_1,\ldots,X_n),\quad \mathcal{S}^2 = (X_{n+1},\ldots,X_{2n}).$$ Denote expectation with respect to sample $i$ by $E^{(i)}$; we denote variances and probabilities accordingly. We compute our estimator $\tilde{f}_n=\tilde{f}_n(X_1,\ldots,X_n)$ based on $\mathcal{S}^1$ and, for $j\geq0$, define the $U$-statistic based on the sample $\mathcal{S}^2$ as \begin{equation}\label{Eq: U-statistic definition} U_{n,j}(\tilde{f}_n) = \frac{2}{n(n-1)}\sum_{i<i',i,i'\in\mathcal{S}^2}\sum_{l<j}2^{-2l}(l\vee1)^{2\delta}\sum_{k=0}^{2^{ld}-1}\left(\psi_{lk}(X_i) - \langle\psi_{lk},\tilde{f}_n\rangle \right)\left(\psi_{lk}(X_{i'}) - \langle\psi_{lk},\tilde{f}_n\rangle\right). \end{equation} Since the sample is i.i.d., we see that $$ E^{(2)}_fU_{n,j}(\tilde{f}_n) = \sum_{l<j}2^{-2l}(l\vee1)^{2\delta}\sum_{k=0}^{2^{ld}-1}\langle\psi_{lk},f-\tilde{f}_n\rangle^2 = \|K_j(f-\tilde{f}_n)\|_{H^{-1,\delta}}^2.$$ Thus $U_{n,j}(\tilde{f}_n)$ is an unbiased estimator of the $j^{th}$ resolution level approximation of the loss $\|f-\tilde{f}_n\|_{H^{-1,\delta}}$. The key idea behind the $U$-statistic is that the removal of the diagonal in the outermost sum in (\ref{Eq: U-statistic definition}) eliminates the highest variance terms. Thus by averaging over $O(n^2)$ terms with small variance, we expect the $U$-statistic to have very small variance (as in Theorem 6.4.6 of \cite{gineMathematicalFoundationsInfiniteDimensional2015}). This is confirmed by the next lemma. 
\begin{lemma}\label{Lemma: U-stat variance bound} Assume $f\in L^{\infty}(\mathbb{T}^d)$ is a probability density, and $\tilde{f}_n$ is an estimator for $f$ based on the subsample $\mathcal{S}^1$. Then \begin{align} \mathrm{Var}^{(2)}(U_{n,j}(\tilde{f}_n)) &\leq \frac{4\|f\|_{\infty}}{n}\left(\underset{l\geq -1}{\max}\ 4^{-l} (1\vee l)^{2\delta}\right)\|K_j(f-\tilde{f}_n)\|_{H^{-1,\delta}}^2 + \frac{2\|f\|_{\infty}^2}{n(n-1)}\sum_{l\leq j-1}2^{l(d-4)}(l\vee1)^{4\delta} \nonumber \\ &=: \kappa_{n,j,\delta}^2(f). \label{Eq: kappa definition} \end{align} \end{lemma} This result is analogous to Theorem 4.1 in \cite{robinsAdaptiveNonparametricConfidence2006}; for completeness, we give a proof in Appendix \ref{Section: additional torus proofs}. With the adaptive estimator $\hat{f}_n$ and the $U$-statistic $U_{n,j}(\hat{f}_n)$ in hand, we are now ready to give the construction of optimal confidence sets for the two-class adaptation problem. We first note that for $d=1,2$, the minimax rates of estimation from Proposition \ref{Prop: minimax estimation rates} do not depend on the smoothness parameter $s$; in particular, the two diameter shrinkage conditions (\ref{Eq: acs def, slow shrinkage}) and (\ref{Eq: acs def, fast shrinkage}) become a single condition. Thus in these dimensions, defining an adaptive confidence set is very easy; indeed, there is no meaningful adaptation which needs to take place. When $d=1$, the empirical measure is a minimax optimal estimator of the sampling measure (see, for instance, \cite{weedSharpAsymptoticFinitesample2019} or \cite{fournierRateConvergenceWasserstein2015}). When $d=2$, we centre at the adaptive estimator from Theorem \ref{Thm: Thresholded estimator} in place of the empirical measure $P_n$, as $P_n$ is no longer minimax optimal, and standard kernel or wavelet projection estimators require choices of tuning parameters depending on the smoothness parameter to attain optimal rates. 
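In the one-dimensional case, the tractability of the empirical measure stems from the classical quantile-coupling representation: for probability measures on $\mathbb{R}$ with distribution function $F$ and empirical distribution function $F_n$,
\[
W_2(P_f,P_n)^2 = \int_0^1\left(F^{-1}(u)-F_n^{-1}(u)\right)^2\,du,
\]
and on $\mathbb{T}^1$ the analogous formula holds after additionally minimising over the point at which the circle is cut open.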
\begin{prop}\label{Prop: confidence set, d=1,2} \begin{enumerate}[(i)] \item Let $d=1$. Consider the two-class adaptation problem over $\mathcal{F}(s)\cup\mathcal{F}(r)$ where $s>r\geq0,q\in[1,\infty],B\geq1,M\geq1\geq m>0$ are all fixed. Then given any $\alpha\in(0,1)$, the confidence set based on a sample $X_1,\ldots,X_n$ defined by $$ C_n = \left\{ g\in\mathcal{F}(r) : W_2(P_g,P_n) \leq D\alpha^{-1/2}n^{-1/2} \right\} $$ is an optimal adaptive $W_2$ confidence set, where $P_n = n^{-1}\sum_{i=1}^n \delta_{X_i}$ is the $n$-sample empirical measure and the constant $D$ depends on $B, m$ and the wavelet basis. \item Let $d=2$. Consider the two-class adaptation problem over $\mathcal{F}(s)\cup\mathcal{F}(r)$ where $s>r\geq0,q\in[1,\infty],B\geq1,M\geq1\geq m>0$ are all fixed. Then given any $\alpha\in(0,1)$, the confidence set based on a sample $X_1,\ldots,X_n$ defined by $$ C_n = \left\{ g\in\mathcal{F}(r) : W_2(g,\hat{f}_n) \leq D\alpha^{-1/2}n^{-1/2}(\log{n})^{2+\delta} \right\} $$ is a near-optimal adaptive $W_2$ confidence set, where $\hat{f}_n$ is the adaptive estimator from Theorem \ref{Thm: Thresholded estimator} and the constant $D$ depends on $B, m$ and the wavelet basis. \end{enumerate} \end{prop} The diameter shrinkage conditions are met trivially, while honest coverage follows from Chebyshev's inequality in a standard fashion. When $d\geq3$, the minimax rates depend on the smoothness parameter and so the diameter shrinkage condition differs between $\mathcal{F}(r)$ and $\mathcal{F}(s)$, $r\neq s$. In particular, this precludes any confidence set $C_n$ with deterministic radius, as used above. Instead, we centre at the adaptive estimator $\hat{f}_n$ from Theorem \ref{Thm: Thresholded estimator}, and use the estimate of its loss provided by the $U$-statistic $U_{j,n}(\hat{f}_n)$ as defined in (\ref{Eq: U-statistic definition}) to determine the radius. We write $U_{j} := U_{j,n}(\hat{f}_n)$ in the sequel. 
\begin{theorem}\label{Thm: d > 2 confidence sets} Let $d\geq3$. Fix $B\geq1,M\geq1\geq m>0,1\leq q\leq\infty$, and let $s>r\geq0$. If $d>4$, assume additionally that $s\leq \frac{2d-4}{d-4}r + \frac{d}{d-4}$. Fix $\alpha\in(0,1)$, and $\delta>1/2$. Consider the confidence set based on a sample of size $2n$, $\mathcal{S}^1\cup\mathcal{S}^2$ given by \begin{equation}\label{Eq: Wasserstein confidence set definition} C_n = \left\{ g\in\mathcal{F}(r): \|g-\hat{f}^T_n\|_{H^{-1,\delta}} \leq \sqrt{z_{\alpha}\kappa_{n,j_n,\delta}(g) + U_{j_n} + G(j_n)} \right\} \end{equation} where $\hat{f}^T_n$ is computed on $\mathcal{S}^1$, $U_{j_n}$ is computed on $\mathcal{S}^2$ and: \begin{itemize} \item $ \kappa_{n,j,\delta}^2(g) := \frac{4\|g\|_{\infty}}{n}\|K_j(g-\hat{f}^T_n)\|_{H^{-1,\delta}}^2 + \frac{2\|g\|_{\infty}^2}{n(n-1)}\sum_{l\leq j-1}2^{l(d-4)}(l\vee1)^{4\delta}$; \item $j_n$ is such that $2^{j_n}\simeq \left(\frac{n}{\log{n}}\right)^{\frac{1}{2r+d/2}}$; \item $G(j_n)$ = $j_n^{2\delta}2^{-2j_n(r+1)}\log{n}$; \item $z_{\alpha}$ = $(\alpha/2)^{-1/2}$. \end{itemize} Then for all $n\geq n_0(B)$, $C_n$ satisfies (\ref{Eq: acs def, coverage}), as well as (\ref{Eq: acs def, slow shrinkage}) and (\ref{Eq: acs def, fast shrinkage}) for a suitable constant $K>0$ depending on $r,s,\alpha,\alpha'$ and the parameters of the class $\mathcal{F}(r)$ with the rates $$ R_n(r) = (\log{n})^{\delta+\frac{r+1}{2r+d}}n^{-\frac{r+1}{2r+d}}, \qquad R_n(s) = (\log{n})^{\delta+\frac{s+1}{2s+d}}n^{-\frac{s+1}{2s+d}}. $$ In particular, $C_n$ is a near-optimal adaptive $W_2$ confidence set over $\mathcal{F}(s)\cup\mathcal{F}(r)$. \begin{remark}[Adaptation over ranges of classes] Note that the construction of $C_n$ is completely independent of $s$, and $\hat{f}_n$ adapts simultaneously over all $s\geq0$. So when $d\leq4$, $C_n$ adapts simultaneously over all $s\geq r$, and when $d>4$, $C_n$ adapts simultaneously over the full window of admissible values of $s$. 
\end{remark} \begin{remark}[Adaptation to other parameters]\label{Remark: adapting over other parameters} We note that the construction of the confidence set in Theorem \ref{Thm: d > 2 confidence sets} does not depend on $B$ or $m$, and so in fact this particular confidence set is also adaptive over $B\geq1$ and $m>0$, in the sense that any dependence of the minimax rates $r_n^*(r),r_n^*(s)$ on $B$ or $m$ is eventually accounted for by the logarithmic term in $R_n(r),R_n(s)$. (Note however that the constants in our theoretical guarantees explode as $B\to\infty$ or $m\to0$.) However, the construction of $C_n$ does depend on $M$. See \cite{bullAdaptiveConfidenceSets2013} for more discussion on the role of $M$. \end{remark} \begin{remark}[Adapting to wider ranges of smoothnesses in high dimensions] In the $d>4$ case, following the ideas in \cite{bullAdaptiveConfidenceSets2013}, one may still obtain adaptation over a window of the form $[0,R]$ for arbitrary $R>0$ at the cost of removing certain troublesome portions of the classes $\mathcal{F}(r),r\in[0,R]$. In this restricted model, one can identify the smoothness of the unknown density within a window of the form $\left[r,\frac{2d-4}{d-4}r+\frac{d}{d-4}\right]$ using tests as in \cite{bullAdaptiveConfidenceSets2013} or \cite{nicklSharpAdaptiveConfidence2016}. Once this window is identified, in particular the relevant value of $r$, one can use the associated confidence set as constructed in Theorem \ref{Thm: d > 2 confidence sets}. \end{remark} \begin{remark}[Necessity of log-factors] One may ask whether it is possible to remove the log-factors in the shrinkage rates and construct a confidence set with $R_n(r)=r_n^*(r), R_n(s)=r_n^*(s)$. These log factors fundamentally arise from the use of the embedding $H^{-1,\delta}\hookrightarrow B^{-1}_{21}$ for $\delta>1/2$. 
For confidence sets constructed via risk estimation we conjecture that this is a necessary step, as it is precisely the accelerated risk estimation for Hilbert space norms which enables the adaptivity of the confidence set. Working with the $B^{-1}_{21}$ norm directly, which is in some sense an $L_1$-type norm, it seems one will run into problems as outlined in \cite{lepskiEstimationLrNorm1999} and \cite{caiAccuracyAssessmentHighDimensional2016}, where it is shown that for the $L_1$ norm, risk estimation cannot be performed (polynomially) more accurately than the size of the risk itself in both a Gaussian regression model and a sparse high-dimensional linear model. While we do not yet have any precise negative results for the $B^{-1}_{21}$ norm, risk estimation is itself an important topic of study and thus this question should be addressed in the future. However, it is conceivable that another approach, such as the testing method of \cite{carpentierHonestAdaptiveConfidence2013}, could be used to construct $W_2$ confidence sets with sharp diameter shrinkage rates. \end{remark} \begin{remark}[Weak Sobolev norms $H^{-t},t>0$]\label{Remark: weak Sobolev norms} Our methods extend to the use of negative order Sobolev norms $H^{-t}=B^{-t}_{22}, t>0$ as loss functions in place of $H^{-1,\delta}$ (see Appendix \ref{Section: Wavelet Appendix} for definitions). The analysis of the estimator $\hat{f}_n$ is completely analogous, and one must suitably augment the $U$-statistic $U_{n,j}$ to estimate the $H^{-t}$ loss. One finds that the resulting confidence set $\tilde{C}_n$ adapts to any two smoothnesses $0\leq r<s<\infty$ when $t\geq d/4$; if instead $t<d/4$, adaptation is possible over a window of smoothnesses $0\leq r<s\leq \frac{d}{d-4t}t + \frac{2d-4t}{d-4t}r$. Moreover, in this latter case, the arguments of Section \ref{Subsection: nonexistence} below can be augmented to show that if $s$ does not lie in this window, then no such confidence set can exist. 
\end{remark} \end{theorem} The proof of this theorem proceeds similarly to that of Proposition 2.1 in \cite{robinsAdaptiveNonparametricConfidence2006}, and is given in Appendix \ref{Section: additional torus proofs}. The confidence sets constructed above prove statements (i) and (ii) of Theorem \ref{Thm: existence and nonexistence of conf sets}. \subsection{Testing rates and non-existence of Confidence Sets}\label{Subsection: nonexistence} We turn now to proving the impossibility result (iii) in Theorem \ref{Thm: existence and nonexistence of conf sets}. The question of existence of adaptive confidence sets is closely related to a composite hypothesis testing problem. This connection was identified in the first works on adaptive confidence sets; for a complete decision-theoretic overview, see \cite[Chapter 8]{gineMathematicalFoundationsInfiniteDimensional2015}. For $\rho\geq0$ and $s>r\geq0$, define the separated function class $$ \tilde{\mathcal{F}}(r, \rho) := \left\{ f\in\mathcal{F}(r): W_2(f,\mathcal{F}(s)) \geq \rho \right\}$$ We may have $\rho=0$, in which case $\tilde{\mathcal{F}}(r,0) = \mathcal{F}(r)$. However, if $\rho>0$ then $\tilde{\mathcal{F}}(r,\rho)$ is a strict subset of $\mathcal{F}(r)$, disjoint from $\mathcal{F}(s)$. The testing problem we consider is \begin{equation}\label{Eq: testing problem} H_0: f\in\mathcal{F}(s) \quad \mathrm{vs.} \quad H_1: f\in\tilde{\mathcal{F}}(r,\rho). 
\end{equation} As the usefulness of a test is naturally assessed by the sum of its Type I and Type II errors, the minimax rate of testing for the problem \eqref{Eq: testing problem} is defined as any sequence $\left(\rho_n^*\right)_{n\geq 1}$ such that \begin{itemize} \item For any $\beta'>0$, there exists a constant $L=L\left(\beta'\right)$ and a measurable test $\Psi_n: \left(\mathbb{T}^d\right)^n\to \{0,1\}$ such that \begin{equation}\label{Eq: uniform consistency} \underset{f\in \mathcal{F}(s)}{\sup} \mathbb{E}_f \left[\Psi_n\right] + \underset{f\in \tilde{\mathcal{F}}(r,L\rho^*_n)}{\sup} \mathbb{E}_f \left[1-\Psi_n\right] \leq \beta'. \end{equation} \item There exists some $\beta>0$ such that for all $\rho_n=o\left(\rho_n^*\right),$ \begin{equation}\label{lower condition tests} \underset{n\to\infty}{\lim\inf} \ \underset{\Psi_n}{\inf}\Big[\underset{f\in\mathcal{F}(s)}{\sup} \mathbb{E}_f \left[\Psi_n\right] + \underset{f\in \tilde{\mathcal{F}}(r,\rho_n)}{\sup} \mathbb{E}_f \left[1-\Psi_n\right] \Big] \geq \beta, \end{equation} where the infimum ranges over the set of tests $\Psi_n$. \end{itemize} The following result characterises the role of the minimax testing rate $\rho_n^*$ in the existence and non-existence of confidence sets. Essentially, it says $\rho_n^*$ provides a `speed limit' on how quickly the confidence set can shrink when $f$ is in the smoother submodel $\mathcal{F}(s)$: \begin{lemma}[Proposition 8.3.6, \cite{gineMathematicalFoundationsInfiniteDimensional2015}] \label{Lemma: confidence set impossibility} Let $\rho_n^*$ be the minimax testing rate for (\ref{Eq: testing problem}), and $\Tilde{r}_n(s), \Tilde{r}_n(r)$ be two sequences such that $\Tilde{r}_n(s)=o\left(\rho_n^*\right)$ and $\Tilde{r}_n(s)=o\left(\Tilde{r}_n(r)\right)$. Let $\alpha, \alpha'>0$. 
Then, for any $\rho_n=o\left(\rho_n^*\right)$ and $L>0$, there does not exist any set $C_n\left(\alpha, X_1,\dots,X_n\right)$ satisfying \begin{itemize} \item $\liminf_{n\to\infty} \inf_{f\in\mathcal{F}(s)\cup\tilde{\mathcal{F}}(r,\rho_n)} P_f(f\in C_n) \geq 1-\alpha,$ \item $ \limsup_{n\to\infty} \sup_{f\in\tilde{\mathcal{F}}(r,\rho_n)}P_f\big(|C_n|_{W_2}>L\Tilde{r}_n(r)\big) \leq \alpha',$ \item $ \limsup_{n\to\infty} \sup_{f\in\mathcal{F}(s)}P_f\big(|C_n|_{W_2}>L\Tilde{r}_n(s)\big) \leq \alpha',$ \end{itemize} as long as $\alpha,\alpha'$ are such that $0<2\alpha+\alpha'<\beta$, with $\beta$ as in \eqref{lower condition tests}. \end{lemma} This non-existence phenomenon occurs because any $C_n$ satisfying the conditions of the Lemma induces, for a suitable auxiliary separation rate $\rho'_n$, a test $$ \Psi_n = \mathrm{1}\{ C_n \cap \tilde{\mathcal{F}}(r,\rho'_n)\neq \varnothing \} $$ which is uniformly consistent for the separation rate $\rho'_n$ in the sense of (\ref{Eq: uniform consistency}) whenever $\rho_n = o(\rho'_n)$. If we were able to choose $\rho'_n$ to be $o(\rho^*_n)$, this would contradict the definition of the minimax testing rate $\rho^*_n$; thus no such confidence set can exist. Note that the argument works for any rate $\tilde{r}_n(s) = o(\rho^*_n)$, not just the minimax rate of estimation; in particular, we can multiply the minimax estimation rate by a poly-logarithmic factor so long as there is a polynomial gap between the testing and estimation rates. It remains to determine the minimax rate of testing for the problem (\ref{Eq: testing problem}); this is done in the following theorem. \begin{theorem}\label{Thm: testing rate} Assume $s>r\geq0$ and $d> 4$. Let $\rho_n^*$ be the minimax rate of testing for the problem (\ref{Eq: testing problem}). 
Then there exists a constant $c>0$ depending on the parameters of the class $\mathcal{F}(s)$ and the wavelet basis, and $n_0 = n_0(B,M)$ such that for all $n\geq n_0$, $$\rho_n^* \geq c n^{-\frac{r+1}{2r+d/2}}.$$ Also, \eqref{lower condition tests} holds for any $\beta<1$. \end{theorem} The proof of Theorem \ref{Thm: testing rate} is given in Appendix \ref{Section: additional torus proofs}, and follows from a multiple-testing lower bound. Assume now that $d>4$ and $s>\frac{2d-4}{d-4}r + \frac{d}{d-4}$. Then the minimax rate of testing $\rho^*_n$ is slower than the minimax estimation rate $r_n^*(s)$ by a polynomial factor; in light of Lemma \ref{Lemma: confidence set impossibility}, this means there is no near-optimal adaptive $W_2$ confidence set over $\mathcal{F}(s)\cup\mathcal{F}(r)$ for any practical choice of $\alpha,\alpha'$ (for such a set to exist, we would require $2\alpha+\alpha'\geq1$). This proves statement (iii) of Theorem \ref{Thm: existence and nonexistence of conf sets}. However, this does not rule out the existence of confidence sets satisfying weaker conditions than those in Definition \ref{Def: optimal adaptive confidence set}, namely those listed in Lemma \ref{Lemma: confidence set impossibility} for some $\rho_n\geq L \rho_n^*$, $L>0$. Such sets do exist, in view of Proposition 8.3.7 of \cite{gineMathematicalFoundationsInfiniteDimensional2015} and Theorem \ref{Thm: Thresholded estimator}. Moreover, the confidence set $C_n$ constructed in Theorem \ref{Thm: d > 2 confidence sets}, in conjunction with the argument used to prove Lemma \ref{Lemma: confidence set impossibility}, shows that the lower bound of Theorem \ref{Thm: testing rate} is sharp up to a poly-logarithmic factor. \section{Proofs for Section \ref{Section: Existence Results}}\label{Section: additional torus proofs} We first give the definition of our adaptive estimator. 
The estimator is based on the empirical wavelet coefficients, defined as $$ \hat{f}_{lk} := \frac{1}{n}\sum_{i=1}^n \psi_{lk}(X_i).$$ We also write $f_{l\cdot}$ and $\hat{f}_{l\cdot}$ for the vectors of coefficients $(f_{lk}:0\leq k<2^{ld})$ and $(\hat{f}_{lk}:0\leq k<2^{ld})$ respectively. Next, define the truncation point $l_{\max}$ such that $$ 2^{l_{\max}} \simeq \left(\frac{n}{\log{n}}\right)^{1/d}, $$ and for $0\leq l\leq l_{\max}$, define the thresholds $$ \tau_{l} := \tau 2^{\frac{ld}{2}}\left(\frac{\log{n}}{n}\right)^{1/2}, $$ for some $\tau>0$ to be chosen below, depending only on $B,d,M$ and the wavelet basis. We then define \begin{equation}\label{Eq: thresholded estimator definition} \hat{f}_n := 1 + \sum_{l=0}^{l_{\max}} \mathrm{1}_{\left\{\|\hat{f}_{l\cdot}\|_2^2 > \tau_{l}^2\right\}} \sum_{k=0}^{2^{ld}-1}\hat{f}_{lk}\psi_{lk}. \end{equation} To prove Theorem \ref{Thm: Thresholded estimator}, we must first collect some results on the expectation and concentration of the empirical wavelet coefficients $\hat{f}_{lk}$. \begin{lemma}\label{Lemma: H-J inequality} Let $f\in\mathcal{F}(s)$ and let $\hat{f}_{lk}$ be the empirical wavelet coefficients of $f$ based on a sample of $n$ observations. Then for every $t\geq2$ there exists a constant $C_t$ depending only on $t$ such that for all $l\geq0$ satisfying $2^{ld}\leq n$, $$ E\left| \hat{f}_{lk} - f_{lk}\right|^t \leq C_t M\|\psi\|_{\infty}^{t-2}n^{-t/2}. $$ \end{lemma} For $t=2$, the proof is immediate from the i.i.d. assumption on the data, the orthonormality of the wavelets and the bound $\|f\|_{\infty}\leq M$. For $t>2$, the result follows from the $t=2$ case and Hoffmann-J\o rgensen's inequality (\cite{hoffmann-jorgensenSumsIndependentBanach1974}, \cite[Theorem 3.1.22]{gineMathematicalFoundationsInfiniteDimensional2015}). We also require a concentration result for the $\hat{f}_{lk}$; for this we use Bernstein's inequality (\cite[Theorem 3.1.7]{gineMathematicalFoundationsInfiniteDimensional2015}). 
\begin{prop}[Bernstein's Inequality]\label{Prop: Bernstein's inequality} Let $Y_1,\ldots,Y_n$ be independent centred random variables which are almost surely bounded by $c>0$ in absolute value. Let $\sigma^2 = n^{-1}\sum_{i=1}^nEY_i^2$ and $S_n = \sum_{i=1}^n Y_i$. Then for all $u\geq0$, $$ P(|S_n|>u) \leq 2\exp{\left(-\frac{u^2}{2n\sigma^2 + \frac{2cu}{3}}\right)}. $$ \end{prop} For fixed $l,k$ and $f\in\mathcal{F}(s)$, the random variables $(\psi_{lk}(X_i) - f_{lk})$ are i.i.d., centred, bounded by $2^{ld/2}\|\psi\|_{\infty} =: c_l$, and have variance bounded by $M$. Thus from Bernstein's inequality, we deduce that \begin{equation}\label{Eq: concentration for wavelet coeffs} P_f\left(|\hat{f}_{lk}-f_{lk}|>u\right) \leq 2\exp\left(-\frac{nu^2}{2M + \frac{2c_lu}{3}}\right). \end{equation} We also need a result on wavelet approximations in the $H^{-1,\delta}$ norm to control bias terms. The following lemma about the error of $j$-level approximations to Besov functions is standard; see Propositions 4.3.8 and 4.3.14 in \cite{gineMathematicalFoundationsInfiniteDimensional2015}, for instance. \begin{lemma}\label{Lemma: Besov projection accuracy} Let $0\leq s<S$ and $1\leq q\leq \infty$, $\delta\in\mathbb{R}$. Then for $f\in B^{s}_{2q}$, we have that \begin{equation}\label{Eq: Besov projection accuracy} \|K_j(f) - f\|_{H^{-1,\delta}} \leq C \underset{l\geq j}{\sup}\left(2^{-l(s+1)}l^{\delta}\right) \|f\|_{B^{s}_{2q}}, \end{equation} where the constant $C$ depends only on the wavelet basis. In particular, for $j\geq1\vee \frac{\delta}{s+1}$, we have that $$ \|K_j(f) - f\|_{H^{-1,\delta}} \leq C 2^{-j(s+1)}j^{\delta}\|f\|_{B^{s}_{2q}}. $$ \end{lemma} \begin{proof}[Proof of Theorem \ref{Thm: Thresholded estimator}] Fix $f\in\mathcal{F}(s)$. Define $l_n(s)$ such that $$ 2^{l_n(s)} \simeq B^{\frac{1}{s}}\left(\frac{n}{\log{n}}\right)^{\frac{1}{2s+d}};$$ for all sufficiently large $n$ depending on $B$, we have that $l_n(s)<l_{\max}$. 
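(As an illustrative aside, separate from the proof: the concentration bound \eqref{Eq: concentration for wavelet coeffs} rests on Bernstein's inequality, which is easy to sanity-check by simulation. The following sketch is ours; it uses Uniform$[-1,1]$ summands and arbitrarily chosen parameter values.)

```python
import numpy as np

# Monte Carlo check of Bernstein's inequality for bounded centred variables.
# Y_i ~ Uniform[-1, 1]: |Y_i| <= c = 1 and sigma^2 = Var(Y_i) = 1/3.
rng = np.random.default_rng(1)
n, reps, u = 200, 20_000, 25.0
c, sigma2 = 1.0, 1.0 / 3.0

S = rng.uniform(-1.0, 1.0, size=(reps, n)).sum(axis=1)
empirical_tail = np.mean(np.abs(S) > u)
bernstein_bound = 2.0 * np.exp(-u**2 / (2 * n * sigma2 + 2 * c * u / 3))

assert empirical_tail <= bernstein_bound  # the bound holds, with room to spare
```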
We then decompose the risk as follows: \begin{align} \|f-\hat{f}_n\|_{H^{-1,\delta}}^2 =& \sum_{l=0}^{l_n(s)}2^{-2l}(l\vee1)^{2\delta}\|\langle f-\hat{f}_n,\psi_{l\cdot}\rangle\|_2^2 + \sum_{l=l_n(s)+1}^{l_{\max}}2^{-2l}l^{2\delta}\|\langle f-\hat{f}_n,\psi_{l\cdot}\rangle\|_2^2 \nonumber \\ &+ \sum_{l>l_{\max}}2^{-2l}l^{2\delta}\|\langle f,\psi_{l\cdot}\rangle\|_2^2 \nonumber \\ =:&\,\, I + II + III. \label{Eq: thresholded estimator proof, decomposition} \end{align} This is a bias-stochastic decomposition, where we have further divided the stochastic term into terms $I$ and $II$. We first deal with the bias term $III$: a direct application of Lemma \ref{Lemma: Besov projection accuracy} gives \begin{align*} III &= \|K_{l_{\max}}(f) - f\|_{H^{-1,\delta}}^2 \\ &\lesssim l_{\max}^{2\delta}2^{-2l_{\max}(s+1)} \\ &= o\left((\log{n})^{2\delta}\left(\frac{n}{\log{n}}\right)^{-\frac{2(s+1)}{2s+d}}\right) \end{align*} for a constant depending on $B$ and the wavelet basis. Next, we deal with term $I$. For any $l\geq0$, by the triangle inequality we have that $$ \|\langle f-\hat{f}_n,\psi_{l\cdot}\rangle \|_2 \leq \|f_{l\cdot} - \hat{f}_{l\cdot}\|_2 + \|\hat{f}_{l\cdot}\|_2\mathrm{1}_{\left\{ \|\hat{f}_{l\cdot}\|_2\leq \tau_l \right\}} \leq \|f_{l\cdot} - \hat{f}_{l\cdot}\|_{2} + \tau2^{ld/2}\sqrt{\frac{\log{n}}{n}}.$$ Using Lemma \ref{Lemma: H-J inequality} to control the expectation of the square of the first term, we see that \begin{align*} E_f(I) &\lesssim \sum_{l=0}^{l_n(s)} 2^{-2l}(l\vee1)^{2\delta}\left[2^{ld}n^{-1} + \tau^22^{ld}\frac{\log{n}}{n}\right] \\ &\lesssim \tau^2\frac{\log{n}}{n}(l_n(s))^{2\delta}\sum_{l=0}^{l_n(s)}2^{l(d-2)}, \end{align*} for $n$ large enough. Note that $l_n(s)\lesssim \log{n}$. Thus when $d=2$, the sum contributes at most some power of $\log{n}$, and so $E_f(I)$ is clearly sufficiently small. 
For $d>2$, the final term dominates the sum and so using the definition of $l_n(s)$, $$ E_f(I) \lesssim \tau^2(\log{n})^{2\delta}\left(\frac{n}{\log{n}}\right)^{-\frac{2(s+1)}{2s+d}} $$ as required. Lastly, we must analyse term $II$. Since we consider resolution levels $l>l_n(s)$, we have that $$ \|f_{l\cdot}\|_2 \leq B2^{-ls} < B2^{-l_n(s)s} \simeq \left(\frac{n}{\log{n}}\right)^{-\frac{s}{2s+d}},$$ for a constant depending only on $B$. Moreover, $$ \tau_l = \tau2^{ld/2}\left(\frac{n}{\log{n}}\right)^{-1/2} > \tau2^{l_n(s)d/2}\left(\frac{n}{\log{n}}\right)^{-1/2} \geq \tau \left(\frac{n}{\log{n}}\right)^{-\frac{s}{2s+d}}, $$ and so for $\tau$ chosen sufficiently large depending only on $B$, we have that $\|f_{l\cdot}\|_2 \leq \tau_l/2$. Define events $$ A_{l,n} := \left\{ \|\hat{f}_{l\cdot}\|_2 \leq \tau_l \right\}, \quad l_n(s)<l\leq l_{\max}. $$ Then by the above observations, the triangle inequality, a union bound and the bound (\ref{Eq: concentration for wavelet coeffs}), we have that \begin{align} P_f(A_{l,n}^c) &\leq P_f(\|\hat{f}_{l\cdot}-f_{l\cdot}\|_2>\tau_l/2) \nonumber \\ &\leq \sum_{k=0}^{2^{ld}-1}P_f\left(|\hat{f}_{lk} - f_{lk}|>\frac{\tau}{2}\sqrt{\frac{\log{n}}{n}}\right) \nonumber \\ &\leq 2^{ld}\cdot 2\exp\left(-\frac{\tau^2n\log{n}/4}{2Mn + \tau c_l\sqrt{n\log n}/3}\right) \nonumber \\ &\lesssim \frac{n}{\log{n}}\exp\left(-C\tau\log{n}\right), \label{Eq: threshold low prob event} \end{align} for $\tau$ large enough depending on $M$ and the wavelet basis, as $l\leq l_{\max}$ and so $2^{l}\leq (n/\log{n})^{1/d}$. Here, $C$ is an absolute constant. Note that on the event $A_{l,n}^c$, $\langle \hat{f}_n, \psi_{lk}\rangle = \hat{f}_{lk}$, whereas on $A_{l,n}$, $\langle \hat{f}_n,\psi_{lk}\rangle = 0$. 
Thus for $l_n(s)<l\leq l_{\max}$, \begin{align} E_f\|\langle \hat{f}_n - f,\psi_{l\cdot}\rangle\|_2^2\mathrm{1}_{A_{l,n}} &\leq \|\langle f,\psi_{l\cdot}\rangle\|_2^2 \lesssim 2^{-2ls} \label{Eq: term II first part} \end{align} for some constant depending on $B$, using (\ref{Eq: Besov wavelet coefficient bound}). Next, using Cauchy-Schwarz in conjunction with (\ref{Eq: threshold low prob event}) and Lemma \ref{Lemma: H-J inequality}, \begin{align} E_f\|\langle \hat{f}_n - f,\psi_{l\cdot}\rangle \|_2^2\mathrm{1}_{A_{l,n}^c} &= \sum_{k=0}^{2^{ld}-1}E_f|\hat{f}_{lk} - f_{lk}|^2\mathrm{1}_{A_{l,n}^c} \nonumber \\ &\leq \sum_{k=0}^{2^{ld}-1} \left(E_f|\hat{f}_{lk}-f_{lk}|^4\right)^{1/2}\left(P_f(A_{l,n}^c)\right)^{1/2} \nonumber \\ &\lesssim 2^{ld}(n\log{n})^{-1/2}n^{-C\tau/2}. \label{Eq: term II second part} \end{align} Combining the estimates (\ref{Eq: term II first part}) and (\ref{Eq: term II second part}), we may bound $II$ as follows: \begin{align*} E_f(II) &\lesssim \sum_{l=l_n(s)+1}^{l_{\max}} 2^{-2l}l^{2\delta}\left[2^{-2ls} + 2^{ld}(n\log{n})^{-1/2}n^{-C\tau/2}\right] \\ &\lesssim (\log{n})^{2\delta}\left[2^{-2(s+1)l_n(s)} + (n\log{n})^{-1/2}n^{-C\tau/2}\sum_{l\leq l_{\max}}2^{l(d-2)} \right]. \end{align*} By the definition of $l_n(s)$, the first term is of the correct order. It remains to consider the second term. When $d=2$ the sum contributes a logarithmic factor and so the second term is clearly sufficiently small. When $d>2$, the sum is dominated by its final term and so the second term inside the brackets is of order $$ (n\log{n})^{-1/2}n^{-C\tau/2}2^{l_{\max}(d-2)} \simeq (\log{n})^{-1/2}n^{\frac{1}{2} - \frac{2}{d} - \frac{C\tau}{2}};$$ by choosing $\tau$ sufficiently large, we can make this term sufficiently small for all $s\geq0$. This concludes the proof. \end{proof} We will also later require the following lemma, which gives control of the $B^s_{2q}$ norm of the estimator $\hat{f}_n$. 
\begin{lemma}\label{Lemma: norm control of adaptive estimator} Under the hypotheses of Theorem \ref{Thm: Thresholded estimator}, given $\alpha\in(0,1)$ there exists $n_0 = n_0(\alpha)$ such that for all $n\geq n_0$ and any $f\in\mathcal{F}(s)$, with $P_f$-probability at least $1-\alpha$, $$ \|\hat{f}_n\|_{B^s_{2q}} \lesssim B + \tau B^{d/2s}, $$ where the constant depends on $d,q$ only. \begin{proof} Let $l_n(s),A_{l,n}$ be as in the previous proof. Further define events $B_{l,n} = \{\|\hat{f}_{l\cdot}-f_{l\cdot}\|_2 \leq \tau_l\}$, and $$A_n = \left(\bigcap_{0\leq l\leq l_n(s)}B_{l,n} \right)\bigcap \left(\bigcap_{l_n(s)<l\leq l_{\max}}A_{l,n}\right).$$ We have from (\ref{Eq: threshold low prob event}), which holds with $B_{l,n}$ in place of $A_{l,n}$ when $l\leq l_n(s)$, and a union bound that $$ P_f(A_n^c) \lesssim l_{\max} \frac{n}{\log{n}}\exp\left(-C\tau\log{n}\right) \lesssim n \exp\left(-C\tau\log{n}\right) $$ and so by choosing $\tau>0$ sufficiently large (independently of $\alpha$), we can make this smaller than $\alpha$ for all sufficiently large $n$. Then on the event $A_n$, using $(a+b)^q \leq 2^{q-1}(a^q + b^q)$, \begin{align*} \|\hat{f}_n\|_{B^s_{2q}}^q &= 1 + \sum_{l=0}^{l_{\max}}2^{lqs}\mathrm{1}_{\left\{\|\hat{f}_{l\cdot}\|_2 > \tau_{l}\right\}}\|\hat{f}_{l\cdot}\|_2^q \\ &\lesssim 1 + \sum_{l=0}^{l_n(s)}2^{lqs}\|f_{l\cdot}\|_2^q + \sum_{l=0}^{l_n(s)}2^{lqs}\|\hat{f}_{l\cdot} - f_{l\cdot}\|_2^q \\ &\leq \|f\|_{B^s_{2q}}^q + \sum_{l=0}^{l_n(s)}2^{lqs}\tau_l^q \\ &= B^q + \tau^q\left(\frac{\log{n}}{n}\right)^{q/2}\sum_{l=0}^{l_n(s)}2^{lq\left(\frac{d}{2}+s\right)} \\ &\lesssim B^q + \tau^qB^{dq/2s}, \end{align*} by choice of $l_n(s)$, since the sum is dominated by its largest term. 
\end{proof} \end{lemma} \begin{proof}[Proof of Lemma \ref{Lemma: U-stat variance bound}] The kernel of the $U$-statistic is $$ R(x,y) = \sum_{l\leq j-1}2^{-2l}(l\vee1)^{2\delta}\sum_{k=0}^{2^{ld}-1}\left[(\psi_{lk}(x) - \langle \psi_{lk},\tilde{f}_n \rangle)(\psi_{lk}(y) - \langle \psi_{lk}, \tilde{f}_n \rangle)\right]$$ which is symmetric, and so has Hoeffding decomposition (see Section 11.4 of \cite{vaartAsymptoticStatistics2000}) \begin{equation}\label{Eq: decomposition parts} \begin{aligned} U_n(\tilde{f}_n) - E^{(2)}_f U_n(\tilde{f}_n) &= \frac{2}{n}\sum_{i\in\mathcal{S}^2}(\pi_1R)(X_i) + \frac{2}{n(n-1)}\sum_{i<i',i,i'\in\mathcal{S}^2}(\pi_2R)(X_i,X_{i'}) \\ &=: L_n + D_n, \end{aligned} \end{equation} with linear kernel $$ (\pi_1R)(x) = \sum_{l\leq j-1}2^{-2l}(l\vee1)^{2\delta}\sum_{k=0}^{2^{ld}-1}\left[(\psi_{lk}(x) - \langle \psi_{lk},f\rangle)\langle \psi_{lk},f-\tilde{f}_n\rangle\right] $$ and degenerate kernel $$ (\pi_2R)(x,y) = \sum_{l\leq j-1}2^{-2l}(l\vee1)^{2\delta}\sum_{k=0}^{2^{ld}-1}\left[ (\psi_{lk}(x) - \langle \psi_{lk},f\rangle)(\psi_{lk}(y) - \langle \psi_{lk},f\rangle) \right]. $$ One checks that $L_n$ and $D_n$ are uncorrelated. It thus remains to bound their variances separately. For $\mathrm{Var}^{(2)}(L_n)$, we use the uncentred version of the kernel $\pi_1R$ and orthonormality of the wavelet basis \begin{align*} \mathrm{Var}^{(2)}(L_n) &\leq \frac{4}{n}\int\left(\sum_{l\leq j-1}2^{-2l}(l\vee1)^{2\delta}\sum_{k=0}^{2^{ld}-1}\psi_{lk}(x)\langle\psi_{lk},f-\tilde{f}_n\rangle\right)^2 f(x)\,\,\mathrm{d} x \\ &\leq \frac{4\|f\|_{\infty}}{n}\left(\underset{l\geq -1}{\max}\ 4^{-l} (1\vee l)^{2\delta}\right)\sum_{l\leq j-1}2^{-2l}(l\vee1)^{2\delta}\sum_{k=0}^{2^{ld}-1}\langle \psi_{lk}, f-\tilde{f}_n\rangle^2 \\ &= \frac{4\|f\|_{\infty}}{n} \left(\underset{l\geq -1}{\max}\ 4^{-l} (1\vee l)^{2\delta}\right) \|K_j(f-\tilde{f}_n)\|_{H^{-1,\delta}}^2. \end{align*} We next bound $\mathrm{Var}^{(2)}(D_n)$. 
By the degeneracy of the kernel, the summands are uncorrelated. So \begin{align*} \mathrm{Var}^{(2)}(D_n) &\leq E^{(2)}\left( \frac{2}{n(n-1)}\sum_{i<i',i,i'\in\mathcal{S}^2}(\pi_2R)(X_i,X_{i'}) \right)^2 \\ &\leq \frac{2}{n(n-1)}E^{(2)}_f\left(\sum_{l\leq j-1}2^{-2l}(l\vee1)^{2\delta}\sum_{k=0}^{2^{ld}-1}[\psi_{lk}(X_i)\psi_{lk}(X_{i'})]\right)^2 \\ &\leq \frac{2\|f\|_{\infty}^2}{n(n-1)} \sum_{l\leq j-1} 2^{-4l}(l\vee1)^{4\delta}\sum_{k=0}^{2^{ld}-1}\left( \int \psi_{lk}(x)^2\,\,\mathrm{d} x\right)^2 \\ &= \frac{2\|f\|_{\infty}^2}{n(n-1)}\sum_{l\leq j-1} 2^{l(d-4)}(l\vee1)^{4\delta}, \end{align*} using the orthonormality of the wavelet basis. Combining these two estimates concludes the proof. \end{proof} \begin{proof}[Proof of Theorem \ref{Thm: d > 2 confidence sets}] We first establish the coverage condition (\ref{Eq: acs def, coverage}). By Lemma \ref{Lemma: norm control of adaptive estimator}, for all $n$ sufficiently large we have with $P_f$-probability at least $1-\alpha/2$ that $\hat{f}_n$ is in a $B^s_{2q}$-norm ball of constant radius. Thus for any $f\in\mathcal{F}(r)$, with $P_f$-probability at least $1-\alpha/2$, for $n\geq n_0(B,\alpha)$ we have from (\ref{Eq: Besov projection accuracy}) that $$\| K_{j_n}(f-\hat{f}_n) - (f-\hat{f}_n)\|_{H^{-1,\delta}}^2 \leq G(j_n). $$ By conditioning on this event, we have that \begin{align*} P_f(f\in C_n) &= P_f \left(U_{n,j}(\hat{f}_n) - \|f-\hat{f}_n\|_{H^{-1,\delta}}^2 \geq -G(j) - z_{\alpha}\kappa_{n,j,\delta}(f)\right) \\ &\geq \left(1-\frac{\alpha}{2}\right)P^{(2)}_f\left(U_{n,j}(\hat{f}_n) - \|K_j(f-\hat{f}_n)\|_{H^{-1,\delta}}^2 \geq -z_{\alpha}\kappa_{n,j,\delta}(f)\right) \\ &\geq \left(1-\frac{\alpha}{2}\right)\left(1 - \frac{\mathrm{Var}^{(2)}_f(U_{n,j}(\hat{f}_n))}{(z_{\alpha}\kappa_{n,j,\delta}(f))^2}\right) \\ &\geq \left(1-\frac{\alpha}{2}\right)^2 \\ &\geq 1-\alpha \end{align*} by Chebyshev's inequality and Lemma \ref{Lemma: U-stat variance bound}. 
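(As an illustrative aside, separate from the proof: the Chebyshev device used for the coverage bound above can be demonstrated in a toy mean-estimation setting. The following sketch is ours, with arbitrarily chosen values; it only illustrates that a Chebyshev interval with half-width $z\cdot\mathrm{se}$, $z=\alpha^{-1/2}$, covers at level at least $1-\alpha$.)

```python
import numpy as np

# Toy version of the Chebyshev coverage argument: an interval of half-width
# z * sd / sqrt(n) with z = 1 / sqrt(alpha) has coverage >= 1 - alpha.
rng = np.random.default_rng(2)
alpha, n, reps = 0.1, 400, 5_000
mu, sd = 3.0, 2.0

z = 1.0 / np.sqrt(alpha)          # Chebyshev: P(|Xbar - mu| > z * se) <= 1/z^2 = alpha
half_width = z * sd / np.sqrt(n)
X_bar = rng.normal(mu, sd, size=(reps, n)).mean(axis=1)
coverage = np.mean(np.abs(X_bar - mu) <= half_width)

assert coverage >= 1 - alpha      # Chebyshev is valid (and conservative)
```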
We now move on to checking the diameter shrinkage conditions (\ref{Eq: acs def, slow shrinkage}) and (\ref{Eq: acs def, fast shrinkage}). Writing $S_j:= \sum_{l<j}2^{l(d-4)}(l\vee1)^{4\delta}$ and using the fact that for positive numbers $a,b$, $\sqrt{a+b}\leq\sqrt{a}+\sqrt{b}$, for $g\in\mathcal{F}(r)$ we have that $\kappa_{n,j_n,\delta}(g) \leq 2\sqrt{M}n^{-1/2}\|g-\hat{f}_n\|_{H^{-1,\delta}} + 2M\sqrt{S_{j_n}}n^{-1}$ and so $g\in C_n$ only if $$ \|g-\hat{f}_n\|_{H^{-1,\delta}} \leq \sqrt{z_{\alpha}\frac{2M}{n}\sqrt{S_{j_n}} + U_{j_n}+G(j_n)} + n^{-1/4} \sqrt{2z_{\alpha}\sqrt{M}}\sqrt{\|g-\hat{f}_n\|_{H^{-1,\delta}}}.$$ For positive numbers $x,a,b$, the inequality $x\leq b+a\sqrt{x}$ implies that $x\leq 2b+2a^2$. Thus the diameter of $C_n$ is bounded by a multiple of $$ n^{-1/2}S_{j_n}^{1/4} + \sqrt{U_{j_n}} + \sqrt{G(j_n)} + n^{-1/2}.$$ We consider each of these terms separately; note that the final term is always sufficiently small. First, consider $G(j_n)$: this is deterministic, of order $$ G(j_n) \lesssim (\log{n})^{1+2\delta}\left(\frac{n}{\log{n}}\right)^{-\frac{2(r+1)}{2r+d/2}} = o(R_n(s)^2) = o(R_n(r)^2). $$ (When $d\leq4$ this is trivial; for $d>4$, it necessitates the assumption on $s$.) Next, $n^{-2}S_{j_n}$ is of order $$ n^{-2}\sum_{l\leq j_n-1}2^{l(d-4)}(l\vee1)^{4\delta}. $$ When $d\leq4$, this contributes at most a logarithmic factor in $n$ times $n^{-2}$, so this is clearly $o(R_n(s)^4)$ and $o(R_n(r)^4)$. When $d>4$, the final term dominates the sum and so the contribution is of order $$ (\log{n})^{4\delta - \frac{d-4}{2r + d/2}}n^{-\frac{4(r+1)}{2r+d/2}} = O(R_n(s)^4) = o(R_n(r)^4),$$ again by the assumption on $s$. 
Finally, since $\mathrm{Var}(U_{j_n})\to0$ as $n\to\infty$, we know that $$ U_{j_n} = O_P\left(E_fU_{j_n}\right) = O_P\left(E_f\|K_j(f-\hat{f}_n)\|_{H^{-1,\delta}}^2\right) = O_P\left(E_f\|f-\hat{f}_n\|_{H^{-1,\delta}}^2\right).$$ As $\hat{f}_n$ converges at the rates $R_n(s)$ and $R_n(r)$ uniformly over $\mathcal{F}(s)$ and $\mathcal{F}(r)$ respectively, $U_{j_n}$ is of the correct order in probability in both cases. This concludes the proof. \end{proof} \begin{proof}[Proof of Theorem \ref{Thm: testing rate}] For some sequence $L_n\to\infty$, to be defined below, and any $\omega\in \left\{-1;1\right\}^{\mathbb{Z}^d\cap\left[0,2^{L_n}\right)^d}$, we define for some $\epsilon>0$, \[ f_{n,\omega} \coloneqq 1 + \epsilon2^{-L_n(r+d/2)}\sum_{k\in\mathbb{Z}^d\cap\left[0,2^{L_n}\right)^d}\omega_{k} \psi_{L_n,k}.\] Provided that $B>1$, \begin{align*} \norm{f_{n,\omega}}_{B^{r}_{2q}} &= 1 + 2^{L_nr} \left(\sum_{k\in\mathbb{Z}^d\cap\left[0,2^{L_n}\right)^d}|\langle f_{n,\omega}, \psi_{L_n,k}\rangle|^2\right)^{1/2}\\ &= 1 + \epsilon 2^{L_nr} 2^{-L_n(r+d/2)} 2^{dL_n/2} \\ &= 1 + \epsilon, \end{align*} ensuring that $ f_{n,\omega}$ is in the $\norm{\cdot}_{B^r_{2q}}$-Besov ball of radius $B$ for $\epsilon$ small enough. Also, $\int_{\mathbb{T}^d}f_{n,\omega}(t)dt=1$ and, as the tensor product wavelet basis is assumed to be $S$-regular (cf. Appendix \ref{Section: Wavelet Appendix}), \[ \norm{\sum_{k} |\psi_{L_n,k}|}_\infty \lesssim 2^{dL_n/2},\] for some constant depending on the basis only. Therefore, \[ \norm{f_{n,\omega}-1}_\infty \leq \epsilon c 2^{-rL_n},\] so that, for any $M>1\geq m>0$, $f_{n,\omega}\in\mathcal{F}(r)$ for $n$ large enough (or $\epsilon$ small enough if $r=0$). Finally, for any $\rho_n=o\left(n^{-\frac{1+r}{2r+d/2}}\right)$, $f_{n,\omega}\in \tilde{\mathcal{F}}(r,\rho_n)$ if, for any $g\in \mathcal{F}(s)$, $W_2\left(f_{n,\omega}, g\right)\geq \rho_n$. 
By definition of $\mathcal{F}(r)$, $\mathcal{F}(s)$ and Proposition \ref{Prop: W-Besov comparison}, we have, for $n$ large enough \begin{align*} W_2\left(f_{n,\omega}, g\right)^2 &\gtrsim \norm{f_{n,\omega} - g}^2_{B^{-1}_{2\infty}}\\ &\geq 2^{-2L_n}\norm{\langle f_{n,\omega} - g, \psi_{L_n,\cdot}\rangle}_2^2\\ &\geq 2^{-2L_n}\left[ \Bigg(\sum_{k=0}^{2^{L_nd}-1} |\langle f_{n,\omega} , \psi_{L_n,k}\rangle|^2\Bigg)^{1/2} - \Bigg(\sum_{k=0}^{2^{L_nd}-1}|\langle g, \psi_{L_n,k}\rangle|^2\Bigg)^{1/2}\right]^2\\ &\geq 2^{-2L_n} \left[ \epsilon 2^{-L_nr} - B2^{-L_ns}\right]^2\\ &\geq \frac{\epsilon^2}{2}2^{-2L_n(1+r)}. \end{align*} Therefore, if $L_n^*$ is such that $2^{-2L_n^*(1+r)}\asymp n^{-2\frac{1+r}{2r+d/2}}$, it is possible to find $L_n>L_n^*$ such that $\rho_n^2\leq \frac{\epsilon^2}{2}2^{-2L_n(1+r)} = o\left(n^{-2\frac{1+r}{2r+d/2}}\right)$. This choice ensures that, for any $\omega$, $f_{n,\omega}\in \tilde{\mathcal{F}}(r,\rho_n)$. Note also that the density $f_0\coloneqq 1$ naturally belongs to $\mathcal{F}(s)$.\\ Re-index $\left\{-1;1\right\}^{\mathbb{Z}^d\cap\left[0,2^{L_n}\right)^d}$ as $\left\{\omega^{(i)}:\ i=1,\dots, 2^{2^{dL_n}} \right\}$ and denote by $P_i$ the distribution with Lebesgue density $f_i\coloneqq f_{n,\omega^{(i)}}$, $Q:=2^{-2^{dL_n}} \sum_{i=1}^{2^{2^{L_nd}}}P_i$ and $P_0$ the distribution with density $f_0$. 
Then, with $\mu$ the Lebesgue measure and for any test $\Psi_n$, \begin{align*} \underset{f\in \mathcal{F}(s)}{\sup} \mathbb{E}_f \left[\Psi_n\right] + \underset{f\in \tilde{\mathcal{F}}(r,\rho_n)}{\sup} \mathbb{E}_f \left[1-\Psi_n\right] &\geq \mathbb{E}_{f_0} \left[\Psi_n\right] + 2^{-2^{dL_n}} \sum_{i=1}^{2^{2^{L_nd}}} \mathbb{E}_{f_i} \left[1-\Psi_n\right]\\ &\geq \int \left(\Psi_n(x_1,\dots,x_n)+1-\Psi_n(x_1,\dots,x_n)\right)\\ &\qquad\qquad\left(\prod_{j=1}^n f_0(x_j)\wedge 2^{-2^{dL_n}} \sum_{i=1}^{2^{2^{L_nd}}} \prod_{j=1}^n f_i(x_j)\right)d\mu^{\otimes n}(x_1,\dots,x_n)\\ &= 1-\frac{1}{2}\norm{P_0^{\otimes n}- Q^{\otimes n} }_1\\ &\geq 1-\frac{1}{2}\sqrt{\chi^2\left(Q^{\otimes n}, P_0^{\otimes n}\right)}, \end{align*} where $\chi^2(Q,P)=\int (dQ/dP-1)^2 dP$ if $Q\ll P$, $\chi^2(Q,P)=+\infty$ otherwise. Also, for any $1\leq \gamma,\kappa\leq 2^{2^{dL_n}}$, the orthonormality of the wavelet basis gives \begin{align*} &\int \frac{dP_\gamma^{\otimes n}}{dP_0^{\otimes n}}\frac{dP_\kappa^{\otimes n}}{dP_0^{\otimes n}}dP_0^{\otimes n} \\ &= \prod_{i=1}^n \int_{\mathbb{T}^d} \left[1 + \epsilon2^{-L_n(r+d/2)}\sum_{k}\omega^{(\gamma)}_{k} \psi_{L_n,k}(x_i)\right]\left[1 + \epsilon2^{-L_n(r+d/2)}\sum_{k}\omega^{(\kappa)}_{k} \psi_{L_n,k}(x_i)\right]dx_i\\ &= \left(1+\epsilon^2 2^{-L_n(2r+d)} \sum_{k} \omega^{(\gamma)}_{k} \omega^{(\kappa)}_{k} \right)^n. \end{align*} Then, for $\gamma_n=n\epsilon^22^{-L_n(2r+d)}\to0$ and $R_k, R_k'$ i.i.d. 
Rademacher random variables, \begin{align*} \chi^2\left(Q^{\otimes n}, P_0^{\otimes n}\right) &= 2^{-2^{dL_n}}\sum_{\gamma,\kappa} \left(1+\epsilon^2 2^{-L_n(2r+d)} \langle \omega^{(\gamma)}, \omega^{(\kappa)} \rangle\right)^n -1\\ &\leq \mathbb{E}\left[\exp\Big(n\epsilon^22^{-L_n(2r+d)} \sum_{k} R_k R_k'\Big)\right]-1\\ &= \mathbb{E}\left[\exp\Big(n\epsilon^22^{-L_n(2r+d)} \sum_{k} R_k\Big)\right]-1 \\ &= \cosh(\gamma_n)^{2^{L_nd}} -1, \end{align*} where we used that $1+x\leq e^x$ for $x\in\mathbb{R}$ in the second line and that $R_kR_k'$ is distributed as $R_k$ in the third. Using that $\cosh(z)=1+z^2/2+\underset{|z|\to0}{o}(z^2)$ and $1+x\leq e^x$ once again, for any $\delta>0$, \[\left(\cosh(\gamma_n)\right)^{2^{dL_n}}-1=\left(1+\frac{\gamma_n^2}{2}(1+o(1))\right)^{2^{dL_n}}-1\leq \exp\left(\gamma_n^22^{dL_n-1}(1+o(1))\right)-1\leq \delta^2\] for $n$ large enough, since $\gamma_n^22^{dL_n}=o(1)$. We have proven that, for any $\beta<1$ and $\rho_n=o\left(\rho_n^*\right)$, \[ \underset{n}{\lim\inf} \ \underset{\Psi_n}{\inf}\Big[\underset{f\in \mathcal{F}(s)}{\sup} \mathbb{E}_f \left[\Psi_n\right] + \underset{f\in \tilde{\mathcal{F}}(r,\rho_n)}{\sup} \mathbb{E}_f \left[1-\Psi_n\right] \Big] \geq \beta,\] which concludes the proof. \end{proof} \section{Wavelets and Besov Spaces}\label{Section: Wavelet Appendix} Here we introduce the wavelet bases we use, and define the various norms and spaces used in our analysis. \subsection{Wavelet Bases of $\mathbb{R}^d$ and $\mathbb{T}^d$} Let $S\in\mathbb{N}$. We begin with an $S$-regular wavelet basis of $L_2(\mathbb{R})$ generated by scaling function $\Phi$ and wavelet function $\Psi$, $$ \left\{ \Phi_k = \Phi(\cdot-k), \Psi_{lk} = 2^{l/2}\Psi(2^l(\cdot)-k): l\geq0, k\in\mathbb{Z} \right\}.$$ Concretely, we take sufficiently regular Daubechies wavelets: see \cite{meyerWaveletsOperators1993},\cite{daubechiesTenLecturesWavelets1992},\cite[Chapter 4]{gineMathematicalFoundationsInfiniteDimensional2015} for details. 
Such a wavelet basis has the following properties: \begin{itemize} \item $\Phi,\Psi$ are in $C^S(\mathbb{R})$, $\int_{\mathbb{R}}\Phi = 1$, and $\Psi$ is orthogonal to polynomials of degree $<S$. \item $\norm{\sum_k |\Phi_{k}|}_{\infty} \lesssim 1$, and $\norm{\sum_k |\Psi_{lk}|}_{\infty} \lesssim 2^{l/2}$ for a constant depending only on $\Psi$. \item Letting $V_j = \mathrm{span}(\Phi_k,\Psi_{lk}:l< j)$, for any $f\in V_j$ the following Bernstein estimate holds: $$ \norm{\nabla f}_p \lesssim 2^j\norm{f}_p,$$ for a constant depending only on the wavelet basis. \item $\Phi,\Psi$ are compactly supported. \end{itemize} We then form a tensor product basis of $L_2(\mathbb{R}^d)$ as follows. Let $\mathcal{I}=\{0,1\}^d\setminus\{0\}$. Define $$ \phi(x) = \Phi(x_1)\cdots\Phi(x_d), \quad x\in\mathbb{R}^d$$ and, writing $\Psi^0=\Phi,\Psi^1=\Psi$, $$ \psi^{\iota}(x) = \Psi^{\iota_1}(x_1)\cdots\Psi^{\iota_d}(x_d), \quad x\in\mathbb{R}^d,\ \iota\in\mathcal{I}.$$ Then (\cite[Section 4.3.6]{gineMathematicalFoundationsInfiniteDimensional2015}) $$ \left\{ \phi_k = \phi(\cdot-k), \psi^{\iota}_{lk} = 2^{ld/2}\psi^{\iota}(2^l(\cdot)-k) : l\geq0,k\in\mathbb{Z}^d,\iota\in\mathcal{I} \right\} $$ defines a wavelet basis of $L_2(\mathbb{R}^d)$. We omit $\iota$ from our notation and simply write $\psi_{lk}$ with $k$ now implicitly taking values in $\mathbb{Z}^d\times\mathcal{I}$; any sum over $k$ is to be understood as over all $\iota\in\mathcal{I}$ as well. This tensor product basis has the following properties: \begin{enumerate}[1)] \item $\phi,\psi$ are in $C^S(\mathbb{R}^d)$, $\int_{\mathbb{R}^d}\phi = 1$, and $\psi$ is orthogonal to polynomials of degree $<S$. \item $\norm{\sum_k |\phi_{k}|}_{\infty} \lesssim 1$, and $\norm{\sum_k |\psi_{lk}|}_{\infty} \lesssim 2^{ld/2}$ for a constant depending only on $\psi$. \item $\phi,\psi$ are compactly supported. \end{enumerate} These properties follow elementarily from the previously stated properties of $\Phi$ and $\Psi$. Property 3) is used crucially in our analysis on $\mathbb{R}^d$. 
Notably, this precludes certain common choices of wavelet basis, such as the Meyer basis. These properties imply the following relationship between $L_p$-norms of functions and the $\ell_p$-norms of their wavelet coefficients (by an abuse of notation we denote both of these norms by $\|\cdot\|_p$). \begin{lemma}\label{Lemma: Lp-lp norm wavelet equivalence} For any $l\geq0$, any $p\in[1,\infty]$ and any $c\in\mathbb{R}^{\mathbb{Z}^d}$, we have that $$ \norm{\sum_{k\in\mathbb{Z}^d} c_k\psi_{lk}}_p \simeq 2^{ld(1/2 - 1/p)}\|c\|_p, $$ where the constants depend on $\psi$ and $p$ only. \end{lemma} When working on $\mathbb{T}^d$, we use the tensor product wavelet basis induced by the periodisations of $\Phi,\Psi$; see \cite[Section 4.3.4]{gineMathematicalFoundationsInfiniteDimensional2015} for details. This produces a basis of $L_2(\mathbb{T}^d)$ with the following properties: \begin{enumerate}[1)] \item $\psi(x) = \prod_{i=1}^d\psi^{(i)}(x_i)$ for some univariate functions $\psi^{(i)}$. \item Setting $\psi_{lk}(\cdot) = 2^{ld/2}\psi(\cdot - 2^{-l}k)$ for $l\geq0,k\in\mathbb{Z}^d\cap[0,2^l)^d$, the set $$ \left\{ \phi,\psi_{lk} : l\geq0,k\in\mathbb{Z}^d\cap[0,2^l)^d \right\} $$ forms an orthonormal basis of $L_2(\mathbb{T}^d)$. By an abuse of notation, we re-index in $k$ such that $k\in\mathbb{Z}$ varies between $0\leq k<2^{ld}$. \item $\psi$ is in $C^S(\mathbb{T}^d)$, and is orthogonal to polynomials of degree $<S$. \item $\norm{\sum_k |\psi_{lk}|}_{\infty} \lesssim 2^{ld/2}$, for a constant depending only on $\psi$. \item Letting $V_j = \mathrm{span}(\phi,\psi_{lk}:l< j)$, for any $f\in V_j$ the following Bernstein estimate holds: $$ \norm{\nabla f}_p \lesssim 2^j\norm{f}_p,$$ for a constant depending only on the wavelet basis. \end{enumerate} Again, these are basic consequences of properties of $\Phi,\Psi$, and enable the proof of Proposition \ref{Prop: W-Besov comparison}; compare to Appendix C of \cite{weedEstimationSmoothDensities2019}. 
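To illustrate Lemma \ref{Lemma: Lp-lp norm wavelet equivalence} numerically, one can use the Haar basis in dimension $d=1$ (only $1$-regular, hence below the regularity used elsewhere in the paper, but convenient): wavelets at a fixed level have disjoint supports, so the equivalence holds with equality. The following sketch is ours and purely illustrative.

```python
import numpy as np

def haar(x):
    # Haar mother wavelet: +1 on [0, 1/2), -1 on [1/2, 1), 0 elsewhere
    return np.where((x >= 0) & (x < 0.5), 1.0, 0.0) - np.where((x >= 0.5) & (x < 1), 1.0, 0.0)

l, p = 3, 4
rng = np.random.default_rng(3)
c = rng.normal(size=2**l)

# evaluate f = sum_k c_k psi_{lk} on an aligned midpoint grid of [0, 1);
# f is piecewise constant on dyadic intervals, so the midpoint sum is exact
x = (np.arange(2**16) + 0.5) / 2**16
f = sum(c[k] * 2 ** (l / 2) * haar(2**l * x - k) for k in range(2**l))

Lp_norm = np.mean(np.abs(f) ** p) ** (1 / p)                       # L_p norm of f
predicted = 2 ** (l * (0.5 - 1 / p)) * np.sum(np.abs(c) ** p) ** (1 / p)

assert abs(Lp_norm - predicted) < 1e-8                             # equality for Haar, d = 1
```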
\subsection{Besov Spaces} In this section, we let $(\phi_k,\psi_{lk})$ denote either the $S$-regular tensor product Daubechies wavelet basis of $L_2(\mathbb{R}^d)$, or the $S$-regular tensor product periodised Daubechies wavelet basis of $L_2(\mathbb{T}^d)$. It should be understood that any summation is over the full range of indices, for example $\sum_k\psi_{lk}$ denotes $\sum_{k\in\mathbb{Z}^d}\psi_{lk}$ in the $\mathbb{R}^d$ case and $\sum_{k=0}^{2^{ld}-1}\psi_{lk}$ in the $\mathbb{T}^d$ case. We further let $\mathcal{D}$ be either the class of tempered distributions on $\mathbb{R}^d$, or the class of periodic tempered distributions on $\mathbb{T}^d$. Let $1\leq p\leq \infty$, $1\leq q\leq \infty$, $s\in\mathbb{R}, s<S$. For $f\in\mathcal{D}$, we define the Besov norm \begin{equation}\label{Eq: Besov norm definition} \|f\|_{B^s_{pq}} = \norm{\langle f,\phi_{\cdot}\rangle}_p + \left(\sum_{l\geq0}\left[2^{ls}2^{ld\left(\frac{1}{2}-\frac{1}{p}\right)}\norm{\langle f,\psi_{l\cdot}\rangle}_p\right]^q\right)^{1/q}, \end{equation} where $\|\cdot\|_{p}$ is the $\ell_p$-norm. When $q=\infty$, the norm is defined as \begin{equation}\label{Eq: Besov norm definition, q=infty} \|f\|_{B^s_{p\infty}} = \norm{\langle f,\phi_{\cdot}\rangle}_p + \sup_{l\geq0}2^{ls}2^{ld\left(\frac{1}{2}-\frac{1}{p}\right)}\norm{\langle f,\psi_{l\cdot}\rangle}_p. \end{equation} We then define the corresponding Besov space $B^s_{pq}$ as \begin{equation}\label{Eq: Besov space definition} B^s_{pq} = \left\{f\in\mathcal{D}: \|f\|_{B^s_{pq}}<\infty \right\}. \end{equation} We will write $B^s_{pq}(\mathbb{R}^d)$ or $B^s_{pq}(\mathbb{T}^d)$ to remove any ambiguity over the choice of domain, whenever it arises. The definition of $B^s_{pq}$ is independent of the wavelet basis used, that is, using a different (sufficiently regular) basis in the definition (\ref{Eq: Besov norm definition}) produces an equivalent norm. 
Moreover, using a $C^{\infty}$ basis such as the Meyer basis enables us to define $B^s_{pq}$ concurrently for all $s\in\mathbb{R}$. \subsection{The Case of the Unit Cube}\label{Subsection: unit cube} We can also define a `boundary-corrected' wavelet basis of $L_2([0,1]^d)$ based on $\Phi,\Psi$ as in \cite{cohenWaveletsIntervalFast1993}; see also \cite[Section 4.3.5]{gineMathematicalFoundationsInfiniteDimensional2015}. Such a basis possesses completely analogous properties to properties 1)-5) of the periodised basis of $L_2(\mathbb{T}^d)$; moreover, all Besov spaces defined on $\mathbb{T}^d$ are defined on the unit cube $[0,1]^d$ by replacing the periodised wavelet basis with the boundary-corrected wavelet basis (as used in \cite{weedSharpAsymptoticFinitesample2019}). Thus all of our results for $\mathbb{T}^d$ hold also for the case of $[0,1]^d$. \subsection*{Acknowledgments} The authors gratefully thank Ismaël Castillo and Richard Nickl for their guidance in this project and their careful reading of this paper.
\section{Introduction}\label{sec:Intro} Digital twins are a major trend in the field of digitalization \cite{grieves2017digital,tao2018digital,jones2020characterising,hartmann2021digital}. They extend Model-Based Systems Engineering (MBSE) concepts along the complete life cycle, providing novel means for enhanced decision making, e.g. during operation. While the use of model-based technology for operations is not new, it typically requires hand-crafted real-time models, and the corresponding manual efforts limit applicability. Model Order Reduction (MOR) is a key technology for digital twins \cite{hartmann020model} since it allows one to seamlessly transform highly accurate models, as typically used in engineering, into models of lower complexity. Trading use-case-specific prediction cost against prediction capabilities and accuracy, it enables in many cases real-time execution during operations. MOR is a well-established field, historically built on top of intrusive solutions leveraging knowledge of the numerical discretization. Since corresponding knowledge is typically not available from commercial simulation tools, industrial adoption has been limited. However, along with the success of data-based methods such as neural networks in the past decade, the identification of low-dimensional dynamical systems from data has also shown many successful examples. This has particularly led to the formation of the new field of Scientific or Physics-Informed Machine Learning \cite{karniadakis2021physics, willcox2021imperative}. Within this contribution we aim to realize a time-dependent nonlinear MOR approach which leverages physics knowledge without requiring access to the underlying solver of the Full Order Model (FOM). At the same time, it should be possible to easily integrate it into any engineering workflow, such that corresponding models can be realized by simulation engineers without deep knowledge of the underlying MOR technologies. 
Thereby we build on the concept of Operator Inference (OI) \cite{peherstorfer2016data} and its extension to nonlinear terms using the Discrete Empirical Interpolation Method (DEIM) \cite{benner2020operator}. The concept of OI itself is built on a least-squares optimization problem, which for industrially relevant problems often requires regularization \cite{peherstorfer2020sampling,swischuk2020learning,mcquarrie2021data}. Our novel contribution is the reformulation of the OI concept as a constrained optimization problem, which provides more accurate and robust results. Thereby, we are inspired by recent advancements in the field of machine learning \cite{rackauckas2020universal,chen2018neural,um2020solver}. The contribution is structured as follows: In Section \ref{sec:Example} we introduce a simplified real-world industrial example which highlights the requirements for our approach and will also be used for validation. In Section \ref{sec:MOR}, we summarize the approach taken in \cite{benner2020operator} and highlight in detail our novel extension. In Section \ref{sec:Results}, we validate our approach via numerical experiments on the example introduced in Section \ref{sec:Example}, before we conclude in Section \ref{sec:Conclusion}. \section{Example}\label{sec:Example} Since our contribution is motivated by fostering industrial adoption of MOR technologies, let us start with a real-world example: The goal is to realize a time-dependent nonlinear Reduced Order Model (ROM) for a tube and shell reactor as shown in Figure \ref{fig:reactor}. That is, the temperature distribution within the reactor shall be predicted in real time with dynamically varying coolant inflow rates and heat generation. To achieve this, an offline-online simulation snapshot-based approach is considered. 
That is, snapshots of dynamic Computational Fluid Dynamics (CFD) simulations with typically many Degrees of Freedom (DoF), e.g., $> 10^5$ DoF, are generated for a selected set of parameters in an offline phase. Based on these, a ROM with few DoF, e.g., $<20$, shall be developed. This model shall be capable of running in real time to support, e.g., control and operation decisions. \begin{figure} \centering \raisebox{4.0cm}{(a)} \includegraphics[height=4.4cm]{Figures/reactor.png} \raisebox{4.0cm}{(b)} \includegraphics[height=4.4cm]{Figures/reactor_control.png} \caption{(a) Multi-tubular reactor where the tubular region (rose) is approximated by a porous medium model. (b) Schematic visualization of typical dynamic coolant inflow rates $v_I(t)$ (c.f. Equations \eqref{eq:NS1}-\eqref{eq:NS2}), heat generation profiles $R(t)$ (c.f. Equation \eqref{eq:T04}), and resulting solid temperature at a specific spatial point.} \label{fig:reactor} \end{figure} \subsection{Continuum Equations}\label{sec:Intro:CEq} The fluid velocity $\bm{v}$ and pressure field $p$ of the coolant flow within the reactor are modeled via the dynamic Navier-Stokes\footnote{In the following, we assume temperature independence. Furthermore, we would like to highlight that most commercial programs actually solve slightly more complex CFD models including sophisticated turbulence components.} equations with porous regions (c.f. Figure \ref{fig:reactor}) modeled via the Brinkman law, i.e. \begin{align} \rho_c\left(\frac{\partial\bm{v}}{\partial{t}}+(\bm{v}\cdot\nabla)\bm{v}\right) &= - \nabla p + \mu \Delta \bm{v} - \bm{\alpha}(\bm{x}) \bm{v} & &\text{in } \Omega \label{eq:NS1}\\ \nabla\cdot\bm{v} & = 0 & &\text{in } \Omega \label{eq:NS2} \end{align} for time $t\in(0,t_\text{end})$ with temperature-independent fluid density $\rho_c$, dynamic viscosity $\mu$, and porous viscous resistance $\bm{\alpha}(\bm{x})$. 
The flow is initiated with zero initial velocity and driven by a time-dependent inflow rate $\bm{v}_I(t)=(v_I(t),0,0)$, which is an external time-dependent parameter. Further boundary conditions are zero flux on the reactor casing and zero normal stress at the outflow. Since the porous region $\Omega_s$ is localized, it holds $\bm{\alpha}(\bm{x})=\bm{\alpha}$ in $\Omega_s$ and $\bm{\alpha}(\bm{x})=\bm{0}$ in $\Omega \backslash \Omega_s$. The temperatures of the coolant fluid $T_c$ and of the solid $T_s$ are modelled by an isotropic medium approach, i.e. \begin{align} \rho_c c_{p,c} \left(\frac{\partial T_c}{\partial{t}}+(\bm{v}\cdot\nabla)T_c\right) &= \nabla \cdot \left(\bm{\mathcal{K}}_c \nabla T_c\right) + \chi_{\Omega_s} {q}(T_s-T_c) & & \text{in } \Omega \label{eq:T01}\\ \rho_s c_{p,s} \frac{\partial T_s}{\partial{t}} &= \nabla \cdot \left(\bm{\mathcal{K}}_s \nabla T_s\right) - {q}(T_s-T_c) + \mathcal{P}(R(t),T_s)& & \text{in } \Omega_s \label{eq:T02} \end{align} for time $t\in(0,t_\text{end})$ with solid density $\rho_s$, specific heat capacities $c_{p,c}$ and $c_{p,s}$, conduction coefficients $\bm{\mathcal{K}}_c(\bm{x})$ and $\bm{\mathcal{K}}_s$, volumetric heat transfer ${q}$, and heat generation $\mathcal{P}$, plus appropriate boundary and initial conditions. Here, $\chi_{\Omega_s}(\bm{x})$ is the indicator function of the solid region, i.e. $\chi_{\Omega_s}(\bm{x}) = 1$ for $\bm{x}\in\Omega_s$ and $\chi_{\Omega_s}(\bm{x}) = 0$ for $\bm{x}\in(\Omega\setminus\Omega_s)$. The volumetric heat transfer is given by \begin{equation} \label{eq:T03} q(\delta T)=h\;A\;\delta T, \end{equation} where $\delta T$ is the temperature difference, $h$ is the heat transfer coefficient between the tubes and the coolant, and $A$ is the exchange area density. 
The power is generated following an Arrhenius law \begin{equation} \label{eq:T04} \mathcal{P}(R(t),T) = {R}(t)\;\mathcal{A}\;\exp(\mathcal{B}/T), \end{equation} where ${R}(t)$ is an external dynamic parameter controlling the power output and $\mathcal{A}$, $\mathcal{B}$ are constants. Equations \eqref{eq:NS1}-\eqref{eq:T04} constitute the Full Order Model (FOM) of the cooling process of the multi-tubular reactor. The cooling itself is influenced by the two time-dependent external parameters for the coolant inflow $v_I(t)$ and the generated power $R(t)$. We assume that parameters and conditions are chosen such that the continuous FOM is well posed up to time $t_\text{end}$. \subsection{Discretized Equations}\label{sec:Intro:DEq} In the following, we will mostly work with a spatially and temporally discretized version of the FOM, i.e.: \begin{align} \dot{\hat{\bm{v}}} &= \hat{\bm{A}}_1 \hat{\bm{v}}+ \hat{\bm{A}}_2 \hat{p} + \hat{\bm{H}}_1(\hat{\bm{v}} \otimes \hat{\bm{v}}) + \hat{\bm{B}}\dot{v}_I(t) \label{eq:FOM01}\\ 0 &= \hat{\bm{A}}_3 \hat{\bm{v}} \label{eq:FOM02}\\ \dot{\hat{T}}_c &= \hat{\bm{A}}_4 \hat{T}_c + \hat{\bm{A}}_5 \hat{T}_s + \hat{\bm{H}}_2(\hat{\bm{v}}\otimes\hat{T}_c) \label{eq:FOM03}\\ \dot{\hat{T}}_s &= \hat{\bm{A}}_6 \hat{T}_s + \hat{\bm{A}}_7 \hat{T}_c + \hat{\bm{P}}(R(t),\hat{T}_s) \label{eq:FOM04} \end{align} where we mark spatially discretized variables by a hat and have introduced the term $\hat{\bm{B}}\dot{v}_I(t)$ to model time-dependent inflow conditions. Taking the time derivative of the externally parameterized inflow rather than the inflow itself makes it straightforward to account for this parameterization in the context of operator inference\footnote{Though within this work we will consider only a constant steady flow.}. 
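For illustration, a quadratic term such as $\hat{\bm{H}}_1(\hat{\bm{v}} \otimes \hat{\bm{v}})$ is simply a matrix acting on the Kronecker product of the state with itself. A minimal NumPy sketch (with a hypothetical toy dimension, not the discretization used in this work) is:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4                                 # toy state dimension (illustrative only)
v = rng.standard_normal(n)            # discretized state
H = rng.standard_normal((n, n * n))   # quadratic operator, maps R^{n^2} -> R^n

quad = H @ np.kron(v, v)              # evaluates H (v ⊗ v)

# equivalent index form: sum_{j,k} H[i, j*n + k] * v[j] * v[k]
quad_check = np.einsum('ijk,j,k->i', H.reshape(n, n, n), v, v)
assert np.allclose(quad, quad_check)
```

The same matricized structure carries over to the reduced quadratic operators in the reduced space.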
In general we cannot assume that the vectors / tensors $\hat{\bm{B}}$, $\hat{\bm{A}}_i$, $\hat{\bm{H}}_i$, and the nonlinear vector function $\hat{\bm{P}}(R(t),\cdot)$ are known, particularly when working with a commercial software package as in this contribution\footnote{3D CFD simulations are based on Simcenter STAR-CCM+ (\url{https://www.plm.automation.siemens.com/global/en/products/simcenter/STAR-CCM.html})}. \section{Model Order Reduction}\label{sec:MOR} Within this project, the discretized FOM system of time-dependent nonlinear partial differential equations (as detailed in Section \ref{sec:Intro}) shall be reduced using MOR technologies to allow for an operation-parallel real-time simulation. The ROM should be significantly faster than the original model, while maintaining a sufficient degree of accuracy. At the same time, we would like it to be as interpretable and robust as possible. Thereby, the following parameters shall be able to vary: the generated power, controlled through $R(t)$, and the coolant influx $v_I(t)$. Furthermore, the corresponding MOR technologies shall not require knowledge of the specific discretization or matrices of the discretized FOM, i.e. $\hat{\bm{B}}$, $\hat{\bm{A}}_i$, $\hat{\bm{H}}_i$, or $\hat{\bm{P}}(R(t),\cdot)$ in Equations \eqref{eq:FOM01}-\eqref{eq:FOM04}. These are typical industrial requirements found in the context of MOR of large nonlinear dynamic 3D multi-physics models \cite{hartmann020model}. \paragraph{Approach:} Within this project, we adopt a snapshot-based approach consisting of the two consecutive steps of reduced (latent) dimension identification and model identification in the reduced space, following the approach of \cite{benner2020operator}. 
Thereby, we will leverage three different concepts based on optimization: \begin{itemize} \item Proper Orthogonal Decomposition (POD): Based on the set of generated snapshots, a reduced (latent) basis will be identified following \cite{chatterjee2000introduction} (reduced basis identification). \item Discrete Empirical Interpolation Method (DEIM): Leveraging concepts from sparse sensing, we will directly reduce parts of the model, i.e. the power generation as determined by the Arrhenius law, assuming mass lumping (see e.g. \cite{thomee2007galerkin}) and following \cite{chaturantabut2010nonlinear, holmes2012turbulence, lumley1970toward} (model identification). \item Operator Inference (OI): Assuming that the operators are polynomial, operator inference provides an efficient way to identify operators from given snapshot data, particularly in the reduced dimensions obtained by POD \cite{peherstorfer2016data} (model identification). \end{itemize} Following the ideas of \cite{benner2020operator} allows us to effectively address non-polynomial terms, e.g., as introduced by the Arrhenius law \eqref{eq:T04}. In the context of chemical reactors this is a crucial step, since many chemical reactions cannot be formulated in polynomial terms only, which is the basic requirement of the OI approach adopted here \cite{qian2020lift}. Our novel contribution is to adopt a different optimization approach compared to \cite{benner2020operator}. The concept of OI requires in many cases significant regularization \cite{peherstorfer2020sampling,swischuk2020learning,mcquarrie2021data}. This clearly limits industrial applicability by non-experts. Inspired by recent works in machine learning \cite{rackauckas2020universal,chen2018neural,um2020solver}, we take an additional constrained optimization approach. That is, we start with the classical least-squares minimization approach adopted in OI (usually requiring regularization) and use its result as the initial guess for a subsequent constrained optimization problem. 
The latter could also be interpreted as an optimal control problem, where the control parameters are the operator coefficients. Therefore we refer to this step, which is further detailed in Section \ref{sec:MOR:OC}, as operator calibration. \subsection{Proper Orthogonal Decomposition}\label{sec:MOR:POD} Within this approach we work with snapshot data. That is, we consider a set of $l$ different simulation time series with different control parameters $R^i(t)$ and $\dot{v}^i_I(t)$ with $1 \leq i \leq l$. Let $\hat{\bm{s}}_1^i$, $\ldots$, $\hat{\bm{s}}_k^i$ be solutions of the FOM with the corresponding control parameters $\bm{u}^i_1,\ldots,\bm{u}^i_k$ at time steps $t_1$, $\ldots$, $t_k$ of the $i$th set of simulations ($1 \leq i \leq l$), where \begin{align} \hat{\bm{s}}_j^i &= [\hat{v}_j^i, \hat{T}_{c,j}^i, \hat{T}_{s,j}^i], \label{eq:composition01} \\ \bm{u}_j^i &= [R_j^i,\dot{v}_{I,j}^i]. \label{eq:composition02} \end{align} Here we have dropped the pressure $\hat{\bm{p}}_j^i$ since it is actually not a free variable but rather a Lagrange multiplier enforcing incompressibility of the fluid.\footnote{Since we do not vary inflow conditions within our experiments in Section \ref{sec:Results}, the flow is actually constant, i.e. it does not change over time.} In our case, a single simulation snapshot $\hat{\bm{s}}_j^i$ covers more than $380\,000$ different values. As a first step, we therefore identify a hierarchical coordinate system, which will allow us to represent the snapshots in a much lower dimensional system. To this end we use a truncated Proper Orthogonal Decomposition (POD), also known as Principal Component Analysis (PCA). 
Let \begin{equation}\label{eq:composition03} \bm{S}= \begin{bmatrix} \mid & & \mid & & \mid & & \mid \\ \hat{\bm{s}}^1_1 & \ldots & \hat{\bm{s}}^1_k & \ldots & \hat{\bm{s}}^l_1 & \ldots & \hat{\bm{s}}^l_k \\ \mid & & \mid & & \mid & & \mid \\ \end{bmatrix} \end{equation} be the complete set of snapshot data, i.e., composed of $l$ simulation sets with different parameters and each with $k$ time steps\footnote{Since the different variables in Equation \eqref{eq:composition01} might be of different orders of magnitude, this collection of time series data is typically re-scaled, such that all variables are of the same order of magnitude.}. That is, in total we have $m=l \cdot k$ snapshot vectors. The objective is now to determine a ``minimal'' number of modes to accurately represent the dynamics of the FOM using a Galerkin projection. Using the Singular Value Decomposition (SVD), for any matrix $\bm{S}\in\mathbb{R}^{n \times m}$ (typically $m \ll n$) a truncated matrix decomposition \begin{equation} \tilde{\bm{S}} = {\bm{U}} \cdot {\bm{\Sigma}} \cdot {\bm{V}}^t, \end{equation} of rank $r$ can be calculated. Here, $\bm{U} = [\psi_1,\ldots,\psi_r] \in \mathbb{R}^{n\times r}$ consists of a truncated subset of columns of the corresponding full singular value decomposition, $\bm{\Sigma}\in\mathbb{R}^{r \times r}$ is a diagonal matrix with the corresponding $r$ largest non-negative singular values ordered from largest to smallest, and $\bm{V}\in\mathbb{R}^{m \times r}$ consists of the corresponding right singular vectors. The approach provides the best rank-$r$ approximation of $\bm{S}$ in an $l_2$ sense, i.e. $\| \bm{S}-\tilde{\bm{S}} \|$ is minimal among all rank-$r$ matrices. The specific selection of $r$ (typically $r\ll m$) is an intricate manual task, but algorithmic approaches are available, e.g., using optimal thresholding \cite{gavish2014optimal}. The method described above provides a low rank $r$-dimensional space in which we will consider the dynamics. 
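A minimal NumPy sketch of this truncation (our own illustration; the energy-based choice of $r$ shown here is one possible heuristic, not the procedure used in this work) is:

```python
import numpy as np

def pod_basis(S, energy=0.9999):
    """Truncated POD basis of a snapshot matrix S (n x m): returns U in
    R^{n x r}, with r chosen so that the retained singular values capture
    the requested fraction of the squared-singular-value 'energy'."""
    U, sigma, _ = np.linalg.svd(S, full_matrices=False)
    cum = np.cumsum(sigma**2) / np.sum(sigma**2)
    r = int(np.searchsorted(cum, energy) + 1)
    return U[:, :r], sigma[:r]

# toy snapshot matrix with an (essentially) rank-2 structure
rng = np.random.default_rng(1)
a, b = rng.standard_normal(100), rng.standard_normal(40)
c, d = rng.standard_normal(100), rng.standard_normal(40)
S = 10.0 * np.outer(a, b) + 1.0 * np.outer(c, d)

U, sigma = pod_basis(S)
assert U.shape == (100, 2)            # the rank-2 structure is recovered

# project a snapshot down and lift it back: s = U^t s_hat, s_hat ≈ U s
s_hat = S[:, 0]
assert np.allclose(U @ (U.T @ s_hat), s_hat)
```

The last two lines already illustrate the back-and-forth projection between the full and the reduced space used throughout the following.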
We would like to highlight that the method does not require explicit knowledge of the underlying dynamical system and that the matrix ${\bm{U}}$ allows projecting back and forth between the high dimensional snapshot data in $\mathbb{R}^n$ and the reduced dimensional space $\mathbb{R}^r$, i.e. \begin{equation} \hat{\bm{s}} ={\bm{U}} \cdot \bm{s} \qquad \text{and} \qquad \bm{s} ={\bm{U}}^t \cdot \hat{\bm{s}}, \end{equation} where $\hat{\bm{s}}\in\mathbb{R}^n$, $\bm{s}\in\mathbb{R}^r$, and $t$ denotes the transpose. If the discretized FOM is known, the ROM can be inferred using a Galerkin projection, i.e. \begin{equation}\label{eq:GalerkinFOM} \dot{\bm{s}}(t) = {\bm{U}}^t\cdot \text{FOM}({\bm{U}}\cdot\bm{s}(t),\bm{u}(t)). \end{equation} However, in the general nonlinear case this reduced model remains computationally very intensive to evaluate, keeping in mind that one dimension of ${\bm{U}}$ is extremely large, i.e. the evaluation of the ROM might be as challenging as that of the FOM. For a detailed description of dimension reduction methods and in particular POD we refer to the textbook \cite{brunton2019data}. \subsection{Discrete Empirical Interpolation Method}\label{sec:MOR:DEIM} Though in general we might not know the discretized FOM, for terms not involving spatial operators, e.g., integral or differential operators, we do know the discretized version under the assumption of mass lumping \cite{thomee2007galerkin}, i.e., no direct coupling of degrees of freedom to other discretization points. In the case of chemical reactions, as considered here, mass lumping is often a valid assumption. This allows us to employ the Discrete Empirical Interpolation Method (DEIM) \cite{chaturantabut2010nonlinear,lumley1970toward,holmes2012turbulence} for some terms of the FOM \eqref{eq:FOM04} to identify the low dimensional equations. 
In our case, we want to identify a low dimensional version of the term $\hat{\bm{P}}(R(t),\hat{T}_s)$ in \eqref{eq:FOM04}, that is, to avoid evaluating the Galerkin projection \begin{equation}\label{eq:GalerkinArrhenius} \bm{P}(R(t),\bm{s}) = {R}(t)\;\mathcal{A}\; {\bm{U}}^t \; \exp(\mathcal{B} / ({\bm{U}} \cdot\bm{s})), \end{equation} and rather find a formulation not requiring $n$ evaluations of the exponential function plus the corresponding projections. DEIM addresses this challenge by leveraging concepts from sparse sampling, and it provides an efficient evaluation of nonlinearities that scales like the rank of the POD. DEIM is based on the snapshot matrix of the nonlinear terms: \begin{equation}\label{eq:DEIM01} \bm{P}_N = \begin{bmatrix} & \mid & \\ \ldots & \hat{\bm{P}}(R^i_j,\hat{\bm{s}}^i_j) & \ldots \\ & \mid & \\ \end{bmatrix}, \end{equation} where the columns are evaluations of the nonlinearity for all simulation sets $i$ at all time steps $j$. Using a corresponding low rank singular value decomposition of rank $s$\footnote{The rank $s$ can be chosen independently of $r$ but is typically of the same order.} \begin{equation}\label{eq:DEIM02} \bm{P}_N \approx \bm{U}_N \bm{\Sigma}_N \bm{V}^t_N, \end{equation} with $\bm{U}_N\in\mathbb{R}^{n \times s}$, $\bm{\Sigma}_N\in\mathbb{R}^{s \times s}$, and $\bm{V}_N\in\mathbb{R}^{m \times s}$, the method iteratively constructs a measurement matrix $\bm{P} \in \mathbb{R}^{n \times s}$. This measurement matrix defines the actual points where the nonlinearity is evaluated, instead of evaluating it at all points in the full $n$-dimensional space of the FOM. 
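The iterative construction of the interpolation indices (the nonzero rows of the measurement matrix) can be sketched in NumPy, following the greedy algorithm of \cite{chaturantabut2010nonlinear} (our own illustrative code, not the implementation used in this work):

```python
import numpy as np

def deim_indices(UN):
    """Greedy DEIM point selection (Chaturantabut & Sorensen 2010).

    UN: (n x s) POD basis of the nonlinear-term snapshots.
    Returns s interpolation indices; the measurement matrix is the
    identity restricted to these rows."""
    n, s = UN.shape
    idx = [int(np.argmax(np.abs(UN[:, 0])))]
    for j in range(1, s):
        # interpolate the j-th basis vector at the indices chosen so far
        c = np.linalg.solve(UN[idx][:, :j], UN[idx, j])
        r = UN[:, j] - UN[:, :j] @ c       # residual of that interpolation
        idx.append(int(np.argmax(np.abs(r))))
    return np.array(idx)

# toy check: s distinct indices are returned, and interpolation at the
# selected points reproduces any function lying in the span of the basis
rng = np.random.default_rng(2)
Q, _ = np.linalg.qr(rng.standard_normal((50, 5)))
idx = deim_indices(Q)
assert len(set(idx)) == 5
f = Q @ rng.standard_normal(5)             # f in span(Q)
f_deim = Q @ np.linalg.solve(Q[idx, :], f[idx])
assert np.allclose(f_deim, f)
```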
That is, the evaluation of \eqref{eq:GalerkinArrhenius} reduces to \begin{equation}\label{eq:DEIM03} \bm{P}(R(t),\bm{s}) \approx {R}(t)\;\mathcal{A}\; \bm{P}_{1} \; \exp(\mathcal{B} / (\bm{P}_{2} \bm{s})), \end{equation} with $\bm{P}_1 = {\bm{U}}^t \cdot \bm{U}_N \cdot (\bm{P}^t \cdot \bm{U}_N)^{-1}\in \mathbb{R}^{r \times s}$ and $\bm{P}_2 = \bm{P}^t \cdot {\bm{U}} \in \mathbb{R}^{s \times r}$. Thus, DEIM provides an efficient means for the reduction of the FOM if the terms are known explicitly, which in our case requires the assumption of mass lumping. For a more detailed explanation of DEIM, we refer, e.g., to the textbook \cite{brunton2019data}. \subsection{(Stabilized) Operator Inference}\label{sec:MOR:OI} While we can reduce some parts of the FOM using DEIM, any parts involving spatial operators cannot be reduced by this method. To address the remaining terms, we adopt the concept of OI \cite{peherstorfer2016data, qian2020lift}. As we have seen above, it is fair to assume that the ROM of dimension $r$ has the same type of nonlinearity as the FOM of dimension $n$. Thus, in the following we assume that the ROM in the reduced space is of the form \begin{align} \dot{\bm{s}} &= \bm{A} \bm{s}+ \bm{H}(\bm{s} \otimes \bm{s}) + \bm{B}\dot{v}_I(t) + \bm{P}(R(t),\bm{s}) \label{eq:ROM01}. \end{align} That is, except for the nonlinear term $\bm{P}(R(t),\cdot)$, which is given by Equation \eqref{eq:DEIM03} obtained by DEIM, it has at most quadratic polynomial terms\footnote{Since within this work we drop the dependence on the coolant velocity, the model is effectively linear, i.e. the terms $\bm{H}$ and $\bm{B}$ vanish.}. The goal of OI \cite{peherstorfer2016data, qian2020lift} is to infer the operators $\bm{A}$, $\bm{H}$, and $\bm{B}$ from a given set of time trajectory data $\hat{\bm{s}}$, respectively from its projected counterpart $\bm{s} ={\bm{U}}^t \cdot \hat{\bm{s}}$. 
Here, we follow the concept of \cite{qian2020lift} but introduce the additional function $\bm{P}(\cdot,\cdot)$ obtained from DEIM following \cite{benner2020operator}. OI \cite{peherstorfer2016data} is based on the following least-squares minimization problem: \begin{equation}\label{eq:OI_classic} \mathrm{argmin}_{\bm{A},\bm{H},\bm{B}} \sum_{i=1}^l\sum_{j=1}^k\big(\dot{\bm{s}}^{i}_{j} - \bm{\Theta}(\bm{s}^{i}_{j},\bm{u}^{i}_{j})\big)^2, \end{equation} where we have introduced the right hand side operator \begin{equation*} \bm{\Theta}(\bm{s},\bm{u}) = \bm{A}\bm{s} + \bm{H}(\bm{s}\otimes\bm{s}) + \bm{B}\dot{v}_{I} + \bm{P}(R,\bm{s}), \qquad \bm{u} = [R,\dot{v}_{I}], \end{equation*} in accordance with \eqref{eq:composition02}. The optimization problem \eqref{eq:OI_classic} thereby minimizes, in each time step, the difference between the observed flow / time derivative of the trajectory data and the one predicted by the model \eqref{eq:ROM01}. We would like to highlight that a good estimate of the time derivatives $\dot{\bm{s}}^{i}_{j}$ is imperative to obtain optimal results (if available, it is best to use exact time derivatives) \cite{peherstorfer2020sampling}. Furthermore, we would like to highlight that Problem \eqref{eq:OI_classic} often leads to unstable dynamics, i.e. the inferred matrices might have eigenvalues with positive real part even though the underlying dynamics are stable. Therefore, Problem \eqref{eq:OI_classic} typically requires appropriate regularization \cite{peherstorfer2020sampling,swischuk2020learning,mcquarrie2021data}. This can be a quite intricate task, and the required manual efforts and knowledge limit industrial applicability. \subsection{Operator Calibration}\label{sec:MOR:OC} OI poses some challenges in terms of time-derivative estimation as well as regularization. 
Therefore, we propose the following variant of OI: Instead of the simple quadratic least-squares minimization problem \eqref{eq:OI_classic}, let us consider the corresponding constrained optimization problem: \begin{align} &\hspace*{-2.0cm}\mathrm{argmin}_{\bm{A},\bm{H},\bm{B}} \sum_{i=1}^l\sum_{j=1}^k\left(\tilde{\bm{s}}^{i}_{j} - {\bm{s}}^{i}_{j}\right)^2 \label{eq:OI_constrained} \\ \intertext{such that for all $1 \leq i \leq l$ \, it holds:} \tilde{\bm{s}}^{i}_{j} &= \tilde{\bm{s}}^{i}_{j-1} + \delta_t \bm{\Theta}(\tilde{\bm{s}}^{i}_{j-1},\bm{u}^{i}_{j-1}) \quad j=1,\dots,k \nonumber \\ \tilde{\bm{s}}^{i}_{0} &=\bm{s}^{i}_{0}, \nonumber \end{align} where we have directly adopted an explicit Euler time discretization instead of using a continuous formulation. That is, instead of optimizing the approximation of the flow / time derivatives in all time steps, the optimization problem \eqref{eq:OI_constrained} aims to minimize the difference over the complete trajectory, i.e. we take a control-like approach \cite{rawlings2017model} where the control parameters are the matrix entries. In mathematical terms, instead of solving the simple quadratic optimization problem \eqref{eq:OI_classic} directly with a linear solver, we have to solve a nonlinear optimization problem. Since the number of optimization parameters is rather large (the number of coefficients of the matrices $\bm{A}$, $\bm{H}$, and $\bm{B}$), gradient-based optimizers are typically required. That is, to solve the nonlinear optimization problem \eqref{eq:OI_constrained}, we rely on the Karush–Kuhn–Tucker (KKT) formulation. In each optimization step, we need to solve the forward-in-time problem (the ROM to be identified) as well as the dual or adjoint problem, which corresponds to a backward-in-time problem. Based on these two solves, the corresponding gradient for the optimization step can be determined. 
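To make the two-stage procedure concrete, the following toy sketch (our own illustration in Python rather than MATLAB, restricted to a purely linear $2\times2$ model without a DEIM term) first performs the classical OI least-squares fit on finite-difference derivatives and then calibrates the operator by fitting the full explicit Euler trajectory, warm-started at the OI solution:

```python
import numpy as np
from scipy.linalg import expm
from scipy.optimize import minimize

# toy data: trajectory of ds/dt = A_true s, sampled exactly via expm
A_true = np.array([[-1.0, 2.0], [-2.0, -1.0]])
dt, k = 0.1, 50
s = np.empty((k + 1, 2))
s[0] = [1.0, 0.0]
Phi = expm(A_true * dt)
for j in range(k):
    s[j + 1] = Phi @ s[j]

# step 1 -- classical OI: least squares on finite-difference derivatives
ds = (s[1:] - s[:-1]) / dt
A_ls, *_ = np.linalg.lstsq(s[:-1], ds, rcond=None)
A_ls = A_ls.T

def rollout(A):
    """Explicit Euler rollout of the candidate model from s_0."""
    t = np.empty_like(s)
    t[0] = s[0]
    for j in range(k):
        t[j + 1] = t[j] + dt * (A @ t[j])
    return t

def traj_misfit(theta):
    return np.sum((rollout(theta.reshape(2, 2)) - s) ** 2)

# step 2 -- operator calibration: fit the whole trajectory, warm-started
res = minimize(traj_misfit, A_ls.ravel(), method='BFGS')
A_cal = res.x.reshape(2, 2)
assert traj_misfit(A_cal.ravel()) <= traj_misfit(A_ls.ravel())
```

For this small toy problem, BFGS with a numerically approximated gradient suffices; the adjoint-based gradient described above becomes relevant when the number of operator coefficients grows.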
The corresponding scheme is independent of the specific (gradient-based) optimizer (within this work we will use MATLAB's \texttt{fminunc}), and for more details on the corresponding constrained optimization we refer, e.g., to \cite{rawlings2017model}. Finally, when solving a nonlinear optimization problem, a good initial guess is essential. In our case we rely on the result of the classical OI as described above. Though the problem formulation and corresponding algorithms are more complex, the novel approach provides a number of advantages. First, one can expect that less or even no regularization is required. Furthermore, a highly accurate estimation of time derivatives is less crucial, since the actual trajectories are fitted and not the time derivatives themselves. Finally, the approach provides full control over the matrices, e.g., symmetries can be enforced straightforwardly. We would like to highlight that the approach is actually similar to concepts proposed in the machine learning community \cite{chen2018neural,rackauckas2020universal,um2020solver} in the context of neural networks. These have shown superior performance over unconstrained optimization problems. Therefore, we expect the same in our situation. \section{Results}\label{sec:Results} As a test case we consider the multi-tubular reactor introduced in Section \ref{sec:Example} (c.f. Figure \ref{fig:reactor}), inspired by a real-world industrial example. The goal of the model order reduction is to provide a real-time capable model which can run in parallel to the operation and predict temperatures in the reactor, e.g., as input for a control. For simplicity, we consider here only a variation of the heat input $R(t)$, while the inflow rate of the cooling fluid is kept fixed. We consider 5 different transient cases varying in the amount of heat input. 
For all cases we consider a coolant fluid with density $\rho_c=723\,\mathrm{kg/m^3}$, a dynamic viscosity $\mu=0.0008\,\mathrm{Pa\,s}$, a specific heat capacity $c_{p,c}=2590\,\mathrm{J/(kg\,K)}$, and a thermal conductivity $\mathcal{K}_c = 0.132\,\mathrm{W/(m\,K)}$. The porous solid containing the tubes has a density of $\rho_s=3062\,\mathrm{kg/m^3}$, a specific heat capacity of $c_{p,s}=2000\,\mathrm{J/(kg\,K)}$, and a thermal conductivity $\mathcal{K}_s = 0.2\,\mathrm{W/(m\,K)}$. The porous medium has the viscous resistance $\bm{\alpha}=[8000,8000,8]\,\mathrm{kg/(m^3\,s)}$ and the heat exchange area density is $A=0.18337\,\mathrm{m^2/m^3}$. For all cases, we consider the following boundary and initial conditions: a coolant inflow rate of $3.0\,\mathrm{kg/s}$, initial constant temperatures in the reactor for the solid and fluid part of $533.15\,\mathrm{K}$, and an inflow temperature of the coolant flow of $533.15\,\mathrm{K}$. The heat input is varied across the different cases as follows: For all 5 cases, the heating is kept constant with heat load $R$ from $0\,\mathrm{s}$ until $36\,010\,\mathrm{s}$ ($10\,\mathrm{h}$) and switched off afterwards (c.f. Figure \ref{fig:FOMscenario}). Thereby, we select $R\in\{0.5, 1.0, 1.5\}$ as training cases, i.e. the ones used to construct the ROM, and $R\in\{0.75, 1.25\}$ as validation cases. We furthermore choose $\mathcal{A}=5000\,\mathrm{J/m^3}$ and $\mathcal{B} = 1500\,\mathrm{K}$. \begin{figure} \centering \includegraphics[width=12.0cm]{Figures/FOM_temperatures.png} \caption{Heating scenario considered as a test case ($R=1$).} \label{fig:FOMscenario} \end{figure} The corresponding FOM simulations using the geometry shown in Figure~\ref{fig:reactor} are performed in the commercial CFD solver Simcenter STAR-CCM+\footnote{\url{https://www.plm.automation.siemens.com/global/en/products/simcenter/}}. The reduced order model is computed using \texttt{MATLAB}. First of all, let us identify a suitable dimension for the low dimensional model (c.f. Section \ref{sec:MOR:POD}). 
In Figure \ref{fig:POD_error}, the spectrum of the POD as well as the errors for the training and validation sets using different numbers of modes are shown. For the remainder of this section we choose 8 modes for the POD as well as for the DEIM (c.f. Section \ref{sec:MOR:DEIM}). \begin{figure} \centering \raisebox{3.5cm}{\small(a)} \includegraphics[width=5.4cm]{Figures/POD_spectrum.png} \raisebox{3.5cm}{\small(b)} \includegraphics[width=5.4cm]{Figures/POD_maxmse.png} \caption{Proper Orthogonal Decomposition of the full 3D simulation snapshots: (a) spectrum and (b) maximum mean squared errors (blue: training data; orange: validation data).} \label{fig:POD_error} \end{figure} We project the dynamics onto the low dimensional space and identify in this space the corresponding dynamical system using a combination of DEIM, OI, and operator calibration as described in Section \ref{sec:MOR}. This results in the following reduced order model as defined by Equations \eqref{eq:ROM01} and \eqref{eq:DEIM03}: \begin{equation}\label{eq:explicitROM01} \dot{\bm{s}} = \bm{A} \bm{s} + {R}(t)\;\mathcal{A}\; \bm{P}_{1} \; \exp(\mathcal{B} / (\bm{P}_{2} \bm{s})), \end{equation} with the $\mathbb{R}^{8\times8}$ matrices $\bm{A}$, $\bm{P}_1$, and $\bm{P}_2$ as provided in Appendix \ref{app:ROMmatrices} (considering a POD and DEIM with 8 modes each) and given constants $\mathcal{A}$ and $\mathcal{B}$ (c.f. above). Please note that we have dropped the flow of the coolant as it is constant, and thus model \eqref{eq:explicitROM01} has only a linear term in addition to the Arrhenius law, as mentioned above. The corresponding errors comparing the full 3D simulations projected onto the reduced low dimensional space with the simulated low dimensional ROM \eqref{eq:explicitROM01} obtained through MOR are shown in Figure \ref{fig:OI_error}. 
We clearly see that using OI (Tikhonov regularization with $\lambda=1.0$; for lower values the trajectories become unstable) plus operator calibration reduces the errors by one order of magnitude, except at the point in time where the heating is switched off. \begin{figure} \centering \raisebox{3.5cm}{\small(a)} \includegraphics[width=5.4cm]{Figures/OI_DEIM_noncalib.png} \raisebox{3.5cm}{\small(b)} \includegraphics[width=5.4cm]{Figures/OI_DEIM_calib.png} \\ \caption{Operator Inference plus DEIM using 8 modes each: (a) relative mean squared error of the dynamics predicted using stabilized operator inference (with stabilization parameter $\lambda= 1.0$) and (b) the same error after additional operator calibration (all 5 data sets, encoded in different colors).} \label{fig:OI_error} \end{figure} Finally, we project the reduced order model predictions back to the full space and compare the results with the original data. On the one hand, we compare the time evolution of the average, minimum, and maximum temperature profiles of the cooling fluid as well as of the solid for the five different cases in Figure \ref{fig:ROMprediction_curves}. On the other hand, we also show the corresponding 2D temperature results (different cuts with planes) of the full 3D simulation for the first validation case ($R=0.75$) in Figure \ref{fig:ROMprediction_3d}. We observe a good match of the spatial profiles as well as of the average temperatures. The prediction of maximum and minimum temperatures could be further improved but is sufficiently accurate for most industrial applications.
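The Tikhonov-regularized least-squares step at the heart of operator inference can be sketched as follows (a minimal version with a synthetic linear-plus-forcing system, not the exact pipeline of Section \ref{sec:MOR}):

```python
import numpy as np

def operator_inference(S, dS, F, lam):
    """Tikhonov-regularized least squares fit of dS ~ A S + B F.

    S, dS: (r, K) reduced states and their time derivatives;
    F: (r, K) evaluated nonlinear (DEIM-type) term; lam: regularization."""
    D = np.vstack([S, F])                    # (2r, K) data matrix
    G = D @ D.T + lam * np.eye(D.shape[0])   # regularized normal equations
    O = np.linalg.solve(G, D @ dS.T).T       # stacked operator O = [A, B]
    r = S.shape[0]
    return O[:, :r], O[:, r:]

# Recover a known linear-plus-forcing system from synthetic data.
rng = np.random.default_rng(2)
r, K = 4, 200
A_true = -np.diag(np.arange(1.0, r + 1.0))
B_true = rng.standard_normal((r, r))
S = rng.standard_normal((r, K))
F = rng.standard_normal((r, K))
dS = A_true @ S + B_true @ F
A_hat, B_hat = operator_inference(S, dS, F, lam=1e-8)
print(np.max(np.abs(A_hat - A_true)) < 1e-6)  # True
```

Increasing `lam` trades fit accuracy for stability of the inferred operators, which is the role the stabilization parameter $\lambda$ plays above.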
\begin{figure} \centering \raisebox{3.4cm}{\small(1-S)} \includegraphics[width=5.0cm]{Figures/F_OI-DEIM_Set1_T1.png} \raisebox{3.4cm}{\small(1-C)} \includegraphics[width=5.0cm]{Figures/F_OI-DEIM_Set1_T2.png}\\ \raisebox{3.4cm}{\small(2-S)} \includegraphics[width=5.0cm]{Figures/F_OI-DEIM_Set2_T1.png} \raisebox{3.4cm}{\small(2-C)} \includegraphics[width=5.0cm]{Figures/F_OI-DEIM_Set2_T2.png}\\ \raisebox{3.4cm}{\small(3-S)} \includegraphics[width=5.0cm]{Figures/F_OI-DEIM_Set3_T1.png} \raisebox{3.4cm}{\small(3-C)} \includegraphics[width=5.0cm]{Figures/F_OI-DEIM_Set3_T2.png}\\ \raisebox{3.4cm}{\small(4-S)} \includegraphics[width=5.0cm]{Figures/F_OI-DEIM_Set4_T1.png} \raisebox{3.4cm}{\small(4-C)} \includegraphics[width=5.0cm]{Figures/F_OI-DEIM_Set4_T2.png}\\ \raisebox{3.4cm}{\small(5-S)} \includegraphics[width=5.0cm]{Figures/F_OI-DEIM_Set5_T1.png} \raisebox{3.4cm}{\small(5-C)} \includegraphics[width=5.0cm]{Figures/F_OI-DEIM_Set5_T2.png}\\ \caption{Operator Inference plus DEIM: Minimum, average, and maximum temperatures $T_s$ ($i$-S) and $T_c$ ($i$-C) for all 5 sets ($i=1\ldots5$) over time. Reduced data is shown in dashed and full data in solid lines (using 8 POD and DEIM modes).} \label{fig:ROMprediction_curves} \end{figure} \begin{figure} \centering \raisebox{4.6cm}{(a)} \includegraphics[width=10.0cm]{Figures/solid_small.png}\\[0.4cm] \raisebox{4.6cm}{(b)} \includegraphics[width=10.0cm]{Figures/fluid_small.png} \caption{Comparison of spatial temperature profiles for the validation case $R=0.75$: (a) solid temperatures and (b) fluid temperatures for different planes cutting the reactor. The top row shows the original STAR-CCM+ results and the bottom row the MOR-predictions projected back into the full space.} \label{fig:ROMprediction_3d} \end{figure} Performing a separate POD for the solid and fluid parts might improve the predictions, as suggested by \cite{benner2020operator}.
This might help in particular since the solid temperatures cover a much smaller domain (only the rose-colored part in Figure \ref{fig:reactor}). This will be further investigated in the future. \section{Conclusion}\label{sec:Conclusion} Within this contribution we have addressed a combination of Proper Orthogonal Decomposition (POD), Discrete Empirical Interpolation Methods (DEIM), and Operator Inference (OI) to infer a fast low dimensional model with real-time capability for simulation in parallel to operations, following \cite{benner2020operator}. Thereby we have extended the approach by introducing the concept of operator calibration, inspired by recent works in the field of machine learning \cite{rackauckas2020universal,chen2018neural,um2020solver}. The proposed method has been validated on the case of a tubular reactor, inspired by a real-world industrial application. In particular we would like to highlight that the reduced model \eqref{eq:explicitROM01} itself is extremely compact and well suited for integration into other applications, e.g., using the Functional Mock-up Interface (FMI) standard \cite{blockwitz2012functional} for model exchange or model integration in other tools. The approach is well suited for industrial applications since it allows one to derive reduced order models for complex 3D multi-physics simulations without knowing any solver details, which is a key requirement when working with commercial simulation tools. Furthermore, through the use of the novel concept of operator calibration, the approach relaxes the requirement of exactly estimated time derivatives (often not available) as well as the careful choice of regularization methods required by state-of-the-art approaches for real-world problems \cite{peherstorfer2020sampling,swischuk2020learning,mcquarrie2021data}. In future work, we plan to extend the number of scenarios considered, e.g., including varying cooling flow and more dynamic heating profiles.
Furthermore, we plan to address more complex examples as well as to explore the use of separated reduced bases for the flow and the different temperature distributions, as suggested by \cite{benner2020operator}. Last but not least, the quantification of uncertainties is crucial for industrial applications, and we are currently investigating corresponding concepts following the work of \cite{soize2017nonparametric,farhat2018stochastic}. Despite these future ambitions, we believe that the current results already show the potential of this extended concept of OI. The combined operator inference and calibration approach will likely become a standard workhorse for industrial applications in the context of digital twins. \paragraph*{Acknowledgements:} We would like to acknowledge the funding through the Siemens Digital Industries Software project ICT-31. Furthermore, we would like to thank Diego Davila for providing the STAR-CCM+ model of the reactor and corresponding simulation data as well as Peter Mas and Daniel Berger for the valuable feedback and discussions. \bibliographystyle{plain}
\section{Introduction}\label{sec:1_intro} \input{sections/FM_1_introduction} \section{Heteroskedastic finite mixture panel normal regression model}\label{sec:2_model} \input{sections/FM_2_finite_mixture} \section{Likelihood ratio test for $H_0:M = 1$ against $H_A:M=2$}\label{sec:3_test_1} \input{sections/FM_3_m1} \section{Modified EM test for $H_0: M = M_0$ against $H_A: M = M_0 +1$}\label{sec:5_EM} \input{sections/FM_5_EM} \section{Simulation}\label{sec:6_sim} \input{sections/FM_6_simulated} \newpage \section{Empirical Application}\label{sec:7_app} \input{sections/FM_7_application} \section{Conclusion} \input{sections/FM_8_conclusion} \bibliographystyle{apalike} \subsection{Reparameterization} To extract the direction of Fisher Information matrix singularity, we adapt the reparameterization approach by \cite{Kasahara2012} and consider the following one-to-one reparameterization of $\bs{\theta}_1$ and $\bs{\theta}_2$ given $\alpha$: \begin{equation}\label{eq:m0_repar2} \begin{pmatrix} \bs{\lambda} \\ \bs{\nu} \end{pmatrix} := \begin{pmatrix} \bs{\theta}_1 - \bs{\theta}_2 \\ \alpha \bs{\theta}_1 + (1 - \alpha) \bs{\theta}_2 \end{pmatrix} \text{ so that } \begin{pmatrix} \bs{\theta}_1 \\ \bs{\theta}_2 \end{pmatrix} = \begin{pmatrix} \bs{\nu} + ( 1- \alpha) \bs{\lambda} \\ \bs{\nu} - \alpha \bs{\lambda} \end{pmatrix}, \end{equation} where $\bs{\nu}$ and $\bs{\lambda}$ are both $(q+2) \times 1$ reparameterized parameter vectors with $\bs{\nu}=(\nu_\mu,\nu_\sigma,\bs{\nu}_{\bs\beta}\t)\t$ and $\bs\lambda = (\lambda_{\mu},\lambda_{\sigma}, (\bs\lambda_{\beta})\t)\t= ( \mu_1 - \mu_2, \sigma_1^2 - \sigma_2^2, (\bs \beta_1 - \bs\beta_2)\t)\t$. We also write $\bs{\theta}$ and $\bs{\lambda}$ as $\bs{\theta} =(\theta_1,\theta_2,\theta_3,\ldots,\theta_{q+2})\t:= (\mu,\sigma^2,\beta_1,\ldots,\beta_q)\t$ and $\bs{\lambda}=(\lambda_1,\lambda_2,\lambda_3,\ldots,\lambda_{q+2})\t := (\lambda_\mu,\lambda_\sigma,\lambda_{\beta_1},\ldots,\lambda_{\beta_q})\t$.
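As a quick sanity check, the map in \eqref{eq:m0_repar2} and its inverse can be verified numerically (a sketch with arbitrary numbers, not part of the paper's code):

```python
import numpy as np

def to_repar(theta1, theta2, alpha):
    """(theta1, theta2) -> (lambda, nu) as in the reparameterization."""
    lam = theta1 - theta2
    nu = alpha * theta1 + (1.0 - alpha) * theta2
    return lam, nu

def from_repar(lam, nu, alpha):
    """Inverse map: (lambda, nu) -> (theta1, theta2)."""
    return nu + (1.0 - alpha) * lam, nu - alpha * lam

theta1 = np.array([0.5, 1.2, -0.3])   # arbitrary (q+2)-vectors with q = 1
theta2 = np.array([-0.5, 0.8, 0.1])
alpha = 0.3
lam, nu = to_repar(theta1, theta2, alpha)
t1, t2 = from_repar(lam, nu, alpha)
print(np.allclose(t1, theta1) and np.allclose(t2, theta2))  # True
```

The round trip confirms that the reparameterization is one-to-one for any fixed $\alpha$.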
Define the space for reparameterized parameters as $$ \bs{\psi} := (\bs{\gamma}\t,\bs{\nu}\t,\bs{\lambda}\t)\t \in \Theta_{\bs\psi}, $$ where $\Theta_{\bs{\psi}} = \{ \bs{\psi}: \bs{\gamma} \in \Theta_{\bs\gamma}, \bs{\nu} + ( 1 - \alpha) \bs{\lambda} \in \Theta_{\bs\theta}, \bs{\nu} - \alpha \bs{\lambda} \in \Theta_{\bs\theta}\}.$ Under the null hypothesis $H_{01}: \bs{\theta}_1 = \bs{\theta}_2 = \bs{\theta}^*$, we have $\bs{\lambda} = (0,\ldots, 0)\t$ and $\bs{\nu} = \bs{\theta}^*$. We rewrite the reparameterized parameters under the null hypothesis as $\bs{\psi}^{*} = ((\bs{\gamma}^*)\t, (\bs{\theta}^*)\t,0,\ldots,0)\t$. Under the reparameterized parameter space, the density function and its logarithm are expressed as \begin{align}\label{eq:repar} g(\bs{w};\bs{\psi},\alpha) & = \alpha f(\bs{w};\bs{\gamma},\bs{\nu} + (1 - \alpha) \bs{\lambda}) + (1 - \alpha) f(\bs{w};\bs{\gamma},\bs{\nu} - \alpha \bs{\lambda})\quad\text{and}\\ l(\bs{w};\bs{\psi},\alpha) & = \log g(\bs{w};\bs{\psi},\alpha).\nonumber \end{align} Write $\bs{\psi}$ as $\bs{\psi} = (\bs{\eta}\t,\bs{\lambda}\t)\t$ with $\bs{\eta} = (\bs{\gamma}\t,\bs{\nu}\t)\t$, where $\bs{\eta}^* = ((\bs{\gamma}^*)\t,(\bs{\nu}^* )\t)\t$ and $\bs{\lambda}^* = \bs{0}$. Denote the parameter space of $\bs\eta$ and $\bs\lambda$ by $\Theta_{\bs\eta}\subset \mathbb{R}^{p+q+2}$ and $\Theta_{\bs\lambda}\subset \mathbb{R}^{q+2}$, respectively. Under this reparameterization, the first order derivatives of the reparameterized log density with respect to the reparameterized parameters $\bs{\eta}$ are identical to those under the one-component model, and the first order derivative with respect to $\bs{\lambda}$ is a zero vector: \begin{equation} \begin{split} \nabla_{\bs{\eta}\t} l(\bs{w};\bs{\psi}^*,\alpha) = \frac{\nabla_{(\bs{\gamma}\t,\bs{\theta}\t)\t} f(\bs{w};\bs\gamma^*,\bs\theta^*)}{f(\bs{w};\bs\gamma^*,\bs\theta^*)} \quad\text{and}\quad \nabla_{\bs{\lambda}} l(\bs{w};\bs{\psi}^*,\alpha) = \bs{0} .
\end{split} \end{equation} With $\nabla_{\bs\lambda} l(\bs{w};\bs{\psi}^*,\alpha) = \bs{0}$, the Fisher information matrix is singular, and the standard quadratic approximation fails. Consequently, the information on $\bs{\lambda}$ is provided by the second order derivative of $l(\bs{w};\bs{\psi},\alpha)$ with respect to $\bs{\lambda}$, which we use to identify $\bs{\lambda}$: \begin{equation} \nabla_{\bs{\lambda} \bs{\lambda}\t} l(\bs{w};\bs{\psi}^*,\alpha) = \alpha (1-\alpha) \frac{ \nabla_{\bs{\theta} \bs{\theta}\t}f(\bs{w};\bs{\gamma}^*,\bs{\theta}^*)}{f(\bs{w};\bs{\gamma}^*,\bs{\theta}^*)}. \end{equation} When $\alpha$ is bounded away from $0$ and $1$, the elements of $\nabla_{\bs{\lambda} \bs{\lambda}\t} l(\bs{W};\bs{\psi}^*,\alpha)$ are mean-zero random variables. Note that, unlike the cross-sectional normal mixture regression models analyzed by \cite{Kasahara2015a}, there exists no collinearity between the first and second order derivatives for the panel data normal mixture regression models. Let $f^*$ and $\nabla f^*$ denote $f(\bs{W};\bs{\gamma}^*,\bs{\theta}^*)$ and $\nabla f(\bs{W};\bs{\gamma}^*,\bs{\theta}^*)$. Define the score vector $\bs{s}(\bs{W})$ as \begin{align}\label{eq:s_1} &\bs{s}(\bs{W}) = \begin{pmatrix} \bs{s}_{\bs{\eta}}(\bs W) \\ \bs{s}_{\bs{\lambda} \bs\lambda}(\bs W) \end{pmatrix}, \quad \text{where} \quad \underset{( p + q +2 ) \times 1}{\bs{s}_{\bs{\eta} }(\bs W) }:= \frac{\nabla_{(\bs{\gamma}\t,\bs{\theta}\t)\t} f^*}{f^*} \quad \text{ and } \quad \underset{ ( q + 2 ) ( q + 3 ) / 2 \times 1}{\bs{s}_{\bs{\lambda} \bs\lambda}(\bs W) } :=\frac{\widetilde{\nabla}_{\bs\theta\bs\theta\t} f^*}{f^*}.
\end{align} Here, $\widetilde{\nabla}_{\bs\theta\bs\theta\t} f^* := (c_{11} \nabla_{\theta_1\theta_1} f^*,\ldots,c_{(q+2)(q+2)}\nabla_{\theta_{q+2}\theta_{q+2}} f^*,c_{12}\nabla_{\theta_{1}\theta_{2}} f^*,\ldots,c_{(q+1)(q+2)}\nabla_{\theta_{q+1}\theta_{q+2}} f^*)\t$ for $\bs{\theta}=(\theta_1,\theta_2,\theta_3,\ldots,\theta_{q+2})\t:=(\mu,\sigma^2,\beta_1,\ldots,\beta_q)\t$, where $c_{jk}=1/2$ for $j\neq k$ and $c_{jk}=1$ for $j=k$. Appendix \ref{sec:appendix_score_1} derives an explicit expression for the score function $\bs{s}(\bs{w})$ written in terms of Hermite polynomials. Collect the relevant normalized reparameterized parameters and define $\bs{t}(\bs{\psi},\alpha)$ as \begin{equation} \label{eq:t_1} \bs{t}(\bs{\psi},\alpha) = \begin{pmatrix} \bs{t}_{\bs{\eta}} \\ \bs{t}_{\bs{\lambda} }(\bs{\lambda},\alpha) \end{pmatrix}= \begin{pmatrix} \bs{\eta} - \bs{\eta}^* \\ \alpha ( 1- \alpha) v(\bs{\lambda}) \end{pmatrix}, \end{equation} where $v(\bs{\lambda})$ is a vector of unique elements of $\bs\lambda\bs\lambda\t$ given by \begin{equation} \label{eq:v} v(\bs{\lambda}) = (\lambda_1\lambda_1,\ldots,\lambda_{q+2}\lambda_{q+2},\lambda_{1}\lambda_2,\ldots,\lambda_{q+1}\lambda_{q+2})\t, \end{equation} whose length is $q_\lambda:=(q+2)(q+3)/2$. Let $L_n(\bs{\psi},\alpha) := \sum_{i=1}^n l(\bs{W}_i;\bs{\psi},\alpha)$ be the reparameterized log likelihood function and define the normalized score vector \[ \bs S_n := n^{-1/2} \sum_{i=1}^n {\bs s}(\bs W_i).
\] Then, expanding $L_n(\bs{\psi},\alpha)$ four times around $(\bs\psi^*,\alpha)$, we may write $2\{L_n(\bs{\psi},\alpha)- L_n(\bs{\psi}^*,\alpha)\}$ as a quadratic function of $\sqrt{n}\bs{t}(\bs{\psi},\alpha)$ as \begin{align} 2\{L_n(\bs{\psi},\alpha)- &L_n(\bs{\psi}^*,\alpha)\} = 2(\sqrt{n}\bs{t}(\bs{\psi},\alpha))\t \bs S_n - (\sqrt{n}\bs{t}(\bs{\psi},\alpha))\t \bs {\mathcal{I}}_n(\sqrt{n}\bs{t}(\bs{\psi},\alpha)) + R_n(\bs{\psi},\alpha) \label{eq:LR0} \\ & \quad= \bs G_n\t \bs{\mathcal{I}}_n \bs G_n - \left[ \sqrt{n}\bs{t}(\bs{\psi},\alpha)- \bs G_n\right]\t \bs{\mathcal{I}}_n \left[ \sqrt{n}\bs{t}(\bs{\psi},\alpha)- \bs G_n\right] + R_n(\bs{\psi},\alpha),\label{eq:LR1} \end{align} where $\bs {\mathcal{I}}_n$ is the negative of the sample Hessian defined in the proof of Proposition \ref{prop:expansion} while $\bs G_n:=\bs{\mathcal{I}}_n^{-1}\bs S_n$. Let $\bs{\mathcal{I}}=E[\bs s(\bs W)\bs s(\bs W)\t]$. \begin{assumption}\label{assumption:2} (a) \(\bs{X}\) and \(\bs{Z}\) have finite $8$-th moments. (b) $E[\bs U \bs U\t]$ is non-singular, where $\bs U = [1, \bs X\t, \bs Z\t]\t$. \end{assumption} \begin{proposition}\label{prop:expansion} Suppose that Assumptions \ref{assumption:1} and \ref{assumption:2} hold. Then, under $H_0: M=1$, for $\alpha\in (0,1)$, (a) for any $\delta>0$, $\limsup_{n\rightarrow \infty} \Pr(\sup_{\bs\psi \in \Theta_{\bs\psi}: ||\bs\psi-\bs\psi^*||\leq \kappa}|R_n(\bs\psi,\alpha)| > \delta (1+||n\bs t(\bs\psi,\alpha)||^2)) \rightarrow 0$ as $\kappa\rightarrow 0$, (b) $ \bs S_n \overset{d}{\to} \bs S\sim N(0,\bs{\mathcal{I}})$, and (c) $\bs{\mathcal{I}}_n \overset{p}{\to} \bs{\mathcal{I}}$, where $\bs{\mathcal{I}}$ is finite and non-singular.
\end{proposition} The set of feasible values of $\sqrt{n} \bs t(\bs \psi,\alpha)$ is given by the shifted and rescaled parameter space for $(\bs\eta,v(\bs\lambda))$ defined as $\Lambda_{n} := \sqrt{n} (\Theta_{\bs\eta}-\bs\eta^*) \times \sqrt{n} \alpha(1-\alpha)v( \Theta_{\bs\lambda})$, where $v(A):= \{ t \in \mathbb{R}^{q_\lambda}: t = v(\lambda) \text{ for some } \lambda \in A \subset \mathbb{R}^{q+2}\}$. Because $\Lambda_n/\sqrt{n} $ is locally approximated by a cone $\Lambda := \mathbb{R}^{p+q+2} \times v(\mathbb{R}^{q+2})$, we may apply Lemma 2 of \cite{Andrews1999} to approximate the distribution of the supremum of the right hand side of (\ref{eq:LR1}) as \[ \max_{\bs\psi\in \Theta_{\bs\psi}}2\{L_n(\bs{\psi},\alpha)- L_n(\bs{\psi}^*,\alpha)\} \overset{d}{\rightarrow} \bs G \t \bs{\mathcal{I}} \bs G-\inf_{\bs t\in \Lambda} (\bs t-\bs G)\t\bs{\mathcal{I}} (\bs t-\bs G), \] where $\bs G = \bs{\mathcal{I}}^{-1} \bs S\sim N(0,\bs{\mathcal{I}}^{-1})$. This allows us to characterize the asymptotic distribution of the LRTS. For each $\alpha\in (0,1)$, define the reparameterized PMLE by \begin{equation} \label{eq:pmle} \hat{\bs{\psi}}= \arg \max_{\bs{\psi} \in \Theta_{\bs\psi}} L_n(\bs{\psi},\alpha)+ \sum_{j=1}^2 p_n(\sigma_j^2(\bs{\psi},\alpha)) \end{equation} with $\hat{\bs{\psi}} := (\hat{\bs{\gamma}}\t,\hat{\bs{\nu}}\t,\hat{\bs{\lambda}}\t)\t$, where $\Theta_{\bs\psi}$ is defined as the space of $\bs{\psi}$ so that the implied $\bs{\vartheta}_2$ is in $\Theta_{\bs{\vartheta}}$ and $\sigma_j^2(\bs{\psi},\alpha)$ is the value of $\sigma_j^2$ implied by the value of $\bs{\psi}$ and $\alpha$ (e.g., $\sigma_1^2(\bs\psi,\alpha)=\nu_\sigma +(1-\alpha)\lambda_\sigma$). Let $(\hat{\bs{\gamma}}_0,\hat{\bs{\theta}}_0)$ be the one-component MLE that maximizes the one-component log likelihood function $L_{0,n}(\bs{\gamma},\bs{\theta}) = \sum_{i=1}^n \log f(\bs{W}_i;\bs{\gamma},\bs{\theta}) $.
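The second order derivative identity $\nabla_{\bs{\lambda} \bs{\lambda}\t} l(\bs{w};\bs{\psi}^*,\alpha) = \alpha(1-\alpha)\nabla_{\bs{\theta}\bs{\theta}\t}f/f$ that drives the expansion above can be checked by finite differences in a minimal scalar example (a unit-variance normal with unknown mean; a hypothetical sketch, not the paper's panel model):

```python
import numpy as np

def phi(w, mu):
    """Unit-variance normal density with mean mu."""
    return np.exp(-0.5 * (w - mu) ** 2) / np.sqrt(2.0 * np.pi)

def log_g(w, nu, lam, alpha):
    """Reparameterized two-component log density, scalar theta = mean."""
    return np.log(alpha * phi(w, nu + (1.0 - alpha) * lam)
                  + (1.0 - alpha) * phi(w, nu - alpha * lam))

w, nu, alpha, h = 0.7, 0.2, 0.3, 1e-4
# Central finite difference of l in lambda at lambda = 0.
d2l = (log_g(w, nu, h, alpha) - 2.0 * log_g(w, nu, 0.0, alpha)
       + log_g(w, nu, -h, alpha)) / h ** 2
# alpha(1-alpha) f''/f for the unit-variance normal: f''/f = (w - nu)^2 - 1.
target = alpha * (1.0 - alpha) * ((w - nu) ** 2 - 1.0)
print(abs(d2l - target) < 1e-5)  # True
```

The first derivative in $\lambda$ vanishes identically at $\lambda = 0$, so the curvature term carries all the information, consistent with the singular-information argument.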
Define the LRTS and the Penalized Likelihood Ratio Test Statistic (PLRTS) of testing $H_{01}$ with a small positive constant $\epsilon$ restricting $\alpha$ as \begin{align}\label{eq:LR_def} LR_n(\epsilon) &:= \max_{\alpha \in [\epsilon, 1- \epsilon]} 2 \{L_n(\hat{\bs{\psi}},\alpha) - L_{0,n}(\hat{\bs{\gamma}}_0,\hat{\bs{\theta}}_0) \}\ \text{and}\ PLR_n(\epsilon) := LR_n(\epsilon)+ \sum_{j=1}^2 p_n(\sigma_j^2(\hat{\bs{\psi}},\alpha)). \end{align} The EM test we develop below does not impose a direct constraint on the value of $\alpha$ but instead puts a penalty on $\alpha$. With $\bs{s}(\bs{W})$ in (\ref{eq:s_1}), partition $\bs{\mathcal{I}}=E[\bs s(\bs W)\bs s(\bs W)\t]$ and define \begin{equation} \begin{split} \bs{\mathcal{I}} = \begin{pmatrix} \bs{\mathcal{I}}_{\bs\eta} & \bs{\mathcal{I}}_{\bs\eta\bs\lambda} \\ \bs{\mathcal{I}}_{\bs\lambda\bs\eta} & \bs{\mathcal{I}}_{\bs\lambda\bs\lambda} \end{pmatrix}, \quad \bs{\mathcal{I}}_{\bs{\eta}} = E[\bs s_{\bs\eta}(\bs W) \bs s_{\bs\eta}(\bs W) \t], \quad \bs{\mathcal{I}}_{\bs\lambda\bs\eta} = E[\bs s_{\bs\lambda\bs\lambda }(\bs W) \bs s_{\bs\eta}(\bs W)\t ], \quad \bs{\mathcal{I}}_{\bs\eta\bs\lambda} = \bs{\mathcal{I}}_{\bs\lambda\bs\eta} \t, \\ \bs{\mathcal{I}}_{\bs\lambda\bs\lambda} =E[\bs s_{\bs\lambda\bs\lambda }(\bs W) \bs s\t_{\bs\lambda\bs\lambda}(\bs W) ], \quad \bs{\mathcal{I}}_{\bs\lambda,\bs\eta} = \bs{\mathcal{I}}_{\bs\lambda\bs\lambda} - \bs{\mathcal{I}}_{\bs\lambda\bs\eta} \bs{\mathcal{I}}_{\bs\eta} ^{-1} \bs{\mathcal{I}}_{\bs\eta\bs\lambda} , \quad \bs{G}_{\bs{\lambda},\bs\eta} := ( \bs{\mathcal{I}}_{\bs\lambda,\bs\eta} )^{-1} \bs{S}_{\bs\lambda,\bs\eta} , \end{split} \end{equation} where $\bs{S}_{\bs\lambda,\bs\eta} \sim N(0,\bs{\mathcal{I}}_{\bs\lambda,\bs\eta} )$.
Define a set that characterizes the admissible values of $\sqrt{n}\bs{t}_{\bs\lambda}(\bs{\lambda},\alpha)$ when $n \to \infty$ by the cone \begin{equation} \Lambda_{\bs\lambda} = \Big\{ \sqrt{n} \alpha ( 1 - \alpha) v(\bs{\lambda}) : \bs{\lambda} \in \Theta_{\bs\lambda} \Big\}. \end{equation} Define $\hat{\bs{t}}_{\bs\lambda}$ by \begin{equation}\label{eq:t_lambda_def} r_{\lambda}(\hat{\bs t}_{\bs \lambda}) = \inf_{\bs{t}_{\bs{\lambda}} \in \Lambda_{\bs\lambda}} r_{\lambda}(\bs{t}_{\bs{\lambda}}), \quad r_{\lambda} (\bs{t}_{\bs{\lambda}}) : = (\bs{t}_{\bs{\lambda}} - \bs{G}_{\bs{\lambda},\bs\eta})\t \bs{\mathcal{I}}_{\bs\lambda,\bs\eta} (\bs{t}_{\bs{\lambda}} - \bs{G}_{\bs{\lambda},\bs\eta} ), \end{equation} so that $\hat{\bs{t}}_{\bs\lambda}$ is the projection of the Gaussian random vector $\bs{G}_{\bs{\lambda},\bs\eta}$ onto the cone $\Lambda_{\bs\lambda}$. The following proposition establishes the asymptotic distribution of the LRTS under the null hypothesis $H_0: M=1$. \begin{proposition}\label{prop:t2_distribution} Suppose that Assumptions \ref{assumption:1} and \ref{assumption:2} hold. Under the null hypothesis $H_0: M = 1$, (a) $\bs t(\hat{\bs{\psi}},\alpha)=O_p(n^{-1/2})$ for any $\alpha\in (0,1)$, (b) $LR_n(\epsilon) \overset{d}{\to} (\hat{\bs{t}}_{\bs\lambda} )\t \bs{\mathcal{I}}_{\bs\lambda,\bs\eta} \hat{\bs{t}}_{\bs\lambda}$ and $PLR_n(\epsilon) \overset{d}{\to} (\hat{\bs{t}}_{\bs\lambda} )\t \bs{\mathcal{I}}_{\bs\lambda,\bs\eta} \hat{\bs{t}}_{\bs\lambda} + \text{plim}_{n\rightarrow \infty} \sum_{j=1}^2 p_n(\sigma_j^2(\hat{\bs{\psi}},\alpha))$. \end{proposition} Proposition \ref{prop:t2_distribution}(a) implies that $\hat{\bs{\theta}}_j - \bs{\theta}^*=O_p(n^{-1/4})$ for $j=1,2$. When we choose the penalty function so that $\sum_{j=1}^2 p_n(\sigma_j^2(\hat{\bs{\psi}},\alpha))=o_p(1)$ under the null hypothesis of $M=1$, $PLR_n(\epsilon) $ has the same asymptotic null distribution as that of $LR_n(\epsilon)$.
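A brute-force numerical sketch of the projection in \eqref{eq:t_lambda_def} minimizes the quadratic form over $\bs\lambda$ by multistart Nelder--Mead; the dimensions, the vector playing the role of $\bs{G}_{\bs\lambda,\bs\eta}$, and the identity information matrix below are toy choices, not the closed-form projections one would use in practice:

```python
import numpy as np
from scipy.optimize import minimize

def v(lam):
    """Unique elements of lam lam': squares first, then cross products."""
    d = len(lam)
    sq = [lam[j] ** 2 for j in range(d)]
    cross = [lam[j] * lam[k] for j in range(d) for k in range(j + 1, d)]
    return np.array(sq + cross)

def project_on_cone(G, I_mat, d, n_starts=20, seed=3):
    """Approximate argmin of (t - G)' I (t - G) over t in v(R^d)."""
    rng = np.random.default_rng(seed)
    obj = lambda lam: (v(lam) - G) @ I_mat @ (v(lam) - G)
    starts = [np.zeros(d)] + [rng.standard_normal(d) for _ in range(n_starts)]
    best_t, best_val = None, np.inf
    for lam0 in starts:
        res = minimize(obj, lam0, method="Nelder-Mead")
        if res.fun < best_val:
            best_t, best_val = v(res.x), res.fun
    return best_t, best_val

d = 3                                            # stands in for q + 2
G = np.array([1.0, -0.5, 0.3, 0.2, -0.1, 0.4])   # toy draw of G_{lambda,eta}
t_hat, val = project_on_cone(G, np.eye(len(G)), d)
print(val <= G @ G)  # projecting into the cone beats staying at t = 0
```

Note that the vector $v(\lambda)$ has length $d(d+1)/2$, matching $q_\lambda = (q+2)(q+3)/2$ with $d = q+2$.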
\section{Asymptotic Distribution under Local Alternatives} We derive the asymptotic distribution of the EM test under local alternatives. For brevity, we focus on testing $H_0: M=1$ against $H_A: M=2$. Consider the following local alternative to the homogeneous model $f(\bs w;\bs\gamma^*,\bs\theta^*)$ with $\bs\theta^*=(\mu^*,\sigma^{*2},(\bs{\beta}^*)\t)\t$. In the reparameterization, $\bs\psi^* = ((\bs\nu^*)\t, (\bs{\lambda}^*)\t)\t$. For $\alpha^*\in (0,1)$ and a local parameter $\bs h=(\bs {h}_{\bs\nu}\t,\bs {h}_{\bs\lambda}\t)\t$ with $\bs h_{\bs \lambda}\in v(\bs\Theta_{\bs\lambda})$, we consider a sequence of contiguous local alternatives $\bs\vartheta_{2,n} =(\alpha_n,\bs{\psi}_n\t)\t = (\alpha_n,\bs\nu_n\t,\bs\lambda_n\t)\t\in \bs{\Theta}_\alpha\times \bs{\Theta}_{\bs\nu}\times\bs{\Theta}_{\bs\lambda}$ such that, with $\bs{t}_{\bs\lambda}(\bs\lambda,\alpha)$ given by (\ref{eq:t_1}), \begin{equation}\label{eq:h} \bs {h_\nu}= \sqrt{n}(\bs\nu_n -\bs\nu^*),\quad \bs {h_\lambda} =\sqrt{n} \bs{t}_{\bs\lambda}(\bs\lambda_n,\alpha_n),\quad \text{and} \quad \alpha_n = \alpha + o(1). \end{equation} Equivalently, the non-reparameterized contiguous local alternatives are given by \[ \bs\theta_{1,n} =\bs \nu_n + (1-\alpha_n) \bs\lambda_n\quad\text{and}\quad \bs\theta_{2,n} = \bs\nu_n - \alpha_n \bs\lambda_n \] for $\bs\nu_n = \bs \nu^* + n^{-1/2} \bs{h}_{\nu}$ and $\bs\lambda_n= (\lambda_{1,n},\lambda_{2,n},\ldots,\lambda_{q+2,n})\t$ with \begin{align*} \lambda_{j,n} = n^{-1/4} (\alpha_n(1-\alpha_n))^{-1/2} h_{\lambda,j}\quad \text{for $j=1,\ldots,q+2$}, \end{align*} where $\bs{h}_{\bs\lambda}=(h_{\lambda,1}^2,\ldots,h_{\lambda,q+2}^2,h_{\lambda,1}h_{\lambda,2},\ldots,h_{\lambda,q+1}h_{\lambda,q+2})\t$. The local alternatives thus converge at rate $n^{-1/4}$ rather than $n^{-1/2}$. The following proposition provides the asymptotic distribution of the EM test statistic under contiguous local alternatives.
\begin{proposition}\label{prop:local-power} Suppose that the assumptions in Proposition \ref{prop:KS18_prop4} hold for $M_0=1$. Consider a sequence of contiguous local alternatives $\bs\vartheta_{2,n} = (\alpha^*,(\bs\nu^*)\t,\bs\lambda_n\t)\t$, where $\bs\lambda_n$ satisfies (\ref{eq:h}). Then, under $H_{1,n}: \bs\vartheta= \bs\vartheta_{2,n}$, we have $EM_n^{(K)}\overset{d}{\rightarrow} (\tilde{\bs{t}}_{\bs\lambda} )\t \bs{\mathcal{I}}_{\bs\lambda,\bs\eta} \tilde{\bs{t}}_{\bs\lambda}$, where $\tilde{\bs t}_{\bs\lambda}$ has the same distribution as $\hat {\bs t}_{\bs\lambda}$ in Proposition \ref{prop:t2_distribution} but replacing $\bs{G}_{\bs{\lambda,\bs{\eta}}}$ with $ ( \bs{\mathcal{I}}_{\bs\lambda,\bs\eta} )^{-1} \bs{S}_{\bs\lambda,\bs\eta} + \bs h_{\bs\lambda}$. \end{proposition} \subsection{Choice of penalty function}\label{sec:penalty} We develop a data-dependent empirical formula for $a_n$ by choosing it so that the empirical rejection probabilities match the nominal size (5\%) across different null models and sample sizes, as reported in Table \ref{table:parameter} of Appendix D.
For the model without conditioning variables, we develop the following data-dependent empirical formula for testing the null hypothesis of $M_0=1, 2, 3, 4$: \begin{equation}\label{eq:a_n} a_n = \begin{cases} \left({1 +\exp\left\{ \frac{\hat{\rho}_1^{M_0}}{ \hat{\rho}_4^{M_0}} + \frac{\hat{\rho}_2^{M_0}}{\hat{\rho}_4^{M_0}} \frac{1}{T} + \frac{\hat{\rho}_3^{M_0}}{\hat{\rho}_4^{M_0}} \frac{1}{n} \right\} }\right)^{-1}, & M_0 = 1 \\ \left({1 +\exp\left\{ \frac{\hat{\rho}_1^{M_0}}{ \hat{\rho}_4^{M_0}} + \frac{\hat{\rho}_2^{M_0}}{\hat{\rho}_4^{M_0}} \frac{1}{T} + \frac{\hat{\rho}_3^{M_0}}{\hat{\rho}_4^{M_0}} \frac{1}{n} + \frac{\hat{\rho}_5^{M_0}}{\hat{\rho}_4^{M_0}} \log\left(\frac{\omega(\bs{\vartheta}_{M_0};M_0)}{1-\omega(\bs{\vartheta}_{M_0};M_0)}\right) \right\} }\right)^{-1}, &M_0 = 2,3,4, \end{cases} \end{equation} where $\omega(\bs{\vartheta}_{M_0};M_0)$ is the misclassification probability as defined in \cite{Melnykov2010} for each of the null models. The parameters $\hat{\rho}_1^{M_0}$, $\hat{\rho}_2^{M_0}$, $\hat{\rho}_3^{M_0}$, $\hat{\rho}_4^{M_0}$, and $\hat{\rho}_5^{M_0}$ are chosen as follows. Across different null models, sample sizes, and different (candidate) values of $a_n$, we estimate the empirical rejection probabilities at the $5\%$ significance level by simulations and denote them by $\hat s$. For example, when testing $H_0: M_0=2$, we repeatedly simulate 500 datasets under each of the $108$ combinations of null model parameters and sample sizes $(n,T,\alpha,\mu,\sigma) \in \{100,500\} \times \{ 2,5,10\} \times \{(0.5,0.5),(0.2,0.8)\} \times \{ (-1,1),(-0.5,0.5) , (-0.5,0.8)\} \times \{ (1,1), (1.5,0.75), (0.8,1.2)\}$ and test the null hypothesis of $H_0: M_0=2$ by the EM test using one of the 6 values of $a_n\in \{0.01, 0.05,0.1,0.2, 0.3, 0.4\}$.
For each of $108 \times 6= 648$ combinations of the parameter values, sample sizes, and $a_n$ values, let $\hat s$ denote the fraction of simulated datasets that led to the rejection of the null hypothesis at the $5\%$ significance level. Using these $648$ ``observations'' of $\{\hat{s},N,T,\omega(\bs{\vartheta}_{2};2), a_n\}$, we run the following regression: \[\begin{split} & \log\left(\frac{\hat{s}}{1- \hat{s}}\right) - \log\left(\frac{ {0.05}}{1- {0.05}}\right) \\ & = \begin{cases} \rho_1^{M_0} + \rho_2^{M_0}\frac{1}{T} + \rho_3^{M_0}\frac{1}{n} + \rho_4^{M_0} \log\left(\frac{ {a_n}}{1 - {a_n}}\right) , & M_0 = 1 \\ \rho_1^{M_0} + \rho_2^{M_0}\frac{1}{T} + \rho_3^{M_0}\frac{1}{n} + \rho_4^{M_0} \log\left(\frac{ {a_n}}{1 - {a_n}}\right) + \rho_5^{M_0} \log\left(\frac{\omega(\bs{\vartheta}_{M_0};M_0)}{1 - \omega(\bs{\vartheta}_{M_0};M_0)}\right), & M_0 = 2,3,4, \end{cases} \end{split} \] where $\hat{\rho}_1^{M_0}$, $\hat{\rho}_2^{M_0}$, $\hat{\rho}_3^{M_0}$, $\hat{\rho}_4^{M_0}$, and $\hat{\rho}_5^{M_0}$ in (\ref{eq:a_n}) denote the corresponding estimates. Table \ref{tab:rho_estim} in the appendix reports the estimates. Note that the data-dependent formula (\ref{eq:a_n}) is obtained by setting $\hat s=0.05$ and solving for $a_n$ in the above equation. For the regression mixture model with conditioning variables, we find that the value of $a_n$ that gives accurate Type I errors is sensitive to the dimension of covariates, and developing a data-dependent empirical formula for $a_n$ is difficult. Consequently, we choose a constant value of $a_n$ that only depends on the number of components $M_0=1, 2, 3,$ and $4$ as follows: $a_n= 0.1617 \text{ if } M_0 =1 ; 0.0025 \text{ if } M_0 = 2; 0.0567 \text{ if } M_0 = 3; 0.4858 \text{ if } M_0 = 4; 0.5 \text{ if } M_0\geq 5.$ These penalty terms for the regression with covariates are chosen as the average of the penalty function predictions for the null parameters used in the simulations.
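The inversion behind formula (\ref{eq:a_n}), setting $\hat s = 0.05$ and solving the fitted size equation for $a_n$, can be sketched as follows; the $\rho$ values below are hypothetical placeholders, not the estimates reported in Table \ref{tab:rho_estim}:

```python
import numpy as np

def a_n(T, n, rho, omega=None):
    """Invert the fitted logistic size equation at the 5% target to get a_n.

    rho = (rho1, rho2, rho3, rho4[, rho5]); the values used below are
    hypothetical placeholders, not the paper's estimates."""
    z = rho[0] / rho[3] + (rho[1] / rho[3]) / T + (rho[2] / rho[3]) / n
    if omega is not None:  # misclassification term, used for M0 >= 2
        z += (rho[4] / rho[3]) * np.log(omega / (1.0 - omega))
    return 1.0 / (1.0 + np.exp(z))

rho_m1 = (0.8, 1.5, 40.0, 2.0)   # hypothetical estimates for M0 = 1
val = a_n(T=5, n=500, rho=rho_m1)
print(val)  # a penalty weight strictly between 0 and 1
```

The logistic form guarantees $a_n \in (0,1)$ for any sample size, which is what makes the formula usable as a penalty weight.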
For example, the penalty term for $M_0 = 2$ is chosen by generating $a_n$ using the formula for all the combinations of $(N,T,\alpha, \mu,\sigma)$ in Table \ref{table:parameter} for $M_0 = 2$ and taking the average value across the predicted $a_n$'s. For $M_0 \ge 5$, we use the parametric bootstrap method to obtain the critical values for our empirical application, where we set $a_n = 0.5$. \subsection{Simulation result} Table \ref{table:size_test} reports the simulated Type I error rates of the modified EM test for testing the null hypothesis of $H_0: M_0=2$ against $H_1: M_0=3$ using 2000 repetitions for the asymptotic distribution and 1000 repetitions for the bootstrap distribution. We consider 4 null models using the parameter values reported in its footnote. The modified EM test using the asymptotic distribution generally has sizes that are close to the nominal size of 5\%, although it is sometimes oversized when the time dimension is $5$ or greater. The bootstrap modified EM test also performs well. Table \ref{table:power_test} shows the power of testing $H_0: M_0 = 2$, where the data are generated from 12 alternative (three-component mixture) models as reported in the table footnote. The power of the test is higher when the distances between the $\mu_j$'s across components are larger. Higher power is achieved for models with equal distances between the mean parameters, such as $(\mu_1,\mu_2,\mu_3)=(-1,0,1)$ or $(-1.5,0,1.5)$, than for models with unbalanced distances, such as $(\mu_1,\mu_2,\mu_3)=(-1,0,2)$ or $(-0.5,0,1.5)$. As for the mixing proportions, the test has better power when the mixing probabilities are equal across components, with $\bs{\alpha} = (1/3,1/3,1/3)$, than when they are unequal, with $\bs{\alpha} = (1/4,1/2,1/4)$. The power increases with the time dimension $T$ as well as the cross-sectional sample size $n$.
Table \ref{table:size_M3} reports the simulated Type I error rates of the modified EM test using the asymptotic distribution when we test the null hypothesis of $H_0: M_0=3$ against $H_1: M_0=4$ for 6 null models with $(\alpha_1,\alpha_2, \alpha_3) = (1/3,1/3,1/3), (0.25, 0.5, 0.25)$, $(\mu_1,\mu_2, \mu_3) = (-4, 0, 4), (-4, 0, 6), (-6, 0, 6)$. Overall, the modified EM test gives accurate Type I errors. We examined the Type I error rates of the modified EM test for models with conditioning variables under the null model of $M_0 = 2$ using 500 repetitions. The simulation result is shown in Table \ref{table:size_regressor}. The test is slightly oversized when $(N,T)=(200,2)$ but, overall, the finite sample size properties of the modified EM test are good. \begin{table}[H] \centering \begin{threeparttable} \caption{Sizes (in \%) of modified EM test of $H_0 : M_0 = 2$ against $H_A: M_0 = 3$ at $5 \%$ level} \label{table:size_test} \begin{tabularx}{\textwidth}{l@{\extracolsep{\fill}}rrrrrr|rrrrrr} \hline & \multicolumn{6}{c|}{Asymptotic} & \multicolumn{6}{c}{Parametric Bootstrap} \\ \cmidrule{2-13} $T$ & \multicolumn{2}{c|}{3} & \multicolumn{2}{c|}{5} & \multicolumn{2}{c|}{8} & \multicolumn{2}{c|}{3} & \multicolumn{2}{c|}{5} & \multicolumn{2}{c}{8} \\ \cmidrule{2-13} $N$ & \multicolumn{1}{c}{200} & \multicolumn{1}{c|}{400} & \multicolumn{1}{c}{200} & \multicolumn{1}{c|}{400} & \multicolumn{1}{c}{200} & \multicolumn{1}{c|}{400} & \multicolumn{1}{c}{200} & \multicolumn{1}{c|}{400} & \multicolumn{1}{c}{200} & \multicolumn{1}{c|}{400} & \multicolumn{1}{c}{200} & \multicolumn{1}{c}{400} \\ \hline $(A,C)$ & 6.25 & \multicolumn{1}{r|}{6.00}& 9.35 & \multicolumn{1}{r|}{8.35} & 8.10 & 7.15 & 4.6 & \multicolumn{1}{r|}{6.2} & 6.2 & \multicolumn{1}{r|}{4.6} & 4.8 & 5.2 \\ $(A,D)$ & 4.75 & \multicolumn{1}{r|}{5.60} & 9.65 & \multicolumn{1}{r|}{8.20} & 9.65 & 7.75 & 5.4 & \multicolumn{1}{r|}{4.8} & 5.2 & \multicolumn{1}{r|}{5.6} & 5.0& 5.8 \\ $(B,C)$ & 3.50 &
\multicolumn{1}{r|}{4.20} & 5.40 & \multicolumn{1}{r|}{5.40} & 6.05 & 6.20 & 3.6 & \multicolumn{1}{r|}{5.6} & 4.2 & \multicolumn{1}{r|}{5.2} & 4.0& 5.4 \\ $(B,D)$ & 3.15 & \multicolumn{1}{r|}{5.35} & 5.85 & \multicolumn{1}{r|}{5.10} & 7.60 & 7.05 & 3.6 & \multicolumn{1}{r|}{3.6} & 5.8 & \multicolumn{1}{r|}{4.0} & 6.2 & 4.6 \\ \hline \end{tabularx}\smallskip \begin{flushleft} Notes: $A$ and $B$ refer to $(\mu_1,\mu_2) = (-1,1)$ and $(-0.5,0.5)$, respectively, while $C$ and $D$ refer to $(\alpha_1,\alpha_2) = (0.5,0.5)$ and $(0.2,0.8)$, respectively. The variance is set to $(\sigma_1,\sigma_2) = (0.8,1.2)$. The asymptotic simulations are based on 2000 replications and the bootstrap simulation is based on 1000 replications. \end{flushleft} \end{threeparttable} \end{table} \begin{table}[H] \centering \begin{threeparttable} \caption{Powers (in \%) of modified EM test of $H_0 : M_0 = 2$ against $H_A: M_0 = 3$ at $5\%$ level } \label{table:power_test} \begin{tabularx}{\textwidth}{l @{\extracolsep{\fill}} l |rrrr|rrrr } \toprule & & \multicolumn{4}{c|}{A}& \multicolumn{4}{c}{B}\\ \cmidrule{3-10} & N& \multicolumn{2}{c}{100} & \multicolumn{2}{c|}{500} & \multicolumn{2}{c}{100} & \multicolumn{2}{c}{500} \\ \midrule & T& 2 & 5 & 2 & 5 & 2 & 5 & 2 & 5 \\ \midrule & $(C,G)$& 20.90 & 81.60 & 57.65& 100.00& 20.45& 82.65& 62.60 & 100.00\\ & $(C,H)$& 49.20 & 99.85& 99.90 & 100.00& 38.40 & 98.70 & 98.75& 100.00\\ & $(C,I)$& 12.05& 20.35& 17.95& 62.55& 10.55& 20.40 & 16.75& 65.85\\ & $(D,G)$& 77.95& 100.00& 100.00& 100.00& 86.50 & 100.00& 100.00& 100.00\\ & $(D,H)$& 57.35& 100.00& 100.00& 100.00& 42.75& 100.00& 100.00& 100.00\\ & $(D,I)$& 16.00 & 59.45& 31.75& 99.90 & 13.75& 70.80 & 40.30 & 100.00\\ & $(E,G)$& 93.00 & 100.00& 100.00& 100.00& 93.95& 100.00& 100.00& 100.00\\ & $(E,H)$& 83.80 & 100.00& 100.00& 100.00& 70.70 & 100.00& 100.00& 100.00\\ & $(E,I)$& 25.65& 97.00 & 80.15& 100.00& 30.65& 96.75& 83.05& 100.00\\ & $(F,G)$& 99.85& 100.00& 100.00& 100.00& 100.00& 100.00& 100.00& 
100.00\\ & $(F,H)$& 93.50 & 100.00& 100.00& 100.00& 85.30 & 100.00& 100.00& 100.00\\ & $(F,I)$& 40.75& 99.95& 98.15& 100.00& 52.10 & 100.00& 99.45& 100.00\\ \bottomrule \end{tabularx} \smallskip \begin{flushleft} Notes: $A$ and $B$ refer to $(\alpha_1,\alpha_2,\alpha_3) = (1/3,1/3,1/3)$ and $(1/4,1/2,1/4)$, respectively; $C$, $D$, $E$, and $F$ refer to $(\mu_1,\mu_2,\mu_3) = (-0.5,0,1.5), (-1,0,1), (-1,0,2), (-1.5,0,1.5)$, respectively; $G$, $H$, and $I$ refer to $(\sigma_1,\sigma_2,\sigma_3) = (0.6,0.6,1.2), (0.6,1.2,0.6), (1,1,1)$, respectively. \end{flushleft} \end{threeparttable} \end{table} \begin{table}[H] \centering \caption{Sizes (in \%) of modified EM test of $H_0 : M_0 = 3$ against $H_A: M_0 = 4$ at $5\%$ level} \label{table:size_M3} \begin{threeparttable} \begin{tabularx}{\textwidth}{l @{\extracolsep{\fill}} rrrrrr} \toprule $(N,T)$ & (A,C) & (A,D) & (A,E) & (B,C) & (B,D) & (B,E) \\ \midrule $(100,2)$ & 5.95& 5.15& 5.05& 5.05 & 5.85 & 4.40\\ $(500,2)$ & 5.60& 5.55& 5.25& 5.10& 5.65 & 4.05 \\ $(100,5)$ & 4.30& 6.00& 4.20& 5.15 & 5.10& 5.70\\ $(500,5)$ & 4.20& 4.55& 3.95& 4.50& 4.15 & 4.15 \\ \bottomrule \end{tabularx}\smallskip \begin{flushleft} Notes: $A$ and $B$ refer to $(\alpha_1,\alpha_2, \alpha_3) = (1/3,1/3,1/3)$ and $(0.25, 0.5, 0.25)$, respectively, while $C$, $D$, and $E$ refer to $(\mu_1,\mu_2, \mu_3) = (-4, 0, 4)$, $(-4, 0, 6)$, and $(-6, 0, 6)$, respectively. The variance is set to $(\sigma_1,\sigma_2,\sigma_3) = (0.75, 1.5, 0.75)$. The asymptotic simulations are based on 2000 replications.
\end{flushleft} \end{threeparttable} \end{table} \begin{table}[H] \centering \begin{threeparttable} \caption{Sizes (in \%) of modified EM test of $H_0 : M_0 = 2$ against $H_A: M_0 = 3$ with conditioning variables} \label{table:size_regressor} \small \begin{tabularx}{\textwidth}{lrrrrrrrr} \toprule $(N,T)$ & $(A,C,E)$ & $(A,C,F)$ & $(A,D,E)$ & $(A,D,F)$ & $(B,C,E)$ & $(B,C,F)$ & $(B,D,E)$ & $(B,D,F)$ \\ \midrule $(200,2)$ & 8.4 & 8.2& 7.4 & 8.8& 8.6 & 8.2& 7.4 & 3.6\\ $(500,2)$ & 4.6 & 3.2& 3.2 & 2.2& 4.8 & 4.8& 3.6 & 3.6\\ $(200,5)$ & 4.0 & 1.8& 3.0 & 2.6& 2.2 & 2.0 & 2.2 & 3.2\\ $(500,5)$ & 2.2 & 1.2& 1.6 & 1.4& 3.0 & 2.0 & 1.8 & 2.0 \\ \bottomrule \end{tabularx} \end{threeparttable}\smallskip \begin{flushleft} Notes: $A$ and $B$ refer to $(\mu_1,\mu_2) = (-1,1)$ and $(-0.5,0.5)$, respectively, while $C$ and $D$ refer to $(\beta_1,\beta_2) = (1,1)$ and $(-1,1)$, respectively. $E$ and $F$ refer to $(\sigma_1,\sigma_2) = (0.3,0.1)$ and $(0.1,0.1)$, respectively. The mixing proportion is set to $(\alpha_1,\alpha_2) = (0.2,0.8)$. The asymptotic simulations are based on 500 replications. \end{flushleft} \end{table} In our empirical application to production function heterogeneity in Japan and Chile, we find evidence that the number of components is often larger than five when we sequentially apply our modified EM test to estimate the number of components. For this reason, we examine the performance of the sequential test based on our modified EM test relative to that of the Akaike information criterion (AIC) and the Bayesian information criterion (BIC) when the data are generated from a five-component model in a realistic setting; specifically, we simulate 100 data sets from the estimated five-component model of the Chilean textile industry in our empirical application and apply these three methods to select the number of components in each of the 100 data sets.
Here, we apply the modified EM test at the 5 percent significance level to sequentially test the null hypothesis $H_0: M=M_0$ for $M_0=1,2,\ldots,7$, and we determine the number of components to be $M_0$ when we first fail to reject $H_0: M=M_0$. Table \ref{table:simulated_BIC} reports the frequency at which the three methods select each number of components in this simulation. The table indicates that the proposed sequential test selects the correct number of components 72\% of the time, while underestimating the true number of components 26\% of the time. On the other hand, the AIC overestimates the number of components 86\% of the time, while the BIC underestimates the number of components by selecting a four-component model 41\% of the time and correctly estimates the number of components 58\% of the time. Overall, in this simulation, our sequential test based on the modified EM test outperforms both the AIC and the BIC. \begin{table}[H] \centering \caption{Frequency of Number of Components with the Simulated Data} \label{table:simulated_BIC} \begin{threeparttable} \begin{tabularx}{\textwidth}{@{\extracolsep{\fill}}lccccccc} \toprule $M$ & 1 & 2 & 3 & 4 & 5 & 6 & 7 \\ \midrule sequential EM & 0 & 0 & 0 & 0.26 & 0.72 & 0.02 & 0 \\ AIC & 0 & 0 & 0 & 0.01 & 0.13 & 0.31 & 0.55 \\ BIC & 0 & 0 & 0 & 0.41 & 0.58 & 0.01 & 0 \\ \bottomrule \end{tabularx} \begin{tablenotes} \footnotesize \item[1] The data are generated using the estimated parameters based on the Chilean textile industry with 5 components and panel length $T=3$, where $(\alpha_1,\alpha_2, \alpha_3, \alpha_4,\alpha_5) = (0.16076522, 0.32454077, 0.09025875, 0.35478905, 0.06964622)$, $(\mu_1,\mu_2, \mu_3, \mu_4,\mu_5) = (-1.241241, -0.33803875, 0.4480291, 0.52379553, 1.4139465)$, $(\beta_1,\beta_2, \beta_3, \beta_4,\beta_5)= (0.451833, -0.05988709, -0.2453261, -0.03106076, 0.2053708)$, $(\sigma_1,\sigma_2, \sigma_3, \sigma_4,\sigma_5) = (0.9933480, 0.4585760, 0.9954302, 0.4116855, 0.1863346)$.
We use the same panel length and sample size as in the dataset, i.e., $n = 196$ and $T = 3$. \item[2] The results are based on 100 repetitions. \item[3] Each cell indicates the proportion of times that the model selection indicates an $M$-component model. \end{tablenotes} \end{threeparttable} \end{table} \subsection{Production Function and First Order Condition} Consider the input and output panel data of $n$ firms over $T$ years, $\{\{Y_{it},V_{it},L_{it},K_{it}\}_{t=1}^T\}_{i=1}^n$, where $Y_{it}$, $V_{it}$, $L_{it}$, and $K_{it}$ denote output, intermediate input, labor, and capital of firm $i$ in year $t$, respectively. We denote the logarithms of the corresponding variables by lowercase letters as $(y_{it},v_{it},l_{it},k_{it})$ with, e.g., $y_{it} = \ln(Y_{it})$. We use a finite mixture specification to capture unobserved heterogeneity in firms' input elasticities, and we are interested in testing the number of production technology types. Assume there are $M$ discrete types of production technologies and define the latent random variable $D_i \in \{ 1,2,\ldots,M\}$ to represent the production technology type of firm $i$: if $D_i = j$, then firm $i$ is of type $j$. The population proportion of type $j$ is denoted by $\alpha_j=\Pr(D_i=j)$. The production function for type $j$ is Cobb-Douglas, and output is related to inputs as \begin{equation} Y_{it}=e^{\epsilon_{it}} F_t^j(V_{it}, L_{it},K_{it}, \omega_{it}) \label{prod} \end{equation} with \begin{align*} F_t^j(V_{it}, L_{it},K_{it},\omega_{it}) := \exp(\gamma_t^j + \omega_{it}) V_{it}^{\delta_{v,j}} L_{it}^{\delta_{\ell,j}}K_{it}^{\delta_{k,j}}, \end{align*} where $\gamma_t^j$ is the aggregate productivity shock of type $j$ in year $t$, $\omega_{it}$ is a serially correlated productivity shock, and $\epsilon_{it}$ is an idiosyncratic productivity shock.
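Taking logarithms of (\ref{prod}), the model is linear in logs; for reference, for $D_i = j$,
\begin{align*}
y_{it} = \gamma_t^j + \delta_{v,j} v_{it} + \delta_{\ell,j} l_{it} + \delta_{k,j} k_{it} + \omega_{it} + \epsilon_{it}.
\end{align*}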
We assume that the intermediate input $V_{it}$ is flexibly chosen by firm $i$ after observing the aggregate shock $\gamma_t^j$ and the serially correlated productivity shock $\omega_{it}$. The variable $\epsilon_{it}$ is a mean-zero i.i.d.\ random variable whose realization is not known when the intermediate input $V_{it}$ is chosen. Denote the information available to a firm for making decisions on $V_{it}$ by $\mathcal{I}_{it}$. In order to identify the intermediate input elasticity of the production function, we introduce the following assumptions (cf.\ \cite{Kasahara2022esri}). \begin{assumption}\label{app:A1} (a) Each firm belongs to one of $M$ types, and the probability of being type $j$ is given by $\alpha_j = P(D_i = j)$ with $\sum_{j=1}^M \alpha_j = 1$. (b) For the $j$th type of production technology at time $t$, the output is expressed in terms of inputs as in (\ref{prod}), where $\epsilon_{it} \sim N(0, \sigma_j^2)$ are i.i.d.\ across $i$ and $t$, and $\omega_{it}$ follows an exogenous first-order stationary Markov process given by $\omega_{it}=h^j(\omega_{it-1}) + \eta_{it}$, where, conditional on $\mathcal{I}_{it-1}$, $\eta_{it}$ is a mean-zero i.i.d.\ random variable. (c) $(\gamma_t^j,\omega_{it}) \in\mathcal{I}_{it}$ and $\epsilon_{it}\not\in\mathcal{I}_{it}$. \end{assumption} \begin{assumption}\label{app:A2} (a) Firms are price-takers in both output and input markets, where $P_{Y,t}$ and $P_{V,t}$ are the prices of output and intermediate input in year $t$. (b) $(P_{Y,t}, P_{V,t})$ are observed by firms at the beginning of the period before $V_{it}$ is chosen. \end{assumption} \begin{assumption}\label{app:A3} $V_{it}$ is chosen at time $t$ by maximizing the expected profit conditional on the information $\mathcal{I}_{it}$ at time $t$ and on the value of $(K_{it},L_{it})$.
The profit maximization problem for firms with type $j$ technology is given by \begin{equation}\label{eq:profit} V_{it} = \arg \max_{V} P_{Y,t} E[\exp(\epsilon_{it})|D_i=j] F^{j}_t(V,L_{it},K_{it},\omega_{it}) - P_{V,t} V. \end{equation} \end{assumption} In Assumption \ref{app:A1}(a), each firm's production function belongs to one of the $M$ types. Assumption \ref{app:A1}(b) assumes that the idiosyncratic productivity shock follows a normal distribution, so that the estimating equation follows a finite mixture of normal distributions. Assumption \ref{app:A1}(c) assumes that both the aggregate shock $\gamma_t^j$ and the serially correlated productivity shock $\omega_{it}$ are observed when intermediate inputs are chosen, but idiosyncratic productivity shocks are not. \cite{Gandhi2020} consider a similar timing assumption. Assumption \ref{app:A2} states that firms observe the input and output prices when deciding on $V_{it}$. Assumption \ref{app:A3} assumes that $V_{it}$ is chosen to maximize the expected current-period profit conditional on the value of $(K_{it},L_{it})$.\footnote{We are agnostic about the timing of choosing $K_{it}$ and $L_{it}$ as long as they are either determined before $V_{it}$ or simultaneously chosen with $V_{it}$. It is reasonable to assume that capital input $K_{it}$ is determined before the value of $V_{it}$ is chosen. On the other hand, labor input $L_{it}$ may be flexibly chosen simultaneously with $V_{it}$ after $\gamma_t^j$ and $\omega_{it}$ are observed. Even when labor input is simultaneously chosen with intermediate input, equation (\ref{eq:profit}) and the corresponding first-order condition characterize the intermediate input choice once we interpret $L_{it}$ in (\ref{eq:profit}) as the optimal value chosen by firm $i$, as discussed in \cite{Ackerberg2015}.
} Given Assumptions \ref{app:A1}, \ref{app:A2}, and \ref{app:A3}, we derive an empirical specification based on the first-order condition of the profit maximization problem (\ref{eq:profit}), following the idea developed by \cite{Gandhi2020} and extended to a finite mixture production function model by \cite{Kasahara2022esri}. Note that $E[\exp(\epsilon_{it})|D_i=j]=\exp( \sigma_j^2/2)$ for $\epsilon_{it}\sim N(0,\sigma_j^2)$. Then, because $\delta_{v,j}= \frac{\partial F_t^{j}(V_{it},L_{it},K_{it},\omega_{it})/\partial V_{it}} {{F_t^{j}(V_{it},L_{it},K_{it},\omega_{it})}/{V_{it}}}$ for the Cobb-Douglas production function, the first-order condition with respect to $V_{it}$ in (\ref{eq:profit}) together with the production function (\ref{prod}) implies that \begin{equation}\label{eq:logs} s_{it} = \ln \delta_{v,j} + \frac{1}{2}\sigma_{j}^2 - \epsilon_{it}\quad \text{for $D_i=j$}, \end{equation} where \[ s_{it}:=\ln\left(\frac{P_{V,t} V_{it}}{P_{Y,t} Y_{it}}\right) \] is the logarithm of the ratio of intermediate input cost to revenue. Collect the observed data as $\bs{W}_i = \{ s_{it},\ln K_{it}\}_{t=1}^T$. Let $\mu_j = \ln \delta_{v,j} + \frac{1}{2} \sigma_j^2$ and define the type-specific parameter $\bs{\theta}_j = (\mu_j,\sigma_j)$, from which $\delta_{v,j}$ can be recovered as $\delta_{v,j}=\exp(\mu_j-\sigma_j^2/2)$. Collect the parameters of each type and the mixing probabilities as $\bs{\vartheta}_M = (\bs{\theta}_1\t,\ldots,\bs{\theta}_M\t,\alpha_1,\ldots,\alpha_{M-1})\t$. Recall that $\epsilon_{it} \overset{iid}{\sim} N(0, \sigma_j^2)$ across $i$ and $t$ conditional on the technology type $D_i=j$.
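For completeness, we record the algebra behind (\ref{eq:logs}). The first-order condition of (\ref{eq:profit}) is
\begin{align*}
P_{Y,t}\, e^{\sigma_j^2/2}\, \delta_{v,j}\, \frac{F_t^j(V_{it},L_{it},K_{it},\omega_{it})}{V_{it}} = P_{V,t}.
\end{align*}
Multiplying both sides by $V_{it}/(P_{Y,t} Y_{it})$ and substituting $Y_{it}=e^{\epsilon_{it}} F_t^j(V_{it},L_{it},K_{it},\omega_{it})$ from (\ref{prod}) gives
\begin{align*}
\frac{P_{V,t} V_{it}}{P_{Y,t} Y_{it}} = \delta_{v,j}\, e^{\sigma_j^2/2 - \epsilon_{it}},
\end{align*}
and taking logarithms yields (\ref{eq:logs}).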
Then, from (\ref{eq:logs}), the density function of $s_{i1},\ldots, s_{iT}$ can be written as a mixture of type-specific densities, similarly to the finite normal mixture panel regression model density in equation (\ref{eq:fm}): \begin{equation}\label{model-1} f_{M}(\bs{W}_i;\bs{\vartheta}_{M}) = \sum_{j=1}^M \alpha_j \prod_{t=1}^T \frac{1}{\sigma_{j}} \phi\left(\frac{s_{it} - \mu_j}{\sigma_{j}} \right). \end{equation} The penalized maximum likelihood estimator is defined as \[ \hat{\bs{\vartheta}}_{M} = \arg\max_{\bs{\vartheta}_M} \sum_{i=1}^{n} \ln f_{M}(\bs{W}_i;\bs{\vartheta}_{M}) +\tilde p_n(\bs{\vartheta}_{M}). \] As an alternative specification, we allow the intermediate input elasticity to be a function of $\ln K_{it}$ as \[ \ln\delta_{v,j}=\beta_{0,j} + \beta_{k,j} \ln K_{it}. \] This makes the logarithm of the ratio of intermediate input cost to revenue linearly related to $\ln K_{it}$: \[ s_{it} = \mu_j +\beta_{k,j} \ln K_{it} - \epsilon_{it}\quad \text{for $D_i=j$} \] with $\mu_j = \beta_{0,j}+ \frac{1}{2}\sigma_{j}^2$. In this case, the conditional density function of $\{ s_{it}\}_{t=1}^T$ given $\{\ln K_{it}\}_{t=1}^T$ is \begin{equation}\label{model-2} f_{M}(\bs{W}_i;\bs{\vartheta}_{M}) = \sum_{j=1}^M \alpha_j \prod_{t=1}^T \frac{1}{\sigma_{j}} \phi\left(\frac{s_{it} - \mu_j - \beta_{k,j} \ln K_{it}}{\sigma_{j}} \right). \end{equation} In addition, we consider a specification in which we include not only $\ln K_{it}$ but also $\ln L_{it}$ as a regressor: \begin{equation}\label{model-3} f_{M}(\bs{W}_i;\bs{\vartheta}_{M}) = \sum_{j=1}^M \alpha_j \prod_{t=1}^T \frac{1}{\sigma_{j}} \phi\left(\frac{s_{it} - \mu_j - \beta_{k,j} \ln K_{it}- \beta_{\ell,j} \ln L_{it}}{\sigma_{j}} \right). \end{equation} \subsection{Empirical Results} We apply the modified EM test to two producer-level data sets to determine the number of production technology types.
We use production data on Japanese publicly traded firms from 2003 to 2007 and on Chilean manufacturing plants from 1992 to 1996.\footnote{See \cite{kasahara21} and \cite{KASAHARA2008} for details on the datasets of the Japanese publicly traded firms and the Chilean manufacturing plants, respectively.} We keep the firms/plants observed continuously over these five years so that the panel is balanced. We focus on the three largest industries in terms of the number of firms or plants in each country (Chemical, Machine, and Electronics for Japan; Food products, Fabricated metal products, and Textiles for Chile). Table \ref{tab:sum} presents the summary statistics for the revenue share of intermediate materials and the log of gross output in these industries. The within-industry standard deviations of the revenue share of intermediate materials are substantial across all industries, suggesting that intermediate input elasticities differ across firms within these narrowly defined industries.
\begin{table}[H] \centering \caption{Descriptive statistics for the revenue share of intermediate material and the log of gross output for the Japanese firms and the Chilean plants} \label{tab:sum} \begin{threeparttable} \begin{tabularx}{\textwidth}{@{\extracolsep{\fill}} p{4.1cm}cccccc} \hline\hline \multicolumn{7}{c}{\textbf{Panel A: \ Japanese publicly traded firms}}\\ \midrule & & &\multicolumn{2}{c}{$\frac{P_{M,t} M_{it}}{P_{Y,t} Y_{it}}$} & \multicolumn{2}{c}{$\ln (Y_{it})$}\\ Industry & NObs & n & $mean $ & $sd $ & $mean $ & $sd $ \\ \midrule \noalign{\vskip 0.5mm} Chemical & 805 & 161 & 0.34 & 0.15 & 17.52 & 1.24 \\ Machine & 790 & 158 & 0.50 & 0.16 & 17.31 & 1.35 \\ Electronics & 775 & 155 & 0.45 & 0.18 & 17.54 & 1.27 \\ \hline\hline \multicolumn{7}{c}{\textbf{Panel B: \ Chilean plants}}\\ \midrule & & & \multicolumn{2}{c}{$\frac{P_{M,t} M_{it}}{P_{Y,t} Y_{it}}$} & \multicolumn{2}{c}{$\ln (Y_{it})$}\\ Industry & NObs & n & $mean $ & $sd $ & $mean $ & $sd $ \\ \midrule \\[-1.8ex] Food products & 4645 & 929 & 0.65 & 0.15 & 10.62 & 1.66 \\ Fabricated metal products & 1260 & 252 & 0.53 & 0.18 & 11.00 & 1.37 \\ Textiles & 1130 & 226 & 0.58 & 0.19 & 11.01 & 1.32 \\ \hline\hline \\[-1.8ex] \end{tabularx} \begin{tablenotes} \item [1] The summary statistics are based on the Japanese firm-level data from 2003 to 2007 and the Chilean plant-level data from 1992 to 1996. All observations with $\ln (M_{it} / Y_{it}) \le -3$ or $\ln (M_{it} / Y_{it}) > \ln(2)$ are removed. The data set is a balanced panel, i.e., we keep firms/plants that are continuously observed for these five years. \item[2] The variable $\frac{P_{M,t} M_{it} }{P_{Y,t} Y_{it} }$ is defined as the revenue share of the intermediate input, where $P_{M,t}$ is the average price of the intermediate input at time $t$, $P_{Y,t}$ is the average price of the output, $M_{it}$ is the quantity of the intermediate input, and $Y_{it}$ is the quantity of the output.
\end{tablenotes} \end{threeparttable} \end{table} \begin{table}[H] \centering \caption{The modified EM test for Japanese producers without conditioning variables} \small \label{table:Japan} \begin{threeparttable} \begin{tabularx}{\textwidth}{@{}lc@{\extracolsep{\fill}}rrrrr@{}} \toprule & & M=1 & M=2 & M=3 & M=4 & M=5 \\ \cmidrule{3-7} & & \multicolumn{5}{c}{$T = 3$} \\ \cmidrule{3-7} Chemical &\textit{EM} & $436.37^{***}$ & $239.83^{***}$ & $130.1^{***}$ & $126.4^{***}$ & $63.24^{***}$ \\ &\textit{BIC} & 805.55 & 383.43 & 157.5 & 41.62 & -70.46 \\ \noalign{\vskip 0.5mm} Electronics &\textit{EM}& $563.94^{***}$ & $186.67^{***}$ & $115.82^{***}$ & $81.06^{***}$ & $47.76^{***}$ \\ &\textit{BIC} & 814.01 & 264.27 & 91.67 & -10.39 & -77.2 \\ \noalign{\vskip 0.5mm} Machine &\textit{EM} & $434.91^{***}$ & $194.48^{***}$ & $72.83^{***}$ & $56.94^{***}$ & $54.77^{***}$ \\ &\textit{BIC} & 458.72 & 37.85 & -142.28 & -200.74 & -242.71 \\ \noalign{\vskip 0.5mm} \cmidrule{3-7} & & \multicolumn{5}{c}{$T = 4$} \\ \cmidrule{3-7} Chemical &\textit{EM}& $629.22^{***}$ & $308.6^{***}$ & $181.39^{***}$ & $177.38^{***}$ & $96.35^{***}$ \\ &\textit{BIC} & 1071.45 & 456.54 & 162.15 & -4.99 & -168.01 \\ \noalign{\vskip 0.5mm} Electronics &\textit{EM} & $803.15^{***}$ & $282.32^{***}$ & $167.83^{***}$ & $106.43^{***}$ & $89.93^{***}$ \\ &\textit{BIC} & 1071.45 & 456.54 & 162.15 & -4.99 & -168.01 \\ \noalign{\vskip 0.5mm} Machine &\textit{EM} & $620.95^{***}$ & $292.52^{***}$ & $118.37^{***}$ & $102.57^{***}$ & $75.32^{***}$ \\ &\textit{BIC} & 609.1 & 2.14 & -276.04 & -380.16 & -467.96 \\\noalign{\vskip 0.5mm} \cmidrule{3-7} & & \multicolumn{5}{c}{$T = 5$} \\ \cmidrule{3-7} Chemical &\textit{EM} & $818.38^{***}$ & $386.08^{***}$ & $219.13^{***}$ & $209.42^{***}$ & $118.25^{***}$ \\ &\textit{BIC} & 1331.53 & 527.48 & 155.86 & -48.53 & -243.73 \\ \noalign{\vskip 0.5mm} Electronics &\textit{EM}& $1024.86^{***}$ & $375.29^{***}$ & $226.01^{***}$ & $134.53^{***}$ & $126.36^{***}$ \\
&\textit{BIC} & 1343.12 & 332.61 & -28.32 & -239.31 & -359.17 \\ \noalign{\vskip 0.5mm} Machine &\textit{EM} & $819.98^{***}$ & $389.69^{***}$ & $156.44^{***}$ & $149.98^{***}$ & $96.32^{***}$ \\ &\textit{BIC} & 775.75 & -30.17 & -406.59 & -548.81 & -683.96 \\ \noalign{\vskip 0.5mm} \bottomrule \end{tabularx} \begin{tablenotes} \item[1] The estimation is based on the revenue share of intermediate material. \item[2] $~^{*}$, $~^{**}$, $~^{***}$ indicate that the result is significant at the $10\%$, $5\%$, and $1\%$ levels, respectively. \item[3] The BIC values are reported below the EM test statistics. \end{tablenotes} \end{threeparttable} \end{table} \begin{table}[H] \centering \caption{The modified EM test for Chilean producers without conditioning variables} \small \label{table:Chile} \begin{threeparttable} \begin{tabularx}{\textwidth}{@{}lc@{\extracolsep{\fill}}rrrrr@{}} \toprule & & M=1 & M=2 & M=3 & M=4 & M=5 \\ \cmidrule{3-7} & & \multicolumn{5}{c}{$T = 3$} \\ \cmidrule{3-7} Food products &\textit{EM} & $805.51^{***}$ & $637.77^{***}$ & $204.92^{***}$ & $80.54^{***}$ & $72.41^{***}$ \\ &\textit{BIC} & 422.55& -371.13& -991.96& -1176.61& -1236.82 \\ \noalign{\vskip 0.5mm} Fabricated metal products &\textit{EM} & $238.84^{***}$ & $68.91^{***}$ & $26.24^{***}$ & $24.42^{***}$ & $21.82^{***}$ \\ &\textit{BIC} &719.74& 496.49& 444.02& 433.01& 425 \\ \noalign{\vskip 0.5mm} Textiles &\textit{EM} & $229.87^{***}$ & $146.17^{***}$ & $64.76^{***}$ & $27.06^{***}$ & $29.98^{**}$ \\ &\textit{BIC} &635.37& 418.28& 288.34& 236.9& 223.34\\ \noalign{\vskip 0.5mm} \cmidrule{3-7} & & \multicolumn{5}{c}{$T = 4$} \\ \cmidrule{3-7} Food products &\textit{EM} & $1165.08^{***}$ & $874.27^{***}$ & $257.49^{***}$ & $130.61^{***}$ & $139.59^{***}$ \\ &\textit{BIC} &419.47& -730.83& -1586.11& -1825.87& -1938.03 \\ \noalign{\vskip 0.5mm} Fabricated metal products &\textit{EM} & $362.1^{***}$ & $120.7^{***}$ & $41.6^{***}$ & $43.68^{***}$ & $20.95^{***}$ \\ &\textit{BIC} &905.9& 559.3& 453.41&
427.34& 399.82 \\ \noalign{\vskip 0.5mm} Textiles &\textit{EM} & $325.17^{***}$ & $222.28^{***}$ & $74.19^{***}$ & $47.58^{***}$ & $51.65^{***}$ \\ &\textit{BIC} & 821.73& 510.98& 303.8& 243.51& 210.77 \\\noalign{\vskip 0.5mm} \cmidrule{3-7} & & \multicolumn{5}{c}{$T = 5$} \\ \cmidrule{3-7} Food products &\textit{EM} & $1553.9^{***}$ & $1010.31^{***}$ & $290.02^{***}$ & $172.46^{***}$ & $155.25^{***}$ \\ &\textit{BIC} &471.66& -1066.71& -2057.71& -2329.38& -2484.82 \\ \noalign{\vskip 0.5mm} Fabricated metal products &\textit{EM} & $478.94^{***}$ & $176.5^{***}$ & $58.96^{***}$ & $59.37^{***}$ & $33.19^{***}$ \\ &\textit{BIC} &1101.11& 637.21& 477.1& 433.62& 389.54 \\ \noalign{\vskip 0.5mm} Textiles &\textit{EM} & $428.29^{***}$ & $280.46^{***}$ & $103.41^{***}$ & $56.63^{***}$ & $53.57^{***}$ \\ &\textit{BIC} &968.16& 556.01& 289.55& 201.41& 160\\ \noalign{\vskip 0.5mm} \bottomrule \end{tabularx} \begin{tablenotes} \item[1] The estimation is based on the revenue share of intermediate material. \item[2] $~^{*}$, $~^{**}$, $~^{***}$ indicate that the result is significant at the $10\%$, $5\%$, and $1\%$ levels, respectively. \item[3] The BIC values are reported below the EM test statistics. \end{tablenotes} \end{threeparttable} \end{table} To determine the number of components, we test the null hypothesis $H_0: M=M_0$ against $H_1: M=M_0+1$ by applying the modified EM test at the 5 percent significance level sequentially for $M_0 = 1,\ldots,5$. If we fail to reject the null hypothesis at a certain $M_0 = M$, then we conclude that there are $M$ types of intermediate input elasticities. We consider both the model without conditioning variables (\ref{model-1}) and the models with conditioning variables (\ref{model-2})--(\ref{model-3}).
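The sequential selection rule just described amounts to stopping at the first null hypothesis that is not rejected. A minimal sketch in Python (illustrative only, not part of the estimation code; the p-values are hypothetical precomputed inputs):

```python
def select_num_components(pvalues, alpha=0.05):
    """pvalues[m-1] is the p-value of the modified EM test of
    H0: M = m against H1: M = m + 1 (hypothetical precomputed inputs).
    Return the first m whose null hypothesis is not rejected at level alpha."""
    for m, p in enumerate(pvalues, start=1):
        if p >= alpha:  # fail to reject H0: M = m
            return m
    # every considered null was rejected: at least len(pvalues) + 1 components
    return len(pvalues) + 1

# Example: H0: M=1 and H0: M=2 rejected, H0: M=3 not rejected -> 3 components.
print(select_num_components([0.001, 0.020, 0.310, 0.500]))  # prints 3
```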
Tables \ref{table:Japan} and \ref{table:Chile} report the results of the modified EM test for the model without conditioning variables (\ref{model-1}) for the Japanese and the Chilean industries, respectively, with panel lengths $T=3,4,5$ and null models $M_0=1,\ldots,5$. For all industries in both countries and all panel lengths, we reject the null hypothesis $H_0: M=M_0$ for all $M_0=1,2,3,4,$ and $5$ at the 5 percent significance level, indicating that there are at least six types of intermediate input elasticities. This result reflects a large and persistent heterogeneity in the revenue share of intermediate materials across firms or plants, providing strong evidence for substantial heterogeneity in intermediate input elasticities across the production functions of Japanese and Chilean producers. This finding has implications for the conventional empirical practice of estimating a Cobb-Douglas production function under the assumption that the elasticity parameters are common across firms: given the strong evidence of heterogeneity in production function coefficients, incorporating such heterogeneity in empirical applications is warranted. On the other hand, one reason why the estimated number of technology types is greater than five may be that the assumption of a Cobb-Douglas production function is too restrictive. When the production function is not Cobb-Douglas, the revenue share of intermediate materials generally depends on the value of production inputs \citep{Gandhi2020}. For this reason, we test the number of technology types when the revenue share of intermediate materials depends on the value of capital input as well as labor input by estimating the finite mixture normal regression models (\ref{model-2})--(\ref{model-3}).
Table \ref{table:regressor_1} presents the results of the sequential test and the BIC when we estimate the mixture regression model with $\ln K_{it}$ in (\ref{model-2}) using the data with a panel length of $T=3$. For the Japanese Chemical, Electronics, and Machine industries, the sequential test based on the modified EM test suggests that the data are generated from seven- to nine-component models, while the BIC selects models with at least ten components. For the Chilean Food Products industry, the sequential test indicates a ten-component model while the BIC chooses an eight-component model. For both the Chilean Fabricated Metal Products and Textile industries, the sequential test selects a seven-component model while the BIC selects a six-component model. Table \ref{table:regressor_2} reports the results for the model that includes both $\ln K_{it}$ and $\ln L_{it}$ as regressors. Across the six industries, both the sequential test and the BIC in Table \ref{table:regressor_2} select models with at least five components, providing evidence for substantial heterogeneity in production technology across firms and plants. Comparing the results of Table \ref{table:regressor_2} with those of Table \ref{table:regressor_1}, the selected number of components for the model with both $\ln K_{it}$ and $\ln L_{it}$ is smaller than that for the model with $\ln K_{it}$ only. This suggests that the number of components may be overestimated when the production function specification is not flexible enough because relevant regressors are excluded.
\begin{table}[H] \caption{The Modified EM test and the BIC (\textbf{Dependent Variable}: $\ln \frac{P_{M,t} M_{it}}{P_{Y,t} Y_{it}}$, \textbf{Regressors}: $\ln K_{it}$)} \label{table:regressor_1} \footnotesize \begin{tabular}{l rrrrrrrrrr} \toprule $M_0$ & 1 & 2 & 3 & 4 & 5 & 6 & 7& 8& 9 & 10 \\ \midrule \multicolumn{11}{l}{ Japanese Chemical } \\ \cmidrule{1-11} \textit{EM} & $459.4^{***}$ & $236.36^{***}$ & $125.42^{***}$ & $118.36^{***}$ & $87.63^{***}$ & $53.72^{***}$ & $38.69^{***}$& $34.07^{**}$ & \cellcolor{yellow} $36.47$ & -\\ \textit{BIC} & 1384.76 & 943.61 & 726.53 & 620.32 & 518.86 & 449.92 & 413.49 & 394.46 & 381.48 & \cellcolor{yellow}366.09 \\ \bottomrule \multicolumn{11}{l}{Japanese Electronics} \\ \cmidrule{1-11} \textit{EM} & $560.06^{***}$ & $213.82^{***}$ & $116.29^{***}$ & $78.81^{***}$ & $47.05^{***}$ & $40.77^{***}$ & $27.4^{**}$ & \cellcolor{yellow}$29.02$ & - & - \\ \textit{BIC} & 1332.14 & 788.19 & 593.44 & 495.74 & 434.15 & 406.77 & 385.45 & 372.63& 367.31 & \cellcolor{yellow}351.17 \\ \bottomrule \multicolumn{11}{l}{ Japanese Machine } \\ \cmidrule{1-11} \textit{EM} & $433.19^{***}$ & $202.92^{***}$ & $80.42^{***}$ & $76.82^{***}$ & $53.83^{***}$ & $34.62^{**}$ & \cellcolor{yellow}$55.65$ & - & - & -\\ \textit{BIC} & 1355.6 & 940.49 & 757 & 696.06 & 638.48 & 617.4 & 588.94 & 568.71 & 555.15 & \cellcolor{yellow}544.51 \\ \bottomrule \multicolumn{11}{l}{Chilean Food Products } \\ \cmidrule{1-11} \textit{EM} & $816.06^{***}$ & $489.37^{***}$ & $169.14^{***}$ & $80.88^{***}$ & $80.63^{***}$ & $52.67^{***}$ & $31.29^{***}$ & $17.16^{**}$ & $20.55^{***}$ & \cellcolor{yellow} $-60.46$ \\ \textit{BIC} & 6759.39& 5962.74 & 5499.3& 5356.47 & 5301.31 & 5241.91& 5210.27 & \cellcolor{yellow} 5200.71 & 5210.77 & 5222.29 \\ \bottomrule \multicolumn{11}{l}{ Chilean Fabricated Metal Products} \\ \cmidrule{1-11} \textit{EM} & $199.35^{***}$ & $63.25^{***}$ & $49.24^{***}$ & $30.27^{***}$ & $15.73^{**}$ & $18.25^{**}$ & \cellcolor{yellow} $10.88$ & -
& - & - \\ \textit{BIC} & 1923.64 & 1744.72 & 1699.97 & 1670.93 & 1661.03& \cellcolor{yellow} 1659.54 & 1665.08 & 1669.02 & 1680.96& 1695.54\\ \bottomrule \multicolumn{11}{l}{ Chilean Textile } \\ \cmidrule{1-11} \textit{EM} & $201.86^{***}$ & $95.17^{***}$ & $61.43^{***}$ & $31.17^{***}$ & $14.12^{*}$ & $17.45^{**}$ & \cellcolor{yellow} $7.94$ & - & - & - \\ \textit{BIC} & 1681.91 & 1499.99 & 1424.93 & 1380.93& 1368.65& \cellcolor{yellow}1364.94 & 1365.72 & 1370.72 & 1382.83 & 1392.24 \\ \bottomrule \end{tabular} \begin{tablenotes} \item[1] The estimation is based on the revenue share of intermediate material using the panel data of length $T=3$. \item[2] $~^{*}$, $~^{**}$, $~^{***}$ indicate that the result is significant at the $10\%$, $5\%$, and $1\%$ levels, respectively. \end{tablenotes} \end{table} \begin{table}[H] \caption{The Modified EM test and the BIC (\textbf{Dependent Variable}: $\ln \frac{P_{M,t} M_{it}}{P_{Y,t} Y_{it}}$, \textbf{Regressors}: $\ln K_{it}$ and $\ln L_{it}$)} \label{table:regressor_2} \footnotesize \begin{tabular}{l rrrrrrrrrr} \toprule $M_0$ & 1 & 2 & 3 & 4 & 5 & 6 & 7& 8& 9 & 10 \\ \midrule \multicolumn{11}{l}{ Japanese Chemical } \\ \cmidrule{1-11} \textit{EM} & $412.35^{***}$ & $224.09^{***}$ & $141.59^{***}$ & $132.24^{**}$ & \cellcolor{yellow}$121.56$ & - & - & - & - & - \\ \textit{BIC} & 1294.05 & 905.44 & 705.72 & 587.3& 490.07 & 479.74 & \cellcolor{yellow}389.05 & 390.29 & 382.28 & 372.69 \\ \bottomrule \multicolumn{11}{l}{Japanese Electronics} \\ \cmidrule{1-11} \textit{EM} & $573.11^{***}$ & $218.38^{***}$ & $116.07^{***}$ & $94.76^{**}$ & \cellcolor{yellow}$47.73$ & - & - & - & - & - \\ \textit{BIC} & 1336.69 & 784.95 & 590.73 & 498.55 & 426.23 & 389.91 & 372.05 & 371.64& \cellcolor{yellow}359.15 & 368.25 \\ \bottomrule \multicolumn{11}{l}{ Japanese Machine } \\ \cmidrule{1-11} \textit{EM} & $468.06^{***}$ & $204.01^{***}$ & $93.35^{***}$ & $81.62^{***}$ & $62.00^{***}$ & $37.04^{***}$ & \cellcolor{yellow}$14.21$ & - & - & -
\\ \textit{BIC} & 1360.56 & 915.69 & 736.2& 676.26 & 625.66 & 596.45 & 564.34 & 548.7 & \cellcolor{yellow}536.78 & 539.64 \\ \bottomrule \multicolumn{11}{l}{Chilean Food Products } \\ \cmidrule{1-11} \textit{EM} & $805.09^{***}$ & $478.64^{***}$ & $177.08^{***}$ & $84.13^{***}$ & $80.96^{***}$ & $51.97^{***}$ & $32.3^{**}$ & \cellcolor{yellow}$19.50$ & - & - \\ \textit{BIC} & 6732.11& 5952.7& 5506.55 & 5362.27 & 5309.37 & 5257.78& 5233.37 & \cellcolor{yellow} 5229.9 & 5242.9 & 5258.41 \\ \bottomrule \multicolumn{11}{l}{ Chilean Fabricated Metal Products} \\ \cmidrule{1-11} \textit{EM} & $204.45^{***}$ & $63.57^{***}$ & $49.42^{***}$ & $28.61^{***}$ & \cellcolor{yellow}$18.32$ & - & - & - & - & - \\ \textit{BIC} & 1926.06 & 1747.29 & 1709.44 & 1685.39 & \cellcolor{yellow}1678.71 & 1680.54 & 1685.02 & 1696.19 & 1703.21 & 1723.56 \\ \bottomrule \multicolumn{11}{l}{ Chilean Textile } \\ \cmidrule{1-11} \textit{EM} & $203.69^{***}$ & $90.69^{***}$ & $58.4^{***}$ & $32.55^{***}$ & \cellcolor{yellow}$16.19$ & - & - & - & - & - \\ \textit{BIC} & 1673.99 & 1495.55 & 1431.18 & 1394.54& 1382.59& 1373.03 & \cellcolor{yellow}1368.8 & 1382.42 & 1394.3 & 1394.09 \\ \bottomrule \end{tabular} \begin{tablenotes} \item[1] The estimation is based on the revenue share of intermediate material using the panel data of length $T=3$. \item[2] $~^{*}$, $~^{**}$, $~^{***}$ indicate that the result is significant at the $10\%$, $5\%$, and $1\%$ levels, respectively. \end{tablenotes} \end{table} \subsection{Score function for testing $H_0:m = 1$ against $H_A:m=2$}\label{sec:appendix_score_1} Let $H^j(\cdot)$ denote the $j$th-order Hermite polynomial: $H^1(t) = t$, $H^2(t) = t^2 - 1$, $H^3(t) = t^3 - 3t$, and $H^4(t) = t^4 - 6t^2 + 3$.
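These polynomials arise as scaled derivatives of the normal density via the identity recorded in the next display. As a sanity check, the low-order cases of that identity can be verified symbolically; the sketch below (illustrative code, not part of the paper) does so with sympy:

```python
import sympy as sp

t = sp.symbols('t', real=True)
mu = sp.symbols('mu', real=True)
s2 = sp.symbols('s2', positive=True)  # s2 stands for sigma^2
sigma = sp.sqrt(s2)

# Normal density (1/sigma) * phi((t - mu)/sigma), to be evaluated at mu = 0.
f = sp.exp(-(t - mu)**2 / (2 * s2)) / sp.sqrt(2 * sp.pi * s2)

# Hermite polynomials H^1, ..., H^4 as defined in the text.
H = {1: lambda x: x,
     2: lambda x: x**2 - 1,
     3: lambda x: x**3 - 3*x,
     4: lambda x: x**4 - 6*x**2 + 3}

def lhs(m, l):
    """m-th mu-derivative and l-th sigma^2-derivative of f, divided by f."""
    d = f
    for _ in range(m):
        d = sp.diff(d, mu)
    for _ in range(l):
        d = sp.diff(d, s2)
    return sp.simplify((d / f).subs(mu, 0))

def rhs(m, l):
    """(1/2)^l * sigma^{-(m+2l)} * H^{m+2l}(t/sigma)."""
    return sp.Rational(1, 2)**l * sigma**(-(m + 2*l)) * H[m + 2*l](t / sigma)

# Check every (m, l) with 1 <= m + 2l <= 4 covered by H^1, ..., H^4.
for m, l in [(1, 0), (2, 0), (0, 1), (1, 1), (0, 2)]:
    assert sp.simplify(lhs(m, l) - rhs(m, l)) == 0
```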
As shown in the supplementary material of \cite{Kasahara2015a}, the derivatives of $\frac{1}{\sigma} \phi(\frac{t}{\sigma})$ satisfy $$\frac{\nabla_{\mu^m}\nabla_{ (\sigma^2)^{\ell}} \{ \frac{1}{\sigma} \phi(\frac{t}{\sigma}) \} }{\{ \frac{1}{\sigma} \phi(\frac{t}{\sigma}) \}} = \left(\frac{1}{2}\right)^\ell \left(\frac{1}{\sigma}\right)^{m+2\ell} H^{m+2\ell}\left(\frac{t}{\sigma}\right). $$ Let \begin{equation}\label{eq:Hermite_polynomial} \begin{split} f^* = f(\bs{w};\gamma^*,\theta^* ), \quad \nabla f^* = \nabla f(\bs{w};\gamma^*,\theta^* ) , \quad H^{j*}_{i,t} = \frac{1}{\sigma^* j!} H^{j}\left(\frac{y_{it} - \bs{x}_{it}^\top \bs{\beta}^* - \bs{z}_{it}^\top \bs{\gamma}^* - \mu^* }{\sigma^*} \right); \end{split} \end{equation} then the first-order derivatives of the density function are \begin{align*} \nabla_{\mu} f^* & = f^* \sum_{t=1}^T \frac{1}{\sigma^*} H^{1*}_{i,t} ; \quad \nabla_{\sigma^2} f^* = f^* \sum_{t=1}^T \frac{1}{2} \frac{1}{(\sigma^*)^2} H^{2*}_{i,t} ;\\ \nabla_{\bs{\beta}} f^* & = f^* \sum_{t=1}^T \frac{1}{\sigma^*} H^{1*}_{i,t} \bs{x}_{it}; \quad \nabla_{\bs{\gamma}} f^* = f^* \sum_{t=1}^T \frac{1}{\sigma^*} H^{1*}_{i,t} \bs{z}_{it} .
\end{align*} The score function defined in (\ref{eq:s_1}) is then written in terms of the Hermite polynomials: \begin{equation}\label{eq:s_1_hermite} \begin{split} \bs s_{\bs \eta } = \begin{pmatrix} s_{\mu } \\ s_{\sigma } \\ \bs s_{\bs\beta } \\ \bs s_{\bs \gamma } \end{pmatrix}= \begin{pmatrix} \sum_{t=1}^T H^{1*}_{i,t} \\ \sum_{t=1}^T H^{2*}_{i,t} \\ \sum_{t=1}^T H^{1*}_{i,t}\bs x_{it}\\ \sum_{t=1}^T H^{1*}_{i,t}\bs z_{it}\\ \end{pmatrix},\qquad \bs{s}_{\bs{\lambda} \bs\lambda} = \begin{pmatrix} s_{\lambda_{\mu \mu }}\\ s_{\lambda_{\mu \sigma }}\\ s_{\lambda_{\sigma \sigma }}\\ \bs s_{\lambda_{\mu\bs \beta }}\\ \bs s_{\lambda_{\sigma\bs \beta }}\\ \bs s_{\lambda_{\bs\beta\bs\beta}} \end{pmatrix}, \end{split} \end{equation} where \[ \begin{split} \begin{pmatrix} s_{\lambda_{\mu \mu } }\\ s_{\lambda_{\mu \sigma }}\\ s_{\lambda_{\sigma \sigma }}\\ \bs s_{\lambda_{\mu\bs \beta }}\\ \bs s_{\lambda_{\sigma\bs \beta }}\\ \end{pmatrix} &= \begin{pmatrix} \sum_{t=1}^T H^{2*}_{i,t} + \frac{1}{2} \sum_{t=1}^T \sum_{s \neq t} H^{1*}_{i,t} H^{1*}_{i,s} \\ 3 \sum_{t=1}^T H^{3*}_{i,t} + \sum_{t=1}^T \sum_{s \neq t} H^{1*}_{i,t} H^{2*}_{i,s} \\ 3 \sum_{t=1}^T H^{4*}_{i,t} + \frac{1}{2} \sum_{t=1}^T \sum_{s \neq t} H^{2*}_{i,t} H^{2*}_{i,s} \\ 2 \sum_{t=1}^T H^{2*}_{i,t}\bs x_{it} + \sum_{t=1}^T \sum_{s \neq t} H^{1*}_{i,t}H^{1*}_{i,s}\bs x_{it} \\ 3 \sum_{t=1}^T H^{3*}_{i,t}\bs x_{it} + 2 \sum_{t=1}^T\sum_{s \neq t} H^{1*}_{i,t}H^{2*}_{i,s}\bs x_{it} \end{pmatrix}\quad\text{and} \\ \bs s_{\lambda_{\bs\beta\bs\beta}} & = \begin{pmatrix} \sum_{t=1}^T H^{2*}_{i,t}x^2_{it,1} + \frac{1}{2} \sum_{t=1}^T \sum_{s \neq t} H^{1*}_{i,t}x_{it,1} H^{1*}_{i,s} x_{is,1}\\ \vdots \\ \sum_{t=1}^T H^{2*}_{i,t}x^2_{it,q} + \frac{1}{2} \sum_{t=1}^T \sum_{s \neq t} H^{1*}_{i,t}x_{it,q} H^{1*}_{i,s} x_{is,q} \\ 2 \sum_{t=1}^T H^{2*}_{i,t}x_{it,1} x_{it,2} + \sum_{t=1}^T \sum_{s \neq t} H^{1*}_{i,t}x_{it,1} H^{1*}_{i,s} x_{is,2}\\ \vdots \\ 2 \sum_{t=1}^T
H^{2*}_{i,t}x_{it,1}x_{it,q} + \sum_{t=1}^T \sum_{s \neq t} H^{1*}_{i,t}x_{it,1} H^{1*}_{i,s} x_{is,q} \\ 2 \sum_{t=1}^T H^{2*}_{i,t}x_{it,2} x_{it,3} + \sum_{t=1}^T \sum_{s \neq t} H^{1*}_{i,t}x_{it,2} H^{1*}_{i,s} x_{is,3}\\ \vdots \\ 2 \sum_{t=1}^T H^{2*}_{i,t}x_{it,q-1}x_{it,q} + \sum_{t=1}^T \sum_{s \neq t} H^{1*}_{i,t}x_{it,q-1} H^{1*}_{i,s} x_{is,q} \end{pmatrix}. \end{split} \] When $T = 1$, the score functions are as follows: \begin{equation} \bs s_{\bs\eta } = \begin{pmatrix} s_{\mu } \\ s_{\sigma } \\ \bs s_{\bs\beta } \\ \bs s_{\bs \gamma } \end{pmatrix} = \begin{pmatrix} H^{1*}_{i} \\ H^{2*}_{i} \\ H^{1*}_{i}\bs x_{i}\\ H^{1*}_{i}\bs z_{i}\\ \end{pmatrix},\quad \begin{pmatrix} s_{\lambda_{\mu \mu } }\\ s_{\lambda_{\mu \sigma }}\\ s_{\lambda_{\sigma \sigma }}\\ \bs s_{\lambda_{\mu\bs \beta }}\\ \bs s_{\lambda_{\sigma\bs \beta }}\\ \end{pmatrix} = \begin{pmatrix} H^{2*}_{i} \\ 3 H^{3*}_{i} \\ 3 H^{4*}_{i} \\ 2 H^{2*}_{i}\bs x_{i} \\ 3 H^{3*}_{i}\bs x_{i} \end{pmatrix},\quad \text{and } \bs s_{\lambda_{\bs\beta\bs\beta}} = \begin{pmatrix} H^{2*}_{i}x^2_{i,1} \\ \vdots \\ H^{2*}_{i}x^2_{i,q} \\ 2 H^{2*}_{i}x_{i,1} x_{i,2} \\ \vdots \\ 2 H^{2*}_{i}x_{i,1}x_{i,q} \\ 2 H^{2*}_{i}x_{i,2} x_{i,3}\\ \vdots \\ 2 H^{2*}_{i}x_{i,q-1}x_{i,q} \\ \end{pmatrix}. \end{equation} Notice that $s_{\sigma }$ and $ s_{\lambda_{\mu \mu }} $ are perfectly collinear; therefore, the Fisher information matrix associated with the proposed score function is singular under this reparameterization for data with $T=1$. \subsection{Score function for testing $H_0:m = M_0$ against $H_A:m=M_0 + 1$}\label{sec:appendixb:score_m} As in the homogeneity test, the derivative of the reparameterized density w.r.t.\ $\lambda$ at $\psi^{h*}_{\tau}$ is zero. We impose the constraint $\pi^{M_0} = 1 - \sum_{j=1}^{M_0 - 1} \pi^j$.
The score functions $s_{\eta i}$ contain the first-order derivatives w.r.t.\ $\pi$, $\gamma$, and $\nu$ at $\psi^{h*}_{\tau}$: \begin{equation} \begin{split} \nabla_{\pi^j} l^h(\bs{w};\psi^{h*}_{\tau},\tau) & = \frac{f(\bs{w};\gamma^*,\theta_0^{j*}) - f(\bs{w};\gamma^*,\theta_0^{M_0 *}) }{\sum_{l=1}^{M_0} \alpha_0^{l*} f(\bs{w};\gamma^*,\theta_0^{l*})};\\ \nabla_{\gamma} l^h(\bs{w};\psi^{h*}_{\tau},\tau) & = \frac{ \sum_{j=1}^{M_0} \alpha_0^{j*} \nabla_{\gamma} f(\bs{w};\gamma^*,\theta_0^{j*})}{\sum_{l=1}^{M_0} \alpha_0^{l*} f(\bs{w};\gamma^*,\theta_0^{l*})};\\ \nabla_{\nu} l^h(\bs{w}; \psi^{h*}_{\tau},\tau) & = \frac{\nabla_{\theta} f(\bs{w};\gamma^*,\theta_0^{h*})}{\sum_{l=1}^{M_0} \alpha_0^{l*} f(\bs{w};\gamma^*,\theta_0^{l*})}. \end{split} \end{equation} Define $H^{b*}_{j,i,t}$ as an abridged expression for $\frac{1}{b!} \frac{1}{\sigma_0^{j*} } H^{b}(\frac{y_{it} - \mu_0^{j*} - x_{it}' \beta_0^{j*} - z_{it}' \gamma^* }{\sigma_0^{j*}}) $. \\ Define the weight $w_{i}^{j*}$ as \begin{equation} w_{i}^{j*} = \frac{\alpha_0^{j*} f(\{\bs{W}_{it}\}^T_{t=1};\gamma^*,\theta_0^{j*})}{ f_{M_0}(\{\bs{W}_{it}\}^T_{t=1};\vartheta_{M_0}^*)}, \quad j=1,\ldots,M_0, \end{equation} where $f_{M_0}(\{\bs{W}_{it}\}^T_{t=1};\vartheta_{M_0}^*)$ is defined by equation (\ref{eq:fm0}).
Given the derivatives above, the score functions are: \begin{equation} \begin{split} s_{\alpha i} = \begin{pmatrix} \frac{f(\bs{w} | \theta_0^{1*}) - f(\bs{w} | \theta^{M_0*}_0) }{\sum_{l} \alpha_0^{l*} f(\bs{w} | \theta^{l*}_0)}\\ \vdots \\ \frac{f(\bs{w} | \theta_0^{M_0-1 *}) - f(\bs{w} | \theta_0^{M_0*}) }{\sum_{l} \alpha_0^{l*} f(\bs{w} | \theta^{l*}_0)} \end{pmatrix}, s_{\mu i} = \begin{pmatrix} w_{i}^{1*} \sum_{t=1}^T H^{1*}_{1,i,t} \\ \vdots \\ w_{i}^{M_0*} \sum_{t=1}^T H^{1*}_{M_0,i,t}\end{pmatrix}, \\ s_{\beta i} = \begin{pmatrix} w_{i}^{1 *} \sum_{t=1}^T H^{1*}_{1,i,t}x_{it}\\ \vdots \\ w_{i}^{M_0 *} \sum_{t=1}^T H^{1*}_{M_0,i,t}x_{it} \end{pmatrix}, s_{\sigma i} = \begin{pmatrix} w_{i}^{1 *} \sum_{t=1}^T H^{2*}_{1,i,t} \\ \vdots \\ w_{i}^{M_0 *} \sum_{t=1}^T H^{2*}_{M_0,i,t} \end{pmatrix}, s_{\gamma i} = \begin{pmatrix} w_i^{1 *} \sum_{t=1}^T H^{1*}_{1,i,t}z_{it}\\ \vdots \\ w_i^{M_0 *} \sum_{t=1}^T H^{1*}_{M_0,i,t}z_{it} \end{pmatrix}; \end{split} \end{equation} and $\tilde{S}_{\lambda,\eta} = (( {S}_{\lambda,\eta}^1)^\top,\ldots,({S}_{\lambda,\eta}^{M_0})^\top)^\top \sim N(0,\tilde{\bs{\mathcal{I}}}_{\lambda,\eta})$. $\tilde{S}_{\lambda,\eta}$ is an $\R^{M_0 q_{\lambda}}$-vector, and ${S}^h_{\lambda,\eta} \in \R^{q_{\lambda}}$, $q_{\lambda} = (q+2)(q+1)/2$. Define $S^h_{\lambda} = n^{-1/2} \sum_{i=1}^N s_{\lambda i}^h$. In addition, define ${\bs{\mathcal{I}}}^h_{\lambda,\eta} = E[{S}_{\lambda,\eta}^h ({S}_{\lambda,\eta}^h)']$. Define $W_{\lambda,\eta}^h = ({\bs{\mathcal{I}}}^h_{\lambda,\eta} )^{-1}{S}_{\lambda,\eta}^h$, so that $W_{\lambda,\eta}^h \sim N(0, ({\bs{\mathcal{I}}}^h_{\lambda, \eta})^{-1})$.
\begin{equation} \begin{split} s^h_{\lambda_{\mu \sigma} i} & = w_i^{h *} \begin{pmatrix} \sum_{t=1}^T H^{2*}_{h,i,t} + \frac{1}{2} \sum_{t=1}^T \sum_{s \neq t} H^{1*}_{h,i,t} H^{1*}_{h,i,s} \\ 3 \sum_{t=1}^T H^{4*}_{h,i,t} + \frac{1}{2} \sum_{t=1}^T \sum_{s \neq t} H^{2*}_{h,i,t} H^{2*}_{h,i,s} \\ 3 \sum_{t=1}^T H^{3*}_{h,i,t} + \sum_{t=1}^T \sum_{s \neq t} H^{1*}_{h,i,t} H^{2*}_{h,i,s} \\ 2 \sum_{t=1}^T H^{2*}_{h,i,t}x_{it} + \sum_{t=1}^T \sum_{s \neq t} H^{1*}_{h,i,t}x_{it} H^{1*}_{h,i,s} \\ 3 \sum_{t=1}^T H^{3*}_{h,i,t}x_{it} + 2 \sum_{t=1}^T\sum_{s \neq t} H^{1*}_{h,i,t}x_{it} H^{2*}_{h,i,s} \end{pmatrix} , \\ s^h_{\lambda_{\beta}i} & = w_i^{h *} \begin{pmatrix} \sum_{t=1}^T H^{2*}_{h,i,t}x^2_{it,1} + \frac{1}{2} \sum_{t=1}^T \sum_{s \neq t} H^{1*}_{h,i,t}x_{it,1} H^{1*}_{h,i,s} x_{is,1}\\ \vdots \\ \sum_{t=1}^T H^{2*}_{h,i,t}x^2_{it,q} + \frac{1}{2} \sum_{t=1}^T \sum_{s \neq t} H^{1*}_{h,i,t}x_{it,q} H^{1*}_{h,i,s} x_{is,q} \\ 2 \sum_{t=1}^T H^{2*}_{h,i,t}x_{it,1} x_{it,2} + \sum_{t=1}^T \sum_{s \neq t} H^{1*}_{h,i,t}x_{it,1} H^{1*}_{h,i,s} x_{is,2}\\ \vdots \\ 2 \sum_{t=1}^T H^{2*}_{h,i,t}x_{it,1}x_{it,q} + \sum_{t=1}^T \sum_{s \neq t} H^{1*}_{h,i,t}x_{it,1} H^{1*}_{h,i,s} x_{is,q} \\ 2 \sum_{t=1}^T H^{2*}_{h,i,t}x_{it,2} x_{it,3} + \sum_{t=1}^T \sum_{s \neq t} H^{1*}_{h,i,t}x_{it,2} H^{1*}_{h,i,s} x_{is,3}\\ \vdots \\ 2 \sum_{t=1}^T H^{2*}_{h,i,t}x_{it,q-1}x_{it,q} + \sum_{t=1}^T \sum_{s \neq t} H^{1*}_{h,i,t} x_{it,q-1} H^{1*}_{h,i,s} x_{is,q} \end{pmatrix}. \end{split} \end{equation} \subsection{Score function} Define the score functions $s_{\eta i}$ to be the score functions relevant to $\eta$, and define $s_{\lambda i}$ to be those relevant to $\lambda$. The reparameterized score functions are $s_i = (s_{\eta i}^\top,s_{\lambda i}^\top)^\top$, where $s_{\eta i} := (s_{\mu i} ,s_{\beta i}^\top,s_{\sigma i},s_{\bs{\gamma} i}^\top)^\top$ and $s_{\lambda i} = (s_{\lambda_{\mu \sigma} i}^\top,s_{\lambda_{\beta} i}^\top)^\top$.
The score functions are as described in Appendix~\ref{sec:appendixb}, where $H^b(\cdot)$ is defined as the $b$-th order Hermite polynomial: $H^1(t) = t$, $H^2(t) = t^2 - 1$, $H^3(t) = t^3 - 3t$, and $H^4(t) = t^4 - 6t^2 + 3$. We use $H^{b*}_{i,t}$ as shorthand for $\frac{1}{b!} \frac{1}{\sigma^* } H^{b}(\frac{y_{it} - \mu^* - x_{it}^\top \beta^* - z_{it}^\top \bs{\gamma}^* }{\sigma^*}) .$ \begin{equation}\label{score_1} \begin{split} s_{\eta i} = \begin{pmatrix} \sum_{t=1}^T H^{1*}_{i,t} \\ \sum_{t=1}^T H^{1*}_{i,t}x_{it}\\ \sum_{t=1}^T H^{2*}_{i,t} \\ \sum_{t=1}^T H^{1*}_{i,t}z_{it}\\ \end{pmatrix},\quad s_{\lambda_{\mu \sigma} i} = \begin{pmatrix} \sum_{t=1}^T H^{2*}_{i,t} + \frac{1}{2} \sum_{t=1}^T \sum_{s \neq t} H^{1*}_{i,t} H^{1*}_{i,s} \\ 3 \sum_{t=1}^T H^{4*}_{i,t} + \frac{1}{2} \sum_{t=1}^T \sum_{s \neq t} H^{2*}_{i,t} H^{2*}_{i,s} \\ 3 \sum_{t=1}^T H^{3*}_{i,t} + \sum_{t=1}^T \sum_{s \neq t} H^{1*}_{i,t} H^{2*}_{i,s} \\ 2 \sum_{t=1}^T H^{2*}_{i,t}x_{it} + \sum_{t=1}^T \sum_{s \neq t} H^{1*}_{i,t}x_{it} H^{1*}_{i,s} \\ 3 \sum_{t=1}^T H^{3*}_{i,t}x_{it} + 2 \sum_{t=1}^T\sum_{s \neq t} H^{1*}_{i,t}x_{it} H^{2*}_{i,s} \end{pmatrix} , \\ s_{\lambda_{\beta}i} = \begin{pmatrix} \sum_{t=1}^T H^{2*}_{i,t}x^2_{it,1} + \frac{1}{2} \sum_{t=1}^T \sum_{s \neq t} H^{1*}_{i,t}x_{it,1} H^{1*}_{i,s} x_{is,1}\\ \vdots \\ \sum_{t=1}^T H^{2*}_{i,t}x^2_{it,q} + \frac{1}{2} \sum_{t=1}^T \sum_{s \neq t} H^{1*}_{i,t}x_{it,q} H^{1*}_{i,s} x_{is,q} \\ 2 \sum_{t=1}^T H^{2*}_{i,t}x_{it,1} x_{it,2} + \sum_{t=1}^T \sum_{s \neq t} H^{1*}_{i,t}x_{it,1} H^{1*}_{i,s} x_{is,2}\\ \vdots \\ 2 \sum_{t=1}^T H^{2*}_{i,t}x_{it,1}x_{it,q} + \sum_{t=1}^T \sum_{s \neq t} H^{1*}_{i,t}x_{it,1} H^{1*}_{i,s} x_{is,q} \\ 2 \sum_{t=1}^T H^{2*}_{i,t}x_{it,2} x_{it,3} + \sum_{t=1}^T \sum_{s \neq t} H^{1*}_{i,t}x_{it,2} H^{1*}_{i,s} x_{is,3}\\ \vdots \\ 2 \sum_{t=1}^T H^{2*}_{i,t}x_{it,q-1}x_{it,q} + \sum_{t=1}^T \sum_{s \neq t} H^{1*}_{i,t}x_{it,q-1} H^{1*}_{i,s} x_{is,q} \end{pmatrix} \end{split}, \end{equation} where
$x_{it,k}$ denotes the $k$-th component of the vector $x_{it} \in \R^q$. Collect the relevant variables and define \begin{equation}\label{t1} t_n(\bs{\psi}_{\alpha},\alpha) := \begin{pmatrix} n^{1/2} (\eta - \eta^*) \\ n^{1/2} \alpha ( 1- \alpha) v(\bs{\lambda}) \end{pmatrix}. \end{equation} Define the vector of outer products of $\bs{\lambda}$ with itself as \begin{equation} \begin{split} v(\bs{\lambda}) & = (\lambda_{\mu}^2,\lambda_{\sigma}^2, \lambda_{\mu} \lambda_{\sigma}, \lambda_{\mu} \lambda_{\beta_1} , \ldots,\lambda_{\mu} \lambda_{\beta_q}, \lambda_{\sigma} \lambda_{\beta_1} , \ldots,\lambda_{\sigma} \lambda_{\beta_q}, \lambda_{\beta_1}^2,\ldots,\lambda_{\beta_q}^2, \\ & \lambda_{\beta_1} \lambda_{\beta_2} , \ldots,\lambda_{\beta_1} \lambda_{\beta_q},\lambda_{\beta_2} \lambda_{\beta_3},\ldots,\lambda_{\beta_2} \lambda_{\beta_q}, \ldots, \lambda_{\beta_{q-1}} \lambda_{\beta_{q}})^\top. \end{split} \end{equation} Define the normalized score $S_n := n^{-1/2} \sum_{i=1}^N s_i$ and the information matrix as $\bs{\mathcal{I}}_n := \frac{1}{n} \sum_{i=1}^N s_i s_i^\top$. \subsection{Score function proof} At each iteration $k$, given $\alpha_k^j,\mu_k^j,\sigma_k^j$, and $\beta^j_k$, first calculate the squared error $r_{i,k}^j = \sum_{t=1}^T \frac{1}{2}(\frac{y_{it} -\mu^j_k - x'_{it}\beta^j_k - z'_{it} \gamma_k}{\sigma^j_k})^2$, conditional on the type-specific parameters.
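A minimal numerical sketch of this E-step (illustrative names, not the authors' code; scalar regressors are used in place of the vectors $x_{it}$, $z_{it}$ to keep the sketch short): compute the squared errors $r_i^j$, the component likelihoods, and the posterior weights.

```python
import math

# E-step sketch: for each firm i and type j, compute r_i^j and the
# posterior probability that firm i belongs to type j.
def e_step(y, x, z, alpha, mu, beta, sigma, gamma):
    m, weights = len(alpha), []
    for yi, xi, zi in zip(y, x, z):          # loop over firms i
        T, lik = len(yi), []
        for j in range(m):
            # r_i^j = sum_t (1/2) ((y_it - mu_j - x_it*b_j - z_it*g)/s_j)^2
            r = sum(0.5 * ((yi[t] - mu[j] - xi[t] * beta[j] - zi[t] * gamma)
                           / sigma[j]) ** 2 for t in range(T))
            # alpha_j * (sqrt(2*pi)*sigma_j)^(-T) * exp(-r_i^j)
            lik.append(alpha[j] * (math.sqrt(2 * math.pi) * sigma[j]) ** (-T)
                       * math.exp(-r))
        total = sum(lik)                     # mixture likelihood of firm i
        weights.append([l / total for l in lik])
    return weights                           # each row sums to one

w = e_step(y=[[0.1, 0.2]], x=[[1.0, 1.0]], z=[[0.0, 0.0]],
           alpha=[0.5, 0.5], mu=[0.0, 1.0], beta=[0.1, 0.1],
           sigma=[1.0, 1.0], gamma=0.0)
assert abs(sum(w[0]) - 1.0) < 1e-12 and w[0][0] > w[0][1]
```

By construction, each firm's weights sum to one, and a firm whose residuals are smaller under type 1 receives a larger type-1 weight.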
For the finite mixture model, the likelihood for each $i$ is written \begin{align*} f(\bs{w};\psi_0) & = \sum_{j=1}^m \alpha^j \left\{ \prod_{t=1}^T \frac{1}{\sigma^j} \phi\left(\frac{ y_{it} -\mu^j - x'_{it}\beta^j - z'_{it} \gamma }{\sigma^j }\right)\right\} \\ & = \sum_{j=1}^m \alpha^j \left\{ \prod_{t=1}^T \frac{1}{\sqrt{2\pi}\sigma^j} \exp\left\{ \frac{- (y_{it} -\mu^j - x'_{it}\beta^j - z'_{it} \gamma)^2}{2 (\sigma^j)^2} \right\} \right\} \\ & = \sum_{j=1}^m \alpha^j \left\{ \frac{1}{(\sqrt{2\pi}\sigma^j)^T} \exp\left\{ \frac{-\sum_{t=1}^T (y_{it} -\mu^j - x'_{it}\beta^j - z'_{it} \gamma)^2}{2 (\sigma^j)^2}\right\} \right\} \\ & = \sum_{j=1}^m \alpha^j \left\{ \frac{1}{(\sqrt{2\pi} \sigma^j)^T} \exp\{-r_i^j\} \right\}. \end{align*} Define the weight matrix $W$ for the Bayesian update: in the $m \times N$ matrix $W$, element $(j,i)$ represents the posterior probability that firm $i$ belongs to type $j$. Its transpose is $$ W' = \begin{pmatrix} \frac{\alpha^1 f(\{\bs{W}_{1t}\}_{t=1}^T; \gamma,\theta^1)}{\sum_{l=1}^m \alpha^l f(\{\bs{W}_{1t}\}_{t=1}^T; \gamma,\theta^l)} &\frac{\alpha^2 f(\{\bs{W}_{1t}\}_{t=1}^T; \gamma,\theta^2)}{\sum_{l=1}^m \alpha^l f(\{\bs{W}_{1t}\}_{t=1}^T; \gamma,\theta^l)}& \ldots & \frac{\alpha^m f(\{\bs{W}_{1t}\}_{t=1}^T; \gamma,\theta^m)}{\sum_{l=1}^m \alpha^l f(\{\bs{W}_{1t}\}_{t=1}^T; \gamma,\theta^l)} \\ \vdots & \vdots & \ddots & \vdots \\ \frac{\alpha^1 f(\{\bs{W}_{Nt}\}_{t=1}^T; \gamma,\theta^1)}{\sum_{l=1}^m \alpha^l f(\{\bs{W}_{Nt}\}_{t=1}^T; \gamma,\theta^l)} &\frac{\alpha^2 f(\{\bs{W}_{Nt}\}_{t=1}^T; \gamma,\theta^2)}{\sum_{l=1}^m \alpha^l f(\{\bs{W}_{Nt}\}_{t=1}^T; \gamma,\theta^l)}& \ldots & \frac{\alpha^m f(\{\bs{W}_{Nt}\}_{t=1}^T; \gamma,\theta^m)}{\sum_{l=1}^m \alpha^l f(\{\bs{W}_{Nt}\}_{t=1}^T; \gamma,\theta^l)} \end{pmatrix}, $$ where $w_{ij}^{(k)} = \frac{\alpha^{j(k)} f(\bs{w}; \gamma^{(k)},\theta^{j(k)})}{\sum_{l=1}^m \alpha^{l(k)} f(\bs{w}; \gamma^{(k)},\theta^{l(k)})}$ is the posterior probability of observation $i$ being of type $j$ in the $k$-th iteration. Then update $\alpha^j, \mu^j, \sigma^j$ using $W$ at the end of each iteration, by the following formulas.
Define $X_{it} = (X_{it,1},X_{it,2},\ldots,X_{it,q})'$, $\tilde{X}_{it} = (1,X_{it,1},X_{it,2},\ldots,X_{it,q})'$ and $Z_{it} = (Z_{it,1},Z_{it,2},\ldots,Z_{it,p})'$. The mixing proportions are updated by $\alpha^{j(k+1)} = \frac{1}{N} \sum_{i=1}^N w_{ij}^{(k)}$, the mean of the $j$-th column of $W'$. $\gamma$ is updated by $$\gamma^{(k+1)} = \Big(\sum_{i=1}^N \sum_{t=1}^T Z_{it} Z_{it}^\top \Big)^{-1} \Big(\sum_{i=1}^{N}\sum_{t=1}^T Z_{it}\big(Y_{it} - \sum_{j=1}^{M_0} w_{ij}^{(k)} (\mu^{j(k)} + X_{it}^\top\beta^{j(k)})\big)\Big), $$ and $\mu^j$ and $\beta^j$ are updated using $$(\mu^{j(k+1)},(\beta^{j(k+1)})')' = \Big(\sum_{i=1}^N \sum_{t=1}^T w_{ij}^{(k)} \tilde{X}_{it} \tilde{X}_{it}^\top \Big)^{-1} \Big(\sum_{i=1}^{N}\sum_{t=1}^T w_{ij}^{(k)} \tilde{X}_{it}\big(Y_{it} - Z_{it}^\top \gamma^{(k+1)} \big)\Big) .$$ $\sigma^{j(k+1)}$ is a weighted average of the squared residuals: $$(\sigma^{j(k+1)})^{2} = \frac{ \sum_{i=1}^N w_{ij}^{(k)} \sum_{t=1}^T (y_{it} -\mu^{j(k+1)} - x'_{it}\beta^{j(k+1)} - z'_{it} \gamma^{(k+1)} )^2 }{ T \sum_{i=1}^N w_{ij}^{(k)} } .$$ \subsection{Chile LRTS Results} \section{Proof of unbounded likelihood} \begin{proof} For finite odd values of $T$, $1-F_{T-1}(s)= \left(1+\frac{s}{2}+\frac{(s/2)^2}{2!}+\cdots+\frac{(s/2)^{(T-3)/2}}{((T-3)/2)!}\right) e^{-s/2}$, and $\Pr(n (s_{(1)}^2 )^{T/2}>\epsilon) \rightarrow 0$ as $n\rightarrow \infty$. \begin{center} $\star\star\star$ \end{center} We now show that if the result holds for $T = k$, it also holds for $T = k+1$. For $T = k+1$, let $f_{k}(x)$ denote the pdf of the $\chi$-square distribution with $k$ degrees of freedom; then $\Pr(n (s_{(1)}^2 )^{(k+1)/2}>\epsilon)$ can be written as \[\begin{split} [1-F_{{k}}( (\epsilon/n)^{2/(k+1)} ) ]^n & = \Big[1 - \int_{0}^{ (\epsilon/n)^{2/(k+1)} } f_{k}(x) d x \Big]^n\\ & = \Big[1 - \int_{0}^{ (\epsilon/n)^{2/(k+1)} } \frac{1}{2^{k/2} \Gamma(\frac{k}{2})} x^{k/2-1} e^{- x/2} d x \Big]^n . \end{split} \] Note that $(1 - x)^{\frac{1}{x}} \to \frac{1}{e}$ as $ x \to 0 $ and $$\Big[1-F_{{k}}\big( (\epsilon/n)^{2/ (k+1)} \big) \Big]^n = \Big\{ \Big[1-F_{{k}}\big( (\epsilon/n)^{2/(k+1)} \big) \Big]^{\frac{1}{ F_{{k}}( (\epsilon/n)^{2/ (k+1)} )} }\Big\}^{ \frac{ F_{{k}}( (\epsilon/n)^{2/(k+1)} )}{1 / n} }.
$$ It suffices to show that, in the limit, $n F_{{k}}( (\epsilon/n)^{2/(k+1)} ) \to \infty$. This is equivalent to showing that $$\frac{ F_{{k}}( (\epsilon x)^{2/(k+1)} )}{x} \to \infty \text{ as } x \to 0. $$ We show the result holds by applying L'Hôpital's rule: \[\begin{split} \lim_{x \to 0} \frac{ F_{{k}}( (\epsilon x )^{2/(k+1)} )}{x } = \lim_{x \to 0} f_{k}( (\epsilon x )^{2/(k+1)} ) \epsilon^{2/(k+1)} x^{2/(k+1)-1}, \end{split}\] where $f_{k}$ is the pdf of the $\chi$-square distribution with $k$ degrees of freedom. Note that $f_{k}((\epsilon x )^{2/(k+1)} ) = \frac{1}{2^{k/2} \Gamma(k/2)} ((\epsilon x )^{2/(k+1)} )^{k/2-1} e^{- ((\epsilon x )^{2/(k+1)} )/2}$, then $$ \lim_{x \to 0} f_{k}( (\epsilon x )^{2/(k+1)} ) x^{2/(k+1)-1} = \lim_{x \to 0} C_{k,\epsilon} e^{- ((\epsilon x )^{2/(k+1)} )/2} x^{ -\frac{1}{k+1}} = \infty, $$ where $C_{k,\epsilon} = \frac{\epsilon^{k/(k+1)}}{2^{k/2} \Gamma(k/2)} $, because $e^{- ((\epsilon x )^{2/(k+1)} )/2}$ is bounded and $x^{ -\frac{1}{k+1}} \to \infty$ as $x \to 0$ for $k=1,2 \ldots $. With the above result, we have \[ \begin{split} \lim_{n\to \infty} \Big\{ \Big[1-F_{{k}}\big( (\epsilon/n)^{2/(k+1)} \big) \Big]^{\frac{1}{ F_{{k}}( (\epsilon/n)^{2/ (k+1)} )} }\Big\}^{ \frac{ F_{{k}}( (\epsilon/n)^{2/(k+1)} )}{1 / n} } = 0, \end{split} \] because $\Big\{ \Big[1-F_{{k}}\big( (\epsilon/n)^{2/(k+1)} \big) \Big]^{\frac{1}{ F_{{k}}( (\epsilon/n)^{2/ (k+1)} )} }\Big\}$ is bounded away from $1$ and $\frac{ F_{{k}}( (\epsilon/n)^{2/(k+1)} )}{1 / n} \to \infty$ as $n\to \infty$. Therefore, the result holds for all $T\ge 2$. \end{proof}
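As a numerical illustration of the key step in the proof (a sketch, not part of the paper): for $k=1$, the chi-square CDF with one degree of freedom has the closed form $F_1(x) = \mathrm{erf}(\sqrt{x/2})$, and $n\,F_1((\epsilon/n)^{2/(k+1)})$ indeed grows without bound, which is what forces the limit of $[1-F_k]^n$ to zero.

```python
import math

# For k = 1 (T = 2): n * F_1((eps/n)^{2/(k+1)}) should diverge as n grows,
# using F_1(x) = erf(sqrt(x/2)) for the chi-square CDF with 1 degree of freedom.
def n_times_cdf(n, eps=0.1, k=1):
    x = (eps / n) ** (2.0 / (k + 1))
    return n * math.erf(math.sqrt(x / 2.0))

vals = [n_times_cdf(10 ** p) for p in range(1, 7)]
assert all(a < b for a, b in zip(vals, vals[1:]))  # strictly increasing (~ sqrt(n))
```

The growth rate $\sim\sqrt{2\epsilon n/\pi}$ matches the small-argument expansion of the erf function.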
\section{Introduction} \begin{figure} \centering \includegraphics[width=\columnwidth]{fig1.eps} \caption{BKT temperature computed for the square lattice (gray) and for the Lieb lattice with a half-filled flat band (blue, green and yellow) with different values of the hopping staggering (see Fig.~\ref{fig.example}). Inset: BKT temperature at interactions $0.2\leq |U|\leq 3$. The flat band is isolated from the other bands by a band gap $E_{\rm gap} = \sqrt{8}\delta$. The highest BKT temperatures are obtained when $\delta=0$, corresponding to the situation where the gap between the flat band and dispersive bands closes, resulting in a linear band touching. The BKT temperature for the square lattice (gray) is exponentially suppressed at low interactions, whereas $T_{BKT}$ on the isolated flat band is proportional to $|U|$. All energies are given in units of the average inter-lattice-site hopping energy $t$.} \label{fig.comp} \end{figure} Systems with dispersionless (flat) bands host exotic phenomena, as even small interactions will dominate the kinetic energy. For example, flat bands have been predicted to increase the critical temperature for superconductivity. Bardeen-Cooper-Schrieffer (BCS) theory predicts that the critical temperature is given by $T_c\propto {\rm exp}\left(-\frac{1}{|U| \rho_0(E_F)}\right)$, where $|U|$ is the strength of the effective attractive interaction and $\rho_0(E_F)$ is the density of states at the Fermi surface. In a flat band, where the density of states diverges, $T_c$ is proportional~\cite{Heikkila2011,Khodel1990,Kopnin2011} to $|U|$, implying that the critical temperature can be much higher in flat bands than in dispersive bands at low interaction strengths. However, the BCS critical temperature does not by itself indicate superconductivity, as it is only the critical temperature for Cooper pair formation. The Meissner effect and the possibility of dissipationless transport are also required. 
These are characterized by a nonzero superfluid weight $D_s$ or, equivalently, superfluid stiffness~\cite{Scalapino1993}. Moreover, a nonzero superfluid weight is a necessary condition for a nonzero Berezinsky-Kosterlitz-Thouless (BKT) transition temperature, which is the critical temperature for superconductivity in two dimensions. The superfluid weight is conventionally given by $D_s=n_e/m^*$, where $n_e$ is the total particle density and $m^*$ is the effective mass. In a flat band, single particles localize and $m^*$ diverges, which indicates vanishing superfluid weight. However, in multiband models, the superfluid weight has an additional geometric contribution which can be nonzero even in the case of flat bands~\cite{Peotta2015,Liang2017,Julku2016}. In the isolated band limit, this contribution has been shown~\cite{Peotta2015} to be related to the quantum metric~\cite{Provost:1980,Resta2011,OzawaGoldman2018}. Monte Carlo results are in good agreement with this prediction~\cite{Hofmann2020,Herzog-Arbeitman2021,Peri2021}. Flat band superconductivity has attracted immense interest due to its relevance in magic-angle twisted bilayer graphene~\cite{Cao2018,Cao2018b,Yankowitz2019} and other moiré materials~\cite{Park2021,Shen2020,Cao2020,Balents2020,Andrei2021}. In particular, the potential importance of the geometric contribution to the superfluid weight has been shown in theoretical studies of twisted bilayer graphene~\cite{Xie2020,Julku2020,Hu2019,Torma2021}, and has also been explored experimentally~\cite{Tian2021}. There is, however, a fundamental problem in the relation between the superfluid weight and the quantum metric as presented in previous literature. Consider a gedanken transformation that changes the orbital locations of a lattice model without altering the hopping terms. The superfluid weight is invariant under such transformations. 
On the other hand, the quantum metric depends not only on the tight-binding parameters of the lattice model, but also on the locations of the orbitals. We show that this discrepancy in mean-field theory is resolved by properly accounting for the dependence of the order parameters on the magnetic vector potential. This dependence is crucial in multiband models, where the order parameters in different orbitals can have different complex phases. We show that accounting for the behavior of the order parameters is necessary even in systems with time-reversal symmetry and uniform pairing, contradicting previous literature~\cite{Peotta2015,Chan2022}. We derive complete equations for the mean-field superfluid weight, and show that the use of the simpler equations provided in previous literature can lead to quantitative and, in extreme cases, qualitative errors where the superfluid weight is incorrectly nonzero. Within our general mean-field framework, we study lattice models with both isolated and non-isolated flat bands. We show that, in time-reversal symmetric systems, the superfluid weight for isolated flat bands is proportional to the \textit{minimal} quantum metric, which is the quantum metric with the smallest possible trace for the considered lattice model. These conclusions in mean field theory are mirrored by exact calculations of the Cooper pair mass in attractive Hubbard models possessing a uniform pairing condition. We find two contributions to the effective mass in perturbation theory: the quantum metric and a competing non-universal term. However, we show that the space group symmetries strongly constrain the latter. If the orbitals are located at high symmetry positions such that they are pinned in location by the lattice symmetries, then this non-universal term vanishes and the quantum metric is the unique contribution to the Cooper pair mass. We propose a simple extension of the uniform pairing condition that guarantees the non-universal term vanishes. 
Based on our results, we conclude that lower bounds for the superfluid weight in terms of topological invariants such as the Chern number~\cite{Peotta2015} and Euler class~\cite{Xie2020} are valid, but the use of other bounds which depend on orbital positions~\cite{Liang2017,Herzog-Arbeitman2021} requires additional conditions, e.g.~space group symmetries. In obstructed atomic limits~\cite{Herzog-Arbeitman2021}, the superfluid weight is only bounded by real space invariants computed at the high-symmetry positions. Moreover, we discuss which results in previous literature are likely to be accurate, and which would need revisiting based on the complete formula for the superfluid weight. In order to understand the behavior of non-isolated flat bands, we also study the effect of closing the gap between the flat band and dispersive bands. Remarkably, we show that a band touching can actually be beneficial for superconductivity (see Fig.~\ref{fig.comp}). This is important, as it means that one does not need to find systems where the flat band is separated from the other bands by a large energy scale. If isolated bands were needed, trying to achieve a higher critical temperature would mean that larger band gaps were required to avoid thermal excitations to the other bands --- this could be a severe limitation especially when searching for room temperature superconductivity. Our results show that such isolation is not necessarily needed. In contrast, band touchings can enhance $T_{BKT}$ or $T_c$. We also investigate the effect of different types of band touchings, and show that the quantum geometry of the flat band alone is not sufficient to describe superconductivity in the non-isolated band limit: the type of band touching matters too, and can actually be used as a design degree of freedom when optimizing the critical temperature. 
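The weak-coupling contrast underlying Fig.~\ref{fig.comp} can be made concrete with a toy comparison of the two scalings quoted in the introduction (a sketch with arbitrary, hypothetical prefactors, chosen only to compare functional forms): a dispersive band gives $T_c \propto \exp(-1/(|U|\rho_0))$, while a flat band gives $T_c \propto |U|$.

```python
import math

# Toy comparison of critical-temperature scalings (prefactors are arbitrary
# illustrative constants, not fitted to any material or to the paper's data).
def tc_dispersive(U, rho0=0.3):
    # BCS-like exponential suppression at weak coupling
    return math.exp(-1.0 / (abs(U) * rho0))

def tc_flat(U, c=0.2):
    # Flat-band scaling: linear in the interaction strength
    return c * abs(U)

# At weak coupling the flat-band scaling wins by orders of magnitude.
assert tc_flat(0.3) > 100 * tc_dispersive(0.3)
```

This is the qualitative behavior visible in the inset of Fig.~\ref{fig.comp}: the exponential factor collapses much faster than the linear one as $|U|\to 0$.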
We complement our numerical results with an analytic mean-field treatment of interacting bipartite crystalline lattices, yielding relations between the pairing strengths on different sublattices. Overall, our results are promising for harnessing the potential of flat bands in increasing the critical temperature of superconductivity. This potential is illustrated by Fig.~\ref{fig.comp}. For large interactions, dispersive band structures are often as good or better than flat band systems. In contrast, for weak interactions (typically $|U| < t$), flat bands provide a clear, even radical, advantage. This makes it possible to utilize a wider class of systems and materials for high temperature superconductivity since interactions do not need to be strong. The potential of flat bands to offer a high critical temperature even for weak interactions may also help avoid bipolarons and charge density waves competing with superconductivity at large interactions~\cite{Esterlis2018,Esterlis20182}. This article is structured as follows. In Sec.~\ref{sec.sfw}, we derive the complete equations for the superfluid weight and show how they differ from the results obtained in previous literature. We then revisit superconductivity in isolated flat bands in Sec.~\ref{sec.qm}, and show that the superfluid weight is related to the minimal quantum metric. Sec.~\ref{sec.example} illustrates our general results within the specific example of the Lieb lattice. In Sec.~\ref{sec.Jonah}, we show how the general conclusions given by the superfluid weight calculations can be obtained by derivation of the many-body effective Cooper pair mass in a flat band, and how symmetries can guarantee that the quantum metric is minimal. In Sec.~\ref{sec.ni}, we study non-isolated flat bands, and show that the highest $T_{BKT}$ can occur when the flat band is \textit{not} isolated from the dispersive bands. The validity of results given in previous literature is discussed in Sec.~\ref{sec.prev}.
Finally, we summarize our conclusions in Sec.~\ref{sec.conc}. \section{Superfluid weight in multiband mean-field models} \label{sec.sfw} \subsection{The model Hamiltonian} We study the Hubbard model on a multiband lattice \begin{align} H &= \sum_{\sigma}\sum_{i\alpha,j\beta} (t_{i\alpha,j\beta}^{\sigma} - \mu \delta_{i\alpha,j\beta})\crea{c_{i\alpha\sigma}}\ani{c_{j\beta\sigma}} \nonumber \\ &+ U\sum_{i\alpha} \crea{c_{i\alpha\uparrow}} \crea{c_{i\alpha\downarrow}} \ani{c_{i\alpha\downarrow}} \ani{c_{i\alpha\uparrow}}, \label{eq.ham} \end{align} where $i,j$ label the unit cells and $\alpha,\beta$ the orbitals in a unit cell. The hopping amplitude from site $j\beta$ to $i\alpha$ for spin $\sigma$ is $t_{i\alpha,j\beta}^\sigma$ and $U<0$ is the on-site interaction strength. The particle number is tuned by the chemical potential $\mu$. We use the usual mean-field approximation $U\crea{c_{i\alpha\uparrow}}\crea{c_{i\alpha\downarrow}}\ani{c_{i\alpha\downarrow}}\ani{c_{i\alpha\uparrow}}\approx \Delta_{i\alpha}\crea{c_{i\alpha\uparrow}}\crea{c_{i\alpha\downarrow}} + {\rm H.c.} - |\Delta_{i\alpha}|^2/U$, where $\Delta_{i\alpha} = U\ave{c_{i\alpha\downarrow} c_{i\alpha\uparrow}}$. We will focus on solutions where the order parameter is uniform on each orbital, $\Delta_{i\alpha}=\Delta_{\alpha}$, i.e. it does not depend on the unit cell index $i$ but can depend on the orbital index $\alpha$. \subsection{Superfluid weight from the free energy} \label{sec.fromfreeenergy} The superfluid weight can be defined as the change in free energy $F=\Omega + \mu N$, where $\Omega$ is the grand canonical potential and $N$ is the particle number, due to a change in the phase of the order parameters $\Delta_{i\alpha}\to \Delta_{i\alpha} e^{2i\vec{q}\cdot \vec{r}_{i\alpha}}$~\cite{Taylor2006,Peotta2015}, with $\vec{r_{i\alpha}}$ being the position of the site $i\alpha$: \begin{equation} [D_s]_{ij} = \frac{1}{V}\frac{{\rm d}^2F}{{\rm d}q_i{\rm d}q_j}\bigg|_{\vec{q}=\vec{0}}. 
\label{eq.sfw} \end{equation} Here, $V$ is the volume of the system. The derivative is taken at a constant temperature, but the other thermodynamic variables are allowed to vary with $\vec{q}$. Introducing the phase $e^{2i\vec{q}\cdot \vec{r}_{i\alpha}}$ into Eq.~\eqref{eq.ham}, the Fourier transformed mean-field Hamiltonian reads \begin{align} H(\vec{q}) &= \sum_{\vec{k}} \vec{\crea{c_{\vec{k}}}} H_{\rm BdG}(\vec{k}) \vec{\ani{c_{\vec{k}}}} \nonumber \\ &+ \sum_{\vec{k}}{\rm Tr} H_{\vec{k}}^{\downarrow} - nN_c\mu - N_c\sum_{\alpha} \frac{|\Delta_{\alpha}(\vec{q})|^2}{U}, \\ H_{\rm BdG}(\vec{k}) &= \begin{pmatrix} H_{\vec{q}+\vec{k}}^{\uparrow} - \mu \vec{1} & \vec{\Delta} \\ \vec{\Delta}^{\dag} & - (H_{\vec{q}-\vec{k}}^{\downarrow})^* + \mu \vec{1} \end{pmatrix}, \end{align} where $\vec{\ani{c_{\vec{k}}}} = ( \ani{c_{\vec{q}+\vec{k},\alpha=1,\uparrow}},\ldots, \ani{c_{\vec{q}+\vec{k},\alpha=n,\uparrow}}$, $ \crea{c_{\vec{q}-\vec{k},\alpha=1,\downarrow}},\ldots,\crea{c_{\vec{q}-\vec{k},\alpha=n,\downarrow}})^{\rm T}$ and $n$ is the number of bands. The number of unit cells is denoted by $N_c$, and $\vec{\Delta}={\rm diag}(\Delta_{1},\ldots,\Delta_n)$. The matrix $H_{\vec{k}}^{\sigma}$ is the Fourier transformation of the kinetic Hamiltonian for spin $\sigma$, $[H_{\vec{k}}^{\sigma}]_{\alpha\beta} = \sum_{i} t_{i\alpha,0\beta}^{\sigma} e^{-i\vec{k}\cdot (\vec{R_i}+\vec{\delta_{\alpha}}-\vec{\delta_{\beta}})}$, where $\vec{R_i}$ is the position of the $i$th unit cell and $\vec{\delta_{\alpha}} = \vec{r_{i\alpha}}-\vec{R_i}$. Here we have used the Fourier transformation \begin{equation} c_{\vec{k}\alpha\sigma} = \frac{1}{\sqrt{N_c}} \sum_{i} e^{-i\vec{k}\cdot(\vec{R_i}+\vec{\delta_{\alpha}})} c_{i\alpha\sigma}, \label{eq.fourier} \end{equation} which takes the intra-unit cell positions of the orbitals into account. 
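The role of the intra-unit-cell positions in this definition of $H_{\vec{k}}^{\sigma}$ can be checked directly: since the $\vec{\delta_{\alpha}}$ enter only through the phase $e^{-i\vec{k}\cdot(\vec{\delta_{\alpha}}-\vec{\delta_{\beta}})}$, changing them amounts to a $\vec{k}$-dependent unitary rotation, which cannot change the band energies. A minimal numerical sketch (a hypothetical two-orbital 1D chain with intracell hopping $v$ and intercell hopping $w$, not a model from this paper):

```python
import cmath

# [H_k]_{12} = sum_i t_{i} e^{-i k (R_i + d1 - d2)}, with t = v at R = 0 and
# t = w at R = 1, for a hypothetical two-orbital chain. The orbital positions
# d1, d2 only multiply the whole matrix element by a pure phase.
def bloch_offdiag(k, v=1.0, w=0.6, d1=0.0, d2=0.0):
    return (v + w * cmath.exp(-1j * k)) * cmath.exp(-1j * k * (d1 - d2))

for k in (0.3, 1.1, 2.5):
    e_zero = abs(bloch_offdiag(k))            # all delta_alpha set to zero
    e_shift = abs(bloch_offdiag(k, d2=0.5))   # orbital 2 moved inside the cell
    # For this chiral 2x2 Bloch Hamiltonian the bands are +/- |[H_k]_{12}|:
    assert abs(e_zero - e_shift) < 1e-12
```

The spectrum is unchanged, but the Bloch eigenvectors acquire $\vec{k}$-dependent phases, which is exactly what makes geometry-dependent quantities subtle.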
Another convention that is often used is \begin{equation} c_{\vec{k}\alpha\sigma} = \frac{1}{\sqrt{N_c}} \sum_{i} e^{-i\vec{k}\cdot\vec{R_i}} c_{i\alpha\sigma}, \label{eq.fourier_no} \end{equation} which corresponds to setting all $\vec{\delta_{\alpha}}=\vec{0}$. This latter convention has the advantage of making the Hamiltonian explicitly periodic in reciprocal space. However, as we will show, the choice of the orbital positions plays an essential role in relating the superfluid weight to quantum geometry. The equilibrium state minimizes the grand canonical potential \begin{align} \Omega &= -\frac{1}{\beta} \sum_{\vec{k}}\sum_{i} \ln [1+\exp(-\beta E_{\vec{k},i})] \nonumber \\ &+\sum_{\vec{k}}{\rm Tr}H_{\vec{k}}^{\downarrow} - nN_{c}\mu - N_c\sum_{\alpha} \frac{|\Delta_{\alpha}|^2}{U} , \end{align} where $E_{\vec{k},i}$ are the eigenvalues of the Bogoliubov-de Gennes Hamiltonian $H_{\rm BdG}({\vec k})$. The order parameters for a given chemical potential and temperature can thus be solved for by minimizing $\Omega$, or equivalently by solving the gap equation. The particle number is controlled by the chemical potential $\mu$, and fulfills the equation $N = -\partial \Omega/\partial \mu$. Equation~\eqref{eq.sfw} can be cumbersome to use, as it requires knowledge of the state at nonzero $\vec{q}$. In previous literature~\cite{Peotta2015}, it has been shown that this equation simplifies to $[D_s]_{ij} = (1/V) \partial^2 \Omega/\partial q_i \partial q_j\big|_{\vec{q}=\vec{0}}$ for systems with time-reversal symmetry (TRS) --- and assuming that the order parameter is always real, even for nonzero $\vec{q}$. The partial derivative is taken with all variables but $\vec{q}$ kept constant, so only knowledge of the ground state is required (e.g., only $\Delta(\vec{q}=0)$ is needed, not $\Delta(\vec{q}\neq 0)$). This simplified equation has been used, for example, to show that the superfluid weight of isolated flat bands is proportional to the quantum metric.
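The equivalence of the two Fourier conventions at the level of the Bloch spectrum can be checked numerically. The sketch below uses a hypothetical 1D two-orbital chain (`t1`, `t2`, and `delta_B` are illustration values) and builds the Bloch Hamiltonian with and without the intra-unit cell positions; the eigenvalues agree at every $k$, since the two conventions differ only by a $k$-dependent diagonal unitary:

```python
import numpy as np

# Hypothetical 1D two-orbital chain: orbital A at the unit cell origin,
# orbital B at position delta_B inside the cell; t1 is the intra-cell A-B
# hopping, t2 the inter-cell hopping. All values are illustration choices.
t1, t2, delta_B = 1.0, 0.7, 0.3

def h_k(k, use_positions):
    if use_positions:   # convention of Eq. (eq.fourier): phases carry delta_B
        off = t1*np.exp(-1j*k*delta_B) + t2*np.exp(1j*k*(1.0 - delta_B))
    else:               # convention of Eq. (eq.fourier_no): delta_B -> 0
        off = t1 + t2*np.exp(1j*k)
    return np.array([[0.0, off], [np.conj(off), 0.0]])

# The two conventions differ by the diagonal unitary
# V(k) = diag(1, exp(i k delta_B)), so the Bloch spectra coincide at every k.
for k in np.linspace(-np.pi, np.pi, 11):
    assert np.allclose(np.linalg.eigvalsh(h_k(k, True)),
                       np.linalg.eigvalsh(h_k(k, False)))
```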
A salient problem with that result, however, is that the quantum metric depends on the positions of the orbitals $\{\vec{\delta_{\alpha}}\}$ through \Eq{eq.fourier}. On the other hand, the superfluid weight is invariant under changes of $\{\vec{\delta_{\alpha}}\}$: this is immediately clear from the definition~\eqref{eq.sfw}, given that the free energy does not depend on the intra-unit cell positions (when the hopping amplitudes $t_{i\alpha , j\beta}$ are held fixed). Using the terminology introduced in Ref.~[\onlinecite{Simon2020}], the superfluid weight is geometry-independent while the quantum metric is geometry-dependent. The source of this discrepancy is the assumption that all order parameters are real even at nonzero $\vec{q}$. For a single-band model, this assumption can always be made, because of the freedom in the phase of the order parameter. However, for a multiband model, the order parameters can have orbital-dependent phases, and cannot, in general, be made simultaneously real by changing only the overall phase. To understand how the problem arises, let us express ${\rm d}^2F/{\rm d}q_i{\rm d}q_j$ in terms of partial derivatives of the grand canonical potential. For all the equations, we will fix the overall phase of the order parameters by imposing reality and positivity on a nonzero order parameter for one of the orbitals; we choose it to be $\Delta_1(\vec{q})$. For simplicity, we will focus here on a system with time-reversal symmetry, which implies that $\mu(\vec{q})=\mu(-\vec{q})$ and $\Delta_{\alpha}(\vec{q}) = \Delta_{\alpha}^*(-\vec{q})$~\cite{Peotta2015}. Hence at $\vec{q}=\vec{0}$, the derivatives of the order parameters are purely imaginary and ${\rm d}\mu/{\rm d}q_i\big|_{\vec{q}=\vec{0}}=0$. The general case without TRS is treated in Appendix~\ref{app.sfw}.
Using the chain rule, the first derivative of the grand potential may be written as \begin{equation} \frac{{\rm d}\Omega}{{\rm d}q_i} = \frac{\partial \Omega}{\partial q_i} + \frac{\partial \Omega}{\partial \mu} \frac{{\rm d}\mu}{{\rm d}q_i} + \sum_{\alpha} \frac{\partial \Omega}{\partial \Delta_{\alpha}^{I}} \frac{{\rm d}\Delta_{\alpha}^{I}}{{\rm d}q_i} + \sum_{\alpha} \frac{\partial \Omega}{\partial \Delta_{\alpha}^{R}} \frac{{\rm d}\Delta_{\alpha}^{R}}{{\rm d}q_i}, \label{eq.step_chain} \end{equation} where we have used the notation $\Delta_{\alpha}^{I}={\rm Im}(\Delta_{\alpha})$ and $\Delta_{\alpha}^{R}={\rm Re}(\Delta_{\alpha})$. Taking the total derivative of Eq.~\eqref{eq.step_chain} with respect to $q_j$ and setting $\vec{q}=\vec{0}$ yields \begin{align} \frac{{\rm d}^2F}{{\rm d}q_i{\rm d}q_j} \bigg|_{\vec{q}=\vec{0}} &= \frac{{\rm d}^2\Omega}{{\rm d}q_i{\rm d}q_j} \bigg|_{\vec{q}=\vec{0}} - \frac{\partial \Omega}{\partial \mu}\frac{{\rm d}^2\mu}{{\rm d}q_i{\rm d}q_j}\bigg|_{\vec{q}=\vec{0}} \\ &= \frac{{\rm d}}{{\rm d}q_j} \frac{\partial \Omega}{\partial q_i}\bigg|_{\vec{q}=\vec{0}} \\ &= \frac{\partial ^2\Omega}{\partial q_i\partial q_j}\bigg|_{\vec{q}=\vec{0}} + \sum_{\alpha}\frac{\partial^2\Omega}{\partial \Delta_{\alpha}^{I}\partial q_i}\frac{{\rm d}\Delta_{\alpha}^{\rm I}}{{\rm d}q_j}\bigg|_{\vec{q}=\vec{0}}. \label{eq.not_final} \end{align} We have used that $\partial \Omega/\partial \Delta_{\alpha} = 0$ at all $\vec{q}$, which is equivalent to the gap equation, and that the total particle number $N=-\partial\Omega/\partial \mu$ is constant. Due to TRS, the derivatives of the order parameters are purely imaginary at $\vec{q}=\vec{0}$ and ${\rm d}\mu/{\rm d}q_i\big|_{\vec{q}=\vec{0}}=0$, which is why only the total derivatives of $\Delta_{\alpha}^I$ appear on the third line.
Since $\partial \Omega/\partial \Delta_{\alpha}=0$ holds at all $\vec{q}$, we have \begin{equation} 0 = \frac{{\rm d}}{{\rm d}q_i} \frac{\partial\Omega}{\partial \Delta_{\alpha}^I}\bigg|_{\vec{q}=\vec{0}} = \frac{\partial^2\Omega}{\partial q_i\partial \Delta_{\alpha}^I}\bigg|_{\vec{q}=\vec{0}} + \sum_{\beta} \frac{\partial^2\Omega}{\partial \Delta_{\alpha}^I\partial \Delta_{\beta}^I}\frac{{\rm d}\Delta_{\beta}^I}{{\rm d}q_i}\bigg|_{\vec{q}=\vec{0}} . \label{eq.sys} \end{equation} Using this identity, we can write Eq.~\eqref{eq.not_final} in the more concise form \begin{align} \frac{{\rm d}^2F}{{\rm d}q_i{\rm d}q_j}\bigg|_{\vec{q}=\vec{0}} &= \frac{\partial^2\Omega}{\partial q_i\partial q_j}\bigg|_{\vec{q}=\vec{0}} - ({\rm d}_i\Delta^I)^{\rm T} \partial_{\Delta^{I}}^2\Omega ({\rm d}_j\Delta^I)\big|_{\vec{q}=\vec{0}}, \label{eq.final}\\ {\rm d}_i\Delta^I &= \left( \frac{{\rm d}\Delta_2^I}{{\rm d}q_i}, \ldots, \frac{{\rm d}\Delta_{n}^I}{{\rm d} q_i} \right)^{\rm T},\\ \partial_{\Delta^{I}}^2\Omega &= \begin{pmatrix} \frac{\partial^2\Omega}{\partial \Delta_{2}^I\partial \Delta_{2}^I} & \ldots & \frac{\partial^2\Omega}{\partial \Delta_{2}^I\partial \Delta_{n}^I} \\ \vdots & \ddots & \vdots \\ \frac{\partial^2\Omega}{\partial \Delta_{n}^I\partial \Delta_2^I} & \ldots & \frac{\partial^2\Omega}{\partial \Delta_n^I\partial \Delta_n^I} \end{pmatrix}. \end{align} The partial derivatives in $\partial_{\Delta^I}^2\Omega$ are taken by varying the involved order parameter while keeping all other variables constant. The order parameter $\Delta_1$ does not appear in ${\rm d}_i\Delta^I$ and $\partial_{\Delta^I}^2\Omega$ because $\Delta_1$ is always taken to be real and positive. If the overall phase of the order parameters is not fixed, an additional row and column containing the derivatives involving $\Delta_1$ need to be added to $\partial_{\Delta^I}^2\Omega$.
Clearly, ${\rm d}^2F/{\rm d}q_i{\rm d}q_j|_{\vec{q}=\vec{0}} = \partial^2\Omega/\partial q_i \partial q_j|_{\vec{q}=\vec{0}}$ when ${\rm d}\Delta_{\alpha}^{I}/{\rm d}q_i|_{\vec{q}=\vec{0}}=0$. This holds if the order parameters are real also at nonzero $\vec{q}$. It has been argued in previous literature that the simplified equation $[D_s]_{ij}=(1/V)\partial^2\Omega/\partial q_i\partial q_j\big|_{\vec{q}=\vec{0}}$ can be used in systems with TRS, as in such systems the order parameters can be made real with a transformation of the form $c_{i\alpha}\to c_{i\alpha} e^{i\theta_{i\alpha}(\vec{q})}$~\cite{Peotta2015}. Since this transformation has no effect on the eigenvalues of $H_{\rm BdG}$ or on the absolute values of the order parameters, the free energy remains unchanged, and there is no effect on the superfluid weight. However, the derivatives of the order parameters (the rightmost term in Eq.~\eqref{eq.final}), and $\partial^2 \Omega/\partial q_i\partial q_j$, are \textit{not} invariant under this transformation; they both change in such a way that the left-hand side of Eq.~\eqref{eq.final} remains unchanged. Therefore, when using $[D_s]_{ij}=(1/V)\partial^2\Omega/\partial q_i\partial q_j|_{\vec{q}=\vec{0}}$, it is crucial to compute the partial derivative \textit{after} the transformation $c_{i\alpha}\to c_{i\alpha} e^{i\theta_{i\alpha}(\vec{q})}$ is performed. In practice, one cannot assume that this simplified equation holds without knowledge of the behavior of the order parameters at nonzero $\vec{q}$ even in systems with TRS. This fact was correctly pointed out in Ref.~[\onlinecite{Chan2022}]. However, it was stated therein that the additional terms are zero when the orbitals are equivalent.
This is not generally the case: the introduction of the vector $\vec{q}$ into the system typically breaks the very symmetry of the lattice which guaranteed equal pairing at the orbitals, meaning that the order parameters at nonzero $\vec{q}$ can differ by a phase even if they are equal at $\vec{q}=\vec{0}$. It is straightforward to show that the additional terms in Eq.~\eqref{eq.final} are never positive for $i=j$. The matrix $\partial_{\Delta^I}^2\Omega$ is the Hessian matrix of the grand canonical potential, and since the order parameters are a minimum of $\Omega$, it is positive semidefinite. It follows immediately that $({\rm d}_i \Delta^I)^{\rm T} \partial_{\Delta^I}^2\Omega ({\rm d}_i\Delta^I)\geq 0$, which means that $(1/V)\partial^2\Omega/\partial q_i^2|_{\vec{q}=\vec{0}} \geq [D_s]_{ii}$. This implies that $\partial^2\Omega/\partial q_i\partial q_j$ can predict values that are much larger than the correct superfluid weight, even indicating a nonzero superfluid weight when it in fact vanishes. The derivatives ${\rm d}\Delta_{\alpha}/{\rm d}q_i$ can be computationally expensive to evaluate, as they would seem to require solving the gap equation at nonzero $\vec{q}$. Remarkably, however, they can be obtained from the ground state at $\vec{q}=\vec{0}$ alone, by using the system of equations given in Eq.~\eqref{eq.sys}, which can be written in matrix form as \begin{align} &(\partial_{\Delta^I}^2\Omega){\rm d}_i\Delta^{I} = -\vec{b_i}, \label{eq.lin_sys_del}\\ &\vec{b_i} = \left( \frac{\partial^2\Omega}{\partial q_i\partial \Delta_{2}^{I}},\ldots, \frac{\partial^2\Omega}{\partial q_i\partial \Delta_{n}^{I}} \right)^{\rm T}. \end{align} The derivatives of the order parameters are thus ${\rm d}_i\Delta^I = -(\partial_{\Delta^I}^2\Omega)^{-1}\vec{b_i}$, which involves only partial derivatives of $\Omega$ and does not require knowledge of the state at nonzero $\vec{q}$.
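The structure of Eqs.~\eqref{eq.final} and~\eqref{eq.lin_sys_del} is the standard second-derivative formula for a constrained minimum, which can be illustrated with a quadratic toy function standing in for $\Omega$ (all coefficients below are arbitrary test values, not physical parameters):

```python
import numpy as np

# Quadratic toy function standing in for the grand potential Omega(q, Delta):
# the free energy is F(q) = min_Delta Omega(q, Delta), and Eq. (eq.final)
# gives d2F/dq2 = d2Omega/dq2 - b^T H^{-1} b, with H the Hessian in the
# order parameters and b the mixed derivatives. All coefficients are
# arbitrary test values.
A = np.array([[2.0, 0.3], [0.3, 1.5]])   # Hessian w.r.t. the order parameters
b = np.array([0.4, -0.2])                # mixed derivatives d2Omega/(dq dDelta)
c = 3.0                                  # d2Omega/dq2 at fixed Delta

def F(q):
    # Minimize Omega over Delta at fixed q: stationarity gives A d + q b = 0,
    # the analog of the linear system in Eq. (eq.lin_sys_del).
    d = np.linalg.solve(A, -q*b)
    return 0.5*c*q**2 + q*b @ d + 0.5*d @ A @ d

h = 1e-4
d2F_numeric = (F(h) - 2*F(0.0) + F(-h))/h**2
d2F_formula = c - b @ np.linalg.solve(A, b)   # Eq. (eq.final) for the toy
assert np.isclose(d2F_numeric, d2F_formula, atol=1e-6)
```

The subtracted correction $b^{\rm T}H^{-1}b$ is non-negative whenever the Hessian is positive semidefinite, mirroring the inequality $(1/V)\partial^2\Omega/\partial q_i^2 \geq [D_s]_{ii}$ above.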
Note that if we had not fixed the overall phase of the order parameters by choosing $\Delta_1$ real and positive, the gap equation would have an infinite number of solutions due to the freedom in this phase. In this case, the Hessian matrix would contain terms involving partial derivatives with respect to $\Delta_1^I$ and would not be invertible. \subsection{Superfluid weight from linear response: the conventional and geometric contributions} In previous literature, the superfluid weight has been split into so-called conventional and geometric parts~\cite{Peotta2015,Liang2017}. The conventional part is the only component present in single-band models, and is related to the derivatives of the band structure. It vanishes in the flat-band limit. The geometric part is a purely multiband component which can be nonzero even on flat bands. Expressions for these components have been derived from linear response theory~\cite{Liang2017}, but without accounting for the dependence of the order parameters on the vector potential.
We compute the full mean-field superfluid weight from linear response theory in Appendix~\ref{app.lin}, and obtain \begin{widetext} \begin{equation} [D_s]_{ij} = \frac{1}{V} \sum_{\vec{k},ab}\frac{n_F(E_a)-n_F(E_b)}{E_b-E_a}\big[ \bra{\psi_a}\partial_{i}\widetilde{H}_{\vec{k}} \ket{\psi_b} \bra{\psi_b} \partial_{j}\widetilde{H}_{\vec{k}} \ket{\psi_a} -\bra{\psi_a}(\partial_{i}\widetilde{H}_{\vec{k}}\gamma^z+\delta_{i}\Delta) \ket{\psi_b} \bra{\psi_b} (\partial_{j}\widetilde{H}_{\vec{k}}\gamma^z +\delta_{j}\Delta) \ket{\psi_a} \big], \label{eq.linrespresult} \end{equation} \end{widetext} where \begin{align} \partial_{i}\widetilde{H}_{\vec{k}} &= \begin{pmatrix} \frac{\partial H_{\vec{k}'}^{\uparrow}}{\partial k_{i}'}\bigg|_{\vec{k'}=\vec{k}} & 0 \\ 0 & \frac{\partial (H_{\vec{k}'}^{\downarrow})^*}{\partial k_{i}'}\bigg|_{\vec{k'}=-\vec{k}} \end{pmatrix}, \nonumber \\ \delta_{i}\Delta &= \begin{pmatrix} 0 & \frac{{\rm d}\vec{\Delta}}{{\rm d}q_{i}} \\ \frac{{\rm d}\vec{\Delta}^{\dag}}{{\rm d}q_{i}} & 0 \end{pmatrix}. \end{align} Here, $\gamma^z = \sigma_z\otimes \mathbf{1}_{n\times n}$, where $\sigma_i$ are Pauli matrices and $\mathbf{1}_{n\times n}$ is the $n\times n$ identity matrix. The eigenvalues and eigenvectors of $H_{\rm BdG}$ are $E_a$ and $\ket{\psi_a}$ respectively, and $n_{F}(E)$ is the Fermi-Dirac distribution at $E$. The prefactor in Eq.~\eqref{eq.linrespresult} should be understood as $-\partial n_F(E)/\partial E$ when $E_a=E_b$. This expression differs from the one given in Ref.~[\onlinecite{Liang2017}] by the addition of $\delta_i\Delta$ in the second term on the RHS of Eq.~\eqref{eq.linrespresult}, which accounts for the derivatives of the order parameters.
To separate the conventional and geometric contributions, we write the eigenvectors in terms of the Bloch functions $\ket{m_{\vec{k}}}_{\sigma}$: $\ket{\psi_a}=\sum_{m=1}^{n} (w_{+,am} \ket{+}\otimes\ket{m_{\vec{k}}}_{\uparrow}+w_{-,am}\ket{-}\otimes\ket{m^*_{-\vec{k}}}_{\downarrow})$, where $\ket{m_{\vec{k}}}_{\uparrow}$ is the eigenvector of $H_{\vec{k}}^{\uparrow}$ with eigenvalue $\epsilon_{\uparrow,m,\vec{k}}$, $\ket{m^*_{-\vec{k}}}_{\downarrow}$ is the eigenvector of $(H_{-\vec{k}}^{\downarrow})^*$ with eigenvalue $\epsilon_{\downarrow,m,-\vec{k}}$, and $\ket{\pm}$ are the eigenvectors of $\sigma_z$ with eigenvalues $\pm 1$. Then \begin{align} [D_{s,{\rm conv}}]_{\mu\nu} = \sum_{\vec{k}}\sum_{mn} C_{nn}^{mm} [j_{\mu}^{\uparrow}&(\vec{k})]_{mm} [j_{\nu}^{\downarrow}(\vec{k})]_{nn}, \nonumber \\ C_{pq}^{mn} = 4 \sum_{ab} \frac{n_F(E_a)-n_F(E_b)}{E_b-E_a}& w_{+,am}^*w_{+,bn} w_{-,bp}^* w_{-,aq}, \nonumber \\ [j_{\mu}^{\sigma}(\vec{k})]_{mn} = \phantom{a}_{\sigma}\bra{m_{\vec{k}}}& \partial_{\mu} H_{\vec{k}}^{\sigma} \ket{n_{\vec{k}}}_{\sigma}, \end{align} where $\partial_{\mu} = \partial/\partial k_{\mu}$. The geometric contribution is $D_{s,{\rm geom}} = D_{s} - D_{s,{\rm conv}}$. The expression for the conventional contribution matches the one given in Ref.~[\onlinecite{Liang2017}], but the geometric contribution contains terms arising from the derivatives of the imaginary components of the order parameters. All the new additional terms in Eq.~\eqref{eq.final} (i.e., other than the partial derivative of the grand potential) are thus added to the geometric contribution, which is reasonable, as they can only be nonzero in multiband models. This split into conventional and geometric contributions is independent of the choice of orbital positions, and as we show below, the geometric part is related to the minimal quantum metric in isolated flat bands.
These definitions are valid in a system with TRS, where the derivatives of the order parameters can be made purely imaginary at $\vec{q}=0$~\cite{Peotta2015}. In a system without TRS, there are additional terms arising from the derivatives of the real parts of the order parameters which can be nonzero even in a single-band system. The superfluid weights derived from the free energy, Eq.~\eqref{eq.final}, and from linear response, Eq.~\eqref{eq.linrespresult}, are equal, as shown in Appendix~\ref{app.equivalence}. We have verified numerically that both methods yield the same results in all examples studied in this article. \section{Quantum metric and isolated flat bands} \label{sec.qm} The quantum metric of a set of bands $\mathcal{S}$ is the real part of the quantum geometric tensor \begin{equation} \mathcal{B}_{ij}(\vec{k}) = 2 \text{Tr }P(\vec{k}) \partial_i P(\vec{k})\partial_j P(\vec{k}), \label{eq.QGtensor} \end{equation} where $P(\vec{k}) = \sum_{m\in \mathcal{S}} \ket{m_{\vec{k}}}\bra{m_{\vec{k}}}$ is the projector onto the Bloch states of the bands at $\vec{k}$. The quantum metric has been previously related to the superfluid weight, most prominently in the limit of isolated flat bands with TRS and uniform pairing, i.e. $\Delta_{\alpha}=\Delta$ in all orbitals where $\Delta_{\alpha}\neq 0$~\cite{Peotta2015,Tovmasyan2016,Liang2017}. In such systems, the superfluid weight is given by \begin{align} \label{eq:DsQM} [D_{s}]_{ij} &= \frac{4f(1-f)}{(2\pi)^{D-1}}|U|n_{\phi}\mathcal{M}_{ij},\\ \mathcal{M}_{ij} &= \frac{1}{2\pi} \int_{\rm B.Z.} {\rm d}^2\vec{k} \, {\rm Re }(\mathcal{B}_{ij}(\vec{k})). \end{align} Here $f$ is the filling fraction of the band, $\mathcal{M}_{ij}$ is the quantum metric of the isolated flat band, $n_{\phi}^{-1}$ is the number of orbitals where pairing is nonzero, and $D$ is the dimension of the system.
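As a concrete illustration of Eq.~\eqref{eq.QGtensor}, the following sketch evaluates the quantum geometric tensor of the lowest band of a hypothetical gapped 1D two-band model by finite differences of the band projector (the model and step sizes are illustration choices; projectors are gauge-invariant, so no phase fixing is needed):

```python
import numpy as np

# Finite-difference evaluation of the quantum geometric tensor,
# B(k) = 2 Tr[P dP dP], for the lowest band of a hypothetical gapped 1D
# two-band model written as d(k) . sigma.
sx = np.array([[0, 1], [1, 0]], complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], complex)

def projector(k):
    d = np.array([1.0 + 0.5*np.cos(k), 0.5*np.sin(k), 0.3])
    ham = d[0]*sx + d[1]*sy + d[2]*sz
    w, v = np.linalg.eigh(ham)
    return np.outer(v[:, 0], v[:, 0].conj())   # lowest band

def qgt(k, dk=1e-5):
    dP = (projector(k + dk) - projector(k - dk))/(2*dk)
    P = projector(k)
    return 2*np.trace(P @ dP @ dP)

# The diagonal component is real and non-negative at every k: it is the metric.
for k in np.linspace(-np.pi, np.pi, 9):
    assert qgt(k).real >= -1e-9 and abs(qgt(k).imag) < 1e-9
```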
This result is derived from mean-field theory using the equality $[D_{s}]_{ij} =(1/V) \partial^2\Omega/\partial q_i\partial q_j\big|_{\vec{q}=\vec{0}}$, or the equivalent linear response equations~\cite{Peotta2015,Liang2017,Julku2016}. However, as we have shown in Section~\ref{sec.sfw}, this equation is only accurate in special cases, even in systems with TRS and uniform pairing. We will show here that, nevertheless, it is actually possible to derive a general connection between the superfluid weight and the quantum geometry, but the relevant quantity turns out to be the {\it minimal} quantum metric, i.e. the quantum metric with the lowest possible trace over all possible orbital positions. As stated in Sec.~\ref{sec.fromfreeenergy}, $\partial^2\Omega/(\partial q_i)^2\big|_{\vec{q}=\vec{0}} \geq {\rm d}^2F/({\rm d}q_i)^2\big|_{\vec{q}=\vec{0}}$ in the presence of TRS. Without TRS, this inequality may not be true when ${\rm d}\mu/{\rm d}q_i\big|_{\vec{q}=\vec{0}}\neq 0$ (see Sec.~\ref{sec.no_trs}). When the inequality is saturated, the quantum metric is directly related to the superfluid weight. Otherwise, it gives an upper bound. We will first show that in systems with TRS, there always exists a choice of orbital positions for which the inequality is saturated. The property $\Delta(\vec{q}) = \Delta^*(-\vec{q})$ implies that \begin{equation} \frac{{\rm d}\Delta_{\alpha}}{{\rm d}q_i}\bigg|_{\vec{q}=\vec{0}} = i\Delta_{\alpha} \frac{{\rm d}\theta_{\alpha}}{{\rm d}q_i}\bigg|_{\vec{q}=\vec{0}}, \label{eq.der_delta} \end{equation} where $\theta_{\alpha}$ is the phase of the order parameter $\Delta_{\alpha} = |\Delta_{\alpha}|e^{i\theta_{\alpha}}$. As before, we fix $\theta_1=0$, with $\Delta_1$ a nonzero order parameter. It follows from Eq.~\eqref{eq.der_delta} that ${\rm d}\Delta_{\alpha}/{\rm d}q_i$ can only vanish if ${\rm d}\theta_{\alpha}/{\rm d}q_i=0$, or if $\Delta_{\alpha}=0$, meaning there is no pairing in the orbital.
Let us now assume that the order parameters for a choice of intra-unit-cell positions $\{\vec{\delta_{\alpha}}\}$ are $|\Delta_{\alpha}(\vec{q})| e^{i\theta_{\alpha}(\vec{q})}$. The order parameters in the same model for another choice of positions $\{\vec{\delta_{\alpha}} + \vec{x}_\alpha\}$ are $|\Delta_{\alpha}(\vec{q})|e^{i\widetilde{\theta_{\alpha}}(\vec{q})}$, with $\widetilde{\theta_{\alpha}}(\vec{q}) = \theta_{\alpha}(\vec{q}) - 2\vec{q}\cdot \vec{x}_\alpha$ (see Appendix~\ref{app.orb_pos}). Therefore \begin{equation} \frac{{\rm d}\widetilde{\theta_{\alpha}}(\vec{q})}{{\rm d}q_i} = \frac{{\rm d}\theta_{\alpha}(\vec{q})}{{\rm d}q_i} - 2 x^i_\alpha. \end{equation} To set ${\rm d}\Delta_{\alpha}/{\rm d}q_i=0$ and guarantee that $[D_s]_{ij}=(1/V)\partial^2\Omega/\partial q_i\partial q_j\big|_{\vec{q}=\vec{0}}$, we can thus shift the orbital positions by \begin{equation} x^i_\alpha = \frac{1}{2}\frac{{\rm d} \theta_{\alpha}(\vec{q})}{{\rm d}q_i}\bigg|_{\vec{q}=\vec{0}}. \label{eq.positions} \end{equation} With the overall phase of the order parameters fixed, the order parameters are uniquely defined, and this shift is unique for all orbitals where $\Delta_{\alpha}\neq 0$. The resulting positions $\{\vec{\delta_{\alpha}}+\vec{x_{\alpha}}\}$ are independent of the particular initial choice of $\{\vec{\delta_{\alpha}}\}$ (see Appendix~\ref{app.unique}). If we had not fixed the overall phase, the positions where $[D_{s}]_{ij} =(1/V) \partial^2\Omega/\partial q_i\partial q_j\big|_{\vec{q}=\vec{0}}$ holds would be unique up to an overall translation. The quantum metric computed for this appropriate set of positions is directly related to the superfluid weight. We find precise analogs of these results in the uniform pairing Hubbard models considered in Sec.~\ref{sec.Jonah}.
We have shown that positions $\{\vec{\delta_{\alpha}} + \vec{x}_\alpha\}$ where $D_s$ is related to the quantum metric exist, but solving for them from Eq.~\eqref{eq.positions} requires knowledge of the derivative of the order parameters at some set of orbital positions $\{\vec{\delta_{\alpha}}\}$. This would still require solving the gap equation at a finite $\vec{q}$ to know which quantum metric is related to the superfluid weight. We will now show that it is possible to compute the correct quantum metric {\it without} solving the gap equation: it is the one with the smallest possible trace. As shown previously~\cite{Peotta2015}, $\partial^2\Omega/\partial q_i\partial q_j \propto \mathcal{M}_{ij}$, and $\partial^2\Omega/\partial q_i^2 \geq {\rm d}^2F/{\rm d}q_i^2$. The result obtained from the quantum metric is thus always an upper bound for the diagonal components of the superfluid weight, and this upper bound is tight for the particular choice of positions that makes the derivatives of the order parameters zero: the upper bound is thus a minimum over all possible choices of orbital positions. For an isolated flat band, \textit{the quantum metric with the smallest possible integral of its diagonal components is thus proportional to the superfluid weight}. Since all diagonal components are as small as possible, this is the quantum metric with the smallest possible trace. The relationship between the superfluid weight and the quantum metric has been used to derive lower bounds for the superfluid weight in flat band systems. Our result shows that for such a lower bound to be valid, it needs to be a lower bound for the quantum metric for any choice of the orbital positions. The validity of some lower bounds found in literature is discussed in Sec.~\ref{sec.prev}.
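The geometry dependence of the integrated quantum metric, and the existence of a minimum over orbital positions, can be seen numerically. In the sketch below (a hypothetical 1D two-band model; the normalization of the Brillouin-zone average is chosen for convenience, since only the dependence on the shift $x$ matters here), shifting orbital 2 by $x$ dresses the projector as $P_{\alpha\beta}(k)\to e^{-ik(x_\alpha-x_\beta)}P_{\alpha\beta}(k)$, and the integrated metric varies with $x$:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], complex)

def projector(k, x):
    # Lowest-band projector of a hypothetical gapped 1D two-band model;
    # shifting orbital 2 by x dresses P_{12}(k) with the phase exp(i*k*x).
    d = np.array([1.0 + 0.5*np.cos(k), 0.5*np.sin(k), 0.3])
    ham = d[0]*sx + d[1]*sy + d[2]*sz
    w, v = np.linalg.eigh(ham)
    P = np.outer(v[:, 0], v[:, 0].conj())
    V = np.diag([1.0, np.exp(1j*k*x)])
    return V.conj().T @ P @ V

def integrated_metric(x, nk=201, dk=1e-5):
    # Brillouin-zone average of the metric 2 Re Tr[P dP dP].
    total = 0.0
    for k in np.linspace(-np.pi, np.pi, nk, endpoint=False):
        dP = (projector(k + dk, x) - projector(k - dk, x))/(2*dk)
        total += (2*np.trace(projector(k, x) @ dP @ dP)).real
    return total/nk

# The integrated metric is geometry-dependent: it varies with the orbital
# position x, and its minimum over x defines the minimal quantum metric.
vals = [integrated_metric(x) for x in np.linspace(-1.0, 1.0, 9)]
assert max(vals) > min(vals) and min(vals) >= 0.0
```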
\section{Example: superfluid weight, quantum metric, and orbital positions in the Lieb lattice} \label{sec.example} To illustrate the importance of the additional terms of the superfluid weight derived in Sections~\ref{sec.sfw} and~\ref{sec.qm}, and the role of orbital positions, we study the superfluid weight in the Lieb lattice, shown in Fig.~\ref{fig.example}a). This model has time-reversal symmetry and is invariant under the interchange of the $A$ and $C$ orbitals. When $\delta=0$ and $a = \frac{1}{2}$, the Lieb lattice possesses $C_4$ rotation symmetry, inversion symmetry, and a reflection symmetry that interchanges the $A$ and $C$ orbitals, and thus belongs to the symmetry group $C_{4v}$. Setting $\delta \neq 0$ or $a \neq \frac{1}{2}$ destroys the $C_4$ and inversion symmetries, but the mirror symmetries are preserved, reducing the symmetry group to $C_s$. The flat band states reside solely on the $A$ and $C$ sites. The staggering of the hopping amplitudes is controlled by the parameter $\delta$, and introduces a band gap $E_{\rm gap} = \sqrt{8} \delta$, as shown in Fig.~\ref{fig.example}b. We employ the parameter $a$ to control the distance between the $B$ site and the $A/C$ sites in a unit cell, and take the volume of a unit cell to be $1$. We use the average inter-site hopping amplitude as our energy unit. The complete Eq.~\eqref{eq.final} yields a result that is independent of the choice of orbital positions (see Fig.~\ref{fig.example}c-e), contrary to $(1/V)\partial^2\Omega/\partial q_i\partial q_j\big|_{\vec{q}=\vec{0}}$. In the extreme case $\delta=1$, when the lattice is disconnected and clearly cannot support superconductivity, the correct superfluid weight is zero. However, using $\partial^2\Omega/\partial q_i\partial q_j\big|_{\vec{q}=\vec{0}}$ can in fact give a nonzero and quite large superfluid weight. \begin{figure} \centering \includegraphics[width=\columnwidth]{fig2.eps} \caption{(a) Lieb lattice with staggered hopping amplitudes.
The position of the orbitals in the unit cell is controlled by the parameter $a$. The typical Lieb lattice is $C_4$-symmetric corresponding to $a=\frac{1}{2}$, while $a=0$ and $a=1$ are equivalent to Fourier transformations where the positions of the orbitals are ignored with different choices of the unit cell. (b) Single-particle band structure at $\delta=0$ and $\delta=0.2$. The flat band is separated from the other bands by a band gap $E_{\rm gap} = \sqrt{8}\delta$. (c-e) Superfluid weight $\sqrt{{\rm det}(D_s)}$ in the Lieb lattice computed with (red, ``complete'') and without (the other colors) the corrections for three different choices of intra-unit cell positions.} \label{fig.example} \end{figure} At $\delta=0$, the simplified equation $[D_s]_{ij} = (1/V)\partial^2\Omega/\partial q_i\partial q_j\big|_{\vec{q}=\vec{0}}$ holds exactly when $a=\frac{1}{2}$ (see Fig.~\ref{fig.example}), which corresponds to the Lieb lattice with $C_{4v}$ symmetry when the convention given by Eq.~\eqref{eq.fourier} for the Fourier transformation is used. This is explained by the equal hopping amplitudes in all directions: the systems with $a=\frac{1}{2}-x$ and $a=\frac{1}{2}+x$ are identical up to an overall rotation, and the additional terms are thus symmetric around $a=\frac{1}{2}$, where the minimum of $\partial^2\Omega/\partial q_i\partial q_j$ occurs. Our proof in Sec.~\ref{sec.Jonah} generalizes this statement to all space groups. When $\delta$ is increased and $C_4$ symmetry is broken, the orbital positions for which the relation $[D_s]_{ij}=(1/V)\partial^2\Omega/\partial q_i\partial q_j\big|_{\vec{q}=\vec{0}}$ holds shift continuously towards $a=0$. Importantly, there is a wide parameter range where none of the choices $a=\frac{1}{2}$, $a=0$ or $a=1$ gives the correct result when the derivatives of the order parameters are ignored.
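For reference, the flat band of the Lieb lattice at $\delta=0$ can be checked numerically. In the convention of Eq.~\eqref{eq.fourier_no}, the Bloch Hamiltonian below (all hoppings set to unity for illustration) has an exactly flat zero-energy band whose states carry no weight on the $B$ orbital:

```python
import numpy as np

# Lieb-lattice Bloch Hamiltonian at delta = 0, orbital ordering (A, B, C),
# hoppings set to 1 for illustration; orbital positions ignored as in the
# convention of Eq. (eq.fourier_no).
def lieb(kx, ky):
    fx = 1 + np.exp(1j*kx)   # A-B bonds along x
    fy = 1 + np.exp(1j*ky)   # B-C bonds along y
    return np.array([[0, fx, 0],
                     [np.conj(fx), 0, fy],
                     [0, np.conj(fy), 0]])

# Away from the band-touching point (pi, pi): the middle band is exactly
# flat at zero energy, and its states have no weight on the B orbital.
for kx in np.linspace(-3.0, 3.0, 5):
    for ky in np.linspace(-3.0, 3.0, 5):
        w, v = np.linalg.eigh(lieb(kx, ky))
        flat = np.argmin(np.abs(w))
        assert abs(w[flat]) < 1e-10      # flat band at zero energy
        assert abs(v[1, flat]) < 1e-7    # no weight on the B orbital
```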
When $a=0$ or $a=1$, the position of the $A$ and $C$ orbitals is at the unit cell origin (where the $B$ orbitals are), and hence the Fourier transform of Eq.~\eqref{eq.fourier} becomes identical to the other convention, Eq.~\eqref{eq.fourier_no}. Finally, let us consider the role of the conventional and geometric parts of the superfluid weight in our example case. In an earlier study~\cite{Julku2016}, the quantum metric in the Lieb lattice was related to the superfluid weight. Note that while only the $A$ and $C$ sites have equal pairing, the order parameter on the $B$ sites vanishes in the isolated band limit, meaning the uniform pairing condition is fulfilled. As shown in Fig.~\ref{fig.qm_sf}a, the main contribution to the superfluid weight at low interactions is the geometric part, and the ratio $D_{\rm geom}/D_s$ approaches one in the isolated flat band limit. This is expected as the conventional contribution should vanish on a perfectly flat band. The prediction from the minimal quantum metric, shown in Fig.~\ref{fig.qm_sf}b, is increasingly accurate with growing $\delta$. \begin{figure} \centering \includegraphics[width=\columnwidth]{fig3.eps} \caption{(a) Superfluid weight (circles) and geometric contribution (crosses) as a function of $|U|$ at different $\delta$ in the Lieb lattice. The dotted lines indicate the predictions from the minimal quantum metric. Only $[D_s]_{xx}$ is shown as the off-diagonal components of the superfluid weight tensor are very small for all parameters. (b) $[D_{s}]_{xx}/|U|$ at low interactions obtained from a linear fit (crosses) and prediction for the slope from the minimal quantum metric.} \label{fig.qm_sf} \end{figure} \section{Cooper Pair Mass Beyond Mean Field} \label{sec.Jonah} It has been shown that the two-body problem in a flat band gives the bound pair a finite effective mass governed by quantum geometry~\cite{Torma2018,Iskin2021}.
For uniform pairing, the inverse effective mass can be approximately related to the quantum metric. Thus pairs can move while single particles cannot, meaning that the qualitative picture given by mean-field superfluid weight calculations is already apparent at the two-body level. Here we calculate the Cooper pair mass in a full many-body treatment and without a mean-field approximation. The mass is obtained from the spectrum of pair excitations of the ground state. It depends on quantum geometry and allows us to relate the proper choice of quantum metric discussed above to the symmetries of the system. We consider a family of positive semi-definite, $D$-dimensional, attractive Hubbard models first introduced in Ref.~[\onlinecite{Tovmasyan2016}], where the electron kinetic energy term has $N_f$ perfectly flat zero-energy bands whose single-particle projectors $P(\vec{k})$ (see \Eq{eq.QGtensor}) obey \begin{equation} \begin{aligned} \label{eq:upc} \int \frac{d^Dk}{(2\pi)^D} P_{\alpha \alpha}(\vec{k}) = n_\phi N_f \equiv \epsilon \end{aligned} \end{equation} for all orbitals $\alpha =1,\dots,n_\phi^{-1}$ where the pairing is nonzero. The condition~\eqref{eq:upc} leads to the pairing gaps on different orbitals being the same; it is therefore also referred to as the uniform pairing condition. We suppress the spin label, assuming that the model has time-reversal symmetry which relates the two projectors: $P_\uparrow(\vec{k})=P^*_\downarrow(-\vec{k})\equiv P(\vec{k})$. Upon projecting the many-body operators into the $N_f$ flat bands, the kinetic energy vanishes and the Hamiltonian is given by the interaction term \begin{equation} \begin{aligned} H_{U} &= -|U| \sum_{i\alpha} \bar{n}_{i\alpha,\u} \bar{n}_{i\alpha,\d} + \frac{n_\phi N_f}{2} |U| \bar{N} , \\ \end{aligned} \end{equation} where $\bar{n}_{i,\alpha ,\sigma}$ is the projected density operator in orbital $\alpha$ and spin $\sigma$, and $\bar{N}$ is the projected total density operator.
Ref.~[\onlinecite{Tovmasyan2016}] demonstrated that $H_{U}$ possesses $\eta$-pairing groundstates, that is, states with all particles paired. In forthcoming work Ref.~[\onlinecite{upcoming}], we show that the Cooper pair excitations on top of these groundstates are exactly solvable thanks to the uniform pairing condition, and we are able to calculate their effective mass exactly. The Cooper pair excitations are governed by the following single-particle Hamiltonian: \begin{equation} \begin{aligned} \label{eq:hp} h_{\alpha \beta}(\vec{q}) &= \int \frac{d^Dk}{(2\pi)^D} P_{\alpha \beta}(\vec{q}+\vec{k}) P_{\beta \alpha}(\vec{k}) \ . \\ \end{aligned} \end{equation} We denote the eigenvalues of $h(\vec{q})$ as $\epsilon_\mu(\vec{q})$, where $\mu = 0,\dots, n_{\phi}^{-1}-1$. The many-body energy of the lowest lying Cooper pair is $|U|(\epsilon - \epsilon_0(\vec{q}))$, where $\epsilon_0(\vec{q})$ is the \textit{largest} eigenvalue of $h(\vec{q})$. We now show that $\epsilon_\mu(\vec{q})$, and hence the Cooper pair spectrum, is invariant under a redefinition of the orbital locations $\vec{\delta}_\alpha \to \vec{\delta}_\alpha + \vec{x}_\alpha $ (leaving the hopping elements invariant). This must be the case physically because the choice of $\vec{x}_\alpha$ is just a convention for the Fourier transform. 
Since the redefinition means $P_{\alpha \beta}(\vec{k}) \to e^{-i \vec{k} \cdot (\vec{x}_\alpha-\vec{x}_\beta)} P_{\alpha \beta}(\vec{k})$, we see that $h_{\alpha \beta}(\vec{q})$ transforms under a redefinition of the orbitals as \begin{equation} \begin{aligned} h_{\alpha \beta}(\vec{q}) &\to e^{-i \vec{q} \cdot (\vec{x}_\alpha-\vec{x}_\beta)} \int \frac{{\rm d}^Dk}{(2\pi)^D} P_{\alpha \beta}(\vec{q}+\vec{k}) P_{\beta \alpha}(\vec{k}) \\ &= [V^\dag_{\vec{x}}(\vec{q}) h(\vec{q}) V_{\vec{x}}(\vec{q})]_{\alpha \beta} \end{aligned} \end{equation} where we defined the diagonal unitary matrix $[V_{\vec{x}}(\vec{k})]_{\alpha \beta} = e^{i \vec{k} \cdot \vec{x}_\alpha} \delta_{\alpha \beta}$. We see explicitly that, although $h(\vec{q})$ is not invariant, its spectrum is. The effective Cooper pair mass is given by \begin{equation} \begin{aligned} \null [m^{-1}]_{ij} = \left. -|U| \frac{{\rm d}^2 \epsilon_0(\vec{q})}{{\rm d}q_i{\rm d}q_j} \right|_{\vec{q}=\vec{0}} \end{aligned} \end{equation} which is computed from the spectrum of $h(\vec{q})$ and thus is manifestly invariant. Using perturbation theory, $\epsilon_0(\vec{q})$ can be easily calculated to second order in $\vec{q}$. At zeroth order $\epsilon_0(0) = \epsilon$, which corresponds to the constant eigenvector $u_0^\alpha = \sqrt{n_\phi}$. 
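The phase cancellation behind this invariance can be confirmed numerically. The sketch below (our toy check; it reuses the $\delta=0$ Lieb flat-band projector and random orbital shifts, with an arbitrary grid size) verifies that the spectrum of $h(\vec{q})$ is unchanged by the shifts, and that the largest eigenvalue at $\vec{q}=\vec{0}$ equals $\epsilon=1/2$:

```python
import numpy as np

def proj(kx, ky):
    # Flat-band projector of the delta = 0 Lieb lattice (basis A, B, C);
    # the E = 0 eigenvector is proportional to (f_y, 0, -f_x^*).
    fx, fy = 1 + np.exp(1j * kx), 1 + np.exp(1j * ky)
    u = np.array([fy, 0.0, -np.conj(fx)])
    u /= np.linalg.norm(u)
    return np.outer(u, u.conj())

def pair_h(q, x=np.zeros((3, 2)), n=40):
    # h_{ab}(q) = int d^2k/(2pi)^2 P_{ab}(q+k) P_{ba}(k); orbital shifts
    # x_a enter as P_{ab}(k) -> exp(-i k.(x_a - x_b)) P_{ab}(k)
    ks = 2 * np.pi * (np.arange(n) + 0.5) / n
    h = np.zeros((3, 3), complex)
    for kx in ks:
        for ky in ks:
            def shifted(k1, k2):
                ph = np.exp(-1j * (k1 * x[:, 0] + k2 * x[:, 1]))
                return ph[:, None] * proj(k1, k2) * np.conj(ph)[None, :]
            h += shifted(q[0] + kx, q[1] + ky) * shifted(kx, ky).T
    return h / n**2

q = np.array([0.3, -0.7])
rng = np.random.default_rng(1)
e0 = np.linalg.eigvalsh(pair_h(q))
e1 = np.linalg.eigvalsh(pair_h(q, x=rng.normal(size=(3, 2))))
print(np.max(np.abs(e0 - e1)))   # spectra coincide to machine precision
```

The phases cancel pointwise in $\vec{k}$, so the agreement is exact even on a finite grid.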
The first order correction vanishes (showing the Cooper pair is stable), and we calculate two contributions at second order in $q_i$: \begin{equation} \begin{aligned} \label{eq:expansion} \epsilon_0(\vec{q}) &= \epsilon + \sum_{\mu =1}^{n_\phi^{-1}-1} \frac{| u_\mu^\dag (\vec{q} \cdot \pmb{\nabla} h) u_0|^2}{\epsilon - \epsilon_\mu(0)} \\ &+ \frac{1}{2} q_i q_j \int \frac{d^Dk}{(2\pi)^D}\sum_{\alpha \beta} u_0^\alpha \partial_{ij} P_{\alpha \beta}(\vec{k}) P_{\beta \alpha}(\vec{k}) u_0^\beta , \end{aligned} \end{equation} noting that $\epsilon_\mu(0) < \epsilon$ are the eigenvalues of $h(0)$, so the first line is non-negative, and where $\pmb{\nabla} h$ is the gradient of $h$ evaluated at ${\vec q} = 0$. After integration by parts, the integral in the second line yields \begin{equation} \begin{aligned} \label{eq:intbyparts} &n_\phi \sum_{\alpha \beta} \int \frac{d^Dk}{(2\pi)^D} \partial_{ij} P_{\alpha \beta} P_{\beta \alpha} \\ &= -n_\phi \int \frac{d^Dk}{(2\pi)^D} \text{Tr } \partial_i P \partial_j P = - \frac{n_\phi}{(2\pi)^{D-1}} \mathcal{M}_{ij}, \end{aligned} \end{equation} which is proportional to the quantum metric integrated over the Brillouin zone, i.e.~$\mathcal{M}_{ij}$ defined in Eq.~(\ref{eq:DsQM}) (as $\text{Tr }P \{\partial_i P, \partial_j P\} = \text{Tr }\partial_i P \partial_j P$). Hence \Eq{eq:intbyparts} is negative semi-definite. It is important to note that $\pmb{\nabla} h$ is not invariant under the choice of $\vec{x}_\alpha$, transforming as \begin{equation} \begin{aligned} \pmb{\nabla} h_{\alpha\beta} \to \pmb{\nabla} h_{\alpha\beta} - i (\vec{x}_\alpha - \vec{x}_\beta) h_{\alpha \beta}(0) \ . \\ \end{aligned} \end{equation} Nevertheless, it is possible to show that, up to a choice of origin, there is a unique choice of $\vec{x}_\alpha$ where $\pmb{\nabla} h u_0 = 0$ and the quantum metric is the sole contributor to the effective mass.
Note that the $O(q^2)$ term in the first line of \Eq{eq:expansion} competes with $ -\mathcal{M}_{ij}$ in \Eq{eq:intbyparts} because it is opposite in sign. Thus the choice of $\vec{x}_\alpha$ for which only the quantum metric term survives corresponds to the orbital positions of the minimal quantum metric. A calculation using the uniform pairing condition results in an explicit form for the orbital shifts that make the quantum metric the sole contribution to the effective mass, namely \begin{equation} \begin{aligned} \label{eq:xalso} (\epsilon-h(0)) \vec{x}_\alpha &= -i[ \pmb{\nabla}h u_0]_\alpha . \ \\ \end{aligned} \end{equation} This equation has a unique solution up to the overall choice of origin because $\epsilon-h(0)$ has a single zero-eigenvalue corresponding to the uniform eigenvector $u_0$. With the orbital shifts $\vec{x}_\alpha$ given by Eq.~(\ref{eq:xalso}), the effective mass becomes \begin{equation} \begin{aligned} \label{eq:mquanummetric} [m^{-1}]_{ij} = \frac{n_\phi}{(2\pi)^{D-1}} |U| \mathcal{M}_{ij} . \end{aligned} \end{equation} Comparing this equation to \Eq{eq:DsQM}, we find exact agreement with the mean field superfluid weight up to an overall factor of $4f(1-f)$, which is the Cooper pair density. We now improve upon \Eq{eq:mquanummetric} in two ways. First, we find that $\vec{x}_\alpha$ obey the space group symmetries $g \in G$ of the Hamiltonian when the symmetric choice of Fourier convention (\Eq{eq.fourier}) is used. In other words, when the symmetry-preserving positions of the orbitals are used, their deviations $\vec{x}_\alpha$ also obey the space group symmetries. In many cases, this is tantamount to a proof that $\vec{x}_\alpha = 0$, meaning that the quantum metric is the {\it minimal} quantum metric, and sets the Cooper pair mass via \Eq{eq:mquanummetric}. For instance, at $\delta = 0$ in the Lieb lattice with $a=\frac{1}{2}$, the $A$ and $C$ orbitals are related by $C_4$ symmetry and are invariant under $C_2$.
There is no way to deform these orbitals off the positions $a=\frac{1}{2}$ without breaking $C_2$. Thus $\vec{x}_\alpha = 0$, thereby explaining why $a = \frac{1}{2}$ is the correct choice to evaluate the minimal quantum metric in Fig.~\ref{fig.example}. By a similar argument, all orbitals at fixed high-symmetry positions necessarily have $\vec{x}_\alpha = \vec{0}$ because they are pinned by symmetries. In these cases, the minimal quantum metric is guaranteed to be the one computed using the physical positions in \Eq{eq.fourier}. Second, we propose a simple generalization of the uniform pairing condition that guarantees $\vec{x}_\alpha = 0$. We define the quantity \begin{equation} \begin{aligned} \varepsilon_\alpha(\vec{q}) = \int \frac{d^Dk}{(2\pi)^D} [P(\vec{k}+\vec{q}) P(\vec{k})]_{\alpha \alpha} \end{aligned} \end{equation} which at $\vec{q}=0$ yields $\varepsilon_\alpha(0) = n_\phi N_f$, the uniform pairing condition in \Eq{eq:upc}. It is then straightforward to check that \begin{equation} \begin{aligned} \label{eq:genUPC} \varepsilon_\alpha(0) &= n_\phi N_f, \quad &\text{(uniform pairing condition)} \\ \partial_i \varepsilon_\alpha(0) &= 0 , \quad &\text{(minimal metric condition)}, \end{aligned} \end{equation} the latter condition being the many-body analogue of \Eq{eq.positions}, in that its solution sets the quantum metric to be minimal. These results directly parallel those given by mean field theory in the above sections. We have shown that the Cooper pair effective mass is independent of the Fourier convention for the orbital positions. Furthermore, there exists a choice of orbital positions where the effective mass is determined by the quantum metric alone, and at these positions the quantum metric is minimal. Under the uniform pairing condition, we provide an explicit formula for these positions in \Eq{eq:xalso}, to be compared to \Eq{eq.positions}.
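The minimal metric condition can be probed numerically in the Lieb example. In the sketch below (our illustration; the grid size and finite-difference step are arbitrary), the $\delta=0$ flat-band projector is built with the Fourier-convention parameter $a$, and the gradient of $\varepsilon_A$ at $\vec{q}=\vec{0}$ is estimated by a symmetric difference: it is finite for $a=0$ but vanishes for the symmetric choice $a=\frac{1}{2}$, where the hopping functions reduce to $2\cos(k_{x,y}/2)$:

```python
import numpy as np

def flat_proj(kx, ky, a):
    # delta = 0 Lieb flat-band projector; a parametrizes the Fourier
    # convention (a = 0: periodic convention, a = 1/2: symmetric choice)
    fx = np.exp(-1j * a * kx) * (1 + np.exp(1j * kx))
    fy = np.exp(-1j * a * ky) * (1 + np.exp(1j * ky))
    u = np.array([fy, 0.0, -np.conj(fx)])
    u /= np.linalg.norm(u)
    return np.outer(u, u.conj())

def eps_A(qx, a, n=60):
    # epsilon_A(q) = int d^2k/(2pi)^2 [P(k+q) P(k)]_AA for q = (qx, 0)
    ks = 2 * np.pi * (np.arange(n) + 0.5) / n
    val = 0.0
    for kx in ks:
        for ky in ks:
            val += (flat_proj(kx + qx, ky, a) @ flat_proj(kx, ky, a))[0, 0]
    return val / n**2

step = 1e-3
grads = {a: abs((eps_A(step, a) - eps_A(-step, a)) / (2 * step))
         for a in (0.0, 0.5)}
print(grads)   # finite for a = 0, vanishing for a = 1/2
```

For $a=\frac{1}{2}$ all projector entries are real and even in $\vec{k}$, so $\varepsilon_A(\vec{q})$ is even and the gradient vanishes identically.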
The inclusion of crystalline symmetries constrains the positions: if the orbital positions are pinned by the symmetries, then the quantum metric evaluated for those positions must be minimal. Lastly, we established a generalization of the uniform pairing condition in \Eq{eq:genUPC} to determine when the quantum metric is minimal. \section{Non-isolated flat bands} \label{sec.ni} The relationship between the minimal quantum metric and the superfluid weight indicates that the BKT transition temperature could be increased in systems with a large quantum metric. However, this is only valid in the isolated flat band limit. The quantum metric typically diverges when the band gap closes, but this is not an indication that the superfluid weight diverges. The superfluid weight is proportional to $|U|\mathcal{M}_{ij}$ only when the flat band is isolated, which requires that the interaction strength is small compared to the band gap (otherwise pairing would involve higher bands). Therefore, when the band gap shrinks, the largest $|U|$ for which $D_s$ is proportional to the quantum metric decreases accordingly. The very large quantum metric that can be achieved with a small band gap is thus only relevant at very low interactions, where $[D_s]_{ij}\propto |U|\mathcal{M}_{ij}$ remains small. The divergence of the quantum metric is an indication that the contributions from the other bands are important at low $|U|$, and reduce the superfluid weight compared to the isolated flat band result. In the Lieb lattice, those contributions have been shown to curtail the divergence and lead to a finite superfluid weight~\cite{Julku2016}. An interesting question when searching for systems with high $T_{\rm BKT}$ is whether the critical temperature can still be large in the non-isolated band limit even though the contributions from dispersive bands are prominent.
In repulsive models, a flat band near the Fermi surface has been predicted to be beneficial~\cite{Aoki2020,Kuroki2005,Kobayashi2016,Matsumoto2018}. In attractive models, previous mean-field studies have indicated that the superfluid weight has a non-linear dependence on the interaction strength for non-isolated flat bands~\cite{Julku2016,Wu2021,Iskin2019}, but the additional terms we find in this work have not been taken into account. In this section, we show by continuously tuning the band gap that the superfluid weight and $T_{\rm BKT}$ can actually be maximal when there is a band touching. Furthermore, we study the dependence of the superfluid weight on different types of band touchings. We supplement our analysis by employing an $S$-matrix construction~\cite{Calugaru2021} to study bipartite lattices with band touching points. \subsection{Effect of closing the band gap} \begin{figure} \centering \includegraphics[width=\columnwidth]{fig4.eps} \caption{BKT temperature in the Lieb lattice (a) as a function of the hopping staggering $\delta$ and the interaction $|U|$, and (b) as a function of the chemical potential $\mu$ and the interaction.}\label{fig.ni_fb} \end{figure} As shown in Fig.~\ref{fig.qm_sf}, the superfluid weight in the Lieb lattice increases monotonically when $\delta$ is decreased, and reaches its maximum at $\delta=0$ for all interactions. At high interactions, the superfluid weight decays as $\propto 1/|U|$, which is a well-known behavior related to the formation of bound pairs in the BEC limit of the BCS-BEC crossover~\cite{Iskin2019,Orso2021}. At low interactions, $D_s\propto |U|$ when the flat band is isolated. This linear behavior is visible in an increasingly wide range of interactions when $\delta$ is increased.
For $\delta=0$, when there is no band gap, the behavior is no longer exactly linear, which is consistent with previous literature: in Refs.~[\onlinecite{Julku2016,Iskin2019,Wu2021}] it has been found that $D_s\propto |U|{\rm ln}(C/|U|)$, with $C$ a constant. The superfluid weight at zero temperature is an upper bound for the BKT temperature. However, it does not give the full picture: for instance, the zero-temperature superfluid weight in a dispersive band will typically be non-zero in the $U\to 0$ limit whereas it vanishes in a flat band. At $T=0$, the superfluid weight will thus typically be smaller in a flat band than in a dispersive band for small interactions, even though the BKT temperature is usually larger on the flat band (see Fig.~\ref{fig.comp}). In this section, we solve for the BKT temperature from the universal relation~\cite{Berezinsky1971,Kosterlitz1973,Nelson1977} \begin{equation} T_{\rm BKT} = \frac{\pi}{8} \sqrt{{\rm det}(D_s(T_{\rm BKT}))}. \end{equation} As is shown in Fig.~\ref{fig.ni_fb}a, $T_{\rm BKT}$ mirrors the behavior of the superfluid weight and increases monotonically as $\delta$ is decreased, for all considered interactions. The largest BKT temperature occurs around interaction $U\approx -3.5$ with no hopping staggering, so that the flat band is not isolated. Moreover, the highest critical temperature as a function of $\mu$ is found for the half-filled flat band, showing that in this model, the highest possible critical temperature is achieved in the flat band when it is not isolated. Hence, the isolated flat band limit is not necessary to reach a high $T_{\rm BKT}$, and a band touching could actually be beneficial for superconductivity. It is also important to remember that the flat band combined with a band touching yields a higher $T_{\rm BKT}$ than a usual dispersive band (e.g., the square lattice) for small interactions $|U|$; see Fig.~\ref{fig.comp}.
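Since $D_s$ itself depends on temperature, the universal relation is an implicit equation for $T_{\rm BKT}$ that can be solved by bisection. A minimal sketch, assuming a toy isotropic superfluid weight suppressed linearly up to a mean-field critical temperature (our assumption for illustration, not the actual lattice result):

```python
import numpy as np

def bkt_temperature(ds_of_t, t_max, n_iter=60):
    # Solve T = (pi/8) sqrt(det Ds(T)) by bisection; since Ds(T) decreases
    # with T, the left- minus right-hand side is monotonically increasing.
    def residual(t):
        return t - (np.pi / 8) * np.sqrt(np.linalg.det(ds_of_t(t)))
    lo, hi = 0.0, t_max
    for _ in range(n_iter):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if residual(mid) < 0 else (lo, mid)
    return 0.5 * (lo + hi)

tc_mf = 0.5   # toy mean-field critical temperature
ds_toy = lambda t: max(1.0 - t / tc_mf, 0.0) * np.eye(2)
t_bkt = bkt_temperature(ds_toy, t_max=1.0)
print(t_bkt)  # analytic root for this profile: (pi/8)/(1 + pi/4)
```

For this linear profile the self-consistent root lies well below the mean-field $T_c$, as expected from vortex unbinding.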
\subsection{Comparison of linear and quadratic band touchings} \label{sec.ltq} \begin{figure*} \centering \includegraphics[width=\textwidth]{fig5.eps} \caption{(a) Band structure of the tunable Lieb model for different values of $\lambda$, at $\delta=0$, i.e. in the presence of a band touching, and at $\delta=0.4$. The band touching can be tuned from linear ($\lambda=0$) to quadratic ($\lambda=1$) at $\delta=0$. At $\delta=0.4$, the dispersive bands are modified without changing the quantum metric of the flat band. (b) Superfluid weight $[D_s]_{xx}$ for $\delta=0.4$ in the Lieb model, when the flat band is separated from the other bands by a gap. (c) Superfluid weight $[D_s]_{xx}$ and (d) ratio $[D_{s,{\rm geom}}]_{xx}/[D_s]_{xx}$ in the tunable Lieb lattice. The off-diagonal components of the superfluid weight are zero. (e) Order parameters $\Delta_{\alpha}/U$ in the tunable Lieb lattice as a function of $\lambda$ at interaction strengths $U=-1$ (blue), $U=-4$ (orange) and $U=-8$ (blue). The order parameters in the $A/C$ orbital (full line) are always equal, and are larger than the order parameter in the $B$ orbital (dashed line). The dotted line shows the average of all order parameters. (f) $\sqrt{{\rm det}(D_s)}$ and (g) $[D_{s,{\rm geom}}]_{xx}/[D_s]_{xx}$ in the tunable kagome model. In this case, the off-diagonal components are not always zero. A similar behavior of the ratio $[D_{s,{\rm geom}}]_{ij}/[D_s]_{ij}$ is observed for all components. } \label{fig.ltq_res} \end{figure*} To study the effect of different types of band touchings on the superfluid weight, we use the method developed in Ref.~[\onlinecite{Graf2021}] to construct flat band models that can be continuously tuned from a linear to a quadratic band touching. The method is based on building two Hamiltonians $H_{\rm lin}$ and $H_{\rm quad}$ that feature a flat band with a linear and quadratic band touching respectively, and for which the flat band has exactly the same Bloch functions. 
Then the band touching can be continuously tuned in the total Hamiltonian $H = (\lambda H_{\rm quad} + (1-\lambda) H_{\rm lin})/C$ without affecting the energy or the Bloch functions of the flat band. We study two such models, constructed on a Lieb and kagome geometry. The tight-binding parameters are given in Appendix~\ref{app.ltq}. These models both have a flat band at $E=0$, and we pick $C$ so that the total width of the band structure is independent of $\lambda$. Our energy unit is the average inter-site hopping strength of the $\lambda=0$ lattice model. The band structure for the tunable Lieb model is shown for three values of $\lambda$ in Fig.~\ref{fig.ltq_res}a. The Lieb model is constructed so that the band gap can be tuned with the staggering parameter $\delta$. In the Lieb model, when $\delta$ is nonzero, the superfluid weight at low interactions becomes independent of $\lambda$ (see Fig.~\ref{fig.ltq_res}b). This is expected, as in the isolated band limit the superfluid weight is determined by quantum geometry, and the flat band has the same quantum metric for all $\lambda$. The range of interactions where $D_s$ is independent of the parameter $\lambda$ grows with $\delta$, as the band gap becomes larger and the isolated band limit is valid up to larger $|U|$. At intermediate interactions, the limit $\lambda=0$, corresponding to a linear touching, has a more pronounced maximum. When the band gap is closed, differences when varying $\lambda$ occur already at vanishingly small interactions, as shown in Fig.~\ref{fig.ltq_res}(c-d). The superfluid weight is smaller overall for the quadratic band touching $\lambda=1$. Moreover, the ratio $D_{\rm geom}/D_s$ is much smaller for the quadratic than the linear band touching. It is interesting to note that the superfluid weight behaves differently from the mean-field order parameters, shown in Fig.~\ref{fig.ltq_res}e.
The order parameters at the $A/C$ sites are larger in the quadratic model than in the linear model for all interactions we consider. We show in Sec.~\ref{sec.mf_gap} that this is expected to hold in bipartite lattices with uniform pairing. However, even though the pairing is stronger in the quadratic model, the superfluid weight is lower, which is the opposite of what would be expected for an isolated flat band~\cite{Peotta2015,Liang2017} where the superfluid weight is proportional to the pairing gap. For the kagome model, which does not feature a band gap, a similar behavior of the geometric part of the superfluid weight can be observed (see Fig.~\ref{fig.ltq_res}(e-f)): its contribution is much more prominent for the linear band touching than the quadratic one. The maximum of $D_s$ is also slightly more pronounced in the linear model than the quadratic one, although the superfluid weight is larger in the quadratic model at small interactions. The geometry of the flat band therefore does not give the full picture in the non-isolated band limit: even though the Bloch functions of the flat band are always the same when varying $\lambda$, which controls the type of band touching, the superfluid weight differs. This means that the behavior of the superfluid weight is dependent on the nature of the band touching. \subsection{Band touching points from the $S$-matrix construction} \label{sec.mf_gap} The mean field behavior of the pairing gap in general lattices, with both isolated and non-isolated flat bands, can be understood using the $S$-matrix construction of Ref.~[\onlinecite{Calugaru2021}]. This provides a description of the effect of band touchings on the pairing gap that is more general than that given by the specific models considered above, and allows for an analytic solution in the mean field, yielding general results for quantities such as the pairing gap.
The power of this approach is made evident as it yields self-consistent gap equations independent of the wavefunctions, allowing for an analysis of the pairing strength as a function of the lattice parameters and dispersion. The $S$-matrix construction employs a bipartite lattice with two unequal sublattices $L, {\tilde L}$, with the difference between the number of orbitals per unit cell $N_L - N_{\tilde L} = N_f$ being the number of flat bands. Band touching points can be enforced in the model via irrep analysis of the symmetries \cite{Calugaru2021}. The bipartite Hamiltonian in such models reads \begin{align} H_{\vec{k}} = \begin{bmatrix} 0 & S_{\vec{k}}^\dagger \\ S_{\vec{k}} & 0 \end{bmatrix}, \end{align} where $S_{\vec{k}}^\dagger$ is an $N_{\tilde L} \times N_L$ rectangular matrix encoding the hopping between the two sublattices. These $S$-matrix Hamiltonians can be realized in actual physical materials \cite{Regnault2021}. The energies come in $\pm \epsilon_{{\vec{k}},m}$ pairs, where $\epsilon_{{\vec{k}},m}$ are the singular values of $S_{\vec{k}}$. Because $S_{\vec{k}}^\dagger$ maps $\mathbb{C}^{N_L}$ to $\mathbb{C}^{N_{\tilde L}}$, there are at least $N_L- N_{\tilde L}$ vectors in the null space of $S_{\vec{k}}^\dagger$; these form the flat bands. One can introduce a quadratic Hamiltonian \begin{align} H_\text{quad} = H_{\vec{k}} \begin{bmatrix} I_{{\tilde L} \times {\tilde L}} & 0 \\ 0 & -I_{L \times L} \end{bmatrix} H_{\vec{k}} \end{align} which has eigenvalues $\pm \epsilon_{{\vec{k}},m}^2, 0$, and preserves the flat band wavefunctions. In the case of the Lieb lattice, this Hamiltonian is precisely the same as the Hamiltonian with quadratic band touching points studied in Sec.~\ref{sec.ltq}, obtained using the technique from Ref.~[\onlinecite{Graf2021}] (see Appendix~\ref{app.ltq} for the tight-binding parameters of the model).
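The spectral structure described above is easy to verify numerically. The sketch below (our toy example: a random $S$ with $N_L=3$, $N_{\tilde L}=1$, i.e.~$N_f=2$ flat bands) checks that $H$ has eigenvalues $\pm\epsilon_m$ given by the singular values of $S$ together with $N_L-N_{\tilde L}$ zero modes, and that $H_{\rm quad}$ has eigenvalues $\pm\epsilon_m^2$ with the same zero modes:

```python
import numpy as np

rng = np.random.default_rng(7)
N_L, N_Lt = 3, 1                 # larger / smaller sublattice orbital counts
S = rng.normal(size=(N_L, N_Lt)) + 1j * rng.normal(size=(N_L, N_Lt))
# basis ordering (Ltilde, L): H = [[0, S^dagger], [S, 0]]
H = np.block([[np.zeros((N_Lt, N_Lt)), S.conj().T],
              [S, np.zeros((N_L, N_L))]])
sig = np.linalg.svd(S, compute_uv=False)   # singular values of S
evals = np.linalg.eigvalsh(H)

J = np.diag(np.concatenate([np.ones(N_Lt), -np.ones(N_L)]))
H_quad = H @ J @ H                         # squares the dispersive energies
evals_quad = np.linalg.eigvalsh(H_quad)
print(evals, evals_quad)
```

Within each singular-value subspace $H$ restricts to $\begin{psmallmatrix}0&\epsilon\\ \epsilon&0\end{psmallmatrix}$ and $J$ to ${\rm diag}(1,-1)$, which gives the $\pm\epsilon^2$ pairs directly.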
By adding attractive on-site interactions and assuming that the pairing is uniform within each sublattice, that is, there are two gaps $\Delta_L$ and $\Delta_{\tilde L}$ depending on the sublattice, we find the following self-consistent gap equations at $T=0$ for $H_{\vec{k}}$: \begin{align} N_L \Delta_L &= \frac{|U| N_{\tilde L}}{2} f(\Delta) + \frac{|U|(N_L - N_{\tilde L})}{2}\\ N_{\tilde L} \Delta_{\tilde L} &= \frac{|U| N_{\tilde L}}{2} f(\Delta) , \label{eq:sysMF} \end{align} where $\Delta = \dfrac{1}{2}(\Delta_L + \Delta_{\tilde L})$ and \begin{align} f(\Delta) &= \dfrac{1}{N_{\tilde L}} \sum_{ m =1}^{N_{\tilde L}} \int \frac{d^Dk}{(2\pi)^D} \dfrac{\Delta}{ \sqrt{\Delta^2 + \epsilon_{{\vec{k}},m}^2}} . \nonumber \end{align} Here the sum is over the $N_{\tilde L}$ dispersive bands. The function $f$ ranges from $1$ in the limit where the dispersive bands collapse to zero kinetic energy, to $0$ when the dispersive bands are gapped at very large kinetic energy, and is a monotonically increasing function of $\Delta$. \Eq{eq:sysMF} always has a solution, and obeys the following properties: \begin{align} N_L \Delta_L - N_{\tilde L} \Delta_{\tilde L} &= \frac{|U|(N_L - N_{\tilde L})}{2}, \label{eq:weighted_diff_linear}\\ 0 < \Delta_{\tilde L} &< \Delta_L < \frac{|U|}{2}, \\ \frac{N_L - N_{\tilde L}}{4 N_L} &< \frac{\Delta}{|U|} < \frac{1}{2}. \label{eq:linear} \end{align} The first equality generalizes the result found in the Lieb lattice by Ref.~[\onlinecite{Julku2016}], as it now applies to any bipartite lattice with uniform pairing within each sublattice, and agrees with our numerical calculations of the pairing gaps. The dispersion does not need to be gapless for this equality to hold; only the bipartite nature of the underlying lattice is required. These relations are proved in Appendix~\ref{app.gap}.
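The gap equations above can be solved by simple fixed-point iteration. A minimal sketch (our toy setup, not from the text: one flat band, $N_L=2$, $N_{\tilde L}=1$, and an assumed one-dimensional dispersive band $\epsilon_k = 2|\sin(k/2)|$ touching the flat band at $E=0$):

```python
import numpy as np

N_L, N_Lt, U = 2, 1, 2.0             # N_f = N_L - N_Lt = 1 flat band
k = np.linspace(-np.pi, np.pi, 4000, endpoint=False)
eps_k = 2.0 * np.abs(np.sin(k / 2))  # toy dispersive band touching E = 0

def f(delta):
    # Brillouin-zone average of Delta / sqrt(Delta^2 + eps^2);
    # monotonically increasing in Delta and bounded by 1
    return np.mean(delta / np.sqrt(delta**2 + eps_k**2))

d_L = d_Lt = 0.1 * U                 # initial guess for the two gaps
for _ in range(200):
    fd = f(0.5 * (d_L + d_Lt))
    d_L = (U * N_Lt / 2 * fd + U * (N_L - N_Lt) / 2) / N_L
    d_Lt = U / 2 * fd
print(d_L, d_Lt)   # Delta_L > Delta_Lt, both below |U|/2
```

At the converged solution the weighted difference $N_L\Delta_L - N_{\tilde L}\Delta_{\tilde L}$ equals $|U|(N_L-N_{\tilde L})/2$ identically, and $0<\Delta_{\tilde L}<\Delta_L<|U|/2$ holds as stated.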
Regardless of the form of the bipartite lattice, even in the absence of a band touching, the pairing strength on the larger sublattice $\Delta_L$ is always larger than the pairing on the smaller sublattice $\Delta_{\tilde L}$, because the flat bands greatly enhance the pairing on the sublattice $L$ (see Appendix~\ref{app.gap}). Both $\Delta_L$ and $\Delta_{\tilde L}$ are bounded by quantities depending on the number of flat and dispersive bands. Though the exact details of $f(\Delta)$ depend on the dispersion of the kinetic energy, the fact that it is bounded suggests that most of the gap strength comes from the flat band contribution, which is universal. To maximize the strength of the pairing $\Delta_L$, we note that the self-consistent equation for $\Delta_L$ depends only on the ratio of the number of bands of the sublattices, $r = \frac{N_{\tilde L}}{N_L}$. The bound $\Delta_L < |U|/2$ is saturated as $r \rightarrow 0$: thus, even in the presence of band touching points, more flat bands per total bands enhances the superconducting gap at $T=0$. If the dispersive bands are gapped from the flat bands, with the band gap $ \gg |U|$, then $f(\Delta) \rightarrow 0$. Thus, we approach the limit discussed in Ref.~[\onlinecite{Peotta2015}], where one may project the Hamiltonian into the flat bands and obtain an exactly solvable BCS ground state. The quadratic band touching point, i.e.~the case of $H_\text{quad}$, has a different set of self-consistent gap equations (see Appendix~\ref{app.gap}), due to the fact that the dispersive bands have different wavefunctions (though the flat band wavefunctions remain the same). The self-consistent equations still always possess a solution so long as flat bands exist.
An analysis shows that the weighted difference reads \begin{align} N_L \Delta_L - N_{\tilde L} \Delta_{\tilde L} &= \frac{|U| N_{\tilde L}}{2} (f(\Delta_L) - f(\Delta_{\tilde L})) \nonumber \\ &+ \dfrac{|U|(N_L - N_{\tilde L})}{2}, \end{align} which increases relative to \Eq{eq:weighted_diff_linear} so long as $\Delta_L > \Delta_{\tilde L}$. We prove that there always exists a solution of the gap equations with this property (see Appendix~\ref{app.gap} for more details). To make further statements about the pairing gap $\Delta$, we compare $f(\Delta)$ for a quadratic dispersion versus a linear dispersion. In general, a higher density of states of the kinetic energy close to zero energy will raise $f(\Delta)$, thereby raising $\Delta$. Thus, we expect the quadratic band touching to have a stronger pairing gap than the linear band touching. It is interesting to compare this to our numerical results for different band touchings in the Lieb lattice (Fig.~\ref{fig.ltq_res}), where the quadratic band touching does not give the highest value of the superfluid weight. The pairing gap, on the other hand, is larger in the quadratic model, in agreement with our prediction. The pairing gap $\Delta_L$ is influenced by the density of states, which is indeed larger for a quadratic dispersion than for a linear one. However, the superfluid weight depends also on quantum geometry, which affects the ability of Cooper pairs to move. Thus the two quantities can have qualitatively different behavior. We analyze the $S$-matrix model in the many-body limit (without recourse to mean field theory) in upcoming work~\cite{upcoming2}.
\section{Revisiting the literature} \label{sec.prev} The superfluid weight has been computed from mean-field theory in a variety of multiband systems~\cite{Julku2016,Liang2017,Iskin2019,Wu2021,Iskin2019b,Peri2021,Chan2022,Herzog-Arbeitman2021,Kitamura2021,Peltonen2020} including magic-angle twisted bilayer graphene~\cite{Julku2020,Xie2020,Hu2019} and flat band systems with disorder~\cite{Lau2022}. The impact of the terms arising from the derivatives of the order parameters should be examined on a case-by-case basis. For example, the results for the Lieb lattice presented in~[\onlinecite{Julku2016}] are for the most part close to the correct result. Indeed, the hopping staggering $\delta$ used for the main results therein is very small, and the orbital positions were picked so that $a=\frac{1}{2}$, which gives the correct results at $\delta=0$ even without including the derivatives of the order parameters. Results for larger values of $\delta$ are inaccurate. The results presented for the Mielke lattice with a flat band in~[\onlinecite{Iskin2019}] are accurate based on the same reasoning, but the results for other values of the tight-binding parameters may be affected by the ignored terms. In Ref.~[\onlinecite{Chan2022}], the behavior of the order parameters was accurately taken into account, and the results agreed well with DMRG calculations. The superfluid weight was, however, compared with the quantum metric computed for a choice of the Fourier transformation that predicted $\pi D_s=0.6|U|$ at low interactions and for a half-filled flat band. The estimate we find using the correct choice, namely the minimal quantum metric, instead gives a slope of approximately $\pi D_s = 0.45|U|$, which is much closer to the mean-field and DMRG results of $\pi D_s\approx 0.40|U|$ obtained in~[\onlinecite{Chan2022}]. Expressions for the superfluid weight in terms of the quantum metric can be found in~[\onlinecite{Liang2017}] for models without flat bands.
For instance, in the isolated band limit, \begin{equation} [D_{s,{\rm geom}}]_{ij} = \frac{2}{V}\Delta^2\sum_{\vec{k}} \frac{{\rm tanh} (\beta E_{m,\vec{k}}/2)}{E_{m,\vec{k}}} {\rm Re}(\mathcal{B}_{ij}), \end{equation} where $m$ labels the isolated band, which does not need to be flat. In this case, the minimal quantum metric is not always relevant, but one should instead minimize the above expression for $i=j$. The relationship between the superfluid weight and quantum metric has been used to derive various bounds for the superfluid weight~\cite{Peotta2015,Xie2020,Liang2017,Herzog-Arbeitman2021,Verma2021}. The lower bound given in~[\onlinecite{Peotta2015}] for time-reversal symmetric systems in terms of the spin Chern number is valid, as it is a lower bound for the quantum metric regardless of the choice of orbital positions. On the other hand, the bound proposed in~[\onlinecite{Liang2017}] related to the integral of the absolute value of the Berry curvature is only valid if one takes the lowest possible lower bound, as that quantity depends on the choice of orbital positions. This is also the case for the lower bound in terms of real space invariants proposed in~[\onlinecite{Herzog-Arbeitman2021}] for systems with obstructed Wannier orbitals or fragile topology. It is shown in the supplementary material of that work that the lower bound can be nonzero for arbitrary orbital positions. The correct choice of orbital positions is thus needed to define an orbital-independent bound. If the uniform pairing condition is satisfied, then space group symmetries can guarantee that the minimal quantum metric is obtained for orbitals at the high-symmetry positions. The two-body problem in a flat band was shown in Ref.~[\onlinecite{Torma2018}] to give a finite effective mass for a pair, which means that already at the two-body level, interactions can lead to pair movement even when the single particle effective mass is infinite.
The pair mass was found to be given by the ``local" (spatially dependent) version of the quantum metric -- which reassuringly is independent of orbital positions. However, approximations were then used to connect the pair mass to the usual quantum metric. Our many-body Cooper pair calculation in Section~\ref{sec.Jonah} now shows that the correct choice is the minimal quantum metric. Quantum geometry has been shown to be relevant also for Bose-Einstein condensation in flat bands~\cite{Julku2021,Julku20212}. The speed of sound and the excitation fraction were found to depend on generalized forms of the quantum metric, and on the quantum distance between the flat band states, respectively. These quantities are invariant under the change of orbital positions. Under certain conditions, however, they were shown to reduce to the usual quantum metric and Hilbert-Schmidt quantum distance, and then (as well as in the superfluid density calculation in~[\onlinecite{Julku20212}]) one needs to pay attention to the choice of the correct basis. Numerically exact methods such as quantum Monte Carlo do not require the same care with the behavior of the order parameters as mean-field theory does, because the interaction Hamiltonian of the exact Hubbard model does not depend explicitly on the vector potential. Generally, it is important to make sure that all variables that may depend on the vector potential are properly taken into account. In addition to the superfluid weight, the quantum metric has been related to the effective mass of two-body bound states~\cite{Liang2018,Iskin2021,Iskin2022}, conductivity~\cite{Mitscherling2022}, the orbital magnetic susceptibility~\cite{Piechon2016,Gao2015}, the velocity of the Goldstone mode~\cite{Iskin2020}, and other physical phenomena~\cite{Abouelkomsan2022,Gao2019,Holder2020,Ahn2021,Mitscherling2020}.
As shown here for the superfluid weight, whenever a connection is drawn between a physical quantity and the quantum metric, particular attention should be paid to the dependence of the quantum metric on the orbital positions. If the physical quantity should not depend on these, there may be a single appropriate basis in which the quantum metric is the relevant quantity. \subsection{Systems with broken time-reversal symmetry} \label{sec.no_trs} Our result Eq.~(\ref{eq.final}) is valid for time-reversal symmetric systems. It can be straightforwardly generalized to systems where TRS is broken (see Appendix~\ref{app.sfw}): the vector ${\rm d}_i\Delta$ will contain entries for the derivatives of the real parts of the order parameters and ${\rm d}\mu/{\rm d}q_i$ (when $\mu(\vec{q})\neq\mu(-\vec{q})$). Corresponding entries are added in the Hessian matrix $\partial_{\Delta,\mu}^2\Omega$. The addition of these terms should be considered carefully when connecting the superfluid weight to the quantum metric or other results obtained from $(1/V)\partial^2\Omega/\partial q_i\partial q_j\big|_{\vec{q}=\vec{0}}$. Indeed, in contrast to systems with TRS, there may not exist any set of orbital positions where $[D_s]_{ij} = (1/V)\partial^2\Omega/\partial q_i\partial q_j\big|_{\vec{q}=\vec{0}}$: the derivatives of the \textit{real parts} of the order parameters can be nonzero, and cannot be made zero by manipulating only the phases of the order parameters. If $[D_s]_{ij} = (1/V)\partial^2\Omega/\partial q_i\partial q_j\big|_{\vec{q}=\vec{0}}$ never holds, lower bounds derived for the quantum metric cannot generally be used for the superfluid weight. Furthermore, if ${\rm d}\mu/{\rm d}q_i\big|_{\vec{q}=\vec{0}}\neq 0$, the Hessian matrix $\partial_{\Delta,\mu}^2\Omega$ contains entries corresponding to the chemical potential and may not be positive semidefinite. In such a case, the partial derivative could even be smaller than the total derivative.
When ${\rm d}\mu/{\rm d}q_i\big|_{\vec{q}=\vec{0}}=0$, the terms relating to $\mu$ can be ignored and the Hessian matrix $\partial_{\Delta}^2\Omega$ contains partial derivatives of $\Omega$ only with respect to the real and imaginary parts of the order parameters. In such a case, the inequality $\partial^2\Omega/\partial q_i^2\big|_{\vec{q}=\vec{0}}\geq {\rm d}^2F/{\rm d}q_i^2\big|_{\vec{q}=\vec{0}}$ holds. Furthermore, when the overall phase of the order parameters is fixed, $\partial_{\Delta}^2\Omega$ is invertible. Under these conditions, the superfluid weight is $[D_s]_{ij} = (1/V)\partial^2\Omega/\partial q_i\partial q_j\big|_{\vec{q}=\vec{0}}$ if and only if all the derivatives of the order parameters are zero at $\vec{q}=\vec{0}$. These derivatives are given by \begin{equation} \frac{{\rm d}\Delta_{\alpha}}{{\rm d}q_i} = \frac{{\rm d}|\Delta_{\alpha}|}{{\rm d}q_i}e^{i\theta_{\alpha}} + i|\Delta_{\alpha}|e^{i\theta_{\alpha}}\frac{{\rm d}\theta_{\alpha}}{{\rm d}q_i}. \end{equation} Because changing the orbital positions only affects the phases $\theta_{\alpha}$, the derivatives can be set to zero with such a transformation only when ${\rm d}|\Delta_{\alpha}|/{\rm d}q_i\big|_{\vec{q}=\vec{0}}=0$. The equality $[D_s]_{ij} = (1/V)\partial^2\Omega/\partial q_i\partial q_j\big|_{\vec{q}=\vec{0}}$ can thus only hold in systems where ${\rm d}|\Delta_{\alpha}|/{\rm d}q_i\big|_{\vec{q}=\vec{0}}=0$ for all $\alpha,i$. In systems where ${\rm d}|\Delta_{\alpha}|/{\rm d}q_i\big|_{\vec{q}=\vec{0}} = 0$, results relating the superfluid weight to the quantum metric can be used, provided the diagonal components $(1/V)\partial^2\Omega/\partial q_i^2\big|_{\vec{q}=\vec{0}}$ are minimized. \section{Conclusions} \label{sec.conc} We have derived complete equations for the mean-field superfluid weight in multiband lattice models. 
These equations contain both the partial derivative of the grand potential, which gives a connection to quantum geometry, and terms that take into account the changes in the order parameter. The significance of the latter terms has been overlooked in the previous literature. We have shown that ignoring them can lead to quantitative as well as qualitative errors, where superconductivity can be predicted in systems where it is impossible. The use of the complete equations is thus crucial whenever studying multiband systems, such as moiré materials, as well as when searching for materials with particularly high critical temperatures. Using our new equations, we have shown that the superfluid weight in isolated flat bands is proportional to the \textit{minimal} quantum metric, that is, the one with the smallest possible trace. A central discrepancy afflicting the current understanding of the connection between superconductivity and quantum geometry has been the following: the superfluid weight is manifestly independent of orbital positions, while the quantum metric, which has been shown to govern isolated flat band superconductivity, depends on them. Our finding that actually only the minimal quantum metric is relevant resolves this fundamental concern. Based on our results, bounds for the superfluid weight in terms of topological invariants in time-reversal symmetric systems~\cite{Peotta2015,Xie2020} are still valid, but other bounds which depend on the choice of orbital positions require more care. The conclusions based on the mean-field superfluid weight are corroborated by exact results derived for the Cooper pair mass. We generalized the uniform pairing condition in \Eq{eq:genUPC} to establish a minimal metric condition. When evaluated at the orbital positions satisfying the minimal metric condition, the Cooper pair mass is entirely determined by the quantum metric. 
Moreover, if the orbitals of the model are fixed by symmetries at high-symmetry points (maximal Wyckoff positions), then the minimal quantum metric is guaranteed to be obtained at these positions. Importantly, our results show that in systems where TRS is broken, neither a relation between quantum geometry and superfluidity nor the consequent topological bounds exist in general. We identified sufficient conditions for having the connection to quantum geometry, namely that the derivatives of the order parameter and the chemical potential with respect to $\vec{q}$ have to vanish at $\vec{q}=\vec{0}$. Whether these conditions are also necessary remains a topic for future research, as do the possible relations of the conditions to crystalline symmetries, as in the time-reversal symmetric, uniform pairing case. Furthermore, we have shown that the quantum geometry of the flat band is not sufficient to describe the superfluid weight in the non-isolated band limit: its behavior depends not only on the flat band properties but also on the nature of the band touching. In general, the geometric contribution is more prominent for linear band touchings than for quadratic ones. Many flat band material candidates have band touchings~\cite{Regnault2021}. Remarkably, we have shown that an isolated flat band is \textit{not} necessary to achieve a high critical temperature, and that a band touching with dispersive bands can in fact be beneficial for superconductivity. This result is important for realizing the promise of high-temperature or even room-temperature superconductivity from flat bands. Restricting to isolated flat bands would require materials and systems with a gap on the order of tens of meV (the thermal energy). We have shown that this limitation is not necessary: on the contrary, a band touching can enhance the critical temperature. 
This conclusion holds within the specific models considered here, but is likely to be more general, since the quantum metric of a flat band diverges when the gaps to the other bands are closed. Using results from the S-matrix analysis, we derived universal relations between the pairing gaps on bipartite lattices, and argued that the pairing gap is enhanced for quadratic over linear band touchings, a result opposite to what we saw numerically for the superfluid weight. This can be understood from the fact that the density of states determines the former, while for the latter quantum geometry is also important. Our results inspire further engineering of band touchings to optimize the critical temperature of superconductivity, and to determine whether quantum geometry or the density of states dominates. \acknowledgments We thank Aleksi Julku, Long Liang, Sebastiano Peotta, Grazia Salerno and Gabriel Topp for useful discussions. We acknowledge support by the Academy of Finland under project numbers 303351 and 327293. K-E.H. acknowledges support from the Magnus Ehrnrooth Foundation. B.A.B. and A.C. were supported by the ONR Grant No. N00014-20-1-2303, DOE Grant No. DESC0016239, the Schmidt Fund for Innovative Research, Simons Investigator Grant No. 404513, the Packard Foundation, the Gordon and Betty Moore Foundation through Grant No. GBMF8685 towards the Princeton theory program, and a Guggenheim Fellowship from the John Simon Guggenheim Memorial Foundation. Further support was provided by the NSF-MRSEC Grant No. DMR-1420541 and DMR2011750, BSF Israel US foundation Grant No. 2018226, and the Princeton Global Network Funds. B.A.B. acknowledges support from the Office of Naval Research grant No. N00014-20-1-2303 and from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (Grant agreement n° 101020833). J.H-A. is supported by a Marshall Scholarship funded by the Marshall Aid Commemoration Commission. A.C. 
is supported by a Moore Postdoctoral Fellowship from the Gordon and Betty Moore Foundation. \phantom{a}\\
\section{Introduction} \label{introduction} Single transverse-spin asymmetries (SSAs) in both high energy lepton-hadron and hadronic scattering processes have attracted considerable attention from both experimental and theoretical communities over the years \cite{D'Alesio:2007jt}. Generally defined as $A_N\equiv (\sigma(S_\perp)-\sigma(-S_\perp))/(\sigma(S_\perp)+\sigma(-S_\perp))$, the ratio of the difference to the sum of the cross sections when the hadron's spin vector $S_\perp$ is flipped, SSAs have been consistently observed in various experiments at different collision energies \cite{HERMES, COMPASS, SSA-rhic}. Much theoretical progress has been achieved in recent years. An important realization is the crucial role of the initial- and final-state interactions between the struck parton and the target remnant \cite{Brodsky:2002cx}, which provide the necessary phases that lead to the non-vanishing SSAs. These interactions can be accounted for by including the appropriate color gauge links in the gauge-invariant transverse momentum dependent (TMD) parton distribution functions (PDFs) \cite{Collins:2002kn,TMD-dis,boermulders}. An important example is the quark Sivers function \cite{Siv90}, which represents the distribution of unpolarized quarks in a transversely polarized nucleon, through a correlation between the quark's transverse momentum and the nucleon polarization vector. It is believed to be (partially) responsible for the SSAs observed in the experiments. The details of the initial- and final-state interactions depend on the scattering process, thus the form of the gauge link in the Sivers function is process dependent \cite{mulders}. As a result, the Sivers function itself is non-universal. 
For example, it is the difference between the final-state interactions (FSIs) in semi-inclusive deep inelastic scattering (SIDIS) and the initial-state interactions (ISIs) in the Drell-Yan (DY) process in $pp$ collisions that leads to an opposite sign of the Sivers function probed in these two processes \cite{Collins:2002kn, boermulders, Kang:2009bp}. For hadron production in $pp$ collisions, the Sivers function typically has a more complicated relation to those probed in the SIDIS and DY processes \cite{mulders}; that is, there are only FSIs (ISIs) in the SIDIS (DY) process, while both ISIs and FSIs exist for single inclusive particle production. The SSAs for inclusive single particle production in hadronic collisions are among the earliest observables studied in experiments, starting from the fixed-target experiments in the 1980s \cite{SSA-fixed-tgt}. Recently the experiments at the Relativistic Heavy Ion Collider (RHIC) have also measured the SSAs of inclusive hadron production in $pp$ collisions over a wide range of energies~\cite{SSA-rhic}. Theoretically, a QCD collinear factorization formalism at next-to-leading power (twist-3) has been developed and used in phenomenological studies \cite{Efremov,qiu-sterman,inclupi,applytwist}. Alternatively, a more phenomenological approach has been formulated in the context of the generalized parton model (GPM) \cite{Anselmino,Boglione:2007dm,Anselmino:2008uy}, with the inclusion of spin and transverse momentum effects. In this approach TMD factorization is assumed as a reasonable starting point~\cite{Anselmino}; at the same time, the leading-twist TMD distributions (Sivers functions) are assumed to be universal (process-independent), and thus the same as those in the SIDIS process \cite{oldsiv,newsiv}. In this paper we formulate the SSAs in inclusive single particle production within the framework of the GPM approach. 
However, instead of using a process-independent Sivers function, we will carefully examine the initial- and final-state interaction effects, and determine the process-dependent Sivers function. Further, we find that one can shift the process dependence of the Sivers function to the squared hard partonic scattering amplitude under the one-gluon exchange approximation, and these modified hard parts are very similar in form to those in the twist-3 collinear approach~\cite{inclupi} in terms of the Mandelstam variables $\hat s,\hat t, \hat u$ (as we will demonstrate). This suggests a close connection between this modified GPM formalism and the twist-3 approach. However, it is important to mention that the Mandelstam variables $\hat s,\hat t, \hat u$ are themselves functions of the partonic intrinsic transverse momentum in the GPM approach. We comment on these issues at the end of Section~\ref{prosiver}, where we also show that the modified GPM formalism can reproduce the twist-3 collinear factorization formalism in the leading-order expansion in the intrinsic transverse momentum $k_T$ (for contributions coming from initial- and final-state interactions, where the latter is equivalent up to a prefactor). The rest of the paper is organized as follows: In Sec.~\ref{prosiver}, we introduce the GPM approach, demonstrate how to formulate the ISI and FSI effects, and discuss the connection to the twist-3 collinear factorization approach. In Sec.~\ref{numerics}, we estimate the asymmetry for inclusive pion and direct photon production at RHIC energy, and compare our predictions with those from the conventional GPM approach. We conclude our paper in Sec.~\ref{sum}. \section{Initial- and final-state interactions in single inclusive particle production} \label{prosiver} In this section, we introduce the basic ideas and assumptions of the GPM approach. Then we discuss how to formulate the initial- and final-state interactions for single inclusive particle production. 
Within the same framework of the GPM approach, we thus derive a new formalism for the SSAs of single inclusive particle production, with the process dependence of the Sivers function taken into account. \subsection{Generalized Parton Model} The generalized parton model was introduced by Feynman and collaborators~\cite{Field:1976ve} as a generalization of the usual collinear pQCD approach. It was adapted and used to describe the SSAs for inclusive particle production~\cite{Anselmino,Boglione:2007dm,Anselmino:2008uy}, and this approach has had considerable phenomenological success \cite{Boglione:2007dm}. According to this approach, for the inclusive production of large-$P_{hT}$ hadrons (or photons), $A^\uparrow(P_A)+B(P_B)\to h(P_h)+X$, the differential cross section can be written as \begin{eqnarray} E_h\frac{d\sigma}{d^3 P_h}=\frac{\alpha_s^2}{S}\sum_{a,b,c}\int \frac{dx_a}{x_a}d^2k_{aT} f_{a/A^\uparrow}(x_a, \vec{k}_{aT})\int \frac{dx_b}{x_b}d^2k_{bT} f_{b/B}(x_b, k_{bT}^2) \int \frac{dz_c}{z_c^2} D_{h/c}(z_c)H^U_{ab\to c}(\hat s,\hat t, \hat u)\delta(\hat s+\hat t+\hat u), \label{main} \end{eqnarray} where $S=(P_A+P_B)^2$, $f_{a/A^\uparrow}(x_a, \vec{k}_{aT})$ is the TMD parton distribution function with $k_{aT}$ the intrinsic transverse momentum of parton $a$ with respect to the light-cone direction of hadron $A$, and $D_{h/c}(z_c)$ is the fragmentation function. Since we will only consider the SSAs generated from the parton distribution functions in this paper, we have neglected the $k_T$-dependence in the fragmentation function. $H^U_{ab\to c}(\hat s,\hat t, \hat u)$ is the hard-part coefficient with $\hat s, \hat t, \hat u$ the usual partonic Mandelstam variables. Eq.~(\ref{main}) can also be used to describe direct photon production, in which case one replaces the fragmentation function $D_{h/c}(z_c)$ by $\delta(z_c-1)$, and $\alpha_s^2$ by $\alpha_{em}\alpha_s$. 
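For orientation (a standard kinematic relation which we spell out here; it is implicit in the original text), when the intrinsic transverse momenta are neglected the partonic Mandelstam variables in Eq.~(\ref{main}) reduce to their collinear expressions,

```latex
\begin{equation}
\hat{s} \simeq x_a x_b S, \qquad
\hat{t} \simeq \frac{x_a}{z_c}\, T, \qquad
\hat{u} \simeq \frac{x_b}{z_c}\, U,
\end{equation}
```

with the hadronic invariants $T=(P_A-P_h)^2$ and $U=(P_B-P_h)^2$, so that the constraint $\delta(\hat s+\hat t+\hat u)$ fixes, for example, $z_c$ in terms of $x_a$ and $x_b$. Once the intrinsic transverse momenta are retained, $\hat s$, $\hat t$, $\hat u$ acquire explicit $k_T$ dependence, which is essential for generating a nonzero asymmetry.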
To clearly specify the kinematics, we consider the center-of-mass frame of the two initial hadrons, in which one has $P_A^\mu=\sqrt{S/2}\,\bar{n}^\mu$ and $P_B^\mu=\sqrt{S/2}\, n^\mu$, with $\bar{n}^\mu=[1^+, 0^-, 0_\perp]$ and $n^\mu=[0^+, 1^-, 0_\perp]$ in light-cone components. For future convenience we also define the hadronic Mandelstam invariants, $T=(P_A-P_h)^2$ and $U=(P_B-P_h)^2$. Additionally, the momenta of the partons in the partonic process $a(p_a)+b(p_b)\to c(p_c)+d(p_d)$ can be written as \begin{eqnarray} p_a^\mu=\left[x_a\sqrt{\frac{S}{2}},\, \frac{k_{aT}^2}{x_a \sqrt{2S}},\, \vec{k}_{aT}\right], \qquad p_b^\mu=\left[\frac{k_{bT}^2}{x_b \sqrt{2S}},\, x_b\sqrt{\frac{S}{2}},\, \vec{k}_{bT}\right], \end{eqnarray} where the momentum of parton $c$ is related to that of the final hadron as $p_c=P_h/z_c$. To study the SSAs, the PDF $f_{a/A^\uparrow}(x_a, \vec{k}_{aT})$ in the transversely polarized hadron $A$ can be expanded as \cite{Anselmino,Boglione:2007dm,Anselmino:2008uy,Bacchetta:2004jz} \begin{eqnarray} f_{a/A^\uparrow}(x_a, \vec{k}_{aT})=f_{a/A}(x_a, k_{aT}^2)+f_{1T}^{\perp a}(x_a, k_{aT}^2) \frac{\epsilon^{k_{aT}S_{A} n\bar{n} }}{M}, \end{eqnarray} where $S_A$ is the transverse polarization vector, $M$ is the mass of hadron $A$, $f_{a/A}(x_a, k_{aT}^2)$ is the spin-averaged PDF, and $f_{1T}^{\perp a}(x_a, k_{aT}^2)$ is the Sivers function. 
Thus in the GPM approach, the spin-averaged differential cross section is given by Eq.~(\ref{main}) with $f_{a/A^\uparrow}(x_a, \vec{k}_{aT})$ replaced by $f_{a/A}(x_a, k_{aT}^2)$, while the spin-dependent cross section is given by \begin{eqnarray} E_h\frac{d\Delta\sigma}{d^3 P_h}&=&\frac{\alpha_s^2}{S}\sum_{a,b,c}\int \frac{dx_a}{x_a}d^2k_{aT} f_{1T}^{\perp a}(x_a, k_{aT}^2) \frac{\epsilon^{k_{aT}S_{A} n\bar{n} }}{M} \int \frac{dx_b}{x_b}d^2k_{bT} f_{b/B}(x_b, k_{bT}^2) \nonumber\\ &&\times \int \frac{dz_c}{z_c^2} D_{h/c}(z_c)H^U_{ab\to c}(\hat s,\hat t, \hat u)\delta(\hat s+\hat t+\hat u), \label{spin} \end{eqnarray} and the SSA is given by the ratio, \begin{eqnarray} A_N\equiv \left.E_h\frac{d\Delta\sigma}{d^3 P_h}\right/E_h\frac{d\sigma}{d^3 P_h}. \end{eqnarray} As stated in the introduction, there are two assumptions in the GPM approach: one is that the spin-averaged and spin-dependent differential cross sections can be factorized in terms of TMD PDFs as in Eqs.~(\ref{main}) and (\ref{spin}), and the other is that the Sivers functions are assumed to be universal and equal to those in the SIDIS process, $f_{1T}^{\perp a}(x_a, k_{aT}^2)=f_{1T}^{\perp a, \rm SIDIS}(x_a, k_{aT}^2)$. In this paper we continue to work within the framework of the GPM approach; in other words, we will assume that TMD factorization is a reasonable phenomenological starting point. However, at the same time, we will take into account the initial- and final-state interactions. Since both ISIs and FSIs contribute to single inclusive particle production, in principle the Sivers functions in inclusive particle production in hadronic collisions should be different from those probed in the SIDIS process. We thus need to carefully analyze these ISIs and FSIs for all the partonic scattering processes relevant to single inclusive particle production in order to determine the proper Sivers functions to be used in the formalism. 
In other words, this new formalism will be \begin{eqnarray} E_h\frac{d\Delta\sigma}{d^3 P_h}&=&\frac{\alpha_s^2}{S}\sum_{a,b,c}\int \frac{dx_a}{x_a}d^2k_{aT} f_{1T}^{\perp a, ab\to cd}(x_a, k_{aT}^2) \frac{\epsilon^{k_{aT}S_{A} n\bar{n} }}{M} \int \frac{dx_b}{x_b}d^2k_{bT} f_{b/B}(x_b, k_{bT}^2) \nonumber\\ &&\times \int \frac{dz_c}{z_c^2} D_{h/c}(z_c)H^U_{ab\to c}(\hat s,\hat t, \hat u)\delta(\hat s+\hat t+\hat u), \end{eqnarray} in which a {\it process-dependent Sivers function}, denoted as $f_{1T}^{\perp a, ab\to cd}(x_a, k_{aT}^2)$, is used rather than the SIDIS one $f_{1T}^{\perp a, \rm SIDIS}(x_a, k_{aT}^2)$ of the conventional GPM approach. \subsection{Initial- and final-state interactions} In this subsection, we discuss how to formulate the initial- and final-state interactions. The crucial point is that the existence of the Sivers function in the polarized nucleon relies on the initial- and final-state interactions between the struck parton and the spectators from the polarized nucleon through gluon exchange. Thus by analyzing these interactions, one can determine the process-dependent Sivers function $f_{1T}^{\perp a, ab\to cd}(x_a, k_{aT}^2)$ to be used for the corresponding partonic scattering $ab\to cd$. We start with the classic examples: the final-state interaction in SIDIS, and the initial-state interaction in the DY process. At leading order (one-gluon exchange), they are shown in Fig.~\ref{class}. 
\begin{figure}[htb]\centering \psfig{file=sidis.eps, width=1.5in} \hskip 0.3in \psfig{file=DY.eps, width=1.5in} \caption{Final-state interaction in the SIDIS (left) and initial-state interaction in the DY (right) processes.} \label{class} \end{figure} For the SIDIS process $e(\ell)+p(P_A, S_T)\to e(\ell')+h+X$ with $Q^2=-q^2=-(\ell'-\ell)^2$, under the eikonal approximation, the final-state interaction (as in Fig.~\ref{class}(left)) leads to \begin{eqnarray} \bar{u}(p_c)(-ig)\gamma^-T^a\frac{i(\sla{p}_c-\sla{k})}{(p_c-k)^2+i\epsilon}\approx \bar{u}(p_c)\left[\frac{g}{-k^++i\epsilon}T^a\right], \label{dis} \end{eqnarray} where the gamma matrix $\gamma^-$ appears because of the interaction with a longitudinally polarized gluon ($\sim A^+$), and $a$ is the color index of this gluon. The eikonal part (the term in the bracket) is the first-order contribution of the gauge link (in an expansion in the coupling $g$) in the definition of the gauge-invariant TMD PDFs in the SIDIS process, see Fig.~\ref{sidis_siv}(a). The imaginary part of the eikonal propagator $1/(-k^++i\epsilon)$ provides the necessary phase for the SSAs. \begin{figure}[htb]\centering \psfig{file=dissiv.eps, width=2.8in} \caption{Sivers function in the SIDIS process at the first non-trivial order (one-gluon exchange).} \label{sidis_siv} \end{figure} On the other hand, for the DY process, the initial-state interaction (as in Fig.~\ref{class}(right)) leads to \begin{eqnarray} \bar{v}(p_b)(-ig)\gamma^-T^a\frac{-i(\sla{p}_b+\sla{k})}{(p_b+k)^2+i\epsilon}\approx \bar{v}(p_b)\left[\frac{g}{-k^+-i\epsilon}T^a\right], \end{eqnarray} which has the same real part and opposite imaginary part compared to the SIDIS case. As a consequence, the spin-averaged TMD PDFs are the same, while the Sivers functions are opposite in sign in the SIDIS and DY processes. This conclusion can be generalized to all orders, and has been proven using parity and time-reversal invariance arguments \cite{Collins:2002kn,boermulders}. 
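The relative sign of the phases can be made explicit with the standard distributional identity (included here for completeness):

```latex
\begin{equation}
\frac{1}{-k^{+} \pm i\epsilon}
  = \mathrm{P}\,\frac{1}{-k^{+}} \mp i\pi\, \delta(k^{+}),
\end{equation}
```

so the SIDIS eikonal factor $1/(-k^{+}+i\epsilon)$ and the DY factor $1/(-k^{+}-i\epsilon)$ share the same principal-value (real) part but carry opposite imaginary parts $\mp i\pi\delta(k^{+})$, which is precisely the origin of the sign flip of the Sivers function between the two processes.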
\begin{figure}[htb]\centering \psfig{file=qqp1.eps, width=1.5in} \hskip 0.15in \psfig{file=qqp2.eps, width=1.5in} \hskip 0.15in \psfig{file=qqp3.eps, width=1.5in} \hskip 0.15in \psfig{file=qqp4.eps, width=1.5in} \caption{Initial- and final-state interactions in $qq'\to qq'$: (a) initial-state interaction, (b) final-state interaction, (c) and (d) the final-state interactions for the unobserved particle.} \label{qqp} \end{figure} Now let us turn to inclusive single particle production in hadronic collisions, for which $2\to 2$ partonic scattering is the leading-order contribution and both initial- and final-state interactions contribute. We start with a simple example, $qq'\to qq'$, where the initial quark $q$ comes from the polarized nucleon and the final quark $q$ fragments into the observed hadron. The initial- and final-state interactions in the one-gluon exchange approximation are shown in Fig.~\ref{qqp}. Under the eikonal approximation, the ISI in Fig.~\ref{qqp}(a) gives \begin{eqnarray} \frac{i(\sla{p}_b+\sla{k})}{(p_b+k)^2+i\epsilon}(-ig)\gamma^-T^a u(p_b)\approx\left[\frac{-g}{-k^+-i\epsilon}T^a\right] u(p_b). \label{qqI} \end{eqnarray} Likewise, for the FSI in Fig.~\ref{qqp}(b), we have \begin{eqnarray} \bar{u}(p_c)(-ig)\gamma^-T^a\frac{i(\sla{p}_c-\sla{k})}{(p_c-k)^2+i\epsilon}\approx \bar{u}(p_c)\left[\frac{g}{-k^++i\epsilon}T^a\right]. \label{qqF} \end{eqnarray} Thus both interactions contribute the phase $-i\pi \delta(k^+)$, the same as in the SIDIS process, Eq.~(\ref{dis}). However, they have different color flows. To extract the extra color factors of Figs.~\ref{qqp}(a) and (b) as compared to the usual $qq'\to qq'$ process without gluon attachments, we resort to the method developed in \cite{qiu-sterman,inclupi,Qiu:2007ey}. 
We obtain the color factors $C_I$ ($C_{F_c}$) for the initial (final)-state interaction, \begin{eqnarray} C_I=-\frac{1}{2N_c^2}, \qquad C_{F_c}=-\frac{1}{4N_c^2}, \end{eqnarray} while the color factor for the unpolarized cross section is given by \begin{eqnarray} C_u=\frac{N_c^2-1}{4N_c^2}. \end{eqnarray} In other words, the Sivers function in $qq'\to qq'$ should be the one shown in Fig.~\ref{qqpsiv}, which comes from the sum of the ISIs and FSIs with the corresponding color factors $C_I$ and $C_{F_c}$, respectively. \begin{figure}[htb]\centering \psfig{file=qqpsiv.eps, width=3.4in} \caption{Sivers function in $qq'\to qq'$ from ISIs and FSIs, with the corresponding color factors $C_I$ and $C_{F_c}$, respectively.} \label{qqpsiv} \end{figure} Thus by comparing the imaginary parts of the eikonal propagators in Eq.~(\ref{dis}) for SIDIS with those in Eqs.~(\ref{qqI}) and (\ref{qqF}) for the ISI and FSI in $qq'\to qq'$, we immediately find that the Sivers function probed in the $qq'\to qq'$ process is related to that in SIDIS as follows: \begin{eqnarray} f_{1T}^{\perp a, qq'\to qq'}=\frac{C_I+C_{F_c}}{C_u}f_{1T}^{\perp a, \rm SIDIS}. \end{eqnarray} Thus in the GPM formalism, to use the process-dependent Sivers function, one should replace \begin{eqnarray} f_{1T}^{\perp a, \rm SIDIS}H^U_{qq'\to qq'}\equiv f_{1T}^{\perp a, \rm SIDIS}\left[C_u h_{qq'\to qq'}\right], \end{eqnarray} by the following form \begin{eqnarray} f_{1T}^{\perp a, qq'\to qq'}H^U_{qq'\to qq'}=\frac{C_I+C_{F_c}}{C_u}f_{1T}^{\perp a, \rm SIDIS}H^U_{qq'\to qq'}=f_{1T}^{\perp a, \rm SIDIS}\left[C_I h_{qq'\to qq'} + C_{F_c}h_{qq'\to qq'}\right], \end{eqnarray} where $h_{qq'\to qq'}$ is the partonic cross section without the color factors included. For $qq'\to qq'$, one has \begin{eqnarray} h_{qq'\to qq'}=2\frac{\hat s^2+\hat u^2}{\hat t^2}. 
\end{eqnarray} Alternatively, one can use $f_{1T}^{\perp a, \rm SIDIS}$ for single inclusive particle production while accounting for the process dependence of the Sivers function by shifting the process dependence into the hard parts. In other words, instead of using $H^U_{qq'\to qq'}$ in Eq.~(\ref{spin}) for the spin-dependent cross section, one should use \begin{eqnarray} H^{\rm Inc}_{qq'\to qq'}\equiv H^{\rm Inc-I}_{qq'\to qq'}+H^{\rm Inc-F}_{qq'\to qq'}, \end{eqnarray} where \begin{eqnarray} H^{\rm Inc-I}_{qq'\to qq'}=C_I h_{qq'\to qq'}, \qquad H^{\rm Inc-F}_{qq'\to qq'}=C_{F_c} h_{qq'\to qq'}, \end{eqnarray} are the corresponding hard parts related to the initial- and final-state interactions, respectively. There are many other partonic processes contributing to single inclusive particle production. Similarly to the analysis of $qq'\to qq'$, one needs to analyze each individual Feynman diagram accordingly, carefully moving the extra factors (the process dependence) from the corresponding Sivers function to the hard parts, thus obtaining $H^{\rm Inc-I}_{ab\to cd}$ and $H^{\rm Inc-F}_{ab\to cd}$ for every channel. The modified formalism will be given in the next subsection. Some comments on the results presented up to this point are in order, in particular on those displayed in Fig.~\ref{qqpsiv}. It looks like Figs.~\ref{qqp}(a), (b) can be factorized into a convolution of a Sivers function and a hard part function, as shown in Fig.~\ref{qqpsiv}. However, this is not a TMD factorization in the strict sense. Currently, TMD factorization theorems have been established for both the SIDIS and DY processes \cite{Collins:1981uk, Ji:2004wu}. To the order we are studying, this means that the one-gluon exchange diagram for SIDIS in Fig.~\ref{class} can be factorized into a convolution of a Sivers function $f_{1T}^{\perp a, \rm SIDIS}(x_a, k_{aT}^2)$ and a hard part function $H(Q)$, as shown in Fig.~\ref{sidis_siv}. 
Here all the soft physics (everything depending on $k_{aT}$) has been absorbed into the Sivers function $f_{1T}^{\perp a, \rm SIDIS}(x_a, k_{aT}^2)$, and the hard part function $H(Q)$ only depends on the hard scale $Q$, not on $k_{aT}$. On the other hand, for $qq'\to qq'$, we write the corresponding diagram Fig.~\ref{qqp}(a) in a similar form: a product of a Sivers function $f_{1T}^{\perp a, qq'\to qq'}(x_a, k_{aT}^2)$ and a hard part function $H_{qq'\to qq'}(\hat s$, $\hat t$, $\hat u)$, as shown in Fig.~\ref{qqpsiv}. But as we will comment later, besides the $k_{aT}$ dependence from the Sivers function, one will also need to keep the $k_{aT}$ dependence in the hard part functions $H_{qq'\to qq'}$, without which the SSAs would vanish in both the conventional GPM and this modified GPM formalism. Even though this is not a TMD factorization, one hopes this formalism is a reasonable approximation. There are two reasons to suggest this might be the case. First, from a phenomenological point of view, this formalism has had some success \cite{Boglione:2007dm}. Second, as we will show in Section~\ref{prosiver}\,{\bf{D}}, this formalism has a connection with the well-established collinear twist-3 approach \cite{inclupi}. In this respect, our identification of the color factors with the hard cross sections is reminiscent of the results of the twist-3 approach (see in particular \cite{inclupi}). Indeed, we will see that upon calculating the contributions from all the relevant partonic channels, they have the same form in terms of the Mandelstam variables $\hat s$, $\hat t$, $\hat u$ as those in the twist-3 collinear factorization approach \cite{inclupi} (up to a prefactor associated with the final-state interactions). To close this subsection, we want to point out the following important fact: the interaction with the unobserved particle (the quark $q'$ for $qq'\to qq'$) vanishes after summing different cut diagrams \cite{qiu-sterman,inclupi,Yuan:2008vn}. 
To see this clearly, we have for Figs.~\ref{qqp}(c) and ~\ref{qqp}(d) \begin{eqnarray} \frac{1}{(p_d-k)^2+i\epsilon} \delta(p_d^2)\to-i\pi \delta((p_d-k)^2) \delta(p_d^2),\quad{\rm and}\quad \frac{1}{p_d^2-i\epsilon} \delta((p_d-k)^2)\to+i\pi \delta((p_d-k)^2) \delta(p_d^2), \end{eqnarray} respectively. Since the remaining parts of the scattering amplitudes for these two diagrams are exactly the same except for the above pole contributions, which are opposite to each other, the contribution from the unobserved particle vanishes. This also explains why the SSA vanishes in the inclusive DIS process. As shown in Fig.~\ref{class} (left), we do not observe the final-state quark in the inclusive DIS process, thus the contributions from the cuts to the left and to the right cancel, resulting in a vanishing asymmetry. We want to emphasize that the above analysis holds true only under the one-gluon exchange approximation. Going beyond one-gluon exchange, the Sivers functions are typically more complicated, and there seems to be no simple relation (in the form of extra color factors) to those in the SIDIS process \cite{tmdbreak}. \subsection{Single inclusive hadron production} After carefully taking into account both initial- and final-state interactions, the more appropriate GPM formalism for the spin-dependent cross section should be written as \begin{eqnarray} E_h\frac{d\Delta\sigma}{d^3 P_h}&=&\frac{\alpha_s^2}{S}\sum_{a,b,c}\int \frac{dx_a}{x_a}d^2k_{aT} f_{1T}^{\perp a, \rm SIDIS}(x_a, k_{aT}^2) \frac{\epsilon^{k_{aT}S_{A} n\bar{n} }}{M} \int \frac{dx_b}{x_b}d^2k_{bT} f_{b/B}(x_b, k_{bT}^2) \nonumber\\ &&\times \int \frac{dz_c}{z_c^2} D_{h/c}(z_c)H^{\rm Inc}_{ab\to c}(\hat s,\hat t, \hat u)\delta(\hat s+\hat t+\hat u), \label{modified} \end{eqnarray} where we have a new hard part function $H^{\rm Inc}_{ab\to c}$ instead of $H^U_{ab\to c}$ used in the conventional GPM approach. 
Here the process dependence in the Sivers function has been absorbed into $H^{\rm Inc}_{ab\to c}$, which can be written as \begin{eqnarray} H^{\rm Inc}_{ab\to c}(\hat s,\hat t,\hat u)=H^{\rm Inc-I}_{ab\to c}(\hat s,\hat t,\hat u)+ H^{\rm Inc-F}_{ab\to c}(\hat s,\hat t,\hat u), \label{mgpm} \end{eqnarray} where $H^{\rm Inc-I}_{ab\to c}$ and $H^{\rm Inc-F}_{ab\to c}$ are associated with initial- and final-state interactions, respectively. The contributions for the various contributing partonic subprocesses are given by \begin{eqnarray} H^{\rm Inc-I}_{qq'\to qq'}&=&-H^{\rm Inc-I}_{\bar q\bar q'\to \bar q\bar q'}=-\frac{1}{N_c^2}\left[\frac{\hat s^2+\hat u^2}{\hat t^2}\right]\, , \quad H^{\rm Inc-F}_{qq'\to qq'}=-H^{\rm Inc-F}_{\bar q\bar q'\to \bar q\bar q'}=-\frac{1}{2N_c^2}\left[\frac{\hat s^2+\hat u^2}{\hat t^2}\right] \\ H^{\rm Inc-I}_{q\bar q'\to q\bar q'}&=&-H^{\rm Inc-I}_{\bar qq'\to \bar q q'}=-\frac{N_c^2-2}{2N_c^2}\left[\frac{\hat s^2+\hat u^2}{\hat t^2}\right]\, , \quad H^{\rm Inc-F}_{q\bar q'\to q\bar q'}=-H^{\rm Inc-F}_{\bar qq'\to \bar q q'}=-\frac{1}{2N_c^2}\left[\frac{\hat s^2+\hat u^2}{\hat t^2}\right] \\ H^{\rm Inc-I}_{qq'\to q'q}&=&-H^{\rm Inc-I}_{\bar q\bar q'\to \bar q'\bar q}=-\frac{1}{N_c^2}\left[\frac{\hat s^2+\hat t^2}{\hat u^2}\right]\, , \quad H^{\rm Inc-F}_{qq'\to q'q}=-H^{\rm Inc-F}_{\bar q\bar q'\to \bar q'\bar q}=\frac{N_c^2-2}{2N_c^2}\left[\frac{\hat s^2+\hat t^2}{\hat u^2}\right] \\ H^{\rm Inc-I}_{q\bar q'\to \bar q' q}&=&-H^{\rm Inc-I}_{\bar qq'\to q'\bar q}=-\frac{N_c^2-2}{2N_c^2}\left[\frac{\hat s^2+\hat t^2}{\hat u^2}\right]\, , \quad H^{\rm Inc-F}_{q\bar q'\to \bar q' q}=-H^{\rm Inc-F}_{\bar qq'\to q'\bar q}=\frac{1}{N_c^2}\left[\frac{\hat s^2+\hat t^2}{\hat u^2}\right] \\ \nonumber H^{\rm Inc-I}_{qq\to qq}&=&-H^{\rm Inc-I}_{\bar q\bar q\to \bar q\bar q}=-\frac{1}{N_c^2}\left[\frac{\hat s^2+\hat u^2}{\hat t^2} +\frac{\hat s^2+\hat t^2}{\hat u^2}\right]+\frac{N_c^2+1}{N_c^3}\frac{\hat s^2}{\hat t\hat u}\, , \\ H^{\rm Inc-F}_{qq\to 
qq}&=&-H^{\rm Inc-F}_{\bar q\bar q\to \bar q\bar q}=-\frac{1}{2N_c^2}\left[\frac{\hat s^2+\hat u^2}{\hat t^2}\right] +\frac{N_c^2-2}{2N_c^2}\left[\frac{\hat s^2+\hat t^2}{\hat u^2}\right]+\frac{1}{N_c^3}\frac{\hat s^2}{\hat t\hat u} \\ H^{\rm Inc-I}_{q\bar q\to q'\bar q'}&=&-H^{\rm Inc-I}_{\bar q q\to \bar q' q'}=\frac{1}{2N_c^2}\left[\frac{\hat t^2+\hat u^2}{\hat s^2}\right]\, , \quad H^{\rm Inc-F}_{q\bar q\to q'\bar q'}=-H^{\rm Inc-F}_{\bar q q\to \bar q' q'}=\frac{N_c^2-2}{2N_c^2}\left[\frac{\hat t^2+\hat u^2}{\hat s^2}\right] \\ H^{\rm Inc-I}_{q\bar q\to \bar q' q'}&=&-H^{\rm Inc-I}_{\bar q q\to q'\bar q'}=\frac{1}{2N_c^2}\left[\frac{\hat t^2+\hat u^2}{\hat s^2}\right]\, , \quad H^{\rm Inc-F}_{q\bar q\to \bar q' q'}=-H^{\rm Inc-F}_{\bar q q\to q'\bar q'}=\frac{1}{N_c^2}\left[\frac{\hat t^2+\hat u^2}{\hat s^2}\right] \\ H^{\rm Inc-I}_{q\bar q\to q\bar q}&=&-H^{\rm Inc-I}_{\bar q q\to \bar q q}=-\frac{N_c^2-2}{2N_c^2}\left[\frac{\hat s^2+\hat u^2}{\hat t^2}\right] +\frac{1}{2N_c^2}\left[\frac{\hat t^2+\hat u^2}{\hat s^2}\right]-\frac{1}{N_c^3}\frac{\hat u^2}{\hat s\hat t}\, , \nonumber \\ H^{\rm Inc-F}_{q\bar q\to q\bar q}&=&-H^{\rm Inc-F}_{\bar q q\to \bar q q}=-\frac{1}{2N_c^2}\left[\frac{\hat s^2+\hat u^2}{\hat t^2}\right] +\frac{N_c^2-2}{2N_c^2}\left[\frac{\hat t^2+\hat u^2}{\hat s^2}\right]+\frac{1}{N_c^3}\frac{\hat u^2}{\hat s\hat t} \\ H^{\rm Inc-I}_{q\bar q\to \bar q q}&=&-H^{\rm Inc-I}_{\bar q q\to q\bar q}=-\frac{N_c^2-2}{2N_c^2}\left[\frac{\hat s^2+\hat t^2}{\hat u^2}\right] +\frac{1}{2N_c^2}\left[\frac{\hat t^2+\hat u^2}{\hat s^2}\right]-\frac{1}{N_c^3}\frac{\hat t^2}{\hat s\hat u}\, , \nonumber \\ H^{\rm Inc-F}_{q\bar q\to \bar q q}&=&-H^{\rm Inc-F}_{\bar q q\to q\bar q}=\frac{1}{N_c^2}\left[\frac{\hat s^2+\hat t^2}{\hat u^2}+\frac{\hat t^2+\hat u^2}{\hat s^2}\right]-\frac{N_c^2+1}{N_c^3}\frac{\hat t^2}{\hat s\hat u} \\ H^{\rm Inc-I}_{qg\to qg}&=&-H^{\rm Inc-I}_{\bar qg\to \bar qg}=\frac{1}{2(N_c^2-1)} \left[-\frac{\hat s}{\hat u}-\frac{\hat u}{\hat 
s}\right]+\frac{N_c^2}{2(N_c^2-1)} \left[\frac{\hat s^2+\hat u^2}{\hat t^2}\frac{\hat u}{\hat s}\right]\, , \nonumber \\ H^{\rm Inc-F}_{qg\to qg}&=&-H^{\rm Inc-F}_{\bar qg\to \bar qg}=\frac{1}{2N_c^2(N_c^2-1)} \left[-\frac{\hat s}{\hat u}-\frac{\hat u}{\hat s}\right]-\frac{1}{N_c^2-1} \left[\frac{\hat s^2+\hat u^2}{\hat t^2}\right]\, , \\ H^{\rm Inc-I}_{qg\to gq}&=&-H^{\rm Inc-I}_{\bar qg\to g\bar q}=\frac{1}{2(N_c^2-1)} \left[-\frac{\hat s}{\hat t}-\frac{\hat t}{\hat s}\right]+\frac{N_c^2}{2(N_c^2-1)} \left[\frac{\hat s^2+\hat t^2}{\hat u^2}\frac{\hat t}{\hat s}\right]\, ,\nonumber \\ H^{\rm Inc-F}_{qg\to gq}&=&-H^{\rm Inc-F}_{\bar q g\to g\bar q}=-\frac{1}{2(N_c^2-1)} \left[-\frac{\hat s}{\hat t}-\frac{\hat t}{\hat s}\right]-\frac{N_c^2}{2(N_c^2-1)} \left[\frac{\hat s^2+\hat t^2}{\hat u^2}\frac{\hat s}{\hat t}\right] \\ H^{\rm Inc-I}_{q\bar q\to gg}&=&-H^{\rm Inc-I}_{\bar q q\to gg}=-\frac{1}{2N_c^3}\left[\frac{\hat u}{\hat t}+\frac{\hat t}{\hat u}\right]-\frac{1}{N_c}\left[\frac{\hat t^2+\hat u^2}{\hat s^2}\right]\, ,\nonumber \\ H^{\rm Inc-F}_{q\bar q\to gg}&=&-H^{\rm Inc-F}_{\bar q q\to gg}=-\frac{1}{2N_c}\left[\frac{\hat u}{\hat t}+\frac{\hat t}{\hat u}\right]+\frac{N_c}{2}\left[\frac{\hat t^2+\hat u^2}{\hat s^2}\frac{\hat u}{\hat t}\right] \end{eqnarray} We also calculate the corresponding hard part functions for direct photon production, and they are given by \begin{eqnarray} H^{\rm Inc}_{qg\to \gamma q}&=&-H^{\rm Inc}_{\bar qg\to \gamma \bar q}=-\frac{N_c}{N_c^2-1} e_q^2\left[-\frac{\hat t}{\hat s}-\frac{\hat s}{\hat t}\right]\, ,\quad H^{\rm Inc}_{q\bar q\to \gamma g}=-H^{\rm Inc}_{\bar q q\to \gamma g}=\frac{1}{N_c^2}e_q^2 \left[\frac{\hat t}{\hat u}+\frac{\hat u}{\hat t}\right] \, . 
\end{eqnarray} Here again we note that all these hard part functions have the same form in terms of the Mandelstam variables $\hat s$, $\hat t$, $\hat u$ as those in the twist-3 collinear factorization approach \cite{inclupi}: $H^{\rm Inc-I}_{ab\to c}$ and $H^{\rm Inc-F}_{ab\to c}$ have the same functional form as the corresponding ones $H^{\rm twist\mbox{-}3-I}_{ab\to c}$ and $H^{\rm twist\mbox{-}3-F}_{ab\to c}$ (defined below) in the twist-3 collinear factorization formalism, respectively. However, there are two differences between the formalisms. First, in the twist-3 collinear approach, the hard part functions are given by \begin{eqnarray} H^{\rm twist\mbox{-}3}_{ab\to c}(\hat s,\hat t,\hat u)=H^{\rm twist\mbox{-}3-I}_{ab\to c}(\hat s,\hat t,\hat u) +H^{\rm twist\mbox{-}3-F}_{ab\to c}(\hat s,\hat t,\hat u)\left(1+\frac{\hat u}{\hat t}\right), \label{twist-3} \end{eqnarray} i.e., there is an extra factor $(1+\hat u/\hat t)$ accompanying the hard part functions $H^{\rm twist\mbox{-}3-F}_{ab\to c}$ associated with final-state interactions. In our modified GPM formalism as in Eq.~(\ref{mgpm}), however, there is no such factor. This difference can be traced back to the eikonal approximation we are using: see, e.g., Eq.~(\ref{qqF}), where under this approximation we keep only the pole contribution $-k^+ + i\epsilon$ in the denominator. In the twist-3 collinear factorization formalism, by contrast, there is an extra term linear in $k_\perp$ ($\propto p_c\cdot k_\perp$), which leads to the extra factor $(1+\hat u/\hat t)$ for the final-state interaction contribution (for details, see Ref.~\cite{inclupi}). Second, in the twist-3 collinear factorization approach, all the parton momenta are collinear to the corresponding hadrons, thus $\hat s$, $\hat t$, $\hat u$ do not depend on the parton intrinsic transverse momentum.
On the other hand, in the GPM approach the parton momenta involve intrinsic transverse momentum, thus $\hat s$, $\hat t$, $\hat u$ all depend on the parton transverse momenta $k_{aT}$ and $k_{bT}$. In fact, because of the existence of the linear $k_{aT}$-dependence in $\epsilon^{k_{aT}S_{A} n\bar{n} }$, one has to keep another linear $k_{aT}$-dependence from the rest of the integrand in Eq.~(\ref{modified}), otherwise the integral over $d^2k_{aT}$ vanishes. In other words, it is the terms linear in $k_{aT}$ in the hard part functions $H^{\rm Inc}_{ab\to c}(\hat s, \hat t, \hat u)$ and in $\delta(\hat{s}+\hat{t}+\hat{u})$ that contribute to the asymmetry. Even with these two differences, the similarities in terms of $\hat s$, $\hat t$, $\hat u$ suggest that there are close connections between our modified GPM formalism and the twist-3 collinear factorization approach. We explore this potential connection in the next subsection. \subsection{Connection to the twist-3 collinear factorization formalism} As pointed out in the last subsection, it is the dependence linear in $k_{aT}$ from the rest of the integrand in Eq.~(\ref{modified}) that contributes to the asymmetry. We thus make an expansion and keep only the terms linear in $k_{aT}$. We will show that the leading term in this expansion has a close connection to the twist-3 collinear factorization formalism. We start by specifying the partonic kinematics. Keeping the terms linear in $k_{aT}$ and dropping all the $k_{bT}$-dependence, we have $p_a^\mu\approx x_a P_A^\mu+k_{aT}^\mu$ and $p_b^\mu\approx x_b P_B^\mu$, thus \begin{eqnarray} \hat{s}\approx x_a x_b S, \qquad \hat{t}\approx \frac{x_a}{z_c} T -\frac{2P_{hT}\cdot k_{aT}}{z_c}, \qquad \hat{u}=\frac{x_b}{z_c}U.
\end{eqnarray} Thus we can write the $\delta$-function as \begin{eqnarray} \delta(\hat s+\hat t+\hat u)=\frac{1}{x_bS+T/z_c}\delta\left(x_a-x-\frac{2P_{hT}\cdot k_{aT}}{z_c x_b S+T}\right) \quad {\rm with}\quad x_a=x+\frac{2P_{hT}\cdot k_{aT}}{z_c x_b S+T}, \label{xaform} \end{eqnarray} where $x=-x_bU/(z_c x_b S+T)$ is independent of $k_{aT}$. Now performing the integration over $x_a$ in Eq.~(\ref{modified}) and using the $\delta$-function, we get \begin{eqnarray} E_h\frac{d\Delta\sigma}{d^3 P_h}&=&\frac{\alpha_s^2}{S}\sum_{a,b,c}\int d^2k_{aT} \frac{\epsilon^{k_{aT}S_{A} n\bar{n}}}{M} \frac{1}{x_a} f_{1T}^{\perp a, \rm SIDIS}(x_a, k_{aT}^2) \int \frac{dx_b}{x_b} f_{b/B}(x_b) \nonumber\\ &&\times \int \frac{dz_c}{z_c^2} D_{h/c}(z_c)H^{\rm Inc}_{ab\to c}(\hat s,\hat t, \hat u)\left.\frac{1}{x_bS+T/z_c}\right|_{x_a=x+\frac{2P_{hT}\cdot k_{aT}}{z_c x_b S+T}}. \label{before} \end{eqnarray} After replacing $x_a$ as above, one has \begin{eqnarray} \hat{s}=\tilde{s}-\frac{\tilde{s}}{\tilde{u}}2P_{hT}\cdot k_{aT}/z_c, \qquad \hat{t}=\tilde{t}+\frac{\tilde{s}}{\tilde{u}}2P_{hT}\cdot k_{aT}/z_c, \qquad \hat{u}=\tilde{u}, \label{stu} \end{eqnarray} where $\tilde{s}=x x_b S$, $\tilde{t}=xT/z_c$, $\tilde{u}=x_b U/z_c$ and they are all independent of $k_{aT}$. Note that $\hat{s}+\hat{t}+\hat{u}=0$ implies $\tilde{s}+\tilde{t}+\tilde{u}=0$. Now, besides the $\epsilon^{k_{aT}S_{A} n\bar{n}}$, the contributions linear in $k_{aT}$ in Eq.~(\ref{before}) can come either from (a) the $x_a$-dependence in $f_{1T}^{\perp a, \rm SIDIS}(x_a, k_{aT}^2)$, or from (b) the $\hat{s}$- and $\hat{t}$-dependence in $H^{\rm Inc}_{ab\to c}(\hat s,\hat t, \hat u)$. This is because $x_a$, $\hat s$, and $\hat t$ are the only quantities in Eq.~(\ref{before}) which depend linearly on $k_{aT}$. We now carry out the $k_{aT}$ expansion for each contribution in turn.
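Before doing so, the kinematic bookkeeping can be verified numerically: solving the $\delta$-function constraint $\hat s+\hat t+\hat u=0$ for $x_a$ reproduces Eq.~(\ref{xaform}), and substituting this $x_a$ back reproduces the shifted Mandelstam variables of Eq.~(\ref{stu}). A minimal Python sketch, with all numerical values chosen arbitrarily for illustration:

```python
# Numerical check of Eqs. (xaform) and (stu); all values below are
# arbitrary placeholders (S, T, U are the hadronic invariants, Pk
# stands for the scalar product P_hT . k_aT).
S, T, U = 4.0e4, -900.0, -1100.0
xb, zc, Pk = 0.3, 0.6, 5.0

x = -xb * U / (zc * xb * S + T)        # k_aT-independent momentum fraction
xa = x + 2 * Pk / (zc * xb * S + T)    # Eq. (xaform)

# Partonic Mandelstam variables at this x_a:
shat = xa * xb * S
that = xa * T / zc - 2 * Pk / zc
uhat = xb * U / zc
assert abs(shat + that + uhat) < 1e-9  # delta-function constraint satisfied

# Tilde variables and the shifted relations of Eq. (stu):
st, tt, ut = x * xb * S, x * T / zc, xb * U / zc
assert abs(st + tt + ut) < 1e-9
assert abs(shat - (st - (st / ut) * 2 * Pk / zc)) < 1e-9
assert abs(that - (tt + (st / ut) * 2 * Pk / zc)) < 1e-9
assert abs(uhat - ut) < 1e-12
```

Note that once $x_a$ is fixed by the $\delta$-function, the relations of Eq.~(\ref{stu}) are satisfied identically, as the check confirms.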
First, for contribution (a), since \begin{eqnarray} \frac{\partial x_a}{\partial k_{aT}^\alpha}=\frac{2P_{hT\alpha}}{z_c x_b S+T}, \end{eqnarray} keeping the term linear in $k_{aT}$ we have \begin{eqnarray} E_h\frac{d\Delta\sigma^{(a)}}{d^3 P_h}&=&\frac{\alpha_s^2}{S}\sum_{a,b,c}\int d^2k_{aT} \frac{\epsilon^{k_{aT}S_{A} n\bar{n}}}{M} k_{aT}^\alpha \frac{2P_{hT\alpha}}{z_c x_b S+T} \frac{d}{dx_a}\left[\frac{f_{1T}^{\perp a, \rm SIDIS}(x_a, k_{aT}^2)}{x_a}\right]_{x_a\to x} \int \frac{dx_b}{x_b} f_{b/B}(x_b) \nonumber\\ &&\times \int \frac{dz_c}{z_c^2} D_{h/c}(z_c)H^{\rm Inc}_{ab\to c}(\tilde s,\tilde t, \tilde u)\frac{1}{x_bS+T/z_c}, \label{after} \end{eqnarray} where we have dropped all the $k_{aT}$ dependence in $H^{\rm Inc}_{ab\to c}$, thus replacing the $k_{aT}$-dependent $\hat s$, $\hat t$, $\hat u$ by the $k_{aT}$-independent $\tilde s$, $\tilde t$, $\tilde u$ in $H^{\rm Inc}_{ab\to c}$. Then using \begin{eqnarray} \int d^2 k_{aT} k_{aT}^\beta k_{aT}^\alpha f_{1T}^{\perp a, \rm SIDIS}(x_a, k_{aT}^2)=-\frac{1}{2} \int d^2 k_{aT} \,g^{\beta\alpha}\,|\vec{k}_{aT}|^2 f_{1T}^{\perp a, \rm SIDIS}(x_a, k_{aT}^2), \end{eqnarray} and the relation between the Sivers function and the Efremov-Teryaev-Qiu-Sterman function $T_{a,F}(x, x)$ \cite{boermulders}, \begin{eqnarray} T_{a,F}(x,x)=-\frac{1}{M}\int d^2k_{aT} |\vec{k}_{aT}|^2f_{1T}^{\perp a, \rm SIDIS}(x, k_{aT}^2), \end{eqnarray} one can rewrite Eq.~(\ref{after}) as \begin{equation} E_h\frac{d\Delta\sigma^{(a)}}{d^3 P_h}\hspace{-0.1cm}=\hspace{-0.1cm}\frac{\alpha_s^2}{S}\sum_{a,b,c}\hspace{-0.05cm} \int \frac{dz_c}{z_c^2} D_{h/c}(z_c) \frac{\epsilon^{P_{hT}S_A n \bar{n}}}{z_c \tilde{u}} \frac{1}{x}\left[T_{a,F}(x, x)\hspace{-0.05cm}-\hspace{-0.05cm}x\frac{d}{dx}T_{a,F}(x, x)\right] \int \frac{dx_b}{x_b} f_{b/B}(x_b) H^{\rm Inc}_{ab\to c}(\tilde s,\tilde t, \tilde u)\frac{1}{x_bS+T/z_c}.
\nonumber\\ \label{aftera} \end{equation} We observe that this {\em form} is the same as that in the twist-3 collinear factorization approach. In particular, note that there is no $k_{aT}$-dependence in the hard part functions $H^{\rm Inc}_{ab\to c}$. The difference from the twist-3 collinear factorization formalism \cite{inclupi} (as mentioned above) is the extra factor $(1+\hat u/\hat t)$ accompanying the hard part functions associated with final-state interactions, see Eqs.~(\ref{mgpm}) and (\ref{twist-3}). However, in our modified GPM formalism, we have another contribution from (b), due to the $k_{aT}$-dependence of $H^{\rm Inc}_{ab\to c}(\hat s, \hat t, \hat u)$ in Eq.~(\ref{before}). Let us now study contribution (b). As is explicit in Eq.~(\ref{stu}), $\hat{u}$ is independent of $k_{aT}$, while both $\hat{s}$ and $\hat{t}$ depend on $k_{aT}$. Since $\hat s+\hat t+\hat u=0$, one can then set $\hat t=-\hat s-\hat u$ in $H^{\rm Inc}_{ab\to c}$ and expand only $\hat s$ in $k_{aT}$. That is, \begin{eqnarray} \left.\frac{\partial}{\partial k_{aT}^\alpha}H^{\rm Inc}_{ab\to c}(\hat s, \hat t, \hat u)\right|_{k_{aT}\to 0}= \left.\frac{\partial \hat s}{\partial k_{aT}^\alpha} \frac{\partial}{\partial \hat s}H^{\rm Inc}_{ab\to c}(\hat s, -\hat s-\hat u, \hat u)\right|_{k_{aT}\to 0} =-\frac{2\tilde s}{\tilde u}\frac{P_{hT\alpha}}{z_c}\frac{\partial}{\partial \tilde s} H^{\rm Inc}_{ab\to c}(\tilde s, -\tilde s-\tilde u, \tilde u). \end{eqnarray} Then contribution (b) reads \begin{equation} E_h\frac{d\Delta\sigma^{(b)}}{d^3 P_h}=\frac{\alpha_s^2}{S}\sum_{a,b,c} \int \frac{dz_c}{z_c^2} D_{h/c}(z_c) \frac{\epsilon^{P_{hT}S_A n \bar{n}}}{z_c \tilde{u}} \frac{1}{x}T_{a,F}(x, x) \int \frac{dx_b}{x_b} f_{b/B}(x_b) \left[-\tilde{s}\frac{\partial}{\partial \tilde{s}}H^{\rm Inc}_{ab\to c}(\tilde s,-\tilde s-\tilde u, \tilde u)\right]\frac{1}{x_bS+T/z_c}.
\label{afterb} \end{equation} Thus, to leading order (terms linear in $k_{aT}$), the spin-dependent cross section in our modified GPM formalism can be written as \begin{eqnarray} E_h\frac{d\Delta\sigma}{d^3 P_h}=E_h\frac{d\Delta\sigma^{(a)}}{d^3 P_h}+E_h\frac{d\Delta\sigma^{(b)}}{d^3 P_h}, \end{eqnarray} with the contributions (a) and (b) given by Eqs.~(\ref{aftera}) and (\ref{afterb}), respectively. Term (a) {\it almost} reproduces the twist-3 collinear factorization formalism of Ref.~\cite{inclupi}, modulo the extra factor $(1+\hat u/\hat t)$ associated with final-state interactions, whose origin was explained in the last subsection. On the other hand, how to interpret the extra term (b) theoretically, and why it does not appear in the usual twist-3 collinear factorization formalism, deserves further investigation \cite{GK}. Here it is important to note that, from the phenomenological perspective, as already shown in \cite{inclupi}, the derivative of the correlation function $T_{a,F}(x, x)$ gives the dominant contribution to the SSAs; we thus expect term (b), which contains no derivative, to play a less important role in generating the SSAs than term (a). In other words, even though the modified GPM formalism has an extra piece compared with the well-known twist-3 collinear factorization formalism, phenomenologically (numerically) it could give a good approximation to the SSAs. This remains to be confirmed \cite{GK}, also because term (a) still differs from the twist-3 collinear factorization result by the extra factor $(1+\hat u/\hat t)$ associated with the final-state interactions. If confirmed, it will provide further support to the modified GPM approach to the SSAs.
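The chain-rule step behind contribution (b), in which $\hat t$ is eliminated via $\hat t=-\hat s-\hat u$ so that only the $\hat s$ slot carries the $k_{aT}$ dependence, can be checked by finite differences. The hard function below is just a sample form ($\propto(\hat s^2+\hat u^2)/\hat t^2$), not a specific channel, and the kinematic values are arbitrary:

```python
# Finite-difference check of the chain rule used for contribution (b).
# H is a sample hard function (illustrative, not a specific channel),
# and the tilde-variable values are arbitrary placeholders.
def H(s, t, u):
    return (s**2 + u**2) / t**2

st, ut, zc = 650.0, -550.0, 0.6        # tilde s, tilde u, z_c (placeholders)

def H_of_kappa(kappa):
    # kappa stands for P_hT . k_aT; only s-hat depends on it once
    # t-hat is eliminated via t-hat = -s-hat - u-hat.
    s = st - (st / ut) * 2 * kappa / zc
    return H(s, -s - ut, ut)

eps = 1e-4
lhs = (H_of_kappa(eps) - H_of_kappa(-eps)) / (2 * eps)

def dH_ds(s, u, h=1e-4):
    # numerical d/ds of H(s, -s-u, u)
    return (H(s + h, -(s + h) - u, u) - H(s - h, -(s - h) - u, u)) / (2 * h)

rhs = -(2 * st / (ut * zc)) * dH_ds(st, ut)
assert abs(lhs - rhs) < 1e-6 * abs(rhs)
```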
To close this section, we want to emphasize that the contribution calculated in Ref.~\cite{inclupi} comes only from the so-called soft-gluon-pole (SGP) terms in the twist-3 collinear factorization approach. However, there are also contributions from so-called soft-fermion-poles (SFP) \cite{SFP}. Even though our modified GPM formalism might capture the main feature of the SGP contributions, it seems unlikely to reproduce the SFP contributions. In this respect the twist-3 formalism is ``internally complete'' in the sense that collinear factorization is expected to hold for this formalism \cite{Qiu:1990cu}. Finally, while TMD factorization is assumed in both the GPM and our modified GPM formalisms, it likely does not hold in these processes~\cite{tmdbreak}. However, the extent to which it is broken is not known numerically. Calculations within (modified) GPM formalisms should bear this in mind and be used with extra care. \section{Numerical estimate of the SSAs} \label{numerics} In this section, we estimate the SSAs for single inclusive hadron and direct photon production in $pp$ collisions at RHIC energy by using our modified GPM formalism in Eq.~(\ref{modified}). We compare our results with those calculated from the conventional GPM formalism as in Eq.~(\ref{spin}). To calculate the spin-averaged cross section, we use GRV98 LO parton distribution functions \cite{Gluck:1998xa} along with a Gaussian-type $k_T$-dependence \cite{newsiv, oldsiv}. The hard part functions for the different partonic scattering channels are available in the literature \cite{inclupi,Owens:1986mp,Kang:2010vd}. For the spin-dependent cross section, we use the latest Sivers functions from \cite{newsiv}, which were extracted from recent SIDIS experiments. To be consistent with this set of Sivers functions, we use the DSS fragmentation functions \cite{deFlorian:2007aj}.
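As a side remark on the numerical setup, the Gaussian-type $k_T$-dependence referred to above is typically taken as a factorized form such as $f(x,k_T^2)=f(x)\,e^{-k_T^2/\langle k_T^2\rangle}/(\pi\langle k_T^2\rangle)$; the short check below verifies that such an ansatz is unit-normalised in $k_T$, so the $k_T$-integrated distribution reduces to the collinear one (the width value is a placeholder, not the fitted one):

```python
import math

# Check that a factorized Gaussian k_T ansatz is unit-normalised:
# integrating exp(-kT^2/a)/(pi a) over d^2 kT gives 1.
# a = <kT^2> = 0.25 GeV^2 is a hypothetical value, not from the fits.
a = 0.25
dk = 0.005
total = 0.0
for i in range(4000):                  # midpoint rule in |kT| up to 20 GeV
    kT = (i + 0.5) * dk
    total += 2 * math.pi * kT * math.exp(-kT**2 / a) / (math.pi * a) * dk
assert abs(total - 1.0) < 1e-4
```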
For the numerical predictions below, we work in a frame in which the polarized hadron moves in the $+z$-direction, choosing $S_\perp$ and $P_{h\perp}$ along the $y$- and $x$-directions, respectively, with all the relevant distribution functions and fragmentation functions evaluated at the scale $P_{h\perp}$~\cite{Anselmino}. \begin{figure}[htb]\centering \psfig{file=pi0.eps, width=2.5in} \hskip 0.2in \psfig{file=photon.eps, width=2.5in} \caption{$A_N$ for inclusive particle production as a function of $x_F$ at RHIC energy $\sqrt{s}=200$ GeV: $p^\uparrow p\to\pi^0+X$ (left) and $p^\uparrow p\to\gamma+X$ (right). The dashed curves are for the conventional GPM calculation, and the solid curves are for our modified GPM calculation. We have used the latest Sivers functions from \cite{newsiv} and the DSS fragmentation functions \cite{deFlorian:2007aj}.} \label{new} \end{figure} In Fig.~\ref{new}, we plot $A_N$ as a function of $x_F$ for inclusive $\pi^0$ (left) and direct photon (right) production at rapidity $y=3.3$ for RHIC energy $\sqrt{s}=200$ GeV. The estimates using the conventional GPM formalism in Eq.~(\ref{spin}) are shown as dashed lines, while those using our modified GPM formalism in Eq.~(\ref{modified}) are shown as solid lines. One immediately sees that for both inclusive $\pi^0$ and direct photon production, $A_N$ changes sign compared with the conventional GPM formalism. For $\pi^0$, the conventional GPM predicts a negative asymmetry (though very small for this set of Sivers functions), while the modified GPM formalism predicts a positive asymmetry. On the other hand, for direct photon production, the conventional GPM formalism predicts a positive asymmetry, while the modified GPM formalism predicts a negative asymmetry, which is consistent with the predictions from the twist-3 collinear factorization approach \cite{inclupi}. This can be easily understood as follows. In the conventional GPM approach, one uses $H^U$ in the calculation of the spin-dependent cross section.
For direct photon production, the dominant channel is $qg\to \gamma q$, with \cite{inclupi,Owens:1986mp} \begin{equation} H^U_{qg\to \gamma q}=\frac{1}{N_c}e_q^2 \left[-\frac{\hat t}{\hat s}-\frac{\hat s}{\hat t}\right]\, , \end{equation} while the hard part in the modified GPM formalism is given by \begin{equation} H^{\rm Inc}_{qg\to \gamma q}=-\frac{N_c}{N_c^2-1} e_q^2\left[-\frac{\hat t}{\hat s}-\frac{\hat s}{\hat t}\right]. \end{equation} This introduces an extra color factor $-N_c^2/(N_c^2-1)$, and thus an asymmetry opposite in sign to that of the conventional GPM formalism. This prediction comes from the process-dependence of the Sivers functions, and has the same origin as in the photon+jet calculation \cite{Bacchetta:2007sz}. On the other hand, for inclusive $\pi^0$ production the dominant channel is $qg\to qg$; in the forward direction, one has \begin{eqnarray} H^{\rm Inc}_{qg\to qg}=H^{\rm Inc-I}_{qg\to qg}+H^{\rm Inc-F}_{qg\to qg} \to -\frac{N_c^2}{2(N_c^2-1)}\frac{2\hat s^2}{\hat t^2}-\frac{1}{N_c^2-1}\frac{2\hat s^2}{\hat t^2}=-\frac{N_c^2+2}{N_c^2-1}\frac{\hat s^2}{\hat t^2}, \end{eqnarray} where we have used that in the forward direction $\hat t$ is small, while $\hat u\sim -\hat s$, whereas \cite{inclupi,Owens:1986mp} \begin{eqnarray} H^U_{qg\to qg}=\frac{N_c^2-1}{2N_c^2}\left[-\frac{\hat s}{\hat u}-\frac{\hat u}{\hat s}\right]+\frac{\hat s^2+\hat u^2}{\hat t^2}\to \frac{2\hat s^2}{\hat t^2}. \end{eqnarray} We thus again see that the sign is reversed in our modified GPM formalism compared with the conventional GPM approach. We observe that the $x_F$-dependence in both the modified and conventional GPM formalisms is different from that observed in the RHIC experiments, where larger asymmetries have been observed in the forward direction (large $x_F$)~\cite{SSA-rhic}.
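The color-factor pattern behind these sign reversals can be verified numerically from the expressions above. The short Python check below (with $N_c=3$, an arbitrary quark charge, and arbitrary kinematics) confirms that $H^{\rm Inc}_{qg\to\gamma q}/H^U_{qg\to\gamma q}=-N_c^2/(N_c^2-1)$, and that $H^{\rm Inc}_{qg\to qg}$ indeed approaches $-(N_c^2+2)/(N_c^2-1)\,\hat s^2/\hat t^2$ in the forward limit:

```python
# Check of the color-factor pattern with N_c = 3; the quark charge and
# kinematic values are arbitrary (they cancel or are generic).
Nc, eq2 = 3.0, (2.0 / 3.0) ** 2

def HU_qg_gamma(s, t, u):
    return (1 / Nc) * eq2 * (-t / s - s / t)

def HInc_qg_gamma(s, t, u):
    return -(Nc / (Nc**2 - 1)) * eq2 * (-t / s - s / t)

s, t = 100.0, -40.0
u = -s - t
ratio = HInc_qg_gamma(s, t, u) / HU_qg_gamma(s, t, u)
assert abs(ratio - (-Nc**2 / (Nc**2 - 1))) < 1e-12   # = -9/8 for N_c = 3

def HInc_qg_qg(s, t, u):
    # H^Inc-I + H^Inc-F for qg -> qg from the tables above
    I = (-s / u - u / s) / (2 * (Nc**2 - 1)) \
        + Nc**2 / (2 * (Nc**2 - 1)) * (s**2 + u**2) / t**2 * (u / s)
    F = (-s / u - u / s) / (2 * Nc**2 * (Nc**2 - 1)) \
        - (s**2 + u**2) / t**2 / (Nc**2 - 1)
    return I + F

s, t = 100.0, -1e-3          # forward region: |t-hat| small, u-hat ~ -s-hat
u = -s - t
forward = -(Nc**2 + 2) / (Nc**2 - 1) * s**2 / t**2
assert abs(HInc_qg_qg(s, t, u) / forward - 1) < 1e-3
```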
Of course, in order to compare with the experimental data for inclusive hadron production at RHIC, one must include both the Sivers effect (as studied in this paper) and the Collins effect \cite{Collins:1992kk}. The latter describes a transversely polarized quark fragmenting into an unpolarized hadron, whose transverse momentum relative to the jet axis correlates with the transverse polarization vector of the fragmenting quark. This correlation can also generate a transverse spin asymmetry (which is not studied here). Attempts at global fits to both SIDIS and $pp$ experimental data are currently ongoing \cite{Anselmino:2008uy}. We encourage the use of the modified GPM formalism in such a global analysis, to study the effect of the associated ISIs and FSIs (the process-dependence of the Sivers functions). We also emphasize~\cite{Bacchetta:2007sz} that only the Sivers effect contributes to direct photon production. Since the modified and conventional GPM predict opposite asymmetries, direct photon production presents a favorable opportunity to test the process dependence of the Sivers functions, or the effect of the associated ISIs. \section{Summary} \label{sum} In this paper, we have studied the single transverse spin asymmetries in single inclusive particle production in hadronic collisions. We point out that the Sivers functions in such processes are generally different from those probed in the SIDIS process because of different initial- and final-state interactions. By carefully taking into account the process-dependence of the Sivers functions (under the one-gluon-exchange approximation), we derive a new formalism within the framework of the GPM approach. We find that this formalism has close connections with the collinear twist-3 approach. With our modified GPM formalism, we make predictions for inclusive $\pi^0$ and direct photon production in $pp$ collisions at RHIC energies.
We find that the asymmetries predicted from the modified GPM formalism are opposite to those in the conventional GPM approach. This sign difference comes from the color gauge interaction, which has the same origin as the sign change for Sivers functions between SIDIS and DY processes. Our predictions about the sign are consistent with those from the twist-3 collinear factorization approach. We encourage a global analysis of both SIDIS and $pp$ experimental data using this modified GPM formalism. \section*{Acknowledgments} We are grateful to M.~Anselmino, U.~D'Alesio, A.~Metz, P.~Mulders, F.~Murgia, J.~W.~Qiu, W.~Vogelsang, F.~Yuan and J.~Zhou for useful discussions and comments. L.G. acknowledges support from U.S. Department of Energy under contract DE-FG02-07ER41460. Z.K. is grateful to RIKEN, Brookhaven National Laboratory, and the U.S. Department of Energy (Contract No.~DE-AC02-98CH10886) for supporting this work.
\section{Introduction} \label{Introduction} Weak lensing surveys are expected to make a significant contribution to constraining cosmology by studying the statistics of the dark matter distribution in the nearby universe in an unbiased way. For instance, Contaldi et al. (2003) used the Red Cluster Sequence (RCS) to show that the $\Omega_{\rm m}$ and $\sigma_8$ degeneracy directions are nearly orthogonal to those of the CMB, making them particularly suitable for combined analysis (van Waerbeke et al. 2002). Ishak et al. (2003) argued that joint CMB-cosmic shear surveys provide an optimal data set for constraining the amplitude and running of the spectral index, which helps to probe various inflationary models. Tereno et al. (2004) have studied cosmological forecasts for joint CMB and weak lensing survey data. Clearly, the potential of weak lensing surveys (Mellier 1999; Bartelmann \& Schneider 2001; R\'efr\'egier 2003; van Waerbeke \& Mellier 2003) as a cosmological probe is now well established (Contaldi et al. 2003; Hu \& Tegmark 1999). In the last few years there have been many studies which have directly detected cosmological shear in random patches of the sky (Brown et al. 2003; Bacon et al. 2002; Bacon, Refregier \& Ellis 2000; Hamana et al. 2003; H\"{a}mmerle et al. 2002; Hoekstra et al. 2002a; Hoekstra, Yee \& Gladders 2002a; Jarvis et al. 2002; Kaiser, Wilson \& Luppino 2000; Maoli et al. 2001; Refregier, Rhodes, \& Groth 2002; Rhodes, Refregier \& Groth 2001; van Waerbeke et al. 2000; van Waerbeke et al. 2001a; van Waerbeke et al. 2002; Wittman et al. 2000). While early studies were primarily concerned with the detection of the weak lensing signal, the present generation of weak lensing studies is putting constraints on cosmological parameters such as the matter density parameter $\Omega_{\rm m}$ and the power spectrum normalisation $\sigma_8$.
Not only can these studies put remarkable constraints on cosmological parameters, they can also help to break parameter degeneracies when used along with other cosmological surveys such as CMB observations, as we discussed above. Inspired by the success of these surveys, there are many other ongoing, planned and proposed weak lensing surveys, such as the Deep Lens Survey (Wittman et al. 2002), the NOAO Deep Survey (Mellier et al. 2001) \footnote{http://www.noao.edu/noao/noaodeep/}, the Canada-France-Hawaii Telescope Legacy Survey (Mellier et al. 2001) \footnote{http://www.cfht.hawaii.edu/Science/CFHLS/}, the Panoramic Survey Telescope and Rapid Response System \footnote{http://pan-starrs.ifa.hawaii.edu/public/}, the Supernova Acceleration Probe \footnote{http://snap.lbl.gov/} (Rhodes et al. 2003; Massey et al. 2003) and the Large Synoptic Survey Telescope \footnote{http://www.lsst.org/} (Tyson et al. 2002). Independently or jointly, these observations can help increase the accuracy with which various cosmological parameters can be determined from other cosmological observations such as the CMB. While the first generation of cosmic shear surveys has demonstrated (van Waerbeke \& Mellier 2003; Refregier 2003) the feasibility of weak lensing studies in constraining the dark matter power spectrum parametrised by $\sigma_8$, $\Omega_{\rm m}$ and the shape parameter $\Gamma$ (Munshi \& Valageas 2005 contains a recent compilation of present ground-based survey results), the future surveys will be able to probe much larger scales, and it will therefore be possible to study the linear regime directly to put more stringent bounds on cosmological parameters such as the equation of state of dark energy and its time variations. Various groups have carried out detailed analyses of the cosmological constraints that are achievable from ground- and space-based weak lensing surveys (Bernardeau et al. 1997, Jain \& Seljak 1997, Kaiser 1998).
Early studies focused mainly on a limited number of cosmological parameters, whereas later on it was realised that weak lensing can play an important role in constraining the dark energy equation of state (Hu 1999, 2002ab; Huterer 2002; Abazajian \& Dodelson 2003; Heavens 2003; Refregier et al. 2003; Benabed \& van Waerbeke 2003; Jain \& Taylor 2003 and references therein). Hu \& Tegmark (2004) studied a general multi-parameter Fisher matrix analysis based on the power spectrum to check the prospects of weak lensing for parameter estimation, independently or jointly with external data sets such as the CMB. Takada \& Jain (2005) have presented a study based on a joint analysis of the power spectrum and bi-spectrum and emphasised the role of tomography in weak lensing parameter estimation. More recently, Kilbinger \& Schneider (2005) have used joint constraints from second- and third-order moments of the aperture mass $M_{\rm ap}$ statistics to study cosmological parameter estimation efficiency. Although similar in motivation, we use a different analytical approach to check the efficiency of future space-based weak lensing surveys in constraining cosmology. We combine measurements of the second-order $\langle M_{\rm ap}^2\rangle$ and third-order $\langle M_{\rm ap}^3\rangle$ statistics to constrain cosmological parameters such as $\Omega_{\rm m}$, $\sigma_8$, $\alpha_s$, $n_s$ and $w_{\rm de}$. The effect of estimating the median redshift $z_0$ of the source redshift distribution directly from the data is also included. The impact of inaccurate modelling of the power spectrum and bi-spectrum of the underlying mass distribution is also analysed. The reasons for choosing $M_{\rm ap}$ over other two-point statistics, as pointed out by Kilbinger \& Schneider (2005), are several.
Unlike other two-point statistics, $M_{\rm ap}$ is by construction a scalar object, and unlike many other three-point objects constructed from shear three-point correlation functions, the third-order moment of $M_{\rm ap}$ has a non-vanishing ensemble average and hence can directly probe the underlying bi-spectrum of the matter distribution. Besides, $M_{\rm ap}$ and its extensions have the inherent characteristic of separating the gravity-induced $E$ mode from the $B$ mode (see e.g. Crittenden et al. 2001 for a more detailed discussion of this issue), the latter generated mainly by residual systematics which can affect both second- and third-order estimators. Moreover, it is to be noted that the integral measure $\langle M_{\rm ap}^3\rangle$ is also a local measure of the bi-spectrum and does not suffer from the oscillatory nature of other filters. This paper is organised as follows. In section~2 we introduce the second- and third-order estimators based on the $M_{\rm ap}$ statistics; expressions are presented for their joint covariances. Section~3 describes the cosmological parameters and the survey design for proposed SNAP class experiments. Next, in sections 4 and 5 we study the influence of non-Gaussianities on the covariance matrices and the signal-to-noise ratios. In section~6 we introduce the Fisher matrix formalism in our context and use it to study estimation errors for weak lensing observables. Finally, section~7 is left for the discussion of our results and future prospects. Appendix~A is devoted to comparing the performance of ground-based surveys and SNAP class space-based surveys. It also takes a detailed look at various issues of weak lensing survey design. \section{Aperture-mass statistics} \label{Aperture-mass} \subsection{Weak-lensing effects} \label{Weak-lensing} Weak gravitational lensing effects probe the matter distribution integrated along the line of sight to distant sources like galaxies (e.g., Bernardeau et al. 1997; Kaiser 1998).
Therefore, they can be used to obtain information on the 2-D projected density field $\kappa$ (also called the convergence field) on the sky. In practice, one needs to take into account the redshift distribution $n(z_s)$ of the sources. Indeed, in order to have good statistics one needs to average weak lensing effects over many sources which entails rather broad source distributions. On the other hand, it can be convenient to use a filtered version of the convergence $\kappa$, such as the aperture-mass $M_{\rm ap}$. More specifically, the latter is obtained from $\kappa$ by using a compensated filter. Then, one can also express $M_{\rm ap}$ as a function of the tangential shear $\gamma_{\rm t}$ (Kaiser 1994; Schneider 1996) which can be directly estimated from the observed ellipticities of distant galaxies. This is the property which makes the aperture-mass a useful quantity. In addition, since it involves a compensated filter the contribution from long wavelengths to the signal is damped as compared with a top-hat filter (which would yield the mean convergence) so that $M_{\rm ap}$ allows one to probe the matter density field over a narrow range of wavelengths which can be varied through the angular scale $\theta_s$ of the window $U_{M_{\rm ap}}({\vec \vartheta})$. Thus, the aperture-mass can be written in terms of the fluctuations of the density field as: \begin{equation} M_{\rm ap}= \int\d{\vec \vartheta}\; U_{M_{\rm ap}}({\vec \vartheta}) \; \int_0^{\chi_{\rm max}} \d\chi \; \tilde{w}(\chi) \; \delta(\chi,{\cal D}{\vec \vartheta}) , \label{Mapdelta} \end{equation} with: \begin{equation} \tilde{w}(\chi) = \frac{3\Omega_{\rm m}}{2} \int_z^{z_{\rm max}} \d z_s \; n(z_s) \; \frac{H_0^2}{c^2} \; \frac{{\cal D}(\chi) {\cal D}(\chi_s-\chi)}{{\cal D}(\chi_s)} \; (1+z) . 
\label{wt} \end{equation} Here the redshift $z$ corresponds to the radial distance $\chi$ and ${\cal D}$ is the angular distance, ${\vec \vartheta}$ is the angular direction on the sky, $\delta(\chi,{\cal D}{\vec \vartheta})$ is the matter density contrast and hereafter we normalise the mean redshift distribution of the sources (e.g. galaxies) to unity: $\int\d z_s \; n(z_s)=1$. We defined $z_{\rm max}$ as the depth of the survey (i.e. $n(z_s)=0$ for $z_s>z_{\rm max}$). Here and in the following we use the Born approximation which is well-suited to weak-lensing studies: the fluctuations of the gravitational potential are computed along the unperturbed trajectory of the photon (Kaiser 1992). We also neglect the discrete effects due to the finite number of galaxies. They can be obtained by taking into account the discrete nature of the distribution $n(z_s)$. This gives corrections of order $1/N$ to higher-order moments of weak-lensing observables, where $N$ is the number of galaxies within the circular field of interest. In practice $N$ is much larger than unity (for a circular window of radius 1 arcmin we expect $N \ga 100$ for the SNAP mission), therefore in this paper we shall work with eq.(\ref{Mapdelta}). We choose for the filter $U_{M_{\rm ap}}$ associated with the aperture-mass $M_{\rm ap}$ the window of finite support used in Schneider (1996): \begin{equation} U_{M_{\rm ap}} = \frac{\Theta(\vartheta<\theta_s)}{\pi\theta_s^2} \; 9 \left(1-\frac{\vartheta^2}{\theta_s^2}\right) \left(\frac{1}{3} - \frac{\vartheta^2}{\theta_s^2}\right) , \label{UMap} \end{equation} where $\Theta$ is a top-hat function with obvious notation. The angular radius $\theta_s$ gives the angular scale probed by this smoothed observable. As described in Munshi et al. 
(2004), the cumulants of $M_{\rm ap}$ can be written in Fourier space as: \beqa \langle M_{\rm ap}^p \rangle_c & = & \int_0^{\chi_{\rm max}} \prod_{i=1}^{p} \d\chi_i \; \tilde{w}(\chi_i) \int \prod_{j=1}^{p} \d{\bf k}_j \; W_{M_{\rm ap}}({\bf k}_{\perp j} {\cal D}_j \theta_s) \nonumber \\ & & \times \; \left( \prod_{l=1}^{p} e^{i k_{\parallel l} \chi_l} \right) \;\; \langle \delta({\bf k}_1) \dots \delta({\bf k}_p) \rangle_c . \label{cumMapk} \end{eqnarray} We denote by $\langle .. \rangle$ the average over different realizations of the density field, $k_{\parallel}$ is the component of ${\bf k}$ parallel to the line-of-sight, $\bk_{\perp}$ is the two-dimensional vector formed by the components of ${\bf k}$ perpendicular to the line-of-sight and $W_{M_{\rm ap}}(\bk_{\perp}{\cal D}\theta_s)$ is the Fourier transform of the window $U_{M_{\rm ap}}$: \begin{equation} W_{M_{\rm ap}}(\bk_{\perp}{\cal D}\theta_s) = \int\d{\vec \vartheta} \; U_{M_{\rm ap}}({\vec \vartheta}) \; e^{i \bk_{\perp}.{\cal D}{\vec \vartheta}} = \frac{24 J_4(k_{\perp}\De\theta_s)}{(k_{\perp}\De\theta_s)^2} . \label{WMap} \end{equation} The Fourier-space expression (\ref{cumMapk}) is well suited to models which give a simple expression for the correlations $\langle \delta({\bf k}_1) .. \delta({\bf k}_p)\rangle_c$, such as the stellar model (Valageas et al. 2004; Barber et al. 2004) defined by: \begin{equation} \langle \delta({\bf k}_1) .. \delta({\bf k}_p)\rangle_c = \frac{\tilde{S}_p}{p} \; \delta_D({\bf k}_1+\dots+{\bf k}_p) \; \sum_{i=1}^p \prod_{j\neq i} P(k_j) , \label{stellar} \end{equation} where $\tilde{S}_2=1$, $\delta_D$ is the Dirac distribution and $P(k)$ is the 3-D power-spectrum of the density fluctuations. The coefficients $\tilde{S}_3, \tilde{S}_4,..$ are closely related (and approximately equal) to the skewness, kurtosis, .., of the density field. Eq.(\ref{cumMapk}) generalises in a straightforward fashion to many-point cumulants. 
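As an editorial aside (not part of the original analysis), the compensated nature of the filter (\ref{UMap}) and the closed form (\ref{WMap}) of its Fourier transform are easy to verify numerically. A minimal Python sketch, assuming the conventions of eqs.(\ref{UMap})-(\ref{WMap}):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j0, jv

theta_s = 1.0   # aperture radius; W depends only on x = k_perp * D * theta_s

def U(rho):
    """Compensated filter of eq. (UMap), with rho = theta / theta_s."""
    return 9.0 / (np.pi * theta_s**2) * (1.0 - rho**2) * (1.0 / 3.0 - rho**2)

# Compensation: the filter integrates to zero over the aperture
area_integral = quad(lambda r: 2.0 * np.pi * theta_s**2 * r * U(r), 0.0, 1.0)[0]

def W_numeric(x):
    """Azimuthally symmetric 2-D Fourier transform (a Hankel transform) of U."""
    return quad(lambda r: 2.0 * np.pi * theta_s**2 * r * U(r) * j0(x * r),
                0.0, 1.0)[0]

def W_closed(x):
    """Closed form of eq. (WMap): 24 J_4(x) / x^2."""
    return 24.0 * jv(4, x) / x**2

print(area_integral)                   # ~ 0: the filter is compensated
print(W_numeric(2.0), W_closed(2.0))   # the two expressions agree
```

At small $x$ both forms reduce to $x^2/16$, so the long-wavelength contribution is indeed damped as stated above.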
This allows one to consider for instance the cross-correlations between the two statistics $M_{\rm ap1}$ and $M_{\rm ap2}$ associated with two different angular scales $\theta_{s1},\theta_{s2}$ and/or two different redshift distributions $n_1(z_s),n_2(z_s)$. \subsection{Low-order estimators} \label{Low-order} The expressions recalled in the previous section describe weak lensing effects due to the fluctuations of the matter density field. In practice, one actually measures the aperture-mass $M_{\rm ap}$ from the distortions of the images of distant sources. Thus, in the case of weak lensing ($|\kappa| \ll 1$) the observed complex ellipticity $\epsilon=\epsilon_1+i\epsilon_2$ of a distant galaxy is related to the shear $\gamma=\gamma_1+i\gamma_2$ by: $\epsilon=\gamma+\epsilon_*$, where $\epsilon_*$ is the intrinsic ellipticity of the galaxy. On the other hand, the aperture-mass defined in eqs.(\ref{Mapdelta})-(\ref{UMap}) can also be written as a function of the tangential shear $\gamma_{\rm t}$ (Kaiser et al. 1994, Schneider 1996) as: \begin{equation} M_{\rm ap}= \int \d{\vec \vartheta} \; Q_{M_{\rm ap}}({\vec \vartheta}) \; \gamma_{\rm t}({\vec \vartheta}) \label{Mapgammat} \end{equation} with: \begin{equation} Q_{M_{\rm ap}}({\vec \vartheta}) = \frac{\Theta(\vartheta<\theta_s)}{\pi\theta_s^2} \; 6 \; \left(\frac{\vartheta}{\theta_s}\right)^2 \left(1-\frac{\vartheta^2}{\theta_s^2}\right) . \label{QMap} \end{equation} This leads us to define the estimators $M_p$ for low-order moments (e.g., Munshi \& Coles 2003, Valageas et al. 2004b): \begin{equation} M_p = \frac{(\pi \theta_s^2)^p} {(N)_p} \sum_{(j_1,\dots,j_p)}^N Q_{j_1} \dots Q_{j_p} \; \epsilon_{{\rm t}j_1} \dots \epsilon_{{\rm t}j_p}, \label{Mp} \end{equation} with: \begin{equation} (N)_p = N (N-1) .. 
(N-p+1) = \frac{N!}{(N-p)!} , \end{equation} where $N$ is the number of galaxies in the patch of size $\pi\theta_s^2$, $p$ is the order of the moment and $Q_j=Q_{M_{\rm ap}}({\vec \vartheta}_j)$ where ${\vec \vartheta}_j$ and $\epsilon_{{\rm t}j}$ are the direction on the sky and the observed tangential ellipticity of the galaxy $j$. Finally, the sum in eq.(\ref{Mp}) runs over all sets of $p$ different galaxies among the $N$ galaxies enclosed in the angular radius $\theta_s$, which ensures that $\langle M_p\rangle=\langle M_{\rm ap}^p \rangle$ if we neglect the correlation of the intrinsic ellipticity of a given galaxy with other galaxies or with weak lensing observables. These estimators $M_p$ correspond to a single circular field of angular radius $\theta_s$ containing $N$ galaxies. In practice, the size of the survey is much larger than $\theta_s$ and we can average over $N_c$ cells on the sky. This yields the estimators ${\cal M}_p$ defined by: \begin{equation} {\cal M}_p = \frac{1}{N_c} \sum_{n=1}^{N_c} M_p^{(n)} , \;\;\; \mbox{whence} \;\;\; \langle{\cal M}_p\rangle = \langle M_{\rm ap}^p \rangle , \label{cMp} \end{equation} where $M_p^{(n)}$ is the estimator $M_p$ for the cell $n$ and we assumed that these cells are sufficiently well separated so as to be uncorrelated. The estimators $M_p$ and ${\cal M}_p$ provide a measure of the moments $\langle M_{\rm ap}^p\rangle$, which can be used to constrain cosmological parameters. However, as shown in Valageas et al. (2004b), it is better to first consider cumulant-inspired estimators $H_p$. Thus, for the second- and third-order cumulants we define: \begin{equation} H_2 = M_2 , \;\;\; H_3 = M_3 - 3 {\cal M}_2 M_1 , \label{H3} \end{equation} and: \begin{equation} {\cal H}_p = \frac{1}{N_c} \sum_{n=1}^{N_c} H_p^{(n)} , \;\;\; \langle{\cal H}_2\rangle = \langle M_{\rm ap}^2\rangle_c , \;\;\; \langle {\cal H}_3 \rangle = \langle M_{\rm ap}^3 \rangle_c . 
\label{cHp} \end{equation} The interest of ${\cal H}_3$ is that its scatter is smaller than for ${\cal M}_3$, see Valageas et al. (2004b). Besides, it directly yields the one-point cumulants (here we neglected higher-order terms in $1/N_c$). \subsection{Covariance matrix} \label{Covariance} The estimators ${\cal M}_p$ and ${\cal H}_p$ defined in the previous section allow one to estimate the cumulants of the aperture-mass $\langle M_{\rm ap}^p\rangle_c$ for a given angular scale $\theta_s$ and source redshift distribution $n(z_s)$. In practice we can vary both the angular scale $\theta_s$ of the filters $U_{M_{\rm ap}}$ and $Q_{M_{\rm ap}}$ and the redshift distribution $n(z_s)$ (for instance by applying a simple binning of the galaxies over redshift and selecting different redshift bins, which is often referred to as tomography). Thus, if we restrict ourselves to second-order moments $\langle M_{\rm ap}^2\rangle$ we obtain the set of estimators ${\cal H}_2(i)$, with $\langle{\cal H}_2(i)\rangle=\langle M_{\rm ap}^2(i)\rangle$ as defined in eq.(\ref{cHp}), where the index $i=1,..,N_i$ stands for both the angular scale $\theta_{si}$ and the redshift distribution $n_i(z_s)$ (for $N_{\theta}$ angular scales and $N_z$ redshift bins we have $N_i=N_{\theta}\times N_z$). Then, the covariance matrix $C_{ij}$ associated with this data set is: \begin{equation} C_{ij} = \langle{\cal H}_2(i){\cal H}_2(j)\rangle - \langle{\cal H}_2(i)\rangle \langle{\cal H}_2(j)\rangle . \label{Cij} \end{equation} It measures the cross-correlation between the two estimators ${\cal H}_2(i)$ and ${\cal H}_2(j)$. In the following, we shall consider the limit where the number of cells $N_c$ on the sky goes to infinity, that is the sum in eq.(\ref{cHp}) is performed over $N_c$ cells which cover the whole survey area with a uniform coverage and which are separated by an angular shift $\Delta{\vec\alpha}$ which goes to zero. 
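As an implementation aside on the moment estimators above: for $p=2$ the pair sum in eq.(\ref{Mp}) need not be evaluated with a double loop, since $\sum_{j_1\neq j_2} a_{j_1} a_{j_2} = (\sum_j a_j)^2 - \sum_j a_j^2$ with $a_j = Q_j \epsilon_{{\rm t}j}$; subtracting the diagonal removes the shot-noise bias at cost $O(N)$. A minimal sketch with mock weights and ellipticities (illustrative only, not the original pipeline):

```python
import numpy as np

rng = np.random.default_rng(42)
N = 500
theta_s = 1.0
Q = rng.uniform(0.0, 1.0, N)        # mock filter values Q_j = Q_Map(theta_j)
eps_t = rng.normal(0.0, 0.31, N)    # mock observed tangential ellipticities

a = Q * eps_t
prefactor = (np.pi * theta_s**2)**2 / (N * (N - 1))

# Direct sum of eq. (Mp) for p = 2 over all ordered pairs with j1 != j2
outer = np.outer(a, a)
M2_direct = prefactor * (outer.sum() - np.trace(outer))

# Equivalent O(N) form: the subtracted diagonal is the shot-noise bias
M2_fast = prefactor * (a.sum()**2 - (a**2).sum())
print(M2_direct, M2_fast)
```

The same device (elementary symmetric sums) extends to $p=3$, avoiding the naive $O(N^3)$ triple loop.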
Therefore, the discrete sum (\ref{cHp}) tends to the integral: \begin{equation} {\cal H}_2(i) = \int_A \frac{\d{\vec\alpha}}{A} \; H_2(i;{\vec\alpha}) , \label{cH2i} \end{equation} where $A$ is the survey area and $H_2(i;{\vec\alpha})$ is the estimator $H_2(i)$, associated with the angular scale $\theta_{si}$ and redshift distribution $n_i(z_s)$, for the cell centred on the direction ${\vec\alpha}$ on the sky. Hereafter we neglect side effects and we consider that all estimators cover the same area $A$, independently of the radius $\theta_{si}$. Then, after one angular integration we obtain for the covariance $C_{ij}$: \begin{equation} C_{ij} = \int_A \frac{\d{\vec\alpha}}{A} \; \sigma^2(H_2(i),H_2(j);{\vec\alpha}) , \label{Cijalpha} \end{equation} where $\sigma^2(H_2(i),H_2(j);{\vec\alpha})$ is the cross-correlation between the two estimators $H_2(i)$ and $H_2(j)$ separated by the angular shift ${\vec\alpha}$. From eq.(\ref{Mp}) the cross-correlation $\sigma^2$ reads (see also the expressions given in App.A1 of Munshi \& Valageas 2005): \beqa \lefteqn{ \sigma^2(H_2(i),H_2(j);{\vec\alpha}) = \langle M_{\rm ap}^2(i)M_{\rm ap}^2(j)\rangle_c({\vec\alpha}) } \nonumber \\ && + 2 [ \langle M_{\rm ap}(i)M_{\rm ap}(j)\rangle_c({\vec\alpha}) + Q_{ij}({\vec\alpha}) ]^2 , \label{sigma2} \end{eqnarray} where we recalled explicitly the dependence on the angular shift ${\vec\alpha}$, and we introduced $Q_{ij}$ defined by: \begin{equation} Q_{ij}({\vec\alpha}) = \frac{\sigma_*^2}{2} \frac{n_{{\rm g}ij}} {n_{{\rm g}i}n_{{\rm g}j}} \int \d{\vec\vartheta} \; Q_i({\vec\vartheta}) Q_j({\vec\vartheta} - {\vec\alpha}) , \label{Qij} \end{equation} where $\sigma_*^2=\langle|\epsilon_*|^2\rangle$ is the scatter of the galaxy intrinsic ellipticities, $n_{{\rm g}i}$ is the number surface density of galaxies associated with the redshift distribution $n_i(z_s)$, $n_{{\rm g}ij}$ is the surface density of galaxies common to both redshift distributions $n_i(z_s)$ and $n_j(z_s)$, and $Q_i$ is the 
filter $Q_{M_{\rm ap}}$ defined in eq.(\ref{QMap}) for the radius $\theta_{si}$. We can obtain in a similar fashion the covariance matrix associated with the data set $\{{\cal H}_3(i)\}$, associated with third-order cumulants, as well as the full data set $\{{\cal H}_2(i),{\cal H}_3(i)\}$ where we consider both second-order and third-order cumulants. Thus, we have (see also Munshi \& Valageas 2005): \beqa \lefteqn{\sigma^2(H_2(i),H_3(j)) = \langle M_{\rm ap}^2(i)M_{\rm ap}^3(j)\rangle_c } \nonumber \\ && + 6 \langle M_{\rm ap}(i)M_{\rm ap}^2(j)\rangle_c [ \langle M_{\rm ap}(i)M_{\rm ap}(j)\rangle_c + Q_{ij} ] , \label{sigma23} \end{eqnarray} and: \beqa \lefteqn{\sigma^2(H_3(i),H_3(j)) = \langle M_{\rm ap}^3(i)M_{\rm ap}^3(j)\rangle_c } \nonumber \\ && + 9 \langle M_{\rm ap}^2(i)M_{\rm ap}^2(j)\rangle_c [ \langle M_{\rm ap}(i)M_{\rm ap}(j)\rangle_c + Q_{ij} ] \nonumber \\ && + 9 \langle M_{\rm ap}^2(i)M_{\rm ap}(j)\rangle_c \langle M_{\rm ap}(i)M_{\rm ap}^2(j)\rangle_c \nonumber \\ && + 6 [ \langle M_{\rm ap}(i)M_{\rm ap}(j)\rangle_c + Q_{ij} ]^3 , \label{sigma33} \end{eqnarray} where we did not recall explicitly the dependence on the angular shift ${\vec\alpha}$. It is interesting to estimate the amplitude of various contributions to the covariance matrix $C_{ij}$, due to noise, Gaussian and non-Gaussian terms. Thus, in addition to the full matrix $C_{ij}$ described above we also introduce the matrices $C_{ij}^{G,c.v.}$, $C_{ij}^{G,s.n.}$ and $C_{ij}^{c.v.}$. The matrix $C_{ij}^{G,c.v.}$ only includes Gaussian terms which contribute to the cosmic variance (i.e. non-Gaussianities and the noise are set to zero). This yields for instance for $\sigma^2(H_2(i),H_2(j))^{G,c.v.}$: \begin{equation} \sigma^2(H_2(i),H_2(j))^{G,c.v.} = 2 \langle M_{\rm ap}(i)M_{\rm ap}(j)\rangle_c^2 . 
\label{sigma2Gcv} \end{equation} We obtain from eq.(\ref{sigma33}) a similar expression for $\sigma^2(H_3(i),H_3(j))^{G,c.v.}$, while from eq.(\ref{sigma23}) we see at once that $\sigma^2(H_2(i),H_3(j))^{G,c.v.}=0$. Next, the matrix $C_{ij}^{G,s.n.}$ includes all Gaussian terms which involve the shot noise. This yields for $\sigma^2(H_2(i),H_2(j))^{G,s.n.}$: \begin{equation} \sigma^2(H_2(i),H_2(j))^{G,s.n.} = 4 \langle M_{\rm ap}(i)M_{\rm ap}(j)\rangle_c Q_{ij} + 2 Q_{ij}^2 . \label{sigma2Gsn} \end{equation} We again obtain from eq.(\ref{sigma33}) a similar relation for $\sigma^2(H_3(i),H_3(j))^{G,s.n.}$, while from eq.(\ref{sigma23}) we see at once that $\sigma^2(H_2(i),H_3(j))^{G,s.n.}=0$. Note that $C_{ij}^{G,c.v.}+C_{ij}^{G,s.n.}$ is equal to the matrix $C_{ij}$ without non-Gaussian terms. Finally, the matrix $C_{ij}^{c.v.}$ is equal to the matrix $C_{ij}$ where we set the noise equal to zero. It includes both Gaussian and non-Gaussian terms, and we obtain for $\sigma^2(H_2(i),H_2(j))^{c.v.}$: \begin{equation} \sigma^2(H_2(i),H_2(j))^{c.v.} = \langle M_{\rm ap}^2(i)M_{\rm ap}^2(j)\rangle_c + 2 \langle M_{\rm ap}(i)M_{\rm ap}(j)\rangle_c^2 . \label{sigma2cv} \end{equation} From eqs.(\ref{sigma23})-(\ref{sigma33}) we obtain similar expressions for $\sigma^2(H_2(i),H_3(j))^{c.v.}$ and $\sigma^2(H_3(i),H_3(j))^{c.v.}$. Note that $\sigma^2(H_2(i),H_3(j))^{c.v.}$ is different from zero. Thus, the comparison between the four matrices $C_{ij}$, $C_{ij}^{G,c.v.}$, $C_{ij}^{G,s.n.}$ and $C_{ij}^{c.v.}$ allows us to evaluate the relative importance of the shot noise (associated with the dispersion $\sigma_*^2$ of the galaxy intrinsic ellipticity) and of the cosmic variance (merely due to the finite size of the survey), as well as the relative importance of Gaussian and non-Gaussian terms. 
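Two quick numerical cross-checks of the expressions above may be helpful (illustrative sketches, not part of the original analysis). First, at zero separation and equal scales eq.(\ref{Qij}) reduces to $Q_{ii}(0)=(\sigma_*^2/2n_{{\rm g}i})\int Q_i^2\,\d{\vec\vartheta}$, and with the filter (\ref{QMap}) one finds $\int Q^2\,\d{\vec\vartheta}=6/(5\pi\theta_s^2)$. Second, the Gaussian cosmic-variance term (\ref{sigma2Gcv}) is simply Wick's theorem for zero-mean Gaussian variables, ${\rm Cov}(x^2,y^2)=2\langle xy\rangle^2$; the correlation coefficient below is an arbitrary assumed value:

```python
import numpy as np
from scipy.integrate import quad

theta_s = 1.0

# Check 1: the 2-D integral of Q^2 entering Q_ii(0) in eq. (Qij);
# analytically int Q^2 d^2theta = 6 / (5 pi theta_s^2)
def Q_filter(rho):
    """Tangential-shear filter of eq. (QMap), rho = theta / theta_s."""
    return 6.0 / (np.pi * theta_s**2) * rho**2 * (1.0 - rho**2)

norm = quad(lambda r: 2.0 * np.pi * theta_s**2 * r * Q_filter(r)**2,
            0.0, 1.0)[0]

# Check 2: Wick's theorem behind eq. (sigma2Gcv), Cov(x^2, y^2) = 2 <x y>^2
rng = np.random.default_rng(1)
rho_c = 0.6   # arbitrary assumed correlation playing the role of <Map(i) Map(j)>
x, y = rng.multivariate_normal([0.0, 0.0],
                               [[1.0, rho_c], [rho_c, 1.0]],
                               size=400_000).T
cov_est = np.mean(x**2 * y**2) - np.mean(x**2) * np.mean(y**2)

print(norm, 6.0 / (5.0 * np.pi))   # numeric vs analytic filter normalisation
print(cov_est, 2.0 * rho_c**2)     # Monte Carlo vs Wick prediction
```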
\section{Background cosmology and survey parameters} \label{Background} We describe in this section the background cosmology and the specific correlation hierarchy for the matter distribution which we use in numerical computations. We also focus on the SNAP observational strategy to estimate the accuracy which can be obtained from such weak lensing surveys. However, as mentioned before, the basic formalism developed here remains completely general and the specific details studied here serve only illustrative purposes. \subsection{Cosmological parameters} \label{Cosmological} For the background cosmology we consider a fiducial $\Lambda$CDM model with $\Omega_{\rm m}=0.3$, $\Omega_{\rm de}=0.7$, $w_{\rm de}=-1$, $H_0=70$ km/s/Mpc and $\sigma_8=0.88$. We denote by $\Omega_{\rm m}$ the matter density (cold dark matter plus baryons) and by $\Omega_{\rm de}$ the dark-energy density today at $z=0$. We parameterize the dark-energy equation of state by $w_{\rm de}=p_{\rm de}/\rho_{\rm de}$. For $w_{\rm de}=-1$ this corresponds to a simple cosmological constant. In the following we shall investigate the dependence of weak-lensing observables on the cosmological parameters $\Omega_{\rm m}$ (keeping $\Omega_{\rm m}+\Omega_{\rm de}=1$ for a flat universe) and $w_{\rm de}$. We always consider $w_{\rm de}$ to be a constant independent of time. Then, the Hubble expansion rate $H(z)$ reads (e.g., Linder \& Jenkins 2003): \begin{equation} \frac{\dot{a}}{a} = H(z)= H_0 \sqrt{\Omega_{\rm m}(1+z)^3+\Omega_{\rm de}(1+z)^{3(1+w_{\rm de})}} , \label{Hz} \end{equation} where $a(t)=(1+z)^{-1}$ is the cosmological scale factor while the dot denotes the derivative with respect to physical time (we used $\Omega_{\rm m}+\Omega_{\rm de}=1$). 
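The background quantities of this section are straightforward to evaluate numerically. The sketch below (an illustrative cross-check, assuming the fiducial parameters quoted above) verifies that eq.(\ref{Hz}) with $w_{\rm de}=-1$ reduces to the cosmological-constant case, and integrates the growth-rate equation (\ref{ga}) presented in the next paragraph as a first-order system in $\ln a$, starting from matter-domination initial conditions $g=1$, $\d g/\d\ln a=0$:

```python
import numpy as np
from scipy.integrate import solve_ivp

H0, Om, Ode, w = 70.0, 0.3, 0.7, -1.0   # fiducial flat model of this section

def hubble(z, w_de=w):
    """Expansion rate of eq. (Hz) for a flat universe, in km/s/Mpc."""
    return H0 * np.sqrt(Om * (1.0 + z)**3
                        + Ode * (1.0 + z)**(3.0 * (1.0 + w_de)))

# w_de = -1 must reduce to H^2 = H0^2 [Om (1+z)^3 + Ode]
z = np.linspace(0.0, 3.0, 7)
assert np.allclose(hubble(z, -1.0), H0 * np.sqrt(Om * (1.0 + z)**3 + Ode))

def rhs(lna, yvec):
    """Eq. (ga) for g = D/a, rewritten as a first-order system in ln(a)."""
    g, dg = yvec
    x = Ode * np.exp(-3.0 * w * lna)    # Ode * a^{-3 w}
    denom = Om + x
    c1 = (5.0 * Om + (5.0 - 3.0 * w) * x) / denom
    c0 = (3.0 - 3.0 * w) * x / denom
    return [dg, -0.5 * (c1 * dg + c0 * g)]

sol = solve_ivp(rhs, [np.log(1e-3), 0.0], [1.0, 0.0], rtol=1e-8, atol=1e-10)
g0 = sol.y[0, -1]   # growth suppression today relative to Einstein-de Sitter
print(hubble(1.0), g0)
```

For $\Omega_{\rm m}=0.3$, $w_{\rm de}=-1$ this recovers the familiar suppression $g(z=0)\simeq 0.78$.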
The linear growth factor $D(t)$ obeys the differential equation: \begin{equation} \ddot{D} + 2 H \dot{D} - \frac{3}{2} \Omega_{\rm m} H_0^2 (1+z)^3 D = 0 , \label{Dt} \end{equation} which yields for the linear growth rate $g(a)$ relative to a critical-density universe $g=D/a$: \beqa \lefteqn{ 2 \frac{\d^2 g}{\d\ln a^2} + \frac{5\Omega_{\rm m}+(5-3w_{\rm de})\Omega_{\rm de} a^{-3w_{\rm de}}} {\Omega_{\rm m}+\Omega_{\rm de} a^{-3w_{\rm de}}} \frac{\d g}{\d\ln a} } \nonumber \\ && + \frac{(3-3w_{\rm de})\Omega_{\rm de} a^{-3w_{\rm de}}}{\Omega_{\rm m}+\Omega_{\rm de} a^{-3w_{\rm de}}} g = 0 . \label{ga} \end{eqnarray} We obtain $g(a)$ by solving numerically the differential eq.(\ref{ga}). For the matter transfer function $T(k)$ we use the fitting formulae provided by Eisenstein \& Hu (1999), with $\Omega_{\rm b}=0.047$, $n_s=1$ and $\alpha_s=0$. Here $n_s$ is the power-law index of the primordial power-spectrum and $\alpha_s$ is the running index. More precisely, we write the linear matter power-spectrum as in Spergel et al. (2003): \begin{equation} P_L(k) \propto \left(\frac{k}{k_0}\right)^{n_s+\alpha_s \ln(k/k_0)/2} T^2(k), \label{PLk} \end{equation} where $k_0=0.05$ Mpc$^{-1}$. Thus, the local slope $n$ of the primordial spectrum is: \begin{equation} n=\frac{\d\ln P_{\rm prim}}{\d\ln k} = n_s+\alpha_s \ln \left(\frac{k}{k_0}\right) , \;\;\; \frac{\d n}{\d\ln k} = \alpha_s . \label{nsPk} \end{equation} Next, we use the fit given by Peacock and Dodds (1996) to model the non-linear evolution of the matter power-spectrum due to gravitational clustering. In order to investigate the sensitivity of weak lensing data on this non-linear extrapolation we introduce an additional parameter $f_2$. Thus, we modify eq.(21) of Peacock \& Dodds (1996) as: \begin{equation} f_{\rm NL}(x) = x \left[ \frac{1+B\beta x+[A x]^{\alpha\beta}} {1+([A x]^{\alpha}g^3(\Omega_m)/[f_2 V x^{1/2}])^{\beta}} \right]^{1/\beta} . 
\label{f2} \end{equation} This is identical to the formula from Peacock \& Dodds (1996) for $f_2=1$ (see their paper for the meaning of various parameters). In the linear regime we merely have $f_{\rm NL}(x)= x$ while in the non-linear regime we have $f_{\rm NL}(x) \propto f_2 x^{3/2}$. Therefore, the parameter $f_2$ describes the amplitude of the non-linear part of the matter power-spectrum as compared with Peacock \& Dodds (1996) (which corresponds to $f_2=1$). \begin{figure} \begin{center} \epsfxsize=4.1 cm \epsfysize=5 cm {\epsfbox{D2k.ps}} \epsfxsize=4.2 cm \epsfysize=5 cm {\epsfbox{MapXi.ps}} \end{center} \caption{{\it Left panel:} The power $\Delta^2(k)/k$ at redshift $z=1$ as a function of the comoving wavenumber $k$. The three upper curves correspond to $f_2=0.85$ (lower dashed curve), $f_2=1$ (middle solid line, this corresponds to Peacock \& Dodds 1996) and $f_2=1.15$ (upper dotted curve). This corresponds roughly to a $10\%$ variation of the matter power-spectrum in the highly non-linear regime. The lower dotted curve shows the linear power $\Delta_L^2(k)/k$. {\it Right panel:} The variance $\langle M_{\rm ap}^2\rangle$ of the aperture-mass as a function of smoothing angle $\theta_s$ for the SNAP survey. We again display the cases $f_2=0.85$ (lower dashed curve), $f_2=1$ (middle solid line) and $f_2=1.15$ (upper dotted curve).} \label{FigXi} \end{figure} For illustration purposes, we show in Fig.~\ref{FigXi} the non-linear power $\Delta^2(k)/k$ (left panel, as a function of comoving wavenumber $k$) at redshift $z=1$ and the variance $\langle M_{\rm ap}^2\rangle$ (right panel, as a function of smoothing angle $\theta_s$) for the SNAP survey described in section~\ref{Survey} below with no redshift binning. We display the curves obtained for the cases $f_2=0.85,1,1.15$ from bottom to top. This corresponds to a variation of about $\pm 10\%$ for the power-spectrum $P(k)$ in the non-linear regime. 
Thus, the range $f_2=1\pm 0.15$ describes roughly the current uncertainty on the non-linear power-spectrum in the range of interest. Note that $f_2$ is not merely a multiplicative factor to the non-linear power-spectrum as the recipe from Peacock \& Dodds (1996) also involves a rescaling of length scales. Here we defined the 3-D power per logarithmic interval $\Delta^2(k)=4\pi k^3 P(k,z)$. Because of the integration along the line of sight the power relevant for weak lensing observables is actually $\Delta^2(k)/k$, and the typical wavenumber associated with the angle $\theta_s$ is $k \sim 4/({\cal D}\theta_s)$, see Munshi et al. (2004). The lower dotted curve in left panel of Fig.~\ref{FigXi} shows the linear power $\Delta_L^2(k)/k$. We can check that at low wavenumbers and at large angular scales which probe the linear regime the dependence on $f_2$ disappears. \begin{figure} \begin{center} \epsfxsize=6 cm \epsfysize=5 cm {\epsfbox{MapS3.ps}} \end{center} \caption{The skewness $S_3^{M_{\rm ap}}$ of the aperture-mass as a function of the smoothing angle $\theta_s$ for the SNAP survey. We display the cases $f_3=0.5$ (lower dashed curve), $f_3=1$ (middle solid line) and $f_3=1.5$ (upper dotted curve) which correspond to a $50\%$ variation of the skewness in the highly non-linear regime.} \label{FigS3} \end{figure} For higher-order correlations of the matter density field we use the stellar model (\ref{stellar}) described in Valageas et al. (2004a). This is actually identical to the minimal tree-model up to third-order moments (Munshi et al. 2004). In particular, for the skewness $S_3$ of the 3-D matter density field we interpolate between the quasi-linear limit $S_3^{\rm QL}$ and the non-linear prediction $S_3^{\rm HEPT}$ of HEPT (Scoccimarro et al. 1998). 
In order to investigate the sensitivity to the non-linear fit used for $S_3$ we introduce a parameter $f_3$ and we define the skewness $S_3^{\rm NL}$ reached in the highly non-linear regime as: \begin{equation} S_3^{\rm NL} = f_3 S_3^{\rm HEPT} . \label{f3} \end{equation} Then, on linear scales we use $S_3=S_3^{\rm QL}$ while on highly non-linear scales we use $S_3=S_3^{\rm NL}$, see Munshi et al. (2004) for details. We show in Fig.~\ref{FigS3} the skewness $S_3^{M_{\rm ap}}=\langle M_{\rm ap}^3\rangle/\langle M_{\rm ap}^2\rangle^2$ as a function of the smoothing angle $\theta_s$ for the cases $f_3=0.5,1,1.5$, for the SNAP survey described in section~\ref{Survey} below with no redshift binning. Thus we now consider a variation of $\pm 50\%$ on $f_3$ and $S_3$ since the skewness is not known to better than $50\%$ in the highly non-linear regime (the actual current uncertainty may be even larger). Of course, since the aperture-mass is linear in the density fluctuations within our weak-lensing approximation (\ref{Mapdelta}) this yields a $50\%$ variation for $S_3^{M_{\rm ap}}$ at small angular scales. We can check again that at large angular scales which probe the quasi-linear regime the dependence on $f_3$ disappears. Thus, the parameters $f_2$ and $f_3$ allow us to evaluate the sensitivity of weak lensing results to the amplitude of the two-point and three-point matter correlations in the non-linear regime, where they are not yet known to high accuracy. Unless otherwise stated, we use $f_2=1$ and $f_3=1$ which correspond to the fits obtained from Peacock \& Dodds (1996) and from Scoccimarro et al. (1998). \subsection{Survey parameters} \label{Survey} Hereafter, we adopt the characteristics of the SNAP mission as given in Refregier et al. (2004). 
More precisely, we consider the ``Wide'' survey where the redshift distribution of galaxies is given by: \begin{equation} n(z_s) \propto z_s^2 \; e^{-(z_s/z_0)^2} \;\;\; \mbox{and} \;\;\; z_0=1.13, \;\;\; z_{\rm max}=3 . \label{nzSNAP} \end{equation} The variance in shear due to intrinsic ellipticities and measurement errors is $\sigma_*=\langle|\epsilon_*|^2\rangle^{1/2}=0.31$. The survey covers an area $A=300$ deg$^2$ and the surface density of usable galaxies is $n_g=100$ arcmin$^{-2}$. In order to extract some information from the redshift dependence of weak lensing effects we also divide the ``Wide'' SNAP survey into two redshift bins: $0<z_s<z_*$ and $z_*<z_s<z_{\rm max}$. We choose $z_*=1.23$, which corresponds roughly to the separation provided by the SNAP filters and which splits the ``Wide'' SNAP survey into two samples with the same number of galaxies (hence $n_g=50$ arcmin$^{-2}$). Note that one cannot use too many redshift bins as it decreases the number of source galaxies associated with each subsample (for the aperture mass we could still obtain good results with three bins but we shall restrict ourselves to two redshift bins in this paper). The redshift bins that we use are similar to those which were used by Refregier et al. (2003) using photometric redshifts, except that they have a sharp cutoff and non-overlapping source distributions. Note that using overlapping source distributions (over redshift) would increase the cross-correlations. 
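One can check numerically that the quoted split redshift indeed halves the distribution (\ref{nzSNAP}): solving $\int_0^{z_*} n(z_s)\,\d z_s = 1/2$ with the normalisation restricted to $z_s<z_{\rm max}$ gives $z_* \simeq 1.23$. An illustrative sketch (not part of the original analysis):

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

z0, z_max = 1.13, 3.0   # parameters of eq. (nzSNAP)

def n_unnorm(z):
    """Unnormalised source redshift distribution of eq. (nzSNAP)."""
    return z**2 * np.exp(-(z / z0)**2)

total = quad(n_unnorm, 0.0, z_max)[0]

def cdf(z):
    """Fraction of sources below redshift z."""
    return quad(n_unnorm, 0.0, z)[0] / total

# The split redshift z* that divides the survey into two equally populated bins
z_star = brentq(lambda z: cdf(z) - 0.5, 0.1, z_max)
print(z_star)
```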
\section{Dependence of low-order estimators on Cosmology} \label{Dependence-of-low-order-estimators} \begin{figure} \begin{center} \epsfxsize=4.15 cm \epsfysize=4.5 cm {\epsfbox{logH2.ps}} \epsfxsize=4.15 cm \epsfysize=4.5 cm {\epsfbox{logH3.ps}}\\ \epsfxsize=4.15 cm \epsfysize=4.5 cm {\epsfbox{logH2z2.ps}} \epsfxsize=4.15 cm \epsfysize=4.5 cm {\epsfbox{logH3z2.ps}} \end{center} \caption{The logarithmic derivative $\partial\ln\langle M_{\rm ap}^p\rangle_c/\partial\ln X$ of low-order cumulants with respect to cosmological parameters $X$. {\it Upper left panel:} The dependence of the variance $\langle M_{\rm ap}^2\rangle$ on $\Omega_{\rm m}$ (solid line), $\sigma_8$ (dashed line), $w_{\rm de}$ (dotted line), $n_s$ (dot dashed line) and $\alpha_s$ (dotted line), as a function of angular scale $\theta_s$ with no redshift binning. For $\alpha_s$ we plot $\partial\ln\langle M_{\rm ap}^2\rangle_c/\partial\alpha_s$ since $\alpha_s=0$ in our fiducial model. {\it Upper right panel:} Same as upper left panel for the third-order cumulant $\langle M_{\rm ap}^3\rangle_c$. {\it Lower left panel:} Same as upper left panel but for two redshift bins. The low redshift bin is shown by the same line styles as in upper left panel whereas the high redshift bin is shown by the long dashed line. {\it Lower right panel:} Same as lower left panel for the third-order cumulant $\langle M_{\rm ap}^3\rangle_c$.} \label{logH} \end{figure} We show in Fig.~\ref{logH} the logarithmic derivative $\partial\ln\langle M_{\rm ap}^p\rangle_c/\partial\ln X$ of cumulants of order $p=2$ (left panels) and $p=3$ (right panels) with respect to cosmological parameters $X$, with no redshift binning (upper panels) and with two redshift bins (lower panels). For the cosmological parameter $\alpha_s$ we merely plot $\partial\ln\langle M_{\rm ap}^p\rangle_c/\partial\alpha_s$ since $\alpha_s=0$ in our fiducial model. 
The rather large values obtained for these derivatives show that weak lensing effects could potentially constrain cosmological parameters to a good accuracy. By contrast, the small derivative with respect to $w_{\rm de}$ shows that the equation of state of the dark energy component cannot be measured to a similar accuracy. As expected, since weak lensing effects are proportional to matter density fluctuations, eq.(\ref{Mapdelta}), the derivatives with respect to $\Omega_{\rm m}$ and $\sigma_8$ are positive over most of the angular range $0.1'-1000'$. In particular, we can check that $\langle M_{\rm ap}^2\rangle_c \sim \sigma_8^2$ as expected from eq.(\ref{Mapdelta}). On the other hand, we can check that $\partial\langle M_{\rm ap}^2\rangle_c/\partial n_s>0$ at small scales which probe high wavenumbers $k$, and that it crosses zero at about $20'$, which corresponds to the normalisation of the power-spectrum. Since these derivatives show different variations with the angular scale $\theta_s$ for different cosmological parameters, one should be able to constrain these parameters simultaneously by using several angular scales. Note that by looking over such a large range of angular scales one can discriminate the behaviours of different cosmological parameters which helps to remove degeneracies. On the other hand, one can use the additional information provided by higher-order cumulants, such as the third-order cumulant (whence the skewness) shown in the right panels. However, we can see that for most parameters $\langle M_{\rm ap}^3\rangle_c$ behaves roughly as $\langle M_{\rm ap}^2\rangle^2$, except for $\Omega_{\rm m}$ where there is a residual dependence which can be used to measure for instance both $\Omega_{\rm m}$ and $\sigma_8$ (see also Bernardeau et al. 1997, as well as Kilbinger \& Schneider 2005). 
Alternatively, one can split the survey into two redshift bins and take advantage of the different dependence on cosmology of weak lensing effects associated with each bin. This is shown in both lower panels. Although the curves are quite similar we shall check in sect.~\ref{Estimation} that using such a redshift binning does indeed improve the constraints on cosmological parameters. \section{Covariance matrices of low-order estimators} \label{Covnum} In order to use the estimators ${\cal H}_p$ to measure cosmological parameters, through their dependence on cosmology displayed in Fig.~\ref{logH}, we need the covariance matrices $C_{ij}$ introduced in sect.~\ref{Covariance}. The latter are necessary to obtain the relevant error bars through a Fisher matrix analysis (sect.~\ref{Formalism}) or a $\chi^2$ likelihood function (sect.~\ref{Fisherchi2}). \subsection{Impact of noise and non-Gaussianities} \label{Covnoise} \begin{figure} \epsfxsize=8.1 cm \epsfysize=6 cm {\epsfbox{CovH2.ps}} \caption{Covariance of the estimator ${\cal H}_2(\theta_s)$ of the variance $\langle M_{\rm ap}^2\rangle$, eqs.(\ref{cHp})-(\ref{Cij}). The solid line shows the full covariance $C_{ii}=\sigma^2({\cal H}_2(\theta_{si}),{\cal H}_2(\theta_{si}))$ as a function of smoothing angular scale $\theta_{si}$, with no redshift binning. The dotted line displays the contribution $C^{G,s.n.}$ from the galaxy intrinsic ellipticity dispersion to the Gaussian part of $C$. The dashed line $C^{G,c.v.}$ is the cosmic variance contribution to the Gaussian part of $C$ while $C^{c.v.}$ is the cosmic variance contribution to the full matrix $C$.} \label{CovH2} \end{figure} \begin{figure} \epsfxsize=8.1 cm \epsfysize=6 cm {\epsfbox{CovH3.ps}} \caption{Covariance of the estimator ${\cal H}_3(\theta_s)$ of the third-order cumulant $\langle M_{\rm ap}^3\rangle_c$, eq.(\ref{cHp}). 
The various line styles show different contributions to the full covariance $C_{ii}=\sigma^2({\cal H}_3(\theta_{si}),{\cal H}_3(\theta_{si}))$ (solid line) as in Fig.~\ref{CovH2}.} \label{CovH3} \end{figure} \begin{figure} \begin{center} \epsfxsize=4.15 cm \epsfysize=4.5 cm {\epsfbox{CovH2t01.ps}} \epsfxsize=4.15 cm \epsfysize=4.5 cm {\epsfbox{CovH2t2.ps}}\\ \epsfxsize=4.15 cm \epsfysize=4.5 cm {\epsfbox{CovH2t50.ps}} \epsfxsize=4.15 cm \epsfysize=4.5 cm {\epsfbox{CovH2t1000.ps}} \end{center} \caption{Covariance $C_{ij}$ of the estimator $H_2(\theta_s)$ of the variance $\langle M_{\rm ap}^2\rangle$, eqs.(\ref{cHp})-(\ref{Cij}), for different angular scales as a function of $\theta_{si}$ at fixed $\theta_{sj}$, with no redshift binning. The various line styles show different contributions to the full covariance as in Fig.~\ref{CovH2}. The four panels correspond to $\theta_{sj}=0.1'$ (upper left panel), $\theta_{sj}=2'$ (upper right panel), $\theta_{sj}=50'$ (lower left panel) and $\theta_{sj}=1000'$ (lower right panel).} \label{CovHij} \end{figure} We first show in Fig.~\ref{CovH2} the covariance matrix $C_{ij}$ of the estimator ${\cal H}_2(\theta_s)$ of the variance $\langle M_{\rm ap}^2\rangle$, see eqs.(\ref{cHp})-(\ref{Cij}), along the diagonal $i=j$ with no redshift binning ($\theta_{si}=\theta_{sj}$). The full solid line shows the full covariance $C$ while other line styles correspond to the specific contributions $C^{G,s.n.}$, $C^{G,c.v.}$ and $C^{c.v.}$, defined in eqs.(\ref{sigma2Gcv})-(\ref{sigma2cv}). As expected, we check that at small angular scales $\theta_s<1'$ the covariance $C$ is dominated by the shot-noise due to the galaxy intrinsic ellipticity dispersion, whereas at large scales $\theta_s>2'$ the covariance is dominated by the cosmic variance (i.e. the error bar due to the finite size of the survey). 
We can also check that the shot noise contribution $C^{G,s.n.}$ grows as $1/\theta_s^2$ at smaller scales, proportionally to the number of distinct patches of radius $\theta_s$ in the survey. The slow rise of the cosmic variance term $C^{c.v.}$ yields a broad plateau for the full covariance, from $1'$ up to $1000'$, while non-Gaussianities become negligible above $30'$. We can note that even at the smallest scales non-Gaussian terms are only about twice as large as Gaussian terms, so that the Gaussian part $C^G=C^{G,s.n.}+C^{G,c.v.}$ of the covariance matrix is always a good approximation along the diagonal ($\theta_{si}=\theta_{sj}$) for two-point estimators such as ${\cal H}_2$. Next, we display in Fig.~\ref{CovH3} the covariance matrix $C_{ii}$ of the estimator ${\cal H}_3(\theta_s)$ of the third-order cumulant $\langle M_{\rm ap}^3\rangle_c$, again along the diagonal $i=j$ with no redshift binning ($\theta_{si}=\theta_{sj}$), see eq.(\ref{sigma33}). We find again that the shot-noise due to the galaxy intrinsic ellipticities dominates below $2'$. However, it now grows as $1/\theta_s^4$ rather than $1/\theta_s^2$ at the smallest scales (this is due to the term $Q_{ij}^3$ in eq.(\ref{sigma33})). Above $1'$ the covariance is dominated by the cosmic variance but in contrast with Fig.~\ref{CovH2} we now find that non-Gaussian terms are larger than Gaussian terms by an order of magnitude around $\theta_s \sim 3'$ (for the cosmic variance part). This is not really surprising since ${\cal H}_3$ itself is an estimator of non-Gaussianities. However, above $60'$ non-Gaussianities become negligible again as we probe the quasi-linear regime. We show in Fig.~\ref{CovHij} the covariance $C_{ij}$ of the estimator ${\cal H}_2(\theta_s)$ of the variance $\langle M_{\rm ap}^2\rangle$ for different angular scales $\{\theta_{si},\theta_{sj}\}$ with no redshift binning (i.e. we probe $C_{ij}$ off the diagonal whereas Fig.~\ref{CovH2} was restricted to the diagonal $i=j$).
The line styles are as in Fig.~\ref{CovH2}. The wiggles in the lower panels for $C^{G,s.n.}$ correspond to a change of sign (hence we actually plot $|C^{G,s.n.}|$) because at large and different angular scales the first term in eq.(\ref{sigma2Gsn}) can make $C^{G,s.n.}$ negative. When both angular scales are small the covariance is dominated by the intrinsic ellipticity noise (both upper panels with $\theta_{si}<2'$) but as soon as one scale is larger than $10'$ the covariance is dominated by the cosmic variance (both upper panels with $\theta_{si}>2'$ and both lower panels for all $\theta_{si}$). Then, except at large scales where $\theta_{si} \sim \theta_{sj}$ the cosmic variance is dominated by the non-Gaussian terms which can be several orders of magnitude larger than the Gaussian contribution. Note that this is quite different from Fig.~\ref{CovH2} which showed that non-Gaussian contributions were not very important along the diagonal. Moreover, the non-Gaussian contribution to $C_{ij}$ shows a broad plateau as we go farther from the diagonal. This feature, which makes the covariance matrix very broad about the diagonal and reflects a strong correlation between various angular scales, leads to difficulties for the estimation of cosmological parameters since the covariance matrix $C_{ij}$ cannot be easily inverted. The full covariance matrix along with its various components is shown in Fig.~\ref{fig:cov_h2h2} for ${\cal H}_2$. In agreement with Figs.~\ref{CovH2}-\ref{CovHij} the upper right panel which displays $C^{G,s.n.}$ shows that the shot noise increases rapidly at smaller angular scales. Its contribution to the covariance is restricted to the diagonal $\theta_{s1}=\theta_{s2}$ above $10'$ while it mostly depends on $\max(\theta_{s1},\theta_{s2})$ when both scales are below $10'$. On the other hand, the Gaussian contribution $C^{G,c.v.}$ to the cosmic variance (lower right panel) is always restricted close to the diagonal and increases at large scales.
The upper left panel which displays $C^{c.v.}$ clearly shows that non-Gaussianities bring a significant broadening to the covariance matrix which is no longer diagonal-dominated. Finally, the lower left panel which displays the full covariance matrix $C$ reflects these various factors and shows a very broad shape with a steep increase at small angular scales due to the shot-noise. Fig.~\ref{fig:cov_h3h3} which shows the covariance matrix for ${\cal H}_3$ exhibits a similar behaviour albeit with a stronger impact of non-Gaussianities. \begin{figure*} \protect\centerline{ \epsfysize = 3.75truein \epsfbox[0 0 383 395] {h2h2.eps}} \caption{Covariance matrix for the estimator ${\cal H}_2$ for different angular scales. We plot contours of constant $\log C$. {\it Lower left panel:} the full covariance $C$. {\it Upper left panel:} the cosmic variance contribution $C^{c.v.}$ (i.e. the shot noise due to the galaxy intrinsic ellipticity dispersion is set to zero). {\it Lower right panel:} the Gaussian contribution $C^{G,c.v.}$ to the cosmic variance. {\it Upper right panel:} the shot noise contribution $C^{G,s.n.}$ to the Gaussian part of $C$.} \label{fig:cov_h2h2} \end{figure*} \begin{figure*} \protect\centerline{ \epsfysize = 3.75truein \epsfbox[0 0 383 395] {h3h3.eps}} \caption{Same as previous figure but for the estimator ${\cal H}_3$. See text for discussion.} \label{fig:cov_h3h3} \end{figure*} \subsection{Signal to noise ratios} \label{Signal-to-noise-ratios} \begin{figure} \epsfxsize=8.1 cm \epsfysize=6 cm {\epsfbox{NSH23SNAP.ps}} \caption{Inverse of the signal to noise ratios of the estimators ${\cal H}_2$ and ${\cal H}_3$ of the second-order and third-order cumulants of the aperture-mass. The solid lines show the ratios $\sigma(H_p(i),H_p(i))/H_p(i)$ as a function of the angular scale $\theta_{si}$ for $p=2$ (curve labelled $H_2$) and $p=3$ (curve labelled $H_3$) with no redshift binning.
The covariance $\sigma^2=C_{ii}$ also corresponds to the solid line in Figs.~\ref{CovH2}-\ref{CovH3}. The dotted lines show the ratios obtained when we only include Gaussian terms in the covariance $C_{ii}$ while the dashed lines correspond to the uncertainty associated with a $15\%$ variation of $f_2$ with respect to ${\cal H}_2$ and a $50\%$ variation of $f_3$ with respect to ${\cal H}_3$. The dot-dashed line which follows closely the curve obtained for ${\cal H}_3$ shows the noise/signal ratio when we use the estimator ${\cal M}_3$ (eq.(\ref{cMp})) instead of ${\cal H}_3$.} \label{NSH23} \end{figure} Next, we plot in Fig.~\ref{NSH23} the inverse $N/S$ of the signal to noise ratios of the estimators ${\cal H}_2$ and ${\cal H}_3$ of the second-order and third-order cumulants of the aperture-mass. We show the ratios $\sigma(H_p(i),H_p(i))/H_p(i)$ without redshift binning (solid lines). They grow at small scales because of the shot noise due to the galaxy intrinsic ellipticity dispersion and at large scales because of cosmic variance. We also display the noise to signal ratios $N/S$ obtained when we only include the Gaussian terms $C^G$ in the covariance $C_{ii}$ (dotted lines). This corresponds to $C^G=C^{G,s.n.}+C^{G,c.v.}$, that is the sum of dotted lines and dashed lines shown in Figs.~\ref{CovH2}-\ref{CovH3}. In agreement with sect.~\ref{Covnoise} we find that non-Gaussian terms only make a difference in the intermediate range $1'-100'$ and that this effect is quite modest for the estimator ${\cal H}_2$ of the variance $\langleM_{\rm ap}^2\rangle$. For ${\cal H}_3$ the noise ratio can be increased by a factor $3$ around $3'$. As discussed in sect.~\ref{Covnoise} non-Gaussian terms are mainly important for cross-correlations between different angular scales. 
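The impact of such cross-correlations on a combined measurement can be illustrated with the usual quadratic form $(S/N)^2 = {\bf s}^{\rm T} C^{-1} {\bf s}$ for a signal vector ${\bf s}$ measured at several angular scales. A sketch with invented numbers, comparing a diagonal (Gaussian-like) covariance with one carrying broad positive off-diagonal terms:

```python
import numpy as np

# Sketch: how off-diagonal (non-Gaussian) covariance terms degrade the
# combined signal-to-noise of measurements at several angular scales.
# All values are illustrative, not taken from the survey model.
signal = np.array([1.0, 1.0, 1.0, 1.0])   # e.g. H_p at four scales
var = np.array([0.1, 0.1, 0.1, 0.1])      # diagonal covariance entries

C_diag = np.diag(var)                     # Gaussian-like: no cross terms
C_full = C_diag + 0.08 * (np.ones((4, 4)) - np.eye(4))  # broad covariance

def sn(signal, C):
    """Combined S/N = sqrt(s^T C^{-1} s)."""
    return float(np.sqrt(signal @ np.linalg.solve(C, signal)))

sn_gauss = sn(signal, C_diag)   # optimistic: ignores cross-correlations
sn_ngauss = sn(signal, C_full)  # lower: the scales are strongly correlated
```

With positively correlated scales the four measurements are far from independent, so the combined signal-to-noise is substantially below the Gaussian estimate, mirroring the overestimate discussed above.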
It is interesting to note that the angular scale at which the highest signal/noise ratio is achieved shifts to smaller angular scales when we include the contribution to cosmic variance from non-Gaussianities. Thus ignoring non-Gaussian terms in cosmic variance not only gives a wrong impression of a higher signal to noise ratio, but also a wrong estimate of the angular scale where it is achieved. More detailed analyses of how these plots change with variations in survey characteristics are presented in the appendix, see Figs.~\ref{NSH23all} and \ref{NSH23ground}. The very high S/N achieved by surveys such as SNAP for ${\cal H}_3$ clearly indicates the potential of weak lensing surveys in studying even higher order non-Gaussianity such as ${\cal H}_4$ which estimates the kurtosis of the underlying mass distribution. The dot-dashed line which follows closely the curve obtained for ${\cal H}_3$ shows the noise/signal ratio when we use the estimator ${\cal M}_3$ (eq.(\ref{cMp})) instead of ${\cal H}_3$. Thus, in agreement with Valageas et al. (2005) we find that for third-order estimators there is not much gain to be obtained by switching from ${\cal M}_3$ to ${\cal H}_3$. However, for higher-order non-Gaussianities like ${\cal H}_4$ the improvement can be quite significant (see Figs.4 and 5 of Valageas et al. 2005). Finally, the dashed lines in Fig.~\ref{NSH23} show the noise associated with the current uncertainty on the non-linear regime of gravitational clustering, as described by the parameters $f_2$ and $f_3$. Thus, the curve labelled $f_2$ shows $N/S=\Delta {\cal H}_2/{\cal H}_2$ with $\Delta{\cal H}_2=({\cal H}_2(f_2=1.15)-{\cal H}_2(f_2=0.85))/2$ associated with a $15\%$ variation of $f_2$ as described in Fig.~\ref{FigXi}. In agreement with Fig.~\ref{FigXi} and Fig.~\ref{NSH23} this corresponds roughly to a $10\%$ uncertainty on the non-linear power-spectrum. We can see that this is actually the dominant source of noise over the range $0.2'-20'$.
Next, the curve labelled $f_3$ shows $N/S=\Delta {\cal H}_3/{\cal H}_3$ with $\Delta{\cal H}_3=({\cal H}_3(f_3=1.5)-{\cal H}_3(f_3=0.5))/2$ associated with a $50\%$ variation of $f_3$ as described in Fig.~\ref{FigS3}. In agreement with Fig.~\ref{FigS3} and Fig.~\ref{NSH23} this corresponds roughly to a $50\%$ uncertainty on the non-linear skewness and third-order cumulant. This is the main source of noise over the range $0.3'-20'$. Thus, in agreement with some previous studies we find that current theoretical uncertainties on non-linear gravitational clustering are the limiting factor in constraining cosmological parameters from large weak-lensing surveys with characteristics similar to the SNAP mission. \subsection{Cross-correlation of redshift bins} \label{Crossredshift} \begin{figure*} \protect\centerline{ \epsfysize = 2.25truein \epsfbox[0 0 510 190] {h2h2_zbin.eps}} \caption{The covariance and cross-correlation are displayed for two different redshift bins. The left panel shows the covariance of ${\cal H}_2$ for the lower redshift bin and the right panel corresponds to the higher redshift bin. The middle panel shows the cross-correlation between both redshift bins.} \label{fig:cov_h2h2_zbin} \end{figure*} \begin{figure*} \protect\centerline{ \epsfysize = 2.25truein \epsfbox[0 0 513 190] {h3h3_zbin.eps}} \caption{Same as Fig.~\ref{fig:cov_h2h2_zbin} but for ${\cal H}_3$.} \label{fig:cov_h3h3_zbin} \end{figure*} \begin{figure} \epsfxsize=8.1 cm \epsfysize=6 cm {\epsfbox{NSH23z2SNAP.ps}} \caption{Inverse of the signal to noise ratios of the estimators ${\cal H}_2$ and ${\cal H}_3$ as in Fig.~\ref{NSH23} but for two redshift bins.
The solid lines correspond to the low redshift sub-sample ($z_s<1.23$) whereas the dashed lines show the results for the high redshift bin ($z_s>1.23$).} \label{NSH23z2} \end{figure} \begin{figure} \epsfxsize=8.1 cm \epsfysize=6 cm {\epsfbox{NSH11SNAP.ps}} \caption{Inverse of the signal to noise ratios of the estimators ${\cal H}_{11},{\cal H}_{21}$ and ${\cal H}_{12}$ associated with the cross-correlation between both redshift bins. The solid lines are the full noise/signal ratios while the dotted lines are the results which we obtain when we only keep Gaussian terms in the covariance matrices.} \label{NSH11} \end{figure} We show in Fig.~\ref{NSH23z2} the noise/signal ratio we obtain for each redshift bin when we divide the sample into two sub-samples separated by the source redshift $z_s=1.23$. The curves are similar to those displayed in Fig.~\ref{NSH23} for the full sample. For clarity we do not show the curves associated with the theoretical uncertainties ($f_2$ and $f_3$) or with the Gaussian approximation to the covariance matrices. They also follow the behaviour of Fig.~\ref{NSH23}. We can see that the signal to noise ratio is larger for the high redshift bin (note that we plot its inverse N/S). This is expected since the longer line of sight leads to a larger amplitude of weak lensing effects. However at very large angles the signal to noise ratio becomes slightly better for the low redshift bin for the estimator ${\cal H}_3$ of non-Gaussianities because of the growth of non-Gaussian gravitational clustering. Finally, we show in Fig.~\ref{NSH11} the noise/signal ratios we obtain for the estimators ${\cal H}_{11},{\cal H}_{21}$ and ${\cal H}_{12}$ which directly measure the cross-correlation between both redshift bins. Here we defined ${\cal H}_{pq}$ in a manner similar to ${\cal H}_p$ introduced in sect.~\ref{Low-order} such that their mean is: $\langle{\cal H}_{pq}\rangle=\langle M_{{\rm ap}1}^p M_{{\rm ap}2}^q\rangle_c$ \ where $M_{{\rm ap}1}$ (resp. 
$M_{{\rm ap}2}$) is the aperture-mass associated with the low (resp. high) redshift bin, see Munshi \& Valageas (2005). Of course we can check that the behaviour of different curves is similar to that obtained in previous figures. In particular, the signal to noise ratio is slightly better for ${\cal H}_{12}$ than for ${\cal H}_{21}$ since the former gives more weight to the high redshift bin, in agreement with Fig.~\ref{NSH23z2}. However, we note that the inclusion of non-Gaussian terms in the covariance matrices now makes a large difference for the third-order estimators ${\cal H}_{21}$ and ${\cal H}_{12}$. Indeed, a Gaussian approximation could overestimate the signal to noise ratio by up to a factor ten. The full covariance matrix is displayed in Fig.~\ref{fig:cov_h2h2_zbin} for ${\cal H}_2$ and Fig.~\ref{fig:cov_h3h3_zbin} for ${\cal H}_3$. In agreement with the discussion above we find that the covariance is larger for the high-redshift bin, together with the weak lensing signal itself. As seen in Fig.~\ref{NSH23z2} the overall effect is still to improve the signal to noise ratio for the high-$z$ bin. The shape of the covariance matrix within each bin (left and right panels) is similar to the behaviour obtained for the full survey with no redshift binning (Figs.~\ref{fig:cov_h2h2},\ref{fig:cov_h3h3}). The cross-correlation between both bins (middle panels) shows a similarly broad feature due to non-Gaussianities. However, there is no rise at small angular scales due to shot noise because the two bins have no galaxies in common (hence the shot noise contribution vanishes). \section{Estimation of cosmological parameters} \label{Estimation} \subsection{Formalism} \label{Formalism} Bayes' theorem (see e.g. Jaynes, 2003) provides an interesting starting point for most parameter estimation studies.
Assuming a cosmological data vector ${\bf a}$ the likelihood function of the cosmological parameter set ${\bf \Theta}$ can be described as: \begin{equation} {\cal L}({\bf \Theta}|{\bf a}) = {{\cal L}({\bf a}|{\bf \Theta})\, {\cal L}({\bf \Theta}) \over {\cal L}({\bf a})}. \end{equation} Here ${\cal L}(\Theta)$ indicates the prior likelihood function of the parameter set $\Theta$ and ${\cal L}(\Theta|{\bf a})$ denotes the posterior likelihood function. Normalisation determines ${\cal L}({\bf a})$ and the prior comes from other cosmological observations or, as in our case, it is assumed to be constant as we do not include any other observational information. The factor ${\cal L}({\bf a}|{\bf \Theta})$ describes the distribution function of the observed data vector ${\bf a}$ for a given cosmological parameter set ${\bf \Theta}$. Assuming a multi-variate Gaussian distribution one can express ${\cal L}({\bf a}|{\bf \Theta})$ as \begin{equation} {\cal L}({\bf a}|{\bf \Theta}) = { 1 \over {\sqrt{{(2\pi)}^{\rm N} {\rm det}\, C(\Theta)}}} {\rm exp} \left ( -{1 \over 2} ({\bf a}-\mu)^{\rm T} {\bf C}^{-1} ({\bf a}-\mu) \right ), \end{equation} where $\mu=\langle {\bf a}\rangle$ is the mean and ${\bf C}^{-1}$ is the inverse of the covariance matrix ${\bf C}=\langle{\bf a}^T {\bf a} \rangle - \langle {\bf a}^T \rangle \langle {\bf a} \rangle$ of the data vector ${\bf a}$ being used. The associated log-likelihood statistic is defined as $\chi^2/2$ where $\chi^2 = ({\bf a}-\mu)^{\rm T} {\bf C}^{-1}({\bf a}-\mu)$. The covariance matrix ${\bf C}$ is a function of the underlying cosmological parameters. Its derivatives w.r.t. various cosmological parameters along with the derivatives of the mean $\mu_{\alpha}$ are computed numerically while constructing the Fisher matrix ${\rm F}_{\alpha\beta}$. Assuming a fiducial cosmological model, for a given data vector, the estimation error of cosmological parameters associated with an unbiased estimator can be constructed from the Fisher matrix formalism (see Tegmark, 1997, Matsubara \& Szalay 2002, Takada \& Jain 2003).
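As a concrete sketch of this Gaussian likelihood, a small routine evaluating $\chi^2$ and $\ln {\cal L}$; the data vector, mean and covariance below are placeholders rather than actual aperture-mass measurements:

```python
import numpy as np

# Minimal sketch of the multivariate-Gaussian likelihood:
#   chi^2 = (a - mu)^T C^{-1} (a - mu),
#   ln L  = -(chi^2 + ln det C + N ln 2pi) / 2.
def chi2(a, mu, C):
    r = a - mu
    return float(r @ np.linalg.solve(C, r))

def log_likelihood(a, mu, C):
    n = len(a)
    sign, logdet = np.linalg.slogdet(C)
    return -0.5 * (chi2(a, mu, C) + logdet + n * np.log(2 * np.pi))

# Placeholder 2-component data vector with a correlated covariance.
C = np.array([[0.04, 0.01],
              [0.01, 0.09]])
mu = np.array([1.0, 2.0])
a = np.array([1.1, 1.9])
x2 = chi2(a, mu, C)
```

Using `np.linalg.solve` rather than inverting ${\bf C}$ explicitly is numerically safer, which matters here since the covariance matrices discussed above are nearly ill-conditioned.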
The Fisher information matrix (Kendall \& Stuart 1969), which is related to the inverse covariance matrix of the cosmological parameters, assesses how well the data vector can distinguish the fiducial model from other models. Thus the error on the parameters $\theta_{\alpha}$ obeys: $\langle \triangle \theta_{\alpha} \triangle \theta_{\beta} \rangle \ge F^{-1}_{\alpha\beta}$. The left hand side is computed for specific values of the parameter vector $\Theta_0$. The equality is obtained for maximum likelihood estimates of the parameters. This inequality - which is also known as the Cram\'er-Rao inequality - provides a minimum variance bound for unbiased estimators and can be used to study estimation errors and their cross-correlations for various parameter sets for a given survey strategy. An analytical expression for the Fisher matrix ${\rm F}$ in terms of the covariance and mean of the data can be written as (Tegmark et al. 1997): \begin{equation} \rm F_{\alpha\beta} = \mu_{\alpha}^{\rm T} {\rm C}^{-1}\mu_{\beta}+{1 \over 2} Tr \left [ (ln~C)_{\alpha} (ln~C)_{\beta} \right ] \end{equation} where $\rm (ln~C)_{\alpha} = C^{-1}~C_\alpha$ is the derivative of $\rm \ln C$ w.r.t. the parameter $\theta_{\alpha}$, and $\mu_{\alpha}$ denotes the derivative of $\mu = \langle {\bf a} \rangle$ w.r.t. the parameter $\theta_{\alpha}$. The first term corresponds to the case when only the mean is estimated from the data vector whereas the second term is the error associated with variance estimation from the data vector ${\bf a}$. The Fisher matrix is a positive-definite matrix. It is dominated by the linear order term related to the estimation of the mean of the data. For other cosmological data sets where the mean is fixed, such as the mean temperature of the CMB, the variance term can be the dominant term. Here we include both terms in our analysis.
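Since the derivatives of the mean and covariance are computed numerically, the Fisher matrix can be assembled by finite differencing. A sketch with a purely illustrative toy model (the `mean` and `cov` functions below are hypothetical stand-ins for the dependence of ${\cal H}_p$ and $C$ on the parameters):

```python
import numpy as np

# Sketch of F_ab = mu_,a^T C^{-1} mu_,b + (1/2) Tr[C^{-1} C_,a C^{-1} C_,b]
# with centred finite differences.  The toy mean/covariance models below
# are illustrative only.
def mean(p):
    return np.array([p[0] + p[1], p[0] * p[1], p[0]])

def cov(p):
    return np.diag([0.1 + 0.01 * p[0] ** 2, 0.1, 0.1 + 0.01 * p[1] ** 2])

def fisher(p, eps=1e-5):
    npar = len(p)
    Cinv = np.linalg.inv(cov(p))
    dmu, dC = [], []
    for a in range(npar):
        dp = np.zeros(npar)
        dp[a] = eps
        dmu.append((mean(p + dp) - mean(p - dp)) / (2 * eps))
        dC.append((cov(p + dp) - cov(p - dp)) / (2 * eps))
    F = np.zeros((npar, npar))
    for a in range(npar):
        for b in range(npar):
            F[a, b] = (dmu[a] @ Cinv @ dmu[b]
                       + 0.5 * np.trace(Cinv @ dC[a] @ Cinv @ dC[b]))
    return F

F = fisher(np.array([0.3, 0.8]))  # evaluated at an assumed fiducial point
```

Both the mean term and the trace term are kept, as in the analysis above; the result is symmetric and positive definite by construction.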
\begin{equation} {\rm F}_{\alpha\beta}(\Theta_0) = \left \langle {\partial {\rm ln} {\cal L} (a|\Theta) \over \partial \Theta_{\alpha}} {\partial {\rm ln} {\cal L} (a|\Theta) \over \partial \Theta_{\beta}} \right \rangle = - \left \langle {\partial^2 {\rm ln} {\cal L} (a|\Theta) \over \partial \Theta_{\alpha}\partial \Theta_{\beta}} \right \rangle \end{equation} where $\langle \dots \rangle$ represents averaging over all possible data realizations for fixed cosmological parameter values $\Theta=\Theta_0$. The ellipsoids defined by the equation $\sum_{\alpha,\beta} \Delta \theta_{\alpha} F_{\alpha \beta} \Delta \theta_{\beta} = \lambda^2$ define regions in a multidimensional parameter space which can be interpreted as error bounds for an experimental setup. A specific choice for $\lambda$ defines a $\lambda\sigma$ confidence level. Although strictly speaking such an interpretation is valid only when the likelihood function is Gaussian, for a mildly non-Gaussian case it can still provide a useful estimate of the errors associated with the estimators. Therefore, up to a normalisation constant the marginalised two-dimensional likelihood function ${\cal L}(\theta_{\alpha},\theta_{\beta})$ for two parameters $\theta_{\alpha}$ and $\theta_{\beta}$ can be expressed as: \begin{equation} \sim {\rm exp} \left [ -{ 1 \over 2 } (\triangle \theta_{\alpha}, \triangle \theta_{\beta}) \left ( \begin{array} {c c} {{\rm F}^{-1}}_{\alpha \alpha} & {{\rm F}^{-1}}_{\alpha \beta} \\ {{\rm F}^{-1}}_{\alpha \beta} & {{\rm F}^{-1}}_{\beta \beta} \end{array} \right )^{-1} \left ( \begin{array} {c} {\triangle \theta_{\alpha}}\\ {\triangle \theta_{\beta}} \end{array} \right ) \right ] \end{equation} Here $({\rm F^{-1}})_{\alpha \beta}$ represents the $\alpha\beta$-element of the inverse of the original higher dimensional Fisher matrix $\rm F$, so that the matrix appearing in the exponent is the inverse of the marginalised $2\times2$ parameter covariance. The marginalised error-ellipses therefore can be directly deduced from this expression.
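In practice the ellipse geometry follows from the $2\times2$ sub-block of ${\rm F}^{-1}$, which is the marginalised covariance of the parameter pair: its eigen-decomposition gives the semi-axes and orientation of the $\lambda\sigma$ contour, and the square roots of its diagonal are the marginalised one-dimensional errors. A sketch with an invented $3\times3$ Fisher matrix:

```python
import numpy as np

# Sketch: marginalised 2D error ellipse for a parameter pair (a, b)
# extracted from a larger (toy, invented) Fisher matrix F.
F = np.array([[40.0, 10.0, 4.0],
              [10.0, 25.0, 2.0],
              [ 4.0,  2.0, 9.0]])

Finv = np.linalg.inv(F)
sub = Finv[np.ix_([0, 1], [0, 1])]  # marginalised covariance of (a, b)

lam = 3.0                           # 3-sigma contour
evals, evecs = np.linalg.eigh(sub)  # eigenvalues in ascending order
semi_axes = lam * np.sqrt(evals)    # short and long semi-axes
angle = np.arctan2(evecs[1, -1], evecs[0, -1])  # orientation of long axis

# Marginalised 1D errors: sqrt of the diagonal of F^{-1} (Cramer-Rao).
sigma_a, sigma_b = np.sqrt(np.diag(sub))
```

Note that $\sigma_a^2 = ({\rm F}^{-1})_{aa} \ge 1/{\rm F}_{aa}$: the marginalised error is never smaller than the conditional one.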
The likelihood contours start to deviate from their Fisher counterparts when higher-order correction terms start to become important at a large distance from the minimum. In general, when a partial set of parameters is marginalised over the other parameters the error covariance of the remaining parameters is given by a sub-matrix of the full inverse Fisher matrix $\rm F^{-1}$. Marginalisation in general causes a broadening of error ellipses due to the reduced amount of prior information being used. \begin{equation} {\rm F} = \left( \begin{array} {c c} {{\rm F}}_{AA}& {{\rm F}}_{AB} \\ {{\rm F}}_{AB}^{\rm T}& {{\rm F}}_{BB} \end{array} \right) \end{equation} It can be shown that the marginalised $n_A \times n_A$ Fisher matrix ${\tilde {\rm F}}$ can be expressed as ${\tilde {\rm F}} = {\rm F}_{AA} - {\rm F}_{AB} {\rm F}_{BB}^{-1}{\rm F}_{AB}^{\rm T} $. The second term in this expression provides the correction to the first term, due to marginalisation. In the absence of cross-correlations the correction term vanishes. In our study, the data vector ${\bf a}$ consists of various choices of measurements of ${\cal H}_2$ and ${\cal H}_3$ at different angular scales for different redshift bins ${\bf a}_i = ({\cal H}_2(i;z_a), {\cal H}_3(i;z_a))$. Here we have introduced an additional index $a$ parametrising the redshift bin $z_a$ under consideration. A slightly more compact notation was used in previous sections where $i$ was allowed to run over different angular scales and different redshift bins. Accordingly the covariance matrix has a block structure with various blocks denoting the covariances of ${\cal H}_2(i;z_a)$ and ${\cal H}_3(i;z_a)$ along with their cross-covariances between same and different redshift bins. In our analysis we have considered various combinations of the estimators ${\cal H}_2$ and ${\cal H}_3$, independently and jointly and for one or two redshift bins, to study errors in parameter estimation.
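The block-marginalisation rule can be checked numerically: the Schur complement of the marginalised block equals the inverse of the corresponding sub-block of ${\rm F}^{-1}$. A sketch with an invented positive-definite Fisher matrix:

```python
import numpy as np

# Sketch verifying the marginalisation rule: with F split into blocks over
# kept parameters (A) and marginalised parameters (B), the marginalised
# Fisher matrix is the Schur complement
#   F_tilde = F_AA - F_AB F_BB^{-1} F_AB^T.
# The 4x4 matrix below is an invented, diagonally dominant (hence PD) toy.
F = np.array([[30.0,  8.0,  5.0,  1.0],
              [ 8.0, 20.0,  3.0,  2.0],
              [ 5.0,  3.0, 15.0,  4.0],
              [ 1.0,  2.0,  4.0, 10.0]])

nA = 2  # keep the first two parameters, marginalise the rest
F_AA, F_AB, F_BB = F[:nA, :nA], F[:nA, nA:], F[nA:, nA:]

F_marg = F_AA - F_AB @ np.linalg.solve(F_BB, F_AB.T)

# Same answer via the inverse-Fisher route: invert, take sub-block, invert.
F_marg_check = np.linalg.inv(np.linalg.inv(F)[:nA, :nA])
```

The cross-check also makes the broadening explicit: the diagonal of the marginalised parameter covariance is never smaller than its conditional counterpart, consistent with the statement that marginalisation broadens the error ellipses.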
Different cases that we have considered can be summarised as follows. \begin{itemize} \item Various angular scales $\theta_s$ with {\it no} redshift information, and only 2-point information from ${\cal H}_2(i)$. \item Various angular scales $\theta_s$ with {\it no} redshift information, but 2-point ${\cal H}_2(i)$ and 3-point ${\cal H}_3(i)$ information. \item Various angular scales $\theta_s$ {\it with} redshift information, but only 2-point ${\cal H}_2(i,z_a)$ information. \item Various angular scales $\theta_s$ {\it with} redshift information, with both 2-point and 3-point ${\cal H}_2(i,z_a),{\cal H}_3(i,z_a) $ information. \end{itemize} For each of these choices we analyse various combinations of cosmological parameters independently or jointly, with or without prior information regarding the median source redshift $z_0$. Without prior redshift information the joint covariance matrix ${\bf C}$ is ill-conditioned and numerical inversions are less stable. Due to the broad covariance structure of the matrix ${\bf C}$ there is no completely independent information at different angular scales $\theta_{si}$, especially when there is no redshift information. \subsection{Use of redshift binning and third-order estimators} \label{redshift-binning-third-order-estimators} \begin{figure} \protect\centerline{ \epsfysize = 1.95truein \epsfbox[21 413 590 719] {om_sigma.eps}} \caption{Results of Fisher matrix analysis for the parameter pair $\{\Omega_{\rm m},\sigma_8\}$, combining the three angular scales $\theta_s=0.1', 10'$ and $1000'$. Perfect knowledge of other parameters is assumed. For clarity only $3\sigma$ contours are displayed.
{\it Left panel}: Fisher ellipses obtained without redshift binning when we consider only the variance $\langle M_{\rm ap}^2\rangle$ (estimator ${\cal H}_2$, dashed line), only the third order cumulant $\langle M_{\rm ap}^3\rangle_c$ (estimator ${\cal H}_3$, long dashed line), and both cumulants for joint analysis (${\cal H}_2$ and ${\cal H}_3$, solid line). {\it Right panel}: Fisher analysis with two redshift bins. We consider the two bins separately for ${\cal H}_2$ (dashed and long dashed lines) and jointly for ${\cal H}_3$ only (dot-dashed line) and for the joint pair ${\cal H}_2$ and ${\cal H}_3$ (solid line).} \label{fig:om_sigma_fish1} \end{figure} \begin{figure} \protect\centerline{ \epsfysize = 1.95truein \epsfbox[21 413 590 719] {om_wde.eps}} \caption{Same as Fig.~\ref{fig:om_sigma_fish1} but for the $\{\Omega_{\rm m},w_{\rm de}\}$ pair.} \label{fig:om_wde_fish1} \end{figure} \begin{figure} \protect\centerline{ \epsfysize = 1.95truein \epsfbox[21 413 590 719] {om_n.eps}} \caption{Same as Fig.~\ref{fig:om_sigma_fish1} but for the $\{\Omega_{\rm m},n_s\}$ pair.} \label{fig:om_n_fish1} \end{figure} \begin{figure} \protect\centerline{ \epsfysize = 1.95truein \epsfbox[21 413 590 719] {n_sig8.eps}} \caption{Same as Fig.~\ref{fig:om_sigma_fish1} but for the $\{\sigma_8,n_s\}$ pair.} \label{fig:n_sig8_fish1} \end{figure} In Figs.~\ref{fig:om_sigma_fish1}-\ref{fig:n_sig8_fish1} we present our results from the Fisher analysis described in sect.~\ref{Formalism} for various cosmological parameter pairs, combining the three angular scales $\theta_s=0.1', 10'$ and $1000'$. Perfect knowledge of remaining parameters is assumed in each case and only $3\sigma$ contours are plotted for clarity. We display both separate and joint analysis of second-order and third-order cumulants (estimators ${\cal H}_2$ and ${\cal H}_3$) as well as the results obtained without redshift binning (left panels) and with two redshift bins (right panels).
As expected, we find that in all cases the $3\sigma$ contours are much larger for the third-order cumulant $\langle M_{\rm ap}^3\rangle_c$ than for the variance $\langle M_{\rm ap}^2\rangle_c$. Higher-order cumulants would be even more noisy. However, for the pairs $\{\Omega_{\rm m},\sigma_8\}$ and $\{\Omega_{\rm m},w_{\rm de}\}$ the degeneracy directions (long axis of the ellipse) are significantly different so that combining ${\cal H}_2$ and ${\cal H}_3$ greatly improves the constraints on cosmology as compared with ${\cal H}_2$ alone (see left panels). Indeed, the long axis of the ${\cal H}_2$-ellipse can be substantially reduced by the intersection with the small axis of the ${\cal H}_3$-ellipse. This is most clearly seen in Fig.~\ref{fig:om_wde_fish1} for the $\{\Omega_{\rm m},w_{\rm de}\}$ pair. For the $\{\Omega_{\rm m},n_s\}$ and $\{\sigma_8,n_s\}$ pairs where the $3\sigma$ ellipses associated with ${\cal H}_2$ and ${\cal H}_3$ are almost aligned the ${\cal H}_2$ contour is well within the ${\cal H}_3$-ellipse so that adding the third-order cumulant does not tighten the constraints on cosmology. Next, we show in the right panels the effects of redshift binning. For the $\{\Omega_{\rm m},\sigma_8\}$ pair the ${\cal H}_2$-ellipses have a very high eccentricity and are very thin so that the small change of orientation between the two redshift bins leads to a significant tightening of the intersection area which improves the constraints on these cosmological parameters. For other pairs of parameters the ${\cal H}_2$-ellipses associated with the lower redshift bin (1) are significantly broader than for the high redshift bin and do not improve the constraints on cosmology. Then, redshift binning is not really useful in these cases. The fact that the low redshift bin yields poorer constraints on cosmological parameters can be understood from the smaller line of sight which gives less room for weak-lensing effects.
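The gain from combining estimators with different degeneracy directions can be sketched at the Fisher level, where independent measurements simply add, ${\rm F}_{\rm joint} = {\rm F}_2 + {\rm F}_3$. The two toy matrices below are nearly degenerate (thin ellipses) along different, assumed directions; neither the tilts nor the amplitudes are taken from the actual analysis:

```python
import numpy as np

# Sketch: two thin error ellipses with different tilts intersect in a
# much smaller region.  In Fisher language, F_joint = F_2 + F_3.
def fisher_along(direction, strong=1000.0, weak=1.0):
    """Toy 2x2 Fisher matrix nearly degenerate along `direction`."""
    d = np.asarray(direction, float)
    d = d / np.linalg.norm(d)
    perp = np.array([-d[1], d[0]])
    return weak * np.outer(d, d) + strong * np.outer(perp, perp)

F2 = fisher_along([1.0, -1.0])   # stand-in for the H_2 degeneracy banana
F3 = fisher_along([1.0, -0.5])   # assumed different tilt for H_3

def marg_errors(F):
    """Marginalised 1-sigma errors: sqrt of the diagonal of F^{-1}."""
    return np.sqrt(np.diag(np.linalg.inv(F)))

err2 = marg_errors(F2)
err_joint = marg_errors(F2 + F3)  # intersection of the two thin ellipses
```

When the two tilts coincide, by contrast, the joint errors barely improve on the better of the two, which mirrors the behaviour found above for the $\{\Omega_{\rm m},n_s\}$ and $\{\sigma_8,n_s\}$ pairs.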
\subsection{Comparison of Fisher analysis and $\chi^2$ likelihood} \label{Fisherchi2} \begin{figure} \protect\centerline{ \epsfysize = 3.25truein \epsfbox[0 0 370 309] {m2m3.eps}} \caption{$1\sigma$ and $3\sigma$ Fisher ellipses compared with $\chi^2$ contours, combining the three angular scales $\theta_s=0.1',10'$ and $1000'$ without redshift binning. We only display the results corresponding to joint analysis of second-order and third-order cumulants.} \label{fig:chi_1bin_m2m3} \end{figure} \begin{figure} \protect\centerline{ \epsfysize = 3.truein \epsfbox[0 0 370 339] {z1z2_m2m3_omega.eps}} \caption{Same as Fig.~\ref{fig:chi_1bin_m2m3} but using tomography. The survey is split into two redshift bins and their constraints on cosmology are combined to yield the contours shown in the four panels for various parameter pairs.} \label{fig:chi_omega} \end{figure} \begin{figure} \protect\centerline{ \epsfysize = 3.truein \epsfbox[0 0 370 340] {z1z2_m2m3_sigma.eps}} \caption{Same as Fig.~\ref{fig:chi_omega} for other cosmological parameter pairs.} \label{fig:chi_sigma} \end{figure} We now compare the results of the Fisher matrix analysis with the contour plots obtained through a $\chi^2$ likelihood function. We again combine the three angular scales $\theta_s=0.1',10'$ and $1000'$ and we only display the joint analysis of second-order and third-order cumulants. We first show in Fig.~\ref{fig:chi_1bin_m2m3} our results with no redshift binning. We can check that the $\chi^2$ likelihood agrees well with the Fisher matrix analysis. Moreover, it happens that both $1\sigma$ and $3\sigma$ contours are close to elliptic so that the Fisher matrix analysis appears to be sufficient, except for the broader $3\sigma$ contour of the $\{\Omega_{\rm m},w_{\rm de}\}$ pair which is large enough to see distortions from the elliptic shape (i.e. 
far from the fiducial model $\{\Omega_{\rm m}=0.3,w_{\rm de}=-1\}$ the deviations of cumulants $\langle M_{\rm ap}^p\rangle_c$ are no longer linear). Next, we show in Figs.~\ref{fig:chi_omega}-\ref{fig:chi_sigma} a comparison of Fisher matrix analysis with the $\chi^2$ likelihood function using redshift binning. In agreement with sect.~\ref{redshift-binning-third-order-estimators} we find that tomography is most efficient for the pair $\{\Omega_{\rm m},\sigma_8\}$. Both Fisher matrix and $\chi^2$ contour plots are consistent, as in Fig.~\ref{fig:chi_1bin_m2m3}. Again, we find that the $3\sigma$ areas are small enough to have elliptic shapes. \subsection{Dependence on uncertainties on the mean redshift} \label{Mean-redshift} \begin{figure} \protect\centerline{ \epsfysize = 3.truein \epsfbox[0 0 370 330] {z1z2_m2m3_redshift.eps}} \caption{Same as Fig.~\ref{fig:chi_omega} for other cosmological parameter pairs including the typical source redshift $z_0$.} \label{fig:chi_red} \end{figure} We now investigate the impact of any uncertainty on the source redshift distribution on weak-lensing results. Thus, we show in Fig.~\ref{fig:chi_red} Fisher matrix and $\chi^2$ contours which can be obtained for various cosmological parameters letting the typical source redshift $z_0$ defined in eq.(\ref{nzSNAP}) free (in each panel other parameters are assumed to be known). Then, we can directly read in Fig.~\ref{fig:chi_red} by how much any inaccuracy on the source redshift distribution can broaden the constraints on cosmology. Note that even when we do not impose any prior on $z_0$ such weak-lensing observations (assuming again that other parameters are given) are able to recover $z_0$ by themselves to within $20\%$. If $z_0$ is known to a better accuracy one can truncate the contours in Fig.~\ref{fig:chi_red}.
Depending on the cosmological parameter of interest, the uncertainty on $z_0$ can multiply its error bar by a factor of 2 ($n_s$) up to 4 ($\Omega_{\rm m}$ or $\sigma_8$). Therefore, a good accuracy on the source redshifts can be quite rewarding. \subsection{Marginalizing over unknown parameters} \label{Marginalizing} \begin{figure} \protect\centerline{ \epsfysize = 3.5truein \epsfbox[20 146 590 713] {all_fisher.eps}} \caption{Marginalised 3-$\sigma$ Fisher contours are displayed for various pairs of cosmological parameters. The inner solid contours correspond to the case where all hidden parameters are assumed to be known perfectly. The outermost long-dashed lines correspond to the case where all hidden cosmological parameters and the median redshift $z_0$ are marginalised. The short-dashed lines correspond to marginalisation over all hidden cosmological parameters except $z_0$ (i.e. a perfect knowledge of $z_0$ is assumed). In all cases two redshift bins are considered and information at the two-point and three-point level is included for a SNAP-class survey. The angular scales considered are $\theta_s=0.1'$, $10'$ and $1000'$.} \label{fig:margin_z0} \end{figure} Fig.~\ref{fig:margin_z0} shows the individual error ellipses for parameter pairs as various degrees of marginalisation are applied. For the case of two redshift bins and a joint two-point and three-point analysis, we plot the effect on the estimation error of having no prior knowledge of $z_0$ or of any other parameter. Clearly, accurate knowledge of the power spectrum and bi-spectrum evolution is essential for putting any constraints on the spectral index $n_s$ or its running $\alpha_s$. Results for the marginalised errors are presented in units of $f_{sky}^{-1/2}$ in table~\ref{table:map1}. Although the entries represent an extrapolation of flat-sky results to the spherical sky, it is unlikely that orders of magnitude will change with a more accurate all-sky approach.
The second column shows $F_{\alpha\alpha}^{-1/2}$ for the individual parameters, where complete knowledge of all other parameters is assumed. The third column shows the marginalised errors when the other parameters and $z_0$ are unknown. Similarly, the fourth column presents the errors with the additional prior that $z_0$ is known perfectly. The last column corresponds to the determination of only $\Omega_{\rm m}$, $\sigma_8$ and the dark energy equation of state $w_{\rm de}$, with all other parameters assumed to be known. Finally, we present the cross-correlations $r_{ij}$ among the estimation errors for various parameter pairs in table~\ref{table:cross}. The significant correlations show that it is difficult to measure all parameters simultaneously with a good accuracy if there are no priors from other data sets. This is consistent with Fig.~\ref{fig:margin_z0} and with the comparison between the second and third columns of table~\ref{table:map1}. \begin{table*} \begin{center} \caption{Scatter in the estimated parameters in units of ${\rm f^{-1/2}_{sky}}$ for individual and joint estimation. The angular scales involved are $0.1'$, $10'$ and $1000'$. Values quoted within parentheses are from the two-point analysis whereas the others are from the joint analysis of two- and three-point statistics. Information from two redshift bins is considered.
Different columns show the cases where all other parameters are known (2nd column), we marginalize over the other cosmological parameters $X_j$ and the redshift $z_0$ (3rd column), we marginalize over the other parameters $X_j$ only (4th column), and only the three parameters $\{\Omega_{\rm m},\sigma_8,w_{\rm de}\}$ are unknown (5th column).} \label{table:map1} \begin{tabular} {@{}lccccc} \hline \hline & $\langle M_{\rm ap}^2\rangle$ & ${\rm F}_{ii}^{-1/2}$ & $[{\rm F}^{-1}]_{ii}^{1/2} \; \{X_j,z_0\} $ & $[{\rm F}^{-1}]_{ii}^{1/2} \; \{X_j\} $ & $[{\rm F}^{-1}]_{ii}^{1/2} \; \{\Omega_{\rm m},\sigma_8,w_{\rm de}\}$ \\ \hline \hline & $\sigma(\Omega_{\rm m})$ & .00016(.00018) & .0059(.037) & .0022(.0039) & .0010(.0010) \\ & $\sigma(\sigma_8)$ & .00025(.00027) & .0075(.088) & .0025(.0160) & .0017(.0055)\\ & $\sigma(w_{\rm de})$ & .00439(.00626) & .0606(.523) & .013(.4425) & .0116(.1357) \\ & $\sigma(n_s)$ & .00173(.00194) & .0305(.082) & .0207(.0258) & - \\ & $\sigma(\alpha_s)$ & .00111(.00112) & .0101(.011) & .0083(.0091) & - \\ & $\sigma(z_0)$ & .00181(.00204) & .0370(.405) & - & - \\ \hline \hline \end{tabular} \end{center} \end{table*} \begin{table*} \begin{center} \caption{Cross-correlation $r_{ij}$ among the estimation errors for various parameters. Values quoted within parentheses are from the two-point analysis whereas the others are from the joint analysis of two-point and three-point statistics. Two redshift bins are considered.
The angular scales involved are $0.1'$, $10'$ and $1000'$.} \label{table:cross} \begin{tabular} {@{}lccccccc} \hline \hline & $r_{ij}$ & $\Omega_{\rm m}$ & $\sigma_8$ & $w_{\rm de}$ & $n_s$ & $\alpha_s$ & $z_0$ \\ \hline \hline & $\Omega_{\rm m}$ & $+1.00$ & - & - & - & - & - \\ & $\sigma_8$ & $-0.88$ $(-0.96)$ & $+1.00$ & - & - & - & - \\ & $w_{\rm de}$ & $+0.96$ $(+0.60)$ & $-0.92$ $(-0.37)$ & $+1.00$ & - & - & - \\ & $n_s$ & $-0.87$ $(-0.97)$ & $+0.56$ $(+0.90)$ & $-0.78$ $(-0.63)$ & $+1.00$ & - & - \\ & $\alpha_s$ & $+0.75$ $(+0.59)$ & $-0.37$ $(-0.53)$ & $+0.64$ $(+0.29)$ & $-0.97$ $(-0.75)$ & $+1.00$ & - \\ & $z_0$ & $+0.92$ $(+0.99)$ & $-0.94$ $(-0.98)$ & $+0.97$ $(+0.53)$ & $-0.73$ $(-0.95)$ & $+0.58$ $(+0.55)$ & $+1.00$ \\ \hline \hline \end{tabular} \end{center} \end{table*} \section{Discussion} \label{Discussion} We have used weak lensing tomography in real space at the two-point (equivalently, power spectrum) and three-point (equivalently, bi-spectrum) level to study how accurately the background dynamics and the nature of dark energy can be probed by future weak lensing surveys such as SNAP. It is well known that the weight functions and the growth rate dependencies are different for the second-order $\langle M_{\rm ap}^2\rangle$ and third-order $\langle M_{\rm ap}^3\rangle$ moments. This complementary information, along with the tomographic information, helps to reduce the level of degeneracies for certain specific choices of cosmological parameters. In our formalism, cross-correlations among various angular scales at different redshifts can be incorporated very easily and in a natural way. Such a treatment is completely analytical and extends the previous study by Munshi \& Coles (2002), later generalised in Munshi \& Valageas (2005). Assuming a ``no-hole'' approach, it is possible to directly include all contributions to the covariance matrices: cosmic variance at large angular scales, shot noise at small angular scales and mixed terms at intermediate scales.
Taking advantage of this complete analytical description, we have checked specific approximations to these covariances and their impact on the error estimates of cosmological parameters. Over-enthusiastic simplifications of the covariance structure of higher-order estimators, such as the three-point estimators for $\langle M_{\rm ap}^3\rangle$ which depend increasingly on an accurate description of non-Gaussianities, can lead to erroneous error estimates; such detailed descriptions are typically absent in harmonic-domain based approaches. Assuming a WMAP-centric $\Lambda$CDM cosmology, we have considered several cosmological parameters, such as $\Omega_{\rm m}$, $\sigma_8$, the spectral index $n_s$ and the running of the spectral index $\alpha_s$, which determine the background dynamics of the universe, as well as the dark energy equation of state $w_{\rm de}$. Tomography can in some cases help to reduce the level of degeneracy in \{$\sigma_8$, $\Omega_{\rm m}$\} or \{$w_{\rm de}$, $\Omega_{\rm m}$\}; interestingly, however, in a large number of other cases most of the information seems to come only from the higher redshift bin, and sub-dividing the sources into different redshift bins does not seem to affect the estimation accuracy. Equivalently, while the inclusion of higher-order moments does indeed help to break the parameter degeneracy for some parameter pairs, the other combinations are largely unaffected by the inclusion of third-order information. Clearly, both tomography and non-Gaussianity information are more useful when a joint estimation of several parameters is performed. This is particularly true for the spectral index $n_s$ and its running $\alpha_s$, which enter the modelling of the bi-spectrum only through the description of the power spectrum. Previous studies of tomography made use of Monte-Carlo simulations of correlated fields of view (see e.g. Simon et al. 2004) across redshift bins.
Our analytical results, which do not rely on numerical simulations, validate and generalise similar studies by including detailed descriptions of non-Gaussianities and cross-correlations among various redshift bins. Generalised third-order moments were considered recently by Kilbinger \& Schneider (2005), and seem to improve the situation by adding extra information at the third-order level. Interestingly, however, the highly correlated nature of such additional input suggests that one would be restricted to a limited range of angular scales for the construction of such generalised quantities. A more general discussion of such generalised statistics, also known in the literature as {\it cumulant correlators}, can be found in Munshi \& Valageas (2005). In our calculation here we have ignored primordial non-Gaussianity, which can potentially be studied at low redshift given the large sky-coverage of future weak lensing surveys. Recent results by Takada \& Jain (2005) showed that lensing fields do retain some information regarding primordial non-Gaussianity, although it is expected that most of the information regarding baryon oscillations and primordial non-Gaussianities will be erased by large-scale non-linearities generated by gravitational clustering at a later epoch. Extensions of our results in such directions, taking into account additional cosmological parameters with priors from external data sets such as Type-Ia supernovae or CMB observations, will be presented elsewhere. Additional parameters related to the time evolution of the dark energy equation of state or to the neutrino mass will also be included. Despite the large sky coverage of future space-based missions such as SNAP, the estimation errors on various cosmological parameters remain inherently degenerate due to the very nature of the observables in weak lensing surveys.
This is particularly underlined by the very high values of the degradation factor $({\rm F}_{\alpha\alpha})^{-1/2} / ({\rm F}^{-1}_{\alpha\alpha})^{1/2}$ for various levels of marginalisation, as presented in table~\ref{table:map1}. In table~\ref{table:cross} the scaled version of the inverse Fisher matrix, $r_{\alpha\beta} = ({\rm F}^{-1})_{\alpha\beta} / \{({\rm F}^{-1})_{\alpha\alpha}({\rm F}^{-1})_{\beta\beta}\}^{1/2}$, which is the cross-correlation coefficient of the estimation errors of different parameters, is listed for the indicated parameter choices. Our findings are in agreement with a recent study by Kilbinger \& Munshi (2005), where a similar conclusion was reached based on a more elaborate generalised eigenmode analysis of Fisher matrices. Knowledge of the redshift distribution of source galaxies from external observations can help parameter estimation, especially for smaller surveys. Even for larger surveys such as SNAP we found that an order of magnitude improvement in accuracy is possible for almost all the parameters we have studied. Large-scale spectroscopy of a fair sample of faint source galaxies may not be possible; alternatively, photometric redshift determination for a subsample of the observed galaxies can still improve the level of accuracy. It is likely that future observations with near complete sky coverage $f_{sky} \sim 1$ and large redshift coverage will be able to deal with a much more detailed description of the source redshift distribution, which we have parametrised here by the single parameter $z_0$. Several authors have recently studied joint constraints on various cosmological parameters such as $\{n_s,\sigma_8\}$ and $\{\alpha_s, n_s\}$ using Lyman-alpha forest data jointly with the 1-year observations from WMAP (Spergel et al. 2003). Clearly, future space-based SNAP-class experiments will help to provide competitive constraints when used jointly with external data sets.
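The quantities discussed above all follow directly from the Fisher matrix: the conditional errors ${\rm F}_{\alpha\alpha}^{-1/2}$, the marginalised errors $[{\rm F}^{-1}]_{\alpha\alpha}^{1/2}$, the degradation factors and the cross-correlation coefficients $r_{\alpha\beta}$. A minimal numpy sketch, using an illustrative positive-definite $3\times 3$ Fisher matrix (the numbers are invented and not taken from the tables):

```python
import numpy as np

# Illustrative Fisher matrix for three parameters, e.g.
# (Omega_m, sigma_8, w_de); values are invented for this sketch.
F = 1.0e4 * np.array([[4.0, 2.0, 1.0],
                      [2.0, 3.0, 0.5],
                      [1.0, 0.5, 2.0]])
Finv = np.linalg.inv(F)

# Conditional errors F_aa^{-1/2}: all other parameters held fixed.
cond = 1.0 / np.sqrt(np.diag(F))

# Marginalised errors [F^{-1}]_aa^{1/2}: other parameters unknown.
marg = np.sqrt(np.diag(Finv))

# Degradation factor quoted in the text.
degradation = marg / cond

# Cross-correlation of estimation errors:
# r_ab = (F^{-1})_ab / sqrt((F^{-1})_aa (F^{-1})_bb).
r = Finv / np.sqrt(np.outer(np.diag(Finv), np.diag(Finv)))
```

For any positive-definite Fisher matrix the marginalised error is never smaller than the conditional one, so the degradation factor is always at least unity, as seen in the tables.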
The bi-spectrum of density fluctuations carries complementary information regarding the background geometry and dynamics of the universe. Although current surveys are at the detection limit for measuring significant non-Gaussianity (see Bernardeau 2002), it is expected that future space-based observations with much greater sky coverage will supplement the power spectrum information with accurate measurements of the bi-spectrum. As we have shown here, this will require a very accurate description of the evolution of the bi-spectrum. Associating unknown parameters such as $f_2$ and $f_3$ with the low-order description of dark matter clustering and trying to estimate them directly from the data degrades the estimation accuracy of the other cosmological parameters considerably. Finally, in our studies we have compared the Fisher analysis with full grid-based $\chi^2$ calculations. Importantly, the Fisher analysis should only be seen as a way to estimate minimum variance errors for unbiased estimators and to check degeneracy directions. It is not intended to reveal the detailed behaviour of the confidence plane far from the fiducial model. This is particularly true for very asymmetric constraints, e.g. in the $(\Omega_{\rm m},\sigma_8)$ plane. Nevertheless, the various parameter pairs that we have studied show a reasonable match in confidence levels at $1\sigma$ and even at $3\sigma$. An alternative approach, which uses an entropy functional to map out the likelihood distribution in the parameter space, was considered in Taylor \& Watts (2002). A grid-based calculation of the full $\chi^2$ becomes prohibitively costly as the dimension of the parameter space increases. A more effective sampling of the parameter space can be achieved by Markov Chain Monte Carlo algorithms. A full implementation will be presented elsewhere. In our present analysis we have focused on survey parameters similar to the proposed SNAP-class surveys (JDEM). It is not difficult to extend the analysis to other survey specifications.
With future well-optimised surveys with large sky coverage, such as CFHTLS (172 deg$^2$), SNAP (300 deg$^2$), VISTA (10000 deg$^2$) and Pan-STARRS (31000 deg$^2$), and recent data analysis techniques, weak lensing will be a very useful tool to probe not only standard cosmological parameters, such as $\Omega_{\rm m}$ and $\sigma_8$, but also dark energy equation of state parameters such as $w_{\rm de}$. However, to achieve this goal it is essential to have a tight handle on systematics, which should at worst stay comparable to the statistical errors. \section*{acknowledgments} DM was supported by PPARC grant RG28936. It is a pleasure for DM to acknowledge many fruitful discussions with members of the Cambridge Planck Analysis Center (CPAC). DM also thanks Martin Kilbinger, Alan Heavens, Yun Wang, Lindsay King and George Efstathiou for useful discussions.
\section{Introduction} \label{sec:intro} Within the area of matching-based market design, the most prominent solution that uses cardinal utilities\footnote{For a brief comparison of cardinal and ordinal utilities for matching markets, see Section \ref{sec.related}.} is the Hylland-Zeckhauser (HZ) mechanism \cite{hylland}. It is based on creating parity between demand and supply, i.e., it uses the power of a pricing mechanism, which gives it attractive properties: the allocations produced satisfy Pareto optimality and envy-freeness \cite{hylland} and the mechanism is incentive compatible in the large \cite{He2018pseudo}. A serious drawback of HZ, from the viewpoint of practical applicability, is lack of computational efficiency: the recent papers \cite{VY-HZ} and \cite{HZ-hardness} show that the problem of computing even an approximate equilibrium is PPAD-complete. More precisely, \cite{VY-HZ} showed membership in PPAD and remarked that it will not be surprising if intractability sets in even for the highly special case in which utilities of agents come from a trivalued set, say $\{0, \frac{1}{2}, 1\}$; for bivalued sets, they gave an efficient algorithm. Next, \cite{HZ-hardness} showed PPAD-hardness even for the case that utilities of agents come from a four-valued set; the trivalued case is open. The second issue addressed by this paper is a deficiency of the area of matching-based market design itself, namely the extreme paucity of models that use cardinal utilities. This stands in sharp contrast with general equilibrium theory, which has defined and extensively studied several fundamental market models to address a number of specialized and realistic situations. HZ can be seen as corresponding to the most elementary model in that theory, namely the linear Fisher model. 
A model corresponding to the linear Arrow-Debreu market model was also studied by Hylland and Zeckhauser \cite{hylland}; however, they ended their investigation upon finding instances that do not admit an equilibrium. Considering these difficulties, studying further generalizations made little sense. In particular, we are not aware of any two-sided matching market models that use cardinal utilities. Our paper addresses both these issues by proposing Nash-bargaining-based matching market models. As is well known, the Nash bargaining solution is captured as an optimal solution to a convex program. Therefore, if for a specific game, a separation oracle can be implemented in polynomial time, then using the ellipsoid algorithm, one can get as good an approximation as desired in time that is polynomial in the number of bits of accuracy required \cite{GLS, Vishnoi.book}. For all models defined in this paper, the constraints of the convex program are linear, thereby ensuring zero duality gap and easy solvability. The game-theoretic properties of the Nash bargaining solution include: it satisfies Pareto optimality and symmetry, and since it maximizes the product of the utilities of agents, the allocations it produces are remarkably fair. The latter has been noted by several researchers \cite{Nash-Unreasonable, Abebe-MM-Truthful, Moulin2018fair} and has been further explored under the name of Nash Social Welfare \cite{cole2018approximating, cole2017convex}. Compared to HZ, we have sacrificed envy-freeness for this fairness property -- the two are incomparable, with neither dominating the other. We have also sacrificed incentive compatibility in the large, but have gained efficient solvability. Clearly, without the latter, the nice properties of HZ have little meaning, since the mechanism is unusable except for extremely small instances, perhaps not exceeding $n = 10$.
Another major gain from the move to Nash bargaining is that it yields a plethora of matching market models, not only one-sided but also two-sided; for the Fisher as well as the Arrow-Debreu settings, with the latter being not much harder than the former. Furthermore, our models cover a large range of utility functions, all the way from linear to Leontief. For the two reasons given above, namely computational efficiency and richness of models, we have proposed a shift from a pricing mechanism to a Nash-bargaining-based mechanism for matching market models. The following two questions arise: Is this shift a principled one, i.e., is there a fundamental connection between the two types of models? Is either type of mechanism reducible to the other? Section \ref{sec.connection} provides answers to these questions. We note that the origins of the idea of operating markets via Nash bargaining go back to \cite{va.rational}. For the linear case of the Arrow-Debreu market model, instead of seeking allocations via a pricing mechanism, \cite{va.rational} formulated it as a Nash bargaining game and gave a combinatorial, polynomial time algorithm for solving the underlying convex program. As is well known, polynomial time solvability is often just the beginning of the process of obtaining an ``industrial grade'' implementation. Towards this end, we give very fast implementations as well as experimental results for all five of our one-sided market models and the most basic two-sided model; the more general two-sided markets are analogous to the rest of the one-sided markets. In particular, our implementation can solve very large instances, with $n = 2000$, in one hour even for a two-sided matching market. In Section \ref{sec.ideas} we have described how the standard methods needed to be adapted to the special intricacies of our settings, in order to obtain these very fast implementations. 
In contrast, an HZ equilibrium, in particular, the equilibrium price, is not captured by any known mathematical construct, regardless of its computational complexity. The only known method for conducting an exhaustive search for obtaining an HZ equilibrium is algebraic cell decomposition \cite{Basu1995}; its use for computing HZ equilibria was studied in \cite{HZ-Algebraic-Cell}. Each iteration of this method is time-consuming. This, together with the exhaustive search required, makes it viable for only very small values of $n$, not exceeding 10. The recent computer science revolutions of the Internet and mobile computing led to the launching of highly impactful and innovative matching markets such as Adwords, Uber and Airbnb, and in turn led to a major revival of the area of matching-based market design, e.g., see \cite{Simons}. It is safe to assume that innovations will keep coming in the future and that new models, with good algorithmic properties, may be needed at any time in the future. Our work was motivated by these considerations. \subsection{The Connection between HZ and Nash-Bargaining-Based Models} \label{sec.connection} In this section, we answer the two questions raised above by attempting a comparative study of one-sided matching markets under the two types of models. The answer to the second question is ``No'' since under an affine transformation of the utilities of agents, the Nash bargaining solution and an HZ equilibrium change in fundamentally different ways: Whereas the former solution undergoes the same affine transformation (see Section \ref{sec.Nash}), the latter remains unchanged, as shown in \cite{VY-HZ}. The answer to the first question is ``Yes'', due to the connection established in \cite{Nash-Combinatorial}. We provide a brief synopsis of the argument below. First consider the linear Fisher market model defined in Section \ref{sec.Fisher}. 
The setup of the {\em linear Fisher problem (LFP)} is identical, except that the agents do not have any money, so this is not really a market model. The problem is to design a polynomial time mechanism for distributing all the goods among the agents so that the allocation satisfies Pareto optimality. \begin{maxi} {} {\sum_{i \in A} {\log (\sum_{j \in G} {u_{ij} x_{ij}})}} {\label{eq.EG-LF}} {} \addConstraint{\sum_{i \in A} {x_{ij}}}{\leq 1 }{\quad \forall j \in G} \addConstraint{\xx}{\geq 0.} \end{maxi} \cite{Nash-Combinatorial} give two such mechanisms. The first is to give each agent one dollar, thereby transforming LFP to the linear Fisher market model, and ask for an equilibrium allocation, which satisfies Pareto optimality. This can be obtained in polynomial time, via a combinatorial algorithm \cite{DPSV}, or by expressing it as a convex program. The latter is the celebrated Eisenberg-Gale convex program \cite{eisenberg}, given in (\ref{eq.EG-LF}). The second is to view LFP as a Nash bargaining problem; Pareto optimality is one of the axioms which it satisfies, see Section \ref{sec.Nash}. This is done by defining a convex, compact set $\mbox{${\cal N}$} \subseteq \mathbb{R}_+^n$, called the feasible set, and a point $\cb \in \mbox{${\cal N}$}$, called the disagreement point, see Section \ref{sec.Nash} for details. In this case, $\cb = 0$, and $\mbox{${\cal N}$}$ will consist of all possible vectors of utilities to the $n$ agents that can be obtained by partitioning 1 unit each of all $m$ goods among the agents. It is easy to see that the resulting convex program will be precisely the Eisenberg-Gale convex program. Therefore, the two mechanisms are identical! Next, \cite{Nash-Combinatorial} define the {\em linear Fisher unit demand problem (LFUP)} to be LFP with the additional requirements that $m = n$ and that each agent should get a total of one unit of goods. As a result, every feasible allocation is a fractional perfect matching over the $n$ agents and $n$ goods.
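For concreteness, program (\ref{eq.EG-LF}) can be handed to an off-the-shelf solver; the sketch below applies scipy's SLSQP to a hypothetical $2 \times 2$ instance with invented utility values. This is only an illustration of the program itself, not the combinatorial algorithm of \cite{DPSV}.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical 2-agent, 2-good instance; u[i, j] is agent i's
# utility per unit of good j (values invented for illustration).
u = np.array([[2.0, 1.0],
              [1.0, 3.0]])
n, m = u.shape

def neg_eg_objective(x_flat):
    # Negative of sum_i log(u_i . x_i), to be minimized.
    x = x_flat.reshape(n, m)
    return -np.log((u * x).sum(axis=1)).sum()

# One unit of each good: sum_i x_ij <= 1 for every good j.
cons = [{'type': 'ineq',
         'fun': lambda x_flat: 1.0 - x_flat.reshape(n, m).sum(axis=0)}]

x0 = np.full(n * m, 1.0 / n)              # feasible interior start
res = minimize(neg_eg_objective, x0, method='SLSQP',
               bounds=[(0.0, None)] * (n * m), constraints=cons)
x_opt = res.x.reshape(n, m)
utilities = (u * x_opt).sum(axis=1)
```

On this instance the optimum gives each agent her preferred good in full, so the utility vector approaches $(2, 3)$; one can verify by hand that shifting any mass between goods only decreases the product of utilities.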
Now it turns out that when LFUP is solved via the pricing mechanism, it is identical to HZ, and when it is solved via the Nash bargaining mechanism it is identical to {\em 1LF}, i.e., our most basic Nash-bargaining-based model, see Section \ref{sec.1-models}. This establishes a strong connection between HZ and the Nash-bargaining-based models and is illustrated in Figure \ref{fig:main}. \begin{figure} \centering \includegraphics[width=0.65\linewidth]{figure1.pdf} \caption{Figure illustrating the connection between HZ and NB.} \label{fig:main} \end{figure} \subsection{Our Results} \label{sec.results} In Section \ref{sec.1-models}, we give four basic models for one-sided matching markets covering a wide range of utility functions. For each model, we also give a natural application. In Section \ref{sec.2-models} we give a model for the most basic two-sided matching market. This model can be easily enhanced to four more models in a manner analogous to the other four one-sided matching market models given in Section \ref{sec.1-models}. In Section \ref{sec.CPs}, we give convex programs capturing the Nash-bargaining-based solution for all the models mentioned above. These convex programs can be solved to $\epsilon$ precision in time that is polynomial in the size of the input and $\log(1/\epsilon)$ via ellipsoid-based methods \cite{GLS, Vishnoi.book}. In Section \ref{sec.SM}, we present two solution schemes for solving these convex programs. Our methods, namely the cutting-plane method \cite{kelley1960cutting,Vishnoi.book} and the Frank-Wolfe method \cite{frank1956algorithm, jaggi2013revisiting}, rely on linear approximations of the convex programs. We present enhancement techniques as well as an overview of the way structural properties of these problems can be exploited.
To demonstrate the effectiveness of these methods in handling large-scale instances of the problems, we performed extensive computational experiments in Section \ref{sec.Exp} and tested the algorithms on instances of up to 2000 agents/goods and 10 segments for the piecewise linear utility functions. In particular, the Frank-Wolfe algorithm is well-suited for matching market models with linear utilities, and is capable of producing sparse optimal solutions. The cutting-plane algorithm is able to produce optimal or near-optimal solutions for the more challenging problems of one-sided market models with non-linear utility functions. \subsection{Ideas Needed beyond Standard Methods} \label{sec.ideas} Our solution methods, namely the cutting-plane algorithm and the Frank-Wolfe (FW) algorithm, rely on iterative linear approximations of the convex programs for the one-sided and two-sided market models. For efficient implementation of these algorithms, one needs to pay attention to the structural properties of these models, as described below. We implement a central cutting-plane algorithm (CCP), which not only guarantees a linear convergence rate, but also produces more effective cuts, since central points are more likely to be in the relative interior of the feasible region. Additionally, straightforward implementations of CCP are often prone to numerical instabilities. For instance, if the cut coefficients are of different scales, the solvers may not handle the cuts properly. We avoid this by choosing proper scales for the cut coefficients. Secondly, since the objective function of the convex programs involves the logarithm function, we require positive utilities for each agent at each iteration of CCP. However, since CCP is an outer-approximation algorithm, it is possible that in an iteration of CCP, the utilities of some agents may become zero, which makes the objective unbounded, and one cannot extract a cut based on this solution.
We resolve this issue by taking a convex combination of the current point and some feasible interior point. The latter point is obtained by choosing the closest feasible point to the current point on the line segment from the current point to the interior point. We also implement a Frank-Wolfe algorithm for solving instances of the matching markets with linear utilities. An interesting property of these models is that once the nonlinear objective functions of the respective convex programs are replaced by linear functions, the resulting problems can be solved as matching problems. The solution produced by FW is therefore a sparse convex combination of a set of integral perfect matchings.
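As an illustration of this property, here is a minimal Frank-Wolfe sketch for the linear one-sided model (invented random utilities, not the paper's implementation): the feasible region is the set of doubly stochastic matrices, so the linear maximization oracle is a max-weight perfect matching, computed here with scipy's `linear_sum_assignment`, and each iterate is a convex combination of permutation matrices.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Maximize sum_i log(sum_j u_ij X_ij) over doubly stochastic X.
rng = np.random.default_rng(0)
n = 5
u = rng.uniform(0.1, 1.0, size=(n, n))   # hypothetical utilities

def objective(X):
    return np.log((u * X).sum(axis=1)).sum()

X = np.full((n, n), 1.0 / n)             # doubly stochastic start
best_X, best_val = X, objective(X)
for k in range(200):
    # Gradient of the concave objective: grad_ij = u_ij / (u_i . x_i).
    grad = u / (u * X).sum(axis=1)[:, None]
    # Linear oracle: max-weight perfect matching w.r.t. grad
    # (scipy minimizes cost, so negate the weights).
    rows, cols = linear_sum_assignment(-grad)
    S = np.zeros((n, n))
    S[rows, cols] = 1.0                  # a permutation matrix
    gamma = 2.0 / (k + 2)                # standard FW step size
    X = (1 - gamma) * X + gamma * S
    if objective(X) > best_val:
        best_X, best_val = X, objective(X)
```

Since every oracle answer is a permutation matrix and the step sizes are convex weights, the iterate stays doubly stochastic throughout, mirroring the sparse convex-combination structure described above.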
They also defined and developed algorithms for the non-bipartite matching market model, which has applications to the roommate problem. Lastly, they gave the connection between HZ and the Nash-bargaining-based models stated in Section \ref{sec.connection}. The extension of one-sided matching markets to the setting in which agents have initial endowments of goods, called the Arrow-Debreu setting, has several natural applications beyond the original Fisher setting, e.g., allocating students to rooms in a dorm for the next academic year, assuming their current room is their initial endowment. The issue of obtaining such an extension of the HZ scheme was studied by Hylland and Zeckhauser. However, this culminated in an example which inherently does not admit an equilibrium \cite{hylland}. As a recourse, \cite{Echenique2019constrained} introduced the notion of an {\em $\alpha$-slack Walrasian equilibrium}. This is a hybrid between the Fisher and Arrow-Debreu settings in which agents have initial endowments of goods and for a fixed $\alpha \in (0, 1]$, the budget of each agent, for given prices of goods, is $\alpha + (1 - \alpha) \cdot m$, where $m$ is the value of her initial endowment. Via a non-trivial proof, using the Kakutani Fixed Point Theorem, they proved that an $\alpha$-slack equilibrium always exists. A pure Arrow-Debreu model was proposed in \cite{Garg-ADHZ} by suitably relaxing the notion of an equilibrium to an {\em $\epsilon$-approximate equilibrium}. Their proof of existence of equilibrium follows from that of \cite{Echenique2019constrained}. An interesting recent paper \cite{Abebe-MM-Truthful} defines the notion of a random partial improvement mechanism for a one-sided matching market. This mechanism truthfully elicits the cardinal preferences of the agents and outputs a distribution over matchings that approximates every agent's utility in the Nash bargaining solution.
In recent years, several researchers have proposed Hylland-Zeckhauser-type mechanisms for a number of applications, e.g., see \cite{Budish2011combinatorial, He2018pseudo, Le2017competitive, Mclennan2018efficient}. The basic scheme has also been generalized in several different directions, including two-sided matching markets, markets with quantitative constraints, and the setting in which agents have initial endowments of goods instead of money, see \cite{Echenique2019constrained, Echenique2019fairness}. {\bf Ordinal vs cardinal utilities:} Under ordinal utilities, the agents provide a total preference order over the goods and under cardinal utilities, they provide a non-negative real-valued function. Both forms have their own pros and cons and neither dominates the other. Whereas the former is easier to elicit from agents, the latter is far more expressive, enabling an agent to report not only whether she prefers good $A$ to good $B$ but also by how much. \cite{Abdulkadirouglu-Cardinal} exploit this greater expressivity of cardinal utilities to give mechanisms for school choice which are superior to ordinal-utility-based mechanisms. Example \ref{ex.GTV}, taken from \cite{Garg-ADHZ}, provides a very vivid illustration of the advantage of cardinal utilities over ordinal ones in one-sided matching markets. \begin{example} \label{ex.GTV} The following example illustrates the advantage of cardinal vs ordinal utilities. The instance has three types of goods, $T_1, T_2, T_3$, and these goods are present in the proportion of $(1\%, \ 97\%, \ 2\%)$. Based on their utility functions, the agents are partitioned into two sets $A_1$ and $A_2$, where $A_1$ constitutes $1\%$ of the agents and $A_2$, $99\%$. The utility functions of agents in $A_1$ and $A_2$ for the three types of goods are $(1, \ \epsilon, \ 0)$ and $(1, \ 1- \epsilon, \ 0)$, respectively, for a small number $\epsilon > 0$. 
The main point is that whereas agents in $A_2$ marginally prefer $T_1$ to $T_2$, those in $A_1$ overwhelmingly prefer $T_1$ to $T_2$. Clearly, the ordinal utilities of all agents in $A_1 \cup A_2$ are the same. Therefore, a mechanism based on such utilities will not be able to make a distinction between the two types of agents. On the other hand, the HZ mechanism, which uses cardinal utilities, will fix the price of goods in $T_3$ to be zero and those in $T_1$ and $T_2$ appropriately so that by-and-large the bundles of $A_1$ and $A_2$ consist of goods from $T_1$ and $T_2$, respectively. \end{example} \section{Preliminaries} \label{sec.preliminaries} \subsection{The Nash Bargaining Game} \label{sec.Nash} An {\em $n$-person Nash bargaining game} consists of a pair $(\mbox{${\cal N}$}, \cb)$, where $\mbox{${\cal N}$} \subseteq \mathbb{R}_+^n$ is a compact, convex set and $\cb \in \mbox{${\cal N}$}$. The set $\mbox{${\cal N}$}$ is called the {\em feasible set} -- its elements are vectors whose components are utilities that the $n$ players can simultaneously accrue. Point $\cb$ is the {\em disagreement point} -- its components are utilities which the $n$ players accrue if they decide not to participate in the proposed solution. The set of $n$ agents will be denoted by $A$ and the agents will be numbered $1, 2, \ldots n$. Instance $(\mbox{${\cal N}$}, \cb)$ is said to be {\em feasible} if there is a point in $\mbox{${\cal N}$}$ at which each agent does strictly better than her disagreement utility, i.e., $\exists \vv \in \mbox{${\cal N}$}$ such that $\forall i \in A, \ v_i > c_i$, and {\em infeasible} otherwise. In game theory it is customary to assume that the given Nash bargaining problem $(\mbox{${\cal N}$}, \cb)$ is feasible; we will make this assumption as well. 
The solution to a feasible instance is the point $\vv \in \mbox{${\cal N}$}$ that satisfies the following four axioms: \begin{enumerate} \item {\em Pareto optimality:} No point in $\mbox{${\cal N}$}$ weakly dominates $\vv$. \item {\em Symmetry:} If the players are renumbered, then correspondingly renumbering the coordinates of $\vv$ gives the solution to the new instance. \item {\em Invariance under affine transformations of utilities:} If the utilities of any player are redefined by multiplying by a scalar and adding a constant, then the solution to the transformed problem is obtained by applying these operations to the corresponding coordinate of $\vv$. \item {\em Independence of irrelevant alternatives:} If $\vv$ is the solution to $(\mbox{${\cal N}$}, \cb)$, and $\mbox{${\cal S}$} \subseteq \mathbb{R}_+^n$ is a compact, convex set satisfying $\cb \in \mbox{${\cal S}$}$ and $\vv \in \mbox{${\cal S}$} \subseteq \mbox{${\cal N}$}$, then $\vv$ is also the solution to $(\mbox{${\cal S}$}, \cb)$. \end{enumerate} Via an elegant proof, Nash proved: \begin{theorem} [Nash \cite{Nash1953two}] \label{thm.nash} If the game $(\mbox{${\cal N}$}, \cb)$ is feasible then there is a unique point in $\mbox{${\cal N}$}$ satisfying the axioms stated above. Moreover, this point is obtained by maximizing $\Pi_{i \in A} {(v_i - c_i)}$ over $\vv \in \mbox{${\cal N}$}$. \end{theorem} Nash's solution to his bargaining game involves maximizing a concave function over a convex domain, and is therefore the optimal solution to the following convex program. \begin{maxi} {} {\sum_{i \in A} {\log (v_i - c_i)}} {\label{eq.CP-Nash}} {} \addConstraint{}{\vv \in \mbox{${\cal N}$}} \end{maxi} As a consequence, if for a specific game, a separation oracle can be implemented in polynomial time, then using the ellipsoid algorithm one can get as good an approximation to the solution of this convex program as desired, in time polynomial in the number of bits of accuracy needed \cite{GLS, Vishnoi.book}. 
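For intuition, the following self-contained sketch (a hypothetical two-player instance, not taken from the paper) compares the closed-form Nash bargaining point on the linear frontier $v_1 + v_2 = 1$ with a direct numerical maximization of the log-sum objective of the convex program above:

```python
import math

def nash_point(c1, c2, total=1.0):
    """Closed-form Nash bargaining solution when the Pareto frontier is
    v1 + v2 = total: the surplus (total - c1 - c2) is split equally."""
    s = (total - c1 - c2) / 2.0
    return c1 + s, c2 + s

def grid_search(c1, c2, total=1.0, steps=20000):
    """Numerically maximize log(v1 - c1) + log(v2 - c2) on the frontier."""
    best, best_v = -math.inf, None
    for k in range(1, steps):
        v1 = total * k / steps
        v2 = total - v1
        if v1 > c1 and v2 > c2:
            val = math.log(v1 - c1) + math.log(v2 - c2)
            if val > best:
                best, best_v = val, (v1, v2)
    return best_v

c = (0.2, 0.1)               # hypothetical disagreement utilities
exact = nash_point(*c)
approx = grid_search(*c)
print(exact, approx)         # both ≈ (0.55, 0.45)
```

Both computations agree: the Nash solution gives each player her disagreement utility plus an equal share of the available surplus, exactly as Theorem \ref{thm.nash} predicts for this symmetric frontier.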
\subsection{Fisher Market Model} \label{sec.Fisher} The {\em Fisher market model} consists of a set $A = \{1, 2, \ldots n\}$ of agents and a set $G = \{1, 2, \ldots, m\}$ of infinitely divisible goods. By fixing the units for each good, we may assume without loss of generality that there is a unit of each good in the market. Each agent $i$ has money $m_i \in \mathbb{Q_+}$. Let $x_{ij}, \ 1 \leq j \leq m$ represent a {\em bundle of goods allocated to agent $i$}. Each agent $i$ has a utility function $u_i: \mathbb{R}_+^m \rightarrow \mathbb{R_+}$ giving the utility accrued by $i$ from a bundle of goods. We will assume that $u_i$ is concave and weakly monotonic. Each good $j$ is assigned a non-negative price, $p_j$. Allocations and prices, $\xx$ and $\pp$, are said to form an {\em equilibrium} if each agent obtains a utility maximizing bundle of goods at prices $\pp$ and the {\em market clears}, i.e., each good is fully sold to the extent of one unit and all money of agents is fully spent. We will assume that for each agent, there is a good which gives her positive utility, and for each good, there is an agent who derives positive utility from it; clearly, otherwise we may remove that agent or good from consideration. \subsection{Arrow-Debreu Market Model} \label{sec.AD} The Arrow-Debreu market model, also known as the {\em exchange model}, differs from Fisher's model in that agents come to the market with initial endowments of goods instead of money. The union of the initial endowments comprises all the goods in the market. Once again, by redefining the units of each good, we may assume that there is a total of one unit of each good in the market. The utility functions of agents are as before. The problem now is to find non-negative prices for all goods so that if each agent sells her initial endowment and buys an optimal bundle of goods, the market clears. Clearly, if $\pp$ is a vector of equilibrium prices, then so is any scaling of $\pp$ by a positive factor. 
\subsection{Hylland-Zeckhauser Scheme} \label{sec.HZ} Let $A = \{1, 2, \ldots n\}$ be a set of $n$ agents and $G = \{1, 2, \ldots, n\}$ be a set of $n$ indivisible goods. The goal of the HZ scheme is to allocate exactly one good to each agent. However, in order to use the power of a pricing mechanism, which endows the HZ scheme with the properties of Pareto optimality and incentive compatibility in the large, it casts this one-sided matching market in the mold of a linear Fisher market as follows. Goods are rendered divisible by assuming that there is one unit of probability share of each good, and utilities $u_{ij}$s are defined as in a linear Fisher market. Let $x_{ij}$ be the allocation of probability share that agent $i$ receives of good $j$. Then, $\sum_j {u_{ij} x_{ij}}$ is the {\em expected utility} accrued by agent $i$. Each agent has 1 dollar for buying these probability shares and each good $j$ has a price $p_j \geq 0$. Beyond a Fisher market, an additional constraint is that the total probability share allocated to each agent is one unit, i.e., the entire allocation must form a {\em fractional perfect matching} in the complete bipartite graph over vertex sets $A$ and $G$. Subject to these constraints, each agent buys a utility maximizing bundle of goods. Another point of departure from a linear Fisher market is that in general, an agent's optimal bundle may cost less than one dollar, i.e., the agents are not required to spend all their money. Since each good is fully sold, the market clears. Hence these are defined to be {\em equilibrium allocation and prices}. Clearly, an equilibrium allocation can be viewed as a doubly stochastic matrix. The Birkhoff-von Neumann procedure then extracts a random underlying perfect matching in such a way that the expected utility accrued to each agent from the integral perfect matching is the same as from the fractional perfect matching. 
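The extraction step just described can be illustrated with a minimal Birkhoff-von Neumann decomposition sketch in pure Python (the $3\times 3$ matrix is hypothetical, and a production implementation would need more careful numerical tolerances):

```python
def perfect_matching(support, n):
    """Find a perfect matching in the bipartite graph where
    support[i] = set of goods j with positive share, via augmenting paths."""
    match = [-1] * n                     # match[j] = agent assigned good j
    def augment(i, seen):
        for j in support[i]:
            if j not in seen:
                seen.add(j)
                if match[j] == -1 or augment(match[j], seen):
                    match[j] = i
                    return True
        return False
    for i in range(n):
        if not augment(i, set()):
            return None
    return match

def birkhoff_von_neumann(x, eps=1e-9):
    """Decompose a doubly stochastic matrix into a convex combination
    of permutation matrices (weight, permutation) pairs."""
    n = len(x)
    x = [row[:] for row in x]
    pieces = []
    while True:
        support = [{j for j in range(n) if x[i][j] > eps} for i in range(n)]
        if all(not s for s in support):
            break
        match = perfect_matching(support, n)
        perm = [None] * n                # perm[i] = good assigned to agent i
        for j in range(n):
            perm[match[j]] = j
        w = min(x[i][perm[i]] for i in range(n))
        pieces.append((w, tuple(perm)))
        for i in range(n):
            x[i][perm[i]] -= w           # peel off this permutation
    return pieces

x = [[0.5, 0.5, 0.0],
     [0.25, 0.25, 0.5],
     [0.25, 0.25, 0.5]]
pieces = birkhoff_von_neumann(x)
print(sum(w for w, _ in pieces))         # weights sum to 1.0
```

Sampling a permutation with probability equal to its weight then yields an integral perfect matching whose expected utility to each agent matches the fractional one, which is exactly the property used above.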
Since {\em ex ante} Pareto optimality implies {\em ex post} Pareto optimality, the integral allocation will also be Pareto optimal. \section{Nash-Bargaining-Based Models} \label{sec.models} \subsection{One-Sided Matching Markets} \label{sec.1-models} We will define four one-sided matching market models based on our Nash bargaining approach. For each model, we will also give a standard application. For the case of linear utilities, we have singled out the Fisher and Arrow-Debreu versions, namely {\em 1LF} and {\em 1LAD}, since we will study both in some detail later in the paper. For more general utility functions we have defined only the Arrow-Debreu version; the Fisher version is obtained by setting disagreement utilities to zero. It is easy to see that the fourth one generalizes the first three; however, the earlier ones involve less notation and have an independent standing of their own, hence necessitating all four definitions. Our one-sided matching market models consist of a set $A = \{1, 2, \ldots n\}$ of agents and a set $G = \{1, 2, \ldots, n\}$ of infinitely divisible goods; observe that there is an equal number of agents and goods. There is one unit of each good and each agent needs to be allocated a total of one unit of goods. Hence the allocation needs to be a fractional perfect matching, as defined next. \begin{definition} \label{def.pm} Let us name the coordinates of a vector $\xx \in \mathbb{R}_+^{n^2}$ by pairs $i, j$ for $i \in A$ and $j \in G$. Then $\xx$ is said to be a {\em fractional perfect matching} if \[ \forall i \in A: \ \sum_j {x_{ij}} = 1 \ \ \ \mbox{and} \ \ \ \forall j \in G: \ \sum_i {x_{ij}} = 1 . 
\] \end{definition} As mentioned in Section \ref{sec.preliminaries}, an equilibrium allocation can be viewed as a doubly stochastic matrix, and the Birkhoff-von Neumann procedure \cite{Birkhoff1946tres, von1953certain} can be used to extract a random underlying perfect matching in such a way that the expected utility accrued to each agent from the integral perfect matching is the same as from the fractional perfect matching. {\bf 1)} Under the {\em linear Fisher Nash bargaining one-sided matching market}, abbreviated {\em 1LF}, each agent $i \in A$ has a linear utility function, as defined in Section \ref{sec.Fisher}. Corresponding to each fractional perfect matching $\xx$, there is a vector $v_x$ in the feasible set $\mbox{${\cal N}$}$; its components are the utilities derived by the agents under the allocation given by $\xx$. The disagreement point is the origin. Observe that the setup of {\em 1LF} is identical to that of the HZ mechanism; the difference lies in the definition of the solution to an instance. Its standard application is matching agents to goods. {\bf 2)} Under the {\em linear Arrow-Debreu Nash bargaining one-sided matching market}, abbreviated {\em 1LAD}, each agent $i \in A$ has a linear utility function, as above. Additionally, we are given an initial fractional perfect matching $\xx_I$ which gives the initial endowments of the agents. Each agent has one unit of initial endowment over all the goods and the total endowment of each good over all the agents is one unit, as given by $\xx_I$. These two pieces of information define the utility accrued by each agent from her initial endowment; this is her disagreement utility $c_i$. As stated in Section \ref{sec.Nash}, we will assume that the problem is feasible, i.e., there is a fractional perfect matching, defining a redistribution of the goods, under which each agent $i$ derives strictly more utility than $c_i$. Each vector $\vv \in \mbox{${\cal N}$}$ is as defined in {\em 1LF}. 
Henceforth, we will consider the slightly more general problem in which the disagreement point $\cb$ is specified directly, rather than the initial endowments $\xx_I$. There is no guarantee that $\cb$ comes from a valid fractional perfect matching of initial endowments. However, we still want the problem to be feasible. This model is applicable when agents start with an initial endowment of goods and exchange them to improve their happiness. {\bf 3)} The {\em separable, piecewise-linear concave Arrow-Debreu Nash bargaining one-sided matching market}, abbreviated {\em 1SAD}, is analogous to 1LAD, with the difference that each agent has a separable, piecewise-linear concave utility function, hence generalizing the linear utility functions specified in 1LAD. Economists model {\em diminishing marginal utilities} via concave utility functions. Since we are in a fixed-precision model of computation, we have considered separable, piecewise-linear concave (SPLC) utility functions. We next define these functions in detail. For each agent $i$ and good $j$, function $f_i^j: \mathbb{R_+} \rightarrow \mathbb{R_+}$ gives the utility derived by $i$ as a function of the amount of good $j$ she receives. Each $f_i^j$ is a non-negative, non-decreasing, piecewise-linear, concave function. The overall utility of buyer $i$, $u_i(\xx)$, for bundle $\xx=(x_1,\ldots,x_n)$ of goods, is additively separable over the goods, i.e., $u_i(\xx) = \sum_{j \in G} f_i^j(x_j)$. We will call each piece of $f_i^j$ a {\em segment}. Number the segments of $f_i^j$ in order of decreasing slope; throughout we will assume that these segments are indexed by $k$ and that $S_{ij}$ is the set of all such indices. Let $\sigma_{ijk}, \ k \in S_{ij}$, denote the $k^{th}$ segment and $l_{ijk}$ denote the amount of good $j$ represented by this segment; we will assume that the last segment in each function is of unbounded length. 
Let $u_{ijk}$ denote the rate at which $i$ accrues utility per unit of good $j$ received, when she is getting an allocation corresponding to this segment. Clearly, the maximum utility she can receive corresponding to this segment is $u_{ijk} \cdot l_{ijk}$. We will assume that $u_{ijk}$ and $l_{ijk}$ are rational numbers. Finally, let $S_\sigma^i$ be the set of all indices $(j, k)$ corresponding to the segments in all utility functions of agent $i$ under the given instance, i.e., \[ S_\sigma^i = \{ (j, k) \ | \ j \in G, \ k \in S_{ij} \} . \] {\bf 4)} The {\em non-separable piecewise-linear concave Arrow-Debreu Nash bargaining one-sided matching market}, abbreviated {\em 1NAD}, differs from {\em 1SAD} in that agents' utility functions are now assumed to be non-separable, piecewise-linear concave. These utility functions are very general and can be used to capture whether goods are complements or substitutes and much more. These functions are defined next. For each agent $i$, the parameter $l(i)$ specifies the number of hyperplanes used for defining the utility of $i$. The latter, $u_i(\xx)$, for bundle $\xx=(x_1,\ldots,x_n)$ of goods is defined to be \[ u_i(\xx) = \min_{k \leq l(i)} \left\{ \sum_{j \in G} {a_{ij}^k x_{ij} + b_i^k }\right\} , \] where $a_{ij}^k$ and $b_i^k$ are non-negative rational numbers. Furthermore, $b_i^k = 0$ for at least one value of $k$ so that the utility derived by $i$ from the empty bundle is zero. Leontief utilities are a fundamental special case of non-separable piecewise-linear concave utilities, under which agents want goods in specified ratios. They are used for modeling utilities when goods are complements. In this case, for each agent $i$, we are specified a set $S_i \subseteq G$ of goods she is interested in, and \[ u_i(\xx) = \min_{j \in S_i} \left\{ \frac{x_{ij}}{a_{ij}} \right\} , \] where $a_{ij} > 0$ are rational numbers. In Section \ref{sec.CPs} we prove that each of the matching markets defined above admits a convex program. 
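To make the SPLC and Leontief definitions concrete, here is a small evaluation sketch (the segment data and ratios are hypothetical, chosen only for illustration):

```python
def splc_utility(segments, amount):
    """Evaluate one SPLC function f_i^j: segments is a list of
    (rate u_ijk, length l_ijk) pairs sorted by decreasing rate;
    the last length may be float('inf')."""
    total, left = 0.0, amount
    for rate, length in segments:
        take = min(left, length)     # fill cheaper-marginal segments first
        total += rate * take
        left -= take
        if left <= 0:
            break
    return total

def leontief_utility(x, a):
    """u_i(x) = min over j in S_i of x_j / a_j (goods wanted in ratios a)."""
    return min(x[j] / a[j] for j in a)

# SPLC: first 0.5 unit of the good accrues at rate 4, the rest at rate 1.
segs = [(4.0, 0.5), (1.0, float('inf'))]
print(splc_utility(segs, 0.75))      # 4*0.5 + 1*0.25 = 2.25

# Leontief: agent wants goods 0 and 2 in the ratio a = {0: 1, 2: 2}.
print(leontief_utility([0.6, 0.9, 1.0], {0: 1.0, 2: 2.0}))   # min(0.6, 0.5) = 0.5
```

Note how the decreasing-slope ordering of segments makes the SPLC function concave, and how the Leontief minimum ignores any good outside $S_i$ (good 1 above).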
\begin{remark} \label{rem.ij} Throughout this paper, we will index elements of $A, G$ and $S_{ij}$ by $i, j$ and $k$, respectively. When the domain of $i, j$ or $k$ is not specified, especially in summations, it should be assumed to be $A, G$ and $S_{ij}$, respectively. \end{remark} \subsection{Two-Sided Matching Markets} \label{sec.2-models} Our two-sided matching market model consists of a set $A = \{1, 2, \ldots n\}$ of agents (workers) and a set $J = \{1, 2, \ldots, n\}$ of jobs (firms). For uniformity, we have assumed that there is an equal number of agents and jobs, though the model can be easily enhanced and made more general. Our goal is to find an integral perfect matching between agents and jobs. In this setting, it is natural to assume that each side has a utility function over the other side, making this a two-sided matching market. As before, we will relax the problem to finding a fractional perfect matching, $\xx$, followed by rounding as described above. We will explicitly define only the simplest case of two-sided markets; more general models follow along the same lines as one-sided markets. Under the {\em linear Fisher Nash bargaining two-sided matching market}, abbreviated {\em 2LF}, the utility accrued by agent $i \in A$ under allocation $\xx$ is \[ u_i(\xx) = \sum_{j \in J} {u_{ij} x_{ij} }, \] where $u_{ij}$ is the utility accrued by $i$ if she were assigned job $j$ integrally. Analogously, the utility accrued by job $j \in J$ under allocation $\xx$ is \[ w_j(\xx) = \sum_{i \in A} {w_{ij} x_{ij} }, \] where $w_{ij}$ is the utility accrued by $j$ if it were assigned to $i$ integrally. In keeping with the axiom of symmetry under Nash bargaining, we will posit that the desires of agents and jobs are equally important, and we are led to defining the feasible set in a $2n$ dimensional space, i.e., $\mbox{${\cal N}$} \subseteq \mathbb{R}_+^{2n}$. 
The first $n$ components of feasible point $\vv \in \mbox{${\cal N}$}$ represent the utilities derived by the $n$ agents, i.e., $u_i(\xx)$s, and the last $n$ components the utilities derived by the $n$ jobs, i.e., $w_j(\xx)$s, under a fractional perfect matching $\xx$. Under {\em 2LF}, the disagreement point is the origin, and we seek the Nash bargaining point. A convex program for {\em 2LF} is given in (\ref{eq.2CP}). \section{Convex Programs for the Models} \label{sec.CPs} We start by presenting convex programs for {\em 1LF} and {\em 1LAD}, namely (\ref{eq.CP-LF}) and (\ref{eq.CP-LAD}). These differ only in that the latter has the parameters $c_i$ in the objective function. For convenience, we define $\mathcal{X}$ to be the set of feasible fractional perfect matchings as defined in Definition~\ref{def.pm}. \begin{maxi} {} {\sum_{i \in A} {\log (v_i)}} {\label{eq.CP-LF}} {} \addConstraint{v_i}{= \sum_{j} {u_{ij} x_{ij}} \quad}{\forall i \in A} \addConstraint{\xx}{\in \mathcal{X}}{} \end{maxi} \begin{maxi} {} {\sum_{i \in A} {\log (v_i - c_i)}} {\label{eq.CP-LAD}} {} \addConstraint{v_i}{= \sum_{j} {u_{ij} x_{ij}} \quad}{\forall i \in A} \addConstraint{\xx}{\in \mathcal{X}}{} \end{maxi} Program (\ref{eq.CP-SPLC}) is a convex program for {\em 1SAD}. \begin{maxi} {} {\sum_{i \in A} {\log (v_i - c_i)}} {\label{eq.CP-SPLC}} {} \addConstraint{v_i}{= \sum_{j} \sum_k {u_{ijk} x_{ijk}} \quad }{\forall i \in A} \addConstraint{\sum_{j} \sum_k {x_{ijk}}}{=1 }{\forall i \in A} \addConstraint{\sum_{i} \sum_k {x_{ijk}}}{=1 }{\forall j \in G} \addConstraint{x_{ijk}}{\leq l_{ijk}}{\forall i \in A, \forall j \in G, \forall k \in S_{ij}} \addConstraint{x_{ijk}}{\geq 0}{\forall i \in A, \forall j \in G, \forall k \in S_{ij}} \end{maxi} Program (\ref{eq.CP-NPLC}) is a convex program for {\em 1NAD}. 
\begin{maxi} {} {\sum_{i \in A} {\log (v_i - c_i)}} {\label{eq.CP-NPLC}} {} \addConstraint{v_i}{\leq \sum_{j} {a_{ij}^k x_{ij} + b_i^k} \quad }{\forall i \in A, \forall k \leq l(i)} \addConstraint{\xx}{\in \mathcal{X},\vv \in \mathbb{R}_{+}^{n}}{} \end{maxi} Program (\ref{eq.2CP}) is a convex program for {\em 2LF}. \begin{maxi} {} {\sum_{i \in A} {\log (v_i)} \ + \ \sum_{j \in J} {\log (v_j)}} {\label{eq.2CP}} {} \addConstraint{v_i}{= \sum_{j} {u_{ij} x_{ij}} \quad}{\forall i \in A} \addConstraint{v_j}{= \sum_{i} {w_{ij} x_{ij}} \quad}{\forall j \in J} \addConstraint{\xx}{\in \mathcal{X}}{} \end{maxi} \section{Solution Methods}\label{sec.SM} We present two solution methods for solving instances of the convex programs given in Section~\ref{sec.CPs}: (a) Cutting-plane algorithm, and (b) Frank-Wolfe algorithm. Both algorithms rely on linear approximations of these problems and converge to the optimal solution in polynomial time. For simplicity of exposition, we focus on the simpler models {\em 1LAD} (and {\em 1LF}) and {\em 2LF} to describe the algorithms. We will explain how these algorithms can be extended to other models. \subsection{Cutting-plane Algorithm} The underlying principle in the cutting-plane method for convex programs with nonlinear objective function is to outer-approximate the hypograph of the objective function through a series of linear programs \cite{kelley1960cutting,Vishnoi.book}. Let $f(\vv)=\sum_{i\in A} \log(v_i-c_i)$ be the objective function in {\em 1LAD}. 
Since $f$ is concave in $\vv$, for a given solution $\hat{\vv}$ we have: \begin{align} f(\vv)\le f(\hat{\vv})+ \nabla f(\hat{\vv})^{\top} (\vv-\hat{\vv})=f(\hat{\vv})-n+\sum_{i\in A}\frac{v_i-c_i}{\hat{v}_i-c_i}\label{eq:cp-cut} \end{align} Therefore, we can rewrite {\em 1LAD} as the following semi-infinite linear program (SILP): \begin{maxi} {} {\eta} {\label{eq.silp}} {} \addConstraint{\eta}{\leq f(\hat{\vv})+ \nabla f(\hat{\vv})^{\top} (\vv-\hat{\vv})\quad}{\forall \hat{\vv} \in\mathcal{N}} \addConstraint{(\xx,\vv)}{\in \mathcal{S},}{} \end{maxi} where $\mathcal{N}$ is the set of vectors $\hat{\vv}$ such that $\hat{v}_i>c_i$, and $\mathcal{S}$ is the set of feasible assignments. Observe that replacing $\mathcal{N}$ with $\hat{\mathcal{N}}\subset \mathcal{N}$ in \eqref{eq.silp} yields an LP which is a relaxation of the SILP \eqref{eq.silp}. A natural way of solving SILP \eqref{eq.silp} is to start with a manageable subset $\hat{\mathcal{N}}$ and grow this set until the upper bound produced by the LP is sufficiently close to the optimal solution \cite{kelley1960cutting}. However, instead of solving such relaxed LPs and obtaining optimal corner points of the hypograph approximations, it is customary to solve modified forms of these LPs to find the center of the hypograph approximations. Let $\underline{f}$ be a lower bound on the optimal value of $f$ (e.g., obtained using a feasible allocation). 
As described by \cite{elzinga1975central}, we may construct a cutting plane through the center of the hypograph approximation by solving \begin{maxi} {} {\sigma} {\label{eq.central_silp}} {} \addConstraint{\eta}{\geq \underline{f}+\sigma}{} \addConstraint{\eta}{\leq f(\hat{\vv})+ \nabla f(\hat{\vv})^{\top} (\vv-\hat{\vv})-\sigma \|(1,\nabla f(\hat{\vv}))\|_2\quad}{\forall \hat{\vv} \in\hat{\mathcal{N}}} \addConstraint{(\xx,\vv)}{\in \mathcal{S},}{} \end{maxi} which yields radius $\sigma$ and center $(\vv,\xx,\eta)$ of the largest ball that can be inscribed inside the hypograph approximation \cite{nemhauser1971modified}. Algorithm~\ref{pseudo-code-cutting-plane} describes the proposed \textit{Central Cutting-Plane} (CCP) algorithm for solving instances of {\em 1LAD}. As the algorithm iterates, we improve the lower bound $\underline{f}$ and add new cuts to tighten the hypograph approximation. Consequently, the inscribed ball shrinks (i.e., the sequence of hypersphere radii $\{\sigma^{(t)}\}_{t=0}^{\infty}$ converges to 0), and $\{(\vv^{(t)},\xx^{(t)})\}_{t=0}^{\infty}$ converges to the optimal solution at a linear rate, as described in Theorem~\ref{thm.ccp} below. \begin{theorem} \label{thm.ccp} The Central Cutting-Plane Algorithm~\ref{pseudo-code-cutting-plane} converges to the optimal solution of {\em 1LAD} with linear rate. \end{theorem} \begin{proof} Strict concavity of the objective function in {\em 1LAD} implies existence of a unique optimal solution. This guarantees a linear convergence rate as described in Theorem 7 in \cite{elzinga1975central}. \end{proof} To assess the convergence of Algorithm~\ref{pseudo-code-cutting-plane} numerically, we use the optimality gap in \eqref{eq:gap-cp} and terminate the algorithm once this gap falls below a given optimality gap threshold. 
\begin{align} \text{Gap}=\frac{\sigma^{(t)}}{|\eta^{(t)}|}.\label{eq:gap-cp} \end{align} \begin{algorithm}[t] Find an initial solution $(\vv^{(0)},\xx^{(0)})$\; Initialize $\hat{\mathcal{N}}\leftarrow\{\vv^{(0)}\}$; $\underline{f} \leftarrow f(\vv^{(0)})$; $t\leftarrow 1$\; $(\vv^*,\xx^*)\leftarrow (\vv^{(0)},\xx^{(0)})$\; \While{not converged}{ Solve LP \eqref{eq.central_silp} to obtain the center $(\vv^{(t)},\xx^{(t)},\eta^{(t)})$ and radius $\sigma^{(t)}$\; $\hat{\mathcal{N}}\leftarrow \hat{\mathcal{N}}\cup\{\vv^{(t)}\}$\; \If {$\underline{f}<f(\vv^{(t)})$}{ $\underline{f}\leftarrow f(\vv^{(t)})$; $(\vv^*,\xx^*)\leftarrow (\vv^{(t)},\xx^{(t)})$\;} $t\leftarrow t+1$\; } \caption{Central cutting-plane algorithm for solving {\em 1LAD}} \label{pseudo-code-cutting-plane} \end{algorithm} \subsubsection{Enhancement techniques} \textbf{Cut generation:} To produce effective cuts and to improve the lower bound quickly, instead of generating a cut at the current solution $(\vv^{(t)},\xx^{(t)})$, we generate it at an intermediate point $(\tilde{\vv},\tilde{\xx})=\tilde{\alpha} (\vv^{(t)},\xx^{(t)})+(1-\tilde{\alpha})(\vv^*,\xx^*)$, where $\tilde{\alpha}\in(0,1]$ is an appropriately-chosen scalar and $(\vv^*,\xx^*)$ is the current incumbent solution. To guarantee convergence, $\tilde{\alpha}$ must be chosen such that the produced cut cuts off the current point $(\vv^{(t)},\xx^{(t)},\eta^{(t)})$, that is, $\eta^{(t)} > f(\tilde{\vv})-n+\sum_{i\in A}\frac{v^{(t)}_i-c_i}{\tilde{v}_i-c_i}$. At each iteration of Algorithm~\ref{pseudo-code-cutting-plane}, we initialize $\tilde{\alpha}$ via line search between $\vv^{(t)}$ and $\vv^*$, that is, $\tilde{\alpha}=\arg\max_{\alpha\in [0,1]} f(\alpha \vv^{(t)} + (1-\alpha)\vv^*)$. \textbf{Avoiding unboundedness:} Since the objective function in the convex programs is of the form $\sum_{i\in A}\log(v_i-c_i)$, we require $v_i - c_i > 0$ for each agent $i$ at each iteration of CCP to be able to produce a cut (note that the cut coefficients are $1/(\hat{v}_i - c_i)$). 
However, since CCP is an outer-approximation algorithm, it is possible that in an iterate of CCP, $\hat{v}_i - c_i =0$ for some agent $i$, which makes $f(\hat{\vv})$ unbounded, and we cannot extract a cut based on this solution. We resolve this issue by taking a convex combination of $(\hat{\vv},\hat{\xx})$ and some feasible interior point $(\bar{\vv},\bar{\xx})$. We do this by choosing the closest feasible point to $(\hat{\vv},\hat{\xx})$ on the line segment from $(\hat{\vv},\hat{\xx})$ to $(\bar{\vv},\bar{\xx})$, that is, by choosing the smallest $\alpha$ such that $\alpha \bar{v}_i+(1-\alpha)\hat{v}_i-c_i\ge \epsilon$ for each $i$ and a small $\epsilon$, which yields $\alpha=\max\limits_{i\in A: \hat{v}_i=c_i}\{\frac{\epsilon}{\bar{v}_i-c_i} \}$. \textbf{Scaling of $\eta$:} Another important aspect in stabilizing CCP is choosing comparable coefficients for the variables. For a given solution $\hat{\vv}$, the coefficient of $\eta$ in a cut of the form \eqref{eq.silp} is 1, while the coefficients of the $\vv$-variables are $(\frac{1}{\hat{v}_i-c_i})_{i\in A}$, which can be much larger than 1 depending on the value of $\hat{\vv}$. For instance, when entries of the utility matrices are binary and $c_i>0$, then $\hat{v}_i-c_i<1$, and it is possible that $\frac{1}{\hat{v}_i-c_i} \gg 1$ for some agents, making the cut coefficients unbalanced. An LP solver using floating point arithmetic might not handle unbalanced cuts properly. To balance the cut, we replace $\eta$ with $\eta=\theta\gamma$, where $\theta>0$ is a fixed scalar and $\gamma$ acts as the new variable in place of $\eta$. With this change of variable, the coefficient of $\sigma$ in the cuts becomes $\|(\theta,\nabla f(\hat{\vv}))\|_2$. In our implementation, we choose $\theta$ as the largest coefficient of the $\vv$-variables in the first cut produced, that is $\theta=1/\min_{i \in A}\{\hat{v}_i-c_i\}$. 
Note that we may dynamically change $\theta$, but we use the same initial $\theta$ for stabilizing all subsequent cuts. \textbf{Reoptimization:} At each iteration of Algorithm~\ref{pseudo-code-cutting-plane}, we add a single constraint of the form \eqref{eq:cp-cut} to the current LP approximation of {\em 1LAD}. Using the Dual Simplex algorithm, we can reuse the information obtained in the previous iteration (e.g., the basis), and thus avoid solving the LPs from scratch at each iteration. \subsubsection{Extension to other models} Algorithm~\ref{pseudo-code-cutting-plane} extends to {\em 2LF}, {\em 1SAD}, and {\em 1NAD} easily by replacing the objective function and the constraints with the suitable function and constraints, respectively. For instance, for {\em 2LF}, the cutting planes take the form of $$\eta\le \sum_{i\in A}\log(\hat{v}_i)+\sum_{j\in J}\log(\hat{v}_j)-2n+\sum_{i\in A}\frac{v_i}{\hat{v}_i}+\sum_{j\in J}\frac{v_j}{\hat{v}_j}.$$ Note that, in {\em 2LF}, we may eliminate the $x_{ij}$ variables for which both $u_{ij}$ and $w_{ij}$ are zero. In {\em 1SAD} and {\em 1NAD} the constraints that define $\mathcal{S}$ are updated accordingly. \subsection{Frank-Wolfe Algorithm}\label{sec:fw} The Frank-Wolfe (FW) method \cite{frank1956algorithm, jaggi2013revisiting} is one of the simplest and earliest known iterative algorithms for solving non-linear convex optimization problems of the form $$\max_{\xx\in \mathcal{X}}f(\xx),$$ where $f$ is a concave function and $\mathcal{X}$ is a compact convex set. The underlying principle in the Frank-Wolfe method is to replace the non-linear objective function $f$ with its linear approximation $\tilde{f}(\xx)=f(\xx^{(0)})+\nabla f(\xx^{(0)})^{\top}(\xx-\xx^{(0)})$ at a trial point $\xx^{(0)}\in \mathcal{X}$, and solve the simpler problem $$\max_{\xx\in \mathcal{X}}\tilde{f}(\xx)$$ to produce an ``atom'' solution $\hat{\xx}$. 
The algorithm then iterates by performing line search between $\xx^{(0)}$ and $\hat{\xx}$ to produce the next trial point $\xx^{(1)}$ as a convex combination of $\xx^{(0)}$ and $\hat{\xx}$. Algorithm~\ref{pseudo-code-frank-wolfe} presents the FW algorithm for solving instances of {\em 1LAD}, in which the objective function $f$ is defined as $f(\xx)=\sum_{i\in A}\log(\sum_{j\in G}u_{ij}x_{ij}-c_i)$ and the feasible region is defined as $$\mathcal{X}=\{\xx\in \mathbb{R}_+^{n^2}: \sum_{j\in G}x_{ij}=1\quad \forall i\in A, \sum_{i\in A}x_{ij}=1\quad \forall j\in G\}.$$ \textbf{Producing an atom:} The Frank-Wolfe method is particularly useful when $\mathcal{X}$ is a polyhedron and one can exploit its combinatorial properties. In the case of {\em 1LAD} (also {\em 1LF} and {\em 2LF}), the feasible region $\mathcal{X}$ corresponds to a matching polyhedron. Hence, at each iteration of Algorithm~\ref{pseudo-code-frank-wolfe}, the \textit{atom} is an integral perfect matching produced by solving a matching problem. The optimal solution produced by FW is therefore a convex combination of these integral perfect matchings. \begin{algorithm}[t!] 
Set $t\leftarrow 0$ and find an initial perfect matching $\xx^{(0)}$\; \While{not converged}{ Compute $g_{ij}=\frac{\partial}{\partial x_{ij}} f(\xx^{(t)})=\frac{u_{ij}}{v^{(t)}_i-c_i}$, where $v^{(t)}_i=\sum_{j\in G}u_{ij}x^{(t)}_{ij}$\; Compute perfect matching $\hat{\xx}^{(t)}$ by solving the following problem: \[ \hat{\xx}^{(t)}=\arg\max_{\xx\in \mathcal{X}} \; \sum_{i\in A}\sum_{j\in G}g_{ij}x_{ij}\; \] Compute the step-size $\gamma^{(t)}$ using the following line search: \[ \gamma^{(t)}=\arg\max_{\gamma \in [0,1]} \; f\left((1-\gamma)\xx^{(t)}+\gamma \hat{\xx}^{(t)}\right)\; \] Update $\xx^{(t+1)}=(1-\gamma^{(t)})\xx^{(t)}+\gamma^{(t)} \hat{\xx}^{(t)}$\; $t\leftarrow t+1$\; } \caption{Frank-Wolfe algorithm for solving {\em 1LAD}} \label{pseudo-code-frank-wolfe} \end{algorithm} \textbf{Convergence}: In general, the Frank-Wolfe algorithm admits a sublinear convergence rate \cite{frank1956algorithm,jaggi2013revisiting}; that is, after $\mathcal{O}(\frac{1}{\epsilon})$ iterations, the iterate $\xx^{(t)}$ is an $\epsilon$-approximate solution to problem {\em 1LAD}. \begin{theorem}[Jaggi \cite{jaggi2013revisiting}] \label{thm.fw-rate} The Frank-Wolfe Algorithm~\ref{pseudo-code-frank-wolfe} converges to the optimal solution of {\em 1LAD} at a sublinear rate. \end{theorem} Furthermore, as discussed in \cite{jaggi2013revisiting}, concavity of $f$ implies that at iteration $t$ of Algorithm~\ref{pseudo-code-frank-wolfe}, $\sum_{i,j}g_{ij}(\hat{x}^{(t)}_{ij}-x^{(t)}_{ij})\ge f(\xx^*)-f(\xx^{(t)})$, where $\xx^*$ is the optimal solution.
Therefore, $\sum_{i,j}g_{ij}(\hat{x}^{(t)}_{ij}-x^{(t)}_{ij})$ provides an upper bound on the optimality gap of the iterate $\xx^{(t)}$, and we may numerically assess convergence of the FW algorithm using \begin{align} \text{Gap}=\frac{\sum_{i,j}g_{ij}(\hat{x}^{(t)}_{ij}-x^{(t)}_{ij})}{|f(\xx^{(t)})|}.\label{eq:gap-fw} \end{align} \textbf{Extension to other models}: As in the cutting-plane method, Algorithm~\ref{pseudo-code-frank-wolfe} may be extended to other models. For {\em 2LF}, it suffices to compute the gradient $g_{ij}$ as $\frac{\partial}{\partial x_{ij}} f(\xx^{(t)})={u_{ij}}/{v^{(t)}_i}+{w_{ij}}/{v^{(t)}_j}$. For {\em 1SAD} and {\em 1NAD}, the general framework still applies; however, since the feasible region no longer defines a matching polyhedron, finding a new atom $\hat{\xx}^{(t)}$ is not straightforward. Our preliminary computational experiments show that a na\"{i}ve implementation of the Frank-Wolfe algorithm does not scale to large instances of these problems. \section{Computational Results}\label{sec.Exp} To assess the scalability of the proposed algorithms, we conducted extensive computational experiments on instances of various difficulty levels for each matching market model. We coded our algorithms in \texttt{C\#} and solved the LPs using the \texttt{ILOG Concert} library and the \texttt{CPLEX 12.10} solver. All experiments were conducted on a Dell desktop equipped with an Intel(R) Xeon(R) CPU E5-2680 v3 at 2.50GHz with 8 cores and 32 GB of memory, running a 64-bit Windows 10 operating system. We used the Dual Simplex method for solving the LPs in Algorithm \ref{pseudo-code-cutting-plane} by setting the \texttt{RootAlgorithm} parameter to \texttt{Cplex.Algorithm.Dual}.
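For reference, the Frank-Wolfe loop of Algorithm~\ref{pseudo-code-frank-wolfe}, together with the gap criterion \eqref{eq:gap-fw}, can be sketched in a few lines of Python (a toy dense-matrix sketch, not the \texttt{C\#} implementation used in our experiments; it assumes the identity matching is feasible, uses \texttt{scipy}'s \texttt{linear\_sum\_assignment} for the matching subproblem, and replaces the exact line search with a grid search):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def frank_wolfe_1lad(u, c, max_iter=1000, tol=1e-7):
    """Frank-Wolfe for 1LAD: maximize sum_i log(sum_j u[i,j] x[i,j] - c[i])
    over the perfect-matching polytope (toy dense-matrix sketch)."""
    n = u.shape[0]
    x = np.eye(n)  # initial perfect matching; assumes u[i, i] > c[i]

    def f(y):
        s = (u * y).sum(axis=1) - c
        return np.log(s).sum() if np.all(s > 0) else -np.inf

    for _ in range(max_iter):
        v = (u * x).sum(axis=1)                     # v_i = sum_j u_ij x_ij
        grad = u / (v - c)[:, None]                 # g_ij = u_ij / (v_i - c_i)
        rows, cols = linear_sum_assignment(grad, maximize=True)
        x_hat = np.zeros_like(x)                    # atom: max-weight matching
        x_hat[rows, cols] = 1.0
        gap = (grad * (x_hat - x)).sum()            # upper bound on f(x*) - f(x)
        if gap <= tol:
            break
        gammas = np.linspace(0.0, 1.0, 101)         # crude grid line search
        gamma = max(gammas, key=lambda g: f((1 - g) * x + g * x_hat))
        x = (1 - gamma) * x + gamma * x_hat
    return x
```

The returned iterate is a convex combination of the integral perfect matchings produced along the way, exactly as described in Section~\ref{sec:fw}.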
Although the matching problems in Algorithm \ref{pseudo-code-frank-wolfe} can be solved by specialized algorithms, after preliminary experiments we found that solving them with a general-purpose LP solver via the Primal Simplex method benefits from a better warm-start mechanism and makes the overall implementation simpler. We used the Primal Simplex method by setting the parameter \texttt{RootAlgorithm} to \texttt{Cplex.Algorithm.Primal}. We terminated Algorithms~\ref{pseudo-code-cutting-plane} and \ref{pseudo-code-frank-wolfe} upon reaching an optimality gap of $10^{-7}$, a running time of 3600 seconds, or 1000 iterations, whichever came first. \subsection{Computational Results for {\em 1LAD}, {\em 1LF} and {\em 2LF}} We start by presenting the results for matching market models with linear utility functions. We performed computational experiments on {\em 1LAD}, {\em 1LF} and {\em 2LF} by producing random utility matrices $u$ (and $w$ in {\em 2LF}) as follows. We considered two general scenarios: (a) \textbf{binary}, in which the entries of matrices $u$ and $w$ were drawn from $\{0,1\}$, and (b) \textbf{nonbinary}, in which the entries of matrices $u$ and $w$ were general integer values. In both scenarios, $u_{ij}$ was set to 0 with probability $1-\rho$, where $\rho\in\{\frac{1}{20},\frac{1}{3},\frac{2}{3}\}$ represents the density of the utility matrix. For the nonbinary case, positive values of $u_{ij}$ were drawn uniformly from the set $\{1,2,\dots,20\}$. In {\em 1LAD}, the parameters $c_i$ were uniformly chosen from the set $\{\frac{\bar{u}}{3},\frac{\bar{u}}{4},0\}$, where $\bar{u}=\frac{1}{4}\max_{ij}\{ u_{ij}\}$, to ensure feasibility. Tables~\ref{tab:results-1lad}, \ref{tab:results-1lf} and \ref{tab:results-2lf} present the computational results for models {\em 1LAD}, {\em 1LF} and {\em 2LF}, respectively.
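The instance generation just described can be sketched as follows (a Python sketch with hypothetical function and variable names; the RNG details of our \texttt{C\#} generator may differ):

```python
import numpy as np

def random_1lad_instance(n, rho, binary=False, seed=0):
    """Random 1LAD instance following the generation scheme in the text
    (a sketch; not the exact generator used for the reported tables)."""
    rng = np.random.default_rng(seed)
    mask = rng.random((n, n)) < rho                      # u_ij = 0 w.p. 1 - rho
    vals = np.ones((n, n)) if binary else rng.integers(1, 21, (n, n))
    u = np.where(mask, vals, 0.0)
    u_bar = 0.25 * u.max()                               # \bar{u} = (1/4) max_ij u_ij
    c = rng.choice([u_bar / 3.0, u_bar / 4.0, 0.0], size=n)
    return u, c
```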
In these tables, the values under the columns ``Time'', ``Gap'' and ``Iter.'' represent the running time (in seconds), the optimality gap (as per equations \eqref{eq:gap-cp} and \eqref{eq:gap-fw}) and the number of iterations, respectively. Each entry represents the average value over 5 randomly generated instances for each pair of $n$ (number of agents/goods) and $\rho$ (density of the utility matrices). Whenever a column is missing, the corresponding values were 0 across all experiments. Both CCP and FW are able to solve all the {\em 1LAD} and {\em 1LF} instances and the majority of the {\em 2LF} instances to optimality within the given time/iteration limits. We observe that even the largest instances of the one-sided market models {\em 1LAD} and {\em 1LF} are solved in less than two minutes, while instances of {\em 2LF} prove to be computationally more challenging; still, the optimality gaps are negligible for large instances of {\em 2LF}. FW outperforms CCP on larger instances in terms of computation time, particularly for {\em 1LF} and {\em 1LAD}, suggesting its capacity for handling even larger instances. \begin{table}[t!] \centering \footnotesize \begin{tabular}{@{}rlrrrrrr@{}} \toprule \multicolumn{1}{l}{} & & \multicolumn{2}{c}{\textbf{binary}} & \multicolumn{4}{c}{\textbf{nonbinary}} \\ \cmidrule(l){3-8} \multicolumn{1}{l}{$n$} & $\rho$ & \multicolumn{1}{l}{Time (CCP)} & \multicolumn{1}{l}{Time (FW)} & \multicolumn{1}{l}{Time (CCP)} & \multicolumn{1}{l}{Iter. (CCP)} & \multicolumn{1}{l}{Time (FW)} & \multicolumn{1}{l}{Iter.
(FW)} \\ \midrule 10 & 0.33 & 0.008 & 0.000 & 0.008 & 15.5 & 0.000 & 3.0 \\ & 0.67 & 0.003 & 0.000 & 0.000 & 4.2 & 0.003 & 1.6 \\ 20 & 0.33 & 0.006 & 0.000 & 0.006 & 15.2 & 0.006 & 7.8 \\ & 0.67 & 0.006 & 0.000 & 0.003 & 4.6 & 0.003 & 2.2 \\ 50 & 0.33 & 0.003 & 0.000 & 0.013 & 14.0 & 0.013 & 3.2 \\ & 0.67 & 0.013 & 0.003 & 0.013 & 5.4 & 0.025 & 2.2 \\ 100 & 0.33 & 0.037 & 0.009 & 0.053 & 14.0 & 0.081 & 6.6 \\ & 0.67 & 0.060 & 0.000 & 0.065 & 5.2 & 0.116 & 8.6 \\ 200 & 0.05 & 0.079 & 0.094 & 0.120 & 70.0 & 0.210 & 11.2 \\ & 0.33 & 0.106 & 0.116 & 0.097 & 16.4 & 0.359 & 6.0 \\ & 0.67 & 0.081 & 0.141 & 0.204 & 1.0 & 0.134 & 1.0 \\ 500 & 0.05 & 0.125 & 0.025 & 0.421 & 31.0 & 0.610 & 7.5 \\ & 0.33 & 0.881 & 0.669 & 1.480 & 1.0 & 0.833 & 1.0 \\ & 0.67 & 3.489 & 0.960 & 1.422 & 1.0 & 1.120 & 1.0 \\ 1000 & 0.05 & 0.609 & 0.088 & 2.728 & 29.5 & 16.555 & 11.0 \\ & 0.33 & 9.802 & 3.422 & 3.437 & 1.0 & 3.463 & 1.0 \\ & 0.67 & 32.402 & 11.756 & 8.964 & 1.0 & 15.104 & 1.0 \\ 2000 & 0.05 & 3.726 & 0.438 & 9.468 & 5.0 & 9.525 & 11.0 \\ & 0.33 & 82.599 & 27.088 & 24.307 & 1.0 & 24.703 & 1.0 \\ & 0.67 & 274.854 & 90.766 & 65.958 & 1.0 & 74.131 & 1.0 \\ \bottomrule \end{tabular} \caption{Computational results for {\em 1LAD}} \label{tab:results-1lad} \end{table} \begin{table}[h!] 
\centering \footnotesize \begin{tabular}{@{}rlrrrrrrrr@{}} \toprule & & \multicolumn{4}{c}{\textbf{binary}} & \multicolumn{4}{c}{\textbf{nonbinary}} \\ \cmidrule(l){3-10} & & \multicolumn{2}{c}{CCP} & \multicolumn{2}{c}{FW} & \multicolumn{2}{c}{CCP} & \multicolumn{2}{c}{FW} \\ \cmidrule(l){3-10} $n$ & $\rho$ & \multicolumn{1}{l}{Time} & \multicolumn{1}{l}{Iter.} & \multicolumn{1}{l}{Time} & \multicolumn{1}{l}{Iter.} & \multicolumn{1}{l}{Time} & \multicolumn{1}{l}{Iter.} & \multicolumn{1}{l}{Time} & \multicolumn{1}{l}{Iter.} \\ \midrule 10 & 0.33 & 0.008 & 28.0 & 0.000 & 3.6 & 0.000 & 15.0 & 0.027 & 205.3 \\ & 0.67 & 0.003 & 1.0 & 0.000 & 1.0 & 0.000 & 3.2 & 0.000 & 3.6 \\ 20 & 0.33 & 0.009 & 1.0 & 0.000 & 1.0 & 0.006 & 13.6 & 0.012 & 69.0 \\ & 0.67 & 0.003 & 1.0 & 0.000 & 1.0 & 0.003 & 4.4 & 0.003 & 4.8 \\ 50 & 0.33 & 0.012 & 1.0 & 0.003 & 1.0 & 0.009 & 14.6 & 0.022 & 36.6 \\ & 0.67 & 0.025 & 1.0 & 0.003 & 1.0 & 0.012 & 8.4 & 0.013 & 11.0 \\ 100 & 0.33 & 0.056 & 1.0 & 0.003 & 1.0 & 0.069 & 18.6 & 0.094 & 16.8 \\ & 0.67 & 0.078 & 1.0 & 0.009 & 1.0 & 0.125 & 17.0 & 0.106 & 16.6 \\ 200 & 0.05 & 0.083 & 1.0 & 0.008 & 1.0 & 1.352 & 67.2 & 0.218 & 84.2 \\ & 0.33 & 0.136 & 1.0 & 0.101 & 1.0 & 1.753 & 26.2 & 0.267 & 17.0 \\ & 0.67 & 0.278 & 1.0 & 0.144 & 1.0 & 0.422 & 1.0 & 0.410 & 1.0 \\ 500 & 0.05 & 0.149 & 1.0 & 0.025 & 1.0 & 0.397 & 29.0 & 0.758 & 21.5 \\ & 0.33 & 0.755 & 1.0 & 0.687 & 1.0 & 1.354 & 1.0 & 0.844 & 1.0 \\ & 0.67 & 2.849 & 1.0 & 0.994 & 1.0 & 1.380 & 1.0 & 1.120 & 1.0 \\ 1000 & 0.05 & 0.562 & 1.0 & 0.087 & 1.0 & 5.961 & 49.5 & 11.633 & 18.5 \\ & 0.33 & 5.843 & 1.0 & 3.400 & 1.0 & 3.844 & 1.0 & 3.568 & 1.0 \\ & 0.67 & 16.917 & 1.0 & 11.844 & 1.0 & 9.250 & 1.0 & 15.208 & 1.0 \\ 2000 & 0.05 & 4.196 & 1.0 & 0.425 & 1.0 & 14.500 & 7.0 & 12.103 & 15.5 \\ & 0.33 & 38.568 & 1.0 & 27.181 & 1.0 & 27.557 & 1.0 & 24.630 & 1.0 \\ & 0.67 & 143.406 & 1.0 & 90.941 & 1.0 & 73.198 & 1.0 & 73.724 & 1.0 \\ \bottomrule \end{tabular} \caption{Computational results for {\em 
1LF}} \label{tab:results-1lf} \end{table} \begin{table}[h!] \centering \footnotesize \setlength{\tabcolsep}{2.5pt} \scalebox{0.9}{ \begin{tabular}{@{}rlrrrrrrrrrrrr@{}} \toprule & & \multicolumn{6}{c}{\textbf{binary}} & \multicolumn{6}{c}{\textbf{nonbinary}} \\ \cmidrule(l){3-14} & & \multicolumn{3}{c}{CCP} & \multicolumn{3}{c}{FW} & \multicolumn{3}{c}{CCP} & \multicolumn{3}{c}{FW} \\ \cmidrule(l){3-14} $n$ & $\rho$ & \multicolumn{1}{l}{Time} & \multicolumn{1}{l}{Gap} & \multicolumn{1}{l}{Iter.} & \multicolumn{1}{l}{Time} & \multicolumn{1}{l}{Gap} & \multicolumn{1}{l}{Iter.} & \multicolumn{1}{l}{Time} & \multicolumn{1}{l}{Gap} & \multicolumn{1}{l}{Iter.} & \multicolumn{1}{l}{Time} & \multicolumn{1}{l}{Gap} & \multicolumn{1}{l}{Iter.} \\ \midrule 10 & 0.33 & 0.023 & 0.00\% & 1.0 & 0.070 & 0.00\% & 396.0 & 0.016 & 0.00\% & 38.3 & 0.151 & 0.02\% & 1001.0 \\ & 0.67 & 0.012 & 0.00\% & 1.0 & 0.009 & 0.00\% & 1.0 & 0.009 & 0.00\% & 24.2 & 0.038 & 0.00\% & 348.6 \\ 20 & 0.33 & 0.028 & 0.00\% & 1.0 & 0.141 & 0.00\% & 684.0 & 0.041 & 0.00\% & 54.6 & 0.317 & 0.01\% & 939.8 \\ & 0.67 & 0.009 & 0.00\% & 1.0 & 0.000 & 0.00\% & 1.0 & 0.022 & 0.00\% & 38.0 & 0.056 & 0.00\% & 175.8 \\ 50 & 0.33 & 0.037 & 0.00\% & 1.0 & 0.003 & 0.00\% & 3.0 & 0.350 & 0.00\% & 134.6 & 1.342 & 0.01\% & 622.8 \\ & 0.67 & 0.053 & 0.00\% & 1.0 & 0.003 & 0.00\% & 1.0 & 0.078 & 0.00\% & 50.8 & 0.141 & 0.00\% & 120.6 \\ 100 & 0.33 & 0.134 & 0.00\% & 1.0 & 0.010 & 0.00\% & 1.0 & 1.381 & 0.00\% & 196.6 & 2.743 & 0.01\% & 348.0 \\ & 0.67 & 0.228 & 0.00\% & 1.0 & 0.016 & 0.00\% & 1.0 & 0.265 & 0.00\% & 33.4 & 0.537 & 0.00\% & 37.6 \\ 200 & 0.05 & 2501.869 & 0.02\% & 1000.0 & 57.796 & 0.07\% & 1000.0 & 224.755 & 0.00\% & 1000.0 & 257.177 & 0.04\% & 1000.0 \\ & 0.33 & 0.275 & 0.00\% & 1.0 & 0.109 & 0.00\% & 1.0 & 6.143 & 0.00\% & 188.0 & 8.663 & 0.00\% & 148.8 \\ & 0.67 & 0.303 & 0.00\% & 1.0 & 0.128 & 0.00\% & 1.0 & 1.573 & 0.00\% & 54.8 & 9.025 & 0.00\% & 38.0 \\ 500 & 0.05 & 3608.480 & 0.00\% & 388.5 & 166.305 
& 0.01\% & 1000.0 & 2638.611 & 0.00\% & 1000.0 & 1090.482 & 0.03\% & 1000.0 \\ & 0.33 & 1.146 & 0.00\% & 1.0 & 0.797 & 0.00\% & 1.0 & 23.548 & 0.00\% & 150.0 & 17.095 & 0.00\% & 59.8 \\ & 0.67 & 3.198 & 0.00\% & 1.0 & 1.625 & 0.00\% & 1.0 & 12.360 & 0.00\% & 66.0 & 39.471 & 0.00\% & 24.6 \\ 1000 & 0.05 & 3616.122 & 0.00\% & 262.0 & 773.132 & 0.11\% & 1000.0 & 3678.832 & 0.00\% & 656.0 & 3393.494 & 0.02\% & 913.0 \\ & 0.33 & 7.692 & 0.00\% & 1.0 & 9.244 & 0.00\% & 1.0 & 153.357 & 0.00\% & 131.3 & 363.582 & 0.00\% & 37.0 \\ & 0.67 & 22.061 & 0.00\% & 1.0 & 17.431 & 0.00\% & 1.0 & 105.834 & 0.00\% & 56.0 & 412.022 & 0.00\% & 20.2 \\ 2000 & 0.05 & 3636.086 & 0.00\% & 154.0 & 1781.018 & 0.06\% & 895.6 & 3608.648 & 0.00\% & 227.5 & 3617.831 & 0.08\% & 120.0 \\ & 0.33 & 40.645 & 0.00\% & 1.0 & 68.122 & 0.00\% & 1.0 & 923.272 & 0.00\% & 109.0 & 2540.135 & 0.00\% & 26.7 \\ & 0.67 & 172.478 & 0.00\% & 1.0 & 126.075 & 0.00\% & 1.0 & 811.562 & 0.00\% & 44.0 & 3384.356 & 0.00\% & 18.3 \\ \bottomrule \end{tabular}} \caption{Computational results for {\em 2LF}} \label{tab:results-2lf} \end{table} \subsection{Computational Results for {\em 1SAD} and {\em 1NAD}} We generated random instances for {\em 1SAD} by constructing piece-wise linear concave utility functions, each with $K$ segments of equal size $\frac{1}{K}$. To ensure that the slopes of the segments for each pair $(i,j)$ (i.e., $u_{ijk}$) are decreasing (i.e., $u_{ij1}>u_{ij2}>\dots>u_{ijK}$), we first generated $K$ random values $\sigma_{ijk}$ uniformly drawn from the set $\{1,\dots,20\}$, and then set $u_{ijk}=\sum_{l=k}^{K}\sigma_{ijl}$. For comparability across experiments, we scaled the $u_{ijk}$ values such that the area below the utility function is equal to $\frac{1}{2}\tilde{v}$, where $\tilde{v}$ is uniformly drawn from the set $\{1,\dots,20\}$.
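The slope generation and area-based rescaling just described can be sketched as follows (a Python sketch with hypothetical names; we assume the area below each piecewise-linear utility is computed with the trapezoidal rule over the $K$ breakpoints, starting from a utility of 0 at the origin):

```python
import numpy as np

def random_1sad_utilities(n, K, seed=0):
    """Decreasing slopes u_{ijk} of piecewise-linear concave utilities with
    K equal segments of width 1/K (a sketch of the scheme in the text)."""
    rng = np.random.default_rng(seed)
    sigma = rng.integers(1, 21, (n, n, K)).astype(float)
    u = sigma[..., ::-1].cumsum(axis=-1)[..., ::-1]   # u_ijk = sum_{l>=k} sigma_ijl
    # trapezoidal area below each utility over [0, 1]
    F = u.cumsum(axis=-1) / K                         # utility at breakpoints m/K
    F0 = np.concatenate([np.zeros((n, n, 1)), F[..., :-1]], axis=-1)
    area = (F + F0).sum(axis=-1) / (2 * K)
    v_tilde = rng.integers(1, 21, (n, n)).astype(float)
    u *= (0.5 * v_tilde / area)[..., None]            # rescale so area = v~/2
    return u, v_tilde
```

Since the utility is linear in the slopes, rescaling the slopes rescales the area by the same factor, so the scaled area equals $\frac{1}{2}\tilde{v}$ exactly while the slopes remain decreasing.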
For {\em 1NAD}, we considered $K$ hyperplanes of the form $\sum_{j\in G}a^k_{ij}x_{ij}+b^k_i$ for each $i\in A$, and generated each coefficient $a^k_{ij}$ by multiplying a value uniformly drawn from the set $\{0,1,\dots,20\}$ by $\frac{2}{3}$, and each intercept $b^k_{i}$ by multiplying a value uniformly drawn from the same set by $\frac{1}{3}$. If $b^k_i>0$ for all $k$, then we randomly set one of them to 0. Tables~\ref{tab:results-1sad} and \ref{tab:results-1nad} present the computational results for {\em 1SAD} and {\em 1NAD}, respectively, across different choices of $n$ and $K$ using the CCP Algorithm~\ref{pseudo-code-cutting-plane}. As expected, in both models the problems become more challenging as $K$ increases, yet the CCP algorithm is able to find the optimal solution or yield a small optimality gap in both cases. Interestingly, {\em 1SAD} instances are significantly easier to solve than {\em 1NAD} instances, and CCP is able to solve all {\em 1SAD} instances with up to 2000 agents and 10 segments to optimality in less than 10 minutes. This highlights the capacity of CCP for solving even larger instances of {\em 1SAD}. \begin{table}[h!]
\centering \footnotesize \begin{tabular}{@{}rrrrrrr@{}} \toprule \multicolumn{1}{l}{} & \multicolumn{3}{c}{$K=5$} & \multicolumn{3}{c}{$K=10$} \\ \cmidrule(l){2-7} \multicolumn{1}{c}{$n$} & \multicolumn{1}{c}{Time} & \multicolumn{1}{c}{Gap} & \multicolumn{1}{c}{Iter.} & \multicolumn{1}{c}{Time} & \multicolumn{1}{c}{Gap} & \multicolumn{1}{c}{Iter.} \\ \midrule 10 & 0.012 & 0.00\% & 7.1 & 0.011 & 0.00\% & 7.6 \\ 20 & 0.014 & 0.00\% & 7.7 & 0.033 & 0.00\% & 8.1 \\ 50 & 0.088 & 0.00\% & 7.0 & 0.158 & 0.00\% & 7.1 \\ 100 & 0.386 & 0.00\% & 7.0 & 0.661 & 0.00\% & 6.9 \\ 200 & 2.328 & 0.00\% & 8.3 & 2.764 & 0.00\% & 6.5 \\ 500 & 11.756 & 0.00\% & 7.8 & 18.540 & 0.00\% & 6.0 \\ 1000 & 69.812 & 0.00\% & 8.0 & 97.465 & 0.00\% & 6.0 \\ 2000 & 365.448 & 0.00\% & 7.6 & 553.306 & 0.00\% & 6.0 \\ \bottomrule \end{tabular} \caption{Computational results for {\em 1SAD}} \label{tab:results-1sad} \end{table} \begin{table}[h!] \centering \footnotesize \begin{tabular}{@{}rrrrrrr@{}} \toprule \multicolumn{1}{l}{} & \multicolumn{3}{c}{$K=5$} & \multicolumn{3}{c}{$K=10$} \\ \cmidrule(l){2-7} \multicolumn{1}{c}{$n$} & \multicolumn{1}{c}{Time} & \multicolumn{1}{c}{Gap} & \multicolumn{1}{c}{Iter.} & \multicolumn{1}{c}{Time} & \multicolumn{1}{c}{Gap} & \multicolumn{1}{c}{Iter.} \\ \midrule 10 & 0.008 & 0.00\% & 6.9 & 0.008 & 0.00\% & 6.8 \\ 20 & 0.012 & 0.00\% & 7.3 & 0.025 & 0.00\% & 5.2 \\ 50 & 0.111 & 0.00\% & 6.2 & 0.244 & 0.00\% & 5.2 \\ 100 & 0.789 & 0.00\% & 5.4 & 1.759 & 0.00\% & 5.0 \\ 200 & 4.528 & 0.00\% & 5.1 & 14.313 & 0.00\% & 5.0 \\ 500 & 83.479 & 0.00\% & 5.6 & 393.963 & 0.00\% & 5.0 \\ 1000 & 1498.531 & 0.00\% & 6.0 & $>$3600.000 & 3.02\% & 2.0 \\ 2000 & 2816.649 & 0.00\% & 6.0 & --- & --- & ---\\ \bottomrule \end{tabular} \caption{Computational results for {\em 1NAD}} \label{tab:results-1nad} \end{table} \section{Acknowledgement} The second author would like to thank Richard Zeckhauser for a very enlightening discussion on his ``wish list'' of models for matching markets. 
Several of the models considered in this paper have their origins in that discussion. \bibliographystyle{alpha}
\section{Introduction} Dependency measures $\mathcal{D}(X,Y)$ are employed in data mining to assess the strength of the dependency between two continuous or categorical variables $X$ and $Y$. If the variables are continuous, we can use Pearson's correlation to detect linear dependencies, or use more sophisticated measures, such as the Maximal Information Coefficient (MIC)~\cite{Reshef2011}, to detect \emph{non}-linear dependencies. If the variables are categorical, we can use the well-known mutual information (a.k.a.\ information gain) or the Gini gain~\cite{Kononenko1995}. Dependency measures are ubiquitously used: to infer biological networks \cite{Reshef2011}, for variable selection for classification and regression tasks~\cite{Guyon2003}, for clustering comparisons and validation~\cite{Romano2015}, as splitting criteria in random forests~\cite{Breiman2001}, and to evaluate classification accuracy~\cite{Witten2011}, to list a few. Nonetheless, a number of problems arise when the dependency $\mathcal{D}(X,Y)$ is estimated with $\hat{\mathcal{D}}(\mathcal{S}_n|X,Y)$ on a data sample $\mathcal{S}_n$ of $n$ data points: \emph{a)} even if the population value $\mathcal{D}(X,Y) = 0$ when $X$ and $Y$ are statistically independent, estimates have a high chance of being greater than 0 when $n$ is finite; \emph{b)} when comparing pairs of variables that share the same fixed population value $\mathcal{D}(X,Y)$, estimates are still dependent on the sample size $n$ and the number of categories of $X$ and $Y$. These issues diminish the utility of dependency measures in \emph{quantification} tasks. For example, MIC was proposed in~\cite{Reshef2011} as a proxy of the amount of noise in the functional dependence between $X$ and $Y$: it should ``provide a score that roughly equals the coefficient of determination $R^2$ of the data relative to the regression function'', which is 0 under complete noise and 1 in noiseless scenarios.
Nonetheless, MIC is not equal to 0 under complete noise, and MIC values are not comparable if computed on samples of different size $n$, e.g., because of the use of different datasets or because of variables with missing values: \begin{example} \label{ex:mic} Given two uniform and independent variables $X$ and $Y$ in $[0,1]$, the population value of \emph{MIC} is $0$, but the estimates $\mbox{\emph{MIC}}(\mathcal{S}_{20}|X,Y)$ on $20$ data points are higher than $\mbox{\emph{MIC}}(\mathcal{S}_{80}|X,Y)$ on $80$ data points. On average, they achieve values of $0.36$ and $0.25$, respectively. The user expects this value to be $0$. The following box plots show estimates for $10,000$ simulations. \includegraphics[scale=.45]{figs/BoxMIC} \end{example} The example above shows that the estimated MIC does not have a zero baseline for finite samples. The zero baseline property is well known in the clustering community~\cite{Romano2015}; nonetheless, this property does not hold for many dependency measures used in data mining. Problems also arise when \emph{ranking} dependencies on a finite data sample. For example, if Gini gain is used to rank the dependency of variables on the target class in random forests~\cite{Breiman2001}, variables with more categories have more chances to be ranked higher: \begin{example} \label{ex:gini} Given a variable $X_1$ with two categories and a variable $X_2$ with one more category, both independent of the target binary class $Y$, the population values of Gini gain between $X_1$ and $Y$ and between $X_2$ and $Y$ are both equal to $0$. However, when Gini gain is estimated on $100$ data points, the probability of $\mbox{\emph{Gini}}(\mathcal{S}_{100}|X_2,Y)$ being greater than $\mbox{\emph{Gini}}(\mathcal{S}_{100}|X_1,Y)$ is equal to $0.7$. The user expects $0.5$ given that $X_1$ and $X_2$ are equally unpredictive of $Y$.
\end{example} It is common practice to use the $p$-value of Gini gain to correct this bias~\cite{Dobra01}. Nonetheless, we will shortly see that $p$-values are effective only when the population value of a dependency measure is $0$. In this paper, we identify that the issues discussed in Examples~\ref{ex:mic} and~\ref{ex:gini} are due to inflated estimates arising from finite samples. Statistical properties of the distribution of the dependency measure estimator $\hat{\mathcal{D}}(\mathcal{S}_n|X,Y)$ under independence of $X$ and $Y$ can be used to adjust these estimates. The challenge is to formalize a general framework for adjusting dependency measure estimates that also addresses the shortcomings of the use of $p$-values. We make the following contributions: \begin{itemize}[topsep=1ex,itemsep=-1ex,partopsep=1ex,parsep=1ex] \item We identify common biases of dependency measure estimates due to finite samples; \item We propose a framework to adjust estimates $\hat{\mathcal{D}}(\mathcal{S}_n|X,Y)$ which is simple, yet applicable to many dependency measures because it only requires the distribution of the estimator when $X$ and $Y$ are independent; \item We experimentally demonstrate that our adjustments improve interpretability when quantifying dependency (e.g.,\ when using MIC as a proxy of the amount of noise) and accuracy when ranking dependencies (e.g.,\ when using Gini gain in random forests). \end{itemize} \section{Background} Dependency measures $\mathcal{D}(X,Y)$ are defined on the joint distribution $(X,Y)$. In data mining applications, they are estimated with $\hat{\mathcal{D}}(\mathcal{S}_n|X,Y)$ on a finite sample $\mathcal{S}_n = \{(x_k,y_k)\}$ of $n$ data points.
If variables are continuous, we can compute the amount of linear dependency with the squared Pearson's correlation coefficient: \begin{equation} r^2(\mathcal{S}_n|X,Y) \triangleq \frac{ \Big( \sum_{k=1}^n (x_k - \bar{x})(y_k - \bar{y}) \Big)^2 }{\sum_{k=1}^n (x_k - \bar{x})^2 \sum_{k=1}^n (y_k - \bar{y})^2} \end{equation} with $ \bar{x} = \frac{1}{n}\sum_{k=1}^n x_k$ and $\bar{y} = \frac{1}{n}\sum_{k=1}^n y_k$. If we are interested in \emph{non}-linear relationships, we can employ the Maximal Information Coefficient (MIC)~\cite{Reshef2011}. $\mbox{MIC}(\mathcal{S}_n|X,Y)$ is estimated as the maximum normalized mutual information across all the possible grids superimposed on the sample $\mathcal{S}_n$ to estimate the joint distribution of $X$ and $Y$. When the variables are categorical, the mutual information or $\mbox{Gini}(\mathcal{S}_n|X,Y)$ can be directly estimated using the joint empirical probability distribution between $X$ and $Y$ on the sample $\mathcal{S}_n$. See Appendix~\ref{app:def} in the supplement for formal definitions. There are three important applications of dependency measures between two variables~\cite{Reimherr2013}: \begin{description}[topsep=1ex,itemsep=-1ex,partopsep=1ex,parsep=1ex] \item[\textit{\textbf{Detection}}:] Test for the presence of dependency. For example, assess if there exists any dependence between bacterial species that colonize the gut of mammals~\cite{Reshef2011}; \item[\textit{\textbf{Quantification}}:] Summarization of the amount of dependency in an interpretable fashion. For example, when MIC is used as a proxy of the amount of noise in a relationship~\cite{Reshef2011}; \item[\textit{\textbf{Ranking}}:] Sort the relationships of different variables based on the strength of their dependency. For example, when Gini gain is used to rank predictive variables to the target class in random forests~\cite{Breiman2001}. 
\end{description} We saw in Examples~\ref{ex:mic} and~\ref{ex:gini} that when it comes to estimating dependency on data samples via $\hat{\mathcal{D}}(\mathcal{S}_n|X,Y)$, the interpretability of \emph{quantification} and the accuracy of \emph{ranking} become challenging. We claim that both tasks can take advantage of the distribution of $\hat{\mathcal{D}}$ under the following null hypothesis: \begin{definition} $\hat{\mathcal{D}}_0(\mathcal{S}_n|X,Y)$ is the distribution of $\hat{\mathcal{D}}(\mathcal{S}_n|X,Y)$ on a sample $\mathcal{S}_n$ under the \textbf{null hypothesis} that $X$ is statistically independent of $Y$. \end{definition} This null hypothesis is commonly exploited only in \emph{detection} tasks, where the distribution $\hat{\mathcal{D}}_0(\mathcal{S}_n|X,Y)$ is computed under the null and a $p$-value is computed to filter out false discoveries~\cite{Reshef2011}. Nonetheless, this null can also be used to aid \emph{quantification} and \emph{ranking}. The challenges are to identify the distribution under the null for a particular dependency measure, and to employ it in a framework to perform adjustments to the estimates. Here we discuss the use of this null hypothesis in previous research. \subsection{Use of the Null for Quantification.} \label{sec:adjquant} To our knowledge, the first instance of a systematic approach using the null distribution $\hat{\mathcal{D}}_0$ to achieve interpretability in quantification was proposed in 1960 with the $\kappa$ coefficient of inter-annotator agreement~\cite{Witten2011}. The amount of agreement $A(\mathcal{S}_n|X,Y)$ (dependency) between two annotators $X$ and $Y$ on a sample of $n$ items can be adjusted for chance by subtracting its expected value $E[A_0(\mathcal{S}_n|X,Y)]$ under the null hypothesis of independence between annotators.
The $\kappa$ coefficient is then obtained by dividing this adjusted agreement by its maximum attainable value $\max{A} - E[A_0(\mathcal{S}_n|X,Y)]$, with $\max{A} = 1$, which yields an adjusted dependency measure in the range $[0,1]$: \begin{equation} \label{eq:kappa} \kappa(\mathcal{S}_n|X,Y) = \frac{ A(\mathcal{S}_n|X,Y) - E[A_0(\mathcal{S}_n|X,Y)] }{1 - E[A_0(\mathcal{S}_n|X,Y)] } \end{equation} Other notable examples are the Adjusted Rand Index (ARI) and the Adjusted Mutual Information (AMI)~\cite{Romano2015}. We argue that this approach should be applied to many other dependency measure estimators $\hat{\mathcal{D}}$ because it improves interpretability by guaranteeing a zero baseline to $\hat{\mathcal{D}}$. Moreover, we will shortly see that it helps in comparing estimates on different samples $\mathcal{S}_n$. \subsection{Use of the Null for Ranking.} In the decision tree community, it is very well known that when selecting the variable $X$ most dependent on the target class $Y$, variables available on a small number of samples $n$ or with many categories tend to be chosen more often. Indeed, it has been shown that an unbiased selection can be obtained if the $p$-value~\cite{Dobra01} of a dependency estimate or its standardized version~\cite{Romano2014} is used rather than its raw value. Nonetheless, these techniques are unbiased only under the null hypothesis and not unbiased in general. In fact, in the next sections we will see that their use actually yields a bias towards variables induced on bigger $n$ or with fewer categories. This behavior has been overlooked in the decision tree community. \section{Adjusting Estimates for Quantification} \label{sec:willadjquant} To guarantee good interpretability in quantification tasks, dependency measure estimates should be equal to 0 on average when $X$ and $Y$ are independent, and their values should be comparable on average across data samples of different sizes.
More formally, we want: \begin{proper}[Zero Baseline]\label{prop:zerobas} If $X$ and $Y$ are independent, then $E[\hat{\mathcal{D}}(\mathcal{S}_n|X,Y)]=0$ for all $n$. \end{proper} \begin{proper}[Quantification Unbiasedness]\label{prop:qunbias} If $\mathcal{D}(X_1,Y_1) = \mathcal{D}(X_2,Y_2)$, then $E[\hat{\mathcal{D}}(\mathcal{S}_n|X_1,Y_1)] = E[\hat{\mathcal{D}}(\mathcal{S}_m|X_2,Y_2)]$ for all $n$ and $m$. \end{proper} We saw in Example~\ref{ex:mic} that MIC does not satisfy either property. Therefore, we propose an adjustment that can be applied to MIC and, in general, to any dependency estimator $\hat{\mathcal{D}}$: \begin{definition}[Adjustment for Quantification] \label{def:adjquant} \[ \mbox{\emph{A}}\hat{\mathcal{D}}(\mathcal{S}_n|X,Y) \triangleq \frac{\hat{\mathcal{D}}(\mathcal{S}_n|X,Y) - E[\hat{\mathcal{D}}_0(\mathcal{S}_n|X,Y)]}{\max{\hat{\mathcal{D}}(\mathcal{S}_n|X,Y)} - E[\hat{\mathcal{D}}_0(\mathcal{S}_n|X,Y)]} \] is the adjustment of $\hat{\mathcal{D}}(\mathcal{S}_n|X,Y)$, where $\max{\hat{\mathcal{D}}(\mathcal{S}_n|X,Y)}$ and $E[\hat{\mathcal{D}}_0(\mathcal{S}_n|X,Y)]$ are respectively the maximum of $\hat{\mathcal{D}}$, and its expected value under the null. \end{definition} $\mbox{A}\hat{\mathcal{D}}(\mathcal{S}_n|X,Y)$ always has a zero baseline (Property~\ref{prop:zerobas}), being 0 on average when $X$ and $Y$ are independent, and attains a maximum value of 1. This adjustment can be applied to $r^2$ and MIC to increase their interpretability when they are used as proxies of the amount of noise in a linear relationship and a functional relationship, respectively.
We just have to identify their distribution on the sample $\mathcal{S}_n$ under the null: \begin{itemize}[topsep=1ex,itemsep=-1ex,partopsep=1ex,parsep=1ex] \item $r^2_0(\mathcal{S}_n|X,Y)$: follows a Beta distribution with parameters $\frac{1}{2}$ and $\frac{n-2}{2}$\cite{Giles}; \item $\mbox{MIC}_0(\mathcal{S}_n|X,Y)$: this distribution can be computed using $s=1,\dots,S$ Monte Carlo permutations $\mbox{MIC}^{(s)}_0$ of MIC~\cite{Reshef2011}. See Appendix~\ref{app:nullmic}. \end{itemize} \noindent Therefore the adjusted Pearson's correlation squared $r^2$ and the adjusted MIC are: \begin{equation} \label{eq:ar2} \mbox{A}r^2(\mathcal{S}_n|X,Y) = \frac{r^2(\mathcal{S}_n|X,Y) - \frac{1}{n-1}}{1 - \frac{1}{n-1}} \end{equation} \begin{equation} \label{eq:amic} \mbox{AMIC}(\mathcal{S}_n|X,Y) = \frac{\mbox{MIC}(\mathcal{S}_n|X,Y) - \mbox{EMIC}_0 }{1 - \mbox{EMIC}_0} \end{equation} where $E[r^2_0(\mathcal{S}_n|X,Y)] = \frac{1}{n-1}$ and $\mbox{EMIC}_0 = \frac{1}{S} \sum_{s = 1}^S \mbox{MIC}_0^{(s)}$. $\mbox{EMIC}_0$ converges to $E[\mbox{MIC}_0(\mathcal{S}_n|X,Y)]$ at the limit of infinite permutations. However, good estimation accuracy can be obtained even with few permutations because of the law of large numbers~\cite{Good2005}. In the next section we will see how our adjustments satisfy Property~\ref{prop:zerobas} and Property~\ref{prop:qunbias}. \subsection{Experiments with Pearson Correlation and MIC.} We aim to experimentally verify the zero baseline Property~\ref{prop:zerobas} and that our adjustment in Definition~\ref{def:adjquant} enables better \emph{interpretability}. We generate a linear relationship between a uniformly distributed $X$ in $[0,1]$ and $Y$ on $n=30$ points adding different percentages of white noise. We compare $r^2$ and $\mbox{A}r^2$. Each white noise level is obtained by substituting a given percentage of points from the relationship and assigning to the $Y$ coordinate a random value in $[0,1]$. 
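The adjustment of Equation~\eqref{eq:ar2} and the white-noise scheme just described can be sketched as follows (a minimal Python sketch with hypothetical function names):

```python
import numpy as np

def adjusted_r2(x, y):
    """A r^2: subtract E[r^2_0] = 1/(n-1) under the null and renormalize."""
    n = len(x)
    r2 = np.corrcoef(x, y)[0, 1] ** 2
    e0 = 1.0 / (n - 1)
    return (r2 - e0) / (1.0 - e0)

def noisy_linear_sample(n=30, noise=1.0, seed=0):
    """y = x on [0,1], with a fraction `noise` of points getting a uniform
    random y coordinate (the white-noise scheme described above)."""
    rng = np.random.default_rng(seed)
    x = rng.random(n)
    y = x.copy()
    idx = rng.choice(n, size=int(round(noise * n)), replace=False)
    y[idx] = rng.random(len(idx))
    return x, y
```

Averaging \texttt{adjusted\_r2} over many complete-noise samples gives values close to 0, while the raw $r^2$ averages about $\frac{1}{n-1}$.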
Figure~\ref{fig:noise_r2} shows the average $r^2$ and $\mbox{A}r^2$ for 2,000 simulated relationships with a given percentage of white noise: $r^2$ is not zero on average when the amount of noise is 100\% (last plot on the right). On the other hand, $\mbox{A}r^2$ is very close to zero when there is complete noise, and it fully exploits its range of values from one to zero, mapping the domain from 0\% to 100\% noise. This yields more interpretability and enables $\mbox{A}r^2$ to be used as a \emph{proxy to quantify the amount of noise in linear relationships}. Similarly, we generated a quadratic relationship between $X$ and $Y$ in $[0,1]\times [0,1]$ on $n=60$ points with different levels of noise to compare MIC and AMIC. Figure~\ref{fig:noise_MIC} shows that the value of MIC, computed with default parameters~\cite{Reshef2011}, is about 0.26 on average under complete noise. AMIC, computed with $S = 30$ permutations, is instead very close to zero and better exploits its range of values from one to zero. AMIC is more interpretable than MIC and might be used more intuitively as a \emph{proxy for the amount of noise in a functional relationship}. \begin{figure*}[t] \hspace*{-.7cm} \includegraphics[scale=.65]{figs/Noise_r2} \caption{Average value of $r^2$ and A$r^2$ for different percentages of white noise. \emph{Linear} relationship between $X$ and $Y$ induced on $n=30$ points in $[0,1]\times[0,1]$. A$r^2$ becomes zero on average under 100\% noise, enabling a more interpretable range of variation.} \label{fig:noise_r2} \vspace{-.5em} \end{figure*} \begin{figure*}[t] \hspace*{-.7cm} \includegraphics[scale=.65]{figs/Noise_MIC} \caption{Average value of MIC and AMIC for different percentages of white noise. \emph{Quadratic} relationship between $X$ and $Y$ induced on $n=60$ points in $[0,1]\times[0,1]$.
AMIC becomes zero on average on 100\% noise, enabling a more interpretable range of variation.} \label{fig:noise_MIC} \vspace{-.7em} \end{figure*} The average value of a dependency estimator should not be biased with regard to the sample $\mathcal{S}_n$, as stated in Property~\ref{prop:qunbias}. In Figure~\ref{fig:noiseprc}, we show that $r^2$ and MIC suffer from this problem: their estimates are higher on average when $n$ is smaller. Figure~\ref{fig:noiseprc} shows the average value of raw and adjusted measures on 2,000 simulations for different levels of noise and sample size $n$: $r^2$ and $\mbox{A}r^2$ are compared on linear relationships; MIC and AMIC are compared on linear, quadratic, cubic, and 4th root relationships. Neither the zero baseline Property~\ref{prop:zerobas} nor the quantification unbiasedness Property~\ref{prop:qunbias} is verified for the raw measures $r^2$ and MIC, shown respectively in Figures~\ref{fig:eqr2} and~\ref{fig:eqmic}. Instead, $\mbox{A}r^2$ and AMIC in Figures~\ref{fig:eqar2} and~\ref{fig:eqamic} satisfy both properties: they have a zero baseline and their average value is not biased with regard to the sample size $n$. We claim that these properties improve \emph{interpretability} when quantifying dependency and \emph{enhance equitability} for MIC~\cite{Reshef2011}. \begin{figure*} \centering \subfigure[$r^2$ (raw)]{\label{fig:eqr2}\includegraphics[scale=.5]{figs/EquitabiltyMean_r2}} \subfigure[$\mbox{A}r^2$ (\emph{adjusted})]{\label{fig:eqar2}\includegraphics[scale=.5]{figs/EquitabiltyMean_Ar2}} \subfigure[MIC (raw)]{\label{fig:eqmic}\includegraphics[scale=.5]{figs/EquitabiltyMean_MIC}} \subfigure[AMIC (\emph{adjusted})]{\label{fig:eqamic}\includegraphics[scale=.5]{figs/EquitabiltyMean_AMIC_mean}} \caption{Average value of $r^2$, $\mbox{A}r^2$, MIC, and AMIC for different amounts of noise and different sample sizes $n$. Raw measures show higher values for smaller $n$ on average.
Instead, Property~\ref{prop:qunbias} of unbiasedness with regard to $n$ is empirically verified for \emph{adjusted} measures.}\label{fig:noiseprc} \vspace{-1em} \end{figure*} \section{Adjusting Estimates for Ranking} When the task is ranking dependencies according to their strength, dependencies induced on smaller sample sizes $n$ or on variables with more categories are more likely to be ranked higher, as shown in Example~\ref{ex:gini} for Gini gain. This issue is due to estimates inflated by finite samples. Indeed, $r^2$ and MIC suffer from the same problem. Consider this experiment: we generate five samples $\mathcal{S}_n$ with $n = [20,40,60,80,100]$ to simulate different amounts of missing values for a joint distribution $(X,Y)$ where $X$ and $Y$ are independent. For each sample, we compute $r^2(\mathcal{S}_n|X,Y)$, select the $\mathcal{S}_n$ that achieves the highest value, and iterate this process 10,000 times. Given that the population value $\rho^2(X,Y) = 0$ for all samples, all samples should have an equal chance of maximizing $r^2$. However, Figure~\ref{fig:rhoWinEx} shows that $\mathcal{S}_{20}$ is more likely to maximize $r^2$. \emph{This implies that dependencies estimated on samples with missing values are more likely to be ranked higher in terms of strength.} \begin{figure}[h] \centering \includegraphics[scale=.7]{figs/rhoWin} \caption{Probability of selecting the sample $\mathcal{S}_n$ with $n = [20,40,60,80,100]$ according to $r^2(\mathcal{S}_n|X,Y)$, fixing the population value $\rho^2(X,Y) = 0$. The relationship with $n=20$ is more likely to be ranked higher.} \label{fig:rhoWinEx} \vspace{-.5em} \end{figure} \noindent We would like dependencies that share the same population value for $\mathcal{D}$ to have the same chance of maximizing the dependency estimate $\hat{\mathcal{D}}$ even when estimated on different samples.
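The selection experiment just described can be reproduced in a few lines; a sketch in pure Python (with fewer iterations than the 10,000 used above):

```python
import random

def pearson_r2(xs, ys):
    # Squared Pearson correlation; returns 0 for degenerate input.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy * sxy / (sxx * syy) if sxx * syy > 0 else 0.0

def selection_frequencies(sizes, trials, rng):
    # Each trial draws one independent sample per size (so rho^2 = 0
    # for all of them), scores each with r^2, and records which size
    # attains the maximum.
    wins = {n: 0 for n in sizes}
    for _ in range(trials):
        best_n, best = None, -1.0
        for n in sizes:
            xs = [rng.random() for _ in range(n)]
            ys = [rng.random() for _ in range(n)]
            score = pearson_r2(xs, ys)
            if score > best:
                best_n, best = n, score
        wins[best_n] += 1
    return wins

wins = selection_frequencies([20, 40, 60, 80, 100], 1000, random.Random(0))
```

An unbiased procedure would give each size a win rate near $1/5$; in runs like this the smallest sample typically wins most often, consistent with the bias of Figure~\ref{fig:rhoWinEx}.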
More formally: \begin{proper}[Ranking Unbiasedness] \label{prop:rankingunbias} If $\mathcal{D}(X_1,Y_1) = \mathcal{D}(X_2,Y_2) = \cdots = \mathcal{D}(X_K,Y_K)$ then the probability of $\hat{\mathcal{D}}(\mathcal{S}_{n_i}|X_i,Y_i)$ being equal to or greater than any $\hat{\mathcal{D}}(\mathcal{S}_{n_j}|X_j,Y_j)$ is $\frac{1}{K}$ for all $n_i,n_j$, $1\leq i \neq j \leq K$. \end{proper} For example, in Figure~\ref{fig:rhoWinEx} we would like a constant probability of selection equal to $\frac{1}{5}= 0.20$. Property~\ref{prop:rankingunbias} is useful to achieve \emph{higher accuracy} when the task is identifying the pair of variables that shows the strongest relationship. Biases in ranking are well known in the decision tree community~\cite{Dobra01}, as shown in Example~\ref{ex:gini}. Distributional properties of the raw dependency measure have to be employed to adjust for biases in ranking. For example, rankings according to $p$-values or standardized measures are possible solutions~\cite{Dobra01,Romano2014}. Both quantify whether the estimate $\hat{\mathcal{D}}$ is statistically significant. Here we extend the standardization technique to any dependency measure estimate $\hat{\mathcal{D}}$ in order to employ it for unbiased ranking: \begin{definition}[Standardization for Ranking] \label{def:std} \[ \mbox{\emph{S}}\hat{\mathcal{D}}(\mathcal{S}_n|X,Y) \triangleq \frac{\hat{\mathcal{D}}(\mathcal{S}_n|X,Y) - E[\hat{\mathcal{D}}_0(\mathcal{S}_n|X,Y)]}{\sqrt{\mbox{\emph{Var}}(\hat{\mathcal{D}}_0(\mathcal{S}_n|X,Y))}} \] is the standardized $\hat{\mathcal{D}}(\mathcal{S}_n|X,Y)$, where $E[\hat{\mathcal{D}}_0(\mathcal{S}_n|X,Y)]$ and $\mbox{\emph{Var}}(\hat{\mathcal{D}}_0(\mathcal{S}_n|X,Y))$ are, respectively, the expected value and the variance of $\hat{\mathcal{D}}$ under the null. \end{definition} Nonetheless, it is very difficult to satisfy the ranking unbiasedness Property~\ref{prop:rankingunbias} with $\mbox{S}\hat{\mathcal{D}}$ alone.
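When the null moments in Definition~\ref{def:std} are not available in closed form, they can be estimated by Monte Carlo permutations, as we do for MIC; a sketch for a generic plug-in estimator (pure Python, names ours):

```python
import math
import random

def standardize(measure, xs, ys, perms=200, rng=None):
    # Standardization for Ranking: subtract the null mean and divide by
    # the null standard deviation, both estimated by permuting ys so as
    # to break any dependence on xs.
    rng = rng or random.Random(0)
    ys_perm = list(ys)
    null = []
    for _ in range(perms):
        rng.shuffle(ys_perm)
        null.append(measure(xs, ys_perm))
    mean0 = sum(null) / perms
    var0 = sum((v - mean0) ** 2 for v in null) / (perms - 1)
    return (measure(xs, ys) - mean0) / math.sqrt(var0)
```

A strongly dependent pair then receives a much larger standardized score than an independent one.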
Therefore we also define an adjustment to dependency measures whose bias can be tuned according to a parameter $\alpha$. This is particularly useful when $\alpha$ can be tuned with cross-validation, e.g.\ in random forests. \begin{definition}[Adjustment for Ranking] \[ \mbox{\emph{A}}\hat{\mathcal{D}}(\mathcal{S}_n|X,Y)(\alpha) \triangleq \hat{\mathcal{D}}(\mathcal{S}_n|X,Y) - q_0(1-\alpha) \] is the adjustment at level $\alpha \in (0,1]$ of $\hat{\mathcal{D}}(\mathcal{S}_n|X,Y)$, where $q_0(1-\alpha)$ is the $(1-\alpha)$-quantile of $\hat{\mathcal{D}}_0(\mathcal{S}_n|X,Y)$ under the null: i.e., $P\Big(\hat{\mathcal{D}}_0(\mathcal{S}_n|X,Y) \leq q_0(1-\alpha) \Big) = 1 - \alpha$. \end{definition} At a fixed significance level $\alpha$, the quantile $q_0(1-\alpha)$ induces more penalization when the estimate is not statistically significant. With regard to Example~\ref{ex:gini}, fixing $\alpha = 0.05$, we penalize the variables $X_1$ and $X_2$ by $q_0(0.95)$ equal to 0.036 and 0.053, respectively. The latter variable gets penalized more because, having more categories, it is less statistically significant. In contrast, $\mbox{S}\hat{\mathcal{D}}$ fixes the amount of penalization based on statistical significance and does not allow the bias to be tuned during ranking. In the next section we aim to show the shortcomings of raw and standardized measures for ranking tasks. \begin{figure*} \centering \subfigure[$X$ independent of $Y$ ($\rho^2 = 0$).]{\label{fig:100}\includegraphics[scale=.7]{figs/rhoWinMulti}} \subfigure[$X$ linearly related to $Y$ with 10\% white noise ($\rho^2 > 0$).]{\label{fig:10}\includegraphics[scale=.7]{figs/rhoWinMultiNo100}} \caption{Probability of selecting the sample $\mathcal{S}_n$ induced on $n = [20,40,60,80,100]$ according to adjusted measures: $\mbox{S}r^2$ satisfies the ranking unbiasedness Property~\ref{prop:rankingunbias} when $\rho^2 = 0$ but not when $\rho^2 > 0$.
All measures are biased in the latter case: it is difficult to satisfy Property~\ref{prop:rankingunbias} in general.}\label{fig:selbias} \vspace{-1em} \end{figure*} \subsection{Ranking Biases of Raw and Standardized Measures.} We use $r^2$ and its adjusted versions in a case study: $\mbox{A}r^2$ is defined as per Eq.~\eqref{eq:ar2}, $\mbox{A}r^2(\alpha) = r^2 - q_0(1-\alpha)$ where $q_0(1-\alpha)$ is computed with the Beta distribution (see Section~\ref{sec:willadjquant}), and the standardized $r^2$ is defined as: \begin{equation} \mbox{S}r^2(\mathcal{S}_n|X,Y) = \frac{r^2(\mathcal{S}_n|X,Y) - \frac{1}{n-1}}{\sqrt{\frac{2(n-2)}{(n-1)^2(n+1)}}} \end{equation} We do not evaluate $p$-values because their use is equivalent to the use of standardized measures, which are also much easier to compute. We perform experiments similar to those in the previous section: we fix the population value for a dependency and compute estimates on different samples $\mathcal{S}_n$ to compute their probability of selection. We select samples according to $r^2$, $\mbox{A}r^2$, and $\mbox{S}r^2$. Figure~\ref{fig:100} shows the probability of selection of different samples at fixed population value $\rho^2 = 0$. We can clearly see that the ranking unbiasedness Property~\ref{prop:rankingunbias} is satisfied if we use $\mbox{S}r^2$ (top plot). On the other hand, the sole adjustment for quantification $\mbox{A}r^2$ is not enough to remove the $r^2$ bias towards small $n$. Nonetheless, Figure~\ref{fig:10} shows that if we generate a linear relationship between $X$ and $Y$ with 10\% white noise (i.e.,\ $\rho^2$ is fixed to a value greater than $0$), $\mbox{S}r^2$ is biased towards big $n$. This is because statistically significant relationships are preferred. This phenomenon might have been overlooked in the decision tree community~\cite{Strobl2007Gini,Frank98}.
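This closed-form standardization is cheap to evaluate, since both null moments come from the Beta$\big(\frac{1}{2},\frac{n-2}{2}\big)$ distribution; a minimal sketch in pure Python:

```python
import math

def standardized_r2(r2, n):
    # S r^2 = (r^2 - 1/(n-1)) / sqrt(2(n-2) / ((n-1)^2 (n+1))),
    # using the exact mean and variance of r^2_0 under the null.
    mean0 = 1.0 / (n - 1)
    var0 = 2.0 * (n - 2) / ((n - 1) ** 2 * (n + 1))
    return (r2 - mean0) / math.sqrt(var0)
```

For $n = 30$, an estimate at the null mean standardizes to zero, while a perfect fit $r^2 = 1$ standardizes to roughly 21 null standard deviations.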
Given that it is difficult to satisfy the ranking unbiasedness Property~\ref{prop:rankingunbias} in general, we show how $\alpha$ in our adjustment $\mbox{A}r^2(\alpha)$ might be used to tune the bias where possible. Figure~\ref{fig:biasalpha} shows that with a big $\alpha$ ($\alpha \approx 0.4$) relationships on small $n$ have a higher probability of being selected. On the other hand, a small $\alpha$ ($\alpha \approx 0.05$) tunes the bias towards larger sample sizes $n$. \begin{figure}[h] \centering \includegraphics[scale=.7]{figs/rhoWinMultiAlpha} \caption{Probability of selection of a sample $\mathcal{S}_n$ when $X$ is linearly related to $Y$ with 10\% white noise using $\mbox{A}r^2(\alpha)$: $\alpha$ tunes the bias towards small $n$ with a big $\alpha$ (bottom plot) or big $n$ with a small $\alpha$ (top plot).} \label{fig:biasalpha} \vspace{-1.5em} \end{figure} On a real ranking task, it is reasonable to rank according to $\mbox{A}r^2(\alpha)$ and see how the ranking changes with $\alpha$, rather than relying on a single ranking based on biased measures such as $r^2$, $\mbox{A}r^2$, or $\mbox{S}r^2$. The best value for $\alpha$ can be chosen by cross-validation when possible. Similar conclusions can be drawn for MIC and its adjusted versions. \subsection{Experiments with Pearson Correlation and MIC.} MIC and $r^2$ have been used in~\cite{Reshef2011} to identify the strongest related pair of socio-economic variables using the WHO dataset. This dataset is a collection of $m = 357$ variables for $n = 201$ countries. Some of the variables have a high percentage of missing values and are available for far fewer than $n = 201$ samples. In this section, we aim to alert the users of MIC and $r^2$ to ranking biases for relationships induced on different sample sizes $n$. We conduct an experiment: we choose a reference socio-economic variable $Y$ and select the top related variable according to $r^2$ and its adjusted versions.
Then, we estimate the dependency between two variables based on the data points available for both $X$ and $Y$. We only consider dependencies estimated on $n \geq 10$ data points. Figure~\ref{fig:exrho} shows the top-most dependent variable $X$ to $Y =$``\emph{Breast cancer number of female deaths}'' using $r^2$, $\mbox{A}r^2$, $\mbox{S}r^2$, and $\mbox{A}r^2(\alpha = 0.1)$. \begin{figure*} \centering \includegraphics[scale=.48]{figs/Example_Rho} \caption{Plot of the top-most dependent variable $X$ to $Y =$``\emph{Breast cancer number of female deaths}'' according to different adjustments of $r^2$. $r^2$ and $\mbox{A}r^2$ favor relationships on small $n$. S$r^2$ and A$r^2(\alpha =0.1)$ penalize relationships on small $n$ and select more reasonably $X =$``\emph{Breast cancer number of new female cases}''.}\label{fig:exrho} \vspace{-1em} \end{figure*} The top-most dependent variable according to $r^2$ and $\mbox{A}r^2$ is $X =$``\emph{Aid given}'', which quantifies the amount of aid given to poor countries in million US\$. Instead, $\mbox{S}r^2$ and $\mbox{A}r^2(\alpha = 0.1)$ identify $X =$``\emph{Breast cancer number of new female cases}'', which seems a more reasonable choice given that the number of deaths might be correlated with new cancer cases. Indeed, as seen in the previous section, $r^2$ and $\mbox{A}r^2$ favor variables induced on small $n$. Moreover, from the plot in Figure~\ref{fig:exrho} we see that they are very sensitive to extreme values or outliers: e.g.\ the United States shows a very high number of deaths due to breast cancer ($\approx 43{,}000$ in a year) and a very high amount of aid given ($\approx 20$ billion US\$); this increases the chance of a high $r^2$ or $\mbox{A}r^2$. MIC is even more inclined to select variables induced on small $n$.
For example, we see in Figure~\ref{fig:exmic} that if we target $Y =$``\emph{Maternal mortality}'', which quantifies the number of female deaths during pregnancy (out of 100,000 live births), and we choose MIC or AMIC to identify the top dependent variable, we get $X =$``\emph{Oil consumption per person}'' (tonnes per year). \begin{figure*} \centering \includegraphics[scale=.48]{figs/Example_MIC} \caption{Plot of the top-most dependent variable $X$ to $Y =$``\emph{Maternal Mortality}'' according to different adjustments of MIC. MIC and AMIC are biased towards small $n$. SMIC and AMIC$(\alpha = 0.1)$ select more reasonably either $X =$``\emph{Years of life lost to communicable diseases}'' or $X =$``\emph{Years of life lost to non-communicable diseases}''.}\label{fig:exmic} \vspace{-1em} \end{figure*} There seems to exist an inversely proportional relationship between $X$ and $Y$, possibly due to the common cause of overall economic development, but it is difficult to argue in favor of the amount of oil/energy consumption per person as the most dependent variable to maternal mortality. We also identified the top variables according to SMIC and AMIC$(\alpha = 0.01)$ computed with 10,000 Monte Carlo permutations. More specifically, $\mbox{SMIC} = \frac{\mbox{MIC} - \mbox{EMIC}_0}{\mbox{SDMIC}_0}$, where SDMIC$_0$ is the unbiased estimator of the standard deviation of MIC permutations; and AMIC$(\alpha) = \mbox{MIC} - q_0(1-\alpha)$, where $q_0(1-\alpha)$ is the $\lceil (1- \alpha) \cdot S \rceil$-th MIC value from the sorted list of $S$ MIC permutations in ascending order (see Appendix~\ref{app:nullmic} for more details). The top variables according to SMIC and AMIC$(\alpha = 0.01)$ are instead variables related to communicable/non-communicable (infectious/non-infectious) diseases, which are more intuitively related to mortality. Table~\ref{tbl:avgsize} shows the average sample size $n$ for the chosen top variables with different adjustments.
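Given the $S$ permuted values $\mbox{MIC}_0^{(s)}$, both SMIC and AMIC$(\alpha)$ reduce to simple moments and order statistics; a sketch in pure Python (the list of null values below is purely illustrative, since real permutations require a MIC estimator):

```python
import math

def smic(mic, null_mics):
    # SMIC = (MIC - EMIC_0) / SDMIC_0, with EMIC_0 the sample mean and
    # SDMIC_0 the unbiased standard deviation of the permuted values.
    s = len(null_mics)
    emic0 = sum(null_mics) / s
    sd0 = math.sqrt(sum((v - emic0) ** 2 for v in null_mics) / (s - 1))
    return (mic - emic0) / sd0

def amic_alpha(mic, null_mics, alpha):
    # AMIC(alpha) = MIC - q_0(1 - alpha), where q_0(1 - alpha) is the
    # ceil((1 - alpha) * S)-th value of the sorted null statistics.
    s = len(null_mics)
    q0 = sorted(null_mics)[math.ceil((1.0 - alpha) * s) - 1]
    return mic - q0

null = [0.20, 0.22, 0.25, 0.26, 0.27, 0.28, 0.30, 0.31, 0.33, 0.40]
```

A smaller $\alpha$ picks a higher null quantile and therefore penalizes the raw value more.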
The user of dependency measures should be aware of the bias of raw dependency estimators $\hat{\mathcal{D}}$ towards small $n$ and explore the results of their adjusted versions $\mbox{S}\hat{\mathcal{D}}$ and $\mbox{A}\hat{\mathcal{D}}(\alpha)$ when ranking. Ultimately, the latter can be chosen to tune the bias towards smaller $n$ (big $\alpha$) or big $n$ (small $\alpha$). \begin{table} \centering \small \caption{Average sample size $n$ for the top relationships in the WHO dataset. The raw estimator of a dependency measure $\hat{\mathcal{D}}$ favors relationships on small $n$. Instead, its standardized version S$\hat{\mathcal{D}}$ favors big $n$. With A$\hat{\mathcal{D}}(\alpha)$ it is possible to tune the bias towards small $n$ (big $\alpha$) or big $n$ (small $\alpha$).} \label{tbl:avgsize} \begin{tabular}{lll} \toprule \emph{Measure} & $r^2$ & MIC \\ \toprule $\hat{\mathcal{D}}$ & 114.6 (min) & 103.1 (min) \\ $\mbox{A}\hat{\mathcal{D}}$ & 115.1 & 106.9 \\ $\mbox{S}\hat{\mathcal{D}}$ & 133.7 (max) & 131.8 (max) \\ \hline $\mbox{A}\hat{\mathcal{D}}(\alpha = 0.4)$ & 116.2 (min) & 111.7 (min)\\ $\mbox{A}\hat{\mathcal{D}}(\alpha = 0.05)$ & 121.1 & 119.4 \\ $\mbox{A}\hat{\mathcal{D}}(\alpha = 0.1)$ & 120.2 (max) & 117.4 (max)\\ \hline \end{tabular} \vspace{-1.5em} \end{table} \subsection{Experiments with Gini gain in Random Forests.} Splitting criteria are known to be biased towards variables induced on small $n$ or towards categorical variables with many categories. Standardized measures and $p$-values are the state-of-the-art strategy to solve this problem~\cite{Dobra01, Strobl2007Gini, Frank98, Romano2014}. However, we saw that standardized measures are unbiased in ranking only when the population value $\mathcal{D}(X,Y) = 0$, and the user might better tune the bias using the parameter $\alpha$. The optimal $\alpha$ can be found with cross-validation.
Here we use the expected value $E_0[\mbox{Gini}]$ and the variance $\mbox{Var}_0(\mbox{Gini})$ of Gini gain proposed in~\cite{Dobra01} to standardize Gini gain as per Definition~\ref{def:std} (see Appendix~\ref{app:nullgini} for more details). Moreover, we employ them to compute the adjusted Gini gain AGini$(\alpha)$ as follows: \begin{restatable}{prop}{propginialpha} The adjustment for ranking at level $\alpha \in (0,1]$ for \emph{Gini} gain is: \[ \mbox{\emph{AGini}}(\mathcal{S}_n|X,Y)(\alpha) = \mbox{\emph{Gini}}(\mathcal{S}_n|X,Y) - \tilde{q}_0(1 - \alpha) \] where $\tilde{q}_0(1-\alpha)$ is an upper bound for the $(1-\alpha)$-quantile of \emph{Gini} gain equal to: \[E[\text{\emph{Gini}}_0(\mathcal{S}_n|X,Y)] + \sqrt{ \frac{1 - \alpha}{\alpha} \text{\emph{Var}}(\text{\emph{Gini}}_0(\mathcal{S}_n|X,Y))}. \] \end{restatable} \noindent The proof of this upper bound is given in Appendix~\ref{app:nullgini}. We compare WEKA random forests with Gini, SGini, and AGini($\alpha$) as splitting criteria. To our knowledge this is the first time SGini and AGini($\alpha$) have been tested in random forests. The forest is built on 1,000 trees, sampling data without replacement (50\% of the training set records for each tree) so as not to introduce further biases towards categorical variables with many categories~\cite{Strobl2007}. We employed 17 UCI datasets and 2 datasets with many categorical variables studied in~\cite{Altmann2010}. The latter datasets are related to biological classification problems, and some of their variables can take as many categories as the number of amino acids at a given site in a viral protein: e.g.\ in the HIV dataset, there exist variables which can take 21 possible values and induce splits of cardinality 21 in the trees. Table~\ref{tbl:rf} shows the AUC performance of random forests computed with 50 bootstrap 2-fold cross-validations using different splitting criteria. All our adjustments improve on the AUC of the random forest built with Gini.
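The criterion in the proposition above is straightforward to implement once the null moments from \cite{Dobra01} are available; a sketch in pure Python (the quantile bound coincides with Cantelli's one-sided Chebyshev inequality; the numeric moments used in the test are made up for illustration):

```python
import math

def agini(gini, mean0, var0, alpha):
    # AGini(alpha) = Gini - q~_0(1 - alpha), with the distribution-free
    # bound q~_0(1 - alpha) = E[Gini_0] + sqrt((1 - alpha)/alpha * Var(Gini_0)).
    penalty = mean0 + math.sqrt((1.0 - alpha) / alpha * var0)
    return gini - penalty
```

The smaller $\alpha$ is, the larger the penalty, so splits whose null distribution is wide (e.g.\ variables with many categories) are discounted more.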
We fixed $\alpha$ in AGini($\alpha$) to show that using a value of 0.05 or 0.1 on average increases the random forest's AUC: see Figure~\ref{fig:rfavg}. \begin{figure}[h] \centering \includegraphics[scale=.8]{figs/RFAUC} \caption{AUC of random forest varying $\alpha$: with $\alpha = \{0.01,0.05\}$ it achieves the best results on average.}\label{fig:rfavg} \end{figure} Moreover, we also tuned $\alpha$ with cross-validation for the best performance, where small and big $\alpha$ correspond to penalizing variables with large and small numbers of categories, respectively. Indeed, the performance of random forests with AGini$(\alpha)$, with $\alpha$ tuned, is statistically better than that of forests built with Gini according to the 1-sided Wilcoxon signed rank test: $p$-value $=0.0086$. Although the observed effect size is small, it was consistent, and there is no extra computational effort. We strongly believe that adjusted splitting criteria are beneficial given that i) they can be plugged into random forests, where Gini is currently used, to improve classification accuracy on data sets with categorical variables or with missing values; ii) they exhibit the same computational complexity as Gini; and iii) they are easy to implement, in particular much easier than the estimation of their confidence interval with a possibilistic loss function proposed recently~\cite{Serrurier2015}. \begin{table*} \scriptsize \centering \caption{ Random forest AUC using different splitting criteria. A $(+)$, $(=)$, or $(-)$ means statistically greater than, equal to, or smaller than the random forest AUC with Gini gain, according to the 1-sided paired $t$-test at level 0.05.
} \label{tbl:rf} \begin{tabular}{p{1.8cm}HHR{1.7cm}R{1.1cm}R{1.9cm}rrR{1.6cm}R{1.6cm}R{1.6cm}} \toprule Dataset & perc missing & Min Cat & Variable with max number of categories & Number of classes & $m_{\textup{categorical}} + m_{\textup{continuous}} = m$ & $n$ & Gini & SGini & AGini ($\alpha = 0.05$) & AGini($\alpha$) with $\alpha$ tuned \\ \toprule Credit-g & 0.0\% & 2 & 11 & 2 & $13+7=20$ & 1000 & 77.47 & \textbf{78.17} $(+)$ & 77.66 $(=)$ & 78.16 $(+)$ \\ australian & 0.0\% & 2 & 14 & 2 & $8+6=14$ & 690 & 92.59 & 93.09 $(+)$ & 93.02 $(+)$ & \textbf{93.11} $(+)$ \\ bio-promoters & 0.0\% & 4 & 4 & 2 & $57+0=57$ & 106 & 97.03 & 97.29 $(+)$ & 97.41 $(+)$ & \textbf{97.53} $(+)$ \\ flags & 0.0\% & 2 & 14 & 8 & $26+2=28$ & 194 & 90.49 & 91.75 $(+)$ & 91.75 $(+)$ & \textbf{91.83} $(+)$ \\ kr-vs-kp & 0.0\% & 2 & 3 & 2 & $36+0=36$ & 3196 & \textbf{99.86} & \textbf{99.86} $(=)$ & \textbf{99.86} $(=)$ & \textbf{99.86} $(=)$ \\ led7 & 0.0\% & 2 & 2 & 10 & $7+0=7$ & 3200 & \textbf{94.18} & \textbf{94.18} $(=)$ & \textbf{94.18} $(=)$ & \textbf{94.18} $(=)$ \\ lymph & 0.0\% & 2 & 8 & 4 & $15+3=18$ & 148 & 92.91 & \textbf{93.16} $(+)$ & 93.13 $(=)$ & 93.13 $(=)$ \\ mfeat-pixel & 0.0\% & 5 & 7 & 10 & $240+0=240$ & 2000 & 99.58 & 99.63 $(+)$ & \textbf{99.64} $(+)$ & \textbf{99.64} $(+)$ \\ mito & 0.0\% & 2 & 21 & 2 & $23+0=23$ & 175 & \textbf{79.32} & 79.28 $(=)$ & 79.26 $(=)$ & 79.10 $(=)$ \\ monks1 & 0.0\% & 2 & 4 & 2 & $6+0=6$ & 556 & \textbf{99.96} & 99.85 $(-)$ & 97.38 $(-)$ & 99.78 $(-)$ \\ monks2 & 0.0\% & 2 & 4 & 2 & $6+0=6$ & 601 & 64.86 & 70.89 $(+)$ & 77.83 $(+)$ & \textbf{80.72} $(+)$ \\ monks3 & 0.0\% & 2 & 4 & 2 & $6+0=6$ & 554 & 98.73 & \textbf{98.74} $(=)$ & \textbf{98.74} $(=)$ & 98.73 $(=)$ \\ solar-flare & 0.0\% & 2 & 6 & 6 & $11+0=11$ & 323 & 89.17 & \textbf{89.23} $(+)$ & 89.22 $(=)$ & \textbf{89.23} $(+)$ \\ splice & 0.0\% & 4 & 6 & 3 & $60+0=60$ & 3190 & \textbf{99.52} & \textbf{99.52} $(=)$ & \textbf{ 99.52} $(=)$ & \textbf{99.52} $(=)$ \\ steel & 0.0\% & 
2 & 2 & 2 & $6+27=33$ & 1941 & \textbf{99.94} & 99.93 $(-)$ & 99.93 $(-)$ & \textbf{99.94} $(-)$ \\ tae & 0.0\% & 2 & 2 & 3 & $2+3=5$ & 151 & 72.25 & 72.33 $(+)$ & 73.23 $(+)$ & \textbf{73.65} $(+)$ \\ tic-tac-toe & 0.0\% & 3 & 3 & 2 & $9+0=9$ & 958 & 97.83 & 97.93 $(+)$ & \textbf{97.95} $(+)$ & 97.94 $(+)$ \\ \hline c-to-u & 0.0\% & 1 & 5 & 2 & $42+3=45$ & 2694 & \textbf{89.74} & 89.42 $(-)$ & 89.28 $(-)$ & 89.61 $(-)$ \\ HIV & 0.0\% & 1 & 21 & 2 & $1030+0=1030$ & 355 & 84.08 & 89.27 $(+)$ & \textbf{89.58} $(+)$ & \textbf{ 89.58} $(+)$ \\ \hline \multicolumn{8}{r}{$p$-value for the 1-tailed Wilcoxon signed rank test against random forest with Gini} & 0.0114 & 0.0295 & 0.0086 \\ \hline \end{tabular} \vspace*{-1.5em} \end{table*} \section{Conclusion} In this paper we discussed how to adjust dependency measure estimates between two variables $X$ and $Y$ using the null hypothesis of their independence. This is particularly important to achieve \emph{interpretable quantification} of the amount of dependency. For this task, we proposed the quantification adjusted measures $\mbox{A}r^2$ and AMIC. However, quantification adjustment is not enough to achieve \emph{accurate ranking} of dependencies. In particular, it is very difficult to achieve ranking unbiasedness. In this task, the user should explore the possible rankings obtained with standardized and ranking adjusted measures, varying the parameter $\alpha$. We demonstrated that our S$r^2$, A$r^2(\alpha)$, SMIC, and AMIC$(\alpha)$ can be used to obtain more meaningful rankings, and that AGini$(\alpha)$ yields higher accuracy in random forests. The code for our measures, experiments, and supplementary material have been made available online\footnote{\url{https://sites.google.com/site/adjdep/}}. \vspace{-1em} \subsection*{Acknowledgments:} Supported by AWS in Education Grant Award and ARC FT110100112. \vspace{-1em} \bibliographystyle{IEEEtran}
\section{Introduction}\label{intro} For a set of integers $A$ we denote by $A-A$ the set of all differences $a-a'$ with $a$ and $a'$ in $A$, and if $A$ is a finite set we denote its cardinality by $|A|$. S\'{a}rk\"{o}zy \cite{Sarkozy1} proved, by the Hardy-Littlewood method, that if $A$ is a subset of $\{1, \ldots, n\}$ such that $A-A$ does not contain a perfect square, then \[ |A| \ll n(\log_2 n)^{2/3}(\log n)^{-1/3}. \] This estimate was improved by Pintz, Steiger and Szemer\'{e}di \cite{PSS} to \[ |A| \ll n(\log n)^{-(1/12)\log\log\log\log n}. \] This improvement was obtained using the Hardy-Littlewood method together with a combinatorial result concerning sums of rationals. Balog, Pelik\'{a}n, Pintz and Szemer\'{e}di \cite{BPPS}, elucidating the method in \cite{PSS}, proved, for any fixed integer $k \geq 2$, that if $A$ is a subset of $\{1, \ldots , n\}$ such that $A-A$ does not contain a perfect $k$-th power, then \[ |A| \ll_k n(\log n)^{-(1/4)\log\log\log\log n}. \] In the works cited above the following basic property is used: if $s$ is a perfect $k$-th power then so is $q^{k}s$ for every positive integer $q$. This multiplicative property is used in the following fashion: suppose that $B$ is a set of integers and $A = \{c+q^k b : b \in B \}$ for some integers $c$ and $q \geq 1$; if $A-A$ does not contain a perfect $k$-th power, then the same is true for $B-B$. This deduction is the basis of an iteration argument that plays a fundamental r\^{o}le in \cite{BPPS}, \cite{PSS}, and \cite{Sarkozy1}. S\'{a}rk\"{o}zy \cite{Sarkozy3} also considered the set $\mathcal{S}= \{ \, p-1 \, : \, \text{$p$ a prime} \, \}$ of shifted primes, and showed that if $A$ is a subset of $\{1, \ldots , n\}$ such that $A-A$ does not contain an integer from $\mathcal{S}$ then \[ |A| \ll n \frac{(\log \log \log n)^3(\log \log\log \log n)}{(\log \log n)^{2}}.
\] The argument S\'{a}rk\"{o}zy used in \cite{Sarkozy1} cannot be applied directly to the set $\mathcal{S}$ of shifted primes since it does not have a multiplicative property analogous to the one possessed by the set of perfect $k$-th powers. S\'{a}rk\"{o}zy got around this difficulty by not only considering the set $\mathcal{S}$ of shifted primes, but also the sets defined for each positive integer $d$ by \[ \mathcal{S}_d = \left\{ \; \frac{p-1}{d} \; : \; \text{$p$ a prime, $p \equiv 1 \pmod{d}$} \; \right\}. \] In \cite{Sarkozy3} S\'{a}rk\"{o}zy uses an iteration argument based on the following observation. Suppose $B$ is a set of integers and $A = \{c+q b : b \in B \}$ for some integers $c$ and $q \geq 1$, if $A-A$ does not intersect $S_d$ for some positive integer $d$, then $B-B$ does not intersect $\mathcal{S}_{dq}$. In this article we show that the combinatorial argument presented in \cite{BPPS} and \cite{PSS} can be carried out to improve S\'{a}rk\"{o}zy's result on the set $\mathcal{S}$ of shifted primes. We shall prove the following. \begin{maintheorem} Let $n$ be a positive integer and $A$ a subset of $\{1, \ldots , n\}$. If there does not exist a pair of integers $a,a' \in A$ such that $a-a'=p-1$ for some prime $p$, then \[ |A| \ll n\left(\frac{(\log\log\log n)^3(\log\log\log\log n)}{(\log\log n)} \right)^{\log\log\log\log\log n}. \] \end{maintheorem} The set of perfect squares and the set $\mathcal{S}$ of shifted primes are examples of \textit{intersective} sets. To define this class of sets we introduce some notation. Given a set of positive integers $H$ we define $D(H,n)$, for any positive integer $n$, to be the maximal size of a subset $A$ of $\{1, \ldots , n\}$ such that $A-A$ does not intersect $H$. A set of positive integers $H$ is called \textit{intersective} if $D(H,n) = o(n)$. Kamae and Mend\`{e}s France \cite{Kamae_&_MendesFrance} supplied a general criterion for determining if a set of positive integers is intersective. 
From their criterion they deduced the following. \begin{enumerate} \item[(I)] For any fixed integer $a$ the set $\{\, p+a \, : \text{$p$ a prime}, p > -a \}$ is intersective if and only if $a = \pm 1$. \item[(II)] Let $h$ be a nonconstant polynomial with integer coefficients and whose leading coefficient is positive. The set $\{\, h(m)\, : \, m \geq 1, h(m) \geq 1 \,\}$ is intersective if and only if for each positive integer $d$ the modular equation $h(x) \equiv 0 \pmod{d}$ has a solution. \end{enumerate} Let $h$ be a polynomial as in (II) with degree $k \geq 2$ and such that $h(x) \equiv 0 \pmod{d}$ has a solution for every positive integer $d$. The author \cite{Lucier} has shown that if $A$ is a subset of $\{1, \ldots ,n\}$ such that $A-A$ does not intersect $\{\, h(m)\, : \, m \geq 1, h(m) \geq 1 \,\}$, then $|A| \ll n(\log_2 n)^{\mu/(k-1)}(\log n)^{-(k-1)}$, where $\mu = 3$ if $k=2$ and $\mu =2$ if $k \geq 3$. It is possible to improve this result with the method presented in this paper. \section{Preliminary lemmata} In this paper we use the following notations. For a real number $x$ we write $e(x)$ for $e^{2\pi i x}$, and $[x]$ is used to denote the greatest integer less than or equal to $x$. The greatest common divisor of the integers $u$ and $v$ is given by $(u,v)$. Euler's totient function is given, as usual, by $\phi$. For any positive integer $i$ we write $\log_i$ to denote the $i$-th iterated logarithm, that is, $\log_1 n = \log n$ and $\log_i n = \log (\log _{i-1} n)$ for every integer $i \geq 2$. A fundamental r\^{o}le is played by the following relations; for integers $n$ and $r$, with $n$ positive, \[ \sum_{t=0}^{n-1} e(rt/n) = \begin{cases} n & \text{if $n|r$} \\ 0 & \text{if $n \nmid r$} \end{cases}, \quad \quad \int_0^1 e(r\alpha)d\alpha = \begin{cases} 1 & \text{if $r=0$} \\ 0 & \text{if $r \ne 0$} \end{cases}. 
\] Given a subset $A$ of $\{1, \ldots , n\}$ its generating function is given by \[ F(\alpha) = \sum_{a \in A} e(\alpha a), \quad \alpha \in \mathbb{R}. \] Using the relations above we find that \[ \sum_{t = 1}^n |F(t/n)|^2 = n|A|, \quad \quad \int_0^1 |F(\alpha)|^2 d\alpha = |A|. \] Of course, these are particular cases of Parseval's identity. S\'{a}rk\"{o}zy's method in \cite{Sarkozy1} and \cite{Sarkozy3} is based on Roth's work \cite{Roth} on three-term arithmetic progressions in dense sets. Following this method S\'{a}rk\"{o}zy uses a functional inequality to derive his results concerning the set of perfect squares and the set $\mathcal{S}$ of shifted primes. Our approach here uses, like Gowers \cite{Gowers} and Green \cite{Green}, a density increment argument. The next lemma tells us that if the generating function of a finite set $A$ satisfies a certain size constraint, then it must be concentrated along an arithmetic progression. We use this result in Lemma~\ref{crank} to obtain a density increment that we iterate in the final section of the paper to prove the theorem. \begin{lemma}\label{bump} Let $n$ be a positive integer and $A$ a subset of $\{1, \ldots , n\}$ with size $\delta n$. For any real $\alpha$ let $F(\alpha)$ denote the generating function of $A$. Let $q$ be a positive integer and $U$ a positive real number such that $2\pi q U \leq n$. Let $E$ denote the subset of $[0,1]$ defined by \[ E = \left\{ \alpha \in [0,1] : \left|\alpha - \frac{a}{q} \right| \leq \frac{U}{n} \text{ for some $0 \leq a \leq q$} \right\}. \] If $\theta$ is a positive number such that \begin{equation}\label{tonka} \sum_{\substack{ t = 1 \\ t/n \in E}}^{n-1} \left|F(t/n)\right|^2 \geq \theta |A|^2, \end{equation} then there exists an arithmetic progression $P$ in $\{1, \ldots , n\}$ with difference $q$ such that \[ |P| \geq \frac{n}{32\pi q U} \quad \; \text{and} \; \quad |A \cap P| \geq |P| \delta\big(1 + 8^{-1}\theta \big). 
\] \end{lemma} \begin{proof} This closely resembles Lemma 20 in \cite{Lucier} and can be proved in the same manner. \end{proof} We now state a combinatorial result presented by Balog, Pelik\'{a}n, Pintz and Szemer\'{e}di in \cite{BPPS}, the proof of which uses only elementary techniques. It is this result, used in Lemma~\ref{lemma_nonuniform}, that allows us to improve S\'{a}rk\"{o}zy's result on the set $\mathcal{S}$ of shifted primes. \begin{lemma}\label{lemma_CR} Let $K$ and $L$ be positive integers, and let $\tau$ be the maximal value of the divisor function up to $KL$. Let $\mathcal{K}$ be a nonempty subset of rationals such that if $a/k \in \mathcal{K}$ is in lowest terms then $1 \leq a \leq k \leq K$. Suppose that for each $a/k \in \mathcal{K}$ there corresponds a subset of rationals $\mathcal{L}_{a/k}$ such that if $b/l \in \mathcal{L}_{a/k}$ is in lowest terms then $1 \leq b \leq l \leq L$. Suppose further that $B$ and $H$ are positive integers such that \[ |\mathcal{L}_{a/k}| \geq H \quad \text{for all $a/k \in \mathcal{K}$} \] and \[ \left|\left\{ b : \frac{b}{l} \in \bigcup \mathcal{L}_{a/k} \right\}\right| \leq B \quad \text{for all $l \leq L$.} \] Then the size of the set \[ \mathcal{Q} = \left\{ \; \frac{a}{k}+\frac{b}{l} \; : \; \frac{a}{k} \in \mathcal{K}, \, \frac{b}{l} \in \mathcal{L}_{a/k} \; \right\} \] satisfies \[ |\mathcal{Q}| \geq |\mathcal{K}| H\left(\frac{H}{LB\tau^8(1+\log K)}\right). \] \end{lemma} \begin{proof} This is Lemma CR in \cite{BPPS}. \end{proof} \section{Exponential sums over primes} Let $d$ and $n$ denote positive integers. As in \cite{Sarkozy3}, our application of the Hardy-Littlewood method employs exponential sums over numbers from the set $\mathcal{S}_d$ defined in the introduction. For any real number $\alpha$ we set \[ S_{d,n}(\alpha) = \sum_{\substack{ s \in \mathcal{S}_d \\ s \leq n}} \log (ds+1) e(\alpha s). \] In this section we present some estimates related to $S_{d,n}(\alpha)$.
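To fix ideas, consider the case $d=1$: then $\mathcal{S}_1$ consists of those positive integers $s$ for which $s+1$ is prime, and
\[
S_{1,n}(0) = \sum_{p \leq n+1} \log p \sim n
\]
by the prime number theorem. The lemmas below provide information of this kind for general moduli $d$ and for nonzero frequencies $\alpha$.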
Throughout this section we assume $d$ and $n$ satisfy \[ d \leq \log n. \] \begin{lemma}\label{Siegel_Walfisz} For $n$ sufficiently large, \[ S_{d,n}(0) \gg \frac{dn}{\phi(d)}. \] \end{lemma} \begin{proof} By the definition of $\mathcal{S}_d$ we find that \[ S_{d,n}(0) = \sum_{\substack{p \leq dn+1 \\ p \equiv 1 \mod d}} \log p. \] Since $d \leq \log n$ the Siegel-Walfisz theorem says that this sum is asymptotic to $(dn+1)/\phi(d)$, from which the result follows. \end{proof} The next two lemmas provide estimates of $S_{d,n}(\alpha)$ derived by A. S\'{a}rk\"{o}zy. \begin{lemma}\label{majorestimates} Let $a$ and $b$ be integers such that $(a,b)=1$ and $1 \leq b \leq \log n$. There exists a positive real number $c$ such that if $\alpha$ is a real number that satisfies \begin{equation*} \left|\alpha - \frac{a}{b}\right| \leq \frac{\exp(c(\log n)^{1/2})}{n}, \end{equation*} and $n$ is sufficiently large, then \begin{equation*} \left| S_{d,n}\left( \alpha \right) \right| < \frac{dn}{\phi(d)\phi(b)}, \end{equation*} furthermore, if $\alpha \ne a/b$ then \begin{equation*} \left| S_{d,n}\left( \alpha \right) \right| < \frac{d}{\phi(d)\phi(b)}\left|\alpha - \frac{a}{b}\right|^{-1}. \end{equation*} \end{lemma} \begin{proof} This is a restatement of Lemma 5 from \cite{Sarkozy3}. \end{proof} Let $R$ denote a real number that satisfies \begin{equation} 3 \leq R \leq \log n. \end{equation} For integers $a$ and $b$ such that $(a,b)=1$ and $0 \leq a \leq b \leq R$ we set \begin{equation}\label{major_arc} \mathfrak{M}(b,a) = \left\{ \alpha \in [0,1] : \left\vert \alpha - \frac{a}{b}\right\vert \leq \frac{R}{n \log\log R} \right\}. \end{equation} Let $\mathfrak{m}$ denote the set of real numbers $\alpha$ for which there do not exist integers $a$ and $b$ such that $(a,b)=1$, $1 \leq b < R$, and $\alpha \in \mathfrak{M}(b,a)$. \begin{lemma}\label{minorestimate} For $\alpha \in \mathfrak{m}$ and large $n$, \begin{equation} S_{d,n}(\alpha) \ll \frac{dn}{\phi(d)} \cdot \frac{\log \log R}{R}.
\end{equation} \end{lemma} \begin{proof} This is a restatement of Lemma 9 from \cite{Sarkozy3}. \end{proof} \begin{lemma}\label{sumovermajorarc} Let $a$ and $b$ be integers such that $0 \leq a \leq b \leq R$ and $(a,b)=1$. Then for $n$ sufficiently large \[ \sum_{t/n \in \mathfrak{M}(b,a)} \left|S_{d,n}\left(t/n\right)\right| \ll \frac{dn}{\phi(d)\phi(b)}\log R. \] \end{lemma} \begin{proof} Suppose that $t/n \in \mathfrak{M}(b,a)$. Then \begin{equation*} \left|\frac{t}{n}-\frac{a}{b}\right| \leq \frac{R}{n\log\log R} \leq \frac{\log n}{n}, \end{equation*} and since $b \leq R \leq \log n$ we can, for large enough $n$, apply Lemma~\ref{majorestimates} with $\alpha$ replaced by $t/n$. Let $u$ and $v$ be integers such that \[ \frac{u}{n} < \frac{a}{b} < \frac{v}{n}, \quad \; v-u =2. \] Applying Lemma~\ref{majorestimates} we obtain \[ \sum_{\substack{t/n \in \mathfrak{M}(b,a) \\ u/n \leq t/n \leq v/n}} \left|S_{d,n}\left(t/n\right)\right| \ll \frac{dn}{\phi(d)\phi(b)}. \] For $t/n \in \mathfrak{M}(b,a)$ with $t/n < u/n$, Lemma~\ref{majorestimates} implies \[ \left|S_{d,n}\left(t/n\right)\right| \ll \frac{d}{\phi(d)\phi(b)}\left| \frac{t}{n} - \frac{a}{b}\right|^{-1} \ll \frac{d}{\phi(d)\phi(b)}\left| \frac{t}{n} - \frac{u}{n}\right|^{-1}. \] Therefore \begin{align*} \sum_{\substack{t/n \in \mathfrak{M}(b,a) \\ t/n < u/n}} \left|S_{d,n}\left(t/n\right)\right| &\ll \frac{dn}{\phi(d)\phi(b)}\sum_{\substack{t/n \in \mathfrak{M}(b,a) \\ t/n < u/n}}\frac{1}{|t-u|} \\ &\ll \frac{dn}{\phi(d)\phi(b)}\sum_{1 \leq m \leq R/\log\log R} \frac{1}{m} \ll \frac{dn}{\phi(d)\phi(b)}\log R. \end{align*} Similarly \[ \sum_{\substack{t/n \in \mathfrak{M}(b,a) \\ v/n < t/n}} \left|S_{d,n}\left(t/n\right)\right| \ll \frac{dn}{\phi(d)\phi(b)}\log R. \] The result follows. \end{proof} A multiplicative arithmetic function $f$ is called strongly multiplicative if $f(p^k) = f(p)$ for every prime $p$ and positive integer $k$. 
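For example, the function $f(m) = (m/\phi(m))^2$, which appears in the proof of Lemma~\ref{sumfourthpowers} below, is strongly multiplicative: since $\phi(p^k) = p^{k-1}(p-1)$ for every prime $p$ and positive integer $k$,
\[
f(p^k) = \left(\frac{p^k}{\phi(p^k)}\right)^2 = \left(\frac{p}{p-1}\right)^2 = f(p).
\]
Note also that $f(m) \geq 1$ for every positive integer $m$ and that $f(p) = (1-p^{-1})^{-2} = 1 + O(p^{-1})$.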
The next lemma contains a standard deduction on the average order over arithmetic progressions for certain strongly multiplicative arithmetic functions. \begin{lemma}\label{average_order} Let $x$ be a real number such that $x \geq 1$, and let $d$ and $r$ be positive integers. If $f$ is a strongly multiplicative arithmetic function such that $f(m) \geq 1$ for every positive integer $m$ and $f(p) = 1 + O(p^{-1})$, then \[ \sum_{\substack{m \leq x \\ m \equiv r \mod{d}}} f(m) \ll f((r,d))\frac{x}{d}. \] \end{lemma} \begin{proof} Let $g$ be the arithmetic function defined by \[ g(m) = \sum_{k|m}\mu\left(\frac{m}{k}\right)f(k), \] where $\mu$ is the M\"{o}bius function. Using the fact that $f$ is strongly multiplicative we deduce that \[ g(m) = \mu(m)^2 \prod_{p | m} (f(p) -1). \] Since $f(m) \geq 1$ for every positive integer $m$ it follows that $g$ is a non-negative valued arithmetic function. By the M\"{o}bius inversion formula $f(m) = \sum_{k|m}g(k)$, therefore \begin{equation*} \sum_{\substack{m \leq x \\ m \equiv r \mod{d} }} f(m) = \sum_{\substack{m \leq x \\ m \equiv r \mod{d}}} \sum_{k |m} g(k) = \sum_{k \leq x} g(k) \sum_{\substack{m \leq x \\ m \equiv r \mod{d} \\ m \equiv 0 \mod{k}}} 1. \end{equation*} The last sum above is zero if $(k,d) \nmid r$ and at most $x(d,k)/(dk)$ if $(k,d)|r$. This implies, since $g$ is a non-negative valued function, that \begin{align*} \sum_{\substack{m \leq x \\ m \equiv r \mod{d} }} f(m) &\leq \frac{x}{d} \sum_{\substack{k \leq x \\ (k,d)|r}} \frac{g(k)(k,d)}{k} = \frac{x}{d}\sum_{s|(r,d)} s \sum_{\substack{k \leq x \\ (k,d)=s}} \frac{g(k)}{k} \\ &= \frac{x}{d} \sum_{s|(r,d)} \sum_{\substack{l \leq x/s \\ (l,d/s)=1}} \frac{g(sl)}{l}.
\end{align*} For positive integers $u$ and $v$ it can be verified that $g(uv) \leq g(u)g(v)$, thus \begin{align*} \sum_{\substack{m \leq x \\ m \equiv r \mod{d} }} f(m) &\leq \frac{x}{d} \sum_{s|(r,d)} g(s)\sum_{l \leq x} \frac{g(l)}{l} \\ &\leq f((r,d))\frac{x}{d} \prod_{p \leq x} \left(1 + \frac{g(p)}{p}\right) \\ &= f((r,d))\frac{x}{d} \prod_{p \leq x} \left(1 + \frac{f(p)-1}{p}\right). \end{align*} Since $f(p) \geq 1$ and $f(p) = 1 + O(p^{-1})$ the previous product is bounded from above by the absolutely convergent infinite product $\prod_{p} (1 + p^{-1}(f(p)-1))$. Therefore \begin{equation*} \sum_{\substack{m \leq x \\ m \equiv r \mod{d} }} f(m) \ll f((r,d))\frac{x}{d}. \end{equation*} \end{proof} The next lemma is analogous to Proposition 11 of Green \cite{Green}. \begin{lemma}\label{sumfourthpowers} We have \begin{equation*} \sum_{t=0}^{n-1} |S_{d,n}(t/n)|^4 \ll \left(\frac{dn}{\phi(d)}\right)^4. \end{equation*} \end{lemma} \begin{proof} By Gallagher's inequality \cite[Lemma 1.2]{Montgomery} we have \[ \sum_{t=0}^{n-1} |S_{d,n}(t/n)|^4 \leq n \int_0^1 |S_{d,n}(\alpha)|^4 d\alpha + 2 \int_0^1 |S_{d,n}(\alpha)^3 S_{d,n}'(\alpha)|d\alpha, \] where $S_{d,n}'(\alpha)$ is the derivative of $S_{d,n}(\alpha)$ with respect to $\alpha$. By H\"{o}lder's inequality \[ \int_0^1 |S_{d,n}(\alpha)^3 S_{d,n}'(\alpha)|d\alpha \leq \left(\int_0^1 |S_{d,n}(\alpha)|^4 d\alpha\right)^{3/4} \left(\int_0^1 |S_{d,n}'(\alpha)|^4 d\alpha\right)^{1/4}. \] Let $r_d(m)$ denote the number of pairs $(p_1,p_2)$ where $p_1$ and $p_2$ are primes such that $p_1,p_2 \equiv 1 \pmod{d}$ and \[ \frac{p_1-1}{d} + \frac{p_2-1}{d} = m. \] By Parseval's identity, \[ \int_0^1 |S_{d,n}(\alpha)|^4 d\alpha \ll (\log n)^4 \sum_{m \leq n} r_d(m)^2 \] and \[ \int_0^1 |S_{d,n}'(\alpha)|^4 d\alpha \ll (n \log n)^4 \sum_{m \leq n} r_d(m)^2 . \] From the above we deduce that \begin{equation}\label{bridge} \sum_{t=0}^{n-1} |S_{d,n}(t/n)|^4 \ll n(\log n)^4 \sum_{m \leq n} r_d(m)^2.
\end{equation} For each positive integer $m$ we have \[ r_d(m) \leq \big|\{ \; p \; : \; 1 < p \leq dm+2, \; p \equiv 1 \mod{d}, \; \text{$dm+2-p$ is a prime} \; \}\big|. \] To bound $r_d(m)$ we apply the combinatorial sieve to estimate the size of the set above. In particular, Corollary 2.4.1 of \cite{Halberstam_&_Richert} implies \[ r_d(m) \ll \prod_{p | d(dm+2)} \left(1 - \frac{1}{p}\right)^{-1} \frac{dm+1}{\phi(d)\log^2((dm+1)/d)}. \] Note that \[ \prod_{p | d(dm+2)} \left(1 - \frac{1}{p}\right)^{-1} \leq \frac{d}{\phi(d)} \left(\frac{dm+2}{\phi(dm+2)}\right), \] therefore \[ r_d(m) \ll \frac{d^2 m}{\phi(d)^2(\log m)^2} \left(\frac{dm+2}{\phi(dm+2)}\right). \] This implies \[ \sum_{m \leq n} r_d(m)^2 \ll \frac{d^4n^2}{\phi(d)^4(\log n)^4} \sum_{\substack{u \leq dn+2 \\ u \equiv 2 \mod{d}}} \left(\frac{u}{\phi(u)} \right)^2. \] Let $f(u) = (u/\phi(u))^2$. It can be verified that $f$ is a strongly multiplicative arithmetic function such that $f(u) \geq 1$ for every positive integer $u$ and $f(p) = 1 + O(p^{-1})$. Thus, we can apply Lemma~\ref{average_order} to obtain \[ \sum_{\substack{u \leq dn+2 \\ u \equiv 2 \mod{d}}} \left(\frac{u}{\phi(u)} \right)^2 \ll n. \] Therefore \[ \sum_{m \leq n} r_d(m)^2 \ll \frac{d^2 n^3}{\phi(d)^2 (\log n)^4}, \] and thus, on account of (\ref{bridge}), the result follows. \end{proof} \section{A density increment} Throughout this section $n$ denotes a positive integer and $A$ a subset of $\{1, \ldots , n\}$. For any real $\alpha$ we set \[ F(\alpha) = \sum_{a \in A} e(\alpha a) , \quad \quad F_1(\alpha) = \sum_{\substack{a \in A \\ a \leq n/2}} e(\alpha a). \] We denote by $C_1$ a fixed positive constant. This constant will be used throughout the rest of the paper. We will need $C_1$ to be sufficiently large, but it should be noted that the size of $C_1$ will never be determined by $n$ or $A$. Let $\delta$ denote the density of $A$, that is, $|A| = \delta n$. The following parameters are defined in terms of $C_1$ and $\delta$.
\begin{equation}\label{definition_of_R} R(\delta) = (C_1\delta^{-1})^{(\log\log C_1\delta^{-1})^{7/8}}, \end{equation} \begin{equation}\label{definition_of_theta} \theta(\delta) = (C_1 \delta^{-1})^{-4(\log\log\log C_1 \delta^{-1})^{-1}}, \end{equation} \begin{equation}\label{definition_of_Q1} Q_1 = (C_1\delta^{-1})^{(\log \log C_1\delta^{-1})^{1/8}}, \end{equation} \begin{equation}\label{definition_of_Lambda} \Lambda = \left[ \frac{3}{4}\log\log\log C_1\delta^{-1} \right]. \end{equation} With $R=R(\delta)$ we let $\mathfrak{M}(q,a)$ be defined as in (\ref{major_arc}), and for any positive integer $q \leq R$ we set \[ \mathfrak{M}(q) = \bigcup_{\substack{a=0 \\ (a,q)=1}}^q \mathfrak{M}(q,a). \] \begin{lemma}\label{lemma_nonuniform} Let $d$ be a positive integer such that $d \leq \log n$. Suppose that $A-A$ does not intersect $\mathcal{S}_d$ and that \begin{equation}\label{size_constraint} C_1\delta^{-1} \leq e^{(\log\log n)^{1/2}}. \end{equation} Provided $C_1$ and $n$ are sufficiently large there exists a positive integer $q \leq R(\delta)$ such that \begin{equation}\label{arith_nonuniform} \sum_{\substack{t=1 \\ t/n \in \mathfrak{M}(q)}}^{n-1} \left|F\left(t/n\right)\right|^2 \geq \theta(\delta) |A|^2. \end{equation} \end{lemma} \begin{proof} Here we adopt the method used in \cite{BPPS}. Given any positive integer $\lambda$ we make the following definitions. For integers $a$ and $k$, with $k \geq 1$, we define \[ \mathfrak{M}_\lambda(k,a) = \left\{ \alpha \in [0,1] : \left\vert \alpha - \frac{a}{k}\right\vert \leq \frac{\lambda R}{n \log\log R} \right\}, \] and for real numbers $K,U \geq 1$ we define \[ \mathcal{P}_\lambda(K,U) = \left\{ \frac{a}{k} : 1 \leq a \leq k \leq K, (a,k)=1, \max_{t/n \in \mathfrak{M}_\lambda(k,a)}\left| F_1(t/n) \right| \geq |A|/U \right\}.
\] Furthermore, we set \begin{equation}\label{Ql} Q_{\lambda} = Q_1^{2^\lambda -1} \end{equation} and \[ \mu_\lambda = \max_{\substack{1 \leq K \leq Q_\lambda \\ 1 \leq U}} \frac{|\mathcal{P}_\lambda(K,U)|}{U^2}. \] Let $K_\lambda$ and $U_\lambda$ denote a pair for which $\mu_\lambda$ attains its maximum. As $K=U=1$ is considered in the definition of $\mu_\lambda$ we have \begin{equation}\label{mu_ineq1} 1 \leq \mu_{\lambda} \leq \frac{K_\lambda^2}{U_\lambda^2}. \end{equation} It follows that \begin{equation}\label{chain} 1 \leq U_{\lambda} \leq K_\lambda \leq Q_\lambda. \end{equation} For each $\lambda \leq \Lambda$ we want the intervals $\mathfrak{M}_\lambda(k,a)$ with $k \leq Q_\lambda$ to be pairwise disjoint. It can be verified that this will happen if \begin{equation}\label{wish} \frac{2\lambda R}{n\log \log R} < \frac{1}{Q_\lambda^2} \quad \quad (\text{for $\lambda \leq \Lambda$}). \end{equation} To show this is true we estimate $\lambda$, $R$, and $Q_\lambda$ for $\lambda \leq \Lambda$. By (\ref{definition_of_Lambda}) and (\ref{size_constraint}) we deduce that \[ \lambda \leq \frac{3}{4}\log\log\log\log n \quad \quad (\text{for $\lambda \leq \Lambda$}). \] By (\ref{definition_of_Lambda}) we find that $2^{\lambda} \leq (\log\log C_1 \delta^{-1})^{3/4}$, and thence by (\ref{definition_of_Q1}) and (\ref{Ql}) we find that \[ \log Q_{\lambda} \leq 2^{\lambda} \log Q_1 \leq (\log \log C_1 \delta^{-1})^{7/8}\log C_1\delta^{-1}. \] By (\ref{definition_of_R}) this implies $\log Q_{\lambda} \leq \log R$, and so \begin{equation}\label{okay} Q_{\lambda} \leq R. \end{equation} By (\ref{definition_of_R}) and (\ref{size_constraint}) we find, for $n$ large enough, that \begin{equation}\label{R_home} 3 \leq R \leq \log n. \end{equation} From the above estimates for $\lambda$, $R$, and $Q_\lambda$ we deduce that (\ref{wish}) holds for sufficiently large $n$.
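Explicitly, since $\lambda \ll \log_4 n$ and $Q_\lambda \leq R \leq \log n$, while $\log\log R \geq \log\log 3 > 0$, we have
\[
\frac{2\lambda R Q_\lambda^2}{n \log\log R} \ll \frac{(\log_4 n)(\log n)^3}{n},
\]
and the right-hand side is less than $1$ once $n$ is sufficiently large, which is exactly (\ref{wish}).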
Therefore, when $\lambda \leq \Lambda$ we have \[ \mu_{\lambda}|A|^2 = |\mathcal{P}_\lambda(K_\lambda,U_\lambda)| \frac{|A|^2}{U_\lambda^2} \leq \sum_{t=0}^{n-1} \left|F_1(t/n)\right|^2 \leq n|A|. \] So \begin{equation}\label{mu_ineq2} \delta \leq \mu_{\lambda}^{-1}. \end{equation} Let us assume, to obtain a contradiction, that \begin{equation}\label{assumption} \sum_{\substack{t=1 \\ t/n \in \mathfrak{M}(q)}}^{n-1} \left|F(t/n)\right|^2 < \theta(\delta) |A|^2 \quad (\text{for all $1 \leq q \leq R$}). \end{equation} By using Lemma~\ref{lemma_CR} and (\ref{assumption}) we will show, provided $C_1$ and $n$ are sufficiently large, that \begin{equation}\label{target} \mu_{\lambda+1} \geq \theta(\delta)^{-1/2} \mu_{\lambda} \quad \text{(for $1 \leq \lambda \leq \Lambda$)}. \end{equation} Assuming for now that (\ref{target}) holds we show how a contradiction is obtained, thus proving that the assumption (\ref{assumption}) is false. Since $\mu_1 \geq 1$, it follows from (\ref{target}) that $\mu_{\Lambda+1} \geq \theta(\delta)^{-(1/2)\Lambda}$, and thus by (\ref{mu_ineq2}) we have \[ \delta \leq \theta(\delta)^{(1/2)\Lambda}. \] We can take $C_1$ to be large enough so that (\ref{definition_of_Lambda}) implies $\Lambda \geq (1/4)\log_3 C_1 \delta^{-1}$, then by (\ref{definition_of_theta}) we find that \[ \delta \leq C_1^{-1}\delta < \delta, \] a contradiction. Therefore (\ref{assumption}) cannot hold for all $1 \leq q \leq R$. We now proceed to show that (\ref{target}) holds. To that end, let us fix $\lambda$ with $1 \leq \lambda \leq \Lambda$. For now we also fix a rational $a/k$ in $\mathcal{P}_\lambda(K_\lambda,U_\lambda)$. We associate with $a/k$ a fraction $u/n \in \mathfrak{M}_\lambda(k,a)$ such that $|F_1(u/n)| \geq |A|/U_\lambda$. Such a $u/n$ exists by the way $a/k$ was chosen. Since $A-A$ contains no integers from $\mathcal{S}_d$ we find that \[ \sum_{t = 0}^{n-1} F_1(u/n+ t/n)F(-t/n)S_{d,n}(t/n) = 0.
\] By the triangle inequality, Lemma~\ref{Siegel_Walfisz}, and the way $u/n$ was chosen we find that \begin{equation}\label{start} \frac{|A|^2}{U_\lambda} \cdot \left(\frac{d n}{\phi(d)} \right) \ll \sum_{t=1}^{n-1} |F_1(u/n+t/n)||F(t/n)||S_{d,n}(t/n)|. \end{equation} Set \begin{equation}\label{definition_of_Y} Y = (C_1\delta^{-1})^{3/2} Q_\lambda^2 \end{equation} and let $\mathcal{N}$ denote the set of $t/n$ such that $|F(t/n)| \leq |A|/Y$. By two applications of the Cauchy-Schwarz inequality, Parseval's identity, and Lemma~\ref{sumfourthpowers} we find that \begin{align*} & \sum_{t/n \in \mathcal{N}} |F_1(u/n+t/n)||F(t/n)||S_{d,n}(t/n)| \\ &\leq \left( \sum_{t=0}^{n-1} |F_1(u/n + t/n)|^2 \right)^{1/2} \left( \sum_{t/n \in \mathcal{N}}|F(t/n)|^4 \right)^{1/4} \left( \sum_{t=0}^{n-1} |S_{d,n}(t/n)|^4 \right)^{1/4} \\ &\ll \frac{dn^{3/2}|A|^{1/2}}{\phi(d)}\left( \sum_{t/n \in \mathcal{N}}|F(t/n)|^4 \right)^{1/4}. \end{align*} Now \begin{align*} \left( \sum_{t/n \in \mathcal{N}}|F(t/n)|^4 \right)^{1/4} &\leq \max_{t/n \in \mathcal{N}} |F(t/n)|^{1/2} \left(\sum_{t=0}^{n-1} |F(t/n)|^2 \right)^{1/4} \\ &\leq \frac{|A|^{1/2}}{Y^{1/2}} (n|A|)^{1/4} = \frac{n^{1/4}|A|^{3/4}}{Y^{1/2}}. \end{align*} Therefore \[ \sum_{t/n \in \mathcal{N}} |F_1(u/n+t/n)||F(t/n)||S_{d,n}(t/n)| \ll \frac{dn^{7/4}|A|^{5/4}}{\phi(d) Y^{1/2}}. \] By (\ref{chain}) and (\ref{definition_of_Y}) we find that \[ Y^{-1/2} = C_1^{-3/4}\delta^{3/4}Q_\lambda^{-1} \leq C_1^{-3/4}|A|^{3/4}n^{-3/4}U_\lambda^{-1}, \] thus \begin{equation}\label{small_contribution_0} \sum_{t/n \in \mathcal{N}} |F_1(u/n+t/n)||F(t/n)||S_{d,n}(t/n)| \ll C_1^{-3/4}\frac{|A|^2}{U_\lambda}\left(\frac{dn}{\phi(d)}\right). \end{equation} Let $\mathcal{N}_1$ denote the set of $t/n$ such that $|F_1(u/n+ t/n)| \leq |A|/Y$.
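In the deduction of (\ref{small_contribution_0}) the first and third factors were handled as follows: since $u/n$ has denominator $n$, the map $t \mapsto t+u$ permutes the residues modulo $n$, so Parseval's identity gives
\[
\sum_{t=0}^{n-1} |F_1(u/n + t/n)|^2 = \sum_{t=0}^{n-1} |F_1(t/n)|^2 \leq n|A|,
\]
while Lemma~\ref{sumfourthpowers} gives $\big( \sum_{t=0}^{n-1} |S_{d,n}(t/n)|^4 \big)^{1/4} \ll dn/\phi(d)$; together these account for the factor $dn^{3/2}|A|^{1/2}/\phi(d)$.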
By the same reasoning used in the deduction of (\ref{small_contribution_0}) we find that \begin{equation}\label{small_contribution_1} \sum_{t/n \in \mathcal{N}_1} |F_1(u/n+t/n)||F(t/n)||S_{d,n}(t/n)| \ll C_1^{-3/4}\frac{|A|^2}{U_\lambda}\left(\frac{dn}{\phi(d)}\right). \end{equation} For $\lambda \leq \Lambda$ we have $Q_{\lambda+1}/Q_\lambda < R$. Indeed, (\ref{definition_of_Lambda}) and (\ref{Ql}) imply \[ \frac{Q_{\lambda+1}}{Q_\lambda} \leq Q_1^{2^\Lambda} \leq (C_1\delta^{-1})^{(\log\log C_1\delta^{-1})^{3/4}} < R. \] Let $\mathfrak{m}^\ast$ denote the union of the $\mathfrak{M}(q)$ with $Q_{\lambda+1}/Q_{\lambda} \leq q \leq R$. By the Cauchy-Schwarz inequality we find that \begin{equation}\label{minortrick1} \sum_{t/n \in \mathfrak{m}^\ast} |F_1(u/n+t/n)||F(t/n)||S_{d,n}(t/n)| \leq (n|A|)\sup_{t/n \in \mathfrak{m}^\ast}|S_{d,n}(t/n)|. \end{equation} We are now going to show that \begin{equation}\label{super} \sup_{t/n \in \mathfrak{m}^\ast}|S_{d,n}(t/n)| \ll C_1^{-1} U_\lambda^{-1} \delta \left(\frac{dn}{\phi(d)}\right). \end{equation} Suppose that $t/n \in \mathfrak{m}^\ast$, then $t/n \in \mathfrak{M}(q,a)$ for some integers $a$ and $q$ such that $0 \leq a \leq q$, $(a,q)=1$, and $Q_{\lambda+1}/Q_{\lambda} \leq q \leq R$. Since $q \leq R \leq \log n$, we deduce from Lemma~\ref{majorestimates} that \[ S_{d,n}(t/n) \ll \frac{dn}{\phi(d)\phi(q)}. \] Using the well-known estimate \begin{equation}\label{phi_estimate} \phi(q) \gg \frac{q}{\log\log q}, \end{equation} (see for example \cite[Theorem 328]{Hardy_&_Wright}), we obtain \begin{equation}\label{v6} S_{d,n}(t/n) \ll \left( \frac{dn}{\phi(d)} \right)\frac{\log\log q}{q}. \end{equation} The lower bound on $q$ implies \begin{equation}\label{v7} \frac{\log\log q}{q} \ll \frac{\log\log Q_{\lambda+1}/Q_{\lambda}}{Q_{\lambda+1}/Q_{\lambda}}.
\end{equation} By (\ref{Ql}) we have $Q_{\lambda+1}/Q_{\lambda} = Q_{\lambda}Q_1 = Q_1^{2^\lambda}$, thus \[ \frac{\log \log Q_{\lambda+1}/Q_{\lambda}}{Q_{\lambda+1}/Q_{\lambda}} = \frac{\log \log Q_1^{2^\lambda}}{Q_{\lambda}Q_1} = \frac{\lambda (\log 2) + \log \log Q_1}{Q_\lambda Q_1} . \] Using (\ref{definition_of_Q1}) and (\ref{definition_of_Lambda}) we find that $\lambda \ll \log \log Q_1$; by this and (\ref{chain}) we obtain \[ \frac{\log \log Q_{\lambda+1}/Q_{\lambda}}{Q_{\lambda+1}/Q_{\lambda}} \ll \frac{\log \log Q_1}{U_\lambda Q_1}. \] Using (\ref{definition_of_Q1}) we find, by taking $C_1$ large enough, that \[ \log \left( \frac{\log \log Q_1}{Q_1} \right) \leq -\log C_1\delta^{-1}, \] and thus \[ \frac{\log \log Q_1}{Q_1} \leq C_1^{-1} \delta. \] From (\ref{v7}) and the subsequent estimates we obtain \begin{equation}\label{v11} \frac{\log\log q}{q} \ll C_1^{-1}U_\lambda^{-1}\delta. \end{equation} Since $t/n \in \mathfrak{m}^\ast$ is arbitrary, (\ref{v6}) and (\ref{v11}) imply that (\ref{super}) is true. By (\ref{minortrick1}) and (\ref{super}) we have \begin{equation}\label{minorstar_contribution} \sum_{t/n \in \mathfrak{m}^\ast} |F_1(u/n+t/n)||F(t/n)||S_{d,n}(t/n)| \ll C_1^{-1}\frac{|A|^2}{U_\lambda}\left(\frac{dn}{\phi(d)}\right). \end{equation} The contribution to the sum in (\ref{start}) coming from the terms with $t/n \in \mathfrak{m}$ can similarly be bounded. By the Cauchy-Schwarz inequality and Lemma~\ref{minorestimate} we find that \begin{align*} \sum_{t/n \in \mathfrak{m}} |F_1(u/n+t/n)||F(t/n)||S_{d,n}(t/n)| &\leq (n|A|)\sup_{t/n \in \mathfrak{m}}|S_{d,n}(t/n)| \\ &\ll (n|A|)\left( \frac{dn}{\phi(d)} \right) \frac{\log\log R}{R}. \end{align*} Since $R \geq Q_{\lambda+1}/Q_\lambda$ the argument used in the previous paragraph implies \begin{equation}\label{minor_contribution} \sum_{t/n \in \mathfrak{m}} |F_1(u/n+t/n)||F(t/n)||S_{d,n}(t/n)| \ll C_1^{-1}\frac{|A|^2}{U_\lambda}\left(\frac{dn}{\phi(d)}\right).
\end{equation} Let $\mathfrak{N}(b,a)$ be the set of $t/n \in \mathfrak{M}(b,a)$ with $t/n \ne 0$ such that \[ |F(t/n)| \geq \frac{|A|}{Y}, \quad \quad |F_1(u/n+t/n)| \geq \frac{|A|}{Y}. \] By (\ref{small_contribution_0}), (\ref{small_contribution_1}), (\ref{minorstar_contribution}), and (\ref{minor_contribution}) it follows for $C_1$ large enough that \begin{align*} & \frac{d|A|^2 n}{\phi(d)U_\lambda} \ll \\ & \sum_{b \leq Q_{\lambda+1}/Q_\lambda} \sum_{(a,b)=1} \max_{t/n \in \mathfrak{N}(b,a)} |F(t/n)| \max_{t/n \in \mathfrak{N}(b,a)} |F_1(u/n+t/n)| \sum_{t/n \in \mathfrak{M}(b,a)} |S_{d,n}(t/n)|. \end{align*} Since $d \leq \log n$ we can apply Lemma~\ref{sumovermajorarc} to the inner sum above to obtain \[ \frac{|A|^2 }{U_\lambda \log R} \ll \sum_{b \leq Q_{\lambda+1}/Q_\lambda} \frac{1}{\phi(b)} \sum_{(a,b)=1} \max_{t/n \in \mathfrak{N}(b,a)} |F(t/n)|\max_{t/n \in \mathfrak{N}(b,a)} |F_1(u/n+t/n)|. \] Let $\mathcal{L}(L,V,W)$ denote the set of reduced fractions $b/l \in [0,1]$ such that \[ \frac{L}{2} \leq l \leq L, \] \[ \frac{|A|}{V} \leq \max_{t/n \in \mathfrak{M}(l,b)}|F(t/n)| \leq 2\frac{|A|}{V}, \] \[ \frac{|A|}{W} \leq \max_{t/n \in \mathfrak{M}(l,b)}|F_1(u/n+t/n)| \leq 2\frac{|A|}{W}. \] For $b/l \in \mathcal{L}(L,V,W)$, we have \[ \frac{1}{\phi(l)} \max_{t/n \in \mathfrak{M}(l,b)} |F(t/n)|\max_{t/n \in \mathfrak{M}(l,b)} |F_1(u/n+t/n)| \ll \frac{(\log\log 3L)|A|^2}{LVW} \] by (\ref{phi_estimate}). Therefore \[ \frac{|A|^2 }{U_\lambda \log R} \ll \sum_{L}\sum_{V} \sum_{W}|\mathcal{L}(L,V,W)|\frac{(\log\log 3L)|A|^2}{LVW}, \] where $L$ runs through all the powers of $2$ in the interval $[1,2Q_{\lambda+1}/Q_\lambda]$, and $V$ and $W$ run through all the powers of $2$ in the interval $[1,2Y]$. There must exist a triple $(L,V,W)$ of such indices for which \[ |\mathcal{L}(L,V,W)| \gg \frac{LVW}{U_\lambda(\log\log 3L)(\log R)}. \] We associate this triple with $a/k$.
The number of possible triples $(L,V,W)$ is $\ll \log (Q_{\lambda+1}/Q_\lambda) (\log Y)^2$, which by (\ref{okay}) and (\ref{definition_of_Y}) is $\ll (\log R)^3$. Therefore there exists a subset $\mathcal{K} \subset \mathcal{P}_\lambda(K_\lambda,U_\lambda)$, satisfying \begin{equation}\label{size1} |\mathcal{K}| \gg \frac{|\mathcal{P}_\lambda(K_\lambda,U_\lambda)|}{(\log R)^3}, \end{equation} such that the same triple, say $(L,V,W)$, is associated with each $a/k \in \mathcal{K}$. Let $a/k \in \mathcal{K}$; then, together with the associated fraction $u/n \in \mathfrak{M}_\lambda(k,a)$, we associate a set $\mathcal{L}_{a/k}$ of rationals $b/l$, $0 \leq b \leq l$, $(b,l)=1$, $L/2 \leq l \leq L$, such that \begin{equation}\label{dm1} |\mathcal{L}_{a/k}| \gg \frac{LVW}{U_\lambda (\log \log 3L )(\log R)}, \end{equation} \begin{equation}\label{dm2} \frac{|A|}{V} \leq \max_{v/n \in \mathfrak{M}(l,b)} |F(v/n)| \leq \frac{2|A|}{V}, \end{equation} \begin{equation}\label{dm3} \frac{|A|}{W} \leq \max_{w/n \in \mathfrak{M}(l,b)} |F_1(u/n+w/n)| \leq \frac{2|A|}{W}. \end{equation} Set \[ \mathcal{Q} = \left\{ \; \frac{a}{k}+\frac{b}{l} \; : \; \frac{a}{k} \in \mathcal{K}, \; \frac{b}{l} \in \mathcal{L}_{a/k} \; \right\}. \] Let us estimate the cardinality of $\mathcal{Q}$. Since $L \leq Q_{\lambda+1}/Q_\lambda \leq R$, assumption (\ref{assumption}) and (\ref{dm2}) imply \[ \left|\left\{ \; b \; : \; \frac{b}{l} \in \bigcup \mathcal{L}_{a/k} \; \right\} \right| \left(\frac{|A|}{V}\right)^2 \leq \sum_{t/n \in \mathfrak{M}(l)}|F(t/n)|^2 \leq \theta(\delta)|A|^2. \] Hence \[ \left|\left\{ \; b \; : \; \frac{b}{l} \in \bigcup \mathcal{L}_{a/k} \; \right\} \right| \ll \theta(\delta) V^2 . \] Lemma~\ref{lemma_CR} then implies \[ |\mathcal{Q}| \gg |\mathcal{K}| \cdot \frac{L^2V^2W^2}{U_\lambda^2 (\log \log 3L)^2 (\log R)^2} \cdot \frac{\theta(\delta)^{-1}}{L V^2 \tau^8 (1+\log K_\lambda)}.
\] From (\ref{chain}) and (\ref{okay}) we obtain $\log K_\lambda \leq \log R$; by this and (\ref{size1}) it follows that \begin{equation}\label{road} |\mathcal{Q}| \gg W^2 \left( \frac{\theta(\delta)^{-1}}{\tau^8(\log R)^6 } \right) \frac{|\mathcal{P}_\lambda(K_\lambda,U_\lambda)|}{U_\lambda^2}. \end{equation} Note that $\mathcal{Q}$ is a subset of $(0,2]$. Let $\mathcal{Q}_1 = \mathcal{Q} \cap (0,1]$ and $\mathcal{Q}_2 = \mathcal{Q} \cap (1,2]$. Let us assume without loss of generality that $|\mathcal{Q}_1| \geq (1/2)|\mathcal{Q}|$. If this is not the case, then $|\mathcal{Q}_2| \geq (1/2)|\mathcal{Q}|$, and we can replace $\mathcal{Q}_1$ in the argument below by the rational numbers in $\mathcal{Q}_2$ shifted to the left by $1$. Since $|\mathcal{Q}_1| \geq (1/2)|\mathcal{Q}|$ we see that (\ref{road}) is still valid with $\mathcal{Q}$ replaced by $\mathcal{Q}_1$. Let $r/s = a/k + b/l$ be in $\mathcal{Q}_1$. For $u/n \in \mathfrak{M}_\lambda(k,a)$ and $w/n \in \mathfrak{M}(l,b)$ we have \[ \left| \frac{r}{s} - \left( \frac{u}{n} + \frac{w}{n} \right) \right| \leq \left| \frac{u}{n} - \frac{a}{k} \right| + \left|\frac{w}{n}- \frac{b}{l} \right| \leq \frac{(\lambda+1)R}{n\log\log R}, \] and therefore $u/n + w/n \in \mathfrak{M}_{\lambda+1}(s,r)$. Thus, by (\ref{dm3}) we deduce that \begin{equation}\label{trans} \max_{t/n \in \mathfrak{M}_{\lambda+1}(s,r)} |F_1(t/n)| \geq \frac{|A|}{W} \quad (\text{for $r/s \in \mathcal{Q}_1$}). \end{equation} We now estimate the size of the denominator of $r/s$. Certainly $s \leq kl \leq K_\lambda L$. By (\ref{chain}) we have $K_\lambda \leq Q_\lambda$ and $L$ was chosen to satisfy $L \leq Q_{\lambda+1}/Q_{\lambda}$. Therefore $s \leq Q_{\lambda +1}$ whenever $r/s \in \mathcal{Q}_1$. By this and (\ref{trans}) we obtain \begin{equation}\label{inside} \mathcal{Q}_1 \subset \mathcal{P}_{\lambda+1}(Q_{\lambda+1},W).
\end{equation} By (\ref{road}), with $\mathcal{Q}$ replaced by $\mathcal{Q}_1$, and (\ref{inside}) we find that \[ \frac{|\mathcal{P}_{\lambda+1}(Q_{\lambda+1},W)|}{W^2} \gg \left( \frac{\theta(\delta)^{-1}}{\tau^8(\log R)^6 } \right) \frac{|\mathcal{P}_\lambda(K_\lambda,U_\lambda)|}{U_\lambda^2}. \] This implies \begin{equation}\label{inc1} \mu_{\lambda+1} \gg \frac{\theta(\delta)^{-1}}{\tau^8 (\log R)^6} \mu_\lambda. \end{equation} We now estimate $\tau$, the maximal value of the divisor function up to $K_\lambda L \leq Q_{\lambda+1}$. If $d(m)$ is the number of divisors of $m$ then \[ \log d(m) \ll \frac{\log m}{\log\log m}, \] (see \cite[Theorem 317]{Hardy_&_Wright}). Thus, by (\ref{Ql}), we have \[ \log \tau \ll \frac{\log Q_{\lambda+1}}{\log \log Q_{\lambda+1}} \ll \frac{2^\lambda \log Q_1}{\log \log Q_1}, \] and since $\lambda \leq \Lambda$ we deduce from (\ref{definition_of_Q1}) and (\ref{definition_of_Lambda}) that \[ \log \tau \ll \frac{\log C_1\delta^{-1}}{(\log\log C_1\delta^{-1})^{1/4}}. \] It follows from (\ref{definition_of_theta}) that \begin{equation}\label{appx1} \log \tau = o(\log \theta(\delta)^{-1}) \quad \quad \text{(for $C_1\delta^{-1} \to \infty$)}. \end{equation} We also find from (\ref{definition_of_R}) and (\ref{definition_of_theta}) that \begin{equation}\label{appx2} \log \log R = o(\log \theta(\delta)^{-1}) \quad \quad \text{(for $C_1\delta^{-1} \to \infty$)}. \end{equation} Since $\theta(\delta)^{-1}$ tends to infinity as $C_1\delta^{-1}$ tends to infinity, we deduce from (\ref{inc1}), (\ref{appx1}), and (\ref{appx2}) that for $C_1$ sufficiently large \[ \mu_{\lambda+1} \geq \theta(\delta)^{-1/2} \mu_\lambda. \] Since $\lambda \leq \Lambda$ was arbitrary, (\ref{target}) is true, and as shown earlier the lemma can be deduced from this. \end{proof} We now derive a density increment argument that will be iterated in the next section to prove our theorem. \begin{lemma}\label{crank} Let $d$ be a positive integer such that $d \leq \log n$.
Suppose that $A-A$ does not intersect $\mathcal{S}_d$ and that $\delta$, the density of $A$, satisfies (\ref{size_constraint}). Provided $C_1$ and $n$ are sufficiently large there exist positive integers $d'$ and $n'$, and a subset $A'$ of $\{1, \ldots ,n'\}$ of size $\delta' n'$, such that $A'-A'$ does not intersect $\mathcal{S}_{d'}$, and moreover, \[ d \leq d' \leq R(\delta)d, \; \quad \; \quad R(\delta)^{-2} n \leq n' \leq n, \] \[ \delta' \geq \delta\big( 1 + 8^{-1} \theta(\delta) \big). \] \end{lemma} \begin{proof} By the hypotheses Lemma~\ref{lemma_nonuniform} implies that there exists a positive integer $q \leq R(\delta)$ such that (\ref{arith_nonuniform}) is true. With this $q$ and $U= R(\delta)/\log\log R(\delta)$ let $E$ be defined as in Lemma~\ref{bump}. Note that $\mathfrak{M}(q) \subset E$. The inequality (\ref{R_home}) is still valid, thus $2\pi q U \ll R(\delta)^2 \leq (\log n)^2$, so that $2\pi q U \leq n$ for sufficiently large $n$. Therefore, we can apply Lemma~\ref{bump} with $\theta = \theta(\delta)$ to deduce that there exists an arithmetic progression $P$ with difference $q$ such that \begin{equation}\label{lu1} |P| \geq \frac{n \log \log R(\delta)}{32 \pi q R(\delta)} \end{equation} and \begin{equation}\label{lu2} |A \cap P| \geq |P|\delta \big(1+ 8^{-1}\theta(\delta)\big). \end{equation} Let $n' = |P|$. Then there exists an integer $c$ and subset $A'$ of $\{1, \ldots , n'\}$ such that $A \cap P = \{ \, c+qa' \, : \, a' \in A' \, \}$. Put $d' = dq$. Since $A-A$ does not intersect $\mathcal{S}_d$, we deduce that $A'-A'$ does not intersect $\mathcal{S}_{dq}$: indeed, if $a_1'-a_2' = s \in \mathcal{S}_{dq}$ for some $a_1',a_2' \in A'$, then $qs \in A-A$ and $d(qs)+1 = (dq)s+1$ is prime, so that $qs \in \mathcal{S}_d$, contrary to hypothesis. Let the size of $A'$ be $\delta' n'$. Then (\ref{lu2}) implies \[ \delta' \geq \delta\big(1+ 8^{-1}\theta(\delta)\big). \] To finish we need to estimate $n'$ and $d'$. Since $q \leq R(\delta)$ we find by (\ref{lu1}) and for $C_1$ large enough that $n' \geq R(\delta)^{-2} n$, and clearly, $n' \leq n$. Now, again by the fact that $q \leq R(\delta)$, we obtain $d \leq d' = dq \leq R(\delta)d$. This completes the proof.
\end{proof} \section{Proof of the Theorem}\label{proof} Let us assume, for a contradiction, that the theorem is false. Then for $C_1$ and $n$ sufficiently large, there exists a subset $A$ of $\{1, \ldots , n\}$ of size $\delta n$, such that $A-A$ does not intersect $\mathcal{S}$ and \begin{equation}\label{hyp} \delta \geq C_1 \left(\frac{\log_2 n}{(\log_3 n)^2(\log_4 n)}\right)^{-\log_5 n}. \end{equation} Set \begin{equation}\label{definition_of_Z} Z = \big[ 64 \, \theta(\delta)^{-1}\log C_1\delta^{-1} \big], \end{equation} and put $d_0 = 1$, $n_0 = n$, $A_0 = A$, and $\delta_0 = \delta$. By using Lemma~\ref{crank} repeatedly we can show that for each integer $k$, with $1 \leq k \leq Z$, there are integers $d_k$ and $n_k$ and a subset $A_k$ of $\{ 1 , \ldots , n_k \}$ of size $\delta_k n_k$ such that $A_k -A_k$ does not intersect $\mathcal{S}_{d_k}$. Moreover, $d_k$, $n_k$, and $\delta_k$ satisfy \[ d_{k-1} \leq d_{k} \leq R(\delta_{k-1})d_{k-1}, \; \quad \; \quad R(\delta_{k-1})^{-2} n_{k-1} \leq n_{k} \leq n_{k-1}, \] \[ \delta_k \geq \delta_{k-1}\big( 1 + 8^{-1} \theta(\delta_{k-1}) \big). \] Since $d_0 =1$ and $n_0=n$, these estimates imply \begin{equation}\label{powertrio} d_k \leq R(\delta)^{k}, \quad n_k \geq R(\delta)^{-2k} n, \quad \delta_k \geq \delta\big( 1 + 8^{-1} \theta(\delta) \big)^k. \end{equation} Let us show that we can actually perform this iteration $Z$ times. Let $0 \leq l \leq Z-1$, and suppose that we have performed this iteration $l$ times. To show that Lemma~\ref{crank} can be applied an $(l+1)$-th time we need to show that $n_l$ is sufficiently large, $d_l \leq \log n_l$, and that (\ref{size_constraint}) is satisfied with $\delta$ replaced by $\delta_l$. We begin by estimating $n_l$. By (\ref{powertrio}) we obtain \begin{equation}\label{estimate_of_nk} \log n_l \geq \log n - 2l \log R(\delta).
\end{equation} Since $l < Z$, (\ref{definition_of_R}) and (\ref{definition_of_Z}) imply \[ l \log R(\delta) \leq 64 \, \theta(\delta)^{-1}(\log C_1\delta^{-1})^2(\log_2 C_1\delta^{-1})^{7/8}. \] By (\ref{hyp}) we obtain \[ (\log C_1\delta^{-1})^2(\log_2 C_1\delta^{-1})^{7/8} \leq 2(\log_3 n)^2 (\log_4 n)^{7/8} (\log_5 n)^2 \] for large enough $n$. By (\ref{definition_of_theta}) and (\ref{hyp}) we find, for $n$ and $C_1$ sufficiently large, that \[ \log \theta(\delta)^{-1} = \frac{4\log C_1\delta^{-1}}{\log_3 C_1 \delta^{-1}} \leq \log \left( \frac{\log_2 n}{(\log_3 n)^2(\log_4 n)} \right). \] (Here we used that $(\log x)(\log_3 x)^{-1}$ is eventually increasing.) Therefore \[ \theta(\delta)^{-1} \leq \frac{\log_2 n}{(\log_3 n)^2(\log_4 n)}. \] From the above we deduce, for $n$ and $C_1$ large enough, that \begin{equation}\label{peq2} l \log R(\delta) \leq \log_2 n. \end{equation} Therefore, by (\ref{estimate_of_nk}), \[ \log n_l \geq \log n - 2\log_2 n = \log \left(\frac{n}{(\log n)^2}\right), \] and so \begin{equation}\label{peq3} n_l \geq \frac{n}{(\log n)^2} \end{equation} for $l < Z$. This shows that by taking $n$ to be arbitrarily large, the same is true for $n_l$. We now show that $d_l \leq \log n_l$. By (\ref{powertrio}) we have $\log d_l \leq l \log R(\delta)$, and thus by (\ref{peq2}) we obtain $\log d_l \leq (1/2)\log_2 n$. For large $n$ this implies \[ d_l \leq (\log n)^{1/2} \leq \log \frac{n}{(\log n)^{2}} \leq \log n_l \] by (\ref{peq3}). We leave it to the reader to verify that (\ref{hyp}) and (\ref{peq3}) imply, for $n$ and $C_1$ sufficiently large, that (\ref{size_constraint}) is satisfied with $\delta$ and $n$ replaced by $\delta_l$ and $n_l$ respectively. Finally, since $A_l - A_l$ does not intersect $\mathcal{S}_{d_l}$ we can apply Lemma~\ref{crank} to obtain the desired outcome. Since (\ref{powertrio}) is true with $k=Z$ we find that \[ \log \delta_Z \geq Z\log \Big(1+ 8^{-1}\theta(\delta)\Big) - \log C_1\delta^{-1}.
\] Since $8^{-1}\theta(\delta) < 1$, this implies \begin{equation}\label{nearly} \log \delta_Z \geq {16}^{-1}Z \theta(\delta) - \log C_1\delta^{-1}. \end{equation} (Here we used $\log(1+x) \geq x/2$ for $0 \leq x \leq 1$.) For $C_1$ large enough $Z \geq 32 \theta(\delta)^{-1} \log C_1 \delta^{-1}$, thus \[ \log \delta_Z \geq 2\log C_1\delta^{-1} - \log C_1 \delta^{-1} > 0. \] This implies $\delta_Z > 1$, a contradiction, since by definition $\delta_Z \leq 1$. This contradiction establishes the theorem. \section*{Acknowledgements} The author was supported by a postdoctoral fellowship from the Centre de recherches math\'{e}matiques at Montr\'{e}al.
\section{$\bm{P(s)}$ distribution\label{A5}} A valuable clue as to the role of the `hard' scale $s \sim T^2$, appearing in Eq.~\eq{m1}, is provided by the thermal weight $P[\chi]$, emerging from \eq{baym}, whose properties we now explore. Firstly, the condition for normalisation, i.\,e.\ $\int d s P(s) = 2b$, is convenient for \eq{eta with P(s)} but should be set by $\chi$. After using symmetry in the integration variables $p_{1,2}$ of \eq{Pdef}, we may complete all but one integral to find (for notational convenience we put $T \to 1$ here) \begin{eqnarray*} \int d s P(s) &=& \frac23 (2\pi)^{-2} \int_0^\infty d p f \bar f \Big[ \, \chi^2 + \frac{p^2}{6} (\chi^\prime)^2 \, \Big] \, . \end{eqnarray*} Requiring that this equals $2b$ imposes a restriction on the function $\chi$, that needs to be satisfied in addition to the constraint \eq{B}. Taken together, they imply $\chi$ must satisfy \begin{eqnarray} \chi^{\prime\prime} + \Big( \, \frac2p - 1 - 2f \, \Big) \chi^\prime - \frac{6}{p^2} \chi + A p &=& 0 \, . \label{DE} \end{eqnarray} $A$ is a Lagrange multiplier to set the norm of $\chi$ (it cannot be zero). This relation was also found in Ref.~\cite{Heiselberg1994b} in a different manner, directly evaluating \eq{Q} to LL accuracy. Because the solution to the Boltzmann equation $f_\star (p) =f [1+ \chi \bar f (\hat p_i \hat p_j - \frac13 \delta_{ij})\nabla_i u_j]$ is positive and integrable, the function $\chi$ cannot grow exponentially fast. Suppose that $\chi(p)$ is normalised, in such a way that $\chi \to p^\nu$ for large $p$. Substituting into \eq{DE} with $f\approx0$ gives $\nu=2$ and $A=2$. (Rescaling the asymptotic behaviour by some constant will just change $A$ by the same factor.) \begin{eqnarray} \chi (p) &\sim& p^2 + O(p) \, ;\quad p\gg T\, . \label{chi for large p} \end{eqnarray} In Ref.~\cite{Baym1990}, the optimal solution to \eq{etaVAR} of the same `single power' form for {\em all} $p$ gave $\nu\approx 2.104$\,. 
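Both statements about the large-$p$ behaviour are easy to cross-check numerically. In the following sketch (ours, purely illustrative; units $T=1$), inserting $\chi = p^2$ with $A=2$ into \eq{DE} makes the power-law terms cancel identically, leaving only an exponentially small residual $\approx -4pf \sim -4p\,e^{-p}$ from the neglected distribution function:

```python
import math

def f(p):
    # Bose-Einstein distribution, units T = 1
    return 1.0 / (math.exp(p) - 1.0)

def residual(p, A=2.0):
    """Left-hand side of Eq. (DE) evaluated on the ansatz chi(p) = p^2."""
    chi, chi1, chi2 = p**2, 2.0 * p, 2.0
    return chi2 + (2.0 / p - 1.0 - 2.0 * f(p)) * chi1 - 6.0 * chi / p**2 + A * p

# with f -> 0 the residual vanishes identically for nu = 2, A = 2;
# at finite p it is only exponentially small, -4 p f(p) ~ -4 p exp(-p)
for p in (10.0, 15.0, 20.0):
    assert abs(residual(p)) < 5.0 * p * math.exp(-p)
```
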
For small arguments, $f(p) \simeq \frac{1}{p} - \frac12 + \frac{1}{12}p + \ldots$ in \eq{DE} and the first two terms in brackets thus cancel. The resulting asymptotic solution (contrary to the claim of \cite{Heiselberg1994b}) is \begin{eqnarray} \chi(p) &\sim& p^3 \Big(\, \bar c - \frac{2}{5}\log p \, \Big) + {\cal O}(p^5) \, ; \quad p\ll T \, , \label{chi for small p} \end{eqnarray} with $\bar c \approx 0.62$ (this integration constant is set by \eq{chi for large p}, for which we have only a numerical value). Knowing the precise solution $\chi^\star$ to \eq{DE} only improves the estimate for $\eta$ in \eq{baym} by about $0.5$\%. The large-$p$ behaviour $\chi\sim p^2$ turns out to be more important than \eq{chi for small p}, which is to be expected for a transport quantity. We thus explore below the function $P[\chi](s)$ at the approximate solution $\chi \to p^2$. \begin{figure}[hbt] \includegraphics[scale=0.9]{chi} \caption{ The function $\chi/p^2$ as determined by Eq.~\eq{DE}. Also shown is the limit $\chi \sim p^3 \log 1/p$ for $p \ll T$ (purple dotted line). $\chi\approx p^2$ appears to be an excellent approximation for $p\mathrel{\rlap{\lower0.2em\hbox{$\sim$}}\raise0.2em\hbox{$>$}} T$. } \label{fig:chi} \end{figure} \subsection*{Single-function Ansatz} For a given collisional parameter $s$, using \eq{Pdef} with $\chi(p)=p^2$, we define \begin{eqnarray} { P}(s) &=& \frac{1}{(2\pi)^4} \int d E_1 d E_2 \, f_1 \bar{f}_1 f_2 \bar{f}_2 \big[ \, \frac{16}3 E_1 E_2 \left( E_1 - E_2 \right)^2 + \frac23 s \left( E_1^2+E_2^2 + 4E_1E_2 \right) + s^2 \, \big] \, .\nonumber\\ \label{160211:eq1} \end{eqnarray} The integrals are over positive energies such that $E_1 E_2 \geq s/4$\,. We do not have a simple closed form expression for $P(s)$ beyond a 1-dimensional integral over polylog functions, but we can state its large/small-$s$ behaviour. For large $s$, the thermal distributions can be replaced by classical Maxwell-Boltzmann distributions, i.\,e.\ $f_1 \to \exp (-E_1)$.
We then obtain \begin{eqnarray*} P_{\rm cl.}(s) &=& \frac{s}{15\cdot 2^{12}\cdot \pi^4 } \big[\, (40+3s)\sqrt{s} K_3 \big(\sqrt{s}\big) - 6 s K_4 \big(\sqrt{s}\big) \,\big] \, , \end{eqnarray*} where $K_{3,4}$ are modified Bessel functions of the second kind. The first few terms in the small-$s$ expansion of $P(s)$ are given by \begin{eqnarray*} P(s) &\sim& \frac{32}{27} \, \Big\{ \, 54 \zeta^\prime (3) - \pi^4 + 18 \big(\, 10 - 3 \gamma - 3 \log \frac{s}{4}\, \big) \Big\} \\ & & \ +\frac{2s}{9}\, \Big\{ \, 66 - 12 \gamma^2 + \pi^2 - 54 \log \frac{s}{4} + 6 \log^2 \frac{s}{4} - 24 \gamma_1 \, \Big\} + {\cal O}(s^3) \, , \end{eqnarray*} where $\gamma_1$ is the first Stieltjes constant (the Euler-Mascheroni constant $\gamma$ is $\gamma_0$). The qualitative feature we needed for the argument in Sec.~\ref{sec:sigTR} is indeed confirmed; $P(0) \neq 0$ (see Fig.~\ref{fig:P(s)}), in fact there is an (integrable) log-divergence at $s\to 0$\,. The `moments' $M_n = \int_0^\infty d s \, s^n P(s)$, are explicitly \begin{eqnarray} M_n &=& \frac{ 4^{1+n} }{3\pi^4} \Gamma (n+1) \Gamma (n+4) \label{moments} \\ & & \qquad \times \Big[ \zeta (n+4) \zeta (n+2) (n+4) (5+3n) + \zeta (n+3)^2 n (7+3n) \Big] \, . \nonumber \end{eqnarray} These moments grow extremely rapidly; $\log M_n \sim 2n \log n$ as $n\to \infty$\,. Once rescaled, $P(s)/M_0$ can be interpreted as a (normalised) probability distribution which then gives \begin{eqnarray} \left< s \right> = M_1/M_0 \simeq 27.7 T^2 \, ,\\ \left< \log (s) \right> = \partial_\epsilon M_\epsilon /M_0 \big|_{\epsilon\to0} \simeq 0.381 \, , \label{moments of P(s)} \end{eqnarray} required to specify the parameter $\kappa$ for the effective model in Section \ref{sec 2A}\,. The general formula for the moments \eq{moments} allows for analytic continuation to complex $n$, inheriting properties from the $\Gamma$- and $\zeta$-functions. Isolated poles occur at $n=-1$ (double), $n=-2$ (triple) and double poles for all $n\leq -4$.
This reaffirms the fact that `negative' moments are divergent. \begin{figure}[hbt] \includegraphics[scale=0.9]{P_s_} \caption{ The (normalised) probability, \eq{160211:eq1} divided by $M_0$\,. The small-$s$ expansion is shown for comparison (dotted blue line), and implies that the slope of $P$ at $s=0$ is infinite. The counterpart function for classical Maxwell-Boltzmann particles is also shown (orange dashed line). } \label{fig:P(s)} \end{figure} \section{Soft \& hard contributions\label{A4}} Particle production rates and transport coefficients require kinematic integral convolutions with matrix elements. In a thermal setting, particle propagators are modified by their interaction with the medium, a feature already stressed in Sec.~\ref{sec:sigTR}. The general form of this screening is $T^2$ multiplied by a dimensionless function of (separately) the momentum and frequency. The HTL approximation \cite{Braaten1990c} affords an analytic representation depending only on $z=\omega/|\bm q|$. While usually derived under the assumption that both $\omega, |\bm q| \ll T$, the approximation is in fact valid for $|\omega^2 - q^2| \mathrel{\rlap{\lower0.2em\hbox{$\sim$}}\raise0.2em\hbox{$<$}} T^2$ \cite{Peshier1998}. As mentioned in the body of the text (Sec.~\ref{sec 3A}), we opt for a covariant separation of phase space with $t^\star$ as the cut-off parameter. The only difference with \eq{mu2 with t^star} comes from the HTL self-energies. Using the Born cross section for the $t$-channel contribution in the region $-s < t < t^\star$ gives a `trivial' result. The essential contribution is \begin{equation} \sigma_{\rm tr}^{\rm hard} = \frac1s \int_{-s}^{t^\star} d t \, |t| \frac{\alpha^2}{t^2} = \frac{\alpha^2}{s} \log \frac{s}{|t^\star|} \, , \label{Y-hard} \end{equation} identical to \eq{eq1} of course. Screening is necessary for the complementary region, where $t^\star < t < 0$.
There is a further integration over the parameter $z \in[-1,+1]$ for space-like exchanges, \begin{equation} \sigma_{\rm tr}^{\rm soft} = \frac1{2s} \int_{-1}^{+1} d z \int_{t^\star}^0 d t \frac{(-t)\alpha^2}{| t - \widetilde\Pi |^2}\, , \label{screen} \end{equation} where $\widetilde\Pi = \widetilde\Pi_{L,T} (t,z)$ is the self energy. Since $|t^\star |\ll T^2$, the self-energy is independent of $t$ (so that HTL functions apply). This expression, when combined with the complementary region $t<t^\star$, produces no residual dependence on $t^\star$ (for weak coupling). In other words, it tells us at which point the HTL dressing is to be replaced by tree level functions. The expression in Eq.~\eq{screen} has previously been determined numerically, or in limiting cases. However it is possible to find the large-$|t^\star|$ contribution analytically in general. Keeping only relevant terms, \begin{eqnarray*} \int d t \frac{(-t)}{| t - \Pi |^2} &=& \int \frac{d t}{\mathrm{Im}\, \Pi} \mathrm{Im}\, \frac{\Pi}{t-\Pi} = \frac{1}{\mathrm{Im}\, \Pi} \mathrm{Im}\, \Big[\, \Pi \log \frac{|t^\star| + \Pi}{\Pi} \, \Big] \, . \end{eqnarray*} We now make use of the fact that $|t^\star| \gg |\Pi(z)| \sim \alpha T^2$ for weak coupling. To extract the leading cut-off dependence, up to ${\cal O}(m_D^2/t^\star)$, we re-express the logarithms by \begin{eqnarray*} \log \frac{|t^\star|}{\Pi} &=& \log \frac{|t^\star|}{m_D^2} + \log \frac{m_D^2}{\Pi} \, , \end{eqnarray*} where $m_D^2 = 4\pi \alpha T^2$ is the LO Debye mass (for $n_{\!f}=0$). The familiar log-structure emerges when $| t^\star | \rightarrow \infty$, plus a constant next to the logarithm, \begin{equation} \sigma_{\rm tr}^{\rm soft} = \frac{\alpha^2}{s} \Big( \log \frac{|t^\star|}{m_D^2} - \int_{0}^{1} d z \frac{1}{\mathrm{Im}\, \phi} \mathrm{Im}\, \! \big[ \, \phi \log \phi \, \big] \Big) \, , \label{soft} \end{equation} in terms of the function $\phi(z) = \Pi/m_D^2$, which is order of unity.
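The closed form above actually holds exactly for any constant $\Pi$ with $\mathrm{Im}\,\Pi \neq 0$, as may be confirmed by direct quadrature; the sketch below (ours, with an arbitrary hypothetical value for $\Pi$) checks $\int_{t^\star}^{0} d t\,(-t)/|t-\Pi|^2 = \frac{1}{\mathrm{Im}\,\Pi}\mathrm{Im}\big[\Pi\log\frac{|t^\star|+\Pi}{\Pi}\big]$:

```python
import cmath

Pi = 0.3 + 0.2j        # hypothetical sample self-energy value, Im Pi > 0
tstar = 25.0           # |t^star|, here chosen with |t^star| >> |Pi|

# left-hand side: midpoint quadrature of  int_0^{|t*|} x dx / |x + Pi|^2
# (obtained from the t-integral by the substitution x = -t)
N = 200_000
dx = tstar / N
lhs = sum((i + 0.5) * dx / abs((i + 0.5) * dx + Pi)**2 for i in range(N)) * dx

# right-hand side: the closed form quoted in the text
rhs = (Pi * cmath.log((tstar + Pi) / Pi)).imag / Pi.imag

assert abs(lhs - rhs) < 1e-3 * abs(rhs)
```
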
We have used $\phi(z)^* = \phi(-z)$, to replace the original $z$-integral by one from $z=0$ to $1$. A contour in the complex $\phi$-plane, parametrised by $z$, is traced out with endpoints at $\phi(0)$ and $\phi(1)$, see Fig.~\ref{fig: contour}. \begin{figure}[hbt] \includegraphics[scale=0.9]{contour} \caption{ The transport cross section takes into account screening via the function $\phi$ (which has real and imaginary parts), thus tracing a curve in the complex plane with $z$ as the parameter. } \label{fig: contour} \end{figure} Formally, $\sigma_{\rm tr}^{\rm hard} + \sigma_{\rm tr}^{\rm soft}$ to NLL accuracy is then independent of $t^\star$, which cancels. That is, $\sigma_{\rm tr}^{\rm NLL} = \frac1s \alpha^2 [ \log (s/m_D^2) - \widetilde c\, ]$, with $\widetilde c$ defined by the integral in \eq{soft}. Relative errors in \eq{soft} are suppressed by $m_D^2/|t^\star| \sim \alpha$ and amount to a residual dependence on $t^\star$. Equation \eq{soft} is directly applicable to the calculation for the transport cross section. In the quenched case, one needs the screened matrix element from $gg \rightarrow gg$ interactions. Taking only the $t$- and $u$-channels from \eq{glue-glue} with \eq{IR replacement}, one finds \begin{eqnarray} \frac{d \sigma}{d t} \Big|_{t>t^\star} &=& \frac{\alpha^2}{3} \Big[\ \frac{1}{| t - \widetilde\Pi_L(z) |^2} + \frac{1/2}{| t - \widetilde\Pi_T (z) |^2} \ \Big] \, , \label{Y-soft} \end{eqnarray} for $|t| \ll T^2$, and omitting all unnecessary prefactors. There are two terms of the type discussed in \eq{screen}, where screening is accounted for by HTL gluon self energies, viz. \begin{eqnarray*} \phi_L (z) &=& (1-z^2) \Big(\, 1 - \tfrac12 z \log \frac{ z+1 }{ z-1 } \, \Big) \, ,\\ \phi_T (z) &=& \Big(\, \tfrac12 z^2 + \tfrac14 z(1-z^2) \log \frac{z+1}{z-1} \, \Big) \, . \end{eqnarray*} Since $|z|<1$, the logarithms are complex, with values taken on the principal branch.
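A quick numerical check of these expressions (ours, purely illustrative) confirms the endpoint values, $\phi_L(0)=1$, $\phi_L(1)=0$, $\phi_T(0)=0$ and $\phi_T(1)=\tfrac12$, together with the familiar HTL sum rule $\phi_L + 2\phi_T = 1$:

```python
import cmath

def phi_L(z):
    L = cmath.log((z + 1) / (z - 1))   # principal branch, complex for 0 < z < 1
    return (1 - z**2) * (1 - 0.5 * z * L)

def phi_T(z):
    L = cmath.log((z + 1) / (z - 1))
    return 0.5 * z**2 + 0.25 * z * (1 - z**2) * L

# endpoints of the contour traced out in the complex phi-plane
assert abs(phi_L(1e-6) - 1.0) < 1e-4
assert abs(phi_L(1 - 1e-9)) < 1e-6
assert abs(phi_T(1e-6)) < 1e-4
assert abs(phi_T(1 - 1e-9) - 0.5) < 1e-6

# HTL sum rule: phi_L + 2 phi_T = 1 identically (Pi_L + 2 Pi_T = m_D^2)
for z in (0.1, 0.5, 0.9):
    assert abs(phi_L(z) + 2 * phi_T(z) - 1.0) < 1e-12
```
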
One has the freedom to deform this contour to a simple one (just above the real axis), say $\phi = \gamma + i \delta$, and examine the limit $\delta \to 0$. This dramatically simplifies the integrand appearing in Eq.~\eq{soft}; \begin{equation} \mathrm{Im}\, \Big[\, \frac{ \phi \log \phi }{\delta }\, \Big] = \log \Big( \sqrt{\gamma^2+\delta^2}\, \Big) + \frac{\gamma}{\delta} \arg (\gamma+i\delta) \stackrel{\delta \to 0}{\simeq} \log |\gamma| + 1 \, . \end{equation} In the small-$|t|$ limit of \eq{Y-soft}, we can evaluate the transport cross section as follows. Deform the contour, see Fig.~\ref{fig: contour}, to find that (for example) the longitudinal propagator gives a constant in \eq{soft}, \begin{equation} \int_0^1 d |\phi_L| \big(\, \log |\phi_L| + 1 \, \big) = |\phi_L(1)| \log |\phi_L(1)| - |\phi_L(0)| \log |\phi_L(0)| = 0 \, . \end{equation} Only the transverse screening is non-zero, having $\phi_T(1) = \frac12$, and ends up giving $\widetilde c = -\frac13 \log 2$ next to the LL in \eq{soft}. Combining the contribution from hard and soft regions given by \eq{Y-hard} and \eq{Y-soft} produces \begin{eqnarray*} \sigma_{\rm tr} & = & \frac{\alpha^2}{s} \Big( \, \log \frac{ s }{m_D^2 } - \tfrac13 \log 2 \, \Big) \, . \end{eqnarray*} This demonstrates the Braaten-Yuan (BY) method \cite{Braaten1991d}, and confirms an earlier numerical result of Heiselberg in Eq. (B7) \cite{Heiselberg1994b}. If we had used this result to match $\mu^2 = \kappa \cdot m_D^2$ in the simple model \eq{m1}, we would have found $\kappa = 2^{1/3}/e \approx 0.46$ [this differs from the value $\kappa \approx 0.62$ obtained via \eq{eta NLL}, due to the omission of inelastic processes, the gluon 4-vertex and $s$-channel diagrams in ${\cal M}$]. Incidentally, this technique is applicable for calculating the photon rate and proves a result in \cite{Kapusta1991} previously known only numerically.
Formally, $\sigma_{\rm tr}$ is then independent of $t^\star$ in the BY scheme, which rests on the assumption that $m_D^2 \ll |t^\star| \ll s \sim T^2$. The residual dependence on $t^\star$ enters through subleading errors, which are $\sim \Pi/|t^\star|$ in \eq{Y-soft} and $\sim |t^\star|/s$ in \eq{Y-hard}. Let us end with a comment on the finite radius of convergence for $\sigma_{\rm tr}$. In Sec.~\ref{sec 2A} we found that it was simply the `mass' $\mu^2$ -- here it will instead be given by the minimum of $|\Pi(z)|$. That is zero for HTL functions, but in general may be related to the QCD `magnetic' mass. This speculation is well beyond our present scope. \section{Two-body phase space\label{A2}} The collisional operator \eq{C[f]} expresses the rate of binary encounters, integrated over partner momenta $\bm p_2$, $\bm p_3$ and $\bm p_4$. For a given function $g \big( \{ \bm p_i \}\big)$ which depends on the participant momenta, we evaluate the phase space integral, $\int d \Gamma \cdot g$. One of the integrals may be completed using energy-momentum conservation; we choose the $p_4$-integral, \begin{eqnarray} \int_4 \frac{g}{2E_4} &=& 2\pi \delta ( \underline{K}_4^2 ) \theta (\underline{E}_4) \ \underline{g} \, . \label{B2} \end{eqnarray} Here the underline indicates a dependence on the fixed 4-momentum $\underline{K}_4 = P_1 + P_2 - P_3$. For instance, $\underline{f}_4$ depends on the energy $\underline{E}_4 = E_1 + E_2 - E_3$. It will turn out that the on-shell constraint $\underline{K}_4^2 = 0$, expressed by the $\delta$-function in \eq{B2}, implies already $\underline{E}_4 \geq 0$, making the factor of $\theta (\underline{E}_4)$ redundant in \eq{B2}. The remaining fivefold integral is further reduced as follows \cite{Peigne2008}. We align the $z$-axis with $\bm p_1$ and orient the $zy$-plane to contain $\bm p_3$ viz.
\begin{eqnarray} \bm p_1 &=& E_1 (0,0,1) \, , \nonumber \\ \bm p_2 &=& E_2 (\sin \phi \sin \theta_2, \cos \phi \sin \theta_2, \cos \theta_2 ) \, , \nonumber \\ \bm p_3 &=& E_3 (0, \sin \theta_3, \cos \theta_3 ) \, . \label{p123} \end{eqnarray} The argument of the $\delta$-function in \eq{B2} depends on $\phi$ through \begin{eqnarray*} \underline{K}_4^2 &=& 2(P_1 P_2 - P_1 P_3 - P_2 P_3 ) \\ &=& s + t - 2 P_2 P_3 \, . \end{eqnarray*} Thus, using \eq{p123} to simplify the outstanding 4-product, we have $\underline{K}_4^2 = A + B \cos \phi$ where \begin{eqnarray} A &=& s + t -2 E_2 E_3 (1-\cos \theta_2 \cos \theta_3 ) \, ,\nonumber \\ B &=& 2 E_2 E_3 \sin \theta_2 \sin \theta_3 \, . \label{AB} \end{eqnarray} Emphasising the azimuthal dependence in $g = g (\phi)$, the $\phi$-integral is elementary \begin{eqnarray*} \int_0^{2\pi} d \phi\ \delta \left( \underline{K}_4^2 \right) \ g(\phi) &=& E_1 \frac{ \theta (h) }{\sqrt{h}} \sum_{\pm} g(\phi_\pm) \, , \end{eqnarray*} where $\phi_\pm = \pi \pm \arccos(A/B)$ and $h = E_1^2 ( B^2 - A^2 )$ ($E_1$ is factored out for convenience). Next, we reformulate the remaining integration over $\cos \theta_{2,3}$ in terms of $s$ and $t$, whose values are specified by $\bm p_1, \bm p_2$ and $\bm p_3$: \begin{eqnarray} s &=& 2 E_1 E_2 (1 - \cos \theta_2 ) \, , \nonumber \\ t &=& -2 E_1 E_3 (1 - \cos \theta_3 ) \, , \label{st} \end{eqnarray} with Jacobian $4E_2E_3 \cdot E_1^2$. Using this, and \eq{B2}, we arrive at \begin{eqnarray} \int d \Gamma \cdot g(\cdots) &=& \frac{1}{16 (2\pi)^4 E_1} \int d s d t \ \int d E_2 d E_3 \frac{\theta (h)}{\sqrt{h}} \sum_\pm \underline{g} (\phi_\pm) \, . \label{B3} \end{eqnarray} The function $h$ is quadratic in $E_3$; written as $h(E_3) = aE_3^2 + bE_3+c$, the coefficients are found from \eq{AB} and \eq{st} to be \begin{eqnarray*} a &=& -s^2 \, ,\\ b &=& -2s(uE_1+tE_2) \, ,\\ c &=& -(uE_1 - t E_2)^2 - stu \, .
\end{eqnarray*} Since $a<0$, $\theta \left( h \right)$ constrains the $E_3$-integration to the interval $[E_3^-, E_3^+]$, where \begin{eqnarray*} E_3^\pm &=& \frac{-b \pm \sqrt{D}}{2a} \, . \end{eqnarray*} Positivity of the discriminant $D =4 s^2 t u (4 E_1 E_2 - s)$ summarises the 2-body phase space: $0 \leq s \leq s_{\rm max} = 4 E_1 E_2$ and $-s \leq t \leq 0$. With the kinematic bounds fully specified, the factor $\theta (h)$ can indeed (as anticipated) be dropped in \eq{B3} to give \begin{eqnarray} \int d \Gamma \cdot g &=& \frac{1}{(2\pi)^3 E_1} \int d E_2 \int_0^{s_{\rm max}} d s \int_{-s}^0 d t \ \frac{1}{16\pi}{\cal I} (s,t ,E_1, E_2) \, , \label{stuway} \end{eqnarray} in terms of a function that parametrises the final $E_3$-integral: \begin{eqnarray} {\cal I} [g] = \int_{E_3^-}^{E_3^+} \frac{ d E_3 }{\sqrt{ h(E_3) }} \ \frac12\sum_\pm \underline{g} ( E_3, \phi_\pm ) \, . \label{K} \end{eqnarray} Up to this point, there have been no simplifying approximations. After using energy and momentum conservation as well as symmetry in one of the angles, we have reduced the original expression \eq{B2} to a four-dimensional integration. \subsection*{Small-$\bm t$ approximation} Of interest to Sec.~\ref{sec:sigTR} is the behaviour of \eq{stuway} assuming dominance of small angle scatterings. We now discuss in some more detail how to complete the $E_3$-integral of \eq{stuway}, keeping terms of relevant powers in $t$. For small $\omega = E_1 - E_3$, it is reasonable to replace in $g(E_3)$ the energy $E_3 \to E_1$. However, for $-t \ll T^2$ it remains possible that both $|\omega|$ and $q$ are individually large. The integrand \eq{K} has nonzero support for $h(E_3) > 0$, which as $|t| \to 0$ becomes very narrow; $(E_3^+-E_3^-) = \frac2s \sqrt{tu(s_{\rm max}-s)}$ with the central value \begin{eqnarray*} E_3^\star &=& E_1 - \frac{t}{s} \big( E_2-E_1 \big) \, .
\end{eqnarray*} Let us suppose that the integrand $g$ is largest for small-$t$ (or $u$), and accordingly expand $g(E_3,\cdots)$ about $E_3^\star$ (i.\,e.\ if $-t\ll s$, then $E_3 \simeq E_1$). We thus represent $g$ in powers of $(E_3 - E_3^\star)$, to be integrated in \eq{K}, leading us to \begin{eqnarray} {\cal I} [g] = \sum_{k=0}^\infty {\cal I}_{(k)} \frac{\partial^{k} g}{\partial E_3^{k}} \bigg|_{E_3=E_3^\star} \, , \label{fullK} \end{eqnarray} where ${\cal I}_{(k)} = {\cal I}[(E_3-E_3^\star)^k]/(k!)$. Only the first few terms in \eq{fullK} turn out to be relevant, namely \begin{equation} \begin{tabular}{c||c|c|c} $k$ & 0 & 1 & 2 \\ \hline $ {\cal I}_{(k)}\ $ & $\ \displaystyle \frac{\pi}{s} \ $ & $\ \displaystyle 0 \ $ & $\ \displaystyle \frac{\pi}{16} \frac{D}{s^5} \ $ \end{tabular} \, , \label{K_(m)} \end{equation} for $k\geq 3$ we find ${\cal I}_{(k)} = {\cal O}(D^{k/2})$, which is subleading for small-$t$ or $u$. If we let $g^\prime = \partial g / \partial E_3$, then the leading terms for a small-$t$ approximation may be written \begin{eqnarray} {\cal I}[g] &\simeq& \frac{\pi}{s} g(E_3^\star) + \frac \pi{4s^3} t u \big( 4E_1E_2 - s \big) g^{\prime\prime} (E_3^\star) \label{E3int} \end{eqnarray} It will turn out (see below), basically because of the choice of tensor basis \eq{chiij}, that $g(E_3^\star) \propto tu$ and therefore both terms above are ${\cal O}(tu/s^2)$\,. The pertinent transport weight, $|t|$ in \eq{E3int}, emerges from the peaked kinematics for $E_3\sim E_1$ as $t\to 0$ (or $E_3 \sim E_2$ as $u\to 0$). \bigskip For the linearised collisional operator \eq{symC}, the QCD matrix element is largest when $t$ or $u$ is small compared with $s$. Here the integrand is $$g(\cdots) = |{\cal M}|^2 f_1 f_2 \bar f_3 \bar f_4 \cdot \big( \Delta^{ij}[\chi] \big)^2 \, ,$$ which depends on the unknown function $\chi$. 
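Before proceeding, the phase-space algebra above is easily verified numerically: the discriminant $D$, the roots $E_3^\pm$, the central value $E_3^\star$, and the entries of \eq{K_(m)}. The sketch below (ours, with arbitrary hypothetical sample kinematics, massless particles so that $s+t+u=0$, units $T=1$) uses the substitution $E_3 = E_3^\star + r\sin\varphi$, for which $\sqrt{h} = s\,r\cos\varphi$ and the endpoint singularities of the quadrature disappear:

```python
import math

# arbitrary sample kinematics (massless: s + t + u = 0), units T = 1
E1, E2 = 1.3, 0.8
s, t = 2.5, -0.7                 # 0 <= s <= 4*E1*E2, -s <= t <= 0
u = -s - t

# coefficients of h(E3) = a*E3^2 + b*E3 + c and its discriminant
a, b = -s**2, -2 * s * (u * E1 + t * E2)
c = -(u * E1 - t * E2)**2 - s * t * u
D = 4 * s**2 * t * u * (4 * E1 * E2 - s)
assert abs((b * b - 4 * a * c) - D) < 1e-9

# the roots bound the allowed E3-interval; h vanishes there
E3_lo, E3_hi = sorted([(-b - D**0.5) / (2 * a), (-b + D**0.5) / (2 * a)])
for E3 in (E3_lo, E3_hi):
    assert abs(a * E3**2 + b * E3 + c) < 1e-9

# central value and width quoted in the text
E3star = E1 - (t / s) * (E2 - E1)
assert abs(E3star - 0.5 * (E3_lo + E3_hi)) < 1e-9
assert abs((E3_hi - E3_lo) - (2 / s) * math.sqrt(t * u * (4 * E1 * E2 - s))) < 1e-9

r = 0.5 * (E3_hi - E3_lo)        # half-width of the allowed interval

def I_k(k, N=20000):
    """I[(E3 - E3*)^k]/k! via E3 = E3* + r sin(phi), sqrt(h) = s r cos(phi)."""
    dphi = math.pi / N
    acc = sum((r * math.sin(-math.pi / 2 + (i + 0.5) * dphi))**k / s
              for i in range(N)) * dphi
    return acc / math.factorial(k)

# reproduces the table: I_(0) = pi/s, I_(1) = 0, I_(2) = (pi/16) D/s^5
assert abs(I_k(0) - math.pi / s) < 1e-6
assert abs(I_k(1)) < 1e-9
assert abs(I_k(2) - (math.pi / 16) * D / s**5) < 1e-8
```
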
Because $E^\star_3 \simeq E_1$, up to subleading corrections for $-t\ll s$, we can replace $f_3 \to f_1$, which gives \begin{eqnarray} \int d \Gamma \cdot g &=& \frac{f_1 \bar f_1}{(2\pi)^3 E_1} \int d E_2\, f_2 \bar f_2 \int d s \, s^2 \int d t \frac{d \sigma}{d t} \, {\cal I}\bm[\, \big( \Delta^{ij}[\chi] \big)^2 \,\bm] \, , \label{g's} \end{eqnarray} assuming that the cross section $d \sigma/ d t = |{\cal M}|^2/(16\pi s^2)$ is only a function of $s$ and $t$. $\chi$ represents the input to the functional ${\cal Q}[\chi]$, see Eq.~\eq{Q}, whose maximal value gives $\eta$. Hence $$ {\cal I} = \frac{\pi |t|}{4s^2} \big[ \, 8 s {\cal A} + \big( 4E_1E_2 - s \big) {\cal B} \, \big] \, , $$ where ${\cal A}$ and ${\cal B}$ were given in \eq{Pdef}. \section*{Acknowledgments} G.J. was supported by the National Institute for Theoretical Physics (NITheP). \clearpage \section{Introduction} \label{sec:intro} Quantifying the shear viscosity $\eta$ of the quark-gluon plasma is a key issue in heavy-ion physics, and has been ever since RHIC and LHC experiments revealed it to be an (almost) perfect fluid \cite{Teaney2003a}. Successful descriptions of the data favour a small viscosity to entropy ratio, $\eta/s \mathrel{\rlap{\lower0.2em\hbox{$\sim$}}\raise0.2em\hbox{$<$}} 0.5$: the hallmark for a `strongly coupled' system. This feature of many-body QCD, along with the associated puzzle of rapid thermalisation, is not yet properly understood and remains a challenge to explain theoretically \cite{Danielewicz1985}. A strong-coupling calculation for certain supersymmetric Yang-Mills theories gives $\eta/s=1/(4\pi)\approx 0.1$ \cite{Kovtun2005}, which appears to be in the right ballpark. This value was conjectured as a universal lower bound for conformal gauge theories, but deviates from real-world QCD \cite{Huot2007}. Moreover, it does not reveal anything about the temperature dependence of $\eta$ in a QGP.
Certain {\em models} for $\eta(T)$ can be eliminated by comparing hydrodynamic output to transverse particle spectra and elliptic flow \cite{Niemi2015qia,Denicol2016}. Ongoing lattice computations have helped extract $\eta(T)$ for a gluonic system near $T_c$, by reverse-engineering the Green-Kubo formulae. However, the {\em assumed} form of the spectral function does not respect asymptotic freedom \cite{Meyer2007f, Meyer2009e, Mages2015, Astrakhantsev2017}. First-principles perturbation theory is apparently at odds with the small viscosity, which is commonly reckoned to imply that {\em all} `weak-coupling' approaches are inadequate. Actually, this point of view is misleading as it follows from an incomplete approximation. We show how an asymptotic weakening of the strong interaction, when treated consistently at the 1-loop level, leads to a natural explanation for the low values of $\eta/s$. Kinetic theory ought to give quantitative estimates for transport properties provided the mean free path length $\lambda$ (specified by the associated cross section) exceeds the inter-particle distance \cite{Reif1964}. The viscosity is given by ($\iota\approx 1/3$ is a numerical factor) \begin{equation} \eta \simeq \iota \cdot n \bar p \lambda \label{eta estimate} \end{equation} from the density of particles $n$ which can transfer a typical momentum $\bar p$ over a distance $\lambda \mathrel{\rlap{\lower0.2em\hbox{$\sim$}}\raise0.2em\hbox{$>$}} n^{-1/3}$. For gauge theories it is important to distinguish between the total cross section and $\sigma_{\rm tr} = \int d\Omega\, (1-\cos\theta)\, d\sigma/d\Omega$. Relevant for the viscosity is the latter, the so-called transport cross section, which takes into account that a single one of the prevailing small-angle scatterings is not sufficient to transfer the typical momentum $\bar p \sim T$ (for relativistic plasmas).
Although the `transport weight' $(1-\cos\theta)$, which is proportional to the invariant momentum exchange $t$, does reduce the sensitivity of the viscosity to the infrared sector, it is still {\em mandatory} to also take into account quantum corrections in order to obtain non-trivial results. E.\,g., for $t$-channel boson exchange (see Fig.~\ref{1234}) with $d\sigma/dt \sim \alpha^2/t^2$ at tree-level, the transport cross section is logarithmically divergent, arising from the integral $\int_{-s}^{t_{\rm max}} d t/(-t) $. Due to the kinematic boundary $t_{\rm max} = 0$, the viscosity would be zero (in fact for any value of the bare coupling). However, the exchanged boson acquires a self-energy of the order $\mu^2 \sim \alpha T^2$ due to thermal fluctuations which cannot be omitted in particular for $t \to 0\,$. For small coupling $\alpha$, $\mu^2$ is well separated from the typical invariant collision energy\footnote{The context will make clear the distinction between Mandelstam $s$ and the entropy.} $s \sim T^2$. Thus $\sigma_{\rm tr}$ can be `mimicked' by an unscreened differential cross section, integrated over a restricted $t$-range, \begin{equation} \sigma_{\rm tr} \sim \frac1s \int_{-s}^{-\mu^2} d t\, |t| \, \frac{\alpha^2}{t^2} \sim \frac{\alpha^2}{T^2} \log \left( \alpha^{-1} \right) \, . \label{eq1} \end{equation} The emerging {\em leading logarithm} (LL) indicates the screening of soft scatterings due to loop corrections. While details of the regulator $\mu^2$ do not affect the LL prefactor, they determine the argument of the logarithm, giving subleading terms ${\cal O}(\alpha^2)$ in Eq.~\eq{eq1}. We emphasize that the argument $\alpha^{-1}$ of the characteristic Coulomb logarithm is a ratio of `hard' and `soft' scales. 
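For illustration (ours, with arbitrary non-physical parameter values), the restricted $t$-integral in \eq{eq1} indeed reproduces the Coulomb logarithm:

```python
import math

alpha = 0.1                      # illustrative coupling
s = 15.0                         # typical invariant collision energy, units T = 1
mu2 = alpha                      # soft screening scale ~ alpha T^2

# (1/s) * int_{-s}^{-mu^2} dt |t| alpha^2/t^2, via midpoint quadrature in |t|
N = 200_000
dt = (s - mu2) / N
num = sum(alpha**2 / (mu2 + (i + 0.5) * dt) for i in range(N)) * dt / s

# equals (alpha^2/s) log(s/mu^2), i.e. the leading Coulomb logarithm
assert abs(num - (alpha**2 / s) * math.log(s / mu2)) < 1e-6
```
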
\begin{figure} \centerline{\includegraphics[scale=1.]{scattering}} \caption{ Gluon-gluon scattering in the $t$-channel at LO produces a large forward peak in $d \sigma / d t$ due to the long range of `glancing' interactions (a $u$-channel process contributes similarly by crossing). With $P=(E,\bm p)$ denoting the four-momenta, we label the colliding partners as $\{ P_1, P_2 \} \rightarrow \{ P_3, P_4 \}$. The exchanged gluon (carrying 4-momentum $Q=P_1-P_3$ and thus $t=Q^2$) is dressed by interactions with the medium, as a minimal requirement for the LL result. } \label{1234} \end{figure} The corresponding path length is $\lambda \sim (n\sigma_{\rm tr})^{-1} \sim 1/(T \alpha^2 \log(\alpha^{-1}))$, where we used $n \sim T^3$ for the particle density. Since $n$ is also proportional to the entropy density, \eq{eta estimate} gives the parametric $\alpha$-dependence \begin{equation} \eta/s \sim \bar p \lambda \sim 1/\left( \alpha^2 \log(\alpha^{-1}) \right) \label{eta/s parametrically} \end{equation} for hot gauge theories. We briefly summarise the history of $\eta$ calculations: first attempts were based on the relaxation time approximation of the Boltzmann equation \cite{Hosoya1985a}. Those estimates were improved by Baym and collaborators who linearised the QCD Boltzmann equation with the collision term screened with {\em hard thermal loop} (HTL) insertions \cite{Baym1990, Heiselberg1994b}. This fixed the overall prefactor in \eq{eta/s parametrically}, to give the LL result. As it turned out, inelastic processes (na\"{i}vely of higher order in $\alpha$) need be included to go beyond the LL approximation. This was recognized by Arnold, Moore and Yaffe \cite{Arnold2000d}, who subsequently calculated the coefficient $c$ in the {\em next-to leading log} (NLL) and state-of-the-art result \cite{Arnold2003f} \begin{equation} \eta_{_{\mathrm{NLL}}} = \frac{b T^3}{ \alpha^2 \log(c/\alpha) } \, . 
\label{eta NLL} \end{equation} The constants $b$ and $c$ were computed numerically, for various numbers of flavors $n_{\!f}\,$, from the leading order (LO) fixed-coupling result, $\eta_{_{\mathrm{LO}}}^{\rm fix}$, obtained in the effective kinetic framework \cite{Arnold2003a}; in the quenched limit $b \simeq 0.34$ and $c \simeq 0.61$. In general, $\eta(\alpha)$ is a decreasing function as expected on physics grounds: velocity gradients should equilibrate more efficiently by stronger interactions (unless the quasiparticle structure of the system changes, as in phase transitions). This expectation is indeed met by the NLL result \eq{eta NLL} for small $\alpha$, where it is strictly justified. However, $\eta_{_{\mathrm{NLL}}}(\alpha)$ has a minimum at $\alpha^\star = c/\sqrt{e}$ ($e$ is Euler's number). Numerically, $\min \left[ \eta_{_{\mathrm{NLL}}} \right] = 2b e T^3 / c^2$ turns out to be close to the free entropy $s_0 = (16 + \frac{21}2 n_{\!f}) \frac{4\pi^2}{90} T^3$, which overestimates the entropy of the interacting QGP (in particular near the confinement transition). Thus it is plain that the NLL result \eq{eta NLL} is incompatible with $\eta / s \mathrel{\rlap{\lower0.2em\hbox{$\sim$}}\raise0.2em\hbox{$<$}} 0.5$ -- which may have led to the common view that perturbative QCD cannot explain the experimental findings. In order to scrutinize this position, we may see the minimum at $\alpha^\star$ as a precursor to the singularity of $\eta_{_{\mathrm{NLL}}}(\alpha)$ at $\alpha = c$, which marks the ultimate break-down of the NLL approximation. However, as is obvious from the derivation of the parametric formula \eq{eta/s parametrically}, this singularity is {\em unphysical} because it stems from a shifting of the integration bound in \eq{eq1}, where screening was taken into account by modifying $t_{\rm max} = 0 \to -\mu^2$.
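The location and depth of this minimum are easy to check numerically; the following sketch simply scans Eq.~\eq{eta NLL} using the quenched-limit constants quoted above ($b \simeq 0.34$, $c \simeq 0.61$), with $T$ set to unity for illustration.

```python
import math

# Scan eta_NLL(alpha) = b T^3 / (alpha^2 log(c/alpha)) for its minimum and
# compare with the analytic values alpha* = c/sqrt(e) and 2 b e T^3 / c^2.
# b, c are the quenched-limit constants quoted in the text; T = 1.
b, c, T = 0.34, 0.61, 1.0
eta_nll = lambda a: b * T**3 / (a**2 * math.log(c / a))

grid = [c * i / 10_000 for i in range(1, 10_000)]    # alpha in (0, c)
a_min = min(grid, key=eta_nll)
print(a_min, c / math.sqrt(math.e))                  # minimum position
print(eta_nll(a_min), 2 * b * math.e * T**3 / c**2)  # minimum value
```

With these numbers the minimum sits near $\alpha^\star \approx 0.37$ at a value of roughly $5\,T^3$, indeed of the order of the free entropy quoted in the text.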
This suggests that also the minimum of $\eta_{_{\rm{NLL}}}$ is an artefact, which will motivate us to forgo the expansion in powers\footnote{The situation may not be improved by higher order terms in the log-expansion, because the latter seems not to be Borel summable \cite{Arnold2003f}.} of $\log (1/\alpha)$, focusing instead on the full (unexpanded) LO result. We shall find that $\eta_{_{\rm{LO}}}(\alpha)$, as a monotonically decreasing function, could potentially explain $\eta/s \mathrel{\rlap{\lower0.2em\hbox{$\sim$}}\raise0.2em\hbox{$<$}} 0.5$ -- depending on the value chosen for $\alpha$. But specifying $\alpha$ should not be guesswork. The running of the coupling is determined by the vacuum parts of the very same quantum corrections whose thermal counterpart we have emphasized to be mandatory for screening, and thus for any reasonable approximation of transport observables like $\eta$. Of course, for renormalisation-group invariant approximations, the choice of `the' scale of the running coupling $\alpha(Q)$ is arbitrary: rescaling $Q \to \widetilde Q$ is compensated by emerging $\alpha(\widetilde Q)\log(\widetilde Q/Q)$ correction terms. These terms should either be included explicitly, or be minimized by a `natural' choice of scale for the relevant processes. For the important $t$-channel process in Fig.~\ref{1234}, which leads to \eq{eq1}, this natural scale will be $Q^2 \sim t$. Otherwise, {\em ad hoc} choices like $Q=2\pi T$ may result in considerable inaccuracies, in particular for a quantity like $\eta$, which depends on the square of the coupling. Our paper is organised as follows. To begin with, in Sec.~\ref{sec:sigTR}, we elaborate on the existing approximations and their range of validity. From a simple model (based on the NLL result) we can make a judgement on the extrapolation of $\eta_{_{\mathrm{LO}}}$ to larger values of the fixed coupling.
In Sec.~\ref{sec:renorm} we turn to QCD particularities and address the question of scale setting for the running coupling. Our results for $\eta(T)$ are given in Sec.~\ref{sec:results}, where we also combine it with the lattice entropy to estimate $\eta/s$ for $T\sim T_c$ and then comment on kinetic theory in Sec.~\ref{sec 3C}. \section{Remarks on the existing calculation} \label{sec:sigTR} Screening is the basic mechanism necessary to calculate typical transport properties, which amounts to resumming at least the 1-loop self-energy in the propagators. Whether higher loop corrections will give more reliable approximations, or not, is a relevant question, given that perturbative series do not converge. In quantum field theories we may at best expect asymptotic expansions\footnote{ For QCD, Linde's argument \cite{Linde1980a} indicates further (unresolved) intricacies.}. While not being convergent, they can still constitute useful approximations if truncated at an appropriate `optimal' order, which usually becomes {\em smaller} with increasing coupling strength \cite{ItzyksonZuber}. Thus, striving for more than the 1-loop insertions absolutely necessary for screening may not result in more reliable approximations for transport phenomena in heavy-ion physics, which are characterized by a `fairly large' coupling\footnote{ At weak coupling, higher order contributions would improve this `minimal' approximation, but the relative corrections would then be small.}. Complementing the `1-loop screening' by collinear $1 \to 2$ splitting processes defines the framework of the LO effective kinetic theory developed in \cite{Arnold2003f} that allowed the values $b$ and $c$ in \eq{eta NLL} to be computed. In order to extract the LO transport properties in this effective kinetic theory it is sufficient to dress only the infrared sensitive scattering amplitudes for soft momenta.
One may then calculate transport properties for arbitrarily small $\alpha(T)$, which (with the above motivation) makes for a prudent compromise on which to base estimates at larger coupling. Regardless, it is the only feasible calculation scheme available at present. Using HTL perturbation theory has the virtue of gauge invariance but presumes that soft and hard scales are well separated, $\mu^2 \ll T^2$. This aspect will dominate the uncertainties when extrapolating the approach to larger coupling: our estimates in Sec.~\ref{sec 2A} indicate that the HTL approximation gives a factor of two uncertainty for the viscosity at larger coupling. We may therefore simplify the approach in other aspects. First, we may omit the inelastic processes [which lead to ${\cal O}(5\%)$ higher scattering rates, thus mildly {\em lowering} $\eta$]. We may also solve in Sec.~\ref{sec 2B} the linearised Boltzmann equation by a suitable single-function Ansatz (which already gives an accuracy of $\mathrel{\rlap{\lower0.2em\hbox{$\sim$}}\raise0.2em\hbox{$<$}} 1\%$ \cite{Heiselberg1994b}), instead of a full variational treatment. The validity of a kinetic approach {\em per se} at larger coupling will be discussed in Sec.~\ref{sec 3C}. \subsection{An effective model \label{sec 2A}} Given that the non-monotonic behavior of $\eta_{_{\mathrm{NLL}}}(\alpha)$ is related to a sharp cut-off in the $t$-integral \eq{eq1} for the derivation of the parametric form \eq{eta/s parametrically} of the viscosity, we are inspired to study \begin{eqnarray} \sigma^{\rm toy}_{\rm tr} (s, \mu^2)\ & = & \int_{-s}^0 d t\, \frac{|t|}{2s} \, \frac{\alpha^2}{(t-\mu^2)^2} \ =\ \frac{\alpha^2}{2s} \Big[\, \log \left( \frac{s}{\mu^2} + 1 \right) - \frac{s}{s+\mu^2} \,\Big] , \label{m1} \end{eqnarray} using a `self-energy' $\mu^2 \sim \alpha T^2$ instead of a sharp cut-off.
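The closed form in \eq{m1} is readily verified by direct numerical integration; the following sketch uses arbitrary illustrative values for $\alpha$, $s$ and $\mu^2$.

```python
import math

# Verify the closed form of Eq. (m1): integrate |t|/(2s) * alpha^2/(t - mu2)^2
# over t in [-s, 0] by the midpoint rule and compare with the stated result.
alpha, s, mu2 = 0.3, 1.0, 0.1   # illustrative values

n = 200_000
dt = s / n
numeric = sum(abs(t) / (2 * s) * alpha**2 / (t - mu2)**2 * dt
              for t in (-s + (i + 0.5) * dt for i in range(n)))
closed = alpha**2 / (2 * s) * (math.log(s / mu2 + 1) - s / (s + mu2))
print(numeric, closed)
```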
Earlier we tentatively assumed that the typical invariant energy for the transport cross section in a thermal medium is hard, $s \sim T^2$ (which seems to be a reasonable view in the given context). However, the gist of the somewhat technical framework outlined in Section~\ref{sec 2B} is that the viscosity is approximately a convolution of the screened transport cross section with a positive function which, with appropriate normalization, can be interpreted as a probability for momentum exchange between laminar layers,\footnote{ We will define $P(s)$ later in Eq.~\eq{Pdef}, and give some of its properties in Appendix \ref{A5}.} \begin{equation} \frac\eta{T^3} \simeq 1 \Big/ \int ds\, sP(s) \cdot \sigma_{\rm tr}(s) \, . \label{eta with P(s)} \end{equation} The simple scheme \eq{m1}, \eq{eta with P(s)} will allow us to discuss several aspects of extrapolations to larger coupling strength. By matching it for weak coupling to the perturbative QCD result \eq{eta NLL}, we can readily infer the normalisation $2\int ds P(s) = b^{-1}$. In order to match also the constant $c$ in \eq{eta NLL} we adjust\footnote{ Note that this differs from \cite{Jackson2017a}, where we simplified further and took $\sigma_{\rm tr}$ at the `mean' $\bar s$, adjusted to agree with the NLL result. Because $P(s)$ is monotonically decreasing, this is not fully justified and we are more careful here. However, the results hardly differ, thus the simple formula in \cite{Jackson2017a} might still be useful for quick estimates. } the regulator $\mu^2 = \kappa \cdot 4\pi\alpha T^2$: from \eq{moments of P(s)} we find $\kappa \simeq 0.38/c \simeq 0.62$, i.\,e.\ $\mu^2$ is somewhat smaller than the Debye mass-squared, $m_D^2 = 4\pi\alpha T^2$. This procedure not only takes into account the remaining elastic contributions in the binary QCD cross section ($t$ and $u$ channels contribute equally by crossing symmetry), but also inelastic processes (which give only a small correction, as noted before).
Figure \ref{fig: eta(alpha)} shows that our effective model agrees well with the LO QCD result \cite{Arnold2003f} (reviewed in the next section) even for large coupling well beyond the breakdown of the NLL result \eq{eta NLL}. \begin{figure}[!htb] \includegraphics[scale=0.9]{toy} \caption{ The QCD shear viscosity in the quenched limit as a function of fixed coupling strength, in NLL approximation \eq{eta NLL} and to LO \cite{Arnold2003f}. The latter is well reproduced by the model \eq{m1}, \eq{eta with P(s)}, which allows us to estimate the sensitivity to the screening of hard interactions, see text. For comparison, the hatched region demarcates the entropy between the LO result $s_{_{\rm LO}} = s_0 (1-\frac{15}{4\pi}\alpha)$ and the free limit. } \label{fig: eta(alpha)} \end{figure} Having specified the model, we can now estimate the uncertainties that must be faced in QCD when using HTL-dressed propagators\footnote{ These are justified for $|t| \mathrel{\rlap{\lower0.2em\hbox{$\sim$}}\raise0.2em\hbox{$<$}} T^2$ \cite{Peshier1998}, although often derived under the stricter assumption that external energy and 3-momentum are smaller than $T$. }, by studying the sensitivity of the viscosity \eq{eta with P(s)} to the `self-energy' at {\em harder} momentum transfers. Varying $\mu^2$ by factors $\nu = 2^{\pm 1/2}$ for $|t|>T^2$ has only a mild effect: even for $\alpha \sim 1$ the viscosity changes by less than a factor of $\nu^{-1}$, cf.~Fig.~\ref{fig: eta(alpha)}.
Alternatively, we could also segregate momentum transfers at $t^\star \in [-s,0]$, and set $\mu^2 \to 0$ for $|t|>|t^\star|$, i.\,e.\ use the Born cross section for harder interactions, which is basically\footnote{ Usually a non-covariant separation in 3-momentum is made, but we find it more convenient to separate in invariant momentum transfer.} the Braaten-Yuan method \cite{Braaten1991d}, \begin{equation} \int_{-s}^{t^\star} d t \frac{|t|}{2s} \frac{\alpha^2}{t^2} \ +\ \sigma_{\rm tr}^{\rm toy}\big(-t^\star,\mu^2\big) = \frac{\alpha^2}{2s} \Big[ \, \log \Big(\, \frac{s}{\mu^2} - \frac{s}{t^\star}\, \Big) - \frac{t^\star}{t^\star - \mu^2} \, \Big] \, . \label{mu2 with t^star} \end{equation} The resulting viscosity is $t^\star$-independent only at NLL order. The sub-leading terms lead to a $t^\star$-dependence which becomes more pronounced for increasing $\alpha$. Varying $|t^\star|$ in the range $[\frac12,2]T^2$, we observe a sensitivity similar to that on $\nu$ above, leading us to estimate that the HTL approximation does not introduce more than a factor of two uncertainty for the viscosity calculation. \bigskip While the full (unexpanded) RHS of \eq{m1} is obviously a manifestly positive, monotonically decreasing function of $\mu^2/s \sim \alpha$, its NLL approximation, $\sigma_{\rm tr}^{_{\mathrm{NLL}}} / \alpha^2 = (2s)^{-1} [ \log(s/\mu^2) - 1]$, becomes negative for $s/\mu^2 < e$, leading to the same unphysical behavior as seen in \eq{eq1} and \eq{eta NLL}. This problem is not cured by higher order terms since the expansion has a radius of convergence $\mu^2/s =1$, stemming from the pole of ${\cal M} \sim \alpha/(t-\mu^2)$ at $t=\mu^2$ (off the physical sheet) in the integrand of \eq{m1}. The factor of $|t|$ gives rise to a branch cut on the negative real axis $s<-\mu^2$, rather than a pole at $s=-\mu^2$ in $\sigma = \int d t\, d \sigma / d t$, and expansions only converge in the disk of radius $\mu^2$, illustrated in Fig.~\ref{fig: wtr(s)}.
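The sign pathology of the truncation, as opposed to the full expression, can be made explicit in a few lines (prefactor convention as in \eq{m1}; numbers are illustrative):

```python
import math

# Contrast the full toy transport cross section of Eq. (m1) with its NLL
# truncation: the truncation turns negative for s/mu2 < e, while the full
# expression is positive for all s > 0. They agree for s >> mu2.
alpha, mu2 = 0.3, 1.0
full = lambda s: alpha**2 / (2 * s) * (math.log(s / mu2 + 1) - s / (s + mu2))
nll  = lambda s: alpha**2 / (2 * s) * (math.log(s / mu2) - 1)

s_soft, s_hard = 2.0 * mu2, 1e6 * mu2   # below/above the threshold s = e*mu2
print(full(s_soft) > 0, nll(s_soft) < 0)   # truncation already negative
print(abs(full(s_hard) / nll(s_hard) - 1)) # tiny: agreement at hard s
```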
Therefore the powers of $\mu^2/s$, formally beyond LO, should be considered carefully when extrapolating to larger coupling. \begin{figure}[htb] \includegraphics[scale=0.9]{wtr} \caption{ Panel (a) shows how the transport cross section in \eq{m1} depends on $s/\mu^2$. The NLL approximation, $\sigma_{\rm tr}^{\rm NLL} \sim \log (s/\mu^2)$ is accurate for $s \gg \mu^2$, guaranteed for weak coupling $\alpha \to 0$. However, an expansion in $\mu^2/s \sim \alpha$ extrapolates poorly (family of gray curves) for `soft' $s$, where $\sigma_{\rm tr} \approx \alpha^2 s/(4\mu^4)$. In the complex $s$-plane, $\sigma_{\rm tr}$ has a branch cut on the negative real axis for $s<-\mu^2$. Thus, shown in (b), the radius of convergence is $\mu^2$. } \label{fig: wtr(s)} \end{figure} To substantiate this statement, and deferring proof to Appendix \ref{A5}, we point out that $P(s)$ is non-zero for $s \to 0$. Thus, expanding $\sigma_{\rm tr}$ in $\mu^2/s \sim \alpha T^2/s$ {\em before} convoluting it with $P(s)$ in Eq.~\eq{eta with P(s)} yields an ill-defined series in powers of $\alpha$: its coefficients [the negative moments of $P(s)$] are IR-divergent, with increasing severity. Interchanging expansion and integration is formally not permissible given the properties of the particle's local equilibrium distribution encoded in $P(s)$. This issue is generic; the non-monotonic behavior of $\eta_{_{\mathrm{NLL}}}(\alpha)$ is a mathematical {\em artefact} rather than a breakdown of perturbative QCD {\em per se}. It also means that these higher order by-products must be kept. \subsection{Linearised Boltzmann equation \label{sec 2B}} Having elaborated on the quality of various approximations and their respective drawbacks, we turn to the rigorous kinetic theory for QCD. What was established in Refs.~\cite{Baym1990,Heiselberg1994b,Arnold2000d,Arnold2003f} can guide us to \eq{eta with P(s)}, which we explain here for the quenched limit ($n_{\!f} =0$).
We characterise the system by $f(\bm p, \bm x, t)$, the number of partons in the phase-space element $d^3 \bm p d^3 \bm x /(2\pi)^3$. The evolution of $f$ can be described by the Boltzmann equation, \begin{equation} {\cal D} f = {\cal C} \left[ f \right] \, , \label{C[f]} \end{equation} where ${\cal D} = \partial_t + \bm v \cdot \bm \nabla$ is the convective derivative for relativistic particles. For binary scatterings (see Fig.~\ref{1234}) the collisional operator reads \begin{eqnarray} {\cal C} \left[ f_1 \right] ( p_1 ) &=& \frac{1}{2E_1} \int d \Gamma \label{C} \left| {\cal M} \right|^2 \big\{\ \bar{f}_1 \bar{f_2}f_3f_4 - f_1 f_2\bar{f_3}\bar{f_4}\ \big\} \, ,\nonumber \end{eqnarray} using the subscript on the distribution function to refer to its momentum argument, $f_i = f(\bm p_i)$. Quantum effects are included by the bracketed terms in \eq{C}; we use the notation $\bar{f} = 1\pm f$, with the upper sign for bosons and the lower for fermions. Two-body phase space is abbreviated \begin{eqnarray*} \int d \Gamma \ &\equiv& \ d_2 \int_{234} \frac{ (2\pi)^4 }{8 E_2 E_3 E_4} \delta^{(4)} \left( P_1 + P_2 - P_3 - P_4 \right) \, , \end{eqnarray*} using the shorthand $\int_i = \int d^3 p_i/(2\pi)^3$; with $d_g = 16$ and $d_q = 12 n_{\!f}$ being the gluon and quark degeneracies for $n_{\!f}$ light flavours respectively. Note that $\int d \Gamma$ is a Lorentz invariant measure with $P = (E, \bm p)$ being the usual 4-momentum. In \eq{C}, the matrix element $\vert {\cal M} \vert^2$ is averaged over spin and colour of incoming particles $\{1,2\}$, but summed over final states $\{3,4\}$ (taking into account double counting). In vacuum at Born level, ${\cal M}$ depends only on the Mandelstam invariants [we use the ${\rm Diag}(1,-\bm 1)$ convention for the metric] \begin{eqnarray*} s = (P_1 + P_2)^2 \, , \quad t = (P_1 - P_3)^2 \, , \quad u = (P_1 - P_4)^2 \, .
\end{eqnarray*} A heat bath in equilibrium (which is static and uniform in $\bm x$) has a collective flow and is characterised\footnote{ We consider only zero chemical potential.} by a temperature $T$ and velocity $\bm u$. The J\"{u}ttner distribution function \begin{eqnarray} f_{\rm eq} \left( \bm p \, ; T, \bm u \right) &=& \left( e^{\displaystyle\gamma (p - \bm p \bm u )/T } \mp 1 \right)^{-1} \, , \label{global} \end{eqnarray} where $\gamma = \left( 1- u^2 \right)^{-1/2}$, solves \eq{C[f]} trivially since ${\cal C} \left[ f_{\rm eq} \right] = 0$. Consider now small deviations from the {\em global equilibrium} in \eq{global}, by allowing for $\bm u$ to weakly depend on $\bm x$, and consider as an Ansatz for solving \eq{C[f]} the function $f(\bm x, \bm p) = f_{\rm eq} \bm{(} \bm p\, ; \bm u(\bm x) \bm{)}$, the so-called `local' equilibrium [to be distinguished from the general off-equilibrium solution, $f_\star$]. Small gradients in the collective flow $\bm u$ allow us to assume also that $|\bm u| \ll 1$, by boosting into the local rest frame. This implicit dependence of $\bm u$ on $\bm x$ means that $f$ no longer satisfies the Boltzmann equation \eq{C[f]}, since\footnote{ Suppose a steady solution, i.\,e.\ $\partial_t \bm u \to 0$, implying $\nabla T = 0$. However, a nonzero $\partial_t T$ is connected to any divergence in $\bm u$, and is necessary for the second term in \eq{S}. } \begin{eqnarray} {\cal D} f_{\rm eq} \bm{\big(} \bm p \, ; \bm u (\bm x) \bm{\big)} &=& S\, \Big( \hat{p}_i \hat{p}_j - \frac13 \delta_{ij} \Big) \cdot \nabla_i u_j \, , \quad {\rm with} \quad S(p)\ \equiv\ f \bar{f} \frac{p}{T} \, , \label{S} \end{eqnarray} while the collision operator in the Boltzmann equation \eq{C[f]} still vanishes.
Thus, the actual solution to \eq{C[f]} should somewhat depart from local equilibrium, \begin{eqnarray*} f_\star ( \bm x, \bm p ) &=& f \big( \bm x , \bm p \big) + \delta f (\bm p )\, , \end{eqnarray*} with $\delta f$ being proportional to the generic velocity gradient $\nabla_i u_j$, in order for the collision operator ${\cal C} [f_\star]$ to compensate \eq{S}. Let us parametrise $\delta f$ in the form, similar to \cite{Baym1990}, \begin{eqnarray*} \delta f &=& f \bar{f}\ \frac{ \chi^{ij} (\bm p) }{T} \cdot \nabla_i u_j\, , \end{eqnarray*} where the rank-2 traceless tensor $\chi^{ij}$ is \begin{eqnarray} \chi^{ij} &=& \chi (p) \Big( \hat{p}^i \hat{p}^j - \frac13 \delta^{ij} \Big) \, , \label{chiij} \end{eqnarray} in terms of a scalar function $\chi (p)$. The tracelessness of $\chi^{ij}$ will imply zero bulk viscosity, isolating only the shear modes. The deviation $\delta f$ from local equilibrium (alternatively $\chi$), leads to a modification of the stress tensor from the non-interacting limit, \begin{eqnarray*} {\cal T}_{ij}^\star &=& d_g \frac{\pi^2}{90}T^4 \delta_{ij} + {\cal T}_{ij} [\delta f ] \, . \end{eqnarray*} From \eq{chiij}, the first order correction (in laminar gradients) takes the conventional form \begin{eqnarray} d_g \int_p \frac{p_i p_j}{p} \delta f (\bm p ) &=& - \eta \cdot \Big( \, \nabla_i u_j + \nabla_j u_i - \frac23 \delta_{ij} \bm \nabla \bm u \, \Big) \, , \label{Tij} \end{eqnarray} where the shear viscosity is now derivable from $\chi$: \begin{eqnarray} \eta &=& \frac{1}{15} d_g \int_p \chi S \, . \label{eta=} \end{eqnarray} But before formula \eq{eta=} can be of any use, the unknown function $\chi$ needs to be determined -- under the simplifying assumption of small gradients in velocity. The left-hand side of the Boltzmann equation \eq{C[f]}, which controls the effective particle rate, is (to first order in the gradients) ${\cal D}(f + \delta f) \simeq {\cal D} f$, i.\,e.\ the convective derivative is approximated by \eq{S}.
On the other hand, the Ansatz \eq{chiij} gives \begin{eqnarray} {\cal C} \left[ f_1 \right] &\simeq& \frac{ \nabla_i u_j}{2TE_1} \int\! d \Gamma \vert {\cal M} \vert^2 f_1 f_2 \bar{f_3} \bar{f_4} \cdot \Delta^{ij} \left[ \chi_1 \right]\, , \label{RHS} \end{eqnarray} where (having kept terms proportional to the gradient) \begin{eqnarray*} \Delta^{ij} \left[ \chi_1 \right] \ &\equiv&\ \big\{\ \chi_1^{ij} + \chi_2^{ij} - \chi_3^{ij} - \chi_4^{ij}\ \big\} \, . \end{eqnarray*} Since $\nabla_i u_j$ was arbitrary, we may use the Boltzmann equation to match \eq{S} with \eq{RHS} and hence replace $\nabla_i u_j$ by $\hat{p}_i \hat{p}_j$ to find \begin{eqnarray} S(p) &=& {\cal C}_L \left[ \chi \right] (p) \, . \label{linBE} \end{eqnarray} In this relation, the {\em linearised} collisional operator is defined by \begin{eqnarray*} {\cal C}_L \left[ \chi_1 \right] &\equiv& \frac{3\hat{p}_1^i \hat{p}_1^j}{4E_1} \int\! d \Gamma \vert {\cal M} \vert^2 f_1 f_2 \bar{f_3} \bar{f_4} \cdot \Delta^{ij} \left[ \chi_1 \right]\, , \end{eqnarray*} where $\chi_i = \chi(p_i)$ is an arbitrary scalar function [we reserve $\chi^\star$ for the actual solution of \eq{linBE}]. Note that $\Delta^{ij}[\chi]$ is traceless, a property inherited directly from \eq{chiij}, and therefore \begin{eqnarray} \int_p \chi {\cal C}_L \left[ \chi \right] &=& \frac38 \int_1 \frac{f_1}{2E_1} \int\! d \Gamma \vert {\cal M} \vert^2 f_2 \bar{f_3} \bar{f_4} \cdot \Big( \Delta^{ij} \left[ \chi_1 \right] \Big)^2 \, , \label{symC} \end{eqnarray} where symmetry of the integrand was used to complete the square for $\Delta^{ij}$. Therefore ${\cal C}_L$ is a positive semidefinite operator, over the Hilbert space of $\chi$-functions, which vanishes only for collisionally conserved quantities. Eq.~\eq{linBE} then determines $\delta f$ [via $\chi(p)$], enabling \eq{eta=} to be used formally with the {\em exact} solution: $\chi^\star = {\cal C}_L^{-1} \left[ S \right]$.
One approach to solving this symbolic equation is to represent $\chi$ by a linear combination over some complete set of functions. Since ${\cal C}_L$ is linear, this would produce an (algebraic) matrix equation which may be solved for the coefficients. Such a strategy, using a truncated basis, gives a {\em lower} estimate for $\eta$, as we now explain. \bigskip Consider the quadratic functional, following \cite{Arnold2000d}, \begin{eqnarray} {\cal Q} \left[ \chi \right] &=& d_g \int_p \Big( S \chi - \frac12 \chi {\cal C}_L \left[ \chi \right] \Big) \, . \label{Q} \end{eqnarray} At $\chi^\star$, the solution to \eq{linBE}, ${\cal Q}$ takes the value $\eta$ (up to a prefactor). The linearised Boltzmann equation is tantamount to \begin{eqnarray*} \frac{\delta {\cal Q}}{\delta \chi} &=& 0 \, , \end{eqnarray*} and is solved by $\chi^\star$, where ${\cal Q}$ is stationary. Because ${\cal C}_L$ is positive definite, see \eq{symC}, the extremum of \eq{Q} at $\chi = \chi^\star$ is in fact a {\em maximum}. Its value here, on account of Eq.~\eq{eta=}, gives the viscosity: \begin{eqnarray} \eta &=& \frac{2}{15} {\rm Max} \big( {\cal Q} \big) \, , \label{etaVAR} \end{eqnarray} which sidesteps the task of actually inverting ${\cal C}_L$. For example, the optimal norm of any test function may be derived from $\partial_A {\cal Q} \left[ A \chi \right]=0$, which in a sense `homogenises' \eq{Q}. We may then substitute ${\cal Q}$ with the following\footnote{ This functional may also be obtained by a Cauchy-Schwarz inequality.} (improved) estimate, \begin{eqnarray} {\cal Q}[\chi] &=& \frac{\displaystyle \Big( d_g \int_p S\chi \Big)^2}{\displaystyle 2 d_g \int_p \chi {\cal C}_L \left[ \chi \right]} \, . \label{baym} \end{eqnarray} If \eq{baym} is used for ${\cal Q}$ instead of Eq.~\eq{Q}, the absolute scale of $\chi$ is arbitrary (evidently it cancels out in the fraction).
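The norm optimisation that turns \eq{Q} into \eq{baym} can be mimicked with two placeholder numbers standing in for the functionals $d_g\int_p S\chi$ and $d_g\int_p \chi\,{\cal C}_L[\chi]$:

```python
# Sketch of the norm optimisation behind Eq. (baym): for a rescaled trial
# function A*chi, Q(A) = a*A - (b/2)*A^2 with a = d_g Int S chi and
# b = d_g Int chi C_L[chi] (placeholder values below). Maximising over A
# gives A = a/b and Q_max = a^2/(2b), i.e. the ratio form of Eq. (baym).
a, b = 3.7, 2.1                    # stand-ins for the two functionals
Q = lambda A: a * A - 0.5 * b * A**2
A_opt = a / b
Q_max = Q(A_opt)
print(Q_max, a**2 / (2 * b))       # identical
print(all(Q(A_opt) >= Q(A_opt + e) for e in (-1.0, -0.1, 0.1, 1.0)))
```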
This functional, along with the known QCD matrix element for gluon-gluon scattering (with IR terms aptly screened), enables $\eta$ to be calculated accurately from a suitably adjusted test function $\chi$. Appending the quark sector is a technicality that allows for `mixing' between boson and fermion distributions (e.g.\ due to fusion reactions like $gg\leftrightarrow q\bar{q}$). To evaluate $\eta$ for a general $n_{\!f}>0$, we refer the reader to the cross section listing in Table III of Ref.~\cite{Arnold2003f}. Numerical determination of $\int_p \chi {\cal C}_L [\chi]$ involves the phase space integration over all collisional momenta subject to energy and momentum conservation. See Appendix \ref{A2} for our covariant manipulation of the resulting 5-fold integral (which in general cannot be reduced further). \subsection*{Connecting to Eq.~\eq{eta with P(s)}} In order to validate our effective $t$-channel model in Sec.~\ref{sec 2A}, we show how Eq.~\eq{baym} leads naturally to \eq{eta with P(s)}. The dominant small-$t$ contribution to \eq{symC}, which we organise\footnote{ The order of integration is modified from \eq{g's}, so that $s$ now imposes a restriction on the energy integrations in $P(s)$. } in Appendix \ref{A2}, is \begin{eqnarray*} \int_p \chi {\cal C}_L \left[ \chi \right] &=& N \int_0^\infty d s \, s P(s) \int_{-s}^0 d t \frac{|t|}{2s} \frac{d \sigma}{d t} + \ldots \, , \end{eqnarray*} through a positive weight $P(s)$ that depends on how the system departs from equilibrium. ($N=3/8$ is just a numerical factor.) This formula assumes that $d \sigma / d t$ depends only on $s$ and $t$, and only accounts for the dominant contribution from small-angle binary scatterings. We note that $P$ specifies the `typical' momentum $\bar p$ in \eq{eta estimate} more rigorously.
Presuming small-$t$ dominance allows a further integration to be performed, leaving us with $P$ in terms of an integral over the incoming energies: \begin{equation} P [\chi] (s) \ :=\ \int_{12} \frac{f_1\bar f_1 f_2 \bar f_2}{4E_1E_2} \Big( {\cal B} + \frac{s}{4E_1E_2} \big[ \, 8 {\cal A} - {\cal B}\, \big] \Big) \, , \label{Pdef} \end{equation} where $\int_i = \int d E_i \, E_i^2 /(2\pi^2)$, with the constraint $4E_1E_2>s$, and \begin{eqnarray} {\cal A} &=& \left(\hat{\chi}_1 - \hat{\chi}_2 \right)^2 + \frac{s}{E_1E_2} \hat{\chi}_1 \hat \chi_2 \, , \nonumber \\ {\cal B} &=& \frac43 \big[\, \chi_1^\prime - \chi_2^\prime \,\big]^2 + 4 s \hat{\chi}^\prime_1 \hat{\chi}^\prime_2 - \frac{s^2}{E_1E_2} \Big[\, \hat{\chi}_1^\prime - \frac{\hat \chi_1}{E_1} \, \Big] \Big[ \, \hat{\chi}_2^\prime - \frac{\hat \chi_2}{E_2} \, \Big] \, , \nonumber \end{eqnarray} where $\hat{\chi}_i = \chi_i / E_i$ and $\hat{\chi}^\prime_i = (\partial \hat{\chi}_i/\partial E_i)$\, . Crucial to our argument that the log expansion diverges is that the value of $P(s=0)$ is strictly positive. This follows from ${\cal B}>0$ for $s \to 0$ in the integrand of Eq.~\eq{Pdef}, without even knowing $\chi$. That still leaves open the possibility that $P(0)$ is finite {\em or} $P(s)$ has an (integrable) divergence at $s=0$. Either is enough assurance that all the negative moments diverge -- in fact, even a polynomial vanishing of $P(s)$ for small $s$ would tame only finitely many such terms. To validate \eq{eta with P(s)} against $\frac{2}{15}{\rm Max}[{\cal Q}]$, we may just assign \begin{eqnarray} 1 &=& N\frac{d_g}{15} \left( \int_p \chi S \right)^2 \, . \label{B} \end{eqnarray} This merely reinterprets the existing NLL result, where we could then adapt the effective cross section to resemble QCD. In Appendix \ref{A5} we show how normalisation of $P(s)$ and \eq{B} are sufficient to determine $\chi$ from a differential equation.
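The divergence of the negative moments is easy to illustrate with any placeholder weight that stays finite at $s=0$; here we take $P(s)=e^{-s}$ purely for illustration:

```python
import math

# If P(0) > 0, the negative moments Int ds P(s)/s^n are infrared divergent.
# Illustrated with a placeholder P(s) = exp(-s): lowering the IR cutoff eps
# makes the first negative moment grow without bound (logarithmically).
P = lambda s: math.exp(-s)

def neg_moment(eps, n=1, s_max=50.0, steps=100_000):
    ds = (s_max - eps) / steps
    return sum(P(s) / s**n * ds
               for s in (eps + (i + 0.5) * ds for i in range(steps)))

m = [neg_moment(eps) for eps in (1e-2, 1e-4, 1e-6)]
print(m, m[0] < m[1] < m[2])   # no convergence as eps -> 0
```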
However, it turns out that $\chi(p)=p^2$ gives the stationary point in ${\cal Q}[\chi]$ quite accurately \cite{Baym1990}. [We discuss why, and the resulting $P(s)$, in Appendix \ref{A5}.] \section{Scale(s) for screening} \label{sec:renorm} Evidently it may be possible to explain $\eta/s \mathrel{\rlap{\lower0.2em\hbox{$\sim$}}\raise0.2em\hbox{$<$}} 0.5$ on the basis of a leading order treatment (see Fig.~\ref{fig: eta(alpha)}), but to do so requires a proper understanding of how `the' value of the coupling is specified by the temperature. To answer this question, we need to bring up a central feature of QCD. That will resolve the `loose end' of having to {\em impose} $Q_T\sim T$ as the relevant scale -- something already disputed in Sec.~\ref{sec:intro}\,. The toy cross section considered in the previous section offers some guidance for gauge theories, where $\mu^2$ is supplanted, e.\,g.\ by the gluon self-energy $\Pi (Q)$ [a non-trivial function of both components of the four-momentum $Q=(\omega, \bm q)$\,]. The tree level cross section is [averaged over initial, summed over final and multiplied by $\frac12$ to remove double counting: see \eq{C}] \begin{equation} \frac{d \sigma}{d t}^{\rm tree} = \alpha^2 \frac{9\pi}{4s^2} \Big[ \, - \frac{su}{t^2} - \frac{ts}{u^2} - \frac{tu}{s^2} + 3 \, \Big] \, . \label{glue-glue} \end{equation} The first two terms in square brackets, of the type we focused on in Sec.~\ref{sec:sigTR}, contribute equally to $\eta$ due to $t \leftrightarrow u$ crossing. What remains, i.\,e.\ $-tu/s^2 + 3$, contributes only at NLL order because it is overwhelmed by the first two terms at small $|t|$. A dressed gluon propagator $D=(D_0^{-1}-\Pi)^{-1}$ must be used for these infrared sensitive terms, amounting to the replacement \begin{equation} -su/t^2 \to \big| D_{\mu\nu}(Q) Y^{\mu\nu} \big|^2 + \tfrac14 \, ; \quad Q=P_1-P_3 \, , \label{IR replacement} \end{equation} where $Y^{\mu\nu} = (P_1 - \frac12 Q)^\mu (P_2 + \frac12 Q)^\nu$.
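Two features of the bracket in \eq{glue-glue} are easily confirmed numerically: its symmetry under $t \leftrightarrow u$ crossing, and the dominance of the $-su/t^2$ term at small $|t|$ (massless kinematics, $s+t+u=0$; numbers below are illustrative).

```python
# Sketch: the bracket of Eq. (glue-glue) for massless gg kinematics
# (s + t + u = 0) is symmetric under t <-> u, and the -su/t^2 term
# dominates it at small |t|.
bracket = lambda s, t, u: -s * u / t**2 - t * s / u**2 - t * u / s**2 + 3.0

s = 1.0
sym_gap = abs(bracket(s, -0.3, -0.7) - bracket(s, -0.7, -0.3))
ratios = []
for t in (-1e-2, -1e-4):
    u = -s - t
    ratios.append((-s * u / t**2) / bracket(s, t, u))
print(sym_gap, ratios)   # gap vanishes; ratios approach 1 as |t| -> 0
```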
The polarisation tensor $\Pi$ has an isolated finite-$T$ contribution (denoted here by a tilde), \begin{eqnarray} \Pi_{\mu\nu} (Q) &=& \Pi_{\mu\nu} \Big|_{T=0} + \widetilde{\Pi}_{\mu\nu} \, . \label{Pi} \end{eqnarray} Running of the coupling $\alpha (\cdot)$ emerges from vacuum fluctuations in a process like \eq{glue-glue}, after the bare parameters in the Lagrangian are expressed by renormalised ones. Generically, many types of corrections are relevant: vertex dressing, ghost contributions, etc. \cite{ItzyksonZuber}. Only $\Pi$ is needed for coupling constant renormalisation in Coulomb gauge, due to its Abelian-like Ward identities \cite{Grozin2008}. In this gauge, the (resummed) gluon propagator separates into longitudinal ($L$) and transverse ($T$) components \begin{eqnarray*} D_{00} \ = \ \frac{-Q^2}{\bm q^2 } \Delta_L \ , \qquad D_{ij} \ = \ -\Big(\, \delta_{ij} - \hat{q}_i \hat{q}_j \, \Big) \Delta_T \, , \qquad D_{0i} \ = \ D_{i0} \ = \ 0 \, , \end{eqnarray*} with $\Delta_{L,T} (Q) = \big(Q^2 - \Pi_{L,T}(Q) \big)^{-1}$ being ordinary scalar propagators\footnote{ The scalar functions $\Pi_{L,T}$ are related to \eq{Pi} by $$ \Pi_L = \frac{-Q^2}{\bm q^2} \Pi_{00} \ , \qquad \Pi_T = \frac12 \Big( \hat{q}_i \hat{q}_j - \delta_{ij} \Big) \Pi_{ij} \, . $$ }. $\Delta_T$ and $\Delta_L$ coincide for $T=0$, but are different at non-zero temperature. Before returning to the issue of specifying $\alpha$, let us discuss the thermal self-energy for QCD in the quenched limit and see how Eq.~\eq{mu2 with t^star} translates to the full LO calculation. \subsection{Thermal Screening\label{sec 3A}} Coulomb gauge is customary at $T>0$ for another reason, namely the manifest breaking of Lorentz invariance by the presence of a heat bath, see Ref.~\cite{Weldon1982aq}. Self-energies in a thermal medium acquire an additional finite contribution $\widetilde{\Pi}_{\mu\nu}$, scaling with $T^2$ (as opposed to $Q^2=\omega^2-\bm q^2$).
But rather than being `constant' in $Q$, as $\mu^2$ was, $\widetilde{\Pi}_{\mu\nu}$ also depends separately on $\omega$ and $\bm q$, with an analytic expression valid near the light-cone, $|\omega^2-\bm q^2| \mathrel{\rlap{\lower0.2em\hbox{$\sim$}}\raise0.2em\hbox{$<$}} T^2$ (i.\,e.\ the HTL limit). Parametrically, the scalars $\widetilde{\Pi}_{L,T}$ are proportional to $\alpha$ and may be written $\widetilde\Pi = m_D^2 \phi (\omega, \bm q)$ where $m_D^2 = (1+\frac16 n_{\!f})4\pi \alpha T^2$ is the Debye mass. (Equivalently, the function $\phi$ could be written in terms of the virtuality $Q^2$ and $z=\omega/|\bm q|$. Then $\mu^2$ is interpreted as an `average' over the argument $z$ to represent the effect of Landau damping.) Details of $\phi_{L,T}$ are immaterial for the present discussion, save that they are {\em finite} and therefore do not affect renormalisation. In a fixed-$\alpha$ result, we may simply drop the vacuum contribution to $\Pi$, and that is what has been done previously. It is sufficient to use HTL propagators $D^\star$ in \eq{IR replacement} for LO accuracy because their domain of validity coincides with what is required. HTL screening is justified for $|Q^2| \mathrel{\rlap{\lower0.2em\hbox{$\sim$}}\raise0.2em\hbox{$<$}} T^2$ and gives $ \phi = \phi^\star(z)$ (details in Appendix \ref{A4}). Subsequently, $\eta$ will be sensitive to a class of higher order corrections which arise because $\phi$ is not known for harder momenta. To estimate the relevance of these subleading terms, let us again adapt the Braaten-Yuan \cite{Braaten1991d} approach by omitting screening for $|Q^2| > |t^\star|$, i.\,e.\ $$ \phi (Q^2,z) = \Theta ( Q^2 -t^\star) \cdot \phi^\star(z) \, . $$ The $t^\star$-dependence cancels under weak-coupling assumptions (cf. Appendix \ref{A4}). We thus expect our conclusions about the expansion to carry through {\em mutatis mutandis} for QCD.
\bigskip Therefore we use the functional ${\cal Q}$ \eq{baym}, and probe the residual sensitivity to the covariant cut-off $t^\star$ by varying $|t^\star| \in [\frac12, 2]T^2$. This is illustrated in Fig.~\ref{fig: eta HTL} (the findings are similar to Sec.~\ref{sec 2A}); for $\alpha = 0.4$ there is a factor $2$ uncertainty from $t^\star$. Screening is simply omitted for the IR-safe terms because their contribution is comparatively small, as we shall discuss later. \begin{figure}[hbt] \includegraphics[scale=0.9]{eta-HTL} \caption{ The viscosity for quenched QCD to LO accuracy, cf. Fig.~\ref{fig: eta(alpha)}. The curves here are all equally valid at LO, and calculated using HTL self-energies. Multiplying the self-energies by $\Theta (t-t^\star)$ and varying $t^\star$ near the canonical value $-T^2$ (as used in Ref.~\cite{Arnold2003f}, giving the `LO' points in Fig.~\ref{fig: eta(alpha)}) produces the pink band. We also show the constraint for the entropy, $4T^3 \leq s \leq s_0$ for $T>1.2T_c$, to confirm that $\eta_{_{\mathrm{LO}}}$ could indeed explain how $\eta/s \mathrel{\rlap{\lower0.2em\hbox{$\sim$}}\raise0.2em\hbox{$<$}} 0.5$. } \label{fig: eta HTL} \end{figure} Figure \ref{fig: eta HTL} also shows that, despite the aforementioned uncertainties, $\eta_{_{\mathrm{LO}}}$ could be compatible with $\eta/s \mathrel{\rlap{\lower0.2em\hbox{$\sim$}}\raise0.2em\hbox{$<$}} 0.5$. The rigorous bound $s > 4T^3$ on the interacting entropy for $T>1.2 T_c$ is known from lattice calculations \cite{Giusti2017}. Thus the resummed LO viscosity may provide a prudent basis for extrapolation: rather than grossly overestimating $\eta/s$, it would actually give $\eta/s < 1/(4\pi)$ for large enough $\alpha$. That brings us back to the central issue of coupling renormalisation at finite-$T$, i.\,e.\ what is $\alpha$? 
\subsection{Running coupling \label{sec 3B}} For QCD there is no preferred value for $\alpha$ (such as $\alpha_{\rm em}=\frac{1}{137}$ in QED for all practical purposes). Loop corrections include both vacuum effects and (finite) thermal fluctuations, of which the latter survive the classical `$\hbar \to 0$' limit. To then self-consistently define the running coupling, even at the LL level, one must retain the first term in \eq{Pi}. In Coulomb gauge, this is easily demonstrated because of the simple Ward identities already mentioned. Typical loop integrals in quantum field theory give infinities which require renormalisation: connecting renormalised parameters with observables at the renormalisation scale. A textbook result \cite{ItzyksonZuber}, handled through dimensional regularisation with the auxiliary scale\footnote{ In the familiar $\overline{\rm MS}$ scheme, to absorb the universal constants.} $L$, isolates the divergent term \begin{equation} \Pi_{\mu\nu} \Big|_{T=0} = \alpha \beta_0 \big( \, Q^2 g_{\mu\nu} - Q_\mu Q_\nu \, \big) \Big[\ \frac{1}{\epsilon} + \log \frac{-Q^2}{L^2} \ \Big] \ ; \quad \epsilon \to 0 \, . \label{Pi0} \end{equation} Here $\beta_0 = (33 - 2 n_{\!f})/(12 \pi)$ for QCD, where a positive value $\beta_0>0$ signals {\em antiscreening} of colour charges. In vacuum, the transverse and longitudinal projections coincide and are equal to $\Pi^{\rm vac} (Q) \equiv\, \alpha \beta_0 [\epsilon^{-1}+\log(-Q^2/L^2)]Q^2 $\, . This brings us to the crux of our argument, which parallels the standard scenario at $T=0$ : Since the scale $L$ was auxiliary, we use it as the scale with which we fix parameters in the theory. Consider an experiment at the scale $\hat t=-L^2$, thus producing a finite value for $(\alpha^{-1} - \epsilon^{-1})$ which {\em defines} an effective coupling $\hat \alpha^{-1}$. (I.e. the renormalised coupling at the scale $\hat t$.) 
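Explicitly (this is the standard one-loop step), comparing the prediction at a scale $Q^2$ with the measurement at $\hat t$ removes all reference to $\epsilon$ and the auxiliary scale $L$, since the combination $(\alpha^{-1}-\epsilon^{-1})$ differs between the two scales only by the logarithm in \eq{Pi0}: \begin{eqnarray*} \frac{1}{\alpha(Q^2)} \ = \ \frac{1}{\hat\alpha} + \beta_0 \log \frac{Q^2}{\hat t\,} \ \equiv \ \beta_0 \log \frac{-Q^2}{\Lambda^2} \ , \qquad \Lambda^2 \ = \ -\,\hat t\ e^{-1/(\beta_0 \hat\alpha)} \, , \end{eqnarray*} so that the pair $(\hat t, \hat\alpha)$ is traded for the single dimensionful parameter $\Lambda$. 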
This is sufficient to make predictions at $Q^2\neq \hat t$ and leads directly to the 1-loop {\em running coupling} formula, \begin{eqnarray} \alpha (Q^2) &=& \big[ \beta_0 \log (-Q^2 / \Lambda^2 ) \big]^{-1} \, , \label{running coupling} \end{eqnarray} where the scale $\Lambda$ is determined by $L$ and the accompanying $\hat \alpha$\,. Dressing an internal propagator $D_{\mu\nu}$ leads to the {\em renormalised} amplitudes [for the replacement rule \eq{IR replacement}] \begin{eqnarray} \alpha \Delta (Q) &\to& \frac{\alpha}{Q^2 - \big(\, \Pi^{\rm vac}+ \widetilde\Pi\, \big) } \ = \ \Big[\ \frac{Q^2}{\alpha(Q^2)} - 4\pi T^2 \, \phi (\omega, \bm q)\ \Big]^{-1} \, . \label{M thermal} \end{eqnarray} For large enough $-Q^2$, where by asymptotic freedom the screening term in the denominator is insignificant, we recover the familiar result, e.\,g.\ ${\cal M}^{\rm vac} \sim \alpha(t)/t$ for a $t$-channel process like Fig.~\ref{1234}. In this sense, the relevant scale in $\alpha(\cdot)$ is dictated by the process \cite{Cutler1978}. For $T>0$, the running coupling also applies to thermal screening $\widetilde\Pi(Q) \sim \alpha(Q^2)T^2$, see Fig.~\ref{fig: Pi}. Screening is then a genuine `perturbation' for $|t|^{1/2} \gg \Lambda$ because $\alpha$ is small, although (as mentioned) HTL propagators are no longer valid. At soft momenta $|Q^2| \simeq \Lambda^2$, on the other hand, screening is crucial: the ratio $Q^2/\alpha(Q^2)$ is then small\footnote{ This, despite the Landau pole at $Q^2 = \Lambda^2$ in $\alpha(\cdot)$. We shall prudently avoid these (meaningless) large $\alpha$ values by using \eq{alpha eff}. }. Thus where pQCD becomes dubious, the matrix element saturates at some finite value, rather than giving an obviously unphysical result. At `strong coupling' the totally screened cross section actually leads to a {\em minimum} bound on $\eta$. 
On these grounds, an extrapolation of $\sigma_{\rm tr}$ using Eq.~\eq{M thermal} may actually yield reasonable estimates, e.\,g. for $\eta$, since screening protects $|t|^{1/2} \to \Lambda$. \begin{figure}[tb] \centerline{\includegraphics[scale=1.]{Pi}} \caption{ Example of a 1-loop contribution to the gluon self-energy. Regularising the vacuum contribution, cf. Eq.~\eq{Pi0}, we illustrate that the external momentum $t$ is the correct scale for the coupling in the thermal mass $\widetilde\Pi(Q) \sim \alpha(t=Q^2)T^2$ -- not the typical loop momentum $Q_T \simeq 2\pi T$ often presumed. } \label{fig: Pi} \end{figure} \bigskip Strictly speaking, the `right' value of $\alpha(\cdot)$ should be established by calculating all ${\cal O}(\alpha)$ corrections. Evidently we have not done this. Instead we advocate resumming a subset of these: those which come with large logarithms $\alpha (Q^2) \log (Q^2/t)$, and can be eliminated by choosing $Q^2 = t$. We cannot say whether the uncalculated corrections will be large or small (see discussion in Sec.~\ref{sec:sigTR}), but it seems legitimate to ask what effect this resummation has. Because it is based on adding vacuum corrections to the (already) minimal requirement for screening, we speculate that it is also important for extrapolating the LO result. \subsection{Treatment of `subleading' terms} The dominant contribution to $\eta$ comes from terms in $d \sigma^{\rm tree}/d t$ that resemble \eq{M thermal}, and also give the LL result in a parametric calculation. Having found large (but not unreasonable) sensitivity to the limited scope of the HTL functions (see Fig.~\ref{fig: eta HTL}), we can afford to make some simplifications in subleading terms. That is the reason we have altogether dropped inelastic processes, which affect $\eta_{_{\mathrm{LO}}}$ by merely a few percent \cite{Arnold2003f}. 
Similarly, the terms in $d\sigma^{\rm tree}/d t$ that do not need to be screened give a numerically minor contribution (screening is entirely omitted for them); see Fig.~\ref{fig: relative}. To correctly incorporate screening into the sub-dominant and $s$-channel processes would require dressing the individual amplitudes ${\cal M}_i$, rather than selectively mending the total cross section $d \sigma / d t \sim | \sum {\cal M}_i |^2$. \begin{figure}[thb] \includegraphics[scale=0.9]{relative} \caption{ Relative contribution from subleading terms in $d \sigma^{\rm tree}/d t$, which do not require screening in Eq.~\eq{glue-glue} (hence no sensitivity to $t^\star$). Here $\delta \eta = \eta_{_{\mathrm{LO}}} - \bar\eta$ where $\bar\eta$ is without certain contributions (labelled by arrows) to $d \sigma^{\rm tree}/d t$. Then $\delta \eta/\eta_{_{\mathrm{LO}}}$ is a `fraction' of the red curve in Fig.~\ref{fig: eta HTL}. The $s$-channel contribution is negative, but small. The maximal coupling value is from \eq{alpha eff}. } \label{fig: relative} \end{figure} Hence we cannot preclude $|{\cal M}|^2<0$ in the numerical evaluation of ${\cal C}[\chi]$, but it seems to carry barely any influence on its value. (Figure~\ref{fig: relative} shows why; the offending terms are subleading.) Equation~\eq{running coupling} applies to the space-like exchanges in $d\sigma^{\rm tree}/d t$ that require thermal screening. Although it would be tempting to simply evaluate $\alpha(Q^2)$ at the virtuality of the intermediate state $Q^2=\{s,t,u\}$, that would in any case not help for the inelastic scatterings. We motivated a running coupling only for the leading IR terms. 
For the rest, let us simply take $Q^2 = (stu)^{1/3}$, and use the continuation of \eq{running coupling}, as advocated in \cite{Shirkov1997a} \begin{eqnarray} \alpha_{\rm eff} (Q^2) &=& \frac{1}{\pi \beta_0} A\big(Q^2/\Lambda^2\big) \, , \label{alpha eff} \end{eqnarray} where the (analytic) function $A$ is defined by \begin{eqnarray*} & & A(y) = \left\{ \begin{array}{l} \displaystyle \frac{\pi}{\log( -y)} + \frac{\pi}{1+y} \\[.4cm] \displaystyle \frac{\pi}{2} - \arctan \Big(\, \frac1\pi\log \, y\, \Big) \end{array} \right. \quad \text{for} \ y \mathrel{\rlap{\lower0.25em\hbox{$>$}}\raise0.25em\hbox{$<$}} 0 \, . \end{eqnarray*} Despite intrinsic difficulties with QCD in the far infrared, perturbation theory can give semi-quantitative results \cite{Dokshitzer2002}. This model expression \eq{alpha eff} has a `universal' limiting value at $Q^2 \to 0$ (from above or below, since $A(y) \to \pi$ in both limits) that imposes a maximal value $\alpha (\cdot) \leq \alpha_{\rm max} = 1/\beta_0$. Numerically $\alpha_{\rm max} = \{ 1.1, 1.3 \}$ for $n_{\!f} = 0$ and $3$, respectively. Allowing larger values of $\alpha$ (i.\,e.\ using the na\"{i}ve one-loop formula with $\alpha_{\rm max} \mathrel{\rlap{\lower0.2em\hbox{$\sim$}}\raise0.2em\hbox{$>$}} 1$ imposed by hand) does not markedly change the results, see Fig.~\ref{fig: relative}. The reason for this was already discussed around Eq.~\eq{M thermal}. \section{Results} \label{sec:results} What is then given is $\eta/T^3$ as a function of $T/\Lambda$, via the scales that enter $\alpha(\cdot)\,$. Figure \ref{fig: eta(T)} shows the resulting temperature dependence of $\eta_{_{\rm LO}}/T^3$, determined using Eq.~\eq{alpha eff} with an appropriate scale in the renormalised HTL propagators. [The details are in Sec.~\ref{sec:renorm}, and actual $Q^2$-values are integration variables for \eq{baym}]. Also shown in Fig.~\ref{fig: eta(T)} is the NLL result \eq{eta NLL}, with a running coupling $\alpha(Q_T^2)$ where $Q_T = \xi \cdot 2\pi T$ for $\xi \in [\frac12,2]$. 
Evidently the two curves approach one another for $T \gg \Lambda$, still differing by a factor $\approx 2$ at $T/\Lambda = 10^3$, for which the na\"{i}ve coupling is $\alpha (Q_T) = 0.1\,$. Note in particular that $\alpha (2\pi T_c/\Lambda) \approx 0.3$ -- just about $\u{\alpha}$ where \eq{eta NLL} breaks down. Changing $\xi$ is akin to a trivial rescaling of $\Lambda$ (since only the ratio $Q_T/\Lambda$ is important for $\eta_{_{\rm NLL}}/T^3$), and thus merely shifts the corresponding curve in Fig.~\ref{fig: eta(T)} to the left or right. \begin{figure}[hb] \includegraphics[scale=0.9]{large-temp-with-s} \caption{ Temperature dependence of the viscosity in the quenched limit ($n_{\!f}=0$). The purple curves represent the NLL result for $\eta$ (see Fig.~\ref{fig: eta(alpha)}), with a coupling at the thermal scale $Q_T = 4 \pi T$ (dotted), $2\pi T$ (solid) and $\pi T$ (dash-dotted). The solid (red) line, surrounded by a light band, is our 1-loop resummation scheme (see text for details) with a `canonical' choice of screening only where $|t| < -t^\star \sim T^2$. The blue squares are the lattice results \cite{Giusti2017} and the arrow points to the Stefan-Boltzmann limit. } \label{fig: eta(T)} \end{figure} Plotting the interacting entropy $s(T)/T^3$, obtained now to high precision for $T\leq 300 T_c$ \cite{Giusti2017}, on the same axis illustrates that $\eta = s$ only at moderately large temperatures, $T \simeq 20 T_c$ (assuming $\Lambda=T_c$). The entropy (being a measure of the available degrees of freedom) increases rapidly at the transition, and is within 5\% of the Stefan-Boltzmann limit after $T\gsim2T_c$. The shear viscosity (divided by $T^3$) increases more gradually above $T_c$, due to asymptotic freedom. [The mean free path becomes small, due to $\lambda \propto T^{-1}$, but also depends weakly on $\alpha$, see \eq{eq1}.] 
However for physically relevant temperatures, just above the crossover (say $T<4 T_c$), the renormalised prediction is an order of magnitude smaller than the na\"{i}ve application of the NLL result. And while there is no guarantee that pQCD (or indeed kinetic theory) is applicable here, it is just what one would expect from a reasonable extension of the curve at $T\simeq 20T_c$ (where $\eta=s$). We shall speculate on this extrapolation after first presenting the subsequent ratio $\eta/s$, for $T\mathrel{\rlap{\lower0.2em\hbox{$\sim$}}\raise0.2em\hbox{$>$}} T_c$. \subsection{Estimation of $\bm \eta \bm/ \bm s$} The interacting entropy is now well established in the published lattice QCD data (available for $n_{\!f}=\{0,3\}$) \cite{Giusti2017,Bazavov2014f}, and we shall use it to normalise our perturbative result for $\eta$. Adjusting the quantity $\ell=\Lambda/T_c$ in the ratio $$ \eta \left( T/ \bm( \ell\,T_c \bm) \right) \big/ s_{\rm latt} \left( T/T_c \right) \, , $$ overlays the units for the abscissa in Fig.~\ref{fig: eta(T)}. \bigskip For $n_{\!f} = 0$, there are several evaluations of $\eta_{\rm \,latt}$ \cite{Meyer2007f, Meyer2009e, Mages2015, Astrakhantsev2017}. (Caveats, mentioned in Sec.~\ref{sec:intro}\,, should be reiterated here: a dynamical quantity is difficult to extract by means of {\em equilibrium} lattice gauge theory.) Nevertheless, if we confront the LO calculations with $\eta_{\rm\,latt}$ at the simulated temperatures and use\footnote{ This differs from \cite{Jackson2017a}, where we simply set $\ell \to 1$ because $\Lambda = {\cal O}(T_c)$. Our results are not too sensitive to its value, as is to be expected.} $\ell = 1/1.26$ \cite{Borsanyi2012e}, we find that $\eta_{_{\mathrm{LO}}}(T)$ is quite compatible. Figure \ref{fig: eta/s quenched} indeed reveals that our calculation corroborates the lattice data, and can explain $\eta/s \approx 0.2$ at $T < 2T_c\,$. 
Modifying the self-energy in the screened propagators via $t^\star \in -[\frac12,2]T^2$ gives a family of curves that cover the estimated uncertainty from $\eta_{\rm\,latt}$. Setting $t^\star/T^2 = -\infty$, i.\,e.\ using HTL functions for all $\omega$ and $|\bm q|$, gives $\eta$ about a factor of two larger. This is to be expected, based on the similar sensitivity for the toy model in Sec.~\ref{sec:sigTR} [see eq.~\eq{mu2 with t^star}]. Using the coupling at an argument $Q_T=2\pi T$ would instead yield $\eta_{_{\mathrm{LO}}}\bm(\alpha(Q_T)\bm)/s \mathrel{\rlap{\lower0.2em\hbox{$\sim$}}\raise0.2em\hbox{$>$}} 0.7$, which still does not explain $\eta/s < 0.5$. Adjusting (via $\xi$) the scale $Q_T=\xi\cdot 2\pi T$, to have $\eta_{_{\mathrm{LO}}}(\alpha)/s_{\rm latt} < 0.5$ at $T=1.2T_c$ would require $\alpha\mathrel{\rlap{\lower0.2em\hbox{$\sim$}}\raise0.2em\hbox{$>$}} 0.4$ (see Fig.~\ref{fig: eta HTL}). It then seems difficult to justify the resulting $\xi \mathrel{\rlap{\lower0.2em\hbox{$\sim$}}\raise0.2em\hbox{$<$}} 0.3 \Lambda/T_c$ from using \eq{alpha eff}\footnote{ The unmodified coupling \eq{running coupling} would require $\xi \mathrel{\rlap{\lower0.2em\hbox{$\sim$}}\raise0.2em\hbox{$<$}} 0.5 \Lambda/T_c$ instead.}. Moreover, the corresponding curve would give the wrong slope for $\eta/s$ as a function of $T$. Confinement sets in for $T<T_c\,$, where the unmistakable increase in $\eta/s$ is due to both a reduced entropy and a larger viscosity: $\eta_{_{\rm glueball}} \sim (\Lambda/T)^{5/2}$ \cite{Hosoya1985a}. \begin{figure}[bt] \includegraphics[scale=0.9]{eta_over_s_nf=0} \caption{ A renormalised prediction for $\eta/s$, as a function of $T/T_c$, in pure gauge QCD (assuming $T_c/\Lambda=1.26$). The lattice data are from Refs. \cite{Meyer2007f}; $\Box$, \cite{Meyer2009e}; $\triangle$, \cite{Mages2015}; $\circ$ and \cite{Astrakhantsev2017}; $\bm\times$\,. 
The dashed curve (orange) is the LO result from \cite{Arnold2003f}, with a running coupling at the `hard' thermal scale $Q_T = 2\pi T$. Hatching indicates the region below the conjectured lower limit $1/(4\pi)$. } \label{fig: eta/s quenched} \end{figure} All our discussion trivially extends to the physical case, $n_{\!f}=3$, for which it ought to be viewed in the context of the RHIC and LHC programmes. There it has been established that the experimental data cannot be reproduced by viscous hydrodynamic simulations with a value $\eta/s > 0.5$ \cite{Teaney2003a}. $\eta/s$ is one input to these codes, whose value is tuned to cover the hadronic and QGP phase of the evolution near $T_c\,$. Studies first sought a feasible range for $\eta/s$ (to reproduce experiments): \begin{equation} \begin{tabular}{c || c | c | c | c} Ref. & \cite{Aamodt2011} \& \cite{Adare2011} & \cite{Luzum2008a} & \cite{Song2011e} & \cite{Gavin2006a} \\ \hline \multirow{2}{*}{$\displaystyle \frac{\eta}s \mathrel{\rlap{\lower0.25em\hbox{$<$}}\raise0.25em\hbox{$>$}}$} & $0.11$ & & $0.08$ & $0.08$ \\ & $0.16$ & $0.28$ & $0.20$ & $0.3$ \end{tabular} \label{min eta/s} \end{equation} Recently, attempts have been made to constrain the temperature dependence of $\eta/s$, which is expected to be minimal at the crossover temperature \cite{Niemi2015qia, Denicol2016, Bernhard2016}. These approaches are capable of eliminating certain {\em models} for $\eta(T)$, which also depend on the region $T<T_c$ (the hadronic phase of the evolution). Simple parametrisations of $\eta/s$ are linear in $T$, with different slopes above and below $\u T$ (where $\eta/s$ is minimal, at $\u T\approx T_c\,$). Exploring minimal values from \eq{min eta/s} offers some restrictions on the two slopes -- although it remains difficult to discern the behaviour for $T>\u T$\,. 
In Fig.~\ref{fig: eta/s nf=3} we show the viscosity to entropy density in the physical case, where we have set $\ell = 1/0.48$ \cite{Aoki2009n}. Our results are on the lower end of the parametrisations explored in hydrodynamics and predict a milder increase with $T/T_c\,$. Apparently there is hardly any difference from the quenched results; the increased interaction rate is compensated by the density. There does seem to be more sensitivity to the HTL restriction from $t^\star$, which brings a factor $\approx 2$ uncertainty. For $T<3T_c$, it seems quite possible to violate the lower bound $1/(4\pi)$ -- or not, depending on $t^\star$. As a speculation, the smooth crossover in $s(T)$ (compared to $n_{\!f}=0$) may cause the minimum in $\eta/s$ to be shifted slightly, to $\u T \approx 1.5\,T_c$. (The fact that $s_{\rm latt}$ deviates from $s_0$ by 30\% at $T=2T_c$ may indicate a change in quasiparticle structure already, which we have assumed is not the case for $\eta$.) The ratio of the LO viscosity with $\alpha(Q_T^2)$ versus our results is at the factor-of-2 level (for $T=4T_c$, see Fig.~\ref{fig: eta/s nf=3}). So extrapolating the fixed-coupling results with $Q_T$ down to $T\sim 1.5T_c$ would give $\eta/s \approx 0.3$ (by eye), which is possible. However, the renormalised viscosity reaches this value already at $T\approx 5T_c$. \begin{figure}[htb] \includegraphics[scale=0.9]{eta_over_s_nf=3} \caption{ The normalised viscosity for $n_{\!f}=3$ flavours as a function of $T/T_c$ (here $\Lambda/T_c = 2.1$ \cite{Aoki2009n}\,). Our estimate for the sensitivity to higher order terms gives the band around the solid red curve (see Fig.~\ref{fig: eta HTL}). Blue curves illustrate the `most likely' dependence from a Bayesian analysis using hydrodynamical simulations \cite{Bernhard2016}. (Mean: solid blue; 95\%-confidence: dash-dots.) Hatching indicates $\eta/s \leq 1/(4\pi)$. 
} \label{fig: eta/s nf=3} \end{figure} \subsection{Kinetic Theory \label{sec 3C}} We now pause to confront the question of whether kinetic theory is applicable in such a `strong coupling' regime. At asymptotically high $T$, there is a clear ordering between the inter-particle distance $\bar r \sim T^{-1}$, the Debye screening length $m^{-1}_D \sim (\sqrt{\alpha}\,T)^{-1}$ and the (large or small-angle scattering) mean free path $\lambda \sim (\alpha^j\, T)^{-1}$ where $j=\{1,2\}$ \cite{Arnold2003a}. So $$ \lambda \gg m_D^{-1} \gg \bar r $$ and it is permissible to neglect multi-particle correlations and treat the individual {\em binary} collisions as instantaneous. But the scales become comparable for achievable temperatures $T \approx T_c\,$. For example, the Debye mass (computable on the lattice for $n_{\!f}=0$) is largest when $T= 1.2 T_c$ but still satisfies $m_D^{-1} \geq \frac13 T^{-1}$ \cite{Peshier2006}. The interparticle distance $\bar r \approx \frac12 n^{-1/3}$ follows\footnote{ There could be other definitions of $\bar r$, i.\,e.\ setting it equal to $n^{-1/3}$ would amount to `dense packing'. We consider the `nearest-neighbour' definition discussed in many textbooks -- hence a factor $\frac12$ \cite{Reif1964}. } from the number density $n=(16+\frac{21}{2}n_{\!f})\frac{\zeta(3)}{\pi^2} T^3$. Hence the effective range of interactions is approximately equal to $\bar r$, which puts the physical picture at its limit but does not clearly invalidate it. Kinetic theory cannot be right at $T_c\,$, and thus we would rather know where it actually breaks down. Until now we have simply used the transport equation \eq{C[f]} as a starting point, and (with motivation) applied it where $\lambda$ may not be large (i.\,e.\ strong coupling). To justify this idea {\em a posteriori}, we show that the mean free path $\lambda$ comes to the order of the inter-particle distance $\bar r$ at only a few times $T_c\,$. 
To define $\lambda$, we shall appeal to the relaxation approximation ${\cal C}[f_1] = -\delta f_1/\lambda$ in the Boltzmann equation \eq{C[f]}. After linearising, $\lambda(E_1)$ is expressed in terms of the collision operator, \begin{equation} \lambda^{-1} (E_1) = \frac{1}{6E_1} \int d \Gamma \, \frac{tu}{s^2} |{\cal M}|^2 \frac{ f_2 \bar f_3 \bar f_4 }{\bar f_1} \, . \label{absorption rate} \end{equation} The transport factor $tu/s^2 = \frac14 (1-\cos^2\theta)$ comes from $\hat p_1^i \hat p_1^j \Delta^{ij}[\chi]\big|_{\chi=p^2}$ evaluated in the centre of momentum frame ($\theta$ is the scattering angle). We may then calculate \eq{absorption rate} with the screened QCD matrix elements ${\cal M}$, and using the running coupling as put forward in Sec.~\ref{sec 3B}. Figure \ref{fig: mfp} shows the resulting $\lambda(E)$ in units of $\bar r$, at $T=\{ 5T_c,10T_c \}$. (The variation from modifying $t^\star$ from Sec.~\ref{sec 3A} is on par with a factor of 2 in temperature.) \begin{figure}[hb] \includegraphics[scale=0.9]{mfp} \caption{ Mean free path $\lambda\,$, for a gluon of energy $E\,$, in units of the mean interparticle distance $\bar r\,$. Two temperatures are shown (for $\ell=1/1.26$ as in Fig.~\ref{fig: eta/s quenched}). } \label{fig: mfp} \end{figure} What fraction of particles is able to deliver its momentum across a path length $\lambda\geq \bar r$? If $\lambda=\bar r$ at $E = E^\star$, then this fraction is $ \int_{E^\star}^\infty d E\, E^2 f(E)\big/ (2\pi^2 n)\, $. For the two temperatures shown in Fig.~\ref{fig: mfp}, we find that $70\%$ ($T=10T_c$) and $40\%$ ($T=5T_c$) of the particles meet this (albeit slightly arbitrary) requirement. Note that $T=5T_c$ is roughly the highest temperature used in lattice calculations of $\eta$ (see Fig.~\ref{fig: eta/s quenched}). Using $\bar r$ to qualify kinetic theory is not a rigorous judgement, but it at least demonstrates that we are at its limit for relevant temperatures $T \to T_c\,$. 
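(For orientation, in the Boltzmann approximation $f \approx e^{-E/T}$ this fraction takes a simple closed form, \begin{eqnarray*} \int_{x}^\infty d y \ y^2 e^{-y} \Big/ \int_0^\infty d y \ y^2 e^{-y} \ = \ e^{-x} \Big(\, 1 + x + \frac{x^2}{2} \,\Big) \ , \qquad x = E^\star/T \, , \end{eqnarray*} so that, e.\,g., $E^\star = 2T$ corresponds to roughly two thirds of the particles. The quantum statistics used in the actual evaluation modify these numbers only mildly.) 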
For $n_{\!f}=3$, quarks and gluons will have different mean free paths. Because of the group factors in their interaction rate, $\lambda_g < \lambda_q\,$. Unfortunately there are large uncertainties coming from $t^\star$ in \eq{absorption rate}, hence we do not show a plot like Fig.~\ref{fig: mfp}. But it seems that $\lambda_g \mathrel{\rlap{\lower0.2em\hbox{$\sim$}}\raise0.2em\hbox{$<$}} \bar r \mathrel{\rlap{\lower0.2em\hbox{$\sim$}}\raise0.2em\hbox{$<$}} \lambda_q$ for the bulk of particles at a few times $T_c\,$. \section{Summary} In this paper we refute a widespread paradigm: that the inferred ratio of shear viscosity to entropy density of the QGP, $\eta/s \mathrel{\rlap{\lower0.2em\hbox{$\sim$}}\raise0.2em\hbox{$<$}} 0.5$ near the deconfinement temperature $T_c$, is a genuinely non-perturbative effect. Contrary to this view, we find that the LO weak coupling result of Ref.~\cite{Arnold2003f} can explain {\em why} $\eta$ is so low. Our main result is a new estimate for the temperature dependence of $\eta/s$, see Figs.~\ref{fig: eta/s quenched} and \ref{fig: eta/s nf=3}. \bigskip We addressed two problems with many existing estimates from perturbation theory: \begin{enumerate} \item[1.] Logarithmic accuracy is appropriate only in scenarios where the coupling may be regarded as (asymptotically) weak. The increase of $\eta_{_{\rm NLL}}(\alpha)$ for $\alpha > \alpha^\star$ is {\em unphysical}: we expect the viscosity to decrease with the coupling strength. \item[2.] A common procedure in using a fixed-coupling result is to {\em choose} the value of $\alpha$ as the running coupling $\alpha(Q^2)$ at, or near, the lowest Matsubara energy. In fact, a scale-dependent coupling and the calculation scheme of LO approximations are closely related: both emerge from, and take into account, loop corrections to tree-level amplitudes. 
\end{enumerate} Together, these improvements enable us to meaningfully extrapolate $\eta (T)$ near the QGP phase transition, provided that binary scatterings form the principal source of energy and momentum variation. To see how the running may affect a coefficient like $\eta$, consider our earlier formula \eq{eq1} for the transport cross section. Using \eq{running coupling} in \eq{eq1} to LL accuracy, we find\footnote{ A similar setting of scales, balanced between hard and soft modes, was found for the QCD collisional energy loss \cite{Peshier2006a}.} \begin{eqnarray} \sigma_{\rm tr} (s) \sim \frac 1s \int_{-s}^{-\mu^2} d t\, |t| \, \frac{\alpha(t)^2}{t^2} \ = \ \frac 1s \alpha( \mu^2 ) \alpha (s) \log \Big(\, \frac{s}{\mu^2} \,\Big)\, . \label{pocket} \end{eqnarray} While the overall structure is unchanged, cf.~\eq{eta/s parametrically}, a substitution $\alpha^2 \to \alpha (\text{\sl hard})\alpha(\text{\sl soft})$ in going from \eq{eta NLL} to \eq{pocket} reflects the relative importance of different scales. Soft interactions are more probable due to an overall $\alpha(t)^2$ in $d \sigma /d t$, but are also more screened by $\mu^2 \sim \alpha(t)T^2$. Resumming the vacuum self-energy $\Pi^{\rm vac}$ in \eq{IR replacement} is `optional'; only thermal corrections {\em must} be taken into account (running in $\alpha$ is formally higher order). However, we may incorporate them so that vacuum and thermal parts are treated on an equal footing. And if we do, the benefit is to specify $\alpha$ (even at the LL level). In pursuing this line of reasoning, we have found that the scales in the running coupling could have a substantial effect on $\eta$.
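For completeness, the intermediate step in \eq{pocket} is elementary: with \eq{running coupling}, $\alpha(t) = [\beta_0 \log (|t|/\Lambda^2)]^{-1}$ for spacelike $t$, and \begin{eqnarray*} \int_{\mu^2}^{s} \frac{d |t|}{|t|} \, \alpha(t)^2 \ = \ \frac{1}{\beta_0^2} \left[ \frac{-1}{\log (|t|/\Lambda^2)} \right]_{\mu^2}^{s} \ = \ \frac{\alpha(\mu^2) - \alpha(s)}{\beta_0} \ = \ \alpha(\mu^2)\, \alpha(s) \log \Big(\, \frac{s}{\mu^2} \,\Big) \, , \end{eqnarray*} where the last equality uses $\alpha^{-1}(s) - \alpha^{-1}(\mu^2) = \beta_0 \log (s/\mu^2)$.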
\section{Introduction} Technicolor \cite{Weinberg:1979bn,Susskind:1978ms} remains one of the compelling alternatives for electroweak symmetry breaking, going beyond the simple Higgs sector of the Standard Model (SM). The earliest models, based on scaled-up QCD, run into trouble with precision data \cite{Peskin:1990zt}. Recent developments suggest that viable models are obtained via the introduction of technifermions transforming under higher representations of the gauge group \cite{Sannino:2004qp,Dietrich:2005jn}. In this paper we study one of these new models, the so-called Next to Minimal Walking Technicolor (NMWT). In the NMWT model the Technicolor gauge group is SU(3), and the technicolor matter is constituted by two Dirac flavors transforming under the two-index symmetric, i.e. the sextet, representation of the gauge group. Phenomenology of this Technicolor theory has been studied in \cite{Belyaev:2008yj}. Its properties on the lattice have also been investigated \cite{Shamir:2008pb,DeGrand:2008kx,DeGrand:2010na,Fodor:2009ar,Kogut:2010cz}. The Technicolor interaction only explains the mass patterns in the gauge sector of the SM; to explain the fermion mass patterns and hierarchies, extensions are needed. One possibility is so-called Extended Technicolor (ETC) \cite{Dimopoulos:1979es,Eichten:1979ah}, where one imagines the technifermions and ordinary fermions to be combined under a single representation of a larger gauge group whose breaking then induces effective low-energy four-fermion interaction terms. These, in turn, lead to mass terms for SM fermions as the technifermions condense. However, alternative avenues for addressing the fermion mass generation in the Technicolor framework do exist. Instead of assuming the existence of an ETC sector, one may reintroduce scalars with renormalizable Yukawa interactions with ordinary and technicolored matter fields. 
This is so-called bosonic Technicolor \cite{Simmons:1988fu,Samuel:1990dq,Kagan:1991gh,Carone:1992rh,Hemmige:2001vq}. To naturalize the scalars, one can supersymmetrize Technicolor \cite{Dine:1981za,Dobrescu:1995gz,Antola:2010nt}. Generally, when considering any of the above-mentioned extensions of a technicolor model, one must pay attention to possibly large contributions to flavor changing neutral current (FCNC) processes. On the other hand, the fact that the underlying Technicolor theory itself already contributes to flavor physics has received less attention in the literature \cite{Fukano:2009zm}. This is so since in Technicolor, the spin-one composite objects and the electroweak gauge bosons with identical quantum numbers will mix, providing additional contributions to the flavor observables via the usual box diagrams. So, given a Technicolor model and its bosonic extension, one will be able to constrain both sectors via flavor physics. In this paper we consider a bosonic technicolor model obtained by coupling the NMWT model with a SM-like scalar doublet \cite{Antola:2009wq}. We do not specify whether this is a fundamental scalar, possibly a low energy remnant of some supersymmetric scenario, or a composite with the compositeness scale much higher than the energies where the phenomenology of this theory is studied. With a bottom-up model-building attitude we simply treat the effective Lagrangian, to be introduced in the next section, as a low energy effective theory and study its associated phenomenology. We will derive the results required for the analysis of the flavor constraints in this model, but the developed formalism can be applied to other model setups extending beyond basic Technicolor models. In addition, another source of constraints we consider is provided by the precision electroweak data, i.e. the oblique corrections. 
As we will see, they constrain the spectrum of bosonic Technicolor in an interesting way, and also mix with the flavor constraints due to the Weinberg sum rules. In conclusion, we find that the bosonic NMWT model is viable in light of the present data. However, the constraints do impose severe restrictions on the parameter space of the model. We will determine the resulting masses and mixing patterns of the states. In particular, the compatibility with the precision data inescapably leads to the prediction that at least one of the neutral scalars in the model has to be light, with a mass below 200 GeV. This gives an important phenomenological handle on the model with respect to the LHC data, even if the couplings of the lightest scalar are weaker than those of the SM Higgs. \section{Bosonic Next to Minimal Walking Technicolor} The low energy effective Lagrangian for Bosonic NMWT has been introduced in detail in \cite{Antola:2009wq}, but for completeness we recall the basic results here. The effective Lagrangian is, schematically, \begin{eqnarray} {\cal L} = {\cal L}^{\rm eff}_{\rm TC}|_{{\rm Higgs}=0} + {\cal L}_{\rm SM}|_{{\rm Higgs}=0} + {\cal L}_{\rm Higgs} + {\cal L}_{\rm Yukawa}\,. \label{BTCLagrange} \end{eqnarray} Here ${\cal L}^{\rm eff}_{\rm TC}|_{{\rm Higgs}=0}$ is almost the same as the effective theory of the NMWT constructed in \cite{Belyaev:2008yj,Appelquist:1999dq,Foadi:2007ue}. The only difference is that ${\cal L}^{\rm eff}_{\rm TC}|_{{\rm Higgs}=0}$ above does not include the {\it composite} higgs scalar explicitly, i.e. ${\cal L}^{\rm eff}_{\rm TC}|_{{\rm Higgs}=0}$ has only the electroweak gauge bosons and the vector mesons in the sense of the generalized hidden local symmetry \cite{Bando:1987ym,Bando:1987br}. ${\cal L}_{\rm SM}|_{{\rm Higgs}=0}$ contains the contribution of the SM degrees of freedom excluding the fundamental Higgs scalar doublet. 
Finally, ${\cal L}_{\rm Higgs}$ is \begin{eqnarray} {\cal L}_{\rm Higgs} = |D_\mu M|^2 + |D_\mu H|^2 + ( M ,H\text{-mixing term}) -V(M,H)\,, \label{L-higgs} \end{eqnarray} where $M$ is the {\it composite} scalar field formed by the walking technicolor dynamics and $H$ is another scalar field. As discussed in the introduction, we do not specify whether $H$ is a fundamental or a composite object. In general $M,H$ are technicolor singlets and have the same EW-charge under $SU(2)_L \times U(1)_Y$ as the SM-Higgs field. Hence, due to the underlying symmetry, the $H$ and $M$ fields mix with each other and this is accounted for by the $(M,H\text{-mixing term})$ in Eq.(\ref{L-higgs}). This mixing is resolved via the transformation \cite{Antola:2009wq} \begin{eqnarray} \begin{pmatrix} M \\ H \end{pmatrix} = \begin{pmatrix} A & B \\ -A & B \end{pmatrix} \begin{pmatrix} M_- \\ M_+ \end{pmatrix}\,, \label{mhtransf} \end{eqnarray} where \begin{eqnarray} A=\frac{1}{\sqrt{2}}\frac{1}{\sqrt{1-\frac{c_3}{\alpha}y_Q}},\quad B=\frac{1}{\sqrt{2}}\frac{1}{\sqrt{1+\frac{c_3}{\alpha}y_Q}}. \label{AB} \end{eqnarray} The parameter $c_3$ is an ${\mathcal O}(1)$ low-energy constant, and $\alpha=\Lambda/ f\sim 4\pi$ is the ratio between the mass scale of the lightest non-Goldstone mode and the Goldstone boson decay constant $f$. Finally, $y_Q$ is the Yukawa coupling between the techniquarks and the scalar doublet $H$. These parameters originate from the underlying Lagrangian; for details, see \cite{Antola:2009wq}. Note that since the mixing of $M$ and $H$ occurs also on the level of the kinetic term, the transformation in (\ref{mhtransf}) is not simply a rotation. After the kinetic terms in Eq.(\ref{L-higgs}) are diagonalized, \begin{eqnarray} {\cal L}_{\rm Higgs} = |D_\mu M_+|^2 + |D_\mu M_-|^2 -V(M_\pm)\,, \end{eqnarray} the physical mass eigenstates of the scalar sector are obtained via an additional rotation. 
We rewrite $M_\pm$ as \begin{eqnarray} \begin{pmatrix} G \\ \Pi \end{pmatrix} = \begin{pmatrix} \cos \beta & \sin \beta \\ -\sin \beta & \cos \beta \end{pmatrix} \begin{pmatrix} M_- \\ M_+ \end{pmatrix}\,, \end{eqnarray} where $G$ indicates the would-be Nambu-Goldstone boson part which is absorbed in the longitudinal mode of the EW-gauge bosons and $\Pi$ indicates the physical part which will remain in the spectrum. Thus after combining these two transformations, $M,H$ can be represented by $G,\Pi$ as \begin{eqnarray} \begin{pmatrix} M \\ H \end{pmatrix} = \begin{pmatrix} Ac_\beta+Bs_\beta & -As_\beta+Bc_\beta \\ -Ac_\beta+Bs_\beta & As_\beta+Bc_\beta \end{pmatrix} \begin{pmatrix} G \\ \Pi \end{pmatrix}\,, \end{eqnarray} where we defined $c_\beta\equiv\cos\beta$ and $s_\beta\equiv\sin\beta$. We parametrize the EW-doublets $G,\Pi$ as \begin{eqnarray} G = \begin{pmatrix} i G^+ \\[2ex] \dfrac{v_{\rm EW} + h^0 - iG^0}{\sqrt{2}} \end{pmatrix} \quad,\quad \Pi = \begin{pmatrix} i \pi^+ \\[2ex] \dfrac{ H^0 - i\pi^0}{\sqrt{2}} \end{pmatrix}\,, \end{eqnarray} where $M^2_W = g^2_{\rm EW} v^2_{\rm EW}/4$ in which $g_{\rm EW}$ is the physical $SU(2)_L$ gauge coupling. The final contribution in (\ref{BTCLagrange}), ${\cal L}_{\rm Yukawa}$, includes the SM Yukawa sector. The Yukawa couplings of the SM quarks are given by \begin{eqnarray} {\cal L}_{{\rm Yukawa},q} = -\tilde{Y}^{ij}_d \bar{\tilde{q}}^i_L H \tilde{d}^j_R - \tilde{Y}^{ij}_u \bar{\tilde{q}}^i_L (i \sigma_2 H^*) \tilde{u}^j_R + \rm{h.c.}\,, \end{eqnarray} where we assume that only $H$ has Yukawa coupling with the SM fermions (denoted with a tilde) in the weak gauge eigenbasis. The parameters $\tilde{Y}_{u,d}$ are the Yukawa matrices and have non-diagonal components in general. 
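As an aside, the scalar-sector transformations above can be checked numerically. The following minimal sketch (Python; the inputs $c_3$, $y_Q$ and $\beta$ are illustrative values, not fits) constructs the coefficients $A,B$ of Eq.~(\ref{AB}) and the combined matrix expressing $(M,H)$ in terms of $(G,\Pi)$:

```python
import math

def mixing_coeffs(c3, y_Q, alpha=4.0 * math.pi):
    # Kinetic-mixing coefficients A and B of Eq. (AB); c3 is the O(1)
    # low-energy constant, y_Q the techniquark Yukawa, alpha = Lambda/f.
    r = c3 * y_Q / alpha
    A = 1.0 / math.sqrt(2.0 * (1.0 - r))
    B = 1.0 / math.sqrt(2.0 * (1.0 + r))
    return A, B

def mh_from_gpi(A, B, beta):
    # Combined matrix expressing (M, H) in terms of (G, Pi): the product
    # of the kinetic diagonalisation [[A, B], [-A, B]] with the inverse
    # of the mass rotation by the angle beta.
    cb, sb = math.cos(beta), math.sin(beta)
    return [[A * cb + B * sb, -A * sb + B * cb],
            [-A * cb + B * sb, A * sb + B * cb]]

# In the decoupling limit y_Q -> 0 the kinetic mixing disappears and
# A = B = 1/sqrt(2), so the combined matrix is an ordinary rotation.
A, B = mixing_coeffs(c3=1.0, y_Q=0.0)
```

Since the kinetic diagonalization rescales the fields, the combined matrix has determinant $2AB$ and reduces to an ordinary rotation only in the limit $A=B=1/\sqrt{2}$.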
In terms of the physical mass eigenstates the scalar field $H$ is represented in the unitary gauge as \begin{eqnarray} H = (-Ac_\beta+Bs_\beta) \begin{pmatrix} 0 \\[2ex] \dfrac{ v_{\rm EW} + h^0}{\sqrt{2}} \end{pmatrix} + (As_\beta+Bc_\beta) \begin{pmatrix} i \pi^+ \\[2ex] \dfrac{ H^0- i \pi^0}{\sqrt{2}} \end{pmatrix}\,. \label{higgs-u-gauge} \end{eqnarray} Now we divide ${\cal L}_{{\rm Yukawa},q}$ into three parts as \begin{eqnarray} {\cal L}_{{\rm Yukawa},q} = {\cal L}_{\rm Yukawa} ({\rm mass})+ {\cal L}_{\rm Yukawa}(h^0,H^0) + {\cal L}_{\rm Yukawa}(\pi^\pm)\,, \label{divide-yukawa} \end{eqnarray} where the first term becomes the fermion mass term after changing from the weak basis to the fermion mass eigenbasis, the second includes the physical neutral Higgs bosons, and the third includes the physical charged pions. Since only one of the Higgs doublets couples to fermions, the tree-level contributions to FCNC interactions are absent in this model, and in our analysis of flavor constraints it suffices to concentrate on the first and third terms of (\ref{divide-yukawa}). The first term in Eq.(\ref{divide-yukawa}) is represented as \begin{eqnarray} {\cal L}_{\rm Yukawa} ({\rm mass}) \!\!&=&\!\! - \frac{v_{\rm EW} (-Ac_\beta+Bs_\beta)}{\sqrt{2}} \left[ \bar{\tilde{d}}^i_L \tilde{Y}^{ij}_d \tilde{d}^j_R - \bar{\tilde{u}}^i_L \tilde{Y}^{ij}_u \tilde{u}^j_R + \rm{h.c.} \right] \nonumber\\ \!\!&=&\!\! 
- \frac{v_{\rm EW} (-Ac_\beta+Bs_\beta)}{\sqrt{2}} \left[ \bar{d}^i_L Y^{ii}_d d^i_R - \bar{u}^i_L Y^{ii}_u u^i_R + \rm{h.c.} \right]\,, \end{eqnarray} where on the second line the SM fermions without tilde are in the fermion mass eigenbasis and the fermion mass matrices are given by \begin{eqnarray} m^{ii}_{u(d)} = \frac{v_{\rm EW} (-Ac_\beta+Bs_\beta)}{\sqrt{2}} \begin{pmatrix} Y^{11}_{u(d)} & \, & \, \\ \, & Y^{22}_{u(d)} & \, \\ \, & \, & Y^{33}_{u(d)} \end{pmatrix} \equiv \begin{pmatrix} m_{u(d)} & \, & \, \\[1.5pt] \, & m_{c(s)} & \, \\[1.5pt] \, & \, & m_{t(b)} \end{pmatrix}\,. \end{eqnarray} In the fermion mass eigenbasis, the third term in Eq.(\ref{divide-yukawa}) is \begin{eqnarray} {\cal L}_{\rm Yukawa}(\pi^\pm) \!\!&=&\!\! -i \pi^+ (As_\beta+Bc_\beta) \left[ \bar{u}^i_L Y^{jj}_d d^j_R - \bar{u}^i_R Y^{ii}_u d^j_L \right] V_{ij} + \rm{h.c.} \nonumber\\ \!\!&=&\!\! -i \pi^+ X \left[ \bar{u}^i_L m^{jj}_d d^j_R - \bar{u}^i_R m^{ii}_u d^j_L \right] V_{ij} + \rm{h.c.}, \label{L-ffpi} \end{eqnarray} where we defined \begin{eqnarray} X\equiv\frac{As_\beta+Bc_\beta}{-Ac_\beta+Bs_\beta}. \label{Xvariable} \end{eqnarray} As is evident from (\ref{L-ffpi}), the constraints are most conveniently imposed on the parameter $X$. This also collects the parameters of the underlying theory, defined briefly below Eq. (\ref{AB}), into a single quantity entering the analysis. \section{Oblique corrections} \label{obliques} The oblique corrections $S$ and $T$ \cite{Peskin:1990zt} are defined as \begin{eqnarray} S &=& -16\pi \Pi_{3Y}^\prime(0),\nonumber \\ T &=& \frac{4\pi}{s_w^2c_w^2 M_Z^2}(\Pi_{11}(0)-\Pi_{33}(0)). \end{eqnarray} To evaluate the corrections we need to estimate the new contributions to the vacuum polarizations \begin{equation} \Pi_{3Y}(q^2),~~\Pi_{ii}(q^2) ~~(i=1,2), \end{equation} arising from pion and scalar loop diagrams. 
The relevant diagrams and Feynman rules following from the expansion of (\ref{L-higgs}) are given in \cite{Antola:2009wq}, and we will not repeat the calculation in full here. The reader should note, however, that the definition for the mixing angle $\theta$ used in \cite{Antola:2009wq} differs from our $\beta$. The angles are related by \begin{equation} \beta=\frac{\pi}{4}-\theta. \end{equation} Taking this into account, we can use the results provided in \cite{Antola:2009wq}. Generally, the $S$ parameter is given by \begin{eqnarray} S = S_{V,A} + S_S\,, \end{eqnarray} where $S_{V,A}$ indicates the contribution from the vector mesons $V,A$ and $S_S$ indicates the contribution from the scalar mesons $\pi$ and the Higgs bosons. Here we will only consider the contribution of the scalar and pseudoscalar degrees of freedom, i.e. $S_S$. The contribution of the vector mesons is discussed in Sec. \ref{numresults}. The origin of the $(S,T)$-plane corresponds to the SM with a given value of the mass of the Higgs boson denoted by $m_{\textrm{ref}}$. We have removed the SM Higgs sector and added new sectors as described in the previous section. Thus the $S$-parameter is \begin{equation} S=S_{\textrm{SM}}(m_{\textrm{ref}})-S_H(m_{\textrm{ref}})+S_{\textrm{new}}=S_{\textrm{new}}-S_H(m_{\textrm{ref}}), \end{equation} because $S_{\textrm{SM}}(m_{\textrm{ref}})=0$ by definition. Similar considerations apply to $T$, and using these definitions we obtain finite expressions for $S$ and $T$. We use dimensional regularization in the $\overline{\rm MS}$ scheme, and find that the final result can be expressed in terms of two integrals, \begin{eqnarray} I_1(m_1,m_2,q) &=& \int_0^1 dx\Delta\log\frac{\mu^2}{\Delta},\nonumber \\ I_2(m_1,m_2,q) &=& \int_0^1 dx m_1^2\log\frac{\mu^2}{\Delta}, \end{eqnarray} where $\Delta\equiv\Delta(m_1,m_2,q)=xm_2^2+(1-x)m_1^2-x(1-x)q^2$ and $\mu$ is an arbitrary mass scale. 
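As a numerical cross-check of these definitions, the integrals $I_{1,2}$ can be evaluated with a simple quadrature. The sketch below (midpoint rule; the chosen $n$ is for illustration only) reproduces the closed form $I_1(m,m,0)=m^2\log(\mu^2/m^2)$ in the degenerate, zero-momentum limit:

```python
import math

def _Delta(m1, m2, q2, x):
    # Delta(m1, m2, q) = x m2^2 + (1 - x) m1^2 - x (1 - x) q^2
    return x * m2**2 + (1.0 - x) * m1**2 - x * (1.0 - x) * q2

def I1(m1, m2, q2, mu, n=4000):
    # I_1 = int_0^1 dx Delta log(mu^2 / Delta), midpoint quadrature
    return sum(_Delta(m1, m2, q2, (k + 0.5) / n)
               * math.log(mu**2 / _Delta(m1, m2, q2, (k + 0.5) / n))
               for k in range(n)) / n

def I2(m1, m2, q2, mu, n=4000):
    # I_2 = int_0^1 dx m1^2 log(mu^2 / Delta)
    return sum(m1**2 * math.log(mu**2 / _Delta(m1, m2, q2, (k + 0.5) / n))
               for k in range(n)) / n
```

For degenerate masses at $q^2=0$ the integrand is constant, so the quadrature is exact; for the general case the midpoint rule converges quickly since the integrand is smooth on $[0,1]$.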
Furthermore, for notational convenience we define \begin{eqnarray} C_h &=& \frac{1}{v_{\rm EW}}(f_+\cos\beta-f_-\sin\beta),\nonumber \\ C_s &=& \frac{1}{v_{\rm EW}}(f_-\cos\beta+f_+\sin\beta), \label{oblique_constants} \end{eqnarray} where $f_+=(f+v)/(2B)$, $f_-=(f-v)/(2A)$. The parameters $A$ and $B$ were defined in (\ref{AB}) and $f$ and $v$ are the vacuum expectation values of the fields $M$ and $H$, respectively. The quantities $f_+$ and $f_-$ refer to the mass eigenbasis of the scalars and are related to the electroweak scale $v_{\rm EW}$ by $f_+^2+f_-^2=v_{\rm EW}^2$, so that $C_h^2+C_s^2=1$. With these preliminary definitions we have \begin{eqnarray} \Pi_{3Y}(q^2) &=& \frac{1}{32\pi^2}\left[-C_h^2\left(I_1(m_\pi,m_{h^0})-2I_2(M_Z,m_{H^0})+I_1(M_Z,m_{H^0})\right)\right.\nonumber \\ & & -C_s^2\left(I_1(m_\pi,m_{H^0})-2I_2(M_Z,m_{h^0})+I_1(M_Z,m_{h^0})\right)\\ & & \left.+I_1(m_\pi,m_\pi)+I_1(M_Z,m_{\textrm{ref}})-2I_2(M_Z,m_{\textrm{ref}})\right],\nonumber \end{eqnarray} where we have dropped all $q$-independent contributions since we need only the $q^2$-derivative of this quantity. Note that here and below $m_{h^0}$ and $m_{H^0}$ are the masses of the physical scalars $h^0$ and $H^0$. The contributions to the correlators needed for the $T$-parameter are obtained similarly, and the result is \begin{eqnarray} \Pi_{11}(q^2) &=& \frac{1}{32\pi^2}\left[C_h^2\left(I_1(M_W,m_{H^0})+I_1(m_\pi,m_{h^0})-2I_2(M_W,m_{H^0})\right)\right.\nonumber \\ & & +C_s^2\left(I_1(M_W,m_{h^0})+I_1(m_\pi,m_{H^0})-2I_2(M_W,m_{h^0})\right)\nonumber \\ & & +I_1(m_\pi,m_\pi)-I_1(M_W,m_{\textrm{ref}})+2I_2(M_W,m_{\textrm{ref}})\\ & & \left.+\frac{3}{2}m_\pi^2-\frac{1}{2}m_{\textrm{ref}}^2+\frac{1}{2}m_{h^0}^2+\frac{1}{2}m_{H^0}^2-\frac{1}{3}q^2\right],\nonumber \end{eqnarray} and \begin{equation} \Pi_{33}(q^2)=\Pi_{11}(q^2) \quad \text{with}\quad M_W\rightarrow M_Z. 
\end{equation} The results of a numerical computation and the corresponding constraints due to the data on $S$ and $T$ will be presented in Sec. \ref{numresults}. \section{Flavor constraints} \label{flavor} In \cite{Fukano:2009zm}, the contributions to the flavor observables from the vector mesons for the NMWT model have been estimated. The analysis of \cite{Fukano:2009zm} was carried out in terms of the weak eigenstates of the vector mesons. In the present paper, we reconsider the analysis of \cite{Fukano:2009zm} in the mass basis and extend it to the case of bosonic NMWT. For this purpose, we first determine the eigenvalues and eigenstates for the mass matrix of the vector bosons. In this paper, we take into consideration $\Delta F = 2$ constraints, so we concentrate on the charged vector boson sector. Here $F$ labels the flavor relevant for each of the processes we consider. Thus we can represent the charged current coupling to the fermions as \begin{eqnarray} && \frac{g}{\sqrt{2}} \left[ V^*_{ij} \cdot \tilde{W}^-_\mu \bar{d}^j_L \gamma^\mu u^i_L + {\rm h.c.} \right] \nonumber\\[1ex] &&= \frac{g}{\sqrt{2}} \left[ V^*_{ij} \cdot \left( x_W W^-_\mu + x_V V^-_\mu + x_A A^-_\mu\right) \bar{d}^j_L \gamma^\mu u^i_L + {\rm h.c.} \right]\,, \label{L-ffg} \end{eqnarray} where the tilde notation refers to the weak basis and the fields and parameters without tilde refer to the mass basis. The CKM matrix is denoted by $V$. The notations and complete expressions for the mass eigenvalues and eigenvectors of vector mesons are given in Appendix \ref{masseigenvalues}. To analyse the flavor changing neutral current interactions in the bosonic NMWT model, we compute the box diagrams contributing to the $\Delta F = 2$ processes. The required diagrams are shown in Fig. \ref{FCNC-box}. 
\begin{figure} \begin{center} \begin{tabular}[b]{lc} \raisebox{-35pt}{(I)} & \includegraphics[scale=0.65,angle=-90]{Box-gvm0} \\ \raisebox{-35pt}{(I\hspace{-.1em}I)} & \includegraphics[scale=0.65,angle=-90]{Box-gvms0} \\ \raisebox{-35pt}{(I\hspace{-.1em}I\hspace{-.1em}I)} & \includegraphics[scale=0.65,angle=-90]{Box-ss0} \\ \end{tabular} \end{center} \caption{Box diagrams for $\Delta S = 2$ scattering process in the unitary gauge. To obtain $\Delta B = 2$ process, we should simply rename $s$ with $b$ and $d$ with $q\,(q=d,s)$ in the various diagrams. \label{FCNC-box}} \end{figure} For these box diagrams we obtain results \begin{eqnarray} ({\rm I}) \!\!&=&\!\! \frac{-i g^4_{\rm EW}}{(4 \pi)^2 \cdot 4M^2_W} \sum_{i,j} \lambda_i \lambda_j {\cal F}^{(1)} (m_i,m_j,M_W,M_V,M_A) \times {\cal O}_{\Delta F=2}\,, \label{box-gvm} \\ ({\rm I\hspace{-.1em}I}) \!\!&=&\!\! \frac{-i g^4_{\rm EW} X^2}{(4 \pi)^2 \cdot 4M^2_W} \sum_{i,j} \lambda_i \lambda_j {\cal F}^{(2)} (m_i,m_j,M_W,\{ M \}) \times {\cal O}_{\Delta F=2}\,, \\ \label{box-gvms} ({\rm I\hspace{-.1em}I\hspace{-.1em}I}) \!\!&=&\!\! \frac{-i g^4_{\rm EW} X^4}{(4 \pi)^2 \cdot 4M^2_W} \sum_{i,j} \lambda_i \lambda_j {\cal F}^{(3)} (m_i,m_j,M_W,\{ M \}) \times {\cal O}_{\Delta F=2}\,, \end{eqnarray} where $\{ M \}$ stands for the collection $\{ M_V,M_A,M_\pi \}$, the operator ${\cal O}_{\Delta F=2}$ is \begin{eqnarray} {\cal O}_{\Delta F=2} = \begin{cases} (\bar{s}_L \gamma^\mu d_L)(\bar{s}_L \gamma_\mu d_L) & \text{for $F=S$}\,, \\[2ex] (\bar{b}_L \gamma^\mu q_L)(\bar{b}_L \gamma_\mu q_L) & \text{for $F=B_q\,,\,(q = d,s)$}\, . \end{cases} \end{eqnarray} and $\lambda_i$ is given in terms of the CKM matrix elements by \begin{eqnarray} \lambda_i = \begin{cases} V_{id} V^*_{is} & \text{for $F=S$}\,, \\[2ex] V_{iq} V^*_{ib} & \text{for $F=B_q\,,\,(q = d,s)$}\,. \end{cases} \label{def-lambda_i} \end{eqnarray} The functions ${\cal F}^{(1,2,3)}$ are given explicitly in Appendix \ref{flavorfunctions}. 
We have used the physical $SU(2)_L$ gauge coupling $g_{\rm EW}$, which is related to the bare gauge coupling $g$ as \begin{eqnarray} \frac{1}{g^2_{\rm EW}} = \frac{1}{g^2} \left[ 1+ \frac{1+(1-\chi)^2}{2} \epsilon^2 \right] = \frac{1}{g^2} \left[ 1+ \frac{1+(1-\chi)^2}{2} \bar{\epsilon}^2 \right] \,, \label{g-gew-relation} \end{eqnarray} where $\chi$ is a numerical factor introduced as in \cite{Appelquist:1999dq}, and whose value will be determined in Sec. \ref{numresults}. This result is easily obtained by equating the well-known Fermi coupling $G_F\sim g_{\rm EW}^2/M_W^2$ with the similar quantity $g^2/\tilde{M}_W^2$, which would be identified as the Fermi coupling if this theory were interpreted as the Standard Model; see Appendix \ref{flavorfunctions} for the expressions relating the parameters of the gauge and mass eigenbases. In the first equality we have used the definition $\epsilon=g/\tilde{g}$, where $\tilde{g}$ is the self coupling of the vector mesons, and expanded up to the order ${\cal O}({\epsilon}^4)$. The second equality is obtained with the definition $\bar{\epsilon}= g_{\rm EW}/\tilde{g}$, and noting that $\bar{\epsilon}^2=\epsilon^2$ up to contributions of order ${\cal O}(\bar{\epsilon}^4)$. For the results we present in this paper the quantity $\bar{\epsilon}$ is more convenient than $\epsilon$ since $g_{\rm EW}$ is the relevant coupling for the physical states. In Appendix \ref{flavorfunctions} we have collected the expressions relating the gauge and mass eigenbases up to ${\cal O}(\epsilon^5)$, but in the applications here, we will generally neglect ${\cal O}(\bar{\epsilon}^3)$ and higher terms in all expressions. 
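The relation (\ref{g-gew-relation}) and the statement $\bar\epsilon^2=\epsilon^2+{\cal O}(\epsilon^4)$ can be verified numerically; in the sketch below the values of $g$ and $\tilde g$ are purely illustrative:

```python
import math

def g_EW(g, g_tilde, chi=1.0):
    # Physical SU(2)_L coupling from Eq. (g-gew-relation):
    # 1/g_EW^2 = (1/g^2) [1 + (1 + (1-chi)^2)/2 * eps^2], eps = g/g_tilde
    eps = g / g_tilde
    return g / math.sqrt(1.0 + 0.5 * (1.0 + (1.0 - chi)**2) * eps**2)

g, g_tilde = 0.65, 6.5   # illustrative values only
eps = g / g_tilde
eps_bar = g_EW(g, g_tilde) / g_tilde
```

The difference $\bar\epsilon^2-\epsilon^2$ is numerically of order $\epsilon^4$, consistent with the expansion quoted in the text.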
In total, then, the $\Delta F = 2$ FCNC process is described by an effective Lagrangian given by \begin{eqnarray} {\cal L}(\Delta F = 2) = -\frac{G^2_F M^2_W}{4\pi^2} \cdot {\cal F}_0(m_i,m_j,M_W,M_V,M_A,M_\pi) \cdot {\cal O}_{\Delta F = 2}\,, \end{eqnarray} where \begin{eqnarray} {\cal F}_0(m_i,m_j,M_W,\{ M \}) &=& \sum_{i,j} \lambda_i \lambda_j \left[ {\cal F}^{(1)} +X^2 {\cal F}^{(2)} +X^4 {\cal F}^{(3)} \right] \nonumber\\ &=& {\cal F}^{\rm SM}_0(m_i,m_j,M_W) + \Delta {\cal F}_0(m_i,m_j,M_W,\{ M \})\,. \end{eqnarray} Here $X$ is defined in (\ref{Xvariable}), and we have decomposed the result into a SM contribution, ${\cal F}^{\rm SM}_0 \equiv \sum \lambda_i \lambda_j {\cal F}^{(1)}(m_i,m_j,M_W)$, and a contribution arising solely from the new physics, \begin{eqnarray} \Delta {\cal F}_0= \Delta {\cal F}^s_0(m_i,m_j,M_W,M_\pi) + \bar{\epsilon}^2 \Delta {\cal F}^{\rm vm}_0(m_i,m_j,M_W,M_V,M_A,M_\pi) \,. \label{f0def} \end{eqnarray} Furthermore, here $\Delta {\cal F}^s_0$ is the contribution solely from $\pi^\pm$ exchange and $\bar{\epsilon}^2 \Delta {\cal F}^{\rm vm}_0$ is the contribution from diagrams including vector mesons together with the charged pions. When we estimate the flavor observables in our setting, we take into account the unitarity of the CKM matrix by imposing $m_u \to 0$ \cite{Inami:1980fz}. For example, in the case of $F = S$, \begin{eqnarray} {\cal F}_0 = \eta_1 \cdot \lambda^2_c \bar{{\cal F}}_0(m_c,M_W,\{ M \}) + \eta_2 \cdot \lambda^2_t \bar{{\cal F}}_0(m_t,M_W,\{ M \}) + \eta_3 \cdot 2 \lambda_c \lambda_t \bar{{\cal F}}_0(m_c,m_t,M_W,\{ M \}) \,, \end{eqnarray} where $\eta_{1,2,3}$ encode the QCD corrections for each $\bar{{\cal F}}_0$ and are given by $\eta_1 = 1.44\,,\eta_2=0.57\,,\eta_3=0.47$ \cite{Nierste:2009wg}. We note that in the case of $F=B$ we need only $\eta_2 = \eta_B = 0.55$ \cite{Nierste:2009wg}, because the top quark dominates the contributions to the box diagrams for $\Delta B=2$ processes. 
The function $\bar{{\cal F}}_0$ is given by \begin{eqnarray} \bar{{\cal F}}_0(m_i,M_W,\{ M \}) = \lim_{m_u \to 0} \bar{{\cal F}}_0(m_u,m_i,M_W,\{ M\})\,, \end{eqnarray} and we use the charm quark mass $m_c = 1.3 \, \text{GeV}$ \cite{UT-CKM}, and the top quark mass $m_t = 163.4 \, \text{GeV}$ \cite{UT-CKM}. In accordance with \cite{Bona:2007vi}, we define $C_{\epsilon_K,\Delta M_K, B_q}$ as \begin{eqnarray} C_{\epsilon_K} &\equiv& \frac{{\rm Im} \left[ \bra{K^0} {\cal H}^{\rm full}_{\rm eff} \ket{\bar{K^0}}\right]}{{\rm Im} \left[ \bra{K^0} {\cal H}^{\rm SM}_{\rm eff} \ket{\bar{K^0}} \right]} = 1 + \frac{{\rm Im}\left[ \Delta {\cal F}_0(F=S) \right]}{{\rm Im} \left[ {\cal F}^{\rm SM}_0 (F=S)\right]}\,,\label{c1def} \\[1ex] C_{\Delta M_K} &\equiv& \frac{{\rm Re} \left[ \bra{K^0} {\cal H}^{\rm full}_{\rm eff} \ket{\bar{K^0}} \right]}{{\rm Re} \left[ \bra{K^0} {\cal H}^{\rm SM}_{\rm eff} \ket{\bar{K^0}}\right]} = 1 + \frac{{\rm Re}\left[ \Delta {\cal F}_0 (F=S)\right]}{{\rm Re}\left[ {\cal F}^{\rm SM}_0 (F=S)\right]}\,, \\[1ex] C_{B_q} &\equiv& \left| \frac{\bra{B^0_q} {\cal H}^{\rm full}_{\rm eff} \ket{\bar{B^0_q}}}{\bra{B^0_q} {\cal H}^{\rm SM}_{\rm eff} \ket{\bar{B^0_q}}} \right| = \left| 1 + \frac{\Delta {\cal F}_0(F=B)}{{\cal F}^{\rm SM}_0(F=B)} \right| \,,\label{c3def} \end{eqnarray} where ${\cal F}^{\rm SM}_0(F=S,B)$ are the SM contributions, ${\cal F}^{\rm SM}_0$ defined earlier, with $\lambda_i $ in Eq.(\ref{def-lambda_i}) chosen in correspondence with the process indicated in parentheses by $F=S$ or $F=B$. The UT-fit collaboration provides constraints on each $C$ parameter defined above; these are collected in Table \ref{ut-constraints-capri}. 
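Given numerical values for the amplitudes $\Delta{\cal F}_0$ and ${\cal F}^{\rm SM}_0$ (which are treated as inputs here, not recomputed), the $C$ parameters of Eqs.~(\ref{c1def})--(\ref{c3def}) are simple ratios; a minimal sketch:

```python
def C_eps_K(dF0, F0_sm):
    # Eq. (c1def): NP shift of the imaginary (CP-violating) part
    return 1.0 + dF0.imag / F0_sm.imag

def C_dM_K(dF0, F0_sm):
    # C_{Delta M_K}: NP shift of the real (mass-difference) part
    return 1.0 + dF0.real / F0_sm.real

def C_Bq(dF0, F0_sm):
    # Eq. (c3def): magnitude of the full over SM amplitude ratio
    return abs(1.0 + dF0 / F0_sm)
```

In the SM limit $\Delta{\cal F}_0 \to 0$ all three parameters reduce to unity, which is the reference point of the UT-fit intervals quoted in Table~\ref{ut-constraints-capri}.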
\begin{table} \begin{center} \begin{tabular}{|c||c|c|} \hline \rule[-9pt]{0pt}{24pt} parameter & $68\%$ probability & $95\%$ probability \\ \hline \rule[-9pt]{0pt}{24pt} $C_{\epsilon_K}$ & $1.09 \pm 0.13$ & [0.86, 1.39] \\ \hline \rule[-9pt]{0pt}{24pt} $C_{\Delta M_K}$ & $1.17 \pm 0.42$ & [0.61, 2.43] \\ \hline \rule[-9pt]{0pt}{24pt} $C_{B_d}$ & $0.97 \pm 0.14$ & [0.72, 1.30] \\ \hline \rule[-9pt]{0pt}{24pt} $C_{B_s}$ & $0.96 \pm 0.10$ & [0.79, 1.18] \\ \hline \end{tabular} \caption{ Constraints for $C_{\epsilon_K},C_{\Delta M_K}$ and $C_{B_q}$ \cite{UT-Capri2010}. \label{ut-constraints-capri} } \end{center} \end{table} Finally, the CKM matrix is given by \cite{UT-CKM} \begin{eqnarray} V_{\rm CKM}= \begin{pmatrix} 0.97427 & 0.22535 & 0.00377 e^{i(-70.0^\circ) } \\ -0.22525 e^{i (-0.0358^\circ) } & 0.9734 & 0.04082 \\ 0.00869 e^{i(-23.3^\circ) } & -0.04007 e^{i(-1.138^\circ) } & 0.99916 \end{pmatrix}\,. \end{eqnarray} \section{Numerical results for Bosonic NMWT} \label{numresults} Let us start with the oblique corrections. As already mentioned, in our case, the $S$ parameter is given by \begin{eqnarray} S = S_{V,A} + S_S\,, \end{eqnarray} where $S_{V,A}$ indicates the contribution from the vector mesons $V,A$ and $S_S$ indicates the contribution from the scalar mesons $\pi$ and the Higgs bosons. On the other hand, $S_{V,A}$ is related to the 0th Weinberg sum rule, because this sum rule is derived from the vector and axial-vector current correlation functions with vector meson saturation, and does not receive any contribution from the physical scalar mesons. The 0th Weinberg sum rule reads \cite{Foadi:2007ue} \begin{eqnarray} S_{V,A} = 4\pi \left[ \frac{F^2_V}{\tilde{M}^2_V} - \frac{F^2_A}{\tilde{M}^2_A}\right] = \frac{8 \pi}{\tilde{g}^2} \left[ 1 - (1 - \chi)^2\right] +{\cal O}(\bar{\epsilon}^2) \,. 
\label{0th-WSR} \end{eqnarray} The parameter $\chi$ is related to the parameters of the underlying theory, but is in principle free to take any value and hence $S_{V,A}$ can be vanishing or even negative. For our analysis, we will take $S_{V,A} = 0$, i.e. $\chi=1$. The calculation of $S_S$ was detailed in Sec. \ref{obliques}, and here we turn directly to the numerical results. We demand that the values of $S$ and $T$ are within the 90\% confidence limit as obtained by the PDG \cite{Amsler:2008zzb}. The resulting constraints for the pion mass as well as the masses of the scalars are shown in figure \ref{STconstraint}. As already noted in \cite{Antola:2009wq}, the electroweak precision constraints favor a scenario of one light and one heavy scalar particle, whereas the pion mass is not very strictly limited by the electroweak precision data. \begin{figure}[h] \begin{center} \includegraphics[scale=0.4]{mhms.pdf} \includegraphics[scale=0.4]{mpi.pdf} \end{center} \caption{Left panel: The masses of the scalars $h^0$ and $H^0$ after the constraints from $S$ and $T$ have been applied. The points marked by green circles are ruled out by direct detection limits from LEP and Tevatron, assuming SM-Higgs like branching ratios. The red crosses are allowed. Right panel: the mass of the pion and the variable $X$ as defined in equation (\ref{Xvariable}), after the electroweak precision constraints and the direct detection limits have been applied. } \label{STconstraint} \end{figure} Then we turn to the analysis of the flavor constraints. As detailed in Sec. \ref{flavor}, we have six parameters at our disposal. These are $X, M_V, M_A, M_\pi, \tilde{g}$ and $\chi$. To reduce the number of free parameters further we apply Weinberg sum rules. 
In addition to the 0th sum rule given above, the 1st Weinberg sum rule is given by \cite{Foadi:2007ue} \begin{eqnarray} v^2_{\rm EW} = F^2_V - F^2_A \label{1WSR} \,, \end{eqnarray} where $v_{\rm EW}$ satisfies $v^2_{\rm EW} = 4 \tilde{M}^2_W/\tilde{g}^2_{\rm EW} = 4 M^2_W/g^2_{\rm EW}$. So the 1st Weinberg sum rule becomes \begin{eqnarray} M^2_W = \dfrac{\epsilon^2}{2} \left[ M^2_V - (1-\chi)^2 M^2_A \right]\,, \label{1WSR-mass} \end{eqnarray} where we neglect ${\cal O} (\bar{\epsilon}^4)$ contributions. Thus, with the two Weinberg sum rules we reduce the parameters as follows: with Eq.(\ref{0th-WSR}) and given $S_{V,A}=0$, we have $\chi=1$. Then via (\ref{1WSR-mass}) we express $M_V$ in terms of the other parameters, and this leaves $\tilde{g}, M_A, M_\pi$ and $X$ as free parameters. For our analysis, we choose certain benchmark values for $\tilde{g}$ and $M_A$ and then scan the $(m_\pi,X)$-plane. For each set of parameters we evaluate $\Delta{\cal F}$ as defined in (\ref{f0def}), and then use the constraints on (\ref{c1def})-(\ref{c3def}) to see if the corresponding point in the parameter space is viable. Results of this procedure for $\chi=1$, $\tilde{g}=1,5$ and $M_A=400$ GeV or 600 GeV are shown in Fig.\ref{fig-UTfitconstraints}. We have checked that there is no significant difference from the present results if the value of $S_{V,A}$ is varied from zero to $1/\pi$, where $S =1/\pi$ corresponds to the naive perturbative value of $S$ in the NMWT model. Regarding the difference in the flavor constraints when varying $S=1/\pi$ to $S=0$, we note the following: the lower bound in the figures coming from $C_{\epsilon_K}$ is slightly larger with $S=0$ than with $S=1/\pi$, while the upper bound coming from $C_{B_s}$ is slightly smaller with $S=0$ than with $S=1/\pi$. These differences are only at the level of ${\cal O}(10^{-2} \%)$ for a fixed value of $m_\pi$, independently of the values of $\tilde{g},M_A$. 
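The parameter reduction via the sum rules can be summarized in a short numerical sketch (masses in GeV; the inputs are illustrative benchmark values, not fits):

```python
import math

def M_V_from_wsr(M_W, M_A, eps_bar, chi=1.0):
    # 1st Weinberg sum rule, Eq. (1WSR-mass):
    # M_W^2 = (eps^2 / 2) [M_V^2 - (1 - chi)^2 M_A^2], solved for M_V
    return math.sqrt(2.0 * M_W**2 / eps_bar**2 + (1.0 - chi)**2 * M_A**2)

# For chi = 1 (i.e. S_{V,A} = 0) the axial mass drops out and
# M_V = sqrt(2) M_W / eps_bar; illustrative inputs:
M_V = M_V_from_wsr(M_W=80.4, M_A=600.0, eps_bar=0.2)
```

This makes explicit how fixing $\chi$ through the 0th sum rule and eliminating $M_V$ through the 1st leaves $\tilde g$, $M_A$, $M_\pi$ and $X$ as the free parameters of the scan.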
However, the constraints on $S$ and $T$ from precision data are more stringent: allowing for $S_{V,A}\simeq 0.1$ in addition to the scalar contribution would require the presence of a light scalar with mass below 100 GeV, which would be difficult to reconcile with phenomenology even if its couplings were different from those of a Standard Model Higgs. In general, the larger the value assumed for $S_{V,A}$, the smaller the mass of the lighter scalar state needs to be. For example, if we take $S_{V,A}=0.05$, the parameter space points where the lightest scalar mass is of the order of 170 GeV are ruled out and we are left with points where the mass is of the order of 130 GeV or less. Thus, depending on the value of $S_{V,A}$, the parameter space may be highly restricted by the combination of electroweak precision observables (favoring the existence of a light scalar) and SM Higgs searches. A similar tension on the Higgs mass due to precision measurements and direct observation constraints is also present in the SM itself, as well as in typical supersymmetric extensions of the SM. \begin{figure}[h] \begin{center} \begin{tabular}[b]{cc} { \begin{minipage}[t]{0.5\textwidth} \includegraphics[scale=0.35]{UTC-fc1400s0.pdf} \end{minipage} }{ \begin{minipage}[t]{0.5\textwidth} \includegraphics[scale=0.35]{UTC-fc1600s0.pdf} \end{minipage} } \\[1ex] { \begin{minipage}[t]{0.5\textwidth} \includegraphics[scale=0.35]{UTC-fc5400s0.pdf} \end{minipage} }{ \begin{minipage}[t]{0.5\textwidth} \includegraphics[scale=0.35]{UTC-fc5600s0.pdf} \end{minipage} } \end{tabular} \end{center} \caption{ The UT-fit constraints coming from $C_{\epsilon_K,\Delta M_K, B_q}$ ($68\%$ probability) on the $(M_\pi,X)$-plane for $(\tilde{g},M_A) = (1,400 \, \text{GeV})$ [top-left], $(1,600 \, \text{GeV})$ [top-right], $(5,400\, \text{GeV})$ [bottom-left] and $(5,600 \, \text{GeV})$ [bottom-right] with $S _{V,A}= 0$. The shaded region is the allowed region. 
We show the $C_{\epsilon_K,\Delta M_K, B_q}$ ($68\%$ probability) constraints in all panels. In the bottom panels, the lower constraint coming from $C_{\epsilon_K}$ disappears. \label{fig-UTfitconstraints} } \end{figure} \begin{figure}[h] \begin{center} \begin{tabular}[b]{cc} { \begin{minipage}[t]{0.5\textwidth} \includegraphics[scale=0.35]{UTC-fc1400s0-2.pdf} \end{minipage} }{ \begin{minipage}[t]{0.5\textwidth} \includegraphics[scale=0.35]{UTC-fc1600s0-2.pdf} \end{minipage} } \\[1ex] { \begin{minipage}[t]{0.5\textwidth} \includegraphics[scale=0.35]{UTC-fc5400s0-2.pdf} \end{minipage} }{ \begin{minipage}[t]{0.5\textwidth} \includegraphics[scale=0.35]{UTC-fc5600s0-2.pdf} \end{minipage} } \end{tabular} \end{center} \caption{ Constraints from flavor and electroweak oblique corrections on the $(M_\pi,X)$-plane for $(\tilde{g},M_A) = (1,400 \, \text{GeV})$ [top-left], $(1,600 \, \text{GeV})$ [top-right], $(5,400\, \text{GeV})$ [bottom-left] and $(5,600 \, \text{GeV})$ [bottom-right] with $S _{V,A}= 0$. The red diamonds correspond to the red crosses in the right panel of Fig.\ref{STconstraint}. The shaded regions correspond to the shaded region in Fig.\ref{fig-UTfitconstraints}. \label{fig-UTconstraint-w-EWPT} } \end{figure}% As one can see from Fig. \ref{fig-UTfitconstraints}, the most severe upper bound comes from $C_{B_s}$ and the lower $68\%\, \rm C.L.$ constraint comes from $C_{\epsilon_K}$ in Table \ref{ut-constraints-capri}. The lower constraint coming from $C_{\epsilon_K}$ disappears if $\tilde{g} \geq 2$ or $M_A \geq 650 \, \text{GeV}$, so in this case it is enough to take into consideration the flavor constraint coming solely from the upper $C_{B_s}$ bound. In Fig.\ref{fig-UTconstraint-w-EWPT} we show the flavor constraints (shaded region) of Fig.\ref{fig-UTfitconstraints} together with the constraints (red diamonds) from the electroweak oblique corrections for the scalar sector in Fig.\ref{STconstraint}. 
Therefore our analysis indicates that electroweak oblique corrections and flavor constraints taken together prefer \begin{eqnarray} \begin{cases} -1< X<0 \quad \text{and} \quad 300 \, \text{GeV} < m_\pi < 1.6 \, \text{TeV} & \text{for any $\tilde{g}$ and $M_A$} \,, \\[2ex] \,1\,<\,X\,<\,2 \quad \text{and} \quad 800 \, \text{GeV} < m_\pi < 2 \, \text{TeV} & \text{for} \quad \tilde{g} \leq 2 \quad \text{and} \quad M_A \leq 650 \, \text{GeV}\,, \\[2ex] 0.5<X<1.5 \quad \text{and} \quad 800 \, \text{GeV} < m_\pi & \text{for} \quad \tilde{g} \geq 2 \quad \text{or} \quad M_A \geq 650 \, \text{GeV}\,. \end{cases} \end{eqnarray} In any of these cases, we see that it is possible that $m_\pi > M_A$ in the bosonic NMWT model. This ordering in the spectrum would differ from the case of QCD-like theories. \section{Conclusions and outlook} In this paper we have revisited the bosonic next-to-minimal Walking Technicolor (NMWT) model introduced in \cite{Antola:2009wq}. We have extended the viability analysis of the model by considering the flavor constraints. The model, and the analysis carried out here, illustrate an important generic feature of flavor constraints in the Technicolor context: while there are contributions due to the extension of the basic Technicolor theory, there are also intrinsic contributions arising from the Technicolor theory itself. These latter contributions originate from the vector mesons which mix with the electroweak gauge bosons. The former contributions were evaluated in this paper for bosonic Technicolor, but clearly a similar analysis should be carried out for models where fermion mass generation is due to extended Technicolor (ETC) interactions. As a result of our analysis we conclude that bosonic NMWT remains a viable low-energy model able to generate the masses of the SM matter fields. 
Of course, the hierarchical mass patterns remain unexplained in the sense that they are merely parametrized in terms of Yukawa couplings, similarly to, e.g., the (MS)SM. Our results show that imposing the flavor constraints together with the constraints from oblique corrections provides bounds on the mixing patterns of the scalar states and the masses of the physical pions.
\section{INTRODUCTION} Many precise results from several years of successful $B$--factories' running disfavor significant ($\gessim 10\%$) contributions from ``New Physics'' (NP) in tree-dominated \bgmeson\ decays. Agreement with the Standard Model (SM) is also found in higher-order processes such as $K^0$--$\overline{K}^0$ or \bd--\abd\ transitions due to second-order weak interactions (mixing) that involve virtual massive particles and may receive contributions from NP. A less clear picture is available for the \bs\ system. The strength of NP contributions in \bs--\abs\ mixing is constrained by the precise measurement of the oscillation frequency \cita{mixing}, which disfavors large magnitudes of NP amplitudes. However, knowledge of only the frequency leaves the phase of the mixing amplitude unconstrained. Indeed, possible large NP phases are currently not excluded. The mixing phase is accessible through the time-evolution of \ensuremath{\bs \to \ensuremath{J/\psi} \phi}\ decays, which is sensitive to the relative phase between the mixing and the $\bar{b}\to \bar{c}c\bar{s}$ quark-level transition, $\betas = \betasSM+\betasNP$. Such phase is responsible for \CP-violation and is $\betasSM=\arg(-V_{ts}V_{tb}^{*}/V_{cs}V_{cb}^{*}) \approx 0.02$ in the SM \cita{globalfit}; any sizable deviation from this value would be unambiguous evidence of NP \cita{theory}. If NP contributes a phase ($\betasNP$), this would also enter $\phis = \ensuremath{\phi_{s}^{\mathrm{SM}}} - 2\betasNP$, which is the phase difference between mixing and decay into final states common to \bs\ and \abs, and is tiny in the SM: $\ensuremath{\phi_{s}^{\mathrm{SM}}} = \arg(-M_{12}/\Gamma_{12}) \approx 0.004$ \cita{constraint}. 
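These phase relations can be sketched numerically; the inputs below are the SM central values quoted in the text, and the width-difference relation $\Delta\Gamma = 2|\Gamma_{12}|\cos\phis$ is the standard one:

```python
import math

PHI_S_SM = 0.004   # arg(-M12/Gamma12) in the SM, as quoted in the text
BETA_S_SM = 0.02   # arg(-V_ts V_tb^* / V_cs V_cb^*), as quoted

def phi_s(beta_s_np, phi_s_sm=PHI_S_SM):
    # phi_s = phi_s^SM - 2 beta_s^NP; for sizable NP, phi_s ~ -2 beta_s
    return phi_s_sm - 2.0 * beta_s_np

def delta_gamma(gamma12_abs, phi):
    # Width difference DeltaGamma = 2 |Gamma_12| cos(phi_s), in ps^-1
    return 2.0 * gamma12_abs * math.cos(phi)

# With |Gamma_12| = 0.048 ps^-1 one recovers DeltaGamma^SM ~ 0.096 ps^-1
dg_sm = delta_gamma(0.048, phi_s(0.0))
```

Because $\cos\phis$ is quadratic around zero, even a sizable NP phase changes $\Delta\Gamma$ only mildly, which is why the time-dependent angular analysis, rather than the width difference alone, carries the sensitivity to \betas.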
The phase \phis\ enters the decay-width difference between light and heavy states, $\Delta\Gamma=\Gamma_L-\Gamma_H=2|\Gamma_{12}|\cos(\phis)$, which is $\ensuremath{\Delta\Gamma^{\mathrm{SM}}} \approx 2|\Gamma_{12}| = 0.096 \pm 0.036$ ps$^{-1}$ in the SM \cita{constraint} and plays a r\^{o}le in \ensuremath{\bs \to \ensuremath{J/\psi} \phi}\ decays. Since the SM values for \betas\ and \phis\ cannot be resolved at the precision of current experiments, the following approximation is used: $\phis \approx -2\betasNP \approx -2\betas$, which holds in the case of sizable NP contributions. \par This measurement of \betas\ is analogous to the determination of the phase $\beta=\arg(-V_{cd}V_{cb}^{*}/V_{td}V_{tb}^{*})$ in $\bd \to \ensuremath{J/\psi}\ensuremath{K^0_S}$ decays, except for a few additional complications: the oscillation frequency is about 35 times higher in \bs\ than in \bd\ mesons, requiring excellent decay-time resolution; the decay of a pseudoscalar meson (\bs) into two vector mesons ($\ensuremath{J/\psi}$ and $\phi$) produces two \CP-even states (orbital angular momentum $L = 0,2$) and one \CP-odd state ($L=1$), which should be separated for maximum sensitivity; and the SM expectation for $\betas$ is approximately $30$ times smaller \cita{bigi-sanda} than the known $\beta$ value \cita{hfag}. \section{SIGNAL SELECTION} The CDF experiment at the Fermilab Tevatron performed the first measurement of the time-evolution of flavor-tagged $\bs \to \ensuremath{J/\psi}(\to\mu^+\mu^-) \phi(\to K^+K^-)$ decays \cita{PRL_betas_CDF}. These were reconstructed in \pap\ collision data corresponding to a time-integrated luminosity of 1.35 \lumifb.
Events enriched in \ensuremath{J/\psi}\ decays are selected by a trigger that requires the spatial matching between a pair of two-dimensional, oppositely-curved tracks in the multi-wire drift chamber (coverage $|\eta|<1$) and their extrapolation outward to track-segments reconstructed in the muon detectors (drift chambers and scintillating fibers). In the offline analysis, a kinematic fit to a common space-point is applied between the candidate \ensuremath{J/\psi}\ and another pair of tracks consistent with kaons originating from a $\phi$ meson decay. The measurement of specific energy loss by ionization in the drift chamber (\ensuremath{\mathit{dE/dx}}) provides $1.5\sigma$ separation between charged kaons and pions with momenta $p>2~\pgev$. At lower momenta, scintillator bars surrounding the chamber measure arrival times of charged particles (time-of-flight, TOF) with approximately 110 ps resolution, providing separation between kaons and pions in excess of $2\sigma$. An artificial neural network trained on simulated data (for signal, $S$) and on \bs\ mass sidebands (for background, $B$) is used for an unbiased optimization of the selection. The quantity $S/\sqrt{S+B}$ is maximized using kinematic and particle identification (PID) information. Attempts to use the average statistical resolution on $\betas$ observed in ensembles of pseudoexperiments as the figure of merit were inconclusive because of irregularities of the likelihood (see below). Discriminating observables include kaon-likelihood, from the combination of \ensuremath{\mathit{dE/dx}}\ and TOF information; transverse momenta of the \bs\ and $\phi$ mesons; the $K^+K^-$ mass; and the quality of the vertex fit. The final sample contains approximately 2000 signal events over a comparable background (\fig{yield} (a)).
Seven layers of silicon sensors extending radially up to 22 cm, and the drift chamber that provides 96 measurements between 30 and 140 cm, all immersed in the 1.4 T axial magnetic field, provide a mass resolution of approximately 10 \massmev\ on the \ensuremath{\bs \to \ensuremath{J/\psi} \phi}\ peak. \begin{figure} \begin{overpic}[bb= 0 0 567 545, scale=0.39, clip=true]{bs_mass.eps} \put(40,200){(a)} \end{overpic} \begin{overpic}[bb= 0 6 567 500, scale=0.432, clip=true]{LR.eps} \put(55,200){(b)} \put(160, 7){\Large $-$} \end{overpic} \caption{\label{fig:yield} Mass distribution with fit projection and sideband regions overlaid (a). Distribution of $-2\Delta\ln(L_p)$ observed in pseudo-experiments (solid black line) compared with the nominal $\chi^2$ distribution (solid green line) (b). The effect of sampling the nuisance parameters within $5\sigma$ of their estimates in data is shown by the dashed colored lines. The vertical lines indicate the different values of $-2\Delta\ln(L_p)$ to be used in the ideal $\chi^2$ case (green), in the observed distribution (black), and in the observed distribution including systematic uncertainties (red), to obtain a 95\% C.L. region. The distribution shown corresponds to a specific $(\betas, \Delta\Gamma)$ point, but similar distributions have been observed across the whole space.} \end{figure} \section{FITTING THE TIME EVOLUTION} The sensitivity to the mixing phase is enhanced if the evolution of \CP-even eigenstates, \CP-odd eigenstates, and their interference is separated. CDF uses the angular distributions of final state particles to statistically determine the \CP-composition of the signal. The angular distributions are studied in the transversity basis, which allows a convenient separation between \CP-odd and \CP-even terms in the equations of the time-evolution \cita{transversity}.\par Sensitivity to the phase increases if the evolutions of bottom-strange mesons produced as \bs\ or \abs\ are studied independently. 
The time development of flavor-tagged decays contains terms proportional to $\sin(2\betas)$, reducing the ambiguity with respect to the untagged case ($\propto |\sin(2\betas)|$). Building on techniques used in the \bs\ mixing frequency measurement \cita{mixing}, the production flavor is inferred using two classes of algorithms. Opposite-side tags exploit \ensuremath{b\bar{b}}\ pair production, the dominant source of \bhadrons\ at the Tevatron, and estimate the production flavor from the charge of decay products ($e$, $\mu$, or jet) of the $b$--hadron produced from the other $b$--quark in the event. Same-side tags rely on the charges of associated particles produced in the fragmentation of the $b$--quark that hadronizes into the candidate \bs\ meson. The tagging power, $\epsilon D^2 \approx 4.5\%$, is the product of an efficiency $\epsilon$, the fraction of candidates with a flavor tag, and the square of the dilution $D=1-2w$, where $w$ is the mistag probability. Multiple tags, if present, are combined assuming they are independent. The proper time of the decay and its resolution are known on a per-candidate basis from the position of the decay vertex, which is determined with an average resolution of approximately 27 \mum\ (90 fs) in \ensuremath{\bs \to \ensuremath{J/\psi} \phi}\ decays, owing to the first layer of the silicon detector at 1.6 cm radius from the beam.\par Information on the \bs\ candidate mass and its uncertainty, the angles between the final-state particles' trajectories (to extract the \CP-composition), the production flavor, and the decay length and its resolution are used as observables in a multivariate unbinned likelihood fit of the time evolution that accounts for the direct decay amplitude, mixing followed by the decay, and their interference. Direct \CP-violation is expected to be small and is not considered. The fit determines the phase $\betas$, the decay-width difference $\Delta\Gamma$, and 25 other ``nuisance'' parameters ($\vec{\nu}$).
These include the mean \bs\ decay-width ($\Gamma = (\Gamma_L + \Gamma_H)/2$), the squared magnitudes of the linear polarization amplitudes ($|A_0|^2$, $|A_{\parallel}|^2$, $|A_{\perp}|^2$), the \CP-conserving (``strong'') phases ($\delta_{\parallel} = \arg(A_{\parallel} A_{0}^{*})$, $\delta_{\perp} = \arg(A_{\perp}A_{0}^{*})$), and others. The acceptance of the detector is calculated from a Monte Carlo simulation and found to be consistent with observed angular distributions of random combinations of four tracks in data; the angular-mass-lifetime model was validated by measuring lifetime and polarization amplitudes in 7800 \ensuremath{\bd \to \ensuremath{J/\psi} K^{*0}}\ decays, which show angular features similar to the \bs\ sample: $c\tau(\bd) = 456 \pm 6 \stat \pm 6 \syst \mum$, $|A_0|^2 = 0.569 \pm 0.009 \stat \pm 0.009 \syst$, $|A_\parallel|^2 = 0.211 \pm 0.012 \stat \pm 0.006 \syst$, $\delta_\parallel = -2.96 \pm 0.08 \stat \pm 0.03 \syst$, and $\delta_\perp = 2.97 \pm 0.06 \stat \pm 0.01 \syst$. The results, consistent and competitive with the most recent $B$--factories' results \cita{bdjpsikstar}, support the reliability of the model. Additional confidence is provided by the precise measurement of the lifetime and width-difference in untagged \ensuremath{\bs \to \ensuremath{J/\psi} \phi}\ decays \cita{untagged}. \section{STATISTICAL ISSUES} Tests of the fit on simulated samples show biased, non-Gaussian distributions of the estimates and multiple maxima, because the likelihood is invariant under the transformation $\mathcal{T} = (2\betas\to\pi-2\betas, \Delta\Gamma\to-\Delta\Gamma, \delta_\parallel\to2\pi-\delta_\parallel,\delta_\perp\to\pi-\delta_\perp)$, and the resolution on \betas\ was found to depend crucially on the true values of \betas\ and $\Delta\Gamma$. CDF therefore quotes a frequentist confidence region in the ($\betas, \Delta\Gamma$) plane rather than point-estimates for these parameters.
Obtaining a correct and meaningful region from a multidimensional likelihood is challenging: one should construct the full 27-dimensional region, a computationally difficult task, and project it onto the ($\betas,\Delta\Gamma$) plane. The choice of the ordering algorithm is critical to prevent the projection from covering most of the ($\betas,\Delta\Gamma$) space, which would yield a scarcely informative result. A common approximate method is to replace the likelihood, $L(\betas,\Delta\Gamma, \vec{\nu})$, with the \emph{profile} likelihood, $L_{p}(\betas,\Delta\Gamma, \hat{\vec{\nu}})$. For every point in the ($\betas,\Delta\Gamma$) plane, $\hat{\vec{\nu}}$ are the values of the nuisance parameters that maximize the likelihood. Then $-2\Delta\ln(L_p)$ is typically used as a $\chi^2$ variable to derive confidence regions in the two-dimensional space ($\betas,\Delta\Gamma$). However, the simulation shows that in the present case the approximation fails: the resulting regions contain the true values with lower probability than the nominal confidence level (C.L.) because the $-2\Delta\ln(L_p)$ distribution has longer tails than a $\chi^2$, and is not even independent of the true values of the nuisance parameters (\fig{yield} (b)). A full confidence region construction is therefore needed, using simulation of a large number of pseudo-experiments to derive the actual distribution of $-2\Delta\ln(L_p)$, at the risk of excessively weakening the results once systematic uncertainties are included. However, in a full confidence region construction, the use of $-2\Delta\ln(L_p)$ as the ordering function is close to optimal for limiting the impact of systematic uncertainties \cita{stat}.
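As a toy illustration of the profiling step described above, consider a two-parameter Gaussian model with mean $\mu$ (parameter of interest) and width $\sigma$ (nuisance parameter); this is a minimal sketch, not CDF's 27-parameter fit, and all numbers in it are illustrative. For each fixed $\mu$, the nuisance parameter is set to the value maximizing the likelihood before $-2\Delta\ln(L_p)$ is formed with respect to the global maximum:

```python
import math
import random

# Toy profile-likelihood construction for a Gaussian sample with unknown
# mean mu (parameter of interest) and width sigma (nuisance parameter).
random.seed(1)
n = 500
data = [random.gauss(5.0, 2.0) for _ in range(n)]

def neg2_log_profile(mu):
    # For fixed mu, the ML estimate of sigma^2 is the mean squared
    # deviation from mu, so the inner maximization is done analytically.
    s2_hat = sum((x - mu) ** 2 for x in data) / n
    return n * math.log(s2_hat)   # -2 ln L_p, up to a mu-independent constant

mu_hat = sum(data) / n            # global maximum-likelihood estimate of mu

def delta(mu):                    # -2 Delta ln(L_p); zero at the best fit
    return neg2_log_profile(mu) - neg2_log_profile(mu_hat)

assert abs(delta(mu_hat)) < 1e-12                     # minimum at mu_hat
assert delta(mu_hat - 0.5) > 0 and delta(mu_hat + 0.5) > 0
```

In well-behaved models such as this one, $-2\Delta\ln(L_p)$ is asymptotically $\chi^2$-distributed; the pseudo-experiments discussed in the text are needed precisely because that asymptotic approximation fails for the actual fit.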
With this method, CDF is able to rigorously account for the effect of systematic uncertainties just by randomly sampling a limited number of points in the space of all nuisance parameters: a specific value $(\betas, \Delta\Gamma)$ is excluded only if it can be excluded for any assumed value of the nuisance parameters within $5\sigma$ of their estimate on data. The result is a \betas--$\Delta\Gamma$ contour that is the truly two-dimensional projection of the full, 27-dimensional confidence region. \begin{figure} \begin{overpic}[width=0.475\textwidth,angle=0]{2d_contours.eps} \put(200,200){(a)} \end{overpic} \begin{overpic}[bb= 75 -13.5 437 312, scale=0.72, clip=true]{discovery2c.eps} \put(40,200){(b)} \put(200,3){\large \boldmath $\betas$\unboldmath} \end{overpic} \caption{\label{fig:contours}Confidence region in the $(\betas,\Delta\Gamma)$ plane obtained with 2.8 \lumifb\ of CDF data (a). The green band is the region allowed by any NP contribution not entering $|\Gamma_{12}|$, and assuming $2|\Gamma_{12}|= 0.096 \pm 0.036$ ps$^{-1}$ \cita{constraint}. Fraction of CDF-equivalent experiments that would observe a $5\sigma$ deviation from the SM as a function of the value of \betas\ in a sample corresponding to 6 \lumifb\ (black line) or 8 \lumifb\ (red line) of integrated luminosity (b).} \end{figure} \section{RESULTS} The results on 1.35 \lumifb\ show a 1.5$\sigma$ fluctuation with respect to the SM values \cita{PRL_betas_CDF}. Considering $\Delta\Gamma$ as an additional nuisance parameter, the 68\% C.L. allowed region for the mixing phase is $0.16 < \betas < 1.41$, which restricts to $\betas \in[0.12,0.68]\cup[0.89,1.45]$ assuming no NP contributions in $|\Gamma_{12}|$ (\ie\ constraining $2|\Gamma_{12}|$ to $0.096\pm0.036$ ps$^{-1}$ \cita{constraint}). 
This result has been confirmed by the D\O\ Collaboration \cita{betas_D0}, which observed a consistent fluctuation in an analysis where the two-fold symmetry of the likelihood is removed by assuming an additional theoretical constraint between the strong phases of \ensuremath{\bs \to \ensuremath{J/\psi} \phi}\ and \ensuremath{\bd \to \ensuremath{J/\psi} K^{*0}}\ decays \cita{gronau}. After removing this assumption, the CDF and D\O\ results can be combined, yielding a $2.2\sigma$ fluctuation with respect to the SM and the following 68\% C.L. range: $\betas \in[0.24,0.57]\cup[0.99,1.33]$ \cita{hfag}. \par CDF has reported at this conference a partial extension of the analysis to a larger sample, corresponding to 2.8 \lumifb. This is approximately equivalent to 2.0 \lumifb\ effective luminosity, because the calibration of \ensuremath{\mathit{dE/dx}}\ and TOF was unavailable for the whole sample, and PID information is used neither in the selection nor in the flavor tagging for the second half of the dataset. More than 3200 decays are reconstructed, and approximately 4000 are expected once PID becomes available in the selection. \Fig{contours} (a) shows the results. The two regions symmetric with respect to the $(\pi/4, 0)$ point reflect the symmetry of the likelihood, which cannot determine from data whether $\cos(\delta_\perp)<0$ and $\cos(\delta_\perp -\delta_\parallel)>0$ (corresponding to the $\Delta\Gamma>0$ solution) or \emph{vice versa} ($\Delta\Gamma<0$). The fluctuation with respect to the SM is confirmed and strengthened, reaching the 1.8$\sigma$ level.
The updated analysis restricts the allowed regions for the phase to the range $0.28 < \betas < 1.29$ at the 68\% C.L.\par Although the observed deviations are not yet significant, the pattern of independent results showing consistent fluctuations in the same direction is promising in view of the analysis of the full dataset, expected to reach approximately 6 \lumifb\ by 2009, or 8 \lumifb\ by 2010 if Run II of the Tevatron is extended. \Fig{contours} (b) shows the probability of a $5\sigma$ exclusion of the SM at CDF as a function of the value of \betas\ in these two scenarios, assuming $\Delta\Gamma = 0.1$ ps$^{-1}$. This extrapolation, which assumes no external constraints and no improvements in the analysis, is conservative: CDF is improving the analysis, with significantly increased tagging power, a 50\% additional signal collected by other triggers, and the possibility of resolving the strong-phase ambiguity using data \cita{strong}; tight external constraints (\eg\ on the \bs\ lifetime) can be applied, and the CDF and D\O\ results will be combined for maximum Tevatron sensitivity. As has happened in the past, deviations from expectations in measurements of lower-energy processes may indicate NP prior to the direct discovery of new resonances, such as those expected in the forthcoming run of the Large Hadron Collider \cita{NP}. \begin{acknowledgments} I would like to thank my CDF colleagues, who provided valuable suggestions during the preparation of this manuscript, and M.~Gronau for spotting an error in the definition of the strong phases. \end{acknowledgments}
\section{Introduction} Solutions of {\it certain} linear elliptic boundary value problems can be expanded in complete sets of eigenfunctions. Unfortunately, the actual form of these eigenfunctions is known for only simple geometries. In fact, only geometries that allow separation of variables yield well known expressions for the associated eigenfunctions. But what happens when separation of variables does not apply? Is it possible to construct the spectral characteristics of a fundamental domain that does not fit any separable coordinate system? Some examples where this construction is possible are presented in the present work. The approach used here has its roots in the unified transform method for analysing both linear and integrable nonlinear PDEs introduced in [4]. A crucial role in this analysis is played by a certain equation coupling all boundary values, which was called the global relation in [4]. The concrete form of this equation for the equilateral triangle was given in the important work of [14], where it was called a functional equation. A general overview of the problems solved in this paper is presented in the sequel, where notations and some elementary formulae are included in order to facilitate the understanding of the new results. We study boundary value problems for the Laplace, the Helmholtz and the modified Helmholtz equations in the interior of an equilateral triangle. These equations are three of the basic equations of classical mathematical physics. In particular, they arise as the reduction of several fundamental parabolic and hyperbolic linear equations. Furthermore, the specific boundary conditions discussed here cover most cases of physical significance. We first introduce some notations. \subsection{Notations and Useful Identities} (i) $z$ will denote the usual complex variable and $\alpha$ will denote one of the complex roots of unity, $$ z = x+iy, \quad \alpha = e^{\frac{2i\pi}{3}} = - \half + \frac{i\sqrt{3}}{2}.
\eqno (1.1)$$ Bar will denote complex conjugation, in particular $$ \bar z = x-iy, \quad \bar \alpha = e^{- \frac{2i\pi}{3}}. $$ $\overline{F(\bar k)}$ will denote the {\it Schwarz conjugate} of the function $F(k)$. (ii) The complex numbers $$ z_1 = \frac{l}{\sqrt{3}} e^{\frac{i\pi}{3}}, \quad z_2 = \bar z_1, \quad z_3 = - \frac{l}{\sqrt{3}}, \eqno (1.2)$$ will denote the vertices of the equilateral triangle, and $D \subset {\mathbb{C}}$ will denote the interior of the triangle. The length of each side is $l$. The sides $(z_2,z_1)$, $(z_3,z_2)$, $(z_1,z_3)$ will be referred to as sides (1), (2), (3) respectively. \begin{center} \begin{minipage}[b]{6cm} \psfrag{x}{$x$} \psfrag{y}{$y$} \psfrag{a}{$z_{1}$} \psfrag{b}{$z_{2}$} \psfrag{c}{$z_{3}$} \psfrag{A}{$l/2\sqrt{3}$} \psfrag{B}{$-l/\sqrt{3}$} \psfrag{T}{$\hat{T}$} \psfrag{N}{$\hat{N}$} \centerline{\includegraphics{fig1.eps}} \centerline{\textbf{Figure 1.1:} The fundamental domain $D$} \end{minipage} \end{center} (iii) On each side we identify the positive direction ${\mathbf{\hat T}}$ and the outward normal $ {\mathbf{\hat N}}$ as in Figure 1.1. The functions $$ q^{(j)}(s), \quad q^{(j)}_N(s), \quad s \in \left[ - \frac{l}{2}, \frac{l}{2}\right], \quad j = 1,2,3, \eqno (1.3)$$ will denote the function $q(x,y)$, as well as the derivative of $q(x,y)$ along the outward normal ${\mathbf{\hat N}}$ respectively, for the side $(j)$. (iv) $E(k)$ and $e(k)$ will denote the following exponential functions $$ E(k) = \exp \left\{ \left( k + \frac{\lambda}{k} \right) \frac{l}{2\sqrt{3}} \right\}, \quad e(k) = \exp \left\{ \left( k + \frac{\lambda}{k}\right) \frac{l}{2} \right\}. \eqno (1.4)$$ (v) Using the fact that the numbers $\alpha$ and $\bar \alpha$ satisfy the obvious relations $$ \alpha^2 = \bar \alpha = \alpha^{-1}, \quad 1+\alpha + \bar \alpha =0, \quad i\bar \alpha - i \alpha = \sqrt{3}, \quad i\alpha - i = \sqrt{3} \bar \alpha, \eqno (1.5)$$ it is straightforward to obtain analogous relations for $E(k)$ and $e(k)$. 
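The relations in (1.5), and one of the resulting exponential identities for $E(k)$, are easy to verify numerically. A quick sketch; the values of $\lambda$, $l$ and $k$ below are arbitrary illustrative choices, not quantities fixed by the text:

```python
import cmath

# Numerical check of the algebraic relations (1.5) for alpha = exp(2*pi*i/3),
# and of one exponential identity they imply for E(k).
alpha = cmath.exp(2j * cmath.pi / 3)
abar = alpha.conjugate()
sqrt3 = 3 ** 0.5

assert abs(alpha ** 2 - abar) < 1e-14        # alpha^2 = conj(alpha)
assert abs(abar - 1 / alpha) < 1e-14         # conj(alpha) = alpha^(-1)
assert abs(1 + alpha + abar) < 1e-14         # 1 + alpha + conj(alpha) = 0
assert abs(1j * abar - 1j * alpha - sqrt3) < 1e-14   # i*conj(alpha) - i*alpha = sqrt(3)
assert abs(1j * alpha - 1j - sqrt3 * abar) < 1e-14   # i*alpha - i = sqrt(3)*conj(alpha)

# Illustrative parameters for E(k) = exp((k + lam/k) * l / (2*sqrt(3))).
lam, l = 2.0, 1.0
E = lambda k: cmath.exp((k + lam / k) * l / (2 * sqrt3))

# Because 1 + alpha + conj(alpha) = 0, the exponents of E(k), E(alpha*k)
# and E(conj(alpha)*k) sum to zero, so their product equals 1:
k = 0.7 + 0.3j
assert abs(E(k) * E(alpha * k) * E(abar * k) - 1) < 1e-12
```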
For example, the last three equations in (1.5) imply $$E(k)E(\alpha k)E(\bar \alpha k)=1, \quad E(i\bar \alpha k)E(-i\alpha k)=e(k), \quad E(i\alpha k)E(-ik) = e(\bar \alpha k). \eqno (1.6)$$ \subsection{Formulation of the Problem} We will investigate the basic elliptic equations in the interior of the equilateral triangle $D$, namely we will study the equation $$q_{xx} + q_{yy} - 4\lambda q =0, \quad (x,y) \in D, \eqno (1.7)$$ where $q(x,y)$ is a real valued function and $\lambda$ is a real constant. The case of $\lambda =0$, of $\lambda$ negative, and of $\lambda$ positive, correspond to the Laplace, the Helmholtz, and the modified Helmholtz equations, respectively. We will analyze the following problems: (i) The Dirichlet problem $$ q^{(j)}(s) = f_j(s), \quad s \in \left[ - \frac{l}{2}, \frac{l}{2} \right], \quad j = 1,2,3 . \eqno (1.8)$$ (ii) The oblique Robin problem, $$ \sin \beta q^{(j)}_N(s) + \cos \beta \frac{d}{ds} q^{(j)}(s) + \gamma q^{(j)}(s) = f_j(s), \quad s \in \left[ - \frac{l}{2}, \frac{l}{2}\right], \quad j = 1,2,3, \eqno (1.9)$$ where $\beta$ and $\gamma$ are real constants and $\sin \beta \neq 0$. The sum of the first two terms of the lhs of this equation equals the derivative of $q^{(j)}(s)$ in the direction making an angle $\beta$ with the positive direction of the side $(j)$. The Neumann and the Robin problems correspond to the following particular choices of $\beta$ and $\gamma$, $$ {\mathrm{Neumann:}} \ \ \beta = \frac{\pi}{2}, \ \ \gamma = 0; \quad {\mathrm{Robin:}} \ \ \beta = \frac{\pi}{2}, \ \ \gamma \neq 0. 
\eqno (1.10)$$ (iii) The Poincar\'e type problem $$\sin\beta_j q^{(j)}_N(s) + \cos \beta_j \frac{d}{ds}q^{(j)}(s) + \gamma_j q^{(j)}(s) = f_j(s), \quad s \in \left[ - \frac{l}{2}, \frac{l}{2}\right], \quad j=1,2,3, \eqno (1.11)$$ where $\beta_1$ is a real constant such that $\sin \beta_1\neq 0$, $\beta_2$ and $\beta_3$ satisfy $\sin\beta_2\neq 0$, $\sin\beta_3 \neq 0$ and are given in terms of $\beta_1$ by the expressions $$ \beta_2 = \beta_1 + \frac{n\pi}{3}, \quad \beta_3 = \beta_1 + \frac{m\pi}{3}, \quad m,n \in {\mathbb{Z}}, \eqno (1.12a)$$ and the real constants $\{ \gamma_j\}^3_1$ satisfy the relations $$ \sin 3\beta_1 \left[ \gamma_2(3\lambda - \gamma^2_2) - e^{in\pi} \gamma_1(3\lambda - \gamma^2_1)\right] =0, \eqno (1.12b)$$ $$ \sin 3\beta_1 \left[ \gamma_3(3\lambda - \gamma^2_3) - e^{im\pi} \gamma_1(3\lambda - \gamma^2_1)\right] =0. \eqno (1.12c)$$ A particular case of such a Poincar\'e type problem, which is solved in detail, is the modified Helmholtz equation with Neumann values on sides (2) and (3) and with Robin values on side (1), where the constant $\gamma$ is given by $\sqrt{3\lambda}$. We {\it assume} that the functions $f_j$ have sufficient smoothness and that they are compatible at the corners of the triangle. The case of boundary conditions which are discontinuous at the corners will be considered elsewhere. \subsection{The Global Relation} As it was mentioned earlier the approach used here is based on the analysis of the global relation, which is the fundamental algebraic relation that couples the Dirichlet and the Neumann values around the perimeter of the triangle. 
This equation, first derived for the case of equilateral triangle in [14] (see also [5]) is $$E(-ik)\Psi_1(k) + E(-i\bar \alpha k) \Psi_2(\bar \alpha k) + E(-i\alpha k)\Psi_3(\alpha k) $$ $$=2i \left\{ E(-ik)\Phi_1(k) + E(-i\bar \alpha k)\Phi_2(\bar \alpha k) + E(-i\alpha k)\Phi_3(\alpha k)\right\}, \quad k \in {\mathbb{C}} - \{ 0 \}, \eqno (1.13)$$ where the exponential function $E(k)$ is defined in equation (1.4a), and $\Psi_j$ and $\Phi_j$ are the following transforms of the Neumann and Dirichlet boundary values: $$\Psi_j(k) = \int^{\frac{l}{2}}_{- \frac{l}{2}} \exp \left\{ (k+\frac{\lambda}{k})s \right\} q^{(j)}_N(s)ds, \ \ \Phi_j(k) = \int^{\frac{l}{2}}_{- \frac{l}{2}} \exp \left\{ (k + \frac{\lambda}{k})s \right\} \left[ \half \dds q^{(j)}(s) + \frac{\lambda}{k} q^{(j)}(s)\right]ds, \eqno (1.14)$$ for each $j=1,2,3$, and every complex $k\neq 0$. The general methodology introduced in [4], [5] implies that the global relation must be supplemented by its Schwarz conjugate, as well as by the four equations obtained from these two equations by replacing $k$ with $\alpha k$ and with $\bar \alpha k$. We will refer to these six equations as the {\it basic algebraic relations}. In this paper we present two different techniques for solving these equations. \subsubsection{Solutions via Infinite Series} For simple problems it is possible to compute the unknown boundary values by evaluating the basic algebraic relations at particular discrete values of $k$. This yields the unknown boundary values in terms of infinite series. The Dirichlet and the Neumann problems are examples of problems which can be solved using this technique. We use the Dirichlet problem to illustrate this approach. In this case the functions $\Phi_j$ appearing in the rhs of the global relation (1.13) can be immediately computed in terms of the given boundary conditions $f_j$, thus the global relation becomes a single equation for the three unknown functions $\{ \Psi_j\}^3_1$. 
Multiplying this equation by $E(i\alpha k)$, and multiplying the Schwarz conjugate of the global relation by $E(-i\alpha k)$, we find the following two equations (where we have used the last two of the identities (1.6)) $$e(\bar \alpha k)\Psi_1(k) + e(-k)\Psi_2(\bar \alpha k) + \Psi_3(\alpha k) = 2iA(k), \eqno (1.15)$$ $$e(-\bar \alpha k) \Psi_1(k) + \Psi_2(\alpha k) + e(k) \Psi_3(\bar \alpha k) =-2i B(k). \eqno (1.16)$$ In these equations $A(k)$ and $B(k)$ are known functions and $k\in {\mathbb{C}} - \{ 0 \}$. For the general Dirichlet problem, we will supplement these two equations with the four equations obtained from these equations by replacing $k$ with $\alpha k$ and with $\bar \alpha k$. However, there exists a particular case for which it is sufficient to analyze only the above two equations. This is the {\it symmetric} Dirichlet problem, namely the problem where the functions $f_j$ are all the same, $f_j=f$, $j=1,2,3$. Then the Neumann values $q_N^{(j)}(s)$ are also the same, $q_N^{(j)} = q_N$, and hence $\Psi_j(k) = \Psi(k)$, $j=1,2,3$. Thus equations (1.15) and (1.16) become two equations for the three unknown functions $\Psi(k)$, $\Psi(\bar \alpha k)$, $\Psi(\alpha k)$. Hence, any two of them can be expressed in terms of the remaining one; for example, $\Psi(\bar \alpha k)$ and $\Psi(\alpha k)$ can be expressed in terms of $\Psi(k)$. In particular, subtracting equations (1.15) and (1.16), we find $$ \left( e(k)-e(-k)\right) \Psi(\bar \alpha k) = \left( e(\bar \alpha k)-e(-\bar \alpha k)\right) \Psi(k) - 2iG(k), \eqno (1.17)$$ where $G(k) = A(k) + B(k)$ is a known function. Equation (1.17) is a single equation for the two unknown functions $\Psi(\bar \alpha k)$ and $\Psi(k)$. However, by evaluating this equation at those values of $k$ for which the coefficient of $\Psi(\bar \alpha k)$ vanishes, i.e.
at $e^2(k) =1$, or $k = s_n$, $$ s_n + \frac{\lambda}{s_n} = \frac{2in\pi}{l}, \quad n \in {\mathbb{Z}}, \eqno (1.18)$$ it follows that $\Psi(s_n)$ can be determined. Recalling the definition of $\Psi(k)$ and evaluating equation (1.17) at $k=s_n$, we find $$\sinh\left[ \left( \bar \alpha s_n + \frac{\lambda}{\bar \alpha s_n}\right) \frac{l}{2} \right] \int^{\frac{l}{2}}_{- \frac{l}{2}} e^{2i\pi n \frac{s}{l}} q_N(s)ds = iG(s_n), \quad n \in {\mathbb{Z}}. \eqno (1.19)$$ Thus $q_N(s)$ can be expressed as a Fourier series. For the general Dirichlet problem (1.8), the six basic algebraic relations couple the nine unknown functions $\{ \Psi_j(k)$, $\Psi_j(\bar \alpha k)$, $\Psi_j(\alpha k)\}^3_1$. Thus any six of them can be expressed in terms of the remaining three. In particular, it is shown in section 3 that $\Psi_2(\bar \alpha k)$ can be expressed in terms of $\{ \Psi_j(k)\}^3_1$ by the equation $$ \left( e^3(-k)-e^3(k)\right) \Psi_2(\bar \alpha k) = \left[ e(-\bar \alpha k) - e^2(-k)e(\bar \alpha k)\right] \left( \Psi_1(k) + e^2(k) \Psi_3(k)\right) $$ $$ + e^2(-k) \left[ e(-\bar \alpha k) -e^4(k) e(\bar \alpha k)\right] \Psi_2(k) + 2iX(k), \eqno (1.20)$$ where $X(k)$ is known. In spite of the fact that this equation is a single equation for four unknown functions, it yields all the three Neumann values $q^{(j)}_N$, $j = 1,2,3$. Indeed, by evaluating equation (1.20) at those values of $k$ for which the coefficient of $\Psi_2(\bar \alpha k)$ vanishes, i.e. at $e^6(k) =1$, or $k=k_m$, where $$ k_m + \frac{\lambda}{k_m} = \frac{2im\pi}{3l}, \quad m \in {\mathbb{Z}}, \eqno (1.21)$$ equation (1.20) yields $$ \int^{\frac{l}{2}}_{-\frac{l}{2}} e^{\frac{2i\pi m}{3l} s} \left[ q^{(1)}_N(s) + e^{- \frac{2i\pi m}{3}}q^{(2)}_N(s) + e^{\frac{2i\pi m}{3}} q^{(3)}_N(s)\right] ds = M(k_m), \quad m \in {\mathbb{Z}}, \eqno (1.22)$$ where $M(k_m)$ is known. This equation in contrast to equation (1.19) involves {\it three} unknown functions. 
However, equation (1.21) gives {\it three times} as many values for $m$ as equation (1.18). Replacing in equation (1.22) $m$ by $3n$, $3n-1$, $3n-2$, and inverting the left hand sides of the resulting equations, we find $$ q_N^{(1)}(s) + q_N^{(2)}(s) + q_N^{(3)}(s) = \frac{1}{l} \sum^\infty_{-\infty} M(3n)e^{- \frac{2i\pi ns}{l}},$$ $$e^{- \frac{2i\pi s}{3l}} \left[ q_N^{(1)}(s) + \alpha q_N^{(2)}(s) + \bar \alpha q_N^{(3)}(s) \right] = \frac{1}{l} \sum^\infty_{-\infty} M(3n-1) e^{- \frac{2i\pi ns}{l}}, $$ $$ e^{- \frac{4i\pi s}{3l}} \left[ q_N^{(1)}(s) + \bar \alpha q_N^{(2)}(s) + \alpha q_N^{(3)}(s)\right] = \frac{1}{l} \sum^\infty_{-\infty} M(3n-2)e^{- \frac{2i\pi ns}{l}}. \eqno (1.23)$$ Thus, by solving this system of three algebraic equations, it follows that each one of the Neumann boundary values can be represented in terms of a Fourier series (see Proposition 3.2). The analysis of the oblique Robin problem (equation (1.9)) is similar. However, the values of $s_n$ and of $k_m$ in general cannot be found explicitly. The values $k_m$ satisfy the transcendental equation $$ e^{\left( k_m + \frac{\lambda}{k_m}\right)l} \frac{ \left( \alpha k_me^{i\beta} + \frac{\lambda}{\alpha k_m e^{i\beta}} - \gamma \right) \left( \bar \alpha k_me^{-i\beta} + \frac{\lambda}{\bar \alpha k_me^{-i\beta}} - \gamma \right)}{ \left( \alpha k_me^{-i\beta} + \frac{\lambda}{\alpha k_me^{-i\beta}} - \gamma\right) \left( \bar \alpha k_me^{i\beta} + \frac{\lambda}{\bar \alpha k_me^{i\beta}}-\gamma\right) } = e^{\frac{2i\pi m}{3}}, \quad m \in {\mathbb{Z}}. \eqno (1.24)$$ Thus in equation (1.22), instead of the kernel $e^{\frac{2i\pi m}{3l} s}$ we now have $\exp\left\{ \left( k_m+\frac{\lambda}{k_m}\right) s \right\}$, where $k_m$ satisfies (1.24). In the particular case of the Neumann problem, $k_m$ satisfies equation (1.21).
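The last step, solving the algebraic system (1.23) for the individual Neumann values, amounts to inverting the $3\times 3$ discrete-Fourier-type matrix built from $1$, $\alpha$, $\bar\alpha$. A minimal numerical sketch (the sample values standing in for the three combinations are illustrative):

```python
import numpy as np

# The three bracketed combinations in (1.23) have the form M @ (qN1, qN2, qN3),
# where M is the 3x3 DFT-type matrix built from alpha = exp(2*pi*i/3).
# Since 1 + alpha + conj(alpha) = 0, M satisfies M^{-1} = conj(M)/3, so the
# individual Neumann values follow from one matrix multiplication.
alpha = np.exp(2j * np.pi / 3)
abar = np.conj(alpha)
M = np.array([[1, 1, 1],
              [1, alpha, abar],
              [1, abar, alpha]])

assert np.allclose((np.conj(M) / 3) @ M, np.eye(3))   # M^{-1} = conj(M)/3

# Illustrative values of qN^(1), qN^(2), qN^(3) at one fixed s:
q = np.array([0.3, -1.2, 0.8])
combos = M @ q                       # the three combinations of (1.23)
recovered = (np.conj(M) / 3) @ combos
assert np.allclose(recovered.real, q) and np.allclose(recovered.imag, 0)
```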
\subsubsection{Solutions Via Generalized Fourier Integrals} For more complicated problems, such as the problem (1.11), the basic algebraic relations can be solved in terms of a generalized Fourier integral. This technique is generic, in the sense that it can also be used for the solution of simple problems. We use such a simple problem, namely the symmetric Dirichlet problem, to illustrate this approach: It is shown in Section 4 that the integral defining $\Psi(\bar \alpha k)$ can be inverted for $q_N(s)$. For $\lambda \geq 0$, $q_N(s)$ is given by $$q_N(s) = \frac{i\bar \alpha}{4\pi} \int^{\infty e^{\frac{i\pi}{6}}}_{\infty e^{\frac{7i\pi}{6}}} \exp \left\{-\left( \bar \alpha k + \frac{\lambda}{\bar \alpha k}\right)s \right\} \left( 1 - \frac{\lambda}{(\bar \alpha k)^2}\right) \Psi( \bar \alpha k)dk, \quad \lambda \geq 0. \eqno (1.25)$$ Replacing in this equation $\Psi(\bar \alpha k)$ with the expression obtained by solving equation (1.17) for $\Psi(\bar \alpha k)$, it follows that $q_N(s)$ involves a known integral, as well as an integral containing the unknown function $\Psi(k)$. However, using the analyticity properties of the integrand of the latter integral, it can be shown that this integral can be computed in terms of residues. Furthermore, these residues can be explicitly calculated in terms of the known function $G(k)$. The situation for more complicated problems is similar: The unknown boundary values can be expressed in terms of known integrals, as well as integrals containing the three unknown functions $\{ \Psi_j(k)\}^3_1$. Exploiting the analyticity properties of the integrands of the latter integrals, it can be shown that these integrals can be computed explicitly.
\subsection{Integral Representations for $q(x,y)$} When both the Dirichlet and the Neumann boundary values are known, the solution $q(x,y)$ can be determined either using the classical integral representation in terms of Green's functions [3], or using the novel integral representations constructed in [5] and [8]. For completeness, both representations are presented in Section 6. \subsection{Organization of the Paper} In Section 2 we derive the global relation (1.13). In Section 3 we solve the symmetric Dirichlet problem (Proposition 3.1), the general Dirichlet problem (Proposition 3.2), the general Neumann problem (Proposition 3.3), and we also discuss the oblique Robin problem. In Section 4 we discuss the basic algebraic relations associated with the Poincar\'e boundary condition (1.11) and derive the relations (1.12b) and (1.12c). In Section 5 we obtain an alternative representation for the symmetric Dirichlet problem and then analyze the problem defined by equations (1.11) and (1.12). A particular case of this problem, which is solved in detail in Proposition 5.1, is a mixed boundary value problem for the modified Helmholtz equation. In Section 6 we discuss the associated integral representations for $q(x,y)$. Further discussion of these results is presented in Section 7. \section{The Global Relation} Writing the basic elliptic equation (1.7) in the complex variables $(z,\bar z)$ we find $$q_{z\bar{z}} - \lambda q=0. \eqno (2.1)$$ It is straightforward to verify that this equation can be rewritten in the form [5] $$ \left( \exp \left\{-ikz - \frac{\lambda}{ik} \bar z \right\} q_z\right)_{\bar{z}} + \frac{\lambda}{ik} \left( \exp \left\{-ikz - \frac{\lambda}{ik} \bar z \right\} q\right)_{z} = 0, \eqno (2.2)$$ where for the rest of this section $k \in {\mathbb{C}} - \{ 0\}$. Suppose that equation (2.1) is valid in a simply connected bounded domain $\Omega \subset {\mathbb{C}}$ with a piecewise smooth boundary $\partial\Omega$.
Then equation (2.2) and the complex form of Green's theorem imply $$ \int_{\partial\Omega} \exp \left\{-ikz - \frac{\lambda}{ik} \bar z \right\} \left( q_zdz - \frac{\lambda}{ik} qd\bar z\right) = 0. \eqno (2.3)$$ In the particular case that $\Omega$ is the triangular domain $D$, equation (2.3) becomes $$ \sum^3_{j=1} \tilde\rho_j(k) =0, \eqno (2.4)$$ where the function $\tilde \rho_j(k)$ is given by the following line integral along the side $(j)$ of the equilateral triangle $$\tilde \rho_j(k) = \int^{z_j}_{z_{j+1}} \exp \left\{-ikz - \frac{\lambda}{ik} \bar z\right\} \left( q_zdz - \frac{\lambda}{ik} qd\bar{z}\right) , \quad j = 1,2,3. \eqno (2.5)$$ In what follows we will show that $$ \tilde \rho_1(k) = \rho_1(k), \quad \tilde\rho_2(k) = \rho_2(\bar \alpha k), \quad \tilde\rho_3(k) = \rho_3(\alpha k), \eqno (2.6)$$ where the functions $\rho_j(k)$ are defined in terms of the functions $\Phi_j(k)$ and $\Psi_j(k)$ by the equation $$ \rho_j(k) = E(-ik) \left[ \frac{i}{2} \Psi_j(k) + \Phi_j(k)\right], \quad j = 1,2,3. \eqno (2.7)$$ For this purpose we will use the following local parameterizations: {\it Side 1:} On the side (1) the variable $z$ can be parameterized as $$ z(s) = \frac{l}{2\sqrt{3}} + is, \quad s \in \left[ -\frac{l}{2}, \frac{l}{2}\right]. \eqno (2.8a)$$ Then $z(-l/2) =z_2$ and $z(l/2) = z_1$. Since the normal and the tangential derivatives are parallel to the $x$ and to $y$ axes respectively, it follows that $$ \partial_z = \half \left( \partial_x - i\partial_y\right) = \half \left( \partial_N - i\partial_T\right). \eqno (2.8b)$$ {\it Side 2:} If $z$ varies along the side (2) and $\zeta$ varies along the side (1), then $z = \zeta\exp\left\{ -i \frac{2\pi}{3}\right\} $. Thus $$z(s) = \left( \frac{l}{2\sqrt{3}} + is\right) e^{-i \frac{2\pi}{3}}, \quad s \in \left[ - \frac{l}{2}, \frac{l}{2} \right]. \eqno (2.9a)$$ Note again that $z(-l/2) = z_3$ and $z(l/2) = z_2$. 
The equation $\partial_z = \exp\left\{ i \frac{2\pi}{3}\right\} \partial_\zeta$ implies that $$ \partial_z = \frac{\alpha}{2} \left( \partial_N - i\partial_T\right). \eqno (2.9b)$$ {\it Side 3:} In analogy with equations (2.9), if $z$ varies along side (3) we find the equations $$z(s) = \left( \frac{l}{2\sqrt{3}} + is\right) e^{ i \frac{2\pi}{3}}, \quad s \in \left[ - \frac{l}{2}, \frac{l}{2} \right], \eqno (2.10a)$$ and $$ \partial_z = \frac{\bar\alpha}{2} \left( \partial_N - i\partial_T\right). \eqno (2.10b)$$ Finally, $z(-l/2) = z_1$ and $z(l/2) = z_3$. Using equations (2.8)-(2.10) in the expressions (2.5) we find equations (2.6), (2.7). \section{The Analysis of the Global Relation for Simple \\ Boundary Value Problems} \subsection{The Symmetric Dirichlet Problem} We first give the details for the symmetric problem. In this case $$ q^{(j)}_N(s) = q_N(s), \quad \Phi_j(k) = F(k), \quad \Psi_j(k) = \Psi(k), \quad j = 1,2,3, \eqno (3.1)$$ where the function $\Psi(k)$ is defined in terms of the unknown function $q_N(s)$ by equation (1.14a) (without the superscript $(j)$), and the function $F(k)$ is defined in terms of the given boundary condition $f(s)$ by equation (1.14b), i.e., by the equation $$ F(k) = \int^{\frac{l}{2}}_{-\frac{l}{2}} \exp \left\{\left( k + \frac{\lambda}{k}\right)s\right\} \left[ \half \dds f(s) + \frac{\lambda}{k} f(s)\right] ds, \quad k \in {\mathbb{C}} - \{ 0\}. \eqno (3.2)$$ Using equations (3.1), the global relation (1.13) and its Schwarz conjugate yield (1.15) and (1.16), with $$ A(k) = e(\bar \alpha k)F(k) + e(-k)F(\bar \alpha k) + F(\alpha k),$$ $$ B(k) = e(-\bar \alpha k)F(k) + F(\alpha k) + e(k)F(\bar \alpha k).$$ Hence, since $G=A+B$, $$ G(s_n) = 2\cosh \left[ \left( \bar \alpha s_n + \frac{\lambda}{\bar \alpha s_n}\right) \frac{l}{2}\right] F(s_n) + 2e^{in\pi} F(\bar \alpha s_n) + 2F(\alpha s_n). 
\eqno (3.3)$$ In summary, we have derived the following result: \paragraph{Proposition 3.1} Let the real valued function $q(x,y)$ satisfy equation (1.7) in the triangular domain $D$, with the Dirichlet conditions (1.8), where $$ f_j(s) = f(s), \quad j=1,2,3, \quad s\in \left[ - \frac{l}{2}, \frac{l}{2} \right], \eqno (3.4)$$ and the function $f(s)$ is sufficiently smooth and satisfies the continuity condition $f(-l/2) = f(l/2)$. Then the Neumann boundary values are the same, $q^{(j)}_N(s) = q_N(s)$, $j=1,2,3$, and are given by the Fourier series $$ q_N(s) = \frac{i}{l} \sum^\infty_{-\infty} e^{- \frac{2in\pi s}{l}} \frac{G(s_n)}{\sinh \left[ \left( \bar \alpha s_n + \frac{\lambda}{\bar \alpha s_n}\right) \frac{l}{2}\right] }, \eqno (3.5)$$ where $s_n$ is defined by equation (1.18) and $G(s_n)$ is given in terms of $f(s)$ by equations (3.2) and (3.3). \subsection{The General Dirichlet Problem} The global relation and its Schwarz conjugate yield equations (1.15) and (1.16), where the known functions $A(k)$ and $B(k)$ are now given by the equations $$A(k) = e(\bar \alpha k)F_1(k) + e(-k)F_2(\bar \alpha k) + F_3(\alpha k),$$ $$ B(k) = e(-\bar \alpha k) F_1(k) + F_2(\alpha k) + e(k)F_3(\bar \alpha k), \eqno (3.6)$$ where $$ F_j(k) = \int^{\frac{l}{2}}_{- \frac{l}{2}} \exp \left\{\left( k+\frac{\lambda}{k}\right)s\right\} \left[ \half \dds f_j(s) + \frac{\lambda}{k} f_j(s)\right] ds, \quad j=1,2,3,\quad k \in {\mathbb{C}} -\{ 0\}. \eqno (3.7)$$ Replacing in equations (1.15) and (1.16) $k$ by $\bar \alpha k$ and then eliminating $\Psi_1(\bar \alpha k)$ from the resulting two equations we find $$ e(-\alpha k) \Psi_3(k) + e(k) \Psi_2(\alpha k) - 2ie(-\alpha k)A(\bar \alpha k) \qquad \ \ \qquad \ \ $$ $$ = e(\alpha k) \Psi_2(k) + e(-k)\Psi_3(\alpha k) + 2ie(\alpha k)B(\bar \alpha k). 
\eqno (3.8)$$ Taking the Schwarz conjugate of this equation (or equivalently eliminating $\Psi_1(\alpha k)$ from the equations obtained from equations (1.15) and (1.16) by replacing $k$ with $\alpha k$) we find $$ e(-\bar \alpha k) \Psi_3(k) + e(k)\Psi_2(\bar \alpha k) + 2ie(-\bar \alpha k) \overline{ A(\bar \alpha \bar k)} \qquad \ \ \qquad \ \ $$ $$ = e(\bar \alpha k) \Psi_2(k) + e(-k)\Psi_3(\bar \alpha k) -2ie(\bar \alpha k) \overline{ B(\bar \alpha \bar k)}. \eqno (3.9)$$ Substituting $\Psi_2(\alpha k)$ from equation (3.8) and $\Psi_3(\bar \alpha k)$ from equation (3.9) into equation (1.16), we find an equation involving $\Psi_3(\alpha k)$, $\Psi_2(\bar \alpha k)$, and $\{ \Psi_j(k)\}^3_1$. Eliminating $\Psi_3(\alpha k)$ from this equation and from equation (1.15) we find equation (1.20) with $X(k)$ given by the following equation, $$ \begin{array}{rl} X(k) & = \left[ e^2(-k)e(\bar \alpha k) + e(-\bar \alpha k)\right] \left[ F_1(k) + e^2(k) F_3(k)\right] \\ \\ & + e^2(-k) \left[ e^2(-k)e(\bar \alpha k)e^6(k) + e(-\bar \alpha k)\right] F_2(k)\\ \\ & + 2e^2(k) F_1(\alpha k) + 2F_2(\alpha k) + 2e^2(-k)F_3(\alpha k)\\ \\ & + e^3(-k) \left[ 2e^2(k) F_1(\bar \alpha k) + (e^6(k) + 1)F_2(\bar \alpha k) + 2e^2(-k)e^6(k)F_3(\bar \alpha k)\right]. 
\end{array} \eqno (3.10a)$$ Letting $k=k_n$ we find $$ \begin{array}{rl} X(k_n) & = \left[ e^2(-k_n)e(\bar \alpha k_n) + e(-\bar \alpha k_n)\right] \times \\ \\ & \ \ \ \ \ \ \left[ F_1(k_n) + e^2(-k_n) F_2(k_n) + e^2(k_n)F_3(k_n)\right] \\ \\ & + 2e^2(k_n) \left[ F_1(\alpha k_n) + e^2(-k_n)F_2(\alpha k_n) + e^2(k_n)F_3(\alpha k_n) \right] \\ \\ & + 2e(-k_n) \left[ F_1(\bar \alpha k_n) + e^2(-k_n)F_2(\bar \alpha k_n) + e^2(k_n) F_3(\bar \alpha k_n) \right] \end{array} \eqno (3.10b)$$ Solving the algebraic equations (1.23) we find the following result: \paragraph{Proposition 3.2} Let the real valued function $q(x,y)$ satisfy equation (1.7) in the triangular domain $D$, with the boundary conditions (1.8), where the given functions $f_j(s)$ have sufficient smoothness and are continuous at the vertices. Then the Neumann data $q^{(j)}_N(s)$, $j=1,2,3$ can be expressed in terms of the given Dirichlet data by the Fourier series $$q^{(j)}_N(s) = \frac{1}{3l} \sum^\infty_{n=-\infty} \left[ M(k_{3n}) + c^{(j)}_1 e^{\frac{2i\pi s}{3l} }M(k_{3n-1}) + c^{(j)}_2 e^{\frac{4i\pi s}{3l}} M(k_{3n-2})\right] e^{- \frac{2i\pi ns}{l}}, \eqno (3.11)$$ where $k_m$ is defined by equation (1.21), $$ c^{(1)}_1 = c^{(1)}_2 =1, \quad c^{(2)}_1 = c^{(3)}_2 = \bar \alpha, \quad c^{(3)}_1 = c^{(2)}_2 = \alpha , \eqno (3.12)$$ $$M(k_m) = \frac{2iX(k_m)}{\bar\alpha^m e(\bar \alpha k_m) - e(-\bar \alpha k_m)} \eqno (3.13)$$ and $X(k_m)$ is defined in terms of $f_j(s)$ by equations (3.7) and (3.10). \subsection{The Oblique Robin Problem} Suppose that $q(x,y)$ satisfies the Poincar\'e boundary condition (1.11), i.e. $$q^{(j)}_N(s) = \frac{1}{\sin \beta_j} \left( f^{(j)} - \cos \beta_j \frac{dq^{(j)}}{ds} - \gamma_jq^{(j)}\right). \eqno (3.14)$$ Substituting this expression in the definition of $\rho_j(k)$, i.e.
in the equation (2.7), we obtain $$ \rho_j(k) = E(-ik) \int^{\frac{l}{2}}_{- \frac{l}{2}} \exp\left\{\left( k + \frac{\lambda}{k}\right)s\right\} \left[ \frac{i}{2} q^{(j)}_N(s) + \half \frac{d}{ds}q^{(j)}(s) + \frac{\lambda}{k} q^{(j)}(s) \right] ds.$$ Integrating by parts we find the following expression for $\rho_j(k)$: $$\rho_j(k) = iE(-ik) \left[ H_j(k)Y_j(k) + F_j(k) + C_j(k)\right], \quad j = 1,2,3, \eqno (3.15)$$ where the function $H_j(k)$ is defined by $$ H_j(k) = ke^{i\beta_j} + \frac{\lambda}{ke^{i\beta_j}} - \gamma_j, \eqno (3.16)$$ the function $F_j(k)$ is defined in terms of the given boundary conditions $f_j(s)$ by the equation $$F_j(k) = \frac{1}{2\sin\beta_j} \int^{\frac{l}{2}}_{-\frac{l}{2}} \exp\left\{\left( k+\frac{\lambda}{k}\right)s\right\} f_j(s)ds, \eqno (3.17)$$ the function $C_j(k)$ involves the values of $q(x,y) $ at the vertices, $$C_j(k) = \frac{e^{i\beta_j}}{2\sin\beta_j} \left[ e(-k)q^{(j)} \left( - \frac{l}{2}\right) - e(k)q^{(j)}\left( \frac{l}{2}\right)\right], \eqno (3.18)$$ and the function $Y_j(k)$ involves the unknown Dirichlet boundary values, $$ Y_j(k) = \frac{1}{2\sin \beta_j} \int^{\frac{l}{2}}_{-\frac{l}{2}} \exp \left\{\left( k+\frac{\lambda}{k}\right)s\right\} q^{(j)}(s)ds. \eqno (3.19)$$ In equations (3.15)-(3.19), $k$ is complex and $k \neq 0$. In the particular case of the oblique Robin problem (1.9), $\beta_j=\beta$ and $\gamma_j=\gamma$, $j = 1,2,3,$ thus $H_j(k) = H(k)$, where $H(k)$ is defined by equation (3.16) without the subscript $(j)$. Substituting the expression for $\rho_j(k)$ (with $H_j=H$) in the global relation (2.4) we find $$E(-ik)\left[ H(k)Y_1(k) + F_1(k)+C_1(k)\right] + E(-i\bar\alpha k) \left[ H(\bar \alpha k)Y_2(\bar \alpha k) + F_2(\bar \alpha k) + C_2(\bar \alpha k)\right] $$ $$+ E(-i\alpha k) \left[ H(\alpha k)Y_3(\alpha k) + F_3(\alpha k) + C_3(\alpha k)\right] =0. \eqno (3.20)$$ The contribution from the corner terms $C_j$ cancels. 
Indeed, this contribution is proportional to the following expression $$E(-ik) \left[ e(-k)q^{(1)} \left( - \frac{l}{2} \right) - e(k) q^{(1)} \left( \frac{l}{2}\right)\right] + E(-i\bar \alpha k) \left[ e(-\bar \alpha k) q^{(2)} \left( - \frac{l}{2} \right) - e(\bar \alpha k) q^{(2)} \left( \frac{l}{2}\right)\right]$$ $$ +E(-i\alpha k) \left[ e(-\alpha k) q^{(3)} \left( - \frac{l}{2} \right)-e(\alpha k)q^{(3)} \left( \frac{l}{2}\right) \right]. \eqno (3.21)$$ But the assumption of continuity at the vertices implies $$q^{(1)}\left( -\frac{l}{2}\right) = q^{(2)}\left( \frac{l}{2} \right), \quad q^{(2)}\left( - \frac{l}{2} \right) = q^{(3)} \left( \frac{l}{2} \right), \quad q^{(3)} \left( - \frac{l}{2} \right) = q^{(1)} \left( \frac{l}{2} \right). \eqno (3.22)$$ Hence the terms $q^{(1)}(-l/2)$ and $q^{(2)}(l/2)$ in the expression (3.21) cancel iff $$ E(-ik) e(-k) = E(-i\bar\alpha k)e(\bar \alpha k). \eqno (3.23)$$ This equation is indeed valid, and it is a consequence of the identity $$ \frac{1}{\sqrt{3}} \left( -ik + \frac{\lambda}{-ik}\right) + \left(-k + \frac{\lambda}{-k} \right) = \frac{1}{\sqrt{3}} \left( -i \bar\alpha k + \frac{\lambda}{-i\bar\alpha k}\right) + \left( \bar\alpha k + \frac{\lambda}{\bar\alpha k}\right). $$ Using the fact that the corner terms cancel, the global relation and its Schwarz conjugate yield (compare with equations (1.15) and (1.16)) the following equations: $$ e(\bar\alpha k) H(k)Y_1(k) + e(-k)H(\bar\alpha k) Y_2(\bar \alpha k) + H(\alpha k)Y_3(\alpha k) =-A(k),$$ $$ e(-\bar\alpha k) \overline{H(\bar k)} Y_1(k) + \overline{H(\bar\alpha \bar k)} Y_2(\alpha k) + e(k) \overline{H(\alpha \bar k)} Y_3(\bar \alpha k) =-B(k), \eqno (3.24)$$ where $A(k)$ and $B(k)$ are defined by equations (3.6) in terms of $F_j$. Let $$P(k) = \frac{H(k)}{\overline{H(\bar k)}}.
\eqno (3.25)$$ Following precisely the same steps used for the general Dirichlet problem we find the following expression for $Y_2(\bar\alpha k)$ in terms of $\{ Y_j(k)\}^3_1$: $$ \left[ e^3(-k) \frac{P^2(\bar \alpha k)}{P^2(\alpha k)} - e^3(k) \frac{P(\alpha k)}{P(\bar \alpha k)} \right] \frac{\overline{H(\alpha \bar k)}}{\overline{H(\bar k)}} Y_2(\bar \alpha k) = T(k) $$ $$+ \left[ e(-\bar \alpha k) - e^2(-k) \frac{P(\bar \alpha k)P(k)}{P^2(\alpha k)} e(\bar \alpha k)\right] \left[ Y_1(k) + e^2(k) \frac{P(\alpha k)}{P(\bar \alpha k)} Y_3(k) \right] $$ $$+e^2(-k) \frac{P(\bar\alpha k)}{P(\alpha k)} \left[ e(-\bar \alpha k) - e^4(k) \frac{P^3(\alpha k)}{P^3(\bar \alpha k)} \frac{P(\bar \alpha k)P(k)}{P^2(\alpha k)} e(\bar \alpha k)\right] Y_2(k), \eqno (3.26)$$ where $T(k)$ is given in terms of $F_j(k)$ by the following equation $$ \overline{H(\bar k)} T(k) = e(-k) \left[ E^3(-i\alpha k) - E^3(i\alpha k) \frac{P(\bar \alpha k)}{P^2(\alpha k)} \right] F_1(k) $$ $$ + \!\left[\! E^3(-i\bar \alpha k) \frac{P(\bar \alpha k)}{P(\alpha k)} - E^3(i\bar\alpha k) \frac{1}{P(\bar \alpha k)} \!\right]\! F_2(k) + e(k) \!\left[\! E^3(-i\alpha k) \frac{P(\alpha k)}{P(\bar \alpha k)} - E^3(i\alpha k) \frac{1}{P(\alpha k)} \!\right]\! F_3(k) $$ $$ + e^2(k) \frac{P(\alpha k)-1}{P(\bar \alpha k)} F_1(\alpha k) + \frac{P(\alpha k)-1}{P(\alpha k)} F_2(\alpha k) + e^2(-k) \frac{P(\bar \alpha k)(P(\alpha k)-1)}{P^2(\alpha k)} F_3(\alpha k) $$ $$+ e(-k) \frac{P(\bar \alpha k)-1}{P(\alpha k)} F_1(\bar \alpha k) + \left( e^3(k) \frac{P(\alpha k)}{P(\bar \alpha k)} - e^3(-k) \frac{P(\bar \alpha k)}{P^2(\alpha k)} \right) F_2(\bar \alpha k) + e(k) \frac{P(\bar \alpha k)-1}{P(\bar \alpha k)} F_3(\bar \alpha k). \eqno (3.27)$$ If $k=k_m$, where $k_m$ is defined by $$ e^6(k) \frac{P^3(\alpha k)}{P^3(\bar \alpha k)} = 1,$$ then $$e^2(k) \frac{P(\alpha k)}{P(\bar \alpha k)} = e^{\frac{2i\pi m}{3}}, \quad e^2(-k) \frac{P(\bar\alpha k)}{P(\alpha k)} = e^{- \frac{2i\pi m}{3}}. 
$$ Thus evaluating equation (3.26) at $k=k_m$, we find the following expression: $$\int^{\frac{l}{2}}_{- \frac{l}{2}} \exp\left\{\left( k_m + \frac{\lambda}{k_m}\right)s\right\} \left[ q^{(1)}(s) + e^{- \frac{2i\pi m}{3}} q^{(2)}(s) + e^{\frac{2i\pi m}{3}} q^{(3)}(s)\right] ds = G(k_m), \quad m \in {\mathbb{Z}}, \eqno (3.28)$$ where $$ G(k_m) = \frac{2T(k_m)\sin \beta}{\bar \alpha^m \frac{P(k_m)}{P(\alpha k_m)} e(\bar \alpha k_m) -e(-\bar \alpha k_m) } . \eqno (3.29)$$ Even if one were able to invert equation (3.28) for the bracket appearing in the integrand on its lhs, one would find an expression involving an infinite series over the transcendental values of $k_m$. Thus instead of analyzing the relevant inversion we will analyze the general Robin problem using the generalized Fourier transform approach (see Section 5). In the case of the Neumann problem, equations (3.28) and (3.29) simplify and yield the following result. \paragraph{Proposition 3.3} Let the real valued function $q(x,y)$ satisfy (1.7) in the triangular domain $D$, with the Neumann boundary conditions $$ q^{(j)}_N(s) = f_j(s), \quad s \in \left[ - \frac{l}{2}, \frac{l}{2} \right], \quad j = 1,2,3 \eqno (3.30)$$ where the functions $f_j(s)$ have sufficient smoothness and are continuous at the vertices of the triangle. Then the Dirichlet data $q^{(j)}(s)$, $j=1,2,3$ can be expressed in terms of the given Neumann data by the Fourier series $$ q^{(j)} (s) = \frac{1}{3l} \sum^\infty_{n=-\infty} \left[ N(k_{3n}) + c^{(j)}_1 e^{\frac{2i\pi s}{3l}} N(k_{3n-1}) + c_2^{(j)} e^{\frac{4i\pi s}{3l}} N(k_{3n-2})\right] e^{- \frac{2i\pi ns}{l}} \eqno (3.31)$$ where $c_i^{(j)}$ are given by (3.12) and $$N(k_m) = \frac{2T_N(k_m)}{\bar\alpha^m e(\bar \alpha k_m) -e(-\bar\alpha k_m)}.
\eqno (3.32)$$ The known function $T_N(k_m)$ is defined by the equation $$ -\left( ik+\frac{\lambda}{ik} \right) T_N(k) = e(-k) \left[ E^3(-i\alpha k) + E^3(i\alpha k) \right] F_1(k) + \left[ E^3(-i\bar \alpha k) + E^3(i\bar \alpha k)\right] F_2(k) $$ $$ + e(k) \left[ E^3(-i\alpha k) + E^3(i\alpha k)\right] F_3(k) + 2e^2(k) F_1(\alpha k) + 2F_2(\alpha k) + 2e^2(-k)F_3(\alpha k) $$ $$+ 2e(-k)F_1(\bar \alpha k) + \left( e^3(k) + e^3(-k)\right) F_2(\bar \alpha k) + 2e(k) F_3(\bar \alpha k), \eqno (3.33) $$ where $F_j(k)$ is given by (3.17) with $\sin \beta_j = 1$. \section{Poincar\'e Type Boundary Value Problems} Suppose that $q(x,y)$ satisfies the Poincar\'e type boundary condition (1.11). Then substituting the expression for $\rho_j(k)$ from equation (3.15) into the global relation (2.4), we find an equation similar to equation (3.20), where $H(k)$, $H(\bar\alpha k)$ and $H(\alpha k)$ are replaced by $H_1(k)$, $H_2(\bar \alpha k)$ and $H_3(\alpha k)$, and $F_j$, $C_j$, $Y_j$ are defined by equations (3.17)-(3.19).
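For reference, the exponential identity that produced the corner cancellation in Section 3.3 (displayed below equation (3.23)) holds for every $k \neq 0$ and every real $\lambda$. A brief numerical check (illustrative only; the sampled values of $\lambda$ and $k$ are arbitrary):

```python
import numpy as np

# Numerical check of the identity displayed below (3.23),
#   (1/sqrt(3))(-ik + lam/(-ik)) + (-k + lam/(-k))
#     = (1/sqrt(3))(-i*abar*k + lam/(-i*abar*k)) + (abar*k + lam/(abar*k)),
# with abar = exp(-2*pi*i/3); it holds for every k != 0, and is what makes
# the corner terms in (3.21) cancel.  The sampled lam and k are arbitrary.
abar = np.exp(-2j * np.pi / 3)
rng = np.random.default_rng(1)
for lam in (0.0, 0.7, 2.5):
    for _ in range(5):
        k = rng.standard_normal() + 1j * rng.standard_normal()
        h = lambda m: m + (lam / m if lam else 0.0)
        lhs = h(-1j * k) / np.sqrt(3) + h(-k)
        rhs = h(-1j * abar * k) / np.sqrt(3) + h(abar * k)
        assert np.isclose(lhs, rhs)
print("corner-cancellation identity verified")
```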
Proceeding as in Section 3.3, in analogy with equation (3.26), we now find $$D(k)H_2(\bar \alpha k) Y_2(\bar \alpha k) = \sum^3_{j=1} \Gamma_j(k)H_j(k)Y_j(k) + T(k) + C(k), \eqno (4.1)$$ where $T(k)$ is defined in terms of the known functions $f_j(s)$, $C(k)$ involves the values of $q$ at the corners, and $D(k)$, $\{ \Gamma_j(k)\}^3_1$ are defined by the following equations: $$D(k) = \frac{P_1(\bar \alpha k)}{P_2(\alpha k)P_3(\alpha k)} \left[ e^3(-k) - e^3(k) \frac{P_1(\alpha k)P_2(\alpha k)P_3(\alpha k)}{P_1(\bar\alpha k)P_2(\bar\alpha k)P_3(\bar\alpha k)}\right], \eqno (4.2)$$ $$\Gamma_1(k) = \frac{1}{P_1(k)} \left[ e(-\bar\alpha k) - e^2(-k)e(\bar\alpha k) \frac{P_1(k)P_1(\bar\alpha k)}{P_2(\alpha k)P_3(\alpha k)} \right], \eqno (4.3a)$$ $$\Gamma_2(k) = e^2(-k) \frac{P_1(\bar\alpha k)}{P_2(k)P_2(\alpha k)} \left[ e(-\bar\alpha k) - e^4(k)e(\bar\alpha k) \frac{P_2(k)P_2(\alpha k)}{P_1(\bar\alpha k) P_3(\bar \alpha k)} \right], \eqno (4.3b)$$ $$ \Gamma_3(k) = e^2(k) \frac{P_1(\alpha k)}{P_3(k)P_3(\bar\alpha k)} \left[ e(-\bar\alpha k) - e^2(-k) e(\bar\alpha k) \frac{P_3(k)P_3(\bar \alpha k)}{P_1(\alpha k)P_2(\alpha k)} \right], \eqno (4.3c)$$ with $$P_j(k) = \frac{H_j(k)}{\overline{H_j(\bar k)}}. \eqno (4.4)$$ In order to be able to solve this problem using a generalized Fourier integral we require that when $D(k)$ vanishes, then $\Gamma_2(k)$ and $\Gamma_3(k)$ are proportional to $\Gamma_1(k)$. Actually, $\Gamma_3(k)$ is proportional to $\Gamma_1(k)$ for all complex $k$ provided that $$P_1(k)P_1(\alpha k) P_1(\bar \alpha k) = P_3(k) P_3(\alpha k)P_3(\bar \alpha k). 
\eqno (4.5a)$$ Equating the brackets appearing in the definitions of $\Gamma_1(k)$ and $\Gamma_2(k)$, and replacing in the resulting expression $e^6(k)$ by $$\frac{P_1(\bar\alpha k)P_2(\bar \alpha k) P_3(\bar\alpha k)}{P_1(\alpha k)P_2(\alpha k)P_3(\alpha k)},$$ it follows that $\Gamma_2(k)$ is proportional to $\Gamma_1(k)$ provided that $$ P_1(k)P_1(\alpha k)P_1(\bar \alpha k) = P_2(k)P_2(\alpha k)P_2(\bar \alpha k). \eqno (4.5b)$$ Equation (4.5b) is valid if the following two equations are valid: $$\sin 3(\beta_1-\beta_2) = 0, \quad \gamma_2(3\lambda - \gamma^2_2)\sin 3\beta_1 - \gamma_1(3\lambda - \gamma^2_1) \sin 3\beta_2 =0. \eqno (4.6)$$ Indeed, in order to simplify equation (4.5b) we first compute the product \\ $H_1(k)H_1(\alpha k)H_1(\bar\alpha k)$, $$H_1(k)H_1(\alpha k)H_1(\bar\alpha k) = k^3e^{3i\beta_1} + \frac{\lambda^3}{k^3e^{3i\beta_1}} + 3\lambda \gamma_1-\gamma^3_1. \eqno (4.7)$$ The function $\overline{H_1(\bar k)}$ can be obtained from $H_1(k)$ by replacing $\beta_1$ with $-\beta_1$, thus $\overline{H_1(\bar k)}\, \overline{H_1(\bar\alpha \bar k)}\, \overline{H_1(\alpha \bar k)}$ is given by an expression similar to (4.7) with $\beta_1$ replaced by $-\beta_1$. Hence equation (4.5b) yields $$ \frac{k^3e^{3i\beta_1} + \frac{\lambda^3}{k^3e^{3i\beta_1}} + 3\lambda\gamma_1-\gamma^3_1}{ k^3e^{-3i\beta_1} + \frac{\lambda^3}{k^3e^{-3i\beta_1}} + 3\lambda\gamma_1-\gamma^3_1} = \frac{k^3e^{3i\beta_2} + \frac{\lambda^3}{k^3e^{3i\beta_2}} + 3\lambda\gamma_2-\gamma^3_2}{k^3 e^{-3i\beta_2} + \frac{\lambda^3}{k^3e^{-3i\beta_2}} + 3\lambda\gamma_2-\gamma^3_2}. $$ This equation simplifies to the equation $$ \left( k^6 - \frac{\lambda^6}{k^6}\right) \sin 3(\beta_1-\beta_2) + \left( k^3 - \frac{\lambda^3}{k^3}\right) \left[ (3\lambda\gamma_2-\gamma^3_2)\sin 3\beta_1 - (3\lambda \gamma_1-\gamma^3_1)\sin 3\beta_2\right] = 0, $$ which is valid for all $k$ iff equations (4.6) are valid.
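The product formula (4.7) can also be confirmed numerically; in the sketch below the values of $\lambda$, $\beta_1$, $\gamma_1$ and the sampled $k$ are arbitrary test choices.

```python
import numpy as np

# Numerical check of the product formula (4.7):
#   H1(k) H1(alpha k) H1(abar k)
#     = k^3 e^{3 i b} + lam^3/(k^3 e^{3 i b}) + 3 lam g - g^3,
# with H1(k) = k e^{i b} + lam/(k e^{i b}) - g and alpha = exp(2*pi*i/3).
# The parameter values lam, b, g are arbitrary test choices.
alpha = np.exp(2j * np.pi / 3)
lam, b, g = 1.3, 0.4, 0.9
H1 = lambda k: k * np.exp(1j * b) + lam / (k * np.exp(1j * b)) - g
rng = np.random.default_rng(2)
for _ in range(5):
    k = rng.standard_normal() + 1j * rng.standard_normal()
    prod = H1(k) * H1(alpha * k) * H1(np.conj(alpha) * k)
    closed = (k**3 * np.exp(3j * b) + lam**3 / (k**3 * np.exp(3j * b))
              + 3 * lam * g - g**3)
    assert np.isclose(prod, closed)
print("product formula (4.7) verified")
```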
Equation (4.6a) implies $\beta_2 = \beta_1 + n\pi/3$, then $\sin 3\beta_2=\sin 3\beta_1\exp\{in\pi\}$ and we find equation (1.12b). Similarly equation (4.5a) yields equation (1.12c). \paragraph{The Case that the Corner Terms Cancel} \ \ The definition of the corner terms $C_j(k)$, i.e. equation (3.18), shows that $C_j(k)$ involves $\exp[i\beta_j]/\sin\beta_j$. Thus the contribution of the corner terms in the global relation (3.20) vanishes iff $$ e^{2i\beta_1} = e^{2i\beta_2} = e^{2i\beta_3}. \eqno (4.8)$$ \paragraph{Example 1. } \ \ $$ \beta_1=\beta_2 = \beta_3 = \frac{2\pi}{3}, \quad \gamma_j \ \ {\mathrm{arbitrary}}. \eqno (4.9)$$ In this case $$ H_j(k) = k\alpha + \frac{\lambda}{k\alpha} - \gamma_j, \quad P_j(k) = \frac{k\alpha + \frac{\lambda}{k\alpha} - \gamma_j}{k\bar\alpha + \frac{\lambda}{k\bar\alpha} - \gamma_j}. \eqno (4.10)$$ \paragraph{Example 2.} \ \ $$\beta_1=\beta_2=\beta_3=\beta, \quad \beta \ \ {\mathrm{arbitrary}}, \quad \gamma_2=\gamma_3=0, \quad \gamma_1 = (3\lambda)^\half, \quad \lambda >0. \eqno (4.11)$$ In this case $$ H_1(k) = ke^{i\beta} + \frac{\lambda}{ke^{i\beta}} - \gamma_1, \quad H_2(k) = H_3(k) = ke^{i\beta} + \frac{\lambda}{ke^{i\beta}}. \eqno (4.12)$$ In particular if $\beta=\pi/2$, then $$H_1(k) = i\left( k-\frac{\lambda}{k}\right) - \gamma_1, \quad H_2(k) = H_3(k) = i\left(k - \frac{\lambda}{k}\right). \eqno (4.13)$$ Thus $$P_1(k) = \frac{i\left(k - \frac{\lambda}{k}\right)-\gamma_1}{-i\left( k - \frac{\lambda}{k}\right)-\gamma_1}, \quad P_2(k) = P_3(k) =-1. 
\eqno (4.14)$$ Hence $$D(k) = P_1(\bar \alpha k) \left[ e^3(-k) - e^3(k) \frac{P_1(\alpha k)}{P_1(\bar \alpha k)}\right], \eqno (4.15a)$$ $$\Gamma_1(k) = \frac{1}{P_1(k)} \left[ e(-\bar\alpha k) - e^2(-k)e(\bar \alpha k)P_1(k)P_1(\bar \alpha k)\right], \eqno (4.15b)$$ $$\Gamma_2(k) = e^2(-k)P_1(\bar \alpha k) \left[ e(-\bar\alpha k) + \frac{e^4(k)e(\bar\alpha k)}{P_1(\bar\alpha k)} \right], \eqno (4.15c)$$ $$\Gamma_3(k) = e^2(k)P_1(\alpha k) \left[ e(-\bar\alpha k) + \frac{e^2(-k)e(\bar\alpha k)}{P_1(\alpha k)} \right]. \eqno (4.15d)$$ If $P_1(k)$ is defined by (4.14) with $\gamma^2_1 = 3\lambda$, it can be verified that $$ P_1(k)P_1(\bar\alpha k) = - \frac{1}{P_1(\alpha k)}.$$ Hence $$\Gamma_3(k) = - \frac{e^2(k)}{P_1(\bar\alpha k)} \Gamma_1(k), \quad \Gamma_2(k)|_{D(k) =0} = - \frac{e^2(-k)\Gamma_1(k)}{P_1(\alpha k)}. \eqno (4.16)$$ \paragraph{The Laplace Equation} \ \ In the particular case of the Laplace equation with $\gamma_j =0$, it follows that $P_j = e^{2i\beta_j}$, i.e. $P_j$ is independent of $k$. \section{The Analysis of the Global Relation Via Fourier Integrals} In this section we restrict $\lambda$ to be non-negative. Slightly more complicated formulae can be derived for $\lambda < 0$. We first derive equation (1.25). Letting $k = |k|e^{\frac{i\pi}{6}}$, the definition of $\Psi(\bar\alpha k)$ yields $$\Psi(-i|k|) = \int^{\frac{l}{2}}_{- \frac{l}{2}} \exp \left\{\left( -i|k| + \frac{\lambda}{-i|k|}\right)s\right\} q_N(s)ds. \eqno (5.1)$$ Suppose that $\lambda >0$. Letting $t(|k|) = |k| - \lambda/|k|$, it follows that if $|k| \in (0,+\infty)$, then $t \in (-\infty,\infty)$. Thus inverting equation (5.1) we find $$q_N(s) = \frac{1}{2\pi} \inta e^{its} \Psi(-i|k|)dt, \quad s \in \left[ -\frac{l}{2}, \frac{l}{2} \right] , \quad q_N(s) =0 \ \ {\mathrm{elsewhere,}}$$ where $|k|$ (in the argument of $\Psi$) is a function of $t$.
Rewriting $t$ in terms of $|k|$ we find $$q_N(s) = \frac{1}{2\pi} \into \exp \left\{i\left( |k| - \frac{\lambda}{|k|}\right)s\right\} \Psi(-i|k|) \left( 1+ \frac{\lambda}{|k|^2}\right) d|k|.$$ Letting $|k| = ke^{-i\pi/6}$ we obtain $$ q_N(s) = \frac{1}{2\pi} \int_0^{\infty e^{\frac{i\pi}{6}}} \exp \left\{-\left( \bar\alpha k + \frac{\lambda}{\bar\alpha k}\right)s \right\} \Psi(\bar\alpha k) \left( ke^{- \frac{i\pi}{6}} + \frac{\lambda}{ke^{-\frac{i\pi}{6}}}\right) \frac{dk}{k}. \eqno (5.2)$$ The rhs of this equation equals $$ \frac{1}{2\pi} \int^0_{-\infty e^{\frac{i\pi}{6}}} \exp \left\{- \left( \bar\alpha \xi + \frac{\lambda}{\bar\alpha \xi}\right) s\right\} \Psi(\bar\alpha \xi) \left( \xi e^{- \frac{i\pi}{6}} + \frac{\lambda}{\xi e^{- \frac{i\pi}{6}}}\right) \frac{d\xi}{\xi}. \eqno (5.3)$$ Indeed, for the derivation of (5.3) we first observe that the function $\Psi(\bar\alpha k)$ remains invariant under the transformation $k\rightarrow \bar\alpha \lambda/k$. Thus making the change of variables $k = \bar\alpha \lambda/\xi$ in the rhs of equation (5.2), and using $$ke^{- \frac{i\pi}{6}} + \frac{\lambda}{ke^{- \frac{i\pi}{6}}} \rightarrow - \left( \xi e^{- \frac{i\pi}{6}} + \frac{\lambda}{\xi e^{- \frac{i\pi}{6}}}\right), \quad \frac{dk}{k} =- \frac{d\xi}{\xi}, $$ we find the expression (5.3). Combining (5.2) and (5.3) we obtain $$ q_N(s) = \frac{1}{4\pi} \int^{\infty e^{\frac{i\pi}{6}}}_{-\infty e^{\frac{i\pi}{6}}} \exp \left\{- \left( \bar\alpha k + \frac{\lambda}{\bar\alpha k}\right)s\right\} \Psi(\bar\alpha k) \left( ke^{- \frac{i\pi}{6}} + \frac{\lambda}{ke^{- \frac{i\pi}{6}}}\right) \frac{dk}{k}. \eqno (5.4)$$ Using $e^{- \frac{i\pi}{6}} = i\bar\alpha$, this equation becomes equation (1.25). \\ If $\lambda =0$, we set $k = t \exp\{ i \frac{7\pi}{6}\}$, $t \in {\mathbb{R}}$, and rewrite (1.14a) as $$ \Psi(it) = \int^{l/2}_{-l/2} e^{its} q_N(s)ds \eqno (5.5)$$ which is inverted to $$ q_N(s) = \frac{1}{2\pi} \int^{+\infty}_{-\infty} e^{-its} \Psi(it)dt.
\eqno (5.6)$$ Replacing in (5.6) $\{it\}$ with $\{\bar\alpha k\}$, we arrive at $$q_N(s) = \frac{i\bar\alpha}{2\pi} \int^{\infty e^{ \frac{i\pi}{6}}}_{\infty e^{ \frac{i7\pi}{6}}} e^{- \bar \alpha ks} \Psi(\bar \alpha k)dk.$$ Compared with (1.25), this equation is missing a factor of 1/2; this is due to the linearity of the relevant transformation in this case. \subsection{The Symmetric Dirichlet Problem} Solving equation (1.17) for $\Psi(\bar\alpha k)$ and substituting the resulting expression in equation (1.25) we find $$q_N(s) = \frac{i\bar\alpha}{4\pi} \int^{\infty e^{\frac{i\pi}{6}}}_{\infty e^{\frac{7i\pi}{6}}} \frac{\exp \left\{- \left( \bar \alpha k + \frac{\lambda}{\bar \alpha k}\right)s\right\}}{\Delta (k)} \left[ (e(\bar \alpha k)-e(-\bar\alpha k)) \Psi(k) - 2iG(k)\right] \left( 1 - \frac{\lambda}{(\bar\alpha k)^2} \right) dk, \eqno (5.7)$$ where $\Delta(k)$ denotes the coefficient of $\Psi(\bar\alpha k)$ in equation (1.17), i.e. $$\Delta (k) = e(k) - e(-k).$$ The line $\left( \infty e^{\frac{7i\pi}{6}}, \infty e^{\frac{i\pi}{6}}\right) $ splits the complex $k$-plane into the two half planes, $$ {\mathcal{D}}^+ = \left\{ k \in {\mathbb{C}}, \quad \frac{\pi}{6} < \arg k < \frac{7\pi}{6} \right\}, $$ $$ {\mathcal{D}}^- = \left\{ k \in {\mathbb{C}}, \quad \frac{7\pi}{6} < \arg k < \frac{13\pi}{6} \right\}.$$ We observe that $$ \exp \left\{- \left( \bar\alpha k + \frac{\lambda}{\bar\alpha k}\right)s\right\} e(\bar\alpha k) \ \ {\mathrm{is \ \ bounded \ \ for}} \ \ k \in {\mathcal{D}}^-,$$ $$ \exp \left\{- \left( \bar\alpha k + \frac{\lambda}{\bar\alpha k}\right)s\right\} e(-\bar\alpha k) \ \ {\mathrm{is \ \ bounded \ \ for}} \ \ k \in {\mathcal{D}}^+. \eqno (5.8)$$ Indeed, the exponential in (5.8a) involves $\bar \alpha k\left( \frac{l}{2} - s\right)$, and since $l/2 - s \geq 0$, the exponential in (5.8a) is bounded for Re$(\bar\alpha k) \leq 0$, i.e. in ${\mathcal{D}}^-$.
Similarly, the exponential of (5.8b) involves $-\bar\alpha k\left( \frac{l}{2} + s\right)$, which since $l/2 + s \geq 0$, is bounded for Re$(\bar\alpha k) \geq 0$, i.e. in ${\mathcal{D}}^+$. We also note that $\Psi(k)/\Delta(k)$ is bounded for all $k \in {\mathbb{C}}$, $k \neq s_n$. Indeed, for Re $k>0$, $\Delta(k)$ is dominated by $e(k)$, while for Re $k <0$, $\Delta (k)$ is dominated by $e(-k)$, hence $$\frac{\Psi(k)}{\Delta(k)} \sim \left\{ \begin{array}{ll} \Psi(k)e(-k), & {\mathrm{Re}} k>0 \\ \\ -\Psi(k)e(k), & {\mathrm{Re}} k < 0. \end{array} \right. \eqno (5.9)$$ Furthermore $\Psi(k) e(-k)$ involves $k(s-l/2)$ which is bounded for Re $k\geq 0$, while $\Psi(k)e(k)$ involves $k(s+l/2)$ which is bounded for Re $k\leq 0$ (recall that $-\frac{l}{2} \leq s \leq \frac{l}{2}$). The above considerations imply that the parts of the integral (5.7) containing $e(\bar \alpha k)\Psi(k)$ and $e(-\bar\alpha k)\Psi(k)$ can be computed by using Cauchy's theorem in ${\mathcal{D}}^-$ and ${\mathcal{D}}^+$ respectively. The associated residues can be computed as follows: Let $s^+_n$ and $s^-_n$ denote the subsets of $s_n$ in ${\mathcal{D}}^+$ and ${\mathcal{D}}^-$, respectively. 
Evaluating equation (1.17) at $k=s^\pm_n$ we find $$e(\bar\alpha s^-_n) \Psi(s^-_n) = \frac{2iG(s^-_n)}{-e^2(-\bar\alpha s^-_n) + 1}, \quad -e(-\bar\alpha s^+_n) \Psi(s^+_n) = \frac{2iG(s^+_n)}{1-e^2(\bar \alpha s^+_n)}.$$ Thus $$ q_N(s) =- \frac{\bar\alpha}{2\pi} \int^{\infty e^{\frac{i\pi}{6}}}_{\infty e^{\frac{7i\pi}{6}}} \exp \left\{- \left( \bar\alpha k + \frac{\lambda}{\bar\alpha k}\right)s\right\} \frac{G(k)}{\Delta(k)} \left[ 1- \frac{\lambda}{(\bar\alpha k)^2} \right] dk$$ $$ -i\bar\alpha \sum_{s^+_n} \exp \left\{-\left( \bar\alpha s^+_n + \frac{\lambda}{\bar\alpha s^+_n}\right)s\right\} \frac{G(s^+_n)}{\Delta'(s^+_n)\left[ 1-e^2(\bar\alpha s^+_n)\right]} \left[ 1 - \frac{\lambda}{(\bar\alpha s^+_n)^2} \right] $$ $$ + i\bar\alpha \sum_{s^-_n} \exp \left\{-\left( \bar\alpha s^-_n + \frac{\lambda}{\bar\alpha s^-_n}\right)s\right\} \frac{G(s^-_n)}{\Delta'(s^-_n)\left[ 1-e^2(-\bar\alpha s^-_n)\right]} \left[ 1 - \frac{\lambda}{(\bar\alpha s^-_n)^2} \right]. \eqno (5.10) $$ \subsection{The Poincar\'e Problem} Evaluating equation (4.1) at $k=k_m$, where $k_m$ is a zero of $D(k)$, it follows that the unknown terms $Y_j(k)$ appear in the form $$\Gamma_1(k_m) H_1(k_m) \left\{ Y_1(k_m) + e^2(-k_m) \frac{P_1(\bar\alpha k_m)P_1(k_m)}{P_2(k_m)P_2(\bar\alpha k_m)} \frac{H_2(k_m)}{H_1(k_m)} Y_2(k_m) \right. $$ $$+ \left. e^2(k_m) \frac{P_1(\alpha k_m)P_1(k_m)}{P_3(k_m)P_3(\bar\alpha k_m)} \frac{H_3(k_m)}{H_1(k_m)} Y_3(k_m) \right\}. $$ The crucial difference of this general case, as compared with the oblique Robin case (1.9), is the following: Using the definition of $Y_j(k_m)$ we find that the coefficients of $q^{(2)}(s)$ and of $q^{(3)}(s)$ involve in general $k_m$-dependent expressions; thus it is not clear how the associated integral can be inverted. In contrast, equation (4.1) can be solved using the approach of Section 5.1. The definition of $Y_2(\bar\alpha k)$, i.e.
equation (3.19), and equation(1.25), imply $$q^{(2)}(s) = \frac{i\bar\alpha\sin\beta_2}{2\pi} \int^{\infty e^{\frac{i\pi}{6}}}_{\infty e^{\frac{7i\pi}{6}}} \exp \left\{- \left( \bar\alpha k + \frac{\lambda}{\bar\alpha k}\right)s\right\} \left(1- \frac{\lambda}{(\bar\alpha k)^2} \right) Y_2(\bar\alpha k)dk. \eqno (5.11)$$ Solving equation (4.1) for $Y_2(\bar\alpha k)$ and substituting the resulting expression in equation (5.11) we find an integral involving the three unknown functions $\{ Y_j(k)\}^3_1$. The unknown part of this integral involves the factors (5.8) analyzed already, as well as factors of the type $$e^2(-k) \frac{Y_j(k)}{D(k)}, \quad \frac{e^2(k)Y_j(k)}{D(k)}. $$ These terms are bounded for all $k \neq k_m$. Indeed, ignoring the terms involving $P_j(k)$ we find $$ e^2(-k) \frac{Y_j(k)}{D(k)} \sim \left\{ \begin{array}{ll} Y_j(k) e(-k) \cdot e^4(-k), & {\mathrm{Re}} k>0 \\ \\ Y_j(k)e(k), & {\mathrm{Re}} k<0, \end{array} \right.$$ $$ e^2( k) \frac{Y_j(k)}{D(k)} \sim \left\{ \begin{array}{ll} Y_j(k) e(-k) , & {\mathrm{Re}} k>0 \\ \\ Y_j(k)e(k) \cdot e^4(k), & {\mathrm{Re}} k<0, \end{array} \right.$$ which are identical with the expressions (5.9) except for the occurrence of the factors $e^4(-k)$ and $e^4(k)$ for Re $k>0$ and Re $k<0$, which are bounded. The above discussion implies that the integral involving $$ \frac{e(-\bar\alpha k)}{D(k)} \left[ \frac{H_1(k)}{P_1(k)} Y_1(k) + e^2(-k) \frac{P_1(\bar\alpha k)H_2(k)}{P_2(k)P_2(\alpha k)} Y_2(k) + e^2(k) \frac{P_1(\alpha k)H_3(k)}{P_3(k)P_3(\bar\alpha k)} Y_3(k)\right], \eqno (5.12)$$ can be computed by using Cauchy's theorem in ${\mathcal{D}}^+$. Evaluating equation (4.1) at $k^+_m$ it follows that the associated residue equals $$ \frac{-[T(k^+_m) + C(k^+_m)]}{1 - \frac{ P_1(k^+_m)P_1(\bar\alpha k^+_m)}{P_2(\alpha k^+_m)P_3(\alpha k^+_m)} e^2(-k^+_m) e^2(\bar\alpha k^+_m)}. 
\eqno (5.13)$$ Similarly, the integral involving $$- \frac{P_1(k)P_1(\bar\alpha k)}{P_2(\alpha k)P_3(\alpha k)} \frac{e(\bar\alpha k)}{D(k)} \left[ \frac{e^2(- k)}{P_1(k)}H_1(k) Y_1(k) + \frac{e^2(k)P_2(\alpha k)P_3(\alpha k)}{P_1(k)P_1(\bar\alpha k)P_3(\bar\alpha k)} H_2(k) Y_2 (k) \right.$$ $$ \left. + \frac{P_3(\alpha k)}{P_1(\bar\alpha k)P_1(\alpha k)} H_3(k)Y_3(k) \right] \eqno (5.14)$$ can be computed by using Cauchy's theorem in ${\mathcal{D}}^-$. Evaluating equation (4.1) at $k^-_m$, it follows that the associated residue equals $$ \frac{[T(k^-_m) + C(k^-_m)]}{ 1 - \frac{P_2(\alpha k^-_m)P_3(\alpha k^-_m)}{P_1(k^-_m)P_1(\bar\alpha k^-_m)} e^2(k^-_m)e^2(-\bar\alpha k^-_m)}. \eqno (5.15)$$ In what follows we give the details for a mixed Neumann-Robin problem. \vskip .1in We will consider Example 2, as it is described by (4.11) with $\beta = \frac{\pi}{2}$. On side (1) we assume the Robin condition $$ q^{(1)}_N(s) + \sqrt{3\lambda} q^{(1)}(s) = f_1(s), \eqno (5.16)$$ and on sides (2) and (3) we assume the Neumann conditions $$q^{(2)}_N(s) = f_2(s) \eqno (5.17)$$ and $$q^{(3)}_N(s) = f_3(s). \eqno (5.18)$$ Then $H_j(k)$, $P_j(k)$, $D(k)$ and $\Gamma_j(k)$, are given by (4.13), (4.14), (4.15a) and (4.15b,c,d) respectively. Furthermore, $\Gamma_3(k)$ is proportional to $\Gamma_1(k)$ (see equation (4.16a)), while $\Gamma_2(k)$ becomes proportional to $\Gamma_1(k)$ only on those $k_m$'s for which $D(k)$ vanishes. These are roots of the transcendental equation $$ \exp\left\{3\left( k+ \frac{\lambda}{k}\right)\right\} = \frac{\left( k + \frac{\lambda}{k} - \sqrt{\lambda}\right)\left( k + \frac{\lambda}{k} - 2\sqrt{\lambda}\right)}{\left( k + \frac{\lambda}{k} + \sqrt{\lambda}\right)\left( k + \frac{\lambda}{k} + 2\sqrt{\lambda}\right)}. 
\eqno (5.19) $$ For $k=k_m$, equation (4.1), in view of (4.16), implies $$ \Gamma_1(k_m) \left[ H_1(k_m)Y_1(k_m) - \frac{e^2(-k_m)}{P_1(\alpha k_m)} H_2(k_m)Y_2(k_m) - \frac{e^2(k_m)}{P_1(\bar\alpha k_m)} H_3(k_m)Y_3(k_m)\right] = -T(k_m). \eqno (5.20)$$ By virtue of (4.15b) and the identity $$P_1(k_m)P_1(\alpha k_m)P_1(\bar\alpha k_m) = -1 \eqno (5.21)$$ equation (5.20) is written as $$ \left[ e(-\bar\alpha k_m) + e(\bar\alpha k_m) \frac{e^2(-k_m)}{P_1(\alpha k_m)} \right] \left[ \frac{H_1(k_m)Y_1(k_m)}{P_1(k_m)} \right.$$ $$+e^2(-k_m)P_1(\bar\alpha k_m)H_2(k_m)Y_2(k_m)+ e^2(k_m) P_1(\alpha k_m)H_3(k_m)Y_3(k_m)] =- T(k_m). \eqno (5.22)$$ Since the corner terms $C(k)$ vanish, the representation (5.11) and equation (4.1) yield $$q^{(2)}(s) = \frac{i\bar\alpha}{2\pi} \int^{\infty e^{i\frac{\pi}{6}}}_{\infty e^{i\frac{7\pi}{6}}} \exp \left\{- \left(\bar\alpha k + \frac{\lambda}{\bar\alpha k}\right)s\right\} \left( 1 - \frac{\lambda}{(\bar\alpha k)^2}\right)$$ $$ \frac{1}{H_2(\bar\alpha k)D(k)} \left[ \sum^3_{j=1} \Gamma_j(k)H_j(k)Y_j(k) + T(k)\right] dk. \eqno (5.23)$$ Utilizing the expression (5.22) we arrive at the following result. \paragraph{Proposition 5.1.} Let the real valued function $q(x,y)$ satisfy equation (1.7) with $\lambda >0$ in the triangular domain $D$, with the Robin boundary condition (5.16) on side (1) and the Neumann boundary conditions (5.17) and (5.18) on sides (2) and (3), where the given functions $f_j(s)$ have sufficient smoothness and are continuous at the vertices. 
Then the Dirichlet value on side (2) is given by $$q^{(2)}(s) = \frac{i\bar\alpha}{2\pi} \int^{\infty e^{i\frac{\pi}{6}}}_{\infty e^{i\frac{7\pi}{6}}} \exp\left\{-\left(\bar\alpha k+ \frac{\lambda}{\bar\alpha k}\right)s\right\} \left( 1 - \frac{\lambda}{(\bar\alpha k)^2}\right) \frac{T(k)}{H_2(\bar\alpha k)D(k)} dk$$ $$ + \bar\alpha \sum_{k^+_m} \frac{\exp\left\{-\left( \bar\alpha k^+_m + \frac{\lambda}{\bar\alpha k^+_m}\right)s\right\}}{H_2(\bar\alpha k^+_m)D'(k^+_m)} \left( 1 - \frac{\lambda}{(\bar\alpha k^+_m)^2}\right) \frac{T(k^+_m)}{1+ \frac{E^6(i\alpha k^+_m )}{P_1(\alpha k^+_m)}} $$ $$ - \bar\alpha \sum_{k^-_m} \frac{ \exp\left\{- \left( \bar\alpha k^-_m + \frac{\lambda}{\bar\alpha k^-_m}\right)s \right\}}{H_2(\bar\alpha k^-_m)D'(k^-_m)} \left( 1 - \frac{\lambda}{(\bar\alpha k^-_m)^2} \right) \frac{T(k^-_m)}{1 + P_1(\alpha k^-_m)E^6(-i\alpha k^-_m)}, \eqno (5.24)$$ where $D'(k^\pm_m)$ denotes the derivative of $ D(k)$ evaluated at $k=k^\pm_m$. The summations are taken over all $k^+_m \in {\mathcal{D}}^+$ and $k^-_m\in {\mathcal{D}}^-$ respectively, and $T$, $H_2$, $P_1$ are defined by equations (3.27), (3.16), (4.4) respectively. There exist similar formulas for the Dirichlet values on sides (1) and (3). \section{The Integral Representations} If $\lambda \geq 0$ the classical Green's representation is given by [3] $$ q({\mathbf{r}}) = \frac{1}{2\pi} \int_{\partial D} \left[ K(2\sqrt{\lambda}|{\mathbf{r}}-{\mathbf{r}}'|)\partial_{n'} q({\mathbf{r}}') - q({\mathbf{r}}')\partial_{n'} K(2\sqrt{\lambda}|{\mathbf{r}}-{\mathbf{r}}'|)\right] dl({\mathbf{r}}') \eqno (6.1)$$ where the integration is over the boundary $\partial D$ of the triangle in the positive direction, $\partial_{n'}$ denotes the outward normal derivative on $\partial D$, $dl({\mathbf{r}}')$ is the line element along $\partial D$, and $K(x)$ is the modified Bessel function of the zeroth order and of the second kind for the modified Helmholtz equation.
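Since for the modified Helmholtz equation $K(x)$ is the order-zero modified Bessel function of the second kind, $K_0(x)$, the kernel in (6.1) is straightforward to evaluate numerically. A minimal sketch (illustrative values of $\lambda$ and of the distance $|{\mathbf{r}}-{\mathbf{r}}'|$), using the standard integral representation $K_0(x)=\int_0^\infty e^{-x\cosh t}dt$:

```python
import math

def K0(x, t_max=25.0, n=20000):
    """Modified Bessel function K_0 (second kind, order zero) via the
    integral representation K_0(x) = int_0^inf exp(-x cosh t) dt, x > 0,
    evaluated with the composite trapezoidal rule."""
    h = t_max / n
    total = 0.5 * (math.exp(-x) + math.exp(-x * math.cosh(t_max)))
    for i in range(1, n):
        total += math.exp(-x * math.cosh(i * h))
    return h * total

# kernel of the Green's representation (6.1): K(2 sqrt(lambda) |r - r'|)
lam, dist = 1.0, 0.5           # illustrative parameter and distance
kernel = K0(2.0 * math.sqrt(lam) * dist)
```

The trapezoidal rule converges quickly here because the integrand decays double-exponentially in $t$ and has vanishing derivative at $t=0$.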
For the case of the Helmholtz equation $K(x)$ is proportional to the Hankel function of the zeroth order and of the first kind, while for Laplace's equation $K(x)$ is proportional to the logarithm of $x$. For the Laplace equation, the integral representation constructed in [5] is defined as follows: $$ \frac{\partial q}{\partial z} = \frac{1}{2\pi} \sum^3_{j=1} \int_{l_j} e^{ikz} \tilde \rho_j(k)dk, \quad z \in D, \eqno (6.2)$$ where the contours $l_j$ are the rays from 0 to $\infty$ specified by the arguments $-\pi/2$, $\pi/6$, $5\pi/6$ respectively, and the functions $\tilde \rho_j(k)$ are defined by equations (2.6) in terms of $\rho_j(k)$, where the latter functions are defined by equations (2.7), (1.14) with $\lambda =0$. \begin{center} \begin{minipage}[b]{6cm} \psfrag{a}{$\pi/6$} \psfrag{b}{$k_{R}$} \psfrag{c}{$k_{I}$} \psfrag{A}{$l_{1}$} \psfrag{B}{$l_{2}$} \psfrag{C}{$l_{3}$} \centerline{\includegraphics{fig6.eps}} \centerline{\textbf{Figure 6.1:} The rays $l_j$ in the complex $k$-plane} \end{minipage} \end{center} For the modified Helmholtz equation, the analogue of equation (6.2) is [5] $$ q(z,\bar z) = \frac{1}{2\pi i} \sum^3_{j=1} \int_{l_j} e^{ikz + \frac{\lambda}{ik}\bar z} \tilde \rho_j(k) \frac{dk}{k}, \quad z \in D, \eqno (6.3)$$ where the rays $l_j$ are the same as in (6.2) and $\tilde \rho_j(k)$ are defined by equations (2.6) and (2.7). There exists a similar representation for the Helmholtz equation which, however, in addition to rays also involves circular arcs [5]. \subsection{The Symmetric Dirichlet Problem} Using the integral representation (6.3) it is possible to compute directly $q(z,\bar z)$, bypassing the computation of the unknown boundary values. For brevity of presentation we will only give details for the symmetric Dirichlet problem. The analysis of the more general boundary value problems (1.8)-(1.11) is similar. Recalling the definitions of $\tilde \rho_j$, i.e.
equations (2.6) and (2.7), it follows that the representation $q(z,\bar z)$ given by equation (6.3) involves the known function $F(k)$ defined by equation (3.2), as well as the unknown function $\Psi(k)$ which on the rays $l_j$ appears as: $$ l_1: E(-ik)\Psi(k), \quad l_2: E(-i\bar\alpha k)\Psi(\bar \alpha k), \quad l_3: E(-i\alpha k)\Psi(\alpha k). \eqno (6.4)$$ Solving equation (1.17) for $\Psi(\bar\alpha k)$ in terms of $\Psi(k)$, and then using the Schwarz conjugation of the resulting equations in order to express $\Psi(\alpha k)$ in terms of $\Psi(k)$, it follows that the expressions in (6.4) involve the unknown function $\Psi(k)/\Delta(k)$, $\Delta(k)=e(k)-e(-k)$, times the following expressions: $$l_1: E(-ik) [e(k)-e(-k)], \quad l_2: E(-i\bar\alpha k)[e(\bar\alpha k)-e(-\bar\alpha k)], \quad l_3: E(-i\alpha k)[e(\alpha k)-e(-\alpha k)]. \eqno (6.5)$$ The third of the relations in (1.5) implies $e(k) = E(i\bar\alpha k)E(-i\alpha k)$, thus $$ E(-ik)e(k) = E^2(i\bar\alpha k), \quad E(-ik)e(-k) = E^2(i\alpha k).$$ Replacing $k$ by $\bar\alpha k$ and by $\alpha k$ in these identities, we find $$ E(-i\bar\alpha k)e(\bar\alpha k) = E^2(i\alpha k), \quad E(-i\bar\alpha k)e(-\bar\alpha k) = E^2(ik), $$ $$E(-i\alpha k)e(\alpha k) = E^2(ik), \quad E(-i\alpha k) e(-\alpha k) = E^2(i\bar\alpha k).$$ Thus the expressions in (6.5) involve $$ l_1: E^2(i\bar\alpha k) - E^2(i\alpha k), \quad l_2: E^2(i\alpha k) - E^2(ik), \quad l_3: E^2(ik) - E^2(i\bar\alpha k).$$ Hence, the unknown part of $q(z,\bar z)$ involves the following integral $$ J(z,\bar z) = \sum^3_{j=1} J_j (z,\bar z), \eqno (6.6)$$ $$J_3(z,\bar z) = \frac{1}{4\pi} \int_{\{ -l_1\}\cup\{l_2\} } \exp\left\{ik z+ \frac{\lambda}{i k} \bar z \right\} E^2(i\alpha k) \frac{\Psi(k)dk}{k\Delta (k)}, \eqno (6.7a)$$ $$J_1(z,\bar z) = \frac{1}{4\pi} \int_{\{ -l_2\}\cup\{l_3\} } \exp \left\{ikz + \frac{\lambda}{i k} \bar z \right\} E^2(ik) \frac{\Psi(k)dk}{k\Delta (k)}, \eqno (6.7b)$$ $$J_2(z,\bar z) = \frac{1}{4\pi} \int_{\{
-l_3\}\cup\{l_1\} } \exp \left\{ikz + \frac{\lambda}{i k} \bar z \right\} E^2(i\bar \alpha k) \frac{\Psi(k)dk}{k\Delta (k)}. \eqno (6.7c)$$ Each of the above integrals can be computed in terms of residues. Indeed, it was shown in Section 5 that $\Psi(k)/k\Delta(k)$ is bounded as $k\rightarrow 0$ and as $k\rightarrow \infty$. Furthermore, it will be verified below that the exponentials, $$ \exp \left\{ikz + \frac{\lambda}{ik} \bar z\right\} E^2(i\alpha k), \quad \exp \left\{ikz + \frac{\lambda}{ik}\bar z\right\} E^2(ik), \quad \exp \left\{ikz + \frac{\lambda}{ik} \bar z\right\} E^2(i\bar \alpha k), \eqno (6.8)$$ are bounded as $k\rightarrow 0$ and $k\rightarrow \infty$, for arg $k$ in $$ \left[ - \frac{\pi}{2}, \frac{\pi}{6}\right], \quad \left[ \frac{\pi}{6}, \frac{5\pi}{6}\right], \quad \left[ \frac{5\pi}{6}, \frac{3\pi}{2} \right], $$ respectively, provided that $(z,\bar z) \in D$. We first consider the first exponential in (6.8); since $z_2 =-\bar\alpha \frac{l}{\sqrt{3}}$, this exponential can be written as $\exp\{ik(z-z_2) + \lambda(\bar z - \bar z_2)/ik\}$. If $z$ is in the triangular domain then $$ \frac{\pi}{2} \leq \arg (z-z_2) \leq \frac{5\pi}{6}.$$ Thus if $- \frac{\pi}{2} \leq \arg k \leq \frac{\pi}{6}$, we find $$ 0 \leq \arg [k(z-z_2)] \leq \pi.$$ Hence $\exp\{ik(z-z_2)\}$ is bounded as $|k|\rightarrow\infty$ and $\exp\{\lambda(\bar z - \bar z_2)\bar k/i|k|^2\}$ is bounded as $|k|\rightarrow 0$. Hence there is no contribution from zero and from infinity. The results for the second and for the third exponentials in (6.8) follow from the above result by using appropriate rotations. The roots of $\Delta(k) =0$ lie on the imaginary axis. Denote by $s^+_n$ those with positive imaginary part and by $s^-_n$ those with negative imaginary part.
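Since these roots lie on the imaginary axis, writing $s_n=it$ with $t$ real turns the corresponding exponentials into separable solutions, growing or decaying in $x$ and oscillating in $y$ (made explicit in (6.10) below). A quick numeric sanity check of this separable form, assuming equation (1.7) is the modified Helmholtz equation $q_{xx}+q_{yy}=4\lambda q$ (consistent with the exponent $ikz+\lambda\bar z/(ik)$), with illustrative parameters:

```python
import cmath
import math

lam, l, n = 2.0, 3.0, 1                  # illustrative lambda, side length, mode index
a = n * math.pi / l
mu = 2.0 * math.sqrt(a * a + lam)        # exponent 2*sqrt((n pi / l)^2 + lam), cf. (6.10)

def q(x, y):
    # separable solution exp{mu x} * exp{-2 i n pi y / l}
    return cmath.exp(mu * x) * cmath.exp(-2j * a * y)

# second-order central differences for q_xx + q_yy at a sample point
h, x0, y0 = 1e-4, 0.2, -0.1
lap = (q(x0 + h, y0) + q(x0 - h, y0) + q(x0, y0 + h) + q(x0, y0 - h)
       - 4.0 * q(x0, y0)) / h**2
ratio = lap / q(x0, y0)                  # should be close to 4*lam
```

The exact identity $\mu^2-4(n\pi/l)^2=4\lambda$ is what makes the finite-difference ratio come out as $4\lambda$.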
Obviously, the residue from each $s^+_n$ has a full contribution to $J_1(z,\bar z)$, while the residue contribution from each $s^-_n$ is split into two halves, one half contributing to $J_2(z,\bar z)$ and one half to $J_3(z,\bar z)$. Tedious but straightforward calculations lead to the expression $$J(z,\bar z) = \sum_{s^+_n} \exp\left\{is^+_nz + \frac{\lambda}{is^+_n}\bar z\right\} \frac{E^2(is^+_n)G(s^+_n)}{s^+_n\Delta'(s^+_n)\Delta(\bar \alpha s^+_n)}$$ $$ + \half \sum_{s^-_n} \exp\left\{is^-_n z + \frac{\lambda}{is^-_n} \bar z\right\} \frac{\left( E^2(i\alpha s^-_n) + E^2(i\bar\alpha s^-_n)\right)G(s^-_n)}{s^-_n\Delta'(s^-_n)\Delta(\bar\alpha s^-_n)}. \eqno (6.9)$$ In the above calculations, the value of $\Psi_1(s^\pm_n)$ is obtained from (1.18). We observe that for each $n \in {\mathbb{Z}}$, $$ \exp \left\{is_nz + \frac{\lambda}{is_n}\bar z\right\} = \exp \left\{\pm 2\sqrt{(\frac{n\pi}{l})^2+\lambda} x\right\} \cdot \exp \left\{-2i\frac{n\pi}{l}y\right\} \eqno (6.10)$$ thus this expression shows that the equilateral triangle admits separable solutions. It is clear that each eigensolution in (6.10) solves equation (1.7). \section{Conclusion} Eigenvalues and eigenfunctions for equation (1.7) with homogeneous Dirichlet, Neumann, and Robin boundary conditions were constructed in the classical works of Lam\'e [10]-[12]. Some of these results have been rederived by several authors, in particular the Dirichlet problem is discussed in the recent review [13]. The Robin problem is analysed in [14]. It is remarkable that Lam\'e argued, using physical considerations, that it is impossible to solve certain problems using infinite series as opposed to integrals. Indeed Lam\'e writes [12 p.191]: ``The series should therefore express the fact that the temperature remains zero on strips of constant width separated by other strips of double width, in which the temperature may vary.
The analytic interpretation of this sort of discontinuity demands the introduction of terms where the variables appear {\it inside integrals}\footnote{These words were not italic in the original text}. These terms, of a nature that we will not consider here, cannot disappear from the total series unless the discontinuity disappears''. In this paper we have solved several boundary value problems by introducing a novel analysis of the global relation, i.e. of equation (1.13). Although this equation was first derived in the important work [14], where it was also used to solve the Robin problem, our treatment of equation (1.13) is different from that of [14]. As a consequence of our novel analysis of equation (1.13) we are able to first present a straightforward treatment of {\it simple} boundary value problems. This treatment, which is based on the evaluation of the basic algebraic relations (see the Introduction) at particular values of $k$, expresses the unknown boundary values in terms of infinite series. The Dirichlet, Neumann and Robin problems can be solved using this approach. We then show that, in agreement with the above remarks of Lam\'e, more {\it complicated} boundary value problems apparently require the use of generalized Fourier integrals as opposed to infinite series. Proposition 5.1 presents the solution of such a problem. In this paper, as opposed to the works of [1], [2], [5]-[7], we have introduced a method for determining the {\it generalised Dirichlet to Neumann map}, i.e. determining the {\it unknown boundary values} as opposed to determining $q(x,y)$ itself. In this respect we note that: (a) In some applications one requires precisely these unknown boundary values. (b) When both the Dirichlet and the Neumann boundary values are known, it is straightforward to compute $q(x,y)$. We emphasize, however, that the approach of Section 5 can be used to construct directly $q(x,y)$.
Indeed, if one uses the novel integral representations for $q(x,y)$ obtained in [5], instead of the representation (1.25) for $q_N$, and if one follows the approach of Section 5, one can again compute explicitly the contribution of the unknown functions $Y_j(k)$. This latter approach is illustrated in Section 6.1 for the symmetric Dirichlet problem. More complicated problems using this approach are solved in [2], [6], [7]. In order to compute $q(x,y)$ from the knowledge of both the Dirichlet and the Neumann boundary values one can use either the classical Green's formulae or the representations of [5]. Regarding the latter representations we note that they provide a tailor-made transform for the particular problem at hand. In fact the exponential $\exp\{ikz+(\lambda/ik)\bar z\}$ reflects the structure of the PDE, the contours $l_j$ in the complex $k$-plane reflect the geometry of the domain, and the functions $\rho_j(k)$ describe the boundary conditions. Both the Dirichlet and the Neumann problems involve elementary trigonometric functions. It is interesting that the analysis of the global condition yields these separable solutions without the direct use of separation of variables. For arbitrary values of the constants $\beta_j$ and $\gamma_j$, the Poincar\'e problem (1.11) gives rise to a matrix Riemann-Hilbert problem. For the particular case that equations (1.12) are valid, it is possible to avoid this Riemann-Hilbert problem and to solve the problem in closed form. Although equations (1.12) impose severe restrictions on $\beta_j$ and $\gamma_j$, some of the resulting cases appear interesting. These cases include the following: (1) $\beta_1 = \beta_2 = \beta_3 = \frac{2\pi}{3}$, $\gamma_j$ arbitrary. In this case the angles are specified, but $\gamma_j$ are arbitrary. (2) The mixed Neumann-Robin problem analyzed in Section 5. (3) $\beta_2 = \beta + \frac{4\pi}{3}, \quad \beta_3=\beta + \frac{2\pi}{3}, \quad \gamma_1 = \gamma_2 = \gamma_3 = \gamma$.
In this case all derivatives are computed along a direction making an angle $\beta$ with the positive vertical axis. The results presented here can be made rigorous, following a formalism similar to the one used in [9]. Several problems remain open, including the following: 1. The investigation of singularities associated with discontinuous boundary conditions. 2. If the $\beta_j$'s differ, then the global relation (3.20) contains a contribution from $q$ at the three corners. Several approaches for determining these terms are presented in [1] and [6]; however, the optimal treatment of these terms remains open. The approach introduced in [4] and [5] constructs the solutions of a given boundary value problem {\it without} using eigenfunction expansions. Similar considerations apply to the approach introduced here for constructing the generalised Dirichlet to Neumann map. However, it turns out that the above approaches can also be used to investigate the existence of eigenfunction expansions and to construct these expansions when they exist. This will be presented elsewhere. \section*{Acknowledgements} This work was partially supported by the EPSRC. This is part of a joint program undertaken with A.C. Newell.
\section{Introduction} The 5th generation of wireless communication standards demand more stringent deadlines with higher throughput demands compared to their 4th generation counterparts. Extensive work in the literature has emerged to satisfy these requirements. Offloading the data from the base station (BS) to cellular users was shown to provide promising results to increase the network throughput and user satisfaction \cite{ji2015throughput,le2013instantly,golrezaei2014base,keller2012microcast}. While the algorithms in these works reduce the retransmission traffic significantly, they are suitable for data with no hard deadlines imposed on each packet. Hence, such algorithms are unsuitable for applications such as streaming videos. In addition, the variability of the wireless channel between the BS and the users is ignored in those algorithms. The authors of \cite{7925800} propose a cooperative device-to-device-based communication scheme that improves the cellular network's spectral efficiency. While cooperation is shown to improve the network's performance \cite{7920395,7952823}, offloading the data was out of the scope of their work. The contributions of this paper are summarized as follows: \begin{itemize} \item Modeling the data offloading problem in the presence of hard deadlines and channel variations. \item Presenting a scheduling algorithm with polynomial complexity in the number of users and showing its asymptotic optimality. \end{itemize} \section{System Model} \label{Model} We assume a single-frequency-channel, time-slotted downlink system with slot duration of $T$ seconds. The system has a single base station (BS) and $N$ users indexed by elements from the set $\script{N}=\{1,\cdots,N\}$ while the BS has the index $0$. The users are streaming the same data that is divided into packets that arrive at the BS each slot, to be broadcast to the users in a timely manner before their hard deadlines.
We model the channels between the BS and user $i\in\script{N}$ as a fading channel with power gain $\gamma_{0i}(k)\in\mathds{R}_+$ that is known to the BS at the beginning of each slot. The distribution of $\gamma_{0i}(k)$ can be modeled using the approaches in \cite{8053830} or \cite{fe7af411c5064e66a78e5c890947636d} which present efficient ways of modeling fading channels. \begin{figure} \centering \includegraphics[width=0.8\columnwidth]{figures/BS.pdf}% \caption{We consider a downlink system where the BS is broadcasting a multicast packet. The packet has a hard deadline and users are allowed to relay to each other as long as no more than one user is transmitting at a time.}% \label{BS}% \end{figure} \subsection{Packet Arrival Model} Let $a(k)$ be the indicator of a packet arrival at the BS at the beginning of slot $k$. If a packet is not received by some user $i$ by the end of slot $k$ (hard deadline), then this packet is dropped out of the system and does not contribute towards the throughput of that user. Assuming that $\{a(k)\}$ is a Bernoulli process with rate $\lambda$ packets per slot, user $i$ is satisfied if it receives, on average, more than $q_i\%$ of the packets that arrive at the BS. We refer to this constraint as the QoS constraint for user $i$. \subsection{Packet Service Model} Following \cite{hou2010scheduling} we assume that more than one packet can be transmitted in one time slot. Thus, we divide time slot $k$ into two phases: Phase I and Phase II, with durations $\mu_{\rm I}\left ( k\right )$ and $\mu_{\rm II}\left ( k\right )=T-\mu_{\rm I}\left ( k\right )$, respectively (see Fig. \ref{Time_Slot}).
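A minimal simulation sketch of the arrival model and the QoS target; the horizon, arrival rate and the per-packet decoding probability below are illustrative assumptions, not values from the paper:

```python
import random

random.seed(0)
lam, q_i, K = 0.6, 0.9, 200000      # arrival rate, QoS fraction, horizon (illustrative)
p_decode = 0.95                     # assumed per-packet decoding probability for user i

arrivals = deliveries = 0
for k in range(K):
    a_k = 1 if random.random() < lam else 0   # Bernoulli arrival indicator a(k)
    arrivals += a_k
    if a_k and random.random() < p_decode:    # packet decoded before its deadline
        deliveries += 1

rate_i = deliveries / K             # empirical throughput of user i
satisfied = rate_i >= lam * q_i     # QoS target: fraction q_i of the arrival rate
```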
In Phase I, the BS broadcasts the packets to its users with some rate $R_0(k)$ given by \begin{equation} R_0(k)=\log \left ( 1+P_0\Gamma_0(k)\right ), \label{Rate_z} \end{equation} where we normalize the noise variance of all receivers in the system to unity while $\Gamma_0(k)$ is referred to as the BS's ``gain threshold'' which is a parameter that is dictated by the BS's transmission rate $R_0(k)$. Due to the fading nature of the channels, those users having their channel gains $\gamma_{0i}(k)$ less than $\Gamma_0(k)$ are in outage and thus will not be able to decode this packet in Phase I. The smaller the rate $R_0(k)$ is, the more users will be able to decode the packet in Phase I, but the more time it will take the BS to transmit the packet. In Phase II, one of these successful users, say user $i^*$, rebroadcasts the packet to potentially increase the number of users who decode it by the deadline. The transmission rate in Phase II by user $i^*$ is given by \begin{equation} R_{i^*}(k)=\log \left ( 1+P_{i^*}\left ( k\right )\Gamma_{i^*}(k)\right ), \label{Rate_i} \end{equation} where $\Gamma_{i^*}(k)$ is Phase II's ``gain threshold'' that is dictated by user $i^*$'s rate. Users with gain $\gamma_{i^* j}(k)$ greater than $\Gamma_{i^*}(k)$ will be able to decode the packet in Phase II. This technique offloads the data from the BS since it allows the users to help each other using Device-to-Device (D2D) communication while freeing up the BS during Phase II to serve other groups of users outside the set $\script{N}$. Our objective in this paper is to maximize the long-term average duration of Phase II. \begin{figure} \includegraphics[width=1\columnwidth]{figures/Time_Slot.pdf} \caption{Each time slot is divided into two phases: Phase I and Phase II. In Phase I, the BS broadcasts the packet to the users with a certain rate $R_0(k)$.
In Phase II, one of those users whose channel capacity was higher than $R_0(k)$ and was able to decode in Phase I will be re-broadcasting this packet to give those who were not able to decode a second chance of potentially decoding the packet.} \label{Time_Slot}% \end{figure} \section{Problem Formulation} \label{Problem_Formulation} \subsection{Objective Function} At the beginning of the $k$th slot, the BS needs to decide the transmission rate $R_0(k)$ by which it transmits in Phase I, the duration $\mu_{\rm I}\left ( k\right )$, the user $i^*$ that will relay the packet in Phase II as well as its transmission rate $R_{i^*}(k)$. This is what we refer to as the ``offloading decision problem'' which needs to be solved at the beginning of each time slot. The objective of this problem is to maximize the ``offloading factor'' which is the time-average value of $\mu_{\rm II}\left ( k\right )$ and is given by \begin{equation} \overline{\mu}_{\rm II}\triangleq\liminf_{K\rightarrow \infty}\frac{1}{K}\sum_{k=1}^K\frac{\mu_{\rm II}\left ( k\right )}{T}. \label{Off_Fac} \end{equation} This value represents the average portion of the time slot that the BS is able to free by offloading the data to the local network of users. This portion of the time slot can be used to serve other users or increase the system capacity by adding more users. \subsection{Constraints} In addition to maximizing the BS's offloading factor, the users should be given a minimum QoS in terms of the average number of packets each was able to successfully decode by the deadline. We define $\overline{R}_i$ to be the average number of packets that user $i$ successfully decoded by the deadline and is given by \begin{equation} \overline{R}_i\triangleq \liminf_{K\rightarrow \infty}\sum_{k=1}^K\frac{\mathds{1}_i(k)}{K}. 
\label{Rate_RT} \end{equation} where \begin{equation} \mathds{1}_i(k)=\left\{ \begin{array}{lll} &1 & \gamma_{0i}(k)\geq\Gamma_0(k) \mbox{ OR } \gamma_{i^* i}(k)\geq \Gamma_{i^*}(k)\\ &0 & \mbox{otherwise} \end{array} \right. \label{mu_ist} \end{equation} is the indicator function which is $1$ if user $i$ was able to successfully decode the packet in Phase I or Phase II of slot $k$, and $0$ otherwise. Thus, the mathematical problem becomes \begin{align} \label{Prob_Offload} \text{maximize } &\overline{\mu}_{\rm II},\\ \text{subject to }&\overline{R}_i\geq\lambda q_i, \hspace{1cm}\forall i\in\script{N}, \label{RT_QoS}\\ &\text{At most 1 user transmits in Phase II}, \label{One_Tx}\\ &\mu_{\rm I}\left ( k\right )+\mu_{\rm II}\left ( k\right )=T. \label{Ph_I_II_Dur} \end{align} Constraint \eqref{RT_QoS} indicates that the average number of packets decoded by user $i$ by the deadline is no less than the required level $\lambda q_i$, while constraint \eqref{One_Tx} indicates that at most one user should be allowed to transmit in Phase II. \subsection{Degrees of Freedom} Since the packet length of each packet in the system is fixed to $L$ bits, the degrees of freedom in this problem are two, namely $\mu_{\rm I}\left ( k\right )$ and $i^*$. The reason is that once we find the value of $\mu_{\rm I}\left ( k\right )$ the BS's transmission rate $R_0(k)$ can be calculated through the relation \begin{equation} R_0(k)=\frac{L}{\mu_{\rm I}\left ( k\right )}. \label{Rate_Duration_Eq} \end{equation} Similarly, once the user $i^*$ has been decided, the rate $R_{i^*}(k)$ can be found through the relation \begin{equation} R_{i^*}(k)=\frac{L}{T-\mu_{\rm I}\left ( k\right )}. \label{Rate_ist_Duration_Eq} \end{equation} Hence, the offloading problem in \eqref{Prob_Offload} consists of two coupled subproblems: the rate allocation problem of finding $\mu_{\rm I}\left ( k\right )$ as well as the scheduling problem of finding $i^*$.
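The coupling between the two degrees of freedom, via \eqref{Rate_z}, \eqref{Rate_Duration_Eq} and \eqref{Ph_I_II_Dur}, can be traced numerically. A minimal sketch with unit powers and an illustrative gain threshold (rates in nats, unit noise variance as in \eqref{Rate_z}):

```python
import math

L, T = 1.0, 1.0                 # packet length (nats) and slot duration (illustrative)
P0, P_star = 1.0, 1.0           # unit transmit powers (illustrative)
Gamma0 = 20.0                   # chosen Phase-I gain threshold Gamma_0(k)

R0 = math.log(1.0 + P0 * Gamma0)                 # Phase-I broadcast rate
mu1 = L / R0                                     # Phase-I duration: R_0(k) = L / mu_I(k)
mu2 = T - mu1                                    # Phase-II duration: mu_I(k) + mu_II(k) = T
R_star = L / mu2                                 # required Phase-II rate
Gamma_star = (math.exp(R_star) - 1.0) / P_star   # implied Phase-II gain threshold
```

Choosing a larger threshold raises $R_0(k)$, shortens Phase I, and lengthens Phase II, at the price of fewer users decoding in Phase I.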
At the beginning of slot $k$, knowing the channel gains $\gamma_{ij}(k)$, $i\in\{0\}\cup\script{N}$, $j\in\script{N}$ (acquired in a negligible duration), the BS decides the duration $\mu_{\rm I}\left ( k\right )$, and hence $\mu_{\rm II}\left ( k\right )$ through \eqref{Ph_I_II_Dur}, as well as the user $i^*$ that will be broadcasting the packet in Phase II. We are interested in finding the (slot-based) rate allocation algorithm that maximizes the ``\emph{offloading factor}'' which is the average value of $\mu_{\rm II}\left ( k\right )$, subject to the system constraints. \section{Proposed Solution} \subsection{Approach} \label{Approach} We propose to solve this problem using Lyapunov optimization \cite{li2011delay,Ewaisha_TVT2017}. We do this in three steps: i) We define a ``virtual queue'' associated with each average constraint in problem \eqref{Prob_Offload}. This helps in decoupling the problem across time slots. ii) Then we define a Lyapunov function, its drift and a per-slot reward function. iii) Based on the virtual queues and the Lyapunov function, we form and solve an optimization problem, for each slot $k$, that minimizes the drift-minus-reward expression. The solution of this problem is the proposed algorithm. We mathematically show the optimality of this algorithm. We define the virtual queues $Y_i(k)$ and $Z(k)$ as \begin{align} Y_i(k+1)&\triangleq \left ( Y_i(k)+a(k)q_i - \mathds{1}_i(k)\right )^+, \label{Yik}\\ Z(k+1)&\triangleq \left ( Z(k)+r(k)-\mu_{\rm II}\left ( k\right )\rb^+, \label{Zk} \end{align} where $r(k)$ is an auxiliary variable that is to be optimized over. Its range is in the interval $[0,1]$. The queue $Y_i(k)$ is an indication of how much user $i$ has been served from slot $1$ up to slot $k$. The larger the virtual queue $Y_i(k)$ is, the stronger the indication that user $i$ has not been served enough up to slot $k-1$, and the more priority user $i$ should be given in slot $k$.
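The virtual-queue recursions \eqref{Yik} and \eqref{Zk} amount to nonnegative projected accumulations; a minimal sketch with an illustrative three-slot trace:

```python
def update_Y(Y_prev, a_k, q_i, decoded):
    # Y_i(k+1) = ( Y_i(k) + a(k) q_i - 1_i(k) )^+   (the (x)^+ keeps the queue >= 0)
    return max(Y_prev + a_k * q_i - (1.0 if decoded else 0.0), 0.0)

def update_Z(Z_prev, r_k, mu2):
    # Z(k+1) = ( Z(k) + r(k) - mu_II(k) )^+
    return max(Z_prev + r_k - mu2, 0.0)

Y, Z = 0.0, 0.0
# illustrative three-slot trace: (a(k), 1_i(k), r(k), mu_II(k))
trace = [(1, True, 1.0, 0.3), (1, False, 1.0, 0.7), (0, False, 0.0, 0.5)]
for a_k, decoded, r_k, mu2 in trace:
    Y = update_Y(Y, a_k, q_i=0.8, decoded=decoded)
    Z = update_Z(Z, r_k, mu2)
```

A slot in which the user decodes drains its $Y$-queue; a slot with a long Phase II drains $Z$.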
On the other hand, $Z(k)$ indicates whether we should give priority to maximizing the offloading factor or to serving the users, during slot $k$. To provide a sufficient condition on the virtual queues to satisfy the corresponding constraints, we use the definition of \emph{mean rate stability} of queues \cite[Definition 1]{li2011delay} to state the following lemma. \begin{lma} \label{Mean_Rate_Lemma} If, for some $i\in\script{N}$, $\{Y_i(k)\}_{k=0}^\infty$ is mean rate stable, then constraint \eqref{RT_QoS} is satisfied for user $i$. \end{lma} Lemma \ref{Mean_Rate_Lemma} shows that when the virtual queue $Y_i(k)$ is mean rate stable, then constraint \eqref{RT_QoS} is satisfied for user $i\in\script{N}$. Similarly, if $\{Z(k)\}_{k=0}^\infty$ is mean rate stable, then we have \begin{equation} \liminf_{K\rightarrow\infty}\sum_{k=1}^K\frac{r(k)}{K}\leq\liminf_{K\rightarrow\infty}\sum_{k=1}^K\frac{\mu_{\rm II}\left ( k\right )}{K}. \label{Z_MRS} \end{equation} In the proof of the optimality of the proposed algorithm, we will see that \eqref{Z_MRS} is one of the keys to showing this optimality. Thus, our objective now would be to devise an algorithm that guarantees the mean rate stability of both $\left[Y_i(k)\right]_{i\in\script{N}}$ and $Z(k)$. \subsection{Applying the Lyapunov Optimization} \label{Motivation_DL} Let the quadratic Lyapunov function be defined as \begin{equation} L_{\rm yap}\left ( U(k)\right )\triangleq \frac{1}{2}\sum_{i\in\script{N}}{Y_i^2(k)}+\frac{1}{2}{Z^2(k)}, \label{Lyapunov_Func} \end{equation} where ${\bf U}(k)\triangleq [{\mathbf Y}(k),Z(k)]$, and the Lyapunov drift as $\Delta (k) \triangleq \EEU{L_{\rm yap}\left ( {\bf U}(k+1)\right ) - L_{\rm yap}\left ( {\bf U}(k)\right )}$ where $\EEU{x}\triangleq \EE{x\vert U(k)}$ is the conditional expectation of the random variable $x$ given $U(k)$.
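Before the squaring step below, note where the constant comes from: since $\left ( (x)^+\right )^2\leq x^2$, and since $a(k)q_i\mathds{1}_i(k)\geq 0$ and $r(k)\mu_{\rm II}\left ( k\right )\geq 0$, with $a(k)\leq 1$, $\mathds{1}_i(k)\leq 1$, $r(k)\leq 1$ and $\mu_{\rm II}\left ( k\right )\leq T$, one has
```latex
\begin{align*}
Y_i^2(k+1) &\leq \left ( Y_i(k)+a(k)q_i - \mathds{1}_i(k)\right )^2
\leq Y_i^2(k) + q_i^2 + 1 + 2Y_i(k)\left ( a(k)q_i - \mathds{1}_i(k)\right ),\\
Z^2(k+1) &\leq \left ( Z(k)+r(k)-\mu_{\rm II}\left ( k\right )\right )^2
\leq Z^2(k) + 1 + T^2 + 2Z(k)\left ( r(k)-\mu_{\rm II}\left ( k\right )\right ).
\end{align*}
```
Summing over $i\in\script{N}$, dividing by two, and applying $\EEU{\cdot}$ yields a drift bound with the constant $C=\left (\sum_{i\in\script{N}}\left ( q_i^2+1\right )+1+T^2\right )/2$.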
Squaring \eqref{Yik} and \eqref{Zk}, taking the conditional expectation, then summing over $i$, the drift becomes bounded by \begin{align} \nonumber\Delta(k)\leq & C+\sum_{i\in\script{N}}\EEU{\left ( Y_i(k) a(k)q_i - Y_i(k) \mathds{1}_i(k)\right )}\\ &+\EEU{Z(k)r(k) - Z(k)\mu_{\rm II}\left ( k\right )}, \label{Drift_Bound} \end{align} where $C\triangleq\left (\sum_{i\in\script{N}}\left ( q_i^2+1\right )+1+T^2\right )/2$. We then define $V$ as an arbitrarily chosen positive control parameter that controls the performance of the algorithm. Since $\EEU{r(k)}$ represents the expected duration of $\mu_{\rm II}\left ( k\right )$ at slot $k$, we refer to $V \EEU{r(k)}$ as the ``reward term''. We subtract this term from both sides of \eqref{Drift_Bound}, then use \eqref{Psi_k} and rearrange to bound the drift-minus-reward term as \begin{equation} \Delta(k)-V \EEU{r(k)}\leq C+\Psi(k), \label{Drift_minus_Reward_Bound} \end{equation} where \begin{align} \nonumber\Psi(k)\triangleq &-\EEU{\sum_{i\in\script{N}}Y_i(k)\mathds{1}_i(k) + Z(k)\mu_{\rm II}\left ( k\right )}\\ &+\sum_{i\in\script{N}}Y_i(k) \lambda q_i+\EEU{\left ( Z(k)-V\right ) r(k)}. \label{Psi_k} \end{align} The algorithm we propose is to allocate the transmission rate and schedule the users so as to minimize the right-hand side of \eqref{Drift_minus_Reward_Bound} at each slot. Since the only term in $\Psi(k)$ that is a function of $r(k)$ is the last term, we can decouple the problem without losing optimality. Minimizing this term results in setting $r(k)=1$ if $Z(k)<V$ and $0$ otherwise. Minimizing the remaining terms yields \begin{equation} \begin{array}{ll} &\text{maximize}\sum_{i\in\script{N}}Y_i(k)\mathds{1}_i(k) + Z(k)\mu_{\rm II}\left ( k\right )\\ &\text{subject to } \eqref{Ph_I_II_Dur} \text{ and } \eqref{One_Tx}, \end{array} \label{Max_Prob} \end{equation} with decision variables $\mu_{\rm I}\left ( k\right )$ and $i^*$.
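The decoupled minimization over the auxiliary variable reduces to a threshold rule; a minimal Python sketch:

```python
def auxiliary_rate(Z_k, V):
    """Minimizer of the term (Z(k) - V) r(k) over r(k) in [0, 1]:
    a bang-bang rule, r(k) = 1 when Z(k) < V and r(k) = 0 otherwise."""
    return 1.0 if Z_k < V else 0.0
```

The remaining maximization, problem \eqref{Max_Prob}, is handled by the per-slot search discussed next.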
Problem \eqref{Max_Prob} is a per-slot optimization problem whose solution is an algorithm that minimizes the upper bound on the drift-minus-reward term defined in \eqref{Drift_minus_Reward_Bound}. Next we present the proposed algorithm. \subsection{``\emph{Free-Base-Station}'' Algorithm} The proposed algorithm for problem \eqref{Prob_Offload} is: \begin{algorithm} \caption{Free-BS Algorithm} \begin{algorithmic}[1] \label{Scheduling_Alg_Cont} \STATE At the beginning of slot $k$, sort the users in a descending order of $\gamma_{0i}(k)$. Without loss of generality, we assume that $\gamma_{0i}(k)>\gamma_{0j}(k)$ for $i<j$. \STATE Set $i=1$. \WHILE{$i\leq N$} \STATE Set $\Gamma_0(k)=\gamma_{0i}(k)$ and calculate $R_0(k)$, $\mu_{\rm I}\left ( k\right )$ and $\mu_{\rm II}\left ( k\right )$ using \eqref{Rate_z}, \eqref{Rate_Duration_Eq} and \eqref{Ph_I_II_Dur}, respectively. \STATE For all $j\leq i$, assume $j$ rebroadcasts the packet in Phase II and calculate the corresponding $\tilde{\Psi}_j(i)\triangleq\sum_{l\in\script{N}}Y_l(k)\mathds{1}_l(k) + Z(k)\mu_{\rm II}\left ( k\right )$. \STATE Calculate $\tilde{\Psi}(i)\triangleq\max_j \tilde{\Psi}_j(i)$. \STATE Set $i=i+1$. \ENDWHILE \STATE The optimum $\mu_{\rm I}\left ( k\right )$ comes from the iteration $i$ solving $\max_i\tilde{\Psi}(i)$, and $i^*=\arg\max_j \tilde{\Psi}_j(i)$. \STATE Update \eqref{Yik} and \eqref{Zk} at the end of the $k$th slot. \end{algorithmic} \end{algorithm} We can see that the algorithm calculates $\tilde{\Psi}_j(i)$ at most $N^2$ times, yielding a polynomial time complexity. The performance of this algorithm is discussed next.
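The double loop of Algorithm \ref{Scheduling_Alg_Cont} is essentially an exhaustive $O(N^2)$ search over candidate pairs. The sketch below abstracts the rate and duration computations of \eqref{Rate_z}--\eqref{Ph_I_II_Dur} into a hypothetical callback \texttt{psi(i, j)} returning the objective value $\tilde{\Psi}_j(i)$ of \eqref{Max_Prob} when user $i$ fixes $\Gamma_0(k)$ and user $j$ relays in Phase II.

```python
def free_bs_search(N, psi):
    """Exhaustive search over candidate pairs (i, j) with j <= i, maximizing
    the per-slot objective; psi is a hypothetical callback for Psi~_j(i)."""
    best_val, best_i, best_j = None, None, None
    for i in range(1, N + 1):        # candidate user i fixing Gamma_0(k)
        for j in range(1, i + 1):    # candidate Phase-II relaying user j
            val = psi(i, j)
            if best_val is None or val > best_val:
                best_val, best_i, best_j = val, i, j
    return best_val, best_i, best_j
```

Since at most $N(N+1)/2$ pairs are evaluated, the per-slot complexity is polynomial, matching the $N^2$ bound stated above.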
\subsection{Optimality of the Proposed Algorithm} \begin{thm} \label{Optimality_Thm} For any value $V>0$, there exists some finite constant $C$ such that the Free-Base-Station algorithm results in an offloading factor satisfying \begin{equation} \liminf_{K\rightarrow \infty}\sum_{k=1}^K\frac{\mu^*_{\rm II}(k)}{K} \geq \mu_{\rm II}^{\rm (opt)}-\frac{C}{V}, \label{Optimality_Equ} \end{equation} where $\mu^*_{\rm II}(k)$ is the optimal value of $\mu_{\rm II}\left ( k\right )$ solving \eqref{Max_Prob}, while $\mu_{\rm II}^{\rm (opt)}$ is the optimal objective function achieved by the optimal algorithm solving problem \eqref{Prob_Offload}. Moreover, the queues $Y_i(k)$ and $Z(k)$ are mean-rate stable. \end{thm} \begin{proofsketch} We show the proof sketch of \eqref{Optimality_Equ} and omit the queues' mean-rate stability proof due to lack of space. Equation \eqref{Optimality_Equ} is shown by considering an optimal genie-aided algorithm solving \eqref{Prob_Offload} and showing that, when applied to the problem, the corresponding $\Psi^{\rm (opt)}(k)$ satisfies \begin{equation} \liminf_{K\rightarrow\infty}\frac{1}{K}\sum_{k=1}^K\EE{\Psi^{\rm (opt)}(k)}\leq-V\mu_{\rm II}^{\rm (opt)}. \label{Key_Opt_Alg} \end{equation} Dropping $\Delta(k)$ from \eqref{Drift_minus_Reward_Bound}, evaluating it under the Free-BS algorithm, taking $\EE{\cdot}$ of both sides, summing over $k$ and taking the limit yields \begin{equation} -V\liminf_{K\rightarrow\infty}\frac{1}{K}\sum_{k=1}^K\EE{r(k)}\leq C+\liminf_{K\rightarrow\infty}\frac{1}{K}\sum_{k=1}^K\EE{\Psi^*(k)}, \end{equation} where $\Psi^*(k)$ is the value of $\Psi(k)$ when evaluated at the Free-BS algorithm. But since the Free-BS algorithm minimizes $\Psi(k)$, we must have $\Psi^*(k)\leq\Psi^{\rm (opt)}(k)$. Thus we can use the latter inequality and \eqref{Key_Opt_Alg} to write \begin{equation} V\liminf_{K\rightarrow\infty}\frac{1}{K}\sum_{k=1}^K\EE{r(k)}\geq V\mu_{\rm II}^{\rm (opt)}-C.
\label{Bound1} \end{equation} Removing the $(\cdot)^+$ sign from \eqref{Zk}, taking $\EE{\cdot}$ of both sides, summing over $k$ and taking the limit yields \begin{equation} \liminf_{K\rightarrow\infty}\frac{1}{K}\sum_{k=1}^K\EE{r(k)}\leq\liminf_{K\rightarrow\infty}\frac{1}{K}\sum_{k=1}^K\EE{\mu^*_{\rm II}(k)}. \label{Bound_Using_Zk} \end{equation} Using \eqref{Bound1} and \eqref{Bound_Using_Zk} we get \eqref{Optimality_Equ}. \end{proofsketch} Theorem \ref{Optimality_Thm} indicates that setting the control parameter $V$ to a sufficiently high value results in an asymptotically optimal algorithm. \section{Simulation Results} The system was simulated with the parameters shown in Table \ref{Parameters}. Fig. \ref{Offloading_vs_No_TxPow} shows the throughput performance versus the transmission power. This figure shows that even when only one user is allowed to retransmit the packet in Phase II, the performance is significantly higher than in the non-offloading case. In Fig. \ref{Offloading_vs_No_N} we plot the throughput versus the number of users in the system ($N$). This figure shows that the non-offloading case has a decreasing throughput as $N$ increases. Under the proposed Free-BS algorithm, however, the throughput of the offloading case is not monotonic in $N$. The throughput increases with $N$ when the latter is small, due to the multi-user diversity effect \cite{zeng2012multi,6860297}: more users in the system give the BS a larger set of users to choose from when scheduling the re-transmitting user of Phase II. However, when $N$ increases beyond a certain value, adding more users to the system overloads it, since these users need to be guaranteed a minimum average number of packets. Hence, the offloading factor starts decreasing.
\begin{table} \centering \caption{Simulation Parameter Values} \label{Parameters} \begin{tabular}{|c|c||c|c|} \cline{1-4} Parameter & Value & Parameter & Value \\ \cline{1-4} $q_i$ & $0.9$ &$T$ & $1$ms\\ $L$ & $1$ bit/packet & $P_i$ $\forall i\in\{0\}\cup\script{N}$& $20$dB \\ $V$ & $1000$ &$\overline{\gamma}_{ij}$ $\forall i,j$ & $0.3$ \\ \cline{1-4} \end{tabular} \end{table} \begin{figure}% \centering \includegraphics[width=1\columnwidth]{figures/Offloading_vs_No_TxPow.pdf}% \caption{With just one user allowed to relay, the time during which the BS is free increases by $100\%$.}% \label{Offloading_vs_No_TxPow}% \end{figure} \begin{figure}% \centering \includegraphics[width=1\columnwidth]{figures/Offloading_vs_No_N.pdf}% \caption{Unlike the ``Without Offloading'' case, with offloading the throughput increases with $N$ for small values of $N$. This is due to the multi-user diversity effect: adding more users to the system increases the chance that a user with a strong channel is available to retransmit the packet in Phase II of the time slot. This is a substantial throughput gain for only a minor change to the system, as only one user is allowed to retransmit.}% \label{Offloading_vs_No_N}% \end{figure} \section{Conclusions} \label{Conclusions} We discussed the problem of data offloading in cellular wireless systems. While existing work focuses on algorithms that offload the data locally to minimize the traffic requested by cellular users, the objective of this work is to study the problem while taking into consideration both the physical channel variations and the hard deadlines that have to be respected for each packet. We presented the Free-Base-Station algorithm for the formulated problem. In the full version, we will provide the complete proof that it converges to the optimal solution asymptotically. \bibliographystyle{IEEEbib}
\section*{Declarations} \label{sec:dec} \noindent \textbf{Conflict of Interest}\\ The authors have no conflicts of interest to disclose in relation to this article.\\ \noindent \textbf{Funding/Support Statement}\\ Funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) – Project Number 327154368 – SFB 1313. We also acknowledge the support by the Stuttgart Center for Simulation Science (SimTech). J.K.W.~acknowledges the European Union’s Horizon 2020 (H2020-MSCA-IF-2019) research and innovation programme under the Marie Skłodowska-Curie grant agreement 893099 — ImmunoBioInks.\\ \noindent \textbf{Acknowledgements} \\ Z.T.~thanks Johannes Hommel, Martin Schneider, and Prof.~Rainer Helmig from the Department of Hydromechanics and Modelling of Hydrosystems at the University of Stuttgart for providing help with pore-network modelling and implementing the Box discretization. \section{Introduction} \label{sec:intro} Annually, more than 1.4 million clinical vertebral fractures occur worldwide \cite{johnell_estimate_2006}. Vertebral fractures lead to back pain, loss of height, immobility, reduced pulmonary function, and increased mortality. Vertebroplasty is a common medical procedure for treating and preventing vertebral fractures. In this procedure, a so-called ``bone cement'' is injected into the porous interior of the vertebra, where it undergoes curing, providing additional structural strength to the vertebra \cite{jensen_percutaneous_1997}. Vertebroplasty provides quick pain relief in 80--90\,\% of the cases \cite{mcgraw_prospective_2002}. Generally, practitioners use haptic feedback and X-ray imaging to guide the cement injection. However, this is difficult due to several factors, including the non-constant viscosity of the bone cement owing to its non-Newtonian nature and the curing process. The bone marrow in the vertebra is also a non-Newtonian fluid.
Severe complications could occur if the cement leaks into the blood vessels or the spinal canal due to improper injection, e.g.~pulmonary embolism or paralysis \cite{bernhard_asymptomatic_2003, ratliff_root_2001}. In this regard, a reliable simulation of the procedure could help practitioners predict the risks and determine the safe ranges for operating parameters like injection pressure, cement viscosity, curing time, flow rate, etc., specific to each case. The decrease in reliance on X-ray imaging would also reduce the patient's radiation exposure. In the past, various approaches have been used for simulating vertebroplasty. Theoretical approaches, e.g. \cite{bohner_theoretical_2003}, usually have limited applicability because many idealizing assumptions are necessary. The earliest works using computational approaches were limited to small two-dimensional segments and assumed the porous structure to be a bundle of capillary tubes, e.g.~in \cite{beaudoin_finite_1991}. With improvements in computational capabilities, more complicated models appeared, such as branching pipe networks \cite{lian_biomechanical_2008}, Computational Fluid Dynamics (CFD) models \cite{teo_preliminary_2007,landgraf_modelling_2015} or Lattice-Boltzmann models \cite{zeiser_pore-scale_2008}. The main drawback of these models is that resolving the flow at the pore scale inside a complicated porous structure quickly becomes computationally expensive. Therefore, most pore-scale models simulate flow only over a small part or a representative segment of the vertebra. For a simulation regarding the entire vertebra, a macro-scale model is a more feasible option. In general, there is substantial research on the mathematical modelling of such multiphase porous media flow problems at the macro-scale; however, most of it is in the context of groundwater flow.
It is possible to extend and adapt such models to vertebroplasty, but a major challenge is to upscale the viscosities of the non-Newtonian fluids for use in the macro-scale flow equations. \citet{widmer_soyka_numerical_2013} developed such a macro-scale model using a continuum length scale Reynolds number for upscaling the viscosity. However, they employed a front-tracking Volume of Fluid method that assumes a sharp cement-marrow interface \cite{widmer_mixed_2011}, which may not always be the case. In this regard, the Theory of Porous Media (TPM) \cite{ehlers_foundations_2002, ehlers_challenges_2009} provides a solid framework for implementing a multiphase continuum-mechanical macro-scale model that simulates the fluid flow in a porous medium as well as the deformation of the porous medium, without needing to idealize the interface between the fluids. The application of the TPM extends to, for example, the biomechanics of soft tissues such as the liver~\cite{Ricken.2019}, the brain~\cite{Ehlers.2022}, and cartilage~\cite{Wang.2018}. The TPM has previously also been used to develop a preliminary model for vertebroplasty \cite{bleiler_multiphasic_2015}. The aim of this work was to develop a computationally feasible model based on the TPM and to extend it beyond previous studies \cite{bleiler_multiphasic_2015} to make it better suited for simulating vertebroplasty. This includes a more appropriate choice of model variables and constitutive equations for the non-Newtonian rheology, an investigation of suitable rheology upscaling methods, and a numerical treatment of the governing equations by a combined Finite Element--Finite Volume approach using the Box discretization, which addresses the different natures of the fluid flow and the solid deformation problems in a stable framework.
Furthermore, this work aims to validate the model using a benchmark problem, which is investigated experimentally as well as simulated computationally, and to carry out further simulations to determine the influence of various parameters, with specific regard to avoiding cement leakage. \section{Materials and Methods} \label{sec:methods} \subsection{Modelling approach} \label{sec:numerical_model} \subsubsection{Multiphase modelling based on the Theory of Porous Media} \label{sec:TPM} The Theory of Porous Media (TPM) provides a framework for modelling the mechanics of a multiphase continuum at the macro-scale \cite{ehlers_foundations_2002, ehlers_challenges_2009}. It originates from the Theory of Mixtures \cite{bowen_porous_1984}, where the pore-scale properties of the phases are homogenized over a so-called Representative Elementary Volume (REV), such that the macroscopic properties of this volume are representative of the entire domain. In our case, the overall aggregate contains three phases: the trabecular bone $\varphi^S$, the bone cement $\varphi^C$, and the bone marrow $\varphi^M$, such that $\varphi = \bigcup_{\alpha} \varphi^{\alpha} = \varphi^S \, \cup \, \varphi^F = \varphi^S \, \cup \, \varphi^C \,\cup \, \varphi^M $. Here, $\alpha = \{S, C, M\}$ and $\varphi^F = \bigcup_{\beta} \varphi^\beta$, where the fluid phases $\beta = \{C, M\}$ are immiscible since the bone cement is hydrophobic. In the TPM, this is further extended by including the individual amount of each phase using the concept of volume fractions. This is shown in Figure \ref{fig:homogenization}. The volume fractions $n^{\alpha} = \frac{\dv^{\alpha}}{\dv}$, where $\dv^{\alpha}$ is the volume of the phase $\varphi^{\alpha}$ and $\dv$ is the total volume of the REV, sum up to one, i.e.~$\sum_{\alpha} n^{\alpha} = 1$. Thus, the medium does not have any void spaces. Similarly, the saturations $s^{\beta} = \frac{n^{\beta}}{n^F}$ of the fluid phases are introduced.
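As a small numerical illustration of these definitions (the volume-fraction values in the example below are made up, not measured):

```python
def saturations(n_S, n_C, n_M):
    """Fluid saturations s^beta = n^beta / n^F with n^F = n^C + n^M;
    the saturation condition requires the volume fractions to sum to one."""
    assert abs(n_S + n_C + n_M - 1.0) < 1e-12, "volume fractions must sum to 1"
    n_F = n_C + n_M
    return n_C / n_F, n_M / n_F
```

By construction, the two saturations always sum to one, which is why only one of them ($s^M$) needs to be tracked as a primary variable later on.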
\begin{figure}[htbp] \begin{subfigure}[t]{0.45\textwidth} \centering \includegraphics[width=\textwidth]{homogenization.jpeg} \caption{} \label{fig:homogenization} \end{subfigure} \begin{subfigure}[b]{0.5\textwidth} \centering \includegraphics[width=\textwidth]{continuum3.jpeg} \caption{} \label{fig:continuum} \end{subfigure} \caption{(a) Homogenization over a Representative Elementary Volume (REV) (b) Kinematics of a multiphase continuum body} \end{figure} Under loading conditions, such a system undergoes a change in its state with time. The state at the current time $t$ is called the current configuration, whereas that at the reference time $t = 0$ is the reference configuration. Each spatial point with a position vector $\mathbf{x}$ in the current configuration is occupied by all three phases (superimposed continua). Each of these three phases may originate from a different point with position vector $\mathbf{X} _{\alpha}$ in the reference configuration, depending on their respective placement function $\boldsymbol{\chi} _{\alpha}$, such that $\mathbf{x} = \boldsymbol{\chi} _{\alpha} (\mathbf{X} _{\alpha}, t)$. The material time derivative of each phase is defined as $\overset{\prime}{\mathbf{x}} _{\alpha} = \mathrm{d}(\boldsymbol{\chi} _{\alpha} (\mathbf{X} _{\alpha}, t))/\mathrm{d} t$. The deformation of the bone is then described in a Lagrangian setting by the solid displacement $\mathbf{u}_S = \mathbf{x} - \mathbf{X}_S$, while the movement of the fluid phases is described in a `modified' Eulerian setting by their seepage velocities, given as $\mathbf{w}_{\beta} = \overset{\prime}{\bx}_{\beta} - \overset{\prime}{\bx}_S$. The kinematics are shown in Figure \ref{fig:continuum}. These variables are governed by a set of balance equations, viz., the mass balance and the momentum balance equations for each of the phases.
These are obtained after applying the assumptions that all phases are materially incompressible, no mass exchange occurs between the phases, the porous medium is fully saturated, and the constituents are non-polar (stress tensors are symmetric). Additionally, quasi-static processes are assumed, body forces are neglected, and all processes are considered isothermal. This yields the volume balance law for the phases as \begin{equation} (n ^{\alpha})' _{\alpha} + n ^{\alpha} \div \overset{\prime}{\mathbf{x}}_{\alpha} = 0\,. \end{equation} For the solid, this becomes simply \begin{equation} (n^S)'_S + n^S \div \overset{\prime}{\bx}_S = 0 \quad \longrightarrow \quad n^S = n^S_{0S}\, (\det \mathbf{F}_S)^{-1}\,, \label{eq:vol_balance_solid} \end{equation} where $n^S_{0S}$ is the initial volume fraction and $\mathbf{F}_S = {\mathrm{d} \mathbf{x} }/{\mathrm{d} \mathbf{X}_S}$ is the deformation gradient. For fluids, the volume balance law in the modified Eulerian setting becomes \begin{equation} (n^\beta)'_S + \div(n^\beta \mathbf{w}_\beta) + n^\beta \div \overset{\prime}{\bx}_S = 0\,. \label{eq:vol_balance_fluids} \end{equation} Additionally, we have the overall momentum balance given as \begin{equation} \div \boldsymbol{\sigma} = \mathbf{0}\,, \label{eq:mtm_balance_aggr} \end{equation} where $\boldsymbol{\sigma} = \boldsymbol{\sigma}^S_E - p \mathbf{I}$\,. The solid extra stress $\boldsymbol{\sigma}^S_E$ is related to the deformation by constitutive relations, and $p$ is the hydrostatic pore pressure. For the considered application, the capillary number is large ($Ca \gg 10^{-3}$), implying that the capillary forces are negligible compared to the viscous forces. Therefore, we neglect the pressure difference between the two fluids arising due to capillary effects and assume the same pore pressure $p$ for both fluid phases. 
The resulting stress of the solid skeleton is obtained by $\boldsymbol{\sigma}^S = \boldsymbol{\sigma}^S_E - n^S p \mathbf{I}$, and the fluid stress is given by $\boldsymbol{\sigma}^\beta = - n^\beta p \mathbf{I}$. Here, we neglect the fluid extra stresses $\boldsymbol{\sigma}^\beta_E$ arising from viscous effects compared to the local interaction forces, as can be concluded from dimensional analysis \cite{markert_constitutive_2007}. Furthermore, the Darcy flow equations are used to relate the seepage velocities of the fluid phases to the pressure gradient. These equations can be derived in the framework of the TPM by applying constitutive relations to the individual momentum balances of the fluid phases $\varphi^\beta$. These are then written as \begin{align} n^C \mathbf{w}_C &= - \frac{\kappa^C_r \mathbf{K}^S}{\mu^{CR}} \mathrm{grad}\, p \,, \label{eq:darcyc}\\ n^M \mathbf{w}_M &= - \frac{\kappa^M_r \mathbf{K}^S}{\mu^{MR}} \mathrm{grad}\, p \,, \label{eq:darcym} \end{align} where $\mathbf{K}^S$ is the intrinsic permeability of the porous medium, $\kappa^\beta_r$ is the relative permeability of the fluid phases arising due to the resistance from the other fluid, and $\mu^{\beta R}$ is the dynamic viscosity. For a detailed overview of the TPM, the reader is referred to the works of Ehlers \cite{ehlers_foundations_2002, ehlers_challenges_2009}. \subsubsection{Numerical treatment} \label{sec:implementation} The governing equations are solved over the entire domain $\Omega$ of the porous medium using the primary variables solid displacement $\mathbf{u}_S$, pore pressure $p$, and bone marrow saturation $s^M$. Since we neglected the capillary forces for our intended application, we used saturation as one of the primary variables, making the formulation independent of the capillary pressure--saturation relation, in contrast to a formulation with two pore pressures as primary variables (as in \cite{bleiler_multiphasic_2015}).
The volume balance for the solid (Equation \ref{eq:vol_balance_solid}) is implemented directly such that it is satisfied at all points, while those for the fluids (Equation \ref{eq:vol_balance_fluids}) and the momentum balance (Equation \ref{eq:mtm_balance_aggr}) are satisfied weakly, i.e., only their weighted integrals over the domain must be zero. This is written as \begin{gather} \int_\Omega [\div \boldsymbol{\sigma}^S_E - \mathrm{grad}\, p] \cdot \delta \mathbf{u}_S \,\mathrm{d} \Omega = 0 \label{eq:weak_mtm} \\ \int_\Omega [(n^C)'_S + \div (n^C \mbfrm{w}_C) + n^C \div \overset{\prime}{\bx}_S] \, \delta p \, \mathrm{d} \Omega = 0 \label{eq:weak_cement}\\ \int_\Omega [(n^M)'_S + \div (n^M \mbfrm{w}_M) + n^M \div \overset{\prime}{\bx}_S] \, \delta s^M \, \mathrm{d} \Omega = 0 \label{eq:weak_marrow} \end{gather} where $\delta\mathbf{u}_S$, $\delta p$, and $\delta s^M$ are test functions. Applying the Gaussian divergence theorem, the surface integrals for the Neumann boundary conditions appear as \begin{gather} \int_\Gamma (\boldsymbol{\sigma}_E^S \, \delta \mathbf{u}_S) \cdot \hat{\mathbf{n}} \, \mathrm{d} \Gamma - \int_\Omega \boldsymbol{\sigma}_E^S \cdot \mathrm{grad}\, \delta \mathbf{u}_S \, \mathrm{d} \Omega - \int_\Omega \mathrm{grad}\, p \cdot \delta \mathbf{u}_S \, \mathrm{d} \Omega = 0 \label{eq:weak_mtm_2} \\ \int_\Gamma n^C \mbfrm{w}_C \cdot \hat{\mbfrm{n}} \, \delta p \, \mathrm{d} \Gamma + \int_\Omega (n^C)'_S \, \delta p \, \mathrm{d} \Omega - \int_\Omega n^C \mbfrm{w}_C\, \mathrm{grad}\, \delta p \, \mathrm{d} \Omega + \int_\Omega n^C \div \overset{\prime}{\bx}_S \, \delta p \, \mathrm{d} \Omega = 0 \label{eq:weak_cement_2}\\ \int_\Gamma n^M \mbfrm{w}_M \cdot \hat{\mbfrm{n}} \, \delta s^M \, \mathrm{d} \Gamma + \int_\Omega (n^M)'_S \, \delta s^M \, \mathrm{d} \Omega - \int_\Omega n^M \mbfrm{w}_M\, \mathrm{grad}\, \delta s^M \, \mathrm{d} \Omega + \int_\Omega n^M \div \overset{\prime}{\bx}_S \, \delta s^M \, \mathrm{d} \Omega = 0
\label{eq:weak_marrow_2} \end{gather} where $\hat{\mbfrm{n}}$ is the surface normal vector. Equation \ref{eq:weak_mtm_2} is spatially discretized and solved using the Bubnov-Galerkin method by the Finite Element discretization. In Equations \ref{eq:weak_cement_2} and \ref{eq:weak_marrow_2}, the volume integral terms consist of a temporal evolution term, a flow term, and a contribution from the solid deformation, therefore requiring both temporal and spatial discretization. For the temporal discretization, we used the Implicit Euler method. For flow problems, especially those hyperbolic by nature, it is known that discretization using the Bubnov-Galerkin Finite Element method leads to numerical instabilities \cite{zienkiewicz_finite_1978}. Therefore, we used the Box spatial discretization scheme and applied mass-lumping for the temporal evolution term. This yields a setting which behaves similarly to the Finite Volume discretization. The flow equations were then solved using the fully-upwind Galerkin scheme. More details about the Box discretization can be found in \cite{huber_node-centered_2000}. For the last remaining term, i.e., the contribution from the solid deformation, we again used the Finite Element discretization. The system was discretized using eight-noded hexahedral elements with linear shape functions for all primary variables. The numerical implementation of the model was done in the monolithic solver framework PANDAS\footnote{\textbf{P}orous media \textbf{A}daptive \textbf{N}onlinear finite-element solver based on \textbf{D}ifferential \textbf{A}lgebraic \textbf{S}ystems (http://www.get-pandas.com)}. \subsubsection{Constitutive models} \label{sec:constitutive} The governing balance relations need to be supplemented by constitutive relations to include the specific material behaviour of the constituents.
For the solid part, i.e., the trabecular bone, the relationship between the stress and the strain is described using the linear Hookean law \begin{equation} \boldsymbol{\sigma}^S_E = 2 \mu_{Lam\acute{e}}\, \boldsymbol{\epsilon}_S + \lambda_{Lam\acute{e}} (\boldsymbol{\epsilon}_S : \mathbf{I}) \mathbf{I} \end{equation} where $\boldsymbol{\sigma}^S_E$ is the solid extra stress, $\boldsymbol{\epsilon}_S$ is the strain tensor, $\mu_{Lam\acute{e}}$ and $\lambda_{Lam\acute{e}}$ are the Lam\'{e} material parameters. For the two fluids, constitutive relations are required for the change in viscosity with shear rate. We used the Carreau model for this purpose, given for a fluid $\beta$ as \begin{equation} \mu^{\beta R} = \mu_{\infty}^{\beta} + (\mu_{0}^{\beta} - \mu_{\infty} ^{\beta}) \, [1 + (\lambda^{\beta}_{rh} \, \dot{\gamma}^{\beta})^{2}]^{\frac{n^{\beta}_{rh} - 1}{2}} \label{eq:carreau} \end{equation} where $\mu_{0}^{\beta}$ is the viscosity limit at zero shear rate, $\mu_{\infty}^{\beta}$ is the viscosity limit at very high shear rates, $\lambda^{\beta}_{rh}$ is the reciprocal of the shear rate where the behaviour transitions to non-Newtonian, $n^{\beta}_{rh}$ is the flow behaviour index, and $\dot{\gamma}^{\beta}$ is the shear rate. Note that the superscript `$\beta$' stands for the fluid phase ($C$ or $M$ for cement or marrow respectively) and the subscript `$rh$' stands for `rheological parameter' to avoid confusion with similar symbols used in other places. The viscosity discussed up to this point is a pore-scale quantity. Unlike for a Newtonian fluid, the pore-scale viscosity cannot be directly used in the macro-scale equations (Equations \ref{eq:darcyc}, \ref{eq:darcym}) for a non-Newtonian fluid because the shear rates, and hence the viscosities, vary with the geometries of the pores through which the fluid flows. The viscosity at the macro-scale must be a representative average of the pore-scale viscosities obtained through upscaling. 
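As a small illustration (not part of the PANDAS implementation), the Carreau law \eqref{eq:carreau} can be evaluated as follows; the parameter values quoted in the comment are those later listed for the bone cement in Table \ref{tab:mats_benchmark}.

```python
def carreau_viscosity(gamma_dot, mu_0, mu_inf, lam_rh, n_rh):
    """Carreau model: mu = mu_inf + (mu_0 - mu_inf) * [1 + (lam*g)^2]^((n-1)/2)."""
    return mu_inf + (mu_0 - mu_inf) * (1.0 + (lam_rh * gamma_dot) ** 2) ** ((n_rh - 1.0) / 2.0)

# Bone-cement parameters (cf. the material-parameter table): mu_0 = 1930 Pa s,
# mu_inf = 1.93 Pa s, lambda_rh = 1.38 s, n_rh = 0.30, i.e. shear-thinning
# behaviour between the two viscosity plateaus.
```

At vanishing shear rate the model returns $\mu_0^\beta$, at very high shear rates it approaches $\mu_\infty^\beta$, and for $n_{rh}^\beta < 1$ the viscosity decreases monotonically in between.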
Here, we compared three different upscaling models to evaluate their applicability to model vertebroplasty at the macro-scale. The first two, namely the Cannella model \cite{cannella_prediction_1988} and the Hirasaki and Pope model \cite{hirasaki_analysis_1974}, are based on the capillary-tube-bundle model and semi-empirical by nature. In both approaches, the effective shear rate is obtained from the equation given as \begin{equation} \dot{\gamma}^{\beta}_{eff} = \mathfrak{C}\,\bigg[\frac{3n^{\beta}_{rh}+1}{4n^{\beta}_{rh}}\bigg]^{\frac{n^{\beta}_{rh}}{n^{\beta}_{rh}-1}}\,\bigg[4 |\mbfrm{w}_{\beta}| \sqrt{\frac{n^{\beta}}{8 \kappa_r^{\beta} K^S}} \bigg] \label{eq:rheo_upscaling} \end{equation} where $\dot{\gamma}^{\beta}_{eff}$ is the effective upscaled shear rate, and $\mathfrak{C}$ is a semi-empirical constant, the value of which varies depending on the internal geometry of the porous medium. The Cannella model uses the value $\mathfrak{C}= 6.0$, while the Hirasaki and Pope model uses $\mathfrak{C}= 0.69$. The effective shear rate is then plugged into Equation \ref{eq:carreau} to obtain the effective viscosity at the macro-scale. The third upscaling method under consideration was the average viscosity model by Eberhard et al.~\cite{eberhard_determination_2019}, which, in contrast to the previous two, is analytical by nature. This approach analytically computes the average viscosity of the fluid flowing through a representative pore channel of characteristic radius $R_{char}$, which is determined from the pore size distribution of the porous medium. The details about the derivations of the equations and their analytical solution can be found in \cite{eberhard_determination_2019}. Finally, constitutive relations are required for the relative permeabilities. Relative permeability is a factor applied to account for the reduction in permeability due to the mutual hindrance caused by the fluid phases.
Hence, the constitutive relation is usually a function of saturation. In this work, the Brooks-Corey model \cite{brooks_corey_1964} was used, given as \begin{align} \begin{split} \kappa^C_r &= (1 - s^M)^2 \bigg[1 - (s^M)^{\frac{2+\lambda_{bc}}{\lambda_{bc}}} \bigg] \\ \kappa^M_r &= (s^M)^{\frac{2+3\lambda_{bc}}{\lambda_{bc}}} \end{split} \label{eq:brooks-corey} \end{align} where $\lambda_{bc}$ is the uniformity parameter. A large value means that pore sizes in the porous medium do not strongly vary, while a small value represents non-uniformity in the pore sizes. \subsection{Validation with benchmark problem} \label{sec:methods_validation} \begin{figure}[h] \begin{subfigure}[t]{0.45\textwidth} \centering \includegraphics[width=\textwidth]{schematic.jpeg} \caption{} \label{fig:schematic} \end{subfigure} \hfill \begin{subfigure}[t]{0.45\textwidth} \centering \includegraphics[width=\textwidth]{exp_setup1.jpeg} \caption{} \label{fig:exp_setup_injector} \end{subfigure} \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=\textwidth]{exp_setup2.jpeg} \caption{} \label{fig:exp_setup_ct} \end{subfigure} \hfill \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=\textwidth]{boundary_conditions.jpeg} \caption{} \label{fig:boundary_conditions} \end{subfigure} \caption{(a) Bone cement injected into aluminium foam (above) and its schematic with dimensions in mm (below). The figure is not to scale. (b) Custom-made bone cement injector (c) Experiment setup (d) Geometry and boundary conditions for numerical implementation: The cannula walls (yellow) were assigned no-slip and clamped boundary conditions, while the outer walls (blue) of the foam were assigned zero pressure. The opening for the inflow (red) was assigned velocity by dividing the flow rate by the area of the opening.} \end{figure} A simple problem, based on the actual vertebroplasty procedure, was formulated as a benchmark to validate the numerical model. 
This is described with a simplified schematic in Figure \ref{fig:schematic}. In the problem, 2 ml of bone cement is injected into a piece of aluminium foam using a surgical syringe and cannula at two different flow rates, viz., 0.1 ml/s and 0.4 ml/s. Aluminium foam was chosen since its internal structure is very similar to a vertebra \cite{loeffel_vertebroplasty_2008}. \subsubsection{Experimental setup} \label{sec:methods_validation_exp} To carry out this benchmark experiment, a custom-made bone cement injector was designed and manufactured, as shown in Figure \ref{fig:exp_setup_injector}. The injector consisted of a carriage which was driven by a stepper motor using a ball screw with 5 mm feed per revolution. The stepper motor had a resolution of 1.8° corresponding to 200 steps per revolution, amounting to a calculated carriage resolution of 0.025 mm. A 200 N load cell was mounted on the carriage to measure the forces applied to the plunger of the syringe. The bone cement was directly mixed in a 10 ml syringe to easily transfer it to the 2 ml syringes of the Vertecem V+ Syringe Kit for injection, using a method described later in Section \ref{sec:methods_validation_mat}. The time from the start of mixing to the start of injection was measured using a stopwatch, which was about 200-210 seconds. The bone cement was injected by the injector at a prescribed rate into the aluminium foam, as shown in Figure \ref{fig:exp_setup_ct}. The aluminium foam was cut into a cylinder with a height of 30 mm and a diameter of 40 mm, for a size similar to that of a vertebra. The aluminium foam was placed in a polymethyl methacrylate (PMMA) housing with a hole in the centre for inserting the cannula and another hole to allow air to escape. The load cell measured the total injection force required for pushing the bone cement through the syringe and the cannula and then into the aluminium foam. 
Tests were also done without the aluminium foam to isolate the contribution of the aluminium foam. A Computed Tomography (CT) scanner (Revolution EVO, GE Medical Systems (Schweiz) AG, Glattbrugg, Switzerland) recorded the bone cement flow inside the foam by capturing CT images with a frequency of 2.5 Hz and a resolution of 200 x 200 x 625 microns. The CT-scanner was set at 120 kV and 150 mA, with 0.625 mm slice thickness, bone reconstruction kernel, and 0.4 s rotation time. \subsubsection{Numerical implementation} \label{sec:methods_validation_num} The flow into the aluminium foam was simulated using our numerical model, the geometry and boundary conditions for which are shown in Figure \ref{fig:boundary_conditions}. The geometry and mesh were created in the software Cubit v13.0, as per the dimensions shown in Figure \ref{fig:schematic}. The geometry was discretized by a mesh of 2540 hexahedral elements. \subsubsection{Material parameters} \label{sec:methods_validation_mat} \begin{table}[h] \centering \begin{tabular}{|c|c c|c|c c|c|} \hline \textbf{Para-} & \multicolumn{3}{|c|}{\textbf{Benchmark experiment}} & \multicolumn{3}{|c|}{\textbf{Simulations with marrow}} \\ \cline{2-7} \textbf{-meter} & \multicolumn{2}{|c|}{\textbf{Value}} & \textbf{Source} & \multicolumn{2}{|c|}{\textbf{Value}} & \textbf{Source} \\ \hline $\mu_{Lam\acute{e}}$ & 28.2 & GPa & Aluminum & 3.85 & GPa & Trabeculae\\ $\lambda_{Lam\acute{e}}$ & 54.7 & GPa & 6101 & 5.77 & GPa & \cite{wu_youngs_2018}\\ \hline $\mu^C_0$ & 1930 & Pa s & Rheometer & 1930 & Pa s & \multirow{4}{5em}{Same as in benchmark experiment} \\ $\mu^C_\infty$ & 1.93 & Pa s & measurement & 1.93 & Pa s & \\ $\lambda_{rh}^C$ & 1.38 & s & and injection & 1.38 & s & \\ $n^C_{rh}$ & 0.30 & - & without foam & 0.30 & - & \\ \hline $\mu^{MR}$ & 1.8$\times 10^{-5}$ & Pa s & Viscosity & \multicolumn{2}{|c|}{-} & \\ $\mu_{0}^M$ & \multicolumn{2}{|c|}{-} & of air & 1000 - 0.1 & Pa s & \cite{gurkan_mechanical_2008, davis_nonlinear_2006} \\ 
$\mu_{\infty}^M$ & \multicolumn{2}{|c|}{-} & (at 25 °C) & 100 - 0.01 & Pa s & \cite{ metzger_rheological_2014, jansen_mechanics_2015}\\ $\lambda^M_{rh}$ & \multicolumn{2}{|c|}{-} & & 10 & s &\\ $n^M_{rh}$ & \multicolumn{2}{|c|}{-} & & 0.5 & - &\\ \hline $\lambda_{bc}$ & 3 & & - & 3 & & \multirow{4}{5em}{Same as in benchmark experiment} \\ $n^F$ & 0.92 & & $\mu$CT and & 0.92 & & \\ $K^S$ & 2.1$\times 10^{-9}$ & m$^2$ & pore- & 2.1$\times 10^{-9}$ & m$^2$ & \\ $R_{char}$ & 0.32 & mm & network & 0.32 & mm & \\ \hline \end{tabular} \caption{Material parameters and values used for numerical implementation of the benchmark problem and simulations with marrow} \label{tab:mats_benchmark} \end{table} \begin{figure}[t] \begin{subfigure}[t]{0.49\textwidth} \centering \includegraphics[width=\textwidth]{rheo_parameter_left.jpeg} \caption{} \label{fig:rheo_parameter_left} \end{subfigure} \begin{subfigure}[t]{0.49\textwidth} \centering \includegraphics[width=\textwidth]{rheo_parameter_right.jpeg} \caption{} \label{fig:rheo_parameter_right} \end{subfigure} \begin{subfigure}[b]{\textwidth} \centering \includegraphics[width=\textwidth]{pnm.jpeg} \caption{} \label{fig:pnm} \end{subfigure} \caption{(a) Measurements used for obtaining rheological parameters of bone cement including viscosity change with time (left) and pressure required for cement injection without foam (right) (b) Pore-network geometry extraction from aluminium foam} \end{figure} The parameters and their values used for the benchmark problem are summarized in Table \ref{tab:mats_benchmark}. The aluminium foam was made of standard material Aluminium 6101, thus the material parameters were easily available from literature. Since the aluminium foam consisted of nearly uniformly sized pores, $\lambda_{bc} = 3$ was taken as a reasonable estimate. 
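With this estimate, the Brooks-Corey relations of Eq.~\ref{eq:brooks-corey} can be sketched and sanity-checked numerically (a minimal sketch; the function and variable names are illustrative, not part of the model code):

```python
def brooks_corey(s_m, lambda_bc=3.0):
    """Brooks-Corey relative permeabilities (Eq. brooks-corey):
    returns (kappa_r^C, kappa_r^M) for the cement (non-wetting) and
    marrow (wetting) phases as functions of the marrow saturation s_m."""
    k_rc = (1.0 - s_m) ** 2 * (1.0 - s_m ** ((2.0 + lambda_bc) / lambda_bc))
    k_rm = s_m ** ((2.0 + 3.0 * lambda_bc) / lambda_bc)
    return k_rc, k_rm

# Limiting cases: a marrow-free medium conducts only cement, and vice versa.
assert brooks_corey(0.0) == (1.0, 0.0)
assert brooks_corey(1.0) == (0.0, 1.0)
```

With $\lambda_{bc}=3$, both relative permeabilities vary smoothly between these limits; smaller $\lambda_{bc}$ (less uniform pore sizes) sharpens the nonlinearity.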
The rheological parameters of the bone cement include the upper and lower viscosity limits $\mu_{0}^C$ and $\mu_{\infty}^C$, the relaxation time $\lambda^C_{rh}$ for the transition to shear-thinning behaviour, and the flow behaviour index $n^C_{rh}$ for the shear-thinning. The upper viscosity limit and its change with time were determined by rheological measurement. We tested multiple methods of preparing the bone cement for the rheological measurements to mimic real kit application. After multiple trials, the successful and reproducible method consisted of the following steps: \begin{enumerate} \item 2.6 grams of bone cement powder was pre-weighed in a 10 mL beaker. \item Separately, 10 mL of PMMA solution was prepared in a batch glass vial. \item At time t = 0 s, 1.0 mL of PMMA solution was dropped into the open 10 mL beaker with the powder using a positive displacement pipette. \item Immediately (at t = 0 s), a timer was started to count 20 seconds of gentle stirring using a polyetheretherketone stirring rod (about 20 rotations). \item After 20 s, part of the sample was gently dropped onto the rheometer bottom plate and the top plate was gently lowered down. The humidity control hood was used to avoid sample evaporation, and silicone oil was gently spread around the sample directly before each measurement. \end{enumerate} Understanding the timing of polymerization is crucial; therefore, we evaluated the average time from the start of mixing the cement components to the start of the rheological measurement with the above procedure, which was 118 ± 10 s (based on a total of 26 trials). The rheological measurements were performed on an MCR302 rheometer (Anton Paar). A parallel-plate (PP) geometry with a 25 mm diameter top plate and a 1.5 mm gap was used. A time-sweep measurement with a total duration of 20 minutes was run in rotational mode at a shear rate of 0.2 s$^{-1}$, with one point recorded every 5 seconds.
The results are shown in Figure \ref{fig:rheo_parameter_left}. The viscosity at 3.5 minutes, i.e.~the cement preparation time in the injection experiments, was used to obtain $\mu_{0}^C=1930$ Pa s. The actual value of the lower viscosity limit $\mu_{\infty}^C$ is difficult to obtain because it occurs at shear rates too high to measure reliably, although the viscosity has been observed to drop by at least two orders of magnitude without plateauing \cite{krause_viscosity_1982, lepoutre_bone_2019}. Therefore, the value $\mu_{\infty}^C=1.93$ Pa s was assumed, allowing a thousandfold reduction in viscosity. The cement showed a sharp increase in viscosity for about the first 2-2.5 minutes after the start of mixing, after which the curing slowed down for the next 9-10 minutes, followed by rapid curing at the end. In clinical applications and our benchmark experiment, the injection occurs in the period from 3 to 12 minutes, where the viscosity increase was found to be linear at 0.32 Pa s/s. This increase would have a negligible effect on the viscosity during the period of injection for our experiments; hence, the dependence on time was neglected in this case. To obtain the remaining two parameters, $\lambda^C_{rh}$ and $n^C_{rh}$, the experimental setup explained in Section \ref{sec:methods_validation_exp} was used without the aluminium foam to obtain the injection force required for flow through the assembly made of the syringe, the nozzle, and the cannula, at flow rates of 0.1 and 0.4 ml/s. The injection pressure was then obtained by dividing the force by the area of the syringe. The injection pressures are shown in Figure \ref{fig:rheo_parameter_right}; the value immediately after the initial ramp was taken. These injection pressures and their respective flow rates were inserted into the analytical solution for the pressure required by a Carreau fluid flowing through a tube at a given flow rate, as given in \cite{sochi_analytical_2015}.
This gave a system of two equations, which could be solved to obtain the two remaining unknowns $\lambda^C_{rh} = 1.38$ and $n^C_{rh} = 0.30$. All parameters are listed in Table \ref{tab:mats_benchmark}. The remaining parameters, viz., porosity, permeability, and the characteristic radius $R_{char}$ (used in the average viscosity rheological model), are dependent on the pore geometry of the porous medium. A micro-CT scan of the aluminium foam was carried out to resolve the pore structure of the aluminium foam, from which the porosity could be computed. The micro-CT scanner used here was a VivaCT40 from SCANCO Medical AG at an isotropic resolution of 19 µm using 70 kV, 114 µA and 200 ms integration time. To determine the permeability and the characteristic radius, we used a pore-network model \cite{joekar-niasar_non-equilibrium_2010}. The micro-CT image was used to extract a pore-network, as shown in Figure \ref{fig:pnm}, using an open-source Python toolkit PoreSpy \cite{gostick_porespy_2019}. The permeability was obtained from this pore-network by simulating a Stokes flow through this pore-network using the open-source porous media flow solver DuMux \cite{koch_dumux_2020}. Note that the permeability obtained here was isotropic, i.e., nearly the same in all directions, hence we could use a scalar value instead of the tensor in Equations \ref{eq:darcyc} and \ref{eq:darcym}. Similarly, the characteristic radius $R_{char}$ was obtained from the volume-weighted average of the radii of the pores and the throats. \subsection{Clinically relevant simulations and further investigations} \label{sec:further_trials} To investigate a more clinically relevant setting, we used the same geometry and boundary conditions as in the benchmark problem with Cannella model and 0.4 ml/s flow rate, while suitably replacing the values of material parameters. Firstly, we replaced the mechanical properties of the aluminium foam with those of trabecular bone \cite{wu_youngs_2018}. 
Since the mechanical properties of the trabeculae vary depending on factors like specimen, location, and condition, mean values found in the literature were used. Furthermore, instead of a previously empty porous medium, we ran trials assuming the presence of bone marrow. In reality, bone marrow can consist of red bone marrow, which, owing to its similarity to blood, is non-Newtonian by nature; and yellow bone marrow, which has Newtonian rheology \cite{gurkan_mechanical_2008}. The rheological behaviour can vary depending on the red-to-yellow bone marrow ratio, which in turn depends on the age and health conditions of the patient. Given this uncertainty, we ran the simulations over a range of viscosities, considering both non-Newtonian and Newtonian rheologies. For non-Newtonian rheologies, we assumed realistic values for the parameters based on literature \cite{gurkan_mechanical_2008, davis_nonlinear_2006, metzger_rheological_2014, jansen_mechanics_2015}, while the viscosity limits were set such that $\mu_{0}^M/\mu_{\infty}^M = 10$. The rest of the parameters used were the same as in the benchmark experiment. The parameters and their values are summarized in Table \ref{tab:mats_benchmark}. Apart from the above, further trials were carried out to investigate the effect of permeability, porosity, and the Brooks-Corey uniformity parameter $\lambda_{bc}$. In reality, these parameters are interdependent, e.g., a higher porosity also leads to a higher permeability. However, here we wanted to investigate them in isolation to study their standalone effects. The chosen values for permeability and porosity were those obtained from the literature for human vertebrae \cite{baroud_experimental_2004, daish_estimation_2017, ochia_hydraulic_2002}. As far as the authors are aware, no information is available in the literature on the Brooks-Corey uniformity parameter of human vertebrae. Therefore, a general but wide range of values from 0 to 10 was chosen.
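For reference, the shear-thinning cement viscosity entering these simulations can be sketched with the fitted parameters ($\mu_{0}^C=1930$ Pa s, $\mu_{\infty}^C=1.93$ Pa s, $\lambda^C_{rh}=1.38$ s, $n^C_{rh}=0.30$), assuming the standard Carreau expression (an illustrative sketch, not the solver code):

```python
def carreau_viscosity(shear_rate, mu0=1930.0, mu_inf=1.93, lam=1.38, n=0.30):
    """Standard Carreau model:
    mu = mu_inf + (mu0 - mu_inf) * (1 + (lam*shear_rate)**2)**((n - 1)/2),
    with the bone cement parameters fitted in this section as defaults."""
    return mu_inf + (mu0 - mu_inf) * (1.0 + (lam * shear_rate) ** 2) ** ((n - 1.0) / 2.0)

# At rest the cement sits at mu0; at high shear it approaches mu_inf.
```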
\section{Results} \label{sec:results} \subsection{Validation with benchmark problem} \begin{figure}[htbp] \centering \begin{subfigure}[t]{0.9\textwidth} \centering \includegraphics[width=\textwidth]{timelapse.jpeg} \caption{} \label{fig:injexp_timelapse} \end{subfigure} \begin{subfigure}[b]{0.9\textwidth} \centering \includegraphics[width=\textwidth]{validation_injpres.jpeg} \caption{} \label{fig:validation_injpres} \end{subfigure} \caption{(a) Cement injection from the benchmark experiment using CT images (top) and from simulation (bottom) with cement saturation ($s^C$) for a flow rate of 0.4 ml/s (b) Injection force measured from experiments compared to simulations using various rheology upscaling models (left) and their respective percentage errors (right).} \label{fig:results_benchmark} \end{figure} \begin{table}[htbp] \centering \begin{tabular}{|c|c|c|} \hline \textbf{Rheology upscaling} & \multicolumn{2}{|c|}{\textbf{Error (RMS)}} \\ \cline{2-3} \textbf{model} & \textbf{0.1 ml/s} & \textbf{0.4 ml/s} \\ \hline Cannella model & 37\% & 50\% \\ Hirasaki and Pope model & 173\% & 110\% \\ Average viscosity model & 22\% & 23\% \\ \hline \end{tabular} \caption{Root mean square error (RMS) for rheology upscaling models over 0.1 to 0.9 of normalized time} \label{tab:perc_error} \end{table} Looking first at the cement distribution in Figure \ref{fig:injexp_timelapse}, similar patterns, i.e., fully saturated ($s^C \approx 1$) with a sharp front, were obtained from the simulation and the experiment of the benchmark problem. However, the injection forces, shown in Figure \ref{fig:validation_injpres}, differed from the experiments depending on which rheology upscaling model was used, and the difference did not remain constant with time. Table \ref{tab:perc_error} shows the percentage error averaged over normalized time, excluding the first and last 10\% to ignore any effect of differences in climb/drop times.
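This windowed RMS metric can be implemented as follows (an illustrative reading of the metric; the exact discretization used for the reported values may differ):

```python
import numpy as np

def rms_percent_error(f_sim, f_exp, t):
    """RMS of the relative error between simulated and experimental
    injection force, over normalized time 0.1-0.9 (first and last 10%
    excluded to ignore climb/drop effects)."""
    t = np.asarray(t, dtype=float)
    f_sim = np.asarray(f_sim, dtype=float)
    f_exp = np.asarray(f_exp, dtype=float)
    t_norm = (t - t[0]) / (t[-1] - t[0])
    mask = (t_norm >= 0.1) & (t_norm <= 0.9)
    rel = (f_sim[mask] - f_exp[mask]) / f_exp[mask]
    return 100.0 * np.sqrt(np.mean(rel ** 2))
```

A uniform 20\% overshoot of the simulated force, for example, yields an RMS error of exactly 20\%.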
The average viscosity model gave results closest to the experiments. The Cannella model gave results with more error but in a similar range, whereas the Hirasaki and Pope model performed the worst, with much higher force magnitudes. \subsection{Clinically relevant simulations} \begin{figure}[h] \begin{subfigure}[t]{0.45\textwidth} \centering \includegraphics[width=\textwidth]{marrow_injpres_n.jpeg} \caption{} \label{fig:marrow_injpres_n} \end{subfigure} \begin{subfigure}[t]{0.45\textwidth} \centering \includegraphics[width=\textwidth]{marrow_injpres_nn.jpeg} \caption{} \label{fig:marrow_injpres_nn} \end{subfigure} \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=\textwidth]{cement_dist_compare.jpeg} \caption{} \label{fig:cement_dist_compare} \end{subfigure} \hfill \begin{subfigure}[b]{0.37\textwidth} \centering \includegraphics[width=\textwidth]{marrow_cement_visc.jpeg} \caption{} \label{fig:marrow_cement_visc} \end{subfigure} \caption{(a) Injection force for marrow with Newtonian rheology (b) Injection force for marrow with non-Newtonian rheology (c) Cement saturation ($s^C$) for Newtonian bone marrow with viscosity $\mu^{MR}=1000$ Pa s (left) and $\mu^{MR}=0.01$ Pa s (right) (d) Cut section showing viscosity distribution of cement ($\mu^{CR}$) and marrow ($\mu^{MR}$) for the case of $\mu^{MR}$ ranging from $10^{3}$ to $10^{2}$ Pa s} \end{figure} From Figure \ref{fig:marrow_cement_visc}, it was observed that, except at the peripheries, the viscosity was uniform within each fluid despite the non-Newtonian rheology. The cement viscosity after shear-thinning was 5.9 Pa s. Figure \ref{fig:marrow_injpres_n} shows a stark difference in the injection force development for cases with marrow viscosity higher than this value of 5.9 Pa s compared to cases where it was lower. Where the marrow viscosity was higher, the force peaked near the start and declined afterwards, whereas when the marrow viscosity was lower, the force showed a small, gradual increase with time.
The difference in the injection force between the non-Newtonian and the Newtonian marrow cases was only in magnitude, with higher values observed in the case of Newtonian marrow since it does not undergo shear-thinning. Another difference caused by the marrow viscosity is shown in Figure \ref{fig:cement_dist_compare}. Higher marrow viscosity caused the cement distribution to be diffuse, as observed from low cement saturation values, and to spread farther, nearly touching the end of the porous medium. For lower marrow viscosity, the cement cloud was fully saturated and compact, similar to the benchmark experiment. Note that with both the aluminium foam and the trabecular bone as the solid skeleton, the deformations obtained were negligible, on the order of 10$^{-10}$ m and 10$^{-9}$ m respectively. These results are not shown here but are available in the dataset \cite{darus-3146_2022}. \subsection{Permeability, porosity, and Brooks-Corey uniformity parameter} Figure \ref{fig:more_params} shows how the injection force at the end of injection varies with permeability, porosity, and the uniformity parameter $\lambda_{bc}$. Of the three parameters, only permeability showed a significant influence; porosity had nearly no influence. For $\lambda_{bc}$, there was some increase in injection force at $\lambda_{bc}=0.1$, but that is already an extreme and unrealistic case. However, since $\lambda_{bc}$ represents the pore size uniformity in the porous medium, we also investigated its effect on the cement distribution. Therefore, we considered two cases, with lower marrow viscosity (10$^{-5}$ Pa s) and with higher marrow viscosity (10$^{3}$ Pa s) relative to the cement viscosity, since we observed a difference in cement distribution for these cases earlier in Figure \ref{fig:cement_dist_compare}. Accordingly, we compared the injection force graphs and cement distributions for various values of $\lambda_{bc}$ for both these cases, as shown in Figure \ref{fig:bc_analysis}.
We observed no significant difference in the injection forces due to $\lambda_{bc}$ as long as $\lambda_{bc} > 0.1$. However, the cement distribution did show a dependence on $\lambda_{bc}$ for the case of higher marrow viscosity, although this was not the case when the marrow viscosity was lower than the cement's. The exception here is again the extreme case of $\lambda_{bc} = 0.1$. \begin{figure}[h] \begin{subfigure}[t]{0.9\textwidth} \centering \includegraphics[width=\textwidth]{permporobc.jpeg} \caption{} \label{fig:permporobc} \end{subfigure} \begin{subfigure}[b]{0.9\textwidth} \centering \includegraphics[width=\textwidth]{bc_analysis.jpeg} \caption{} \label{fig:bc_analysis} \end{subfigure} \caption{(a) Injection force at $t=5.0$ s for varying permeability, porosity, and $\lambda_{bc}$ (b) Injection force and cement saturation along the X-axis at the point of injection for various values of $\lambda_{bc}$ when the marrow has higher viscosity (dotted lines) and lower viscosity (solid lines) than the cement} \label{fig:more_params} \end{figure} \section{Discussion} \label{sec:discussion} The matching cement patterns in the experiment and the simulations for the benchmark problem show that the presented model can correctly describe and replicate the flow boundary conditions from the experiments. In terms of injection force, the average viscosity model for upscaling the rheologies gave the best results, followed by the Cannella model. The error, although not negligible, is expected given the uncertainty caused by factors like the sensitivity of the cement curing to the mixing conditions and exposure to air, which are difficult to control given the constraints of the rheological measurement and the benchmark experiments. The Hirasaki and Pope model performed the worst among the three. In contrast, the study by \citet{eberhard_determination_2019} found the results of the Hirasaki and Pope and the average viscosity models to be similar and more accurate than those of the Cannella model.
The reason for the better accuracy of the Hirasaki and Pope model in Eberhard et al.'s study could be that they used a ``granular'' structure made of monodisperse spheres, which is also what \citet{hirasaki_analysis_1974} used to arrive at the value of $\mathfrak{C} = 0.69$ (in Equation \ref{eq:rheo_upscaling}) in their model. Comparatively, the ``fibrous''-type structure of the aluminium foam is considerably different. Therefore, the Cannella and the average viscosity models are better suited than the Hirasaki and Pope model to upscale the viscosity for the case of aluminium foam, and therefore, also for vertebrae. The average viscosity model gave the most accurate results not only in Eberhard et al.'s study but also in ours, despite the difference in the porous materials used, hinting at its potential applicability to a relatively wide range of porous media. The reason for its wider applicability compared to the semi-empirical models could be that it uses an additional parameter (the characteristic radius $R_{char}$) derived from the pore geometry, whereas the other two models use fixed values for the constant $\mathfrak{C}$. Interestingly, the injection force increased with time in the simulations even though the time-increasing viscosity due to curing was neglected in the model. Therefore, the increase in the force must stem from the nature of the cement distribution. Also, note that in both experiment and simulation, a fourfold increase in flow rate requires less than twice the injection force. This non-linear dependence is due to the decrease in the cement viscosity by shear-thinning at higher flow rates. The presence of bone marrow had a considerable impact on the results. Whether the marrow is non-Newtonian or Newtonian affects only the magnitude of the injection force, with the lower force occurring in the non-Newtonian case due to the shear-thinning viscosity.
The injection force development and cement distribution are significantly affected by whether the cement viscosity is higher or lower than the marrow viscosity. There are three clear risks if the cement viscosity falls below that of the marrow: (i) unintuitive injection force development, (ii) improper cement filling, and (iii) an additional dependence on the pore size uniformity. Firstly, the injection force required is highest at the start, followed by a substantial dip. The peak force needed at the start requires the practitioners to apply more effort early on, which could lead to sudden excess cement injection as the required force suddenly dips. Hence, such injection force behaviour is highly unintuitive for practitioners relying on haptic feedback. Secondly, the injected cement only partially fills the porous bone before spreading further. The marrow is not fully displaced because its higher viscosity makes it less mobile than the cement. Therefore, the cement spreads in an unstable fingering pattern in a phenomenon known as ``viscous fingering''. Such a filling pattern causes poor interdigitation of the cement with the trabeculae. It also causes the cement to spread farther, potentially leaking outside the cortical shell before the required volume is injected. Finally, since the fingers develop depending on the pore geometries, an additional dependence of the cement distribution on the pore size uniformity parameter $\lambda_{bc}$ is introduced. This additional dependence adds another factor of uncertainty to the procedure outcome. On the other hand, these risks could be avoided if the cement viscosity stays higher than the marrow's, yielding advantages like (i) a gradually rising injection force, which is more stable and intuitive for practitioners; (ii) the cement completely filling the porous bone, providing better interdigitation and leading to better mechanical strengthening; and (iii) no strong dependence on the pore size uniformity.
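This stability argument can be condensed into a heuristic check of Saffman-Taylor type (the unit threshold is the standard idealized criterion neglecting capillarity and gravity, and is not derived in this work; the names are illustrative):

```python
def displacement_is_unstable(mu_displacing, mu_displaced):
    """Heuristic viscous-fingering criterion: the front is unstable when
    the displaced fluid (marrow) is more viscous than the displacing
    fluid (cement after shear-thinning)."""
    return mu_displaced > mu_displacing

# After shear-thinning, the cement viscosity is about 5.9 Pa s:
assert displacement_is_unstable(5.9, 1000.0)      # viscous marrow: fingering
assert not displacement_is_unstable(5.9, 1.8e-5)  # air-filled benchmark: stable front
```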
The advantages of high-viscosity cement have been emphasized in earlier works \cite{baroud_high-viscosity_2006, loeffel_vertebroplasty_2008}, but none have provided a clear definition for `high-' or `low-viscosity' cement. From the results of this work, we can state that the marrow viscosity is the reference relative to which the cement viscosity must be higher, especially after the shear-thinning from the injection. The shear-thinning causes the cement and marrow viscosities to come closer to their respective $\mu_{\infty}$ compared to $\mu_{0}$. In this regard, the power law exponent $n$ and the lower viscosity limit $\mu_{\infty}$ of the cement and the marrow are crucial. For practitioners, this information could be useful for choosing the bone cement and its curing time depending on the patient's marrow composition, or for making decisions like whether techniques like marrow aspiration need to be employed. There are also some limitations to our current study. As mentioned earlier, the bone cement viscosity and curing behaviour are sensitive to mixing conditions and exposure to air. We also did not consider the temperature dependence, which would cause the viscosities to change from the heat released by curing, or the difference in the room and body temperature in a clinical setting. More limitations could arise in clinical settings, e.g., we applied boundary conditions such that the fluids can freely flow out of the walls, whereas \textit{in vivo}, they might enter into the blood vessels inside the vertebra. Modelling these \textit{in vivo} conditions would require dependence of the boundary conditions on the distribution of the blood vessels in the vertebra. Moreover, influential parameters like permeability and marrow viscosity are hard to determine in \textit{in vivo} conditions. These parameters could be estimated from \textit{in vivo} measurements like clinical CT images \cite{teo_correlation_2007} or patient characteristics based on clinical data. 
However, these factors contribute to the uncertainty in the outcome of the simulation results. In this regard, a study for quantifying the uncertainties in the parameters could be quite beneficial. Simulations using such estimations could help practitioners estimate at least a pilot range for the operating parameters, e.g.~injection pressure, cement viscosity, curing time, flow rate, etc., before performing the actual procedure instead of completely relying on haptic feedback and frequent X-ray images at the time of the procedure. The reduced reliance on X-ray imaging would decrease the patient's exposure to harmful radiation. The model could also serve as a training tool for relatively young, inexperienced practitioners. In all the simulations carried out in this study, the deformations were negligible: $\approx 10^{-10}$ m for aluminium foam, $\approx 10^{-9}$ m for trabecular bone. Despite this, the capability to compute deformations is advantageous in cases like a weakened vertebra where the deformations may not be negligible. Extending the model to include the effect of existing fractures on vertebroplasty is possible and especially clinically relevant for future work since vertebroplasty mostly happens on fractured vertebrae. Similarly, the inclusion of temperature effects in the model has relevance since tissue necrosis is another risk associated with vertebroplasty. Lastly, the physics of this model can be suitably modified for other injection and infusion processes in the biomedical field. \section{Conclusion} \label{sec:conclusion} In conclusion, we presented a continuum-mechanical model based on the Theory of Porous Media for simulating vertebroplasty. The model can simulate the injection of bone cement in porous materials like vertebrae and yield realistic results, as was evident from the benchmark experiment used for validation. 
The average viscosity model is found to be the most suitable approach for upscaling the rheology in the macro-scale framework. The Cannella model could potentially be used, but the Hirasaki and Pope model is not suitable. Our simulations show that the cement must have a higher viscosity than the marrow to ensure stable development of injection force and proper filling of cement. In this regard, the marrow's and the cement's relative rheologies play a crucial role in the outcome of vertebroplasty and in avoiding cement leakage. We expect the current model to support future developments of vertebroplasty simulations that are closer to clinical reality and expect its possibilities to be extended toward modelling other fluid injection mechanisms in biomedical fields.
\section{Introduction} The convolution operators with non-singular kernels have drawn wide interest in recent years, from both the theoretical and the applied points of view: see, for example, \cite{LOS}, \cite{ATAN}, \cite{YAN}, \cite{ZHE}, \cite{ARS} and the references therein. Let $f:\mathbb{R}^{+}\rightarrow \mathbb{R}$ be a differentiable and absolutely integrable function in $AC_{loc}\left( 0,+\infty \right) $ (i.e. with integrable first derivative); we use here the following operator
\begin{equation}
^{CF}D_{x}^{\alpha }f(x):=\frac{B(\alpha )}{1-\alpha }\int\limits_{0}^{x}\frac{d}{dz}f(z)e^{-\frac{\alpha }{1-\alpha }(x-z)}dz,\qquad x>0,\ \alpha \in (0,1), \label{cf}
\end{equation}
(where $B(\alpha )$ is a positive normalizing constant), which is a variant of the so-called ``Caputo-Fabrizio fractional derivative'' introduced in \cite{CAP}: in our case the lower limit of integration is zero, since we identify the kernel with the tail of a (bounded) L\'{e}vy measure on $(0,+\infty ).$ Analogously, we define the following convolution operator
\begin{equation}
D_{x}^{\alpha ,\nu }f(x):=\frac{B(\alpha )}{1-\alpha }\int\limits_{0}^{x}\frac{d}{dz}f(z)E_{\nu }\left( -\frac{\alpha }{1-\alpha }(x-z)^{\nu }\right) dz,\qquad x>0,\ \alpha \in (0,1),\ \nu \in (0,1], \label{ab}
\end{equation}
where $E_{\nu }(\cdot )$ is the Mittag-Leffler function, i.e. $E_{\nu }(x):=\sum_{j=0}^{\infty }x^{j}/\Gamma (\nu j+1)$, for $x\in \mathbb{R}$, $\func{Re}(\nu )>0$. We note that (\ref{ab}) reduces, for $\alpha =\nu $, to the so-called ``Atangana-Baleanu fractional derivative'' (in the Caputo sense), see \cite{ATA}, while, for $\nu =1$, it coincides with (\ref{cf}).
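Operators such as (\ref{cf}) are straightforward to evaluate numerically; a quadrature sketch (illustrative only: $B(\alpha )$ is set to $1$ and the derivative $f'$ is supplied analytically):

```python
import numpy as np

def cf_derivative(fprime, x, alpha, B=1.0, n=20001):
    """Trapezoidal quadrature of the Caputo-Fabrizio-type operator (cf):
    (B/(1-alpha)) * int_0^x f'(z) exp(-alpha/(1-alpha) * (x-z)) dz."""
    z = np.linspace(0.0, x, n)
    vals = fprime(z) * np.exp(-alpha / (1.0 - alpha) * (x - z))
    h = z[1] - z[0]
    return B / (1.0 - alpha) * h * (vals.sum() - 0.5 * (vals[0] + vals[-1]))

# For f(z) = z (so f' = 1) the integral is elementary:
# (B/alpha) * (1 - exp(-alpha*x/(1-alpha))).
val = cf_derivative(lambda z: np.ones_like(z), 2.0, 0.5)
assert abs(val - 2.0 * (1.0 - np.exp(-2.0))) < 1e-6
```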
As for (\ref{cf}), the kernel of (\ref{ab}) is non-singular in the origin, since $E_{\nu }(0)=1.$ Moreover, we introduce here the following operator
\begin{equation}
\mathcal{D}_{x}^{\alpha ,\rho }f(x):=\frac{1}{\Gamma (\rho )}\int\limits_{0}^{x}\frac{d}{dz}f(z)\Gamma \left( \rho ;k_{\alpha }(x-z)\right) dz,\qquad x>0,\ k_{\alpha }>0,\ \rho \in (0,1], \label{ig}
\end{equation}
where $\Gamma (\rho ;x):=\int_{x}^{+\infty }e^{-w}w^{\rho -1}dw$ is the upper-incomplete gamma function. Also (\ref{ig}) generalizes (\ref{cf}), to which it reduces for $\rho =1$ and $k_{\alpha }=\alpha /(1-\alpha ).$ Again the kernel of (\ref{ig}) is non-singular in the origin, since $\Gamma \left( \rho ;0\right) =\Gamma \left( \rho \right) .$ Our aim is to analyze the role of operators of this kind in the field of stochastic processes; while the random models associated with differential equations with classical fractional derivatives have been extensively studied, the probabilistic applications of the operators defined above have not yet been explored. Let us start by recalling that $\psi :(0,+\infty )\rightarrow \mathbb{R}^{+}$ is a Bernstein function if it is non-negative, of class $C^{\infty }$ and such that, for any $x>0,$ $(-1)^{k}\frac{d^{k}}{dx^{k}}\psi (x)\leq 0,$ $k\in \mathbb{N}.$ Then it is well-known that $\psi $ admits the following representation (see \cite{SCH}, p.21)
\begin{equation}
\psi (x)=c+bx+\int\limits_{0}^{+\infty }(1-e^{-zx})\mu (dz), \label{bb}
\end{equation}
where $c,b\geq 0$ and $\mu $ is a measure on $(0,+\infty )$ satisfying $\int\limits_{0}^{+\infty }(1\wedge z)\mu (dz)<\infty $, called the L\'{e}vy measure. Moreover, the triplet $\left( c,b,\mu \right) $ uniquely determines $\psi $ (and the reverse holds as well). Let us define the stochastic process $\mathcal{S}:=\mathcal{S}(t),\ t\geq 0$, assuming that it is a subordinator, i.e. L\'{e}vy and almost surely non-decreasing.
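The operator (\ref{ig}) can be evaluated in the same spirit; a sketch, taking the kernel in convolution form $\Gamma (\rho ;k_{\alpha }(x-z))$ so that $\rho =1$ recovers the exponential kernel of (\ref{cf}), and noting that SciPy's \texttt{gammaincc} is the regularized upper incomplete gamma, i.e. it already carries the $1/\Gamma (\rho )$ normalization (illustrative names):

```python
import numpy as np
from scipy.special import gammaincc  # regularized upper incomplete gamma

def gamma_kernel_derivative(fprime, x, rho, k_alpha, n=20001):
    """Trapezoidal quadrature of
    (1/Gamma(rho)) * int_0^x f'(z) Gamma(rho; k_alpha*(x-z)) dz,
    using gammaincc(rho, .) = Gamma(rho; .)/Gamma(rho)."""
    z = np.linspace(0.0, x, n)
    vals = fprime(z) * gammaincc(rho, k_alpha * (x - z))
    h = z[1] - z[0]
    return h * (vals.sum() - 0.5 * (vals[0] + vals[-1]))

# For rho = 1 the kernel is exp(-k_alpha*(x-z)), as in (cf).
val = gamma_kernel_derivative(lambda z: np.ones_like(z), 2.0, 1.0, 1.0)
assert abs(val - (1.0 - np.exp(-2.0))) < 1e-6
```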
Let $h_{\mathcal{S}}(B,t):=P(\mathcal{S}(t)\in B)$ be its transition probabilities, for any $t\geq 0$ and Borel interval $B\subset (0,+\infty ).$ Let now $\overline{\mu }(s):=\int_{s}^{+\infty }\mu (dz),$ for $s\geq 0,$ be the so-called tail L\'{e}vy measure; if $\overline{\mu }(\cdot )$ is absolutely continuous on $(0,+\infty )$ and $\int_{0}^{+\infty }\mu (z)dz=+\infty $, the corresponding subordinator $\mathcal{S}$ has an absolutely continuous distribution (see \cite{SAT}, p.177), with density $h_{\mathcal{S}}(dx,t)=h_{\mathcal{S}}(x,t)dx,$ $t,x\geq 0,$ such that
\begin{equation}
\mathbb{E}e^{-\eta \mathcal{S}(t)}:=\int\limits_{0}^{+\infty }e^{-\eta x}h_{\mathcal{S}}(x,t)dx=e^{-\psi (\eta )t},\qquad \eta >0,\;t\geq 0 \label{lp2}
\end{equation}
(see \cite{SCH}, p.49). We define now the convolution operator $\mathcal{D}_{x}^{\psi }$ as follows
\begin{equation}
\mathcal{D}_{x}^{\psi }f(x):=\int\limits_{0}^{x}\frac{d}{dz}f(z)\overline{\mu }_{\psi }(x-z)dz, \label{co}
\end{equation}
for any differentiable and absolutely integrable function in $AC_{loc}\left( 0,+\infty \right) .$ If $f(0)<\infty ,$ it can alternatively be written as $\mathcal{D}_{x}^{\psi }f(x)=\frac{d}{dx}\int\limits_{0}^{x}f(z)\overline{\mu }_{\psi }(x-z)dz-\overline{\mu }_{\psi }(x)f(0)$ (see \cite{KOC} and \cite{KOC2}).
Note that we have emphasized the dependence of the tail L\'{e}vy measure on the Bernstein function by denoting it as $\overline{\mu }_{\psi }(\cdot )$; analogously, we will denote by $\mathcal{S}_{\psi }$ the subordinator with L\'{e}vy triplet $(0,0,\mu _{\psi })$, under the assumption that $c=b=0,$ without loss of generality. The Laplace transform of (\ref{co}) can easily be obtained, if again $f(0)<\infty $, by considering that $\int\limits_{0}^{+\infty }e^{-\eta x}\overline{\mu }_{\psi }(x)dx=\psi (\eta )/\eta ,$ and reads
\begin{equation}
\mathcal{L}\left\{ \mathcal{D}_{x}^{\psi }f(x);\eta \right\} =\psi (\eta )\widetilde{f}(\eta )-\frac{\psi (\eta )}{\eta }f(0),  \label{lt3}
\end{equation}
where $\widetilde{f}(\eta ):=\mathcal{L}\left\{ f(x);\eta \right\} $ and $\mathcal{L}\left\{ \cdot ;\eta \right\} $ denotes the Laplace transform with respect to $x$. It is proved in \cite{TOA} that, if the L\'{e}vy measure is absolutely continuous and unbounded and if the subordinator $\mathcal{S}_{\psi }$ has density $h_{\mathcal{S}_{\psi }}(x,t)$ vanishing in the origin, the following initial value problem
\begin{equation}
\left\{ 
\begin{array}{l}
\frac{\partial }{\partial t}h(x,t)=-\mathcal{D}_{x}^{\psi }h(x,t) \\ 
h(0,t)=0,\qquad h(x,0)=\delta (x)
\end{array}
\right.   \label{cau}
\end{equation}
is satisfied by $h_{\mathcal{S}_{\psi }}(x,t)$, for $x,t\geq 0.$ Since we consider here integral operators with non-singular kernel in the origin, such as (\ref{cf}), (\ref{ab}) and (\ref{ig}), we move to the case where the L\'{e}vy measure has finite mass, i.e. $\int_{0}^{+\infty }\mu _{\psi }(dz)<\infty ,$ assuming also that $b=0$ in (\ref{bb}), without loss of generality. The corresponding Bernstein function $\psi $ is consequently bounded and the subordinator $\mathcal{S}_{\psi }$ is a driftless step process. Moreover, almost all its paths have a finite number of jumps on every compact interval (finite activity).
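Definition (\ref{co}) and formula (\ref{lt3}) can be verified numerically for the exponential kernel $\overline{\mu }_{\psi }(x)=e^{-kx}$, i.e. $\psi (\eta )=\eta /(\eta +k)$, using the test function $f(x)=e^{-ax}$, for which $\mathcal{D}_{x}^{\psi }f(x)=-a(e^{-ax}-e^{-kx})/(k-a)$ in closed form. The Python sketch below uses arbitrary illustrative parameter values.

```python
import math

k, a = 2.0, 0.7                        # kernel rate and test decay rate (illustrative)
df = lambda z: -a * math.exp(-a * z)   # f'(z) for f(x) = exp(-a x)
mu = lambda x: math.exp(-k * x)        # tail Levy measure (exponential kernel)

def D_psi(x, m=20_000):
    """Operator (co) applied to f, by midpoint quadrature of the convolution."""
    h = x / m
    return sum(df((i + 0.5) * h) * mu(x - (i + 0.5) * h) for i in range(m)) * h

D_exact = lambda x: -a * (math.exp(-a * x) - math.exp(-k * x)) / (k - a)
for x in (0.5, 1.0, 3.0):
    assert abs(D_psi(x) - D_exact(x)) < 1e-6

# (lt3): L{Df}(eta) = psi(eta) f~(eta) - psi(eta)/eta * f(0), with f~(eta) = 1/(eta+a)
for eta in (0.3, 1.0, 2.5):
    lhs = -a / ((eta + a) * (eta + k))              # exact Laplace transform of D_exact
    rhs = (eta / (eta + k)) / (eta + a) - 1.0 / (eta + k)
    assert abs(lhs - rhs) < 1e-12
```

Both sides of (\ref{lt3}) collapse to $-a/[(\eta +a)(\eta +k)]$, in agreement with the quadrature of the convolution itself.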
We will make these assumptions, together with $c=0,$ which implies that $\mathcal{S}_{\psi }$ is a strict subordinator. Then, in our case, the condition for the absolute continuity of $h_{\mathcal{S}}(\cdot ,t)$ no longer holds. Moreover, in some of the cases considered here also the assumption of a density vanishing in the origin fails; on the contrary, as happens even for well-known processes (such as, for example, the gamma subordinator), the density is infinite for $x=0.$ Thus the result in (\ref{cau}) must be modified accordingly. It is well known that, under the assumption that $\int_{0}^{+\infty }\mu _{\psi }(dz)<\infty ,$ a L\'{e}vy process with triplet $(0,0,\mu _{\psi })$ is a compound Poisson process, i.e.
\begin{equation}
\mathcal{S}_{\psi }(t)=\sum_{j=1}^{N(t)}X_{j}^{\psi },  \label{ss}
\end{equation}
where $N:=\{N(t),\;t\geq 0\}$ is a Poisson process with parameter $\lambda =1$, independent of $X_{j}^{\psi },$ for any $j=1,2,...$ (see, for example, \cite{APP}, p.49). Moreover, the addends $X_{j}^{\psi }$ are, for $j=1,2,...,$ non-negative, independent, identically distributed (i.i.d.) random variables with $F_{X_{j}^{\psi }}(x):=P(X_{j}^{\psi }\leq x)$ such that
\begin{equation}
\mathcal{L}\left\{ F_{X_{j}^{\psi }}(y);\eta \right\} =\frac{1}{\eta }\left[ 1-\psi (\eta )\right] .  \label{fr}
\end{equation}
In this case, the distribution function of $\mathcal{S}_{\psi }$ reads, for $y\in \mathbb{R},$
\begin{equation}
F_{\mathcal{S}_{\psi }}(y,t):=P\{\mathcal{S}_{\psi }(t)\leq y\}=e^{-t}1_{[0,+\infty )}(y)+\int_{-\infty }^{y}f_{\mathcal{S}_{\psi }}(x,t)dx,  \label{cc}
\end{equation}
where
\begin{equation}
f_{\mathcal{S}_{\psi }}(x,t)=e^{-t}\sum_{n=1}^{\infty }\frac{t^{n}}{n!}f_{X_{j}^{\psi }}^{\ast (n)}(x)  \label{lp}
\end{equation}
is the density of the absolutely continuous component and $f^{\ast (n)}$ denotes the $n$-fold convolution of the function $f$.
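The representation (\ref{ss}) and the atom at zero in (\ref{cc}) lend themselves to a direct Monte Carlo check. The sketch below (exponential jumps with an arbitrary rate $k$, and $\lambda =1$ as above) verifies that the empirical mass at zero approaches $e^{-t}$ and that the empirical mean approaches $t\,\mathbb{E}X_{1}^{\psi }=t/k$.

```python
import math, random

random.seed(7)
t, k, n_sim = 1.2, 2.0, 200_000     # time horizon, jump rate, sample size (illustrative)

def sample_S():
    """One draw of S_psi(t): sum of Exp(k) jumps at unit-rate Poisson epochs, cf. (ss)."""
    s, clock = 0.0, random.expovariate(1.0)
    while clock <= t:
        s += random.expovariate(k)
        clock += random.expovariate(1.0)
    return s

draws = [sample_S() for _ in range(n_sim)]
atom = sum(1 for s in draws if s == 0.0) / n_sim   # estimates P(S(t)=0) = e^{-t}
mean = sum(draws) / n_sim                          # estimates E S(t) = t/k
assert abs(atom - math.exp(-t)) < 0.01
assert abs(mean - t / k) < 0.02
```

The tolerances are several standard deviations wide for this sample size, so the check is stable despite the simulation noise.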
The compound Poisson process has important applications in different fields, ranging from models of insurance risk to the analysis of statistical behavior in social and biological systems, as well as to the treatment of certain types of random dynamics in physics. As a consequence of (\ref{cc}) and (\ref{lp}), the Laplace transform of $f_{\mathcal{S}_{\psi }}$ is given by
\begin{eqnarray}
\widetilde{f}_{\mathcal{S}_{\psi }}(\eta ,t) &=&e^{-t}\sum_{n=1}^{\infty }\frac{1}{n!}\left( \widetilde{f}_{X^{\psi }}(\eta )t\right) ^{n}  \label{lap3} \\
&=&[\text{by (\ref{fr})}]  \notag \\
&=&e^{-t}\sum_{n=1}^{\infty }\frac{1}{n!}\left( (1-\psi (\eta ))t\right) ^{n}=e^{-\psi (\eta )t}-e^{-t},\qquad \eta >0,\;t\geq 0,  \notag
\end{eqnarray}
instead of (\ref{lp2}). Correspondingly, as we will prove in the next section, the equation satisfied by the density $f_{\mathcal{S}_{\psi }}$ differs from (\ref{cau}) by two additional terms, which depend on the choice of $\psi $ and on whether or not the density of the subordinator is infinite in the origin, for some values of $t$. In the last section, we extend these results by generalizing the previous operators to the case of random parameters, thus obtaining distributed-order convolution operators. We provide the explicit solution of the corresponding equations, at least under simplifying assumptions. Finally, in the concluding remarks, we hint at some applications of the obtained results to risk theory and, in particular, to a continuous-time model, where the surplus process of the insurance company is modelled by a compound Poisson process with non-negative, absolutely continuous claim sizes.
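For exponential jumps with rate $k$, the convolutions in (\ref{lp}) are Gamma$(n,k)$ densities, so the series can be evaluated directly; the following Python sketch (with arbitrary illustrative values of $t$, $k$ and $\eta$) checks numerically that the total mass of the absolutely continuous component is $1-e^{-t}$ and that its Laplace transform agrees with (\ref{lap3}).

```python
import math

t, k, N = 1.2, 2.0, 40                 # illustrative parameters; N truncates the series (lp)
coef = [math.exp(-t) * t**n * k**n / (math.factorial(n) * math.factorial(n - 1))
        for n in range(1, N)]

def f_density(x):
    """Series (lp) with f^{*(n)} = Gamma(n,k) density (exponential jumps)."""
    s, xp = 0.0, 1.0                   # xp runs through x**(n-1)
    for c in coef:
        s += c * xp
        xp *= x
    return s * math.exp(-k * x)

def integral(g, X=30.0, m=30_000):     # midpoint rule on [0, X]
    h = X / m
    return sum(g((i + 0.5) * h) for i in range(m)) * h

assert abs(integral(f_density) - (1.0 - math.exp(-t))) < 1e-5      # mass, cf. (cc)
eta = 1.5
psi = eta / (eta + k)                                              # Bernstein function
lt = integral(lambda x: math.exp(-eta * x) * f_density(x))
assert abs(lt - (math.exp(-psi * t) - math.exp(-t))) < 1e-5        # (lap3)
```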
We recall the following definitions of well-known special functions that we will apply later: let $W_{\alpha ,\beta }(x):=\sum_{j=0}^{\infty }x^{j}/[j!\,\Gamma (\alpha j+\beta )],$ for $x,\alpha ,\beta \in \mathbb{C},$ be the Wright function and let
\begin{equation}
E_{\alpha ,\beta }^{\gamma }(x):=\sum_{j=0}^{\infty }\frac{x^{j}(\gamma )_{j}}{j!\Gamma (\alpha j+\beta )},\qquad \func{Re}(\alpha ),\func{Re}(\beta )>0,\;\gamma >0,\;x\in \mathbb{R},  \label{ml}
\end{equation}
where $(\gamma )_{j}:=\gamma (\gamma +1)...(\gamma +j-1),$ $j=0,1,...,$ be the Prabhakar function (or Mittag-Leffler function with three parameters). We will denote, for brevity, the function (\ref{ml}) as $E_{\alpha ,\beta }(x)$ when $\gamma =1$, and as $E_{\alpha }(x)$ when $\gamma =\beta =1.$ Let us recall the following formula for the Laplace transform of (\ref{ml}):
\begin{equation}
\mathcal{L}\left\{ x^{\beta -1}E_{\alpha ,\beta }^{\gamma }(\lambda x^{\alpha });\eta \right\} =\frac{\eta ^{\alpha \gamma -\beta }}{(\eta ^{\alpha }-\lambda )^{\gamma }},  \label{ml2}
\end{equation}
for $\func{Re}(\eta )>0,$ $\func{Re}(\beta )>0,$ $\lambda \in \mathbb{C}$ and $|\lambda \eta ^{-\alpha }|<1$ (see \cite{KIL}, p.47).

\section{Main results}

Let us consider the convolution operator $\mathcal{D}_{x}^{\psi }$ defined in (\ref{co}) under different assumptions on the kernel $\overline{\mu }_{\psi }(\cdot )$: in the exponential case, $\mathcal{D}_{x}^{\psi }$ reduces to the variant of the Caputo-Fabrizio operator defined in (\ref{cf}) and the solution is finite in the origin. Then we analyze two cases, where the kernel of the operator is represented by a Mittag-Leffler function (with parameter $\nu \in (0,1]$) or by an incomplete gamma function (with parameter $\rho \in (0,1]$).
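The transform formula (\ref{ml2}) is the key tool in the proofs below; it can be validated numerically by truncating the series (\ref{ml}) and integrating. The parameter values in the following sketch are arbitrary, chosen so that $|\lambda \eta ^{-\alpha }|<1$.

```python
import math

def prabhakar(alpha, beta, gamma, z, jmax=60):
    """Truncated series (ml) for E^gamma_{alpha,beta}(z)."""
    s = 0.0
    for j in range(jmax):
        poch = math.gamma(gamma + j) / math.gamma(gamma)        # Pochhammer (gamma)_j
        s += z**j * poch / (math.factorial(j) * math.gamma(alpha * j + beta))
    return s

alpha, beta, gamma, lam, eta = 0.6, 1.2, 1.5, -0.5, 2.0         # |lam| * eta**-alpha < 1

# left side of (ml2) by midpoint quadrature on [0, X]
X, m = 25.0, 8_000
h = X / m
lhs = 0.0
for i in range(m):
    x = (i + 0.5) * h
    lhs += x**(beta - 1) * prabhakar(alpha, beta, gamma, lam * x**alpha) * math.exp(-eta * x)
lhs *= h
rhs = eta**(alpha * gamma - beta) / (eta**alpha - lam)**gamma
assert abs(lhs - rhs) < 1e-3
```

For $\alpha =\beta =\gamma =1$ the same routine reduces to the exponential series, which provides an additional elementary consistency check.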
These extensions are both quite natural, since the Mittag-Leffler density is the fractional counterpart of the exponential one; on the other hand, the incomplete gamma function coincides with the tail distribution function of a gamma random variable, which generalizes the exponential one. As we will see, even though the last two operators reduce to the first one, for $\nu =1$ and $\rho =1$, respectively, the exponential case must be treated separately, since the governing equation is not obtained accordingly as a special case. This is a consequence of the different behavior of the solutions in the origin.

\subsection{Exponential kernel}

\begin{theorem}
Let $^{CF}D_{x}^{\alpha }$ be the convolution operator defined in (\ref{cf}), with $B(\alpha )=1-\alpha $. Then the solution to the following equation
\begin{equation}
\frac{\partial }{\partial t}f(x,t)=-\,^{CF}D_{x}^{\alpha }\,f(x,t)+k_{\alpha }(1-t)e^{-t-k_{\alpha }x},\qquad x\geq 0,\;t\geq 0,\text{ }\alpha \in (0,1),  \label{eq}
\end{equation}
with $k_{\alpha }=\alpha /(1-\alpha )$ and initial condition $f(x,0)=0,$ is given by
\begin{equation}
f_{\mathcal{S}_{\psi }}(x,t)=k_{\alpha }t\exp \left\{ -k_{\alpha }x-t\right\} W_{1,2}\left( k_{\alpha }xt\right) ,\qquad x\geq 0,  \label{den}
\end{equation}
and (\ref{den}) is the density of the absolutely continuous component of $\mathcal{S}_{\psi }$ defined in (\ref{ss}), for $X_{j}^{\psi }$ exponentially distributed with parameter $k_{\alpha }$, for $j=1,2,...$.
\end{theorem}

\begin{proof}
We first prove that the density $f_{\mathcal{S}_{\psi }}$ of the absolutely continuous component of (\ref{ss}) satisfies the following equation
\begin{equation}
\frac{\partial }{\partial t}f(x,t)=-\mathcal{D}_{x}^{\psi }f(x,t)-\overline{\mu }_{\psi }(x)f(0,t)+f_{X_{j}^{\psi }}(x)e^{-t},\qquad x,t\geq 0.
\label{cau3}
\end{equation}
Indeed, by taking the time-derivative of (\ref{lap3}), we get
\begin{equation*}
\frac{\partial }{\partial t}\widetilde{f}_{\mathcal{S}_{\psi }}(\eta ,t)=-\psi (\eta )e^{-\psi (\eta )t}+e^{-t},
\end{equation*}
which coincides with the Laplace transform of the r.h.s. of (\ref{cau3}), by considering (\ref{lt3}) and that $\int\limits_{0}^{+\infty }e^{-\eta x}\overline{\mu }_{\psi }(x)dx=\psi (\eta )/\eta $. Equation (\ref{cau3}) obviously holds only when $f(0,t)<\infty ,$ for any $t.$ By definition (\ref{cf}) we can write, in this case, $\overline{\mu }(x)=e^{-k_{\alpha }x}$ and
\begin{equation*}
\psi (\eta )=\eta \int\limits_{0}^{+\infty }e^{-\eta x-k_{\alpha }x}dx=\frac{\eta }{\eta +k_{\alpha }}.
\end{equation*}
Then equation (\ref{cau3}) coincides with (\ref{eq}) and (\ref{lap3}) reduces to
\begin{equation}
\widetilde{f}_{\mathcal{S}_{\psi }}(\eta ,t)=e^{-\frac{\eta t}{\eta +k_{\alpha }}}-e^{-t},  \label{hh}
\end{equation}
whose inverse is equal to (\ref{den}) and is finite also for $x=0$. Moreover, by considering (\ref{fr}), we have that $f_{X_{j}^{\psi }}(x)=\mathcal{L}^{-1}\{k_{\alpha }/(\eta +k_{\alpha });x\}=k_{\alpha }e^{-k_{\alpha }x}$, where $\mathcal{L}^{-1}\{\cdot ;x\}$ denotes the inverse Laplace transform.
\end{proof}

\begin{remark}
This result can alternatively be checked directly, by differentiating (\ref{den}) and applying definition (\ref{cf}): indeed we have that
\begin{eqnarray}
&&^{CF}D_{x}^{\alpha }\,f_{\mathcal{S}_{\psi }}(x,t)  \label{uf2} \\
&=&k_{\alpha }te^{-t-k_{\alpha }x}\sum_{n=1}^{\infty }\frac{(k_{\alpha }t)^{n}}{(n-1)!(n+1)!}\int_{0}^{x}z^{n-1}dz-k_{\alpha }^{2}te^{-t-k_{\alpha }x}\sum_{n=0}^{\infty }\frac{(k_{\alpha }t)^{n}}{n!(n+1)!}\int_{0}^{x}z^{n}dz  \notag \\
&=&k_{\alpha }te^{-t-k_{\alpha }x}\sum_{n=1}^{\infty }\frac{(k_{\alpha }tx)^{n}}{n!(n+1)!}-k_{\alpha }e^{-t-k_{\alpha }x}\sum_{l=1}^{\infty }\frac{(k_{\alpha }tx)^{l}}{l!^{2}},  \notag
\end{eqnarray}
which proves equation (\ref{eq}).
Formula (\ref{den}) coincides with the special case, for $\beta =1,$ of the distribution of the time-fractional compound Poisson process (with exponential addends) obtained in \cite{BEG}. Moreover, an alternative expression, in terms of modified Bessel functions, is given in \cite{ROL}.
\end{remark}

\subsection{Mittag-Leffler kernel}

We now consider the operator defined in (\ref{ab}). Even if the latter reduces to (\ref{cf}) for $\nu =1,$ the following result holds only for $\nu <1,$ as explained in Remark 4 below.

\begin{theorem}
Let $D_{x}^{\alpha ,\nu }$ be the convolution operator (\ref{ab}), with $B(\alpha )=1-\alpha $, for $\alpha ,\nu \in (0,1)$ and $k_{\alpha }=\alpha /(1-\alpha )$. Then the solution of the following equation
\begin{equation}
\frac{\partial }{\partial t}f(x,t)=-\,D_{x}^{\alpha ,\nu }\,f(x,t)+k_{\alpha }e^{-t}x^{\nu -1}E_{\nu ,\nu }(-k_{\alpha }x^{\nu }),\qquad x\geq 0,\;t\geq 0,  \label{abeq}
\end{equation}
with initial condition $f(x,0)=0$, coincides with the density function
\begin{equation}
f_{\mathcal{S}_{\psi }}(x,t)=\frac{e^{-t}}{x}\sum_{n=1}^{\infty }\frac{\left( k_{\alpha }tx^{\nu }\right) ^{n}}{n!}E_{\nu ,\nu n}^{n}(-k_{\alpha }x^{\nu }).  \label{abden}
\end{equation}
\end{theorem}

\begin{proof}
In this case $\overline{\mu }(x)=E_{\nu }\left( -k_{\alpha }x^{\nu }\right) $ and, by considering (\ref{ml2}), we get
\begin{equation*}
\psi (\eta )=\eta \int\limits_{0}^{+\infty }e^{-\eta x}E_{\nu }\left( -k_{\alpha }x^{\nu }\right) dx=\frac{\eta ^{\nu }}{\eta ^{\nu }+k_{\alpha }}.
\end{equation*}
Thus, we have that
\begin{equation}
\widetilde{f}_{\mathcal{S}_{\psi }}(\eta ,t)=e^{-\frac{\eta ^{\nu }t}{\eta ^{\nu }+k_{\alpha }}}-e^{-t}.  \label{ml3}
\end{equation}
The inverse Laplace transform of (\ref{ml3}) gives (\ref{abden}), as can easily be checked by (\ref{ml2}).
It has been proved in \cite{BEG} that (\ref{abden}) coincides with the density of the absolutely continuous component of the compound Poisson process $\mathcal{S}_{\psi }$, under the assumption that the addends $X_{j}^{\psi }$ are i.i.d. random variables, for $j=1,2,...,$ with density function
\begin{equation}
f_{X_{j}^{\psi }}(x)=k_{\alpha }x^{\nu -1}E_{\nu ,\nu }(-k_{\alpha }x^{\nu }),\qquad x\geq 0.  \label{xx}
\end{equation}
In this case we must take into account that, for $\nu <1$, the density (\ref{abden}) is infinite when $x=0$; thus we can not apply the Laplace transform formula (\ref{lt3}); however, for a function $f$ such that the Laplace transform of the derivative exists, we can write that
\begin{equation}
\mathcal{L}\{D_{x}^{\alpha ,\nu }\,f(x);\eta \}=\mathcal{L}\{\frac{d}{dx}\,f(x);\eta \}\mathcal{L}\{E_{\nu }\left( -k_{\alpha }x^{\nu }\right) ;\eta \},  \label{ll}
\end{equation}
by considering (\ref{ab}). The space-derivative of (\ref{abden}) is equal to
\begin{equation}
\frac{\partial }{\partial x}f_{\mathcal{S}_{\psi }}(x,t)=e^{-t}\sum_{n=1}^{\infty }\frac{\left( k_{\alpha }t\right) ^{n}}{n!}\sum_{j=0}^{\infty }\frac{(n)_{j}(-k_{\alpha })^{j}x^{\nu j+\nu n-2}}{j!\Gamma (\nu j+\nu n-1)}  \label{ds}
\end{equation}
and thus, by (\ref{ll}) and (\ref{ml2}), we get
\begin{eqnarray}
\mathcal{L}\{D_{x}^{\alpha ,\nu }\,f_{\mathcal{S}_{\psi }}(x);\eta \} &=&\frac{\eta ^{\nu -1}e^{-t}}{\eta ^{\nu }+k_{\alpha }}\sum_{n=1}^{\infty }\frac{\left( k_{\alpha }t\right) ^{n}}{n!}\sum_{j=0}^{\infty }\frac{(n)_{j}(-k_{\alpha })^{j}}{j!\eta ^{\nu j+\nu n-1}}  \label{ll3} \\
&=&\frac{\eta ^{\nu }e^{-t}}{\eta ^{\nu }+k_{\alpha }}\sum_{n=1}^{\infty }\frac{\left( k_{\alpha }t/\eta ^{\nu }\right) ^{n}}{n!(n-1)!}
\sum_{j=0}^{\infty }\frac{(n+j-1)!(-k_{\alpha }/\eta ^{\nu })^{j}}{j!}  \notag \\
&=&\frac{\eta ^{\nu }e^{-t}}{\eta ^{\nu }+k_{\alpha }}\left[ e^{\frac{k_{\alpha }}{\eta ^{\nu }+k_{\alpha }}t}-1\right] =\frac{\eta ^{\nu }}{\eta ^{\nu }+k_{\alpha }}\left[ e^{-\frac{\eta ^{\nu }}{\eta ^{\nu }+k_{\alpha }}t}-e^{-t}\right] .  \notag
\end{eqnarray}
Therefore equation (\ref{abeq}) is proved to hold, by considering (\ref{ll3}) together with the time-derivative of (\ref{ml3}).
\end{proof}

\begin{remark}
The previous result holds only for $\nu <1$, since formula (\ref{ds}) can be rewritten as
\begin{equation*}
\frac{\partial }{\partial x}f_{\mathcal{S}_{\psi }}(x,t)=e^{-t}\sum_{n=1}^{\infty }\frac{\left( k_{\alpha }t\right) ^{n}}{n!}\sum_{j=1}^{\infty }\frac{(n)_{j}(-k_{\alpha })^{j}x^{\nu j+\nu n-2}}{j!\Gamma (\nu j+\nu n-1)}+e^{-t}\sum_{n=1}^{\infty }\frac{\left( k_{\alpha }t\right) ^{n}}{n!}\frac{x^{\nu n-2}}{\Gamma (\nu n-1)},
\end{equation*}
where the first term of the last sum (i.e. for $n=1$) vanishes when $\nu =1.$ Therefore, in this special case, we get
\begin{eqnarray}
&&\mathcal{L}\{D_{x}^{\alpha ,1}\,f_{\mathcal{S}_{\psi }}(x);\eta \}  \label{ll4} \\
&=&\frac{\eta e^{-t}}{\eta +k_{\alpha }}\sum_{n=1}^{\infty }\frac{\left( k_{\alpha }t/\eta \right) ^{n}}{n!(n-1)!}\sum_{j=1}^{\infty }\frac{(n+j-1)!(-k_{\alpha }/\eta )^{j}}{j!}+\frac{\eta e^{-t}}{\eta +k_{\alpha }}\sum_{n=2}^{\infty }\frac{\left( k_{\alpha }t/\eta \right) ^{n}}{n!}  \notag \\
&=&\frac{\eta e^{-t}}{\eta +k_{\alpha }}\left[ e^{\frac{k_{\alpha }}{\eta +k_{\alpha }}t}-1\right] -\frac{\eta e^{-t}}{\eta +k_{\alpha }}\left[ e^{\frac{k_{\alpha }}{\eta }t}-1\right] +\frac{\eta e^{-t}}{\eta +k_{\alpha }}\left[ e^{\frac{k_{\alpha }}{\eta }t}-1-\frac{k_{\alpha }t}{\eta }\right]  \notag \\
&=&\frac{\eta }{\eta +k_{\alpha }}\left[ e^{-\frac{\eta }{\eta +k_{\alpha }}t}-e^{-t}\right] -\frac{k_{\alpha }te^{-t}}{\eta +k_{\alpha }},  \notag
\end{eqnarray}
which coincides with the Laplace transform of (\ref{uf2}), while it differs
from (\ref{ll3}) with $\nu =1$.
\end{remark}

\begin{remark}
As for the exponential case, also Theorem 3 can alternatively be proved directly as follows: the time-derivative of (\ref{abden}) reads
\begin{eqnarray}
&&\frac{\partial }{\partial t}f_{\mathcal{S}_{\psi }}(x,t)  \label{dt2} \\
&=&-\frac{e^{-t}}{x}\sum_{n=1}^{\infty }\frac{\left( k_{\alpha }tx^{\nu }\right) ^{n}}{n!}\left[ E_{\nu ,\nu n}^{n}(-k_{\alpha }x^{\nu })-k_{\alpha }x^{\nu }E_{\nu ,\nu (n+1)}^{n+1}(-k_{\alpha }x^{\nu })\right] +  \notag \\
&&+k_{\alpha }e^{-t}x^{\nu -1}E_{\nu ,\nu }(-k_{\alpha }x^{\nu })  \notag \\
&=&-\frac{e^{-t}}{x}\sum_{n=1}^{\infty }\frac{\left( k_{\alpha }tx^{\nu }\right) ^{n}}{n!}E_{\nu ,\nu n}^{n+1}(-k_{\alpha }x^{\nu })+k_{\alpha }e^{-t}x^{\nu -1}E_{\nu ,\nu }(-k_{\alpha }x^{\nu }),  \notag
\end{eqnarray}
where, in the last step, we applied formula (3.6) in \cite{BEG2}, for $m=n+1$ and $z=0.$ By considering (\ref{ds}), together with (\ref{ab}), we get
\begin{eqnarray}
&&D_{x}^{\alpha ,\nu }\,f_{\mathcal{S}_{\psi }}(x,t)  \label{ds2} \\
&=&e^{-t}\sum_{n=1}^{\infty }\frac{\left( k_{\alpha }t\right) ^{n}}{n!}
\sum_{j=0}^{\infty }\frac{(n)_{j}(-k_{\alpha })^{j}x^{\nu j+\nu n-1}}{j!\Gamma (\nu j+\nu n-1)}\sum_{l=0}^{\infty }\frac{(-k_{\alpha })^{l}x^{\nu l}}{\Gamma (\nu l+1)}\int_{0}^{1}(1-y)^{\nu j+\nu n-2}y^{\nu l}dy  \notag \\
&=&\frac{e^{-t}}{x}\sum_{n=1}^{\infty }\frac{\left( k_{\alpha }tx^{\nu }\right) ^{n}}{n!}\sum_{j=0}^{\infty }\frac{(n)_{j}}{j!}\sum_{l=0}^{\infty }\frac{(-k_{\alpha }x^{\nu })^{l+j}}{\Gamma (\nu l+\nu j+\nu n)}  \notag \\
&=&\frac{e^{-t}}{x}\sum_{n=1}^{\infty }\frac{\left( k_{\alpha }tx^{\nu }\right) ^{n}}{n!}\sum_{m=0}^{\infty }\frac{(-k_{\alpha }x^{\nu })^{m}}{\Gamma (\nu m+\nu n)}\sum_{j=0}^{m}\binom{n+j-1}{j}  \notag \\
&=&\frac{e^{-t}}{x}\sum_{n=1}^{\infty }\frac{\left( k_{\alpha }tx^{\nu }\right) ^{n}}{n!}\sum_{m=0}^{\infty }\frac{(n+1)_{m}(-k_{\alpha }x^{\nu })^{m}}{m!\Gamma (\nu m+\nu n)}=\frac{e^{-t}}{x}\sum_{n=1}^{\infty }\frac{\left( k_{\alpha }tx^{\nu }\right) ^{n}}{n!}E_{\nu ,\nu n}^{n+1}(-k_{\alpha }x^{\nu }),  \notag
\end{eqnarray}
where $(\gamma )_{j}:=\gamma (\gamma +1)...(\gamma +j-1),$ $j=0,1,...$ The last step follows from the well-known formula $\sum_{j=0}^{m}\binom{s+j}{j}=\binom{s+m+1}{m};$ by considering (\ref{ds2}) with (\ref{dt2}), we obtain (\ref{abeq}).
\end{remark}

\subsection{Incomplete-gamma kernel}

Let $\Gamma (\rho ;x):=\int_{x}^{+\infty }e^{-w}w^{\rho -1}dw$ be the upper-incomplete gamma function (which is defined for any $\rho ,x\in \mathbb{R}$ and is real-valued for $x\geq 0$). We now consider the operator defined in (\ref{ig}), for a differentiable function $f$ in $AC_{loc}\left( 0,+\infty \right) .$ It is immediate to check that, when $\rho =1$, it reduces to (\ref{cf}). Moreover, since $\Gamma \left( \rho ;0\right) =\Gamma \left( \rho \right) ,$ the kernel is equal to one in the origin and thus it is non-singular. As in the Mittag-Leffler case, the following result holds only for $\rho <1.$

\begin{theorem}
Let $\mathcal{D}_{x}^{\alpha ,\rho }$ be the convolution operator defined in (\ref{ig}).
Then the solution to the following equation, for $\rho \in (0,1),$
\begin{equation}
\frac{\partial }{\partial t}f(x,t)=-\,\mathcal{D}_{x}^{\alpha ,\rho }\,f(x,t)+\frac{k_{\alpha }^{\rho }x^{\rho -1}}{\Gamma (\rho )}e^{-t-k_{\alpha }x},\qquad x\geq 0,\;t\geq 0,\text{ }\alpha \in (0,1),  \label{ggeq}
\end{equation}
with initial condition $f(x,0)=0,$ is given by
\begin{equation}
f_{\mathcal{S}_{\psi }}(x,t)=\frac{e^{-t-k_{\alpha }x}}{x}\sum_{n=1}^{\infty }\frac{\left( k_{\alpha }^{\rho }tx^{\rho }\right) ^{n}}{n!\Gamma (\rho n)},\qquad x,t\geq 0.  \label{gg4}
\end{equation}
Moreover (\ref{gg4}) is the density of the absolutely continuous component of $\mathcal{S}_{\psi }$ defined in (\ref{ss}), for $X_{j}^{\psi }$ i.i.d. random variables with density function
\begin{equation}
f_{X_{j}^{\psi }}(x)=\frac{k_{\alpha }^{\rho }}{\Gamma (\rho )}x^{\rho -1}e^{-k_{\alpha }x},\qquad x\geq 0.  \label{gg5}
\end{equation}
\end{theorem}

\begin{proof}
We can write, analogously to the previous cases, that $\overline{\mu }(x)=\Gamma \left( \rho ;k_{\alpha }x\right) /\Gamma \left( \rho \right) $ and
\begin{eqnarray}
\psi (\eta ) &=&\eta \int\limits_{0}^{+\infty }e^{-\eta x}\frac{\Gamma \left( \rho ;k_{\alpha }x\right) }{\Gamma \left( \rho \right) }dx=\frac{\eta }{\Gamma \left( \rho \right) }\int\limits_{0}^{+\infty }e^{-\eta x}\int\limits_{k_{\alpha }x}^{+\infty }e^{-w}w^{\rho -1}dwdx  \label{psi} \\
&=&\frac{\eta k_{\alpha }^{\rho }}{\Gamma \left( \rho \right) }\int\limits_{0}^{+\infty }e^{-k_{\alpha }y}y^{\rho -1}\int\limits_{0}^{y}e^{-\eta x}dxdy=1-\frac{k_{\alpha }^{\rho }}{\left( \eta +k_{\alpha }\right) ^{\rho }}.  \notag
\end{eqnarray}
Therefore we have that
\begin{equation}
\widetilde{f}_{\mathcal{S}_{\psi }}(\eta ,t)=e^{-t+\frac{k_{\alpha }^{\rho }}{\left( \eta +k_{\alpha }\right) ^{\rho }}t}-e^{-t}.  \label{loc}
\end{equation}
By taking the inverse Laplace transform of (\ref{loc}), we get (\ref{gg4}).
The density in (\ref{gg5}) is correspondingly obtained, in view of (\ref{fr}), by inverting
\begin{equation*}
1-\psi (\eta )=\frac{k_{\alpha }^{\rho }}{\left( \eta +k_{\alpha }\right) ^{\rho }}.
\end{equation*}
As in the previous case, in order to prove that (\ref{gg4}) satisfies equation (\ref{ggeq}), we can not apply (\ref{lt3}), since also the function (\ref{gg4}) is infinite in the origin. From (\ref{gg4}), we have that
\begin{equation}
\frac{\partial }{\partial x}f_{\mathcal{S}_{\psi }}(x,t)=-\frac{k_{\alpha }e^{-t-k_{\alpha }x}}{x}\sum_{n=1}^{\infty }\frac{\left( k_{\alpha }^{\rho }tx^{\rho }\right) ^{n}}{n!\Gamma (\rho n)}+\frac{e^{-t-k_{\alpha }x}}{x^{2}}\sum_{n=1}^{\infty }\frac{\left( k_{\alpha }^{\rho }tx^{\rho }\right) ^{n}}{n!\Gamma (\rho n-1)}  \label{dx}
\end{equation}
and
\begin{equation*}
\mathcal{L}\{\frac{\partial }{\partial x}f_{\mathcal{S}_{\psi }}(x,t);\eta \}=\eta \left[ e^{-\frac{\left( \eta +k_{\alpha }\right) ^{\rho }-k_{\alpha }^{\rho }}{\left( \eta +k_{\alpha }\right) ^{\rho }}t}-e^{-t}\right] .
\end{equation*}
Then, by taking into account the analogue of (\ref{ll}) together with (\ref{psi}), we get
\begin{eqnarray}
\mathcal{L}\{\mathcal{D}_{x}^{\alpha ,\rho }\,f(x,t);\eta \} &=&\frac{1}{\Gamma \left( \rho \right) }\mathcal{L}\{\frac{\partial }{\partial x}f_{\mathcal{S}_{\psi }}(x,t);\eta \}\mathcal{L}\{\Gamma \left( \rho ;k_{\alpha }x\right) ;\eta \}  \label{uu} \\
&=&\frac{\left( \eta +k_{\alpha }\right) ^{\rho }-k_{\alpha }^{\rho }}{\left( \eta +k_{\alpha }\right) ^{\rho }}\left[ e^{-\frac{\left( \eta +k_{\alpha }\right) ^{\rho }-k_{\alpha }^{\rho }}{\left( \eta +k_{\alpha }\right) ^{\rho }}t}-e^{-t}\right] ,  \notag
\end{eqnarray}
so that (\ref{ggeq}) easily follows, by considering the time-derivative of (\ref{loc}).
\end{proof}

\begin{remark}
The previous result holds only for $\rho $ strictly smaller than $1$, since formula (\ref{dx}) can be rewritten as
\begin{equation*}
\frac{\partial }{\partial x}f_{\mathcal{S}_{\psi }}(x,t)=-\frac{k_{\alpha }e^{-t-k_{\alpha }x}}{x}\sum_{n=1}^{\infty }\frac{\left( k_{\alpha }^{\rho }tx^{\rho }\right) ^{n}}{n!\Gamma (\rho n)}+\frac{e^{-t-k_{\alpha }x}}{x^{2}}\sum_{n=2}^{\infty }\frac{\left( k_{\alpha }^{\rho }tx^{\rho }\right) ^{n}}{n!\Gamma (\rho n-1)}+\frac{k_{\alpha }^{\rho }tx^{\rho -2}e^{-t-k_{\alpha }x}}{\Gamma (\rho -1)},
\end{equation*}
where the last term vanishes when $\rho =1.$ Therefore, in this special case, we get
\begin{eqnarray*}
&&\mathcal{L}\{\mathcal{D}_{x}^{\alpha ,1}\,f_{\mathcal{S}_{\psi }}(x,t);\eta \} \\
&=&-\frac{k_{\alpha }e^{-t}}{\eta +k_{\alpha }}\sum_{n=1}^{\infty }\frac{\left( k_{\alpha }t\right) ^{n}}{n!(\eta +k_{\alpha })^{n}}+e^{-t}\sum_{n=2}^{\infty }\frac{\left( k_{\alpha }t\right) ^{n}}{n!(\eta +k_{\alpha })^{n}} \\
&=&\frac{\eta }{\eta +k_{\alpha }}\left[ e^{-\frac{\eta t}{\eta +k_{\alpha }}}-e^{-t}\right] -\frac{k_{\alpha }te^{-t}}{\eta +k_{\alpha }},
\end{eqnarray*}
which coincides with the Laplace transform of (\ref{uf2}), while it differs from (\ref{uu}) with $\rho =1$.
\end{remark}

\section{The distributed case}

We now extend the results of the previous section by generalizing the kernels to the case of random parameters and obtaining distributed-order operators. We can provide the explicit solution of the corresponding equations, at least under simplifying assumptions. We start by considering the exponential kernel in (\ref{cf}) and by assuming that $\alpha ,$ instead of being a fixed parameter, is a random variable, taking values in $(0,1)$, with a given distribution.

\begin{definition}
Let $\alpha $ be a random variable, with values in $\left( 0,1\right) $ almost surely, and let $q:=q(\alpha )$ be its density function.
Then, we define the following convolution operator
\begin{equation}
^{CF}D_{x}^{q(\alpha )}f(x):=\int\limits_{0}^{x}\frac{d}{dz}f(z)\left( \int_{0}^{1}e^{-\frac{\alpha }{1-\alpha }(x-z)}q(\alpha )d\alpha \right) dz,\qquad x>0,  \label{qq}
\end{equation}
for a differentiable function $f$ in $AC_{loc}\left( 0,+\infty \right) .$
\end{definition}

It is immediate to see that, in the special case where $q(z)=\delta (z-\alpha )$ (with $\delta (\cdot )$ denoting the Dirac delta function), formula (\ref{qq}) reduces to (\ref{cf}) (under the assumption that $B(\alpha )=1-\alpha $). We will assume hereafter that $q$ is such that $\int_{0}^{1}\alpha q(\alpha )/(1-\alpha )d\alpha <\infty .$ This assumption implies, for example, in the Beta case, i.e. for $q(\alpha )=\alpha ^{r-1}(1-\alpha )^{s-1}/B(r,s),$ $\alpha \in (0,1)$ (where $B(r,s)$ is the Beta function), that we must choose $s>1.$

\begin{theorem}
Let $^{CF}D_{x}^{q(\alpha )}$ be the convolution operator defined in (\ref{qq}); let moreover $A_{q}(x):=\int_{0}^{1}k_{\alpha }e^{-k_{\alpha }x}q(\alpha )d\alpha $, for $x\geq 0$ and $k_{\alpha }=\alpha /(1-\alpha )$, and $B_{q}(x):=\left[ \int_{0}^{1}k_{\alpha }q(\alpha )d\alpha \right] \left[ \int_{0}^{1}e^{-k_{\alpha }x}q(\alpha )d\alpha \right] $, for $x\geq 0.$ Then the solution to the following equation
\begin{equation}
\frac{\partial }{\partial t}f(x,t)=-\,^{CF}D_{x}^{q(\alpha )}\,f(x,t)+e^{-t}A_{q}(x)-te^{-t}B_{q}(x),\qquad x\geq 0,\;t\geq 0,  \label{qq2}
\end{equation}
(with initial condition $f(x,0)=0$) is given by the density of the absolutely continuous component of $\mathcal{S}_{\psi }$ defined in (\ref{ss}), for $X_{j}^{\psi }$ independent and identically distributed r.v.'s, $j=1,2...,$ with density $f_{X_{j}^{\psi }}(x)=A_{q}(x),$ $x\geq 0.$
\end{theorem}

\begin{proof}
In this case we have that $\overline{\mu }(x)=\int_{0}^{1}e^{-k_{\alpha }x}q(\alpha )d\alpha $; thus we get
\begin{equation*}
\psi (\eta )=\eta \int\limits_{0}^{+\infty }e^{-\eta x}\int_{0}^{1}e^{-k_{\alpha
}x}q(\alpha )d\alpha dx=\int_{0}^{1}\frac{\eta }{k_{\alpha }+\eta }q(\alpha )d\alpha
\end{equation*}
and
\begin{equation}
\widetilde{f}_{\mathcal{S}_{\psi }}(\eta ,t)=e^{-t\int_{0}^{1}\frac{\eta }{k_{\alpha }+\eta }q(\alpha )d\alpha }-e^{-t}.  \label{qq3}
\end{equation}
The density $f_{X_{j}^{\psi }}$ is obtained, in view of (\ref{fr}), by the inversion of $1-\psi (\eta )=\int_{0}^{1}\frac{k_{\alpha }}{k_{\alpha }+\eta }q(\alpha )d\alpha ,$ which gives $f_{X_{j}^{\psi }}(x)=\int_{0}^{1}k_{\alpha }e^{-k_{\alpha }x}q(\alpha )d\alpha .$ Even if, without any specific assumption on $q(\alpha )$, we can not write an explicit form for the density $f_{\mathcal{S}_{\psi }}$ (since (\ref{qq3}) can not be inverted), we can nevertheless obtain its value in zero. Indeed, by (\ref{lp}) and by considering that $f_{X^{\psi }}^{\ast (n)}(0)=0$, for $n>1,$ while $f_{X^{\psi }}^{\ast (n)}(0)=\int_{0}^{1}k_{\alpha }q(\alpha )d\alpha $, for $n=1$, we get
\begin{equation}
f_{\mathcal{S}_{\psi }}(0,t)=te^{-t}\int_{0}^{1}k_{\alpha }q(\alpha )d\alpha ,\qquad t\geq 0.  \label{qqo}
\end{equation}
By definition (\ref{qq}) we have that
\begin{eqnarray*}
&&\mathcal{L}\{^{CF}D_{x}^{q(\alpha )}\,f(x);\eta \}=\mathcal{L}\{\int_{0}^{1}e^{-k_{\alpha }x}q(\alpha )d\alpha ;\eta \}\mathcal{L}\{\frac{d}{dx}f(x);\eta \} \\
&=&\int_{0}^{1}\frac{q(\alpha )}{k_{\alpha }+\eta }d\alpha \left[ \eta \widetilde{f}(\eta )-f(0)\right] .
\end{eqnarray*}
In view of (\ref{qq3}) and (\ref{qqo}), we can write
\begin{eqnarray}
&&\mathcal{L}\{^{CF}D_{x}^{q(\alpha )}\,f(x,t);\eta \}  \label{qqd} \\
&=&\int_{0}^{1}\frac{\eta q(\alpha )}{k_{\alpha }+\eta }d\alpha \left[ e^{-t\int_{0}^{1}\frac{\eta }{k_{\alpha }+\eta }q(\alpha )d\alpha }-e^{-t}\right] -te^{-t}\left[ \int_{0}^{1}\frac{q(\alpha )}{k_{\alpha }+\eta }d\alpha \right] \left[ \int_{0}^{1}k_{\alpha }q(\alpha )d\alpha \right] .  \notag
\end{eqnarray}
By comparing (\ref{qqd}) with the time-derivative of (\ref{qq3}), we easily prove that (\ref{qq2}) holds.
\end{proof}

For $q(z)=\delta (z-\alpha )$, we obtain that $A_{q}(x)=B_{q}(x)=k_{\alpha }e^{-k_{\alpha }x}$ and the previous result coincides with that presented in Theorem 1. In another special case, i.e. for $q(z)=q_{1}\delta (z-\alpha _{1})+q_{2}\delta (z-\alpha _{2})$, for $0<\alpha _{1}<\alpha _{2}<1$ and $q_{1},q_{2}\in \lbrack 0,1]$ such that $q_{1}+q_{2}=1,$ we can obtain an explicit form of the density $f_{\mathcal{S}_{\psi }},$ which generalizes (\ref{den}).

\begin{corollary}
Let $^{CF}D_{x}^{\alpha _{1},\alpha _{2}}$ be the convolution operator defined in (\ref{qq}) under the assumption that $q(z)=q_{1}\delta (z-\alpha _{1})+q_{2}\delta (z-\alpha _{2})$, $z\geq 0,$ for $0<\alpha _{1}<\alpha _{2}<1$ and $q_{1},q_{2}\in \lbrack 0,1]$ such that $q_{1}+q_{2}=1.$ Then the solution to the following equation
\begin{eqnarray}
\frac{\partial }{\partial t}f(x,t) &=&-\,^{CF}D_{x}^{\alpha _{1},\alpha _{2}}\,f(x,t)+[q_{1}k_{\alpha _{1}}e^{-t-k_{\alpha _{1}}x}+q_{2}k_{\alpha _{2}}e^{-t-k_{\alpha _{2}}x}]+  \label{qqeq} \\
&&-t(q_{1}k_{\alpha _{1}}+q_{2}k_{\alpha _{2}})[q_{1}e^{-t-k_{\alpha _{1}}x}+q_{2}e^{-t-k_{\alpha _{2}}x}],  \notag
\end{eqnarray}
$x\geq 0,\;t\geq 0$ (with initial condition $f(x,0)=0$), is given by
\begin{equation}
f_{\mathcal{S}_{\psi }}(x,t)=\frac{e^{-t-k_{\alpha _{2}}x}}{x}\sum_{n=1}^{\infty }\frac{(q_{2}k_{\alpha _{2}}xt)^{n}}{n!}\sum_{j=0}^{n}\binom{n}{j}\left( \frac{q_{1}k_{\alpha _{1}}}{q_{2}k_{\alpha _{2}}}\right) ^{j}E_{1,n}^{j}((k_{\alpha _{2}}-k_{\alpha _{1}})x),\qquad x\geq 0.  \label{qqr}
\end{equation}
Moreover (\ref{qqr}) is the density of the absolutely continuous component of $\mathcal{S}_{\psi }$ defined in (\ref{ss}), for $X_{j}^{\psi }$ independent and identically distributed r.v.'s, $j=1,2...,$ with density $f_{X_{j}^{\psi }}(x)=q_{1}k_{\alpha _{1}}e^{-k_{\alpha _{1}}x}+q_{2}k_{\alpha _{2}}e^{-k_{\alpha _{2}}x},$ $x\geq 0.$
\end{corollary}

\begin{proof}
Equation (\ref{qqeq}) is obtained as a special case of (\ref{qq2}).
We only need to prove that (\ref{qqr}) satisfies equation (\ref{qqeq}), by checking that its Laplace transform coincides with (\ref{qq3}), as follows
\begin{eqnarray*}
\widetilde{f}_{\mathcal{S}_{\psi }}(\eta ,t) &=&e^{-t}\sum_{n=1}^{\infty }\frac{(q_{2}k_{\alpha _{2}}t)^{n}}{n!}\sum_{j=0}^{n}\binom{n}{j}\left( \frac{q_{1}k_{\alpha _{1}}}{q_{2}k_{\alpha _{2}}}\right) ^{j}\int_{0}^{+\infty }e^{-(k_{\alpha _{2}}+\eta )x}x^{n-1}E_{1,n}^{j}((k_{\alpha _{2}}-k_{\alpha _{1}})x)dx \\
&=&e^{-t}\sum_{n=1}^{\infty }\frac{(q_{2}k_{\alpha _{2}}t)^{n}}{n!}\sum_{j=0}^{n}\binom{n}{j}\left( \frac{q_{1}k_{\alpha _{1}}}{q_{2}k_{\alpha _{2}}}\right) ^{j}\frac{(k_{\alpha _{2}}+\eta )^{j-n}}{(k_{\alpha _{1}}+\eta )^{j}} \\
&=&e^{-t}\sum_{n=1}^{\infty }\frac{(q_{2}k_{\alpha _{2}}t/(k_{\alpha _{1}}+\eta ))^{n}}{n!}\left( \frac{q_{1}k_{\alpha _{1}}}{q_{2}k_{\alpha _{2}}}+\frac{k_{\alpha _{1}}+\eta }{k_{\alpha _{2}}+\eta }\right) ^{n} \\
&=&e^{-t}\left[ \exp \left\{ \frac{k_{\alpha _{1}}k_{\alpha _{2}}+\eta (q_{1}k_{\alpha _{1}}+q_{2}k_{\alpha _{2}})}{(k_{\alpha _{1}}+\eta )(k_{\alpha _{2}}+\eta )}t\right\} -1\right] .
\end{eqnarray*}
The previous expression coincides with (\ref{qq3}), by the assumption on $q$ and by considering that $q_{1}+q_{2}=1$.
\end{proof}

In the case of the Mittag-Leffler kernel, we generalize the operator (\ref{ab}) to the distributed case, as follows.

\begin{definition}
Let $\nu $ be a random variable, with values in $\left( 0,1\right) $ almost surely, and let $q:=q(\nu )$ be its density function.
Then, we define the following convolution operator
\begin{equation} D_{x}^{\alpha ,q(\nu )}f(x):=\int\limits_{0}^{x}\frac{d}{dz}f(z)\left[ \int_{0}^{1}E_{\nu }\left( -k_{\alpha }(x-z)^{\nu }\right) q(\nu )d\nu \right] dz,\qquad x>0,\text{ }\alpha \in (0,1), \label{nu} \end{equation}
for a differentiable function $f$ in $AC_{loc}\left( 0,+\infty \right) .$ \end{definition} \begin{theorem} Let $D_{x}^{\alpha ,q(\nu )}$ be the convolution operator defined in (\ref{nu}); then the solution to the following equation
\begin{equation} \frac{\partial }{\partial t}f(x,t)=-\,D_{x}^{\alpha ,q(\nu )}\,f(x,t)+k_{\alpha }e^{-t}\int_{0}^{1}x^{\nu -1}E_{\nu ,\nu }(-k_{\alpha }x^{\nu })q(\nu )d\nu ,\qquad x\geq 0,t\geq 0, \label{eq3} \end{equation}
(with initial condition $f(x,0)=0$), is given by
\begin{equation} f_{\mathcal{S}_{\psi }}(x,t)=\frac{e^{-t}}{x}\sum_{n=1}^{\infty }\frac{\left( k_{\alpha }t\right) ^{n}}{n!}\int_{0}^{1}x^{\nu n}E_{\nu ,\nu n}^{n}(-k_{\alpha }x^{\nu })q(\nu )d\nu . \label{den3} \end{equation}
Moreover, (\ref{den3}) is the density of the absolutely continuous component of $\mathcal{S}_{\psi }$ defined in (\ref{ss}), for $X_{j}^{\psi }$ independent and identically distributed r.v.'s, $j=1,2,\ldots$, with density $f_{X_{j}^{\psi }}(x)=k_{\alpha }\int_{0}^{1}x^{\nu -1}E_{\nu ,\nu }\left( -k_{\alpha }x^{\nu }\right) q(\nu )d\nu ,$ $x\geq 0.$ \end{theorem} \begin{proof} Following the same lines of the non-distributed case, we can write that
\begin{equation} \widetilde{f}_{\mathcal{S}_{\psi }}(\eta ,t)=e^{-t\int_{0}^{1}\frac{\eta ^{\nu }}{\eta ^{\nu }+k_{\alpha }}q(\nu )d\nu }-e^{-t} \notag \end{equation}
and \begin{equation*} \mathcal{L}\{D_{x}^{\alpha ,q(\nu )}\,f_{\mathcal{S}_{\psi }}(x,t);\eta \}=\int_{0}^{1}\frac{\eta ^{\nu }}{\eta ^{\nu }+k_{\alpha }}q(\nu )d\nu \left[ e^{-t\int_{0}^{1}\frac{\eta ^{\nu }}{\eta ^{\nu }+k_{\alpha }}q(\nu )d\nu }-e^{-t}\right] , \end{equation*}
so that equation (\ref{eq3}) easily follows.
\end{proof} As far as the incomplete-gamma kernel case is concerned, we can extend the results of Section 2.3 by considering the operator defined in the following \begin{definition} Let $\rho $ be a random variable, with values in $\left( 0,1\right) $ almost surely, and let $q:=q(\rho )$ be its density function. Then, we define the following convolution operator
\begin{equation} \mathcal{D}_{x}^{\alpha ,q(\rho )}f(x):=\int\limits_{0}^{x}\frac{d}{dz}f(z)\left( \int_{0}^{1}\frac{\Gamma \left( \rho ;k_{\alpha }(x-z)\right) }{\Gamma (\rho )}q(\rho )d\rho \right) dz,\qquad x>0,\text{ }\alpha \in (0,1), \label{qr} \end{equation}
for a differentiable function $f$ in $AC_{loc}\left( 0,+\infty \right) .$ \end{definition} Then, in this case, we prove the following \begin{theorem} Let $\mathcal{D}_{x}^{\alpha ,q(\rho )}$ be the convolution operator (\ref{qr}); then the solution to the following equation
\begin{equation} \frac{\partial }{\partial t}f(x,t)=-\,\mathcal{D}_{x}^{\alpha ,q(\rho )}\,f(x,t)+e^{-t-k_{\alpha }x}\int_{0}^{1}\frac{k_{\alpha }^{\rho }x^{\rho -1}}{\Gamma (\rho )}q(\rho )d\rho ,\qquad x\geq 0,t\geq 0, \end{equation}
(with initial condition $f(x,0)=0$), is given by the density of the absolutely continuous component of $\mathcal{S}_{\psi }$ defined in (\ref{ss}), for $X_{j}^{\psi }$ independent and identically distributed r.v.'s, $j=1,2,\ldots$, with density $f_{X_{j}^{\psi }}(x)=e^{-k_{\alpha }x}\int_{0}^{1}\frac{k_{\alpha }^{\rho }x^{\rho -1}}{\Gamma (\rho )}q(\rho )d\rho ,$ $x\geq 0.$ \end{theorem} \begin{proof} Analogously to the non-distributed case, we can write that
\begin{equation} \widetilde{f}_{\mathcal{S}_{\psi }}(\eta ,t)=e^{-t\int_{0}^{1}\frac{\left( \eta +k_{\alpha }\right) ^{\rho }-k_{\alpha }^{\rho }}{\left( \eta +k_{\alpha }\right) ^{\rho }}q(\rho )d\rho }-e^{-t} \notag \end{equation}
and \begin{equation*} \mathcal{L}\{\mathcal{D}_{x}^{\alpha ,q(\rho )}\,f_{\mathcal{S}_{\psi }}(x,t);\eta \}=\int_{0}^{1}\frac{\left( \eta +k_{\alpha }\right) ^{\rho
}-k_{\alpha }^{\rho }}{\left( \eta +k_{\alpha }\right) ^{\rho }}q(\rho )d\rho \left[ e^{-t\int_{0}^{1}\frac{\left( \eta +k_{\alpha }\right) ^{\rho }-k_{\alpha }^{\rho }}{\left( \eta +k_{\alpha }\right) ^{\rho }}q(\rho )d\rho }-e^{-t}\right] . \end{equation*}
On the other hand, \begin{equation*} \frac{\partial }{\partial t}\widetilde{f}_{\mathcal{S}_{\psi }}(\eta ,t)=-e^{-t\int_{0}^{1}\frac{\left( \eta +k_{\alpha }\right) ^{\rho }-k_{\alpha }^{\rho }}{\left( \eta +k_{\alpha }\right) ^{\rho }}q(\rho )d\rho }\int_{0}^{1}\frac{\left( \eta +k_{\alpha }\right) ^{\rho }-k_{\alpha }^{\rho }}{\left( \eta +k_{\alpha }\right) ^{\rho }}q(\rho )d\rho +e^{-t}. \end{equation*} \end{proof} As for the exponential kernel case, when $q(z)=q_{1}\delta (z-\rho _{1})+q_{2}\delta (z-\rho _{2})$, for $0<\rho _{1}<\rho _{2}<1,$ we can obtain an explicit form of the density $f_{\mathcal{S}_{\psi }},$ which generalizes (\ref{gg4}). \begin{corollary} Let $\mathcal{D}_{x}^{\alpha ,q(\rho )}$ be the convolution operator defined in (\ref{qr}) under the assumption that $q(z)=q_{1}\delta (z-\rho _{1})+q_{2}\delta (z-\rho _{2})$, for $0<\rho _{1}<\rho _{2}<1$ and $q_{1},q_{2}\in \lbrack 0,1]$ such that $q_{1}+q_{2}=1$. Then the solution to the following equation
\begin{equation} \frac{\partial }{\partial t}f(x,t)=-\,\mathcal{D}_{x}^{\alpha ,q(\rho )}\,f(x,t)+e^{-t-k_{\alpha }x}\left[ \frac{q_{1}k_{\alpha }^{\rho _{1}}x^{\rho _{1}-1}}{\Gamma (\rho _{1})}+\frac{q_{2}k_{\alpha }^{\rho _{2}}x^{\rho _{2}-1}}{\Gamma (\rho _{2})}\right] , \notag \end{equation}
$x\geq 0,t\geq 0,$ (with initial condition $f(x,0)=0$), is given by
\begin{equation} f_{\mathcal{S}_{\psi }}(x,t)=\frac{e^{-t-k_{\alpha }x}}{x}\sum_{l=0}^{\infty }\frac{(q_{1}k_{\alpha }^{\rho _{1}}x^{\rho _{1}}t)^{l}}{l!}W_{\rho _{2},\rho _{1}l}(q_{2}k_{\alpha }^{\rho _{2}}x^{\rho _{2}}t),\qquad x\geq 0.
\label{ggr} \end{equation}
Moreover, (\ref{ggr}) is the density of the absolutely continuous component of $\mathcal{S}_{\psi }$ defined in (\ref{ss}), for $X_{j}^{\psi }$ independent and identically distributed r.v.'s, $j=1,2,\ldots$, with density $f_{X_{j}^{\psi }}(x)=\frac{q_{1}k_{\alpha }^{\rho _{1}}x^{\rho _{1}-1}}{\Gamma (\rho _{1})}+\frac{q_{2}k_{\alpha }^{\rho _{2}}x^{\rho _{2}-1}}{\Gamma (\rho _{2})},$ $x\geq 0.$ \end{corollary} \begin{proof} We omit the details of the calculations, which retrace those of Corollary 10. \end{proof} \section{Risk-theory applications and concluding remarks} So far we have described the interplay between integro-differential equations based on Caputo-like operators (with non-singular kernels) and the densities of the corresponding stochastic processes. We proved that, ranging from exponential to Mittag-Leffler or incomplete-gamma kernels, we obtain compound Poisson processes with very different jump distributions. In particular, passing from the Caputo-Fabrizio operator to the Atangana-Baleanu one, we even lose the finiteness of all the moments of the jumps $X_{j}^{\psi },$ $j=1,2,\ldots$ Indeed, in the exponential case, $\mathbb{E}\mathcal{S}_{\psi }=\mathbb{E}X_{j}^{\psi }=1/k_{\alpha }=(1-\alpha )/\alpha ,$ as can be easily checked by applying the Wald formula and considering that the Poisson rate is unitary. Analogously, in the distributed-order exponential case (i.e. with the operator defined in Def.8), the mean value of $\mathcal{S}_{\psi }$ is equal to $\mathbb{E}\left( (1-\alpha )/\alpha \right) $. On the other hand, due to the well-known power-law behavior of the Mittag-Leffler distribution given in (\ref{xx}), the expected values of $\mathcal{S}_{\psi }$ and $X_{j}^{\psi }$ are both infinite and the same holds for the distributed-order counterpart, obtained under Def.11.
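Before moving to the risk-theory application, a numerical sanity check of the Mittag-Leffler ingredients above may be useful. The Python sketch below (the parameter values $\nu =0.7$, $k_{\alpha }=1$ are illustrative assumptions, not taken from the text) evaluates the Laplace transform of the Mittag-Leffler jump density $f_{X_{j}^{\psi }}(x)=k_{\alpha }x^{\nu -1}E_{\nu ,\nu }(-k_{\alpha }x^{\nu })$ by truncated series and Simpson quadrature, and compares it with the closed form $k_{\alpha }/(\eta ^{\nu }+k_{\alpha })$; the divergence of the derivative of this transform at $\eta =0$ is precisely why the mean is infinite.

```python
import math

def ml2(nu, beta, z, terms=100):
    # two-parameter Mittag-Leffler function E_{nu,beta}(z) by truncated series
    return sum(z**m / math.gamma(nu * m + beta) for m in range(terms))

def laplace_ml_density(eta, nu, k, upper=15.0, steps=2000):
    # L{ k x^{nu-1} E_{nu,nu}(-k x^nu) }(eta): the substitution u = x^nu
    # removes the integrable singularity of the density at x = 0, giving
    #   (k/nu) * int_0^{upper^nu} exp(-eta u^{1/nu}) E_{nu,nu}(-k u) du
    U = upper ** nu
    h = U / steps
    def g(u):
        return math.exp(-eta * u ** (1.0 / nu)) * ml2(nu, nu, -k * u)
    s = g(0.0) + g(U)                     # composite Simpson rule (steps even)
    for i in range(1, steps):
        s += (4 if i % 2 else 2) * g(i * h)
    return (k / nu) * (h / 3) * s

nu, k = 0.7, 1.0
for eta in (1.0, 2.0):
    print(eta, laplace_ml_density(eta, nu, k), k / (eta ** nu + k))
```

The truncation at $x=15$ is harmless here because of the exponential weight $e^{-\eta x}$ in the transform.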
Finally, the incomplete-gamma case can be considered intermediate between the previous ones: indeed, the mean value is $\mathbb{E}\mathcal{S}_{\psi }=\mathbb{E}X_{j}^{\psi }=\rho /k_{\alpha }$ (or $\mathbb{E}(\rho /k_{\alpha }) $ in the distributed case) and thus differs from the exponential case only by the constant $\rho .$ Nevertheless, the behavior of the density (\ref{gg5}) at the origin is completely different from the exponential one, since it tends to infinity, as in the Mittag-Leffler case. All the previous results can be applied to a continuous-time risk model. If we define the risk reserve process
\begin{equation} R(t):=a+\beta t-\sum_{j=1}^{N(t)}X_{j}^{\psi }, \label{rt} \end{equation}
where $a\geq 0$ is the initial risk reserve, $\beta >0$ is the premium collection rate and $X_{j}^{\psi },$ for $j=1,2,\ldots,$ are the claims occurring according to the Poisson process, then we have that $R(t)=a+\beta t-S_{\psi }(t).$ If we denote the expected claim size by $\mu $ (i.e. $\mu :=\mathbb{E}X_{j}^{\psi }$), it is evident that, in order to fulfill the net profit condition $\beta >\mu $ (considering that our Poisson rate is equal to $1$), we must discard the case of Mittag-Leffler distributed claim sizes, since the expected value of (\ref{xx}) is infinite. In the other cases, by considering (\ref{rt}) together with (\ref{lap3}), we can obtain the differential equation satisfied by the moment generating function of $R(t),t\geq 0,$ when the claim size has distribution with Laplace transform given in (\ref{fr}).
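The risk reserve process (\ref{rt}) is also straightforward to simulate directly. The following Monte Carlo sketch (all parameter values are illustrative assumptions) uses incomplete-gamma, i.e. Gamma$(\rho ,k_{\alpha })$, claim sizes and unit-rate Poisson arrivals, and checks the Wald-formula mean $\mathbb{E}R(t)=a+\beta t-t\rho /k_{\alpha }$ implied by the discussion above.

```python
import math
import random

random.seed(7)

def poisson(lam):
    # Knuth's multiplication method; adequate for moderate lam
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p < L:
            return k
        k += 1

def reserve(a, beta, t, rho, k_alpha):
    # R(t) = a + beta*t - sum of N(t) Gamma(rho, scale 1/k_alpha) claims
    n = poisson(t)                                 # unit-rate arrivals on [0, t]
    claims = sum(random.gammavariate(rho, 1.0 / k_alpha) for _ in range(n))
    return a + beta * t - claims

a, beta, t, rho, k_alpha = 2.0, 1.0, 5.0, 0.5, 1.0
samples = [reserve(a, beta, t, rho, k_alpha) for _ in range(200_000)]
mc_mean = sum(samples) / len(samples)
wald_mean = a + beta * t - t * rho / k_alpha       # Wald: E S(t) = t*rho/k_alpha
print(mc_mean, wald_mean)
```

With these values the net profit condition $\beta >\rho /k_{\alpha }$ holds, so the expected reserve grows linearly in $t$.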
By taking the first derivative (with respect to the time variable) of
\begin{eqnarray} \Phi _{R}(\eta ,t) &:=&\mathbb{E}e^{\eta R(t)}=e^{\eta (a+\beta t)}\mathbb{E}e^{-\eta \sum_{j=1}^{N(t)}X_{j}^{\psi }} \label{rt3} \\ &=&e^{\eta (a+\beta t)}\widetilde{f}_{S_{\psi }}(\eta ,t), \notag \end{eqnarray}
we get
\begin{eqnarray} \frac{\partial }{\partial t}\Phi _{R}(\eta ,t) &=&\beta \eta \Phi _{R}(\eta ,t)+e^{\eta (a+\beta t)}\frac{\partial }{\partial t}\widetilde{f}_{S_{\psi }}(\eta ,t) \label{rt2} \\ &=&\beta \eta \Phi _{R}(\eta ,t)+e^{\eta (a+\beta t)}\left[ e^{-t}-\psi (\eta )e^{-\psi (\eta )t}\right] \notag \\ &=&[\beta \eta -\psi (\eta )]\Phi _{R}(\eta ,t)+e^{\eta (a+\beta t)-t}\left[ 1-\psi (\eta )\right] , \notag \end{eqnarray}
which is satisfied by (\ref{rt3}). It is evident from the first line of (\ref{rt2}) that also the differential equation governing the risk reserve process can be expressed in terms of the convolution operator $\mathcal{D}_{x}^{\psi }$ treated here.
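The closed-form expression (\ref{rt3}) can be checked against the differential equation (\ref{rt2}) numerically. The sketch below does so by central finite differences in the Caputo-Fabrizio case $\psi (\eta )=\eta /(\eta +k_{\alpha })$, with $\widetilde{f}_{S_{\psi }}(\eta ,t)=e^{-t\psi (\eta )}-e^{-t}$; all parameter values are illustrative.

```python
import math

def psi(eta, k):
    # exponential-kernel (Caputo-Fabrizio) case: psi(eta) = eta/(eta + k)
    return eta / (eta + k)

def Phi(eta, t, a, beta, k):
    # Phi_R(eta, t) = exp(eta(a + beta t)) * (exp(-t psi(eta)) - exp(-t))
    return math.exp(eta * (a + beta * t)) * (math.exp(-t * psi(eta, k)) - math.exp(-t))

def ode_residual(eta, t, a, beta, k, h=1e-5):
    # |d/dt Phi - ([beta*eta - psi] Phi + exp(eta(a+beta t) - t)(1 - psi))|
    lhs = (Phi(eta, t + h, a, beta, k) - Phi(eta, t - h, a, beta, k)) / (2 * h)
    rhs = (beta * eta - psi(eta, k)) * Phi(eta, t, a, beta, k) \
        + math.exp(eta * (a + beta * t) - t) * (1 - psi(eta, k))
    return abs(lhs - rhs)

print(ode_residual(0.3, 2.0, 1.0, 1.2, 1.0))
```

The residual is at the level of the finite-difference error, consistent with (\ref{rt3}) solving (\ref{rt2}).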
\section{Introduction}\label{sec:intro} \IEEEPARstart{W}{e} consider a network of \( N \) agents. Each agent \( k \) is equipped with a local, stochastic cost of the form \( J_k(w) = \E_x Q_k(w; \x_k) \), where \( w \in \mathds{R}^M \) denotes a parameter vector and \( \x_k \) denotes random data. In Part I~\cite{Vlaski19nonconvexP1}, we considered a global optimization problem of the form: \begin{equation}\label{eq:global_problem} \min_w J(w),\ \ \ \ \ \ \ \ \mathrm{where}\ J(w) \triangleq \sum_{k=1}^{N} p_k J_k(w) \end{equation} where the weights \( p_k \) are a function of the combination weights \( a_{\ell k} \) and will be specified further below in~\eqref{eq:perron}. Solutions to such problems via distributed strategies can be pursued through a variety of algorithms, including those of the consensus and diffusion type~\cite{Nedic09, Sayed14, Chen15transient, Xin19, Yuan16, Shi15, Yuan18}. In Part I~\cite{Vlaski19nonconvexP1}, we studied the diffusion strategy due to its proven enhanced performance in adaptive environments in response to streaming data and drifting conditions~\cite{Tu12, Sayed14}. The strategy takes the form: \begin{subequations} \begin{align} \boldsymbol{\phi}_{k,i} &= \w_{k,i-1} - \mu \widehat{\nabla J}_{k}(\w_{k,i-1})\label{eq:adapt}\\ \w_{k,i} &= \sum_{\ell=1}^{N} a_{\ell k} \boldsymbol{\phi}_{\ell,i}\label{eq:combine} \end{align} \end{subequations} Note that the gradient step~\eqref{eq:adapt} employs a \emph{stochastic} gradient approximation \( \widehat{\nabla J}_{k}(\w_{k,i-1}) \), rather than the true gradient \( {\nabla J}_{k}(\w_{k,i-1}) \). The random approximation of the true gradient based on sampled data introduces persistent gradient noise, which seeps into the evolution of the algorithm.
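As a concrete, purely illustrative sketch of the recursion~\eqref{eq:adapt}--\eqref{eq:combine}, consider a hypothetical three-agent network with scalar quadratic costs \( J_k(w) = \frac{1}{2}(w - \theta_k)^2 \) and additive Gaussian gradient noise. Since the chosen \( A \) is doubly stochastic, the Perron weights are uniform and the aggregate minimizer is the average of the \( \theta_k \); all parameter values are assumptions made for the demonstration.

```python
import random

random.seed(0)

# hypothetical 3-agent network; A is doubly stochastic, so p_k = 1/3 and
# the aggregate minimizer w* is the plain average of the local minimizers
A = [[0.5, 0.25, 0.25],
     [0.25, 0.5, 0.25],
     [0.25, 0.25, 0.5]]
theta = [-1.0, 0.0, 2.0]          # local minimizers of J_k(w) = (w - theta_k)^2 / 2
w_star = sum(theta) / len(theta)
mu, sigma = 0.02, 0.1
w = [0.0] * 3

for _ in range(5000):
    # adapt: stochastic gradient step, hat{grad} J_k(w) = (w - theta_k) + noise
    phi = [w[k] - mu * ((w[k] - theta[k]) + random.gauss(0.0, sigma))
           for k in range(3)]
    # combine: w_k = sum_l a_{lk} phi_l
    w = [sum(A[l][k] * phi[l] for l in range(3)) for k in range(3)]

print(w, w_star)
```

After the transient, all iterates hover in a small neighborhood of \( w^\star \), despite no agent minimizing \( J \) on its own.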
A commonly employed construction is \( \widehat{\nabla J}_{k}(\w_{k,i-1}) = {\nabla Q}_{k}(\w_{k,i-1}; \x_k) \){; nevertheless, we consider general stochastic gradient approximations \( \widehat{\nabla J}_{k}(\w_{k,i-1}) \) under suitable conditions on the induced gradient noise process (Assumptions~\ref{as:gradientnoise} and~\ref{as:noise_in_saddle} further ahead)}. Prior works have studied the dynamics of the diffusion strategy~\eqref{eq:adapt}--\eqref{eq:combine} and examined the implications of the gradient noise term in the \emph{{strongly}-convex} setting~\cite{Sayed14, Chen15transient, Chen15performance}. In particular, it has been shown that despite the presence of gradient noise, the iterates \( \w_{k, i} \) will approach the global solution \( w^{\star} \triangleq \argmin_w J(w) \) to the problem~\eqref{eq:global_problem} in the mean-square-error sense, namely it will hold that \( \limsup_{i \to \infty} \E {\| w^{\star} - \w_{k, i} \|}^2 = O(\mu) \). In Part I~\cite{Vlaski19nonconvexP1} we showed that many of the desirable properties of the diffusion algorithm continue to hold in the more challenging non-convex setting. We established that all agents will cluster around a common network centroid after sufficient iterations and established expected descent of the network centroid in the large-gradient regime. In this part of the work we establish that the diffusion strategy is able to escape strict-saddle points and return second-order stationary points in polynomial time. \subsection{Related Works} A general discussion on decentralized algorithms for optimization and learning~\cite{Nedic09, Sayed14, Chen15transient, Xin19, Yuan16, Shi15, Yuan18, Nassif16, Towfic15, Ying18} can be found in Part I~\cite{Vlaski19nonconvexP1}. In this section, we focus on works studying the ability of algorithms to escape strict saddle-points and reach second-order stationary points, which is the focus of this part. 
The desire to obtain guarantees for the escape from saddle-points is motivated by the observation that in many problems of interest, such as neural networks, saddle-points can correspond to bottlenecks of the optimization problem. As such, guarantees of convergence to first-order stationary points, i.e., points where the norm of the gradient is small, need not be sufficient to establish good performance. For this reason, there has been interest in the guarantee of convergence to second-order stationary points. Approximate second-order stationary points, like first-order stationary points, are required to have a small gradient norm, but are also restricted in terms of the smallest eigenvalues of their Hessian matrices. Works that study the ability of gradient descent algorithms to escape strict saddle-points {can broadly be classified into two approaches. The first class is based on the fact that there is at least one direction of descent at every saddle-point and leverages either second-order information~\cite{Nesterov06} or first-order strategies~\cite{Fang18, Allen18neon, Allen18natasha} to identify a negative-curvature descent direction. Our work falls into a second class of strategies, which exploit the fact that \emph{strict} saddle-points (defined later) are unstable in the sense that small perturbations allow for the iterates to escape from the saddle point almost surely.} Along these lines, it has been shown in~\cite{Lee16} that under an appropriately chosen random initialization scheme, the gradient descent algorithm converges to minimizers almost surely. The work~\cite{Scutari18} further leveraged this fact to establish that distributed gradient descent with appropriately chosen initialization escapes saddle points.
When subjected to persistent, but diminishing perturbations, known as annealing, \emph{asymptotic} almost sure convergence to global minimizers of gradient descent-type algorithms has also been established in the centralized~\cite{Gelfand91} and more recently in the distributed setting~\cite{Swenson19}. All these useful results, while powerful in theory, still do not provide a guarantee that the procedures are efficient in the sense that they would return accurate solutions after a finite number of iterations. Actually, despite the fact that gradient descent with random initialization escapes saddle-points almost surely~\cite{Lee16}, it has been established that this process can take exponentially long~\cite{Du17}, rendering the procedure impractical. These observations have sparked interest in the design of methods that have the ability to escape saddle-points \emph{efficiently}, where efficiency is loosely defined as yielding success in polynomial, rather than exponential time. The authors in~\cite{Ge15} add persistent, i.i.d. perturbations to the exact gradient descent algorithm and establish polynomial escape from saddle-points, while the work~\cite{Jin17} adds perturbations only when the presence of a saddle-point is detected. It is important to note that in most of these works, perturbations or random initializations are selected and introduced with the explicit purpose of allowing the algorithm to escape from unstable stationary points. For example, random initialization is followed by exact gradient updates in the works~\cite{Lee16, Scutari18}, while the perturbations in~\cite{Jin17} are applied only when a saddle-point is detected via the norm of the gradient. All of these techniques still require knowledge of the \emph{exact gradient}. While the authors of~\cite{Ge15} consider persistent gradient perturbations, these are nevertheless assumed to be independently and identically distributed.
Motivated by these considerations, in this work, we focus on implementations that employ \emph{stochastic} gradient approximations and \emph{constant} step-sizes. This is driven by the fact that computation of the exact gradients \( \nabla J_k(\cdot) \) is generally infeasible in practice because (a) data may be streaming in, making it impossible to compute \( \nabla \E_{x_k} Q_k(\cdot; \x_k) \) in the absence of knowledge about the distribution of the data or (b) the data set, while available as a batch, may be so large that efficient computation of the full gradient is infeasible. As such, the exact gradient will need to be replaced by an approximate \emph{stochastic} gradient, which ends up introducing in a natural manner some form of \emph{gradient noise} into the operation of the algorithm; this noise is the difference between the true gradient and its approximation. The gradient noise seeps into the operation of the algorithm continually and becomes coupled with the evolution of the iterates, resulting in perturbations that are neither identically nor independently distributed over time. For instance, the presence of the gradient noise process complicates the dynamics of the iterate evolution relative to the centralized recursions considered in~\cite{Ge15}. There have been some recent works that study {\em stochastic} gradient scenarios as well. However, these methods alter the gradient updates in specific ways or require the gradient noise to satisfy particular conditions. For example, the work~\cite{Jin19} proposes the addition of Gaussian noise to the naturally occurring gradient noise, while the authors of~\cite{HadiDaneshmand18} leverage alternating step-sizes. The works~\cite{Fang18, Allen18neon, Allen18natasha} introduce an intermediate negative-curvature-search step. All of these works alter the traditional stochastic gradient algorithm in order to ensure efficient escape from saddle-points.
The work~\cite{Fang19} studies the traditional stochastic gradient algorithm under a dispersive noise assumption. {The key contributions of this work are three-fold. First, to the best of our knowledge, we present the first analysis establishing \emph{efficient} (i.e., polynomial) escape from strict-saddle points in the \emph{distributed} setting. Second, we establish that the gradient noise process is sufficient to ensure efficient escape without the need to alter it by adding artificial forms of perturbations{, interlacing steps with small and large step-sizes or imposing a dispersive noise assumption, as long as there is a gradient noise component present in some descent direction for every strict saddle-point.} Third, relative to the existing literature on \emph{centralized} non-convex optimization, where the focus is mostly on deterministic or \emph{finite-sum} optimization, our modeling conditions are specifically tailored to the scenario of learning from stochastic \emph{streaming} data. In particular, we only impose bounds on the gradient noise variance in expectation, rather than assume a bound with probability \( 1 \)~\cite{HadiDaneshmand18, Fang19} or a sub-Gaussian distribution~\cite{Jin19}. Furthermore, we assume that any Lipschitz conditions only hold on the \emph{expected} stochastic gradient approximation, rather than for every realization, with probability \( 1 \)~\cite{Fang18, Allen18neon, Allen18natasha}.} For ease of reference, the modeling conditions and results from this and related works are summarized in Table~\ref{tab:references}.
\begin{table*}\centering \ra{1.3} \begin{tabular}{@{}ccccccccc@{}}\toprule & \multicolumn{5}{c}{Modeling conditions} & \phantom{abc}& \multicolumn{2}{c}{Results}\\ \cmidrule{2-6} \cmidrule{8-9} & Gradient & Hessian & Initialization & Perturbations & Step-size && Stationary & Saddle \\ \midrule \textbf{Centralized}\\ ~\cite{Gelfand91} & Lipschitz & --- & --- & SGD + Annealing & diminishing && \( \checkmark \) & asymptotic\(^{\dagger}\) \\ ~\cite{Ge15} & Lipschitz \& bounded\(^{\star}\) & Lipschitz & --- & i.i.d.\ and bounded w.p. 1 & constant && \( \checkmark \) & polynomial \\ ~\cite{Lee16} & Lipschitz & --- & Random & --- & constant && \( \checkmark \) & asymptotic \\ ~\cite{Jin17} & Lipschitz & Lipschitz & --- & Selective \& bounded w.p. 1 & constant && \( \checkmark \) & polynomial \\ ~\cite{HadiDaneshmand18} & Lipschitz & Lipschitz & --- & SGD, bounded w.p. 1 & alternating && \( \checkmark \) & polynomial \\ ~\cite{Fang18} & Lipschitz & Lipschitz & --- & Bounded variance, Lipschitz w.p. 1 & constant && \( \checkmark \) & polynomial \\ ~\cite{Allen18natasha} & Lipschitz & Lipschitz & --- & Bounded variance, Lipschitz w.p. 1 & constant && \( \checkmark \) & polynomial \\ ~\cite{Allen18neon} & Lipschitz & Lipschitz & --- & Bounded variance, Lipschitz w.p. 1 & constant && \( \checkmark \) & polynomial \\ ~\cite{Fang19} & Lipschitz & Lipschitz & --- & SGD, bounded w.p. 
1 & constant && \( \checkmark \) & polynomial \\ ~\cite{Jin19} & Lipschitz & Lipschitz & --- & SGD + Gaussian &constant && \( \checkmark \) & polynomial \\ \textbf{Decentralized}\\ ~\cite{Lorenzo16} & Lipschitz \& bounded & --- & --- & ---&constant && \( \checkmark \) & ---\\ ~\cite{Wang18} & Lipschitz & --- & --- & ---&constant && \( \checkmark \) & ---\\ ~\cite{Tatarenko17} & Lipschitz \& bounded & --- & --- & i.i.d.&diminishing && \( \checkmark \) & ---\\ ~\cite{Scutari18} & Lipschitz & Exists & Random & --- &constant && \( \checkmark \) & asymptotic\\ ~\cite{Swenson19} & Bounded disagreement & --- & --- & SGD + Annealing & diminishing && \( \checkmark \) & asymptotic\(^{\dagger}\) \\ \textbf{This work} & \textbf{Bounded disagreement} & \textbf{Lipschitz} & \textbf{---} & \textbf{Bounded moments} &\textbf{constant} && \( \boldsymbol{\checkmark} \) & \textbf{polynomial} \\ \bottomrule\ \end{tabular} \caption{Comparison of modeling assumptions and results for gradient-based methods. Statements marked with \(^{\star}\) are not explicitly stated but are implied by other conditions. The works marked with \(^{\dagger}\) establish global (asymptotic) convergence, which of course implies escape from saddle-points.}\label{tab:references} \end{table*} \section{Review of Part I~\cite{Vlaski19nonconvexP1}} \subsection{Modeling Conditions} In this section, we briefly list the modeling conditions employed in Part I~\cite{Vlaski19nonconvexP1}. For a more detailed discussion, we refer the reader to~\cite{Vlaski19nonconvexP1}. \begin{assumption}[\textbf{Strongly-connected graph}]\label{as:strongly_connected} The combination weights in~\eqref{eq:combine} are convex combination weights satisfying: \begin{equation}\label{eq:combinationcoef} a_{\ell k} \geq 0, \quad \sum_{\ell \in \mathcal{N}_k} a_{\ell k}=1, \quad a_{\ell k} = 0\ \mathrm{if}\ \ell \notin \mathcal{N}_k \end{equation} The symbol \( {\cal N}_k \) denotes the set of neighbors of agent \( k \). 
We shall assume that the graph described by the weighted combination matrix \(A=[a_{\ell k}]\) is strongly-connected~\cite{Sayed14}. This means that there exists a path with nonzero weights between any two agents in the network and, moreover, at least one agent has a nontrivial self-loop, \(a_{kk}>0\).\hfill\IEEEQED% \end{assumption} The Perron-Frobenius theorem~\cite{Horn03,Pillai05,Sayed14} then implies that \( A \) has a spectral radius of one and a single eigenvalue at one. The corresponding eigenvector can be normalized to satisfy: \begin{equation}\label{eq:perron} Ap=p, \quad \mathds{1}^{\T} p=1, \quad p_k > 0 \end{equation} where the \( \{ p_k \} \) denote the individual entries of the Perron vector, \(p\). \begin{assumption}[\textbf{Lipschitz gradients}]\label{as:lipschitz} For each \( k \), the gradient \( \nabla J_k(\cdot) \) is Lipschitz, namely, for any \( x,y \in \mathds{R}^{M} \): \begin{equation}\label{eq:lipschitz} \|\nabla J_k(x) - \nabla J_k(y)\| \le \delta \|x-y\| \end{equation} In light of~\eqref{eq:global_problem} and Jensen's inequality, this implies for the aggregate cost: \begin{equation}\label{eq:lipschitz_global} \|\nabla J(x) - \nabla J(y)\| \le \delta \|x-y\| \end{equation} \end{assumption}\hfill\IEEEQED% {\noindent The Lipschitz gradient conditions~\eqref{eq:lipschitz} and~\eqref{eq:lipschitz_global} imply \begin{align} J(y) \le J(x) + {\nabla J(x)}^{\T} \left( y-x \right) + \frac{\delta}{2} {\|x-y\|}^2 \label{eq:quadratic_upper} \end{align} For the Hessian matrix we have~\cite{Sayed14}: \begin{align}\label{eq:hessian_bound} - \delta I \le \nabla^2 J(x) \le \delta I \end{align} } { \begin{assumption}[\textbf{Bounded gradient disagreement}]\label{as:bounded} For each pair of agents \( k \) and \( \ell \), the gradient disagreement is bounded, namely, for any \( x \in \mathds{R}^{M} \): \begin{equation}\label{eq:bounded} \|\nabla J_k(x) - \nabla J_{\ell}(x)\| \le G \end{equation} \end{assumption}\hfill\IEEEQED} 
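The Perron vector in~\eqref{eq:perron} is easily computed by power iteration on \( A \). The sketch below uses a hypothetical left-stochastic combination matrix on a strongly-connected three-agent graph with self-loops; since the columns of \( A \) sum to one, the iteration preserves \( \mathds{1}^{\T} p = 1 \) automatically.

```python
# hypothetical combination matrix A = [a_{lk}]: columns sum to one,
# the graph is strongly connected, and self-loops are present
A = [[0.6, 0.3, 0.0],
     [0.4, 0.4, 0.5],
     [0.0, 0.3, 0.5]]

def perron(A, iters=200):
    # power iteration p <- A p; converges to the Perron vector because the
    # spectral radius is one and the eigenvalue at one is simple
    n = len(A)
    p = [1.0 / n] * n
    for _ in range(iters):
        p = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
    return p

p = perron(A)
print(p)   # entries are strictly positive and sum to one
```

The returned vector satisfies \( Ap = p \) to machine precision, as the Perron-Frobenius theorem guarantees.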
\begin{definition}[\textbf{Filtration}]\label{def:filtration} We denote by \( \boldsymbol{\mathcal{F}}_{i} \) the filtration generated by the random processes \( \w_{k, j} \) for all \( k \) and \( j \le i \): \begin{equation} \boldsymbol{\mathcal{F}}_{i} \triangleq \left \{ \bcw_{0}, \bcw_{1}, \ldots, \bcw_{i} \right \} \end{equation} where \( \bcw_{j} \triangleq \mathrm{col}\left \{ \w_{1, j}, \ldots, \w_{N, j} \right \} \) contains the iterates across the network at time \( j \). Informally, \( \boldsymbol{\mathcal{F}}_{i} \) captures all information that is available about the stochastic processes \( \w_{k, j} \) across the network up to time \( i \).\hfill\IEEEQED \end{definition} \begin{assumption}[\textbf{Gradient noise process}]\label{as:gradientnoise} For each \( k \), the gradient noise process is defined as \begin{equation} \s_{k,i}(\w_{k,i-1}) = \widehat{\nabla J}_k(\w_{k,i-1}) - \nabla J_k(\w_{k,i-1}) \end{equation} and satisfies \begin{subequations} \begin{align} \E \left\{ \s_{k,i}(\w_{k,i-1}) | \boldsymbol{\mathcal{F}}_{i-1} \right\} &= 0 \label{eq:conditional_zero_mean}\\ \E \left\{ \|\s_{k,i}(\w_{k,i-1})\|^4 | \boldsymbol{\mathcal{F}}_{i-1} \right\} &\le \sigma^4 \label{eq:gradientnoise_fourth} \end{align} \end{subequations} for some non-negative constant \( \sigma^4 \).
We also assume that the gradient noise processes are pairwise uncorrelated over the space conditioned on \( \boldsymbol{\mathcal{F}}_{i-1} \), i.e.: \begin{equation} \E \left\{ \s_{k,i}(\w_{k,i-1}) \s_{\ell,i}(\w_{\ell,i-1})^{\T} | \boldsymbol{\mathcal{F}}_{i-1} \right\} = 0\label{eq:uncorrelated_noise} \end{equation} \hfill\IEEEQED% \end{assumption} The fourth-order condition also implies via Jensen's inequality: \begin{align} &\: \E \left\{ \|\s_{k,i}(\w_{k,i-1})\|^2 | \boldsymbol{\mathcal{F}}_{i-1} \right\} \le \sigma^2 \label{eq:gradientnoise} \end{align} \begin{definition}[Sets]\label{DEF:SETS} To simplify the notation in the sequel, we introduce the following sets: \begin{align} \mathcal{G} &\triangleq \left \{ w : {\left \| \nabla J(w) \right \|}^2 \ge \mu \frac{c_2}{c_1}\left(1+ \frac{1}{\pi}\right) \right \} \label{eq:define_g}\\ \mathcal{G}^C &\triangleq \left \{ w : {\left \| \nabla J(w) \right \|}^2 < \mu \frac{c_2}{c_1} \left(1+\frac{1}{\pi} \right)\right \} \\ \mathcal{H} &\triangleq \left \{ w : w \in \mathcal{G}^C, \lambda_{\min}\left( \nabla^2 J(w) \right) \le -\tau \right \} \label{eq:define_h}\\ \mathcal{M} &\triangleq \left \{ w : w \in \mathcal{G}^C, \lambda_{\min}\left( \nabla^2 J(w) \right) > -\tau \right \} \label{eq:define_m} \end{align} where \( \tau \) is a small positive parameter, and \( c_1 \) and \( c_2 \) are constants: \begin{align} c_1 &\triangleq \frac{1}{2} \left(1 - 2 \mu \delta\right) = O(1) \label{eq:define_c1}\\ c_2 &\triangleq \delta \sigma^2 / 2 = O(1) \label{eq:define_c2} \end{align} and \( 0 < \pi < 1 \) is a parameter to be chosen. Note that \( \mathcal{G}^C = \mathcal{H} \cup \mathcal{M} \). We also define the probabilities \(\pi^{\mathcal{G}}_i \triangleq \mathrm{Pr}\left \{ \w_{c, i} \in \mathcal{G} \right \}\), \( \pi^{\mathcal{H}}_i \triangleq \mathrm{Pr}\left \{ \w_{c, i} \in \mathcal{H} \right \}\) and \( \pi^{\mathcal{M}}_i \triangleq \mathrm{Pr}\left \{ \w_{c, i} \in \mathcal{M} \right \} \).
Then for all \( i \), we have \( \pi^{\mathcal{G}}_i + \pi^{\mathcal{H}}_i + \pi^{\mathcal{M}}_i = 1 \). \hfill\IEEEQED \end{definition} \begin{assumption}[\textbf{Lipschitz Hessians}]\label{as:lipschitz_hessians} Each \( J_k(\cdot) \) is twice-differentiable with Hessian \( \nabla^2 J_k(\cdot) \) and, there exists \( \rho \ge 0 \) such that: \begin{equation} {\| \nabla^2 J_k(x) - \nabla^2 J_k(y) \|} \le \rho \|x - y\| \end{equation} By Jensen's inequality, this implies that \( J(\cdot) = \sum_{k=1}^N p_k J_k(\cdot) \) also satisfies: \begin{equation}\label{eq:lipschitz_hessians} {\| \nabla^2 J(x) - \nabla^2 J(y) \|} \le \rho \|x - y\| \end{equation}\hfill\IEEEQED \end{assumption} \noindent Similarly to the quadratic upper bound that follows from the Lipschitz condition on the first-derivative~\eqref{eq:quadratic_upper}, this new Lipschitz condition on the second-derivative implies a cubic upper bound on the function values~\cite{Nesterov06}: \begin{align} J(y) \le&\: J(x) + {\nabla J(x)}^{\T} (y-x) + \frac{1}{2} {(y-x)}^{\T} \nabla^2 J(x) (y-x) \notag \\ &\: + \frac{\rho}{6} {\left \| y-x \right\|}^3 \label{eq:cubic_upper} \end{align} \subsection{Review of Results} An important quantity in the network dynamics of~\eqref{eq:adapt}--\eqref{eq:combine} is the weighted network centroid: \begin{equation} \w_{c, i} \triangleq \sum_{k=1}^N p_k \w_{k, i} \end{equation} where the weights \( p_k \) are elements of the Perron vector, defined in~\eqref{eq:perron}, which in turn is a function of the graph topology and weights. 
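Because \( Ap = p \), the combination step~\eqref{eq:combine} leaves the weighted centroid unchanged, \( \sum_k p_k \w_{k,i} = \sum_\ell p_\ell \boldsymbol{\phi}_{\ell,i} \), which is exactly what reduces the centroid dynamics to a perturbed gradient recursion. A one-step numerical check of this algebraic identity, with a hypothetical combination matrix and random iterates:

```python
import random

random.seed(1)

A = [[0.6, 0.3, 0.0],      # hypothetical combination matrix [a_{lk}];
     [0.4, 0.4, 0.5],      # columns sum to one (left-stochastic)
     [0.0, 0.3, 0.5]]
n = len(A)

# Perron vector p with A p = p, via power iteration
p = [1.0 / n] * n
for _ in range(200):
    p = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]

phi = [random.random() for _ in range(n)]       # intermediate (adapted) iterates
w = [sum(A[l][k] * phi[l] for l in range(n)) for k in range(n)]   # combine step

centroid_before = sum(p[l] * phi[l] for l in range(n))
centroid_after = sum(p[k] * w[k] for k in range(n))
print(centroid_before, centroid_after)          # identical up to rounding
```

The agreement is exact (up to floating-point rounding), since \( \sum_k p_k \sum_\ell a_{\ell k}\boldsymbol{\phi}_\ell = \sum_\ell \boldsymbol{\phi}_\ell \sum_k a_{\ell k}p_k = \sum_\ell p_\ell \boldsymbol{\phi}_\ell \).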
The network centroid can be shown to evolve according to a perturbed, centralized, exact gradient descent recursion~\cite{Chen15transient}: \begin{align}\label{eq:perturbed_gradient_descent} \w_{c, i} = \w_{c, i-1} - \mu \sum_{k=1}^N p_k {\nabla J}_k (\w_{c, i-1}) - \mu \boldsymbol{d}_{i-1} - \mu \s_i \end{align} where we defined the perturbation terms: \begin{align} \boldsymbol{d}_{i-1} &\triangleq \sum_{k=1}^N p_k \left( {\nabla J}_k (\w_{k, i-1}) - {\nabla J}_k (\w_{c, i-1}) \right) \\ \boldsymbol{s}_{i} &\triangleq \sum_{k=1}^N p_k \left( \widehat{\nabla J}_k (\w_{k, i-1}) - {\nabla J}_k (\w_{k, i-1}) \right) \label{eq:centralized_gradient_noise} \end{align} In Part I~\cite[Theorem 1]{Vlaski19nonconvexP1} we established that, under assumptions~\ref{as:strongly_connected}--\ref{as:gradientnoise}, all agents will cluster around the network centroid in the mean-fourth sense: \begin{align} &\:\E {\left \| \bcw_i - \left( \mathds{1} p^{\T} \otimes I \right) \bcw_{i} \right \|}^4 \notag \\ \le&\: \mu^4 {\left \| \mathcal{V}_L \right \|}^4 \frac{{\left \|J_{\epsilon}^{\T} \right \|}^4}{{\left(1-{\left \|J_{\epsilon}^{\T} \right \|}\right)}^4} {\| \mathcal{V}_R^{\T} \|}^4 N^2 \left( G^4 + \sigma^4 \right) + o(\mu^4)\label{eq:network_disagreement_fourth} \end{align} for \( i \ge i_o \) where \( i_o \triangleq {\log\left( o(\mu^4) \right)}/{\log\left( {\left \|J_{\epsilon}^{\T} \right \|} \right)} \). This result has two implications. First, it establishes that, despite the fact that agents may be descending along different cost functions, and despite the fact that they may have been initialized close to different local minima, the entire network will eventually agree on a common iterate in the mean-fourth sense (and via Markov's inequality with high probability). 
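The \( O(\mu^4) \) bound suggests that the spread of the agents around the centroid shrinks roughly linearly in \( \mu \). A quick simulation sketch on a toy three-agent quadratic problem (all values are illustrative assumptions, not taken from the analysis) is consistent with this scaling:

```python
import random

def avg_disagreement(mu, iters=6000, seed=3):
    # diffusion on a toy quadratic network; returns the time-averaged
    # maximal deviation of the agents from the network centroid
    random.seed(seed)
    A = [[0.5, 0.25, 0.25],
         [0.25, 0.5, 0.25],
         [0.25, 0.25, 0.5]]        # doubly stochastic, so p_k = 1/3
    theta = [-1.0, 0.0, 2.0]       # heterogeneous local minimizers
    sigma = 0.1
    w = [0.0] * 3
    acc, count = 0.0, 0
    for i in range(iters):
        phi = [w[k] - mu * ((w[k] - theta[k]) + random.gauss(0.0, sigma))
               for k in range(3)]
        w = [sum(A[l][k] * phi[l] for l in range(3)) for k in range(3)]
        if i >= iters // 2:        # skip the transient
            c = sum(w) / 3
            acc += max(abs(w[k] - c) for k in range(3))
            count += 1
    return acc / count

d_large, d_small = avg_disagreement(0.04), avg_disagreement(0.01)
print(d_large, d_small)   # disagreement shrinks roughly linearly with mu
```

Halving or quartering the step-size shrinks the measured disagreement by about the same factor, in line with the mean-fourth bound above.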
Furthermore, it allows us to bound the perturbation terms appearing in~\eqref{eq:perturbed_gradient_descent} as~\cite[Lemma 2]{Vlaski19nonconvexP1}: \begin{align} {\left( \E {\|\boldsymbol{d}_{i-1}\|}^2 \right)}^2 &\le \E {\|\boldsymbol{d}_{i-1}\|}^4 \le O(\mu^4) \label{eq:d_omufourth}\\ {\left( \E \left \{ {\|\boldsymbol{s}_{i}\|}^2 | \boldsymbol{\mathcal{F}}_{i-1} \right \}\right)}^2 &\le \E \left \{ {\|\boldsymbol{s}_{i}\|}^4 | \boldsymbol{\mathcal{F}}_{i-1} \right \} \le \sigma^4 \end{align} after sufficient iterations \( i \ge i_o \). We conclude that all iterates, after sufficient iterations, approximately track the network centroid \(\w_{c, i} \), which in turn follows a perturbed gradient descent recursion, where the perturbation terms can be appropriately bounded. We then proceeded to study the evolution of the network centroid and establish expected descent in the large gradient regime, i.e.: \begin{align}\label{eq:descent_in_g} &\:\E \left \{ J(\w_{c, i}) | \w_{c, i-1} \in \mathcal{G} \right \} \notag \\ \le&\: \E \left \{ J(\w_{c, i-1}) | \w_{c, i-1} \in \mathcal{G} \right \} - \mu^2 \frac{c_2}{\pi} + \frac{O(\mu^3)}{\pi_{i-1}^{\mathcal{G}}} \end{align} where the set \( \mathcal{G} \) introduced in Definition~\ref{DEF:SETS} denotes the set of points with sufficiently large gradients \( {\left \| \nabla J(w) \right \|}^2 \ge O(\mu) \). While this argument could have been continued to establish the return of approximately first-order stationary points in the complement \( \mathcal{G}^C = \mathcal{M} \cup \mathcal{H} \), our objective here is to establish the return of second-order stationary points in \( \mathcal{M} \), which is a subset of \( \mathcal{G}^C \). {This} requires the escape from strict-saddle points in \( \mathcal{H} \).
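The perturbation terms \( \boldsymbol{d}_{i-1} \) and \( \boldsymbol{s}_i \) are definitional, so the exactness of the decomposition underlying~\eqref{eq:perturbed_gradient_descent} can be checked numerically. The sketch below uses hypothetical quadratic local costs, Perron weights, and agent iterates (all values illustrative) and verifies that the weighted stochastic gradients aggregate into the exact gradient at the centroid plus the two perturbation terms:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical local quadratic costs J_k(w) = 0.5 w^T H_k w - b_k^T w.
H = [np.array([[2.0, 0.0], [0.0, 1.0]]),
     np.array([[1.0, 0.5], [0.5, 3.0]])]
b = [np.array([1.0, -1.0]), np.array([0.0, 2.0])]
grad = lambda k, w: H[k] @ w - b[k]

p = np.array([0.4, 0.6])                               # Perron weights
w_k = [np.array([0.3, -0.2]), np.array([0.5, 0.1])]    # agent iterates
w_c = p[0] * w_k[0] + p[1] * w_k[1]                    # network centroid

noise = [rng.normal(size=2) for _ in range(2)]         # gradient noise samples
grad_hat = [grad(k, w_k[k]) + noise[k] for k in range(2)]

# Perturbation terms d_{i-1} and s_i exactly as defined in the text.
d = sum(p[k] * (grad(k, w_k[k]) - grad(k, w_c)) for k in range(2))
s = sum(p[k] * (grad_hat[k] - grad(k, w_k[k])) for k in range(2))

# Aggregate stochastic gradient = exact aggregate gradient at centroid + d + s.
lhs = sum(p[k] * grad_hat[k] for k in range(2))
rhs = sum(p[k] * grad(k, w_c) for k in range(2)) + d + s
```

The identity `lhs == rhs` holds exactly for any iterates and noise realizations; the analytical content of the review above lies in showing that \( \boldsymbol{d}_{i-1} \) and \( \boldsymbol{s}_i \) are small in the mean-square sense.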
In the vicinity of first-order stationary points, a single gradient step is no longer sufficient to guarantee descent, and as such it is necessary to study the cumulative effect of the gradient, as well as perturbations, over several iterations. We laid the groundwork for this in Part I~\cite{Vlaski19nonconvexP1} by introducing a short-term model, which is more tractable and sufficiently accurate for a limited number of iterations. This approach has been used successfully to accurately quantify the performance of adaptive networks in convex environments~\cite{Sayed14} and to establish the ability of centralized perturbed gradient descent to escape saddle-points~\cite{Ge15}. Around a first-order stationary point \( \w_{c, i^{\star}} \) at time \( i^{\star} \), the short-term model is obtained by first applying the mean-value theorem to~\eqref{eq:perturbed_gradient_descent} to obtain: \begin{align} \widetilde{\w}_{i+1}^{i^{\star}} = &\: \left( I - \mu \boldsymbol{H}_{i^{\star} + i} \right) \widetilde{\w}_{i}^{i^{\star}} + \mu {\nabla J} (\w_{c, i^{\star}}) \notag \\ &\: + \mu \boldsymbol{d}_{i^{\star}+i} + \mu \s_{i^{\star}+i+1} \label{eq:error_recursion} \end{align} where \( \widetilde{\w}_{i}^{i^{\star}} \) denotes the deviation from the initial point \( \w_{c, i^{\star}} \), i.e.
\( \widetilde{\w}_{i}^{i^{\star}} = \w_{c, i^{\star}} - \w_{c, i^{\star}+i} \) and \begin{equation} \boldsymbol{H}_{i^{\star}+i} \triangleq \int_0^1 \nabla^2 J\left( (1-t) \w_{c, i^{\star}+i} + t \w_{c, i^{\star}} \right) dt \end{equation} The short-term model is then obtained by replacing \( \boldsymbol{H}_{i^{\star}+i} \) by \( \nabla^2 J( \w_{c, i^{\star}}) \) and dropping the driving term \( \mu \boldsymbol{d}_{i^{\star}+i} \): \begin{align} \widetilde{\w}'{}^{i^{\star}}_{i+1} =&\: \left( I - \mu \nabla^2 J( \w_{c, i^{\star}}) \right) \widetilde{\w}_{i}'{}^{i^{\star}} + \mu \nabla J(\w_{c, i^{\star}}) + \mu \s_{i^{\star}+i+1} \label{eq:long_term_recursive} \end{align} where again \( \widetilde{\w}'{}^{i^{\star}}_{i} \) denotes the deviation from the initialization \( \widetilde{\w}_{i}'{}^{i^{\star}} = \w_{c, i^{\star}} - \w_{c, i^{\star}+i}' \). In~\cite[Lemma 4]{Vlaski19nonconvexP1}, we established that the short-term model~\eqref{eq:long_term_recursive} is a meaningful approximation of~\eqref{eq:error_recursion} in the sense that for a limited number of iterations \( i \le \frac{T}{\mu} \), we have the following bounds: \begin{align} \E \left \{ {\left \| \widetilde{\w}_{i}^{i^{\star}} \right \|}^2 | \w_{c, i^{\star}} \in \mathcal{H} \right \} &\le O(\mu) + \frac{O(\mu^2)}{\pi_{i^{\star}}^{\mathcal{H}}} \label{eq:ms_stability}\\ \E \left \{ {\left \| \widetilde{\w}_{i}^{i^{\star}} \right \|}^3 | \w_{c, i^{\star}} \in \mathcal{H} \right \} &\le O(\mu^{3/2}) + \frac{O(\mu^3)}{{\pi_{i^{\star}}^{\mathcal{H}}}} \label{eq:mt_stability}\\ \E \left \{ {\left \| \widetilde{\w}_{i}^{i^{\star}} \right \|}^4 | \w_{c, i^{\star}} \in \mathcal{H} \right \} &\le O(\mu^{2}) + \frac{O(\mu^{4})}{\pi_{i^{\star}}^{\mathcal{H}}} \label{eq:mf_stability}\\ \E \left \{ {\left \| \widetilde{\w}_{i}^{i^{\star}} - \widetilde{\w}_{i}'{}^{i^{\star}} \right \|}^2 | \w_{c, i^{\star}} \in \mathcal{H} \right \} &\le O(\mu^{2}) + \frac{O(\mu^{2})}{\pi_{i^{\star}}^{\mathcal{H}}} 
\label{eq:model_deviation}\\ \E \left \{ {\left \| \widetilde{\w}_{i}'{}^{i^{\star}} \right \|}^2 | \w_{c, i^{\star}} \in \mathcal{H} \right \} &\le O(\mu) + \frac{O(\mu^2)}{\pi_{i^{\star}}^{\mathcal{H}}} \label{eq:longterm_deviation} \end{align} We will now proceed to argue that these deviation bounds allow us to establish descent of~\eqref{eq:error_recursion} by means of studying descent of~\eqref{eq:long_term_recursive}, and to leverage this fact to show that the diffusion strategy will continue to descend through strict-saddle points in Theorem~\ref{TH:DESCENT_THROUGH_SADDLE_POINTS}. This result, along with the descent for large gradients established in Part I~\cite[Theorem 2]{Vlaski19nonconvexP1}, will allow us to guarantee the return of an approximately second-order stationary point in Theorem~\ref{TH:FINAL_THEOREM}. The argument is summarized in Fig.~\ref{fig:classification_points}. \begin{figure*} \centering \begin{tikzpicture}[grow cyclic, align=flush center, level 1/.style={level distance=3.5cm, sibling angle = 40}, level 2/.style={level distance=4cm, sibling angle = 30}, level 3/.style={level distance=7cm, sibling angle = 30}] \node[rounded corners, draw=blue!60, fill=blue!20, thick]{Network centroid \\\( \w_{c, i} \) at time \( i \)} [counterclockwise from=-20] child { node[rounded corners, draw=red!60, fill=red!20, thick] {\textbf{NOT} \( O(\mu) \)-stationary \\ \( \|\nabla J(\w_{c, i})\|^2 > O(\mu) \)} [counterclockwise from=-15] child { node[rounded corners, draw=red!60, thick, fill=red!20] {Descent in one iteration in Part I~\cite[Theorem 2]{Vlaski19nonconvexP1}: \\ \( \E \left \{ J(\w_{c, i}) - J(\w_{c, i+1}) | \w_{c, i} \in \mathcal{G} \right \} \ge O(\mu^2) \)} } } child { node[rounded corners, draw=blue!60, fill=blue!20, thick] {\( O(\mu) \)-stationary \\ \( \|\nabla J(\w_{c, i})\|^2 \le O(\mu) \)} [counterclockwise from=-15] child { node[rounded corners, draw=green!60, thick, fill=green!20] {\( \tau \)-strict-saddle} child { node[rounded corners,
draw=green!60, fill=green!20, thick]{Descent in \( i^s = O(1/(\mu \tau)) \) iterations in Theorem~\ref{TH:DESCENT_THROUGH_SADDLE_POINTS}: \\ \( \E \left \{ J(\w_{c, i}) - J(\w_{c, i+i^s}) | \w_{c, i} \in \mathcal{H} \right \} \ge O(\mu) \)} } } child { node[rounded corners, draw=blue!60, thick, fill=blue!20] { \( \lambda_{\min}\left( \nabla^2 J(\w_{c, i}) \right) > -\tau \) } child { node[rounded corners, draw=blue!60, fill=blue!20, thick]{\( \w_{c, i} \) is approximately second-order stationary.} } } } ; \end{tikzpicture} \caption{Classification of approximately stationary points. Theorem~\ref{TH:DESCENT_THROUGH_SADDLE_POINTS} in this work establishes descent in the green branch. The red branch is treated in Part I~\cite[Theorem 2]{Vlaski19nonconvexP1}. The two results are combined in Theorem~\ref{TH:FINAL_THEOREM} to establish the return of a second-order stationary point with high probability.}\label{fig:classification_points} \end{figure*} \section{Escape from Saddle-Points} The deviation bounds~\eqref{eq:ms_stability}--\eqref{eq:longterm_deviation} establish that, for the first \( O(1/\mu) \) iterations following a first-order stationary point \( \w_{c, i^{\star}} \), the trajectories of the true recursion~\eqref{eq:error_recursion} and the short-term model~\eqref{eq:long_term_recursive} will remain close. As a consequence, we are able to guarantee descent of \( J(\w_{c, i^{\star} + i}) \) by studying \( J(\w_{c, i^{\star}+i}')\).
Note from~\eqref{eq:quadratic_upper} that \begin{align} &\: J(\w_{c, i^{\star}+i}) \notag \\ \le&\: J(\w_{c, i^{\star}+i}') + \nabla J\left( \w_{c, i^{\star}+i}' \right)^{\mathsf{\T}} \left( \w_{c, i^{\star}+i} - \w_{c, i^{\star}+i}' \right) \notag \\ &\:+ \frac{\delta}{2} {\left \| \w_{c, i^{\star}+i} - \w_{c, i^{\star}+i}' \right \|}^2 \end{align} Taking conditional expectation yields: \begin{align} &\: \E \left \{ J(\w_{c, i^{\star}+i}) | \w_{c, i^{\star}} \in \mathcal{H} \right \} \notag \\ \le&\: \E \left \{ J(\w_{c, i^{\star}+i}') | \w_{c, i^{\star}} \in \mathcal{H} \right \} \notag \\ &\: + \E \left \{ \nabla J\left( \w_{c, i^{\star}+i}' \right)^{\mathsf{\T}} \left( \w_{c, i^{\star}+i} - \w_{c, i^{\star}+i}' \right) | \w_{c, i^{\star}} \in \mathcal{H} \right \} \notag \\ &\:+ \frac{\delta}{2} \E \left \{ {\left \| \w_{c, i^{\star}+i} - \w_{c, i^{\star}+i}' \right \|}^2| \w_{c, i^{\star}} \in \mathcal{H} \right \} \end{align} The two terms appearing on the right-hand side can be bounded as: \ifarx \begin{align} &\: \E \left \{ \nabla J\left( \w_{c, i^{\star}+i}' \right)^{\mathsf{\T}} \left( \w_{c, i^{\star}+i} - \w_{c, i^{\star}+i}' \right) | \w_{c, i^{\star}} \in \mathcal{H} \right \} \notag \\ \stackrel{(a)}{\le} &\: \sqrt{ \E \left \{ {\left \| \nabla J\left( \w_{c, i^{\star}+i}' \right) \right \|}^2 | \w_{c, i^{\star}} \in \mathcal{H} \right \} } \notag \\ &\: \times \sqrt{ \E \left \{ {\left\| \w_{c, i^{\star}+i} - \w_{c, i^{\star}+i}' \right\|}^2 | \w_{c, i^{\star}} \in \mathcal{H} \right \} } \notag \\ \stackrel{\eqref{eq:model_deviation}}{\le} &\: \sqrt{ O(\mu) } \sqrt{ O(\mu^{2}) + \frac{O(\mu^{2})}{\pi_{i^{\star}}^{\mathcal{H}}} } \notag \\ = &\: O\left(\mu^{3/2}\right) + \frac{O(\mu^{3/2})}{\sqrt{\pi_{i^{\star}}^{\mathcal{H}}}} \notag \\ \stackrel{(b)}{\le}&\: O\left(\mu^{3/2}\right) + \frac{O(\mu^{3/2})}{{\pi_{i^{\star}}^{\mathcal{H}}}} \end{align} \else \begin{align} &\: \E \left \{ \nabla J\left( \w_{c, i^{\star}+i}' \right)^{\mathsf{\T}} \left(
\w_{c, i^{\star}+i} - \w_{c, i^{\star}+i}' \right) | \w_{c, i^{\star}} \in \mathcal{H} \right \} \notag \\ \stackrel{(a)}{\le} &\: \sqrt{ \E \left \{ {\left \| \nabla J\left( \w_{c, i^{\star}+i}' \right) \right \|}^2 | \w_{c, i^{\star}} \in \mathcal{H} \right \} } \notag \\ &\: \times \sqrt{ \E \left \{ {\left\| \w_{c, i^{\star}+i} - \w_{c, i^{\star}+i}' \right\|}^2 | \w_{c, i^{\star}} \in \mathcal{H} \right \} } \notag \\ \stackrel{\eqref{eq:model_deviation}}{\le} &\: \sqrt{ O(\mu) } \sqrt{ O(\mu^{2}) + \frac{O(\mu^{2})}{\pi_{i^{\star}}^{\mathcal{H}}} } \stackrel{(b)}{\le}\: O\left(\mu^{3/2}\right) + \frac{O(\mu^{3/2})}{{\pi_{i^{\star}}^{\mathcal{H}}}} \end{align}\fi where \( (a) \) follows from the Cauchy-Schwarz inequality and \( (b) \) follows from \( \sqrt{\pi_{i^{\star}}^{\mathcal{H}}} \ge\pi_{i^{\star}}^{\mathcal{H}} \) since \( \pi_{i^{\star}}^{\mathcal{H}} \le 1 \), so that: \begin{align} &\: \E \left \{ J(\w_{c, i^{\star}+i}) | \w_{c, i^{\star}} \in \mathcal{H} \right \} \notag \\ \le&\: \E \left \{ J(\w_{c, i^{\star}+i}') | \w_{c, i^{\star}} \in \mathcal{H} \right \} + O\left(\mu^{3/2}\right) + \frac{O(\mu^{3/2})}{{\pi_{i^{\star}}^{\mathcal{H}}}} \end{align} We conclude that the function value at \( \w_{c, i^{\star}+i} \) after \( i \) iterations is upper-bounded by the function evaluated at the short-term model \( \w_{c, i^{\star}+i}' \) with an additional approximation error that is bounded. It follows that it is sufficient to study the dynamics of the short-term model, which is more tractable.
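The closeness of the true and short-term trajectories can be illustrated numerically. The sketch below is a simplified single-agent analogue (so the disagreement term \( \boldsymbol{d}_i \) vanishes) with a hypothetical toy strict-saddle cost; it drives the true perturbed recursion and its linearization around the stationary point with a common noise sequence and measures their deviation over \( O(1/\mu) \) iterations:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy strict-saddle cost J(w) = (w1^2 - 1)^2 + w2^2, with a saddle at the origin.
def grad(w):
    return np.array([4 * w[0] * (w[0] ** 2 - 1), 2 * w[1]])

w_star = np.zeros(2)                       # first-order stationary point
H = np.array([[-4.0, 0.0], [0.0, 2.0]])   # Hessian at w_star
mu, sigma, T = 0.01, 0.01, 100            # T = O(1/mu) iterations

w_true = w_star.copy()
w_short = w_star.copy()
for _ in range(T):
    s = sigma * rng.normal(size=2)        # common gradient-noise realization
    # True recursion: full gradient plus noise.
    w_true = w_true - mu * (grad(w_true) + s)
    # Short-term model: gradient linearized around w_star.
    w_short = w_short - mu * (grad(w_star) + H @ (w_short - w_star) + s)

deviation = np.linalg.norm(w_true - w_short)
```

With these illustrative parameters the deviation remains orders of magnitude below the scale of the iterates themselves, consistent with the bound~\eqref{eq:model_deviation}.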
Specifically, in light of the bound~\eqref{eq:cubic_upper} following from the Lipschitz-Hessian Assumption~\ref{as:lipschitz_hessians}, we have: \begin{align} J(\w_{c, i^{\star}+i}') \le&\: J(\w_{c, i^{\star}}) - {\nabla J(\w_{c, i^{\star}})}^{\T} \widetilde{\w}_{i}'{}^{i^{\star}} \notag \\ &\: + \frac{1}{2} {\left \| \widetilde{\w}_{i}'{}^{i^{\star}} \right \|}_{\nabla^2 J(\w_{c, i^{\star}})}^2 + \frac{\rho}{6} {\left \| \widetilde{\w}_{i}'{}^{i^{\star}} \right \|}^3 \label{eq:t_step_descent} \end{align} In order to establish escape from saddle-points, we need to carefully bound each term appearing on the right-hand side of~\eqref{eq:t_step_descent}, and to this end, we will need to study the effect of the gradient noise term over several iterations. For this purpose, we introduce the following smoothness condition on the gradient noise covariance~\cite{Sayed14}: \begin{assumption}[\textbf{Lipschitz covariances}]\label{as:lipschitz_covariance} The gradient noise process has a Lipschitz covariance matrix, i.e., \begin{equation} R_{s, k}(\w_{k, i-1}) \triangleq \E \left \{ \s_{k, i}(\w_{k, i-1}) {\s_{k, i}(\w_{k, i-1})}^{\T} | \boldsymbol{\mathcal{F}}_{i-1}\right \} \end{equation} satisfies \begin{equation}\label{eq:lipschitz_r} \| R_{s, k}(x) - R_{s, k}(y) \| \le \beta_R {\| x - y \|}^{\gamma} \end{equation} for some \( \beta_R \) and \( 0 < \gamma \le 4\).\hfill\IEEEQED \end{assumption} \begin{definition}\label{def:aggregate_covariance} We define the aggregate gradient noise covariance as: \begin{equation} \mathcal{R}_{s, i}\left( \bcw_{i-1} \right) = \E \left \{ \s_i \s_i^{\T} | \boldsymbol{\mathcal{F}}_{i-1} \right \} \end{equation} where \( \s_i \triangleq \sum_{k=1}^N p_k \s_{k, i}\left( \w_{k, i-1} \right) \) denotes the aggregate gradient noise term introduced earlier in~\eqref{eq:centralized_gradient_noise}.\hfill\IEEEQED \end{definition} Note that in light of this definition and the assumption that the gradient noise process is conditionally uncorrelated over
space as in~\eqref{eq:uncorrelated_noise}, we have: \begin{align} &\: \mathcal{R}_{s, i}\left( \bcw_{i-1} \right) \notag \\ =&\: \E \left \{ \s_i \s_i^{\T} | \boldsymbol{\mathcal{F}}_{i-1} \right\} \notag \\ =&\: \E \left \{ \left(\sum_{k=1}^N p_k \s_{k, i}\left( \w_{k, i-1} \right)\right) {\left(\sum_{k=1}^N p_k \s_{k, i}\left( \w_{k, i-1} \right)\right)}^{\T} | \boldsymbol{\mathcal{F}}_{i-1} \right\}\notag \\ =&\: \E \left \{ \sum_{k=1}^N p_k^2 \s_{k, i}\left( \w_{k, i-1} \right) \s_{k, i}\left( \w_{k, i-1} \right)^{\T} | \boldsymbol{\mathcal{F}}_{i-1} \right\} \notag \\ \ifarx =&\: \sum_{k=1}^N p_k^2 \E \left \{ \s_{k, i}\left( \w_{k, i-1} \right) \s_{k, i}\left( \w_{k, i-1} \right)^{\T} | \boldsymbol{\mathcal{F}}_{i-1} \right\} \notag \\ \fi =&\: \sum_{k=1}^N p_k^2 R_{s, k}\left( \w_{k, i-1} \right)\label{eq:calR_decomp} \end{align} so that the aggregate gradient noise covariance is a weighted combination of the individual gradient noise covariances, albeit evaluated at different iterates. In light of the smoothness assumption~\ref{as:lipschitz_covariance}, we are nevertheless able to approximate the aggregate noise covariance by one that is evaluated at the centroid. \begin{lemma}[\textbf{Noise covariance at centroid}]\label{LEM:CENTROID_COVARIANCE} Under assumptions~\ref{as:strongly_connected}--\ref{as:lipschitz_covariance} and for sufficiently small step-sizes \( \mu \), we have for all \( i \) and \( w \in \mathds{R}^M \): \begin{align} &\left\|\mathcal{R}_{s, i}\left( \mathds{1} \otimes \w_{c, i-1} \right) - \mathcal{R}_{s, i}\left( \mathds{1} \otimes w \right)\right\| \le p_{\max} \beta_R \left\| \w_{c, i-1} - w \right\|^{\gamma} \label{eq:difference_r_small}\\ &\left\|\mathcal{R}_{s, i}\left( \bcw_{c, i-1} \right) - \mathcal{R}_{s, i}\left( \bcw_{i-1} \right)\right\| \le p_{\max} \beta_R \left\| \bcw_{c, i-1} - \bcw_{i-1} \right\|^{\gamma} \label{eq:difference_r_large} \end{align} \end{lemma} \begin{IEEEproof} Appendix~\ref{ap:centroid_covariance}. 
\end{IEEEproof} \begin{comment} \begin{assumption}[\textbf{Bounded covariances}]\label{as:bounded_covariance} The gradient noise process has a bounded covariance matrix, i.e. \begin{equation} R_{s, k}(w) \le \sigma^2 I \end{equation} for all \( w \) and some constant \( \sigma^2 > 0\). \end{assumption} \end{comment} {Note that from the bound on the aggregate gradient noise variance~\eqref{eq:gradientnoise}, we can upper bound the gradient noise covariance: \begin{align} \left \| \mathcal{R}_{s, i} \left( \cw \right) \right \| = \left \| \E \s_i \s_i^{\T} \right \|\stackrel{(a)}{\le}\E \left \| \s_i \s_i^{\T} \right \| = \E \left \| \s_i \right\|^2 \stackrel{\eqref{eq:gradientnoise}}{\le} \sigma^2\label{eq:bounded_covariance} \end{align} where \( (a) \) follows from Jensen's inequality. In order to ensure escape from saddle-points, we introduce a similar, lower-bound condition. } \begin{assumption}[\textbf{Gradient noise in strict saddle-points}]\label{as:noise_in_saddle} Suppose \( w \) is an approximate strict-saddle point, i.e., \( w \in \mathcal{H} \) and denote the eigendecomposition of the Hessian as \( \nabla^2 J(w) = V \Lambda V^{\T} \). We introduce the decomposition: \begin{equation} {V} = \left[ \begin{array}{cc} {V}^{\ge0} & {V}^{< 0} \end{array} \right], \ \ {\Lambda} = \left[ \begin{array}{cc} {\Lambda}^{\ge0} & 0\\0 & {\Lambda}^{< 0} \end{array}\right] \end{equation} where \( {\Lambda}^{\ge0} \ge 0 \) and \( {\Lambda}^{< 0} < 0 \).
Then, we assume that: \begin{equation} \lambda_{\min}\left({\left({V}^{< 0}\right)}^{\T} \mathcal{R}_{s}\left(\mathds{1}\otimes w \right) {V}^{< 0} \right) \ge \sigma_{\ell}^2 \end{equation} for some \( \sigma_{\ell}^2 > 0 \) and all \( w \in \mathcal{H} \).\hfill\IEEEQED \end{assumption} Assumption~\ref{as:noise_in_saddle} is similar to the condition in~\cite{HadiDaneshmand18}, where alternating step-sizes are employed, and essentially states that for every strict-saddle point in the set \( \mathcal{H}\), there is gradient noise present along some descent direction, spanned by the eigenvectors corresponding to the negative eigenvalues of the Hessian \( \nabla^2 J(\cdot) \). \begin{theorem}[Descent through strict saddle-points]\label{TH:DESCENT_THROUGH_SADDLE_POINTS} {Suppose \( \mathrm{Pr} \left \{ \w_{c, i^{\star}} \in \mathcal{H} \right \} \neq 0 \), i.e., \( \w_{c, i^{\star}} \)} is approximately stationary with a significant negative eigenvalue. Then, iterating for \( i^s \) iterations after \( i^{\star} \) with \begin{align} i^{s} = \frac{\log\left( 2 M \frac{\sigma^2}{\sigma_{\ell}^2} + 1 \right)}{\log({1 + 2\mu\tau})} \le O\left(\frac{1}{\mu \tau} \right) \end{align} guarantees \begin{align} &\: \E \left \{ J(\w_{c, i^{\star}+i^s}) | \w_{c, i^{\star}} \in \mathcal{H} \right \} \notag \\ \le&\: \E \left \{ J(\w_{c, i^{\star}}) | \w_{c, i^{\star}} \in \mathcal{H} \right \} - \frac{\mu}{2} M \sigma_u^2 + o(\mu) + \frac{o(\mu)}{\pi_{i^{\star}}^{\mathcal{H}}} \end{align} \end{theorem} \begin{IEEEproof} Appendix~\ref{AP:DESCENT_THROUGH_SADDLE_POINTS}. \end{IEEEproof} This result establishes that, even if \( \w_{c, i^{\star}} \) is an \( O(\mu) \)-square-stationary point and Part I~\cite[Theorem 2]{Vlaski19nonconvexP1} can no longer guarantee sufficient descent, the expected function value at the network centroid will continue to decrease, as long as the Hessian matrix has a sufficiently negative eigenvalue.
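To get a feel for the escape-time expression in Theorem~\ref{TH:DESCENT_THROUGH_SADDLE_POINTS}, the sketch below evaluates \( i^s \) for illustrative (hypothetical) constants and confirms the \( O(1/(\mu\tau)) \) scaling, which follows from \( \log(1 + 2\mu\tau) \approx 2\mu\tau \) for small \( \mu\tau \):

```python
import math

# Illustrative constants (hypothetical choices, not values from the paper).
M, sigma2, sigma_l2 = 10, 1.0, 0.1   # dimension, noise upper/lower bounds
mu, tau = 0.001, 0.1                 # step-size and saddle threshold

# Escape time from the theorem statement.
i_s = math.log(2 * M * sigma2 / sigma_l2 + 1) / math.log(1 + 2 * mu * tau)

# Small-step approximation: log(1 + 2*mu*tau) ~ 2*mu*tau, so i_s = O(1/(mu*tau)).
approx = math.log(2 * M * sigma2 / sigma_l2 + 1) / (2 * mu * tau)
```

Note that \( i^s \) depends on \( \mu \) and \( \tau \) only through the product \( \mu\tau \), while the constants enter only logarithmically.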
\section{Main Result} In Part I~\cite[Theorem 2]{Vlaski19nonconvexP1}, we established a descent condition for points with large gradient norm \( \w_{c, i} \in \mathcal{G} \), while Theorem~\ref{TH:DESCENT_THROUGH_SADDLE_POINTS} guarantees descent in \( i^s \) iterations for strict-saddle points \( \w_{c, i} \in \mathcal{H} \). Together, they establish descent whenever \( \w_{c, i} \in \mathcal{G} \cup \mathcal{H} = \mathcal{M}^C \). Hence, we conclude that, as long as the cost is bounded from below, the algorithm must necessarily reach a point in \( \mathcal{M} \) after a finite number of iterations. This intuition is formalized in the following theorem. \begin{theorem}\label{TH:FINAL_THEOREM} For sufficiently small step-sizes \( \mu \), we have, with probability \( 1 - \pi \), that \( \w_{c, i^o} \in \mathcal{M} \), i.e., \( \| \nabla J(\w_{c, i^o}) \|^2 \le O(\mu) \) and \( \lambda_{\min}\left( \nabla^2 J(\w_{c, i^o}) \right) \ge -\tau \) in at most \( i^o \) iterations, where \begin{align} i^o \le \frac{\left( J(w_{c, 0}) - J^o \right)}{\mu^2 c_2 \pi} i^s \end{align} and \( i^s \) denotes the escape time from Theorem~\ref{TH:DESCENT_THROUGH_SADDLE_POINTS}, i.e., \begin{align} i^{s} = \frac{\log\left( 2 M \frac{\sigma^2}{\sigma_{\ell}^2} + 1 \right)}{\log({1 + 2\mu\tau})} \le O\left(\frac{1}{\mu \tau} \right) \end{align} \end{theorem} \begin{IEEEproof} Appendix~\ref{AP:FINAL_THEOREM}. \end{IEEEproof} {This final result states that with probability \( 1 - \pi \), where we are free to choose the desired confidence level, the diffusion strategy~\eqref{eq:adapt}--\eqref{eq:combine} will have visited an approximately second-order stationary point after at most \( i^o \) iterations.} \section{Simulation Results} In this section, we consider an example that will allow us to visualize the ability of the diffusion strategy to escape saddle-points.
Given a binary class label \( \boldsymbol{\gamma} \in \left \{ 0, 1 \right \} \) and feature vector \( \boldsymbol{h} \in \mathds{R}^M \), we consider a neural network with a single, linear hidden layer and a logistic activation function leading into the output layer: \begin{equation} \boldsymbol{\widehat{\gamma}}\left(\h \right) \triangleq \frac{1}{1+e^{-w_1^{\T} W_2 \h}} \end{equation} with weights \( w_1 \in \mathds{R}^{L}, W_2 \in \mathds{R}^{L\times M}\) of appropriate dimensions. A popular risk function for training is the cross-entropy loss: \begin{equation}\label{eq:cross_entropy} Q(w_1, W_2; \boldsymbol{\gamma}, \h) \triangleq - \boldsymbol{\gamma} \log(\widehat{\boldsymbol{\gamma}}) - (1-\boldsymbol{\gamma}) \log(1-\widehat{\boldsymbol{\gamma}}) \end{equation} \ifarx Note that the first term is non-zero, while the second term is zero if, and only if, \( \boldsymbol{\gamma} = 1 \), in which case we have: \begin{align} - \boldsymbol{\gamma} \log(\widehat{\boldsymbol{\gamma}}) &= \log\left({1+e^{- w_1^{\T} W_2 \h}}\right) \end{align} Similarly, the second term is non-zero while the first term is zero if, and only if, \( \boldsymbol{\gamma} = 0 \), which implies: \begin{align} - (1-\boldsymbol{\gamma}) \log(1-\widehat{\boldsymbol{\gamma}}) &= -\log\left(1-\frac{1}{1+e^{-w_1^{\T} W_2 \h}}\right) \notag \\ &= -\log\left(\frac{e^{- w_1^{\T} W_2 \h}}{1+e^{- w_1^{\T} W_2 \h}}\right) \notag \\ &= -\log\left(\frac{1}{1+e^{w_1^{\T} W_2 \h}}\right) \notag \\ &= \log\left({1+e^{w_1^{\T} W_2 \h}}\right) \end{align} Letting \( \boldsymbol{\gamma}' \in \{ -1, 1 \} \) such that: \begin{equation} \boldsymbol{\gamma}' \triangleq \begin{cases} -1, \ &\mathrm{if}\ \boldsymbol{\gamma} = 0 \\ 1, \ &\mathrm{if}\ \boldsymbol{\gamma} = 1.
\end{cases} \end{equation} we can hence simplify~\eqref{eq:cross_entropy} to an equivalent logistic loss: \begin{equation} Q(w_1, W_2; \boldsymbol{\gamma}', \h) = \log\left({1+e^{- \boldsymbol{\gamma}' w_1^{\T} W_2 \h}}\right) \end{equation} \else If we let: \begin{equation} \boldsymbol{\gamma}' \triangleq \begin{cases} -1, \ &\mathrm{if}\ \boldsymbol{\gamma} = 0 \\ 1, \ &\mathrm{if}\ \boldsymbol{\gamma} = 1. \end{cases} \end{equation} it can be verified that the cross-entropy loss~\eqref{eq:cross_entropy} simplifies to an equivalent logistic loss: \begin{equation} Q(w_1, W_2; \boldsymbol{\gamma}', \h) = \log\left({1+e^{- \boldsymbol{\gamma}' w_1^{\T} W_2 \h}}\right) \end{equation} \fi The regularized learning problem can then be formulated as: \begin{equation}\label{eq:sample_problem} J(w_1, W_2) = \E Q(w_1, W_2; \boldsymbol{\gamma}', \h) + \frac{\rho}{2}\|w_1\|^2 + \frac{\rho}{2} \| W_2 \|_F^2 \end{equation} which fits into the framework~\eqref{eq:global_problem} treated in this work. In order to be able to visualize and enumerate all stationary points of~\eqref{eq:sample_problem}, we assume in the sequel that \( M = L = 1 \) so that all involved quantities are scalar variables. We can then find: \begin{align} \nabla J(w_1, W_2) &= \E \begin{pmatrix}\rho w_1 - \frac{\boldsymbol{\gamma}' W_2 \h}{1+e^{\boldsymbol{\gamma}' w_1 W_2 \h}} \\ \rho W_2 - \frac{\boldsymbol{\gamma}' w_1 \h}{1+e^{\boldsymbol{\gamma}' w_1 W_2 \h}} \end{pmatrix} \end{align} The cost surface is depicted in Fig.~\ref{fig:loss_surface}. \begin{figure}[!t] \centering \includegraphics[width=\columnwidth]{surface.eps} \caption{Cost surface of a simple neural network with \( \rho = 0.1\).}\label{fig:loss_surface} \end{figure} It can be observed from the figure, and analytically verified, that \( J(\cdot) \) has two local minima in the positive and negative quadrants, respectively, and a single saddle-point at \( w_1 = W_2 = 0 \).
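The equivalence between the cross-entropy and logistic losses can be checked numerically. The sketch below implements both losses for the scalar case \( M = L = 1 \) and verifies that they agree under the label mapping \( \boldsymbol{\gamma}' = 2\boldsymbol{\gamma} - 1 \); the particular parameter values are arbitrary illustrative choices:

```python
import numpy as np

def cross_entropy(w1, W2, gamma, h):
    """Cross-entropy loss for the single-hidden-layer model (scalar case)."""
    z = w1 * W2 * h
    gamma_hat = 1.0 / (1.0 + np.exp(-z))
    return -gamma * np.log(gamma_hat) - (1 - gamma) * np.log(1 - gamma_hat)

def logistic(w1, W2, gamma_p, h):
    """Equivalent logistic loss with labels gamma' in {-1, +1}."""
    return np.log(1.0 + np.exp(-gamma_p * w1 * W2 * h))

# Example: the two losses coincide for gamma = 1 (gamma' = +1).
w1, W2, h = 0.7, -1.3, 2.0
same = np.isclose(cross_entropy(w1, W2, 1, h), logistic(w1, W2, +1, h))
```

The agreement holds for both label values and any weights, mirroring the case analysis carried out above.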
The Hessian matrix of \( J(\cdot) \) at \( w_1 = W_2 = 0 \) evaluates to: \begin{equation} \nabla^2 J(0, 0) = \begin{pmatrix} \rho & - \E \frac{\boldsymbol{\gamma}'\boldsymbol{h}}{2} \\ -\E \frac{\boldsymbol{\gamma}'\boldsymbol{h}}{2} & \rho \end{pmatrix} \end{equation} For this example, we let \( \mathrm{Pr}\left\{ \boldsymbol{\gamma}' = -1 \right\} = \mathrm{Pr}\left\{ \boldsymbol{\gamma}' = 1 \right\} = \frac{1}{2} \) and \( \boldsymbol{h} \sim \mathcal{N} \left( \boldsymbol{\gamma}', 1 \right) \). Then, we obtain \( \E \boldsymbol{\gamma}' \h = 1\). We also let \( \rho = 0.1 \), so that: \begin{equation} \nabla^2 J(0, 0) = \begin{pmatrix} 0.1 & -0.5 \\ -0.5 & 0.1 \end{pmatrix} \end{equation} which has an eigenvalue at \( -0.4 \) with corresponding eigenvector \( \mathrm{col}\left\{ 1, 1 \right\} \). This implies that \( w_1 = W_2 = 0 \) is a strict saddle-point with local descent direction \( \mathrm{col}\left\{ 1, 1 \right\} \). It turns out, however, that the gradient noise induced by the instantaneous stochastic gradient approximation \( \widehat{\nabla J}(\cdot) = \nabla Q(\cdot; \boldsymbol{\gamma}', \boldsymbol{h}) \) does not have a component in the descent direction \( \mathrm{col}\left\{ 1, 1 \right\} \) at the strict saddle-point \( w_1 = W_2 = 0 \). Indeed, note that with probability one we have \( \nabla Q(0, 0; \boldsymbol{\gamma}', \boldsymbol{h} ) = \mathrm{col}\{ 0, 0 \} = \nabla J(0, 0) \) so that the gradient noise vanishes at \( w_1 = W_2 = 0 \). Hence, initializing all agents at \( w_1 = W_2 = 0 \) and iterating~\eqref{eq:adapt}--\eqref{eq:combine} would cause them to remain there with probability \( 1 \). This suggests that assumption~\ref{as:noise_in_saddle} is not merely a technical condition but indeed necessary.
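The spectral claims above are easy to verify numerically. The sketch below checks the eigendecomposition of \( \nabla^2 J(0,0) \) and that the stochastic gradient of the regularized logistic loss (written here in the scalar case with \( \rho = 0.1 \); the sampled realization \( (\boldsymbol{\gamma}', \boldsymbol{h}) \) is arbitrary) vanishes exactly at the saddle:

```python
import numpy as np

# Hessian of the regularized cost at the saddle (rho = 0.1, E gamma' h = 1).
H = np.array([[0.1, -0.5],
              [-0.5, 0.1]])

eigvals, eigvecs = np.linalg.eigh(H)     # eigh returns ascending eigenvalues
lam_min = eigvals[0]                     # expected: -0.4
v_min = eigvecs[:, 0]                    # expected direction: col{1, 1}

def grad_Q(w1, W2, gamma_p, h, rho=0.1):
    """Stochastic gradient of the regularized logistic loss (scalar case)."""
    sig = 1.0 / (1.0 + np.exp(gamma_p * w1 * W2 * h))   # = e^{-z}/(1+e^{-z})
    return np.array([rho * w1 - gamma_p * W2 * h * sig,
                     rho * W2 - gamma_p * w1 * h * sig])
```

Since both components of `grad_Q` carry a factor of \( w_1 \) or \( W_2 \), the stochastic gradient is identically zero at the origin for every realization, which is exactly why the unperturbed recursion cannot leave the saddle.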
To satisfy the assumption, we construct the stochastic gradient approximation as: \begin{equation} \widehat{\nabla J}(w_1, W_2) \triangleq \nabla Q(w_1, W_2; \boldsymbol{\gamma}', \h) + \boldsymbol{v} \cdot \mathrm{col}\left\{ 1, 1 \right\} \end{equation} where \( \boldsymbol{v} \sim \mathcal{N}(0, 1) \) acts only in the direction \( \mathrm{col}\left\{ 1, 1 \right\} \) and ensures that gradient noise is present in the descent direction around the strict saddle-point at \( w_1 = W_2 = 0 \). Two realizations of the evolution are shown in Figures~\ref{fig:different_init}--\ref{fig:same_init}. \begin{figure}[!t] \centering \includegraphics[width=\columnwidth]{different_init.eps} \caption{Agents are initialized at different points in space, but nevertheless quickly cluster. They then jointly travel away from the strict saddle-point and towards one of the local minimizers.}\label{fig:different_init} \end{figure} \begin{figure}[!t] \centering \includegraphics[width=\columnwidth]{same_init.eps} \caption{Agents are initialized together precisely at the strict saddle-point. The presence of the gradient perturbation allows them to jointly escape the saddle-point.}\label{fig:same_init} \end{figure} \appendices% \section{Proof of Lemma~\ref{LEM:CENTROID_COVARIANCE}}\label{ap:centroid_covariance} \noindent Recall that \begin{align} \s_i \triangleq \sum_{k=1}^N p_k \s_{k, i}\left( \w_{k, i-1} \right) \end{align} and hence~\eqref{eq:calR_decomp} holds.
Using the smoothness assumption on the gradient noise term~\eqref{eq:lipschitz_r}, we can write: \begin{align} &\: \E \left \{ \s_i \s_i^{\T} | \boldsymbol{\mathcal{F}}_{i-1} \right\} \notag \\ \ifarx =&\: \sum_{k=1}^N p_k^2 R_{s, k}\left( \w_{k, i-1} \right) \notag \\ \fi =&\: \sum_{k=1}^N p_k^2 R_{s, k}\left( \w_{c, i-1} \right) \notag \\ &\: + \left( \sum_{k=1}^N p_k^2 R_{s, k}\left( \w_{k, i-1} \right) - \sum_{k=1}^N p_k^2 R_{s, k}\left( \w_{c, i-1} \right)\right) \end{align} so that: \begin{align} &\: \left \| \mathcal{R}_s\left( \mathds{1} \otimes \w_{c, i-1} \right) - \mathcal{R}_s\left( \mathds{1} \otimes w \right) \right \| \notag \\ \ifarx =&\: \left \| \sum_{k=1}^N p_k^2 R_{s, k}\left( \w_{c, i-1} \right) - \sum_{k=1}^N p_k^2 R_{s, k}\left( w \right) \right \| \notag \\ \fi =&\: \left \| \sum_{k=1}^N p_k^2 \left( R_{s, k}\left( \w_{c, i-1} \right) - R_{s, k}\left( w \right) \right) \right \| \notag \\ \stackrel{(a)}{\le}&\: \sum_{k=1}^N p_k \left \| p_k \left( R_{s, k}\left( \w_{c, i-1} \right) - R_{s, k}\left( w \right) \right) \right \| \notag \\ {\le}&\: p_{\max} \sum_{k=1}^N p_k\left \| R_{s, k}\left( \w_{c, i-1} \right) - R_{s, k}\left( w \right) \right \| \notag \\ \stackrel{(b)}{\le}&\: p_{\max} \beta_R \left \| \w_{c, i-1} - w \right \|^{\gamma} \end{align} where \( (a) \) follows from Jensen's inequality and \( (b) \) follows from the Lipschitz condition on the gradient noise covariance~\eqref{eq:lipschitz_r} and \( \sum_{k=1}^N p_k = 1 \). 
Similarly: \begin{align} &\: \left \| \mathcal{R}_{s, i}\left( \bcw_{i-1} \right) - \mathcal{R}_{s, i}\left( \bcw_{c, i-1} \right) \right \| \notag \\ \ifarx =&\: \left \| \sum_{k=1}^N p_k^2 R_{s, k}\left( \w_{k, i-1} \right) - \sum_{k=1}^N p_k^2 R_{s, k}\left( \w_{c, i-1} \right) \right \| \notag \\ \fi \ifarx =&\: \left \| \sum_{k=1}^N p_k^2 \left( R_{s, k}\left( \w_{k, i-1} \right) - R_{s, k}\left( \w_{c, i-1} \right) \right) \right \| \notag \\ \le&\: \sum_{k=1}^N p_k \left \| p_k \left( R_{s, k}\left( \w_{k, i-1} \right) - R_{s, k}\left( \w_{c, i-1} \right) \right) \right \| \notag \\ \fi \le&\: p_{\max} \beta_R \sum_{k=1}^N p_k \left \| \w_{k, i-1} - \w_{c, i-1} \right \|^{\gamma} \notag \\ \stackrel{(a)}{\le}&\: p_{\max} \beta_R \sum_{k=1}^N p_k \left \| \bcw_{i-1} - \bcw_{c, i-1} \right \|^{\gamma} \notag \\ =&\: p_{\max} \beta_R \left \| \bcw_{i-1} - \bcw_{c, i-1} \right \|^{\gamma} \end{align} where \( (a) \) follows from the fact that \( x^{\gamma} \) is monotonically increasing in \( x \) for \( x, \gamma > 0 \) and: \begin{align} \left \| \bcw_{i-1} - \bcw_{c, i-1} \right \|^2 =&\: \sum_{k=1}^N \left \| \w_{k, i-1} - \w_{c, i-1} \right \|^2 \notag \\ \ge&\: \left \| \w_{\ell, i-1} - \w_{c, i-1} \right \|^2, \ \ \forall \ \ell \end{align} \section{Proof of Theorem~\ref{TH:DESCENT_THROUGH_SADDLE_POINTS}}\label{AP:DESCENT_THROUGH_SADDLE_POINTS} We shall carefully bound each of the terms appearing on the right-hand side of~\eqref{eq:t_step_descent}, which we repeat here for reference: \begin{align} J(\w_{c, i^{\star}+i}') \le&\: J(\w_{c, i^{\star}}) - {\nabla J(\w_{c, i^{\star}})}^{\T} \widetilde{\w}_{i}'{}^{i^{\star}} \notag \\ &\: + \frac{1}{2} {\left \| \widetilde{\w}_{i}'{}^{i^{\star}} \right \|}_{\nabla^2 J(\w_{c, i^{\star}})}^2 + \frac{\rho}{6} {\left \| \widetilde{\w}_{i}'{}^{i^{\star}} \right \|}^3 \label{eq:t_step_descent_app} \end{align} We begin by establishing a bound on the linear term in~\eqref{eq:t_step_descent_app}.
Iterating the recursive relation for the short-term model~\eqref{eq:long_term_recursive} and taking expectations conditioned on \( \boldsymbol{\mathcal{F}}_{i^{\star}+i} \) yields: \begin{align} &\: \E \left \{ \widetilde{\w}'{}^{i^{\star}}_{i+1} | \boldsymbol{\mathcal{F}}_{i^{\star}+i} \right \} \notag \\ =&\: \left( I - \mu \nabla^2 J( \w_{c, i^{\star}}) \right) \widetilde{\w}'{}^{i^{\star}}_{i} \notag \\ &\: + \mu \nabla J(\w_{c, i^{\star}}) + \mu \E \left \{ \s_{i^{\star} + i + 1} | \boldsymbol{\mathcal{F}}_{i^{\star}+i} \right \} \notag \\ =&\: \left( I - \mu \nabla^2 J( \w_{c, i^{\star}}) \right) \widetilde{\w}'{}^{i^{\star}}_{i} + \mu \nabla J(\w_{c, i^{\star}})\label{eq:interdalfsldfa} \end{align} where the gradient-noise term disappeared in light of \begin{equation} \E \left \{ \s_{i^{\star} + i + 1} | \boldsymbol{\mathcal{F}}_{i^{\star}+i} \right \} = 0 \end{equation} by Assumption~\ref{as:gradientnoise}. {Note that \( \boldsymbol{\mathcal{F}}_{i^{\star}+i} \) denotes the information captured in \( \w_{k, j} \) up to time \( i^{\star}+i \), while \( \boldsymbol{\mathcal{F}}_{i^{\star}} \) denotes the information available up to time \( i^{\star} \). 
Hence: \begin{equation} \boldsymbol{\mathcal{F}}_{i^{\star}+i} = \boldsymbol{\mathcal{F}}_{i^{\star}} \cup \mathrm{filtration}\left \{ \w_{k, i^{\star}+1}, \ldots, \w_{k, i^{\star}+i} \right \} \end{equation} Taking expectations of~\eqref{eq:interdalfsldfa} conditioned on \( \boldsymbol{\mathcal{F}}_{i^{\star}} \) therefore averages over the randomness in \( \mathrm{filtration}\left \{ \w_{k, i^{\star}+1}, \ldots, \w_{k, i^{\star}+i} \right \} \), which is not contained in \( \boldsymbol{\mathcal{F}}_{i^{\star}} \), and yields:} \begin{align} \E \left \{ \widetilde{\w}'{}^{i^{\star}}_{i+1} | \boldsymbol{\mathcal{F}}_{i^{\star}} \right \} =&\: \left( I - \mu \nabla^2 J( \w_{c, i^{\star}}) \right) \E \left \{ \widetilde{\w}'{}^{i^{\star}}_{i} | \boldsymbol{\mathcal{F}}_{i^{\star}} \right \} \notag \\ &\:+ \mu \nabla J(\w_{c, i^{\star}}) \label{eq:conditional_mean_recursion} \end{align} Since \( \widetilde{\w}'{}^{i^{\star}}_{0} = 0 \), iterating starting at \( i=0 \) yields: \begin{equation} \E \left \{ \widetilde{\w}'{}^{i^{\star}}_{i} | \boldsymbol{\mathcal{F}}_{i^{\star}} \right \} = \mu \left( \sum_{k=1}^{i}{\left( I - \mu \nabla^2 J( \w_{c, i^{\star}}) \right)}^{k-1} \right) \nabla J(\w_{c, i^{\star}}) \label{eq:mean_deviation_saddle} \end{equation} This allows us to bound the linear term appearing in~\eqref{eq:t_step_descent_app} as: \begin{align} &\: -\E \left\{ {\nabla J(\w_{c, i^{\star}})}^{\T} \widetilde{\w}_{i}'{}^{i^{\star}} | \boldsymbol{\mathcal{F}}_{i^{\star}} \right \} \notag \\ =&\: - {\nabla J(\w_{c, i^{\star}})}^{\T} \E \left \{ \widetilde{\w}_{i}'{}^{i^{\star}} | \boldsymbol{\mathcal{F}}_{i^{\star}} \right \} \notag \\ \stackrel{\eqref{eq:mean_deviation_saddle}}{=}&\: -\mu {\nabla J(\w_{c, i^{\star}})}^{\T} \left( \sum_{k=1}^{i} {\left( I - \mu \nabla^2 J( \w_{c, i^{\star}}) \right)}^{k-1} \right) \nabla J(\w_{c, i^{\star}}) \notag \\ =&\: -\mu {\left\|\nabla J(\w_{c, i^{\star}})\right\|}^2_{\sum_{k=1}^{i} {\left( I - \mu \nabla^2 J( \w_{c, i^{\star}}) \right)}^{k-1}}
\label{eq:linear_term_bound} \end{align} We now examine the quadratic term in~\eqref{eq:t_step_descent_app}. To this end, we introduce the eigenvalue decomposition of the Hessian around the iterate at time \( i^{\star} \): \begin{equation} \nabla^2 J(\w_{c, i^{\star}}) \triangleq \boldsymbol{V}_{i^{\star}} \boldsymbol{\Lambda}_{i^{\star}} \boldsymbol{V}_{i^{\star}}^{\T} \end{equation} Note that both \( \boldsymbol{V}_{i^{\star}} \) and \( \boldsymbol{\Lambda}_{i^{\star}} \) inherit their randomness from \( \w_{c, i^{\star}} \). As such, they are random but become deterministic when conditioning on \( \boldsymbol{\mathcal{F}}_{i^{\star}} \). This fact will be exploited further below. To begin with, note that: \begin{align} {\left \| \widetilde{\w}'{}^{i^{\star}}_{i+1} \right \|}_{\nabla^2 J(\w_{c, i^{\star}})}^2 =&\: {\left \| \widetilde{\w}'{}^{i^{\star}}_{i+1} \right \|}_{\boldsymbol{V}_{i^{\star}} \boldsymbol{\Lambda}_{i^{\star}} \boldsymbol{V}_{i^{\star}}^{\T}}^2 \notag \\ =&\: {\left \| \boldsymbol{V}_{i^{\star}}^{\T} {\w}_{c, i^{\star}} - \boldsymbol{V}_{i^{\star}}^{\T} {\w}_{c, i^{\star}+i+1}' \right \|}_{\boldsymbol{\Lambda}_{i^{\star}}}^2 \notag \\ =&\: {\left \| \overline{\w}'{}^{i^{\star}}_{i+1} \right \|}_{\boldsymbol{\Lambda}_{i^{\star}}}^2 \end{align} where we introduced: \begin{align} \overline{\w}'{}^{i^{\star}}_{i+1} \triangleq \boldsymbol{V}_{i^{\star}}^{\T} \widetilde{\w}'{}^{i^{\star}}_{i+1} \end{align} Under this transformation, recursion~\eqref{eq:long_term_recursive} is also diagonalized, yielding: \begin{align} &\:\overline{\w}'{}^{i^{\star}}_{i+1} \notag \\ \ifarx \triangleq&\: \boldsymbol{V}_{i^{\star}}^{\T} \widetilde{\w}'{}^{i^{\star}}_{i+1} \notag \\ \fi =&\: \boldsymbol{V}_{i^{\star}}^{\T} \left( I - \mu \nabla^2 J( \w_{c, i^{\star}}) \right) \boldsymbol{V}_{i^{\star}} \boldsymbol{V}_{i^{\star}}^{\T} \widetilde{\w}'{}^{i^{\star}}_{i} \notag \\ &\: + \mu \boldsymbol{V}_{i^{\star}}^{\T} {\nabla} J(\w_{c, i^{\star}}) + \mu
\boldsymbol{V}_{i^{\star}}^{\T} \s_{i^{\star}+i+1} \notag \\ =&\: \left( I - \mu \boldsymbol{\Lambda}_{i^{\star}} \right) \overline{\w}'{}^{i^{\star}}_{i} + \mu \overline{\nabla} J(\w_{c, i^{\star}}) + \mu \overline{\s}_{i^{\star}+i+1} \label{eq:long_term_transformed} \end{align} with \(\overline{\nabla} J(\w_{c, i^{\star}}) \triangleq \boldsymbol{V}_{i^{\star}}^{\T} {\nabla} J(\w_{c, i^{\star}}) \) and \( \overline{\s}_{i^{\star}+i+1} \triangleq \boldsymbol{V}_{i^{\star}}^{\T} \s_{i^{\star}+i+1} \). The presence of the gradient term, which is deterministic conditioned on \( \boldsymbol{\mathcal{F}}_{i^{\star}} \), complicates the analysis of the evolution. It can be removed by (conditionally) centering the random variable. Specifically, applying the same transformation to the conditional mean recursion~\eqref{eq:conditional_mean_recursion}, and subtracting the transformed conditional mean on both sides of~\eqref{eq:long_term_transformed}, we find: \begin{align} &\:\overline{\w}'{}^{i^{\star}}_{i+1} - \E \left \{ \overline{\w}'{}^{i^{\star}}_{i+1} | \boldsymbol{\mathcal{F}}_{i^{\star}} \right \} \notag \\ =&\: \left( I - \mu \boldsymbol{\Lambda}_{i^{\star}} \right) \left( \overline{\w}'{}^{i^{\star}}_{i} - \E \left \{ \overline{\w}'{}^{i^{\star}}_{i} | \boldsymbol{\mathcal{F}}_{i^{\star}} \right \} \right) + \mu \overline{\s}_{i^{\star}+i+1} \end{align} which allows us to cancel the driving term involving the gradient.
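The basis change used here can be illustrated numerically. The sketch below, with a hypothetical dimension and step size, confirms that conjugating the one-step coefficient matrix \( I - \mu \nabla^2 J(\w_{c, i^{\star}}) \) by the eigenvector matrix produces the diagonal matrix \( I - \mu \boldsymbol{\Lambda}_{i^{\star}} \):

```python
import numpy as np

rng = np.random.default_rng(1)
M, mu = 4, 0.1                        # hypothetical dimension and step size
A = rng.standard_normal((M, M))
H = (A + A.T) / 2                     # stand-in for the (symmetric) Hessian
lam, V = np.linalg.eigh(H)            # H = V diag(lam) V^T, with V orthogonal

lhs = V.T @ (np.eye(M) - mu * H) @ V  # transformed one-step coefficient matrix
rhs = np.eye(M) - mu * np.diag(lam)   # I - mu * Lambda, diagonal

assert np.allclose(lhs, rhs)
```

Because the transformed recursion is driven by a diagonal matrix, each coordinate evolves independently, which is what makes the block-wise analysis below tractable.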
For brevity, define the (conditionally) centered random variable: \begin{equation} \check{\w}'{}^{i^{\star}}_{i+1} = \overline{\w}'{}^{i^{\star}}_{i+1} - \E \left \{ \overline{\w}'{}^{i^{\star}}_{i+1} | \boldsymbol{\mathcal{F}}_{i^{\star}} \right \} \end{equation} so that: \begin{align} \check{\w}'{}^{i^{\star}}_{i+1} = \left( I - \mu \boldsymbol{\Lambda}_{i^{\star}} \right) \check{\w}'{}^{i^{\star}}_{i} + \mu \overline{\s}_{i^{\star}+i+1} \label{eq:centered_recursive} \end{align} Before proceeding, note that we can express: \begin{align} &\: \E \left \{ {\left \| \check{\w}'{}^{i^{\star}}_{i} \right \|}_{\boldsymbol{\Lambda}_{i^{\star}}}^2 | \boldsymbol{\mathcal{F}}_{i^{\star}} \right \} \notag \\ =&\: \E \left \{ {\left \| \overline{\w}'{}^{i^{\star}}_{i} - \E \left \{ \overline{\w}'{}^{i^{\star}}_{i} | \boldsymbol{\mathcal{F}}_{i^{\star}} \right \} \right \|}_{\boldsymbol{\Lambda}_{i^{\star}}}^2 | \boldsymbol{\mathcal{F}}_{i^{\star}} \right \} \notag \\ =&\: \E \left \{ {\left \| \overline{\w}'{}^{i^{\star}}_{i} \right \|}_{\boldsymbol{\Lambda}_{i^{\star}}}^2 | \boldsymbol{\mathcal{F}}_{i^{\star}} \right \} - {\left \| \E \left \{ \overline{\w}'{}^{i^{\star}}_{i} | \boldsymbol{\mathcal{F}}_{i^{\star}} \right \} \right \|}_{\boldsymbol{\Lambda}_{i^{\star}}}^2 \end{align} Hence, we have: \begin{align} &\: \E \left \{ {\left \| \widetilde{\w}'{}^{i^{\star}}_{i} \right \|}_{\nabla^2 J(\w_{c, i^{\star}})}^2 | \boldsymbol{\mathcal{F}}_{i^{\star}} \right \} \notag \\ =&\: \E \left \{ {\left \| \overline{\w}'{}^{i^{\star}}_{i} \right \|}_{\boldsymbol{\Lambda}_{i^{\star}}}^2 | \boldsymbol{\mathcal{F}}_{i^{\star}} \right \} \notag \\ =&\: \E \left \{ {\left \| \check{\w}'{}^{i^{\star}}_{i} \right \|}_{\boldsymbol{\Lambda}_{i^{\star}}}^2 | \boldsymbol{\mathcal{F}}_{i^{\star}} \right \} + {\left \| \E \left \{ \overline{\w}'{}^{i^{\star}}_{i} | \boldsymbol{\mathcal{F}}_{i^{\star}} \right \} \right \|}_{\boldsymbol{\Lambda}_{i^{\star}}}^2 \label{eq:intermediate} \end{align} 
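Equation~\eqref{eq:intermediate} is the standard variance decomposition under a fixed weighting matrix. A minimal exact check with a hypothetical two-point random variable:

```python
import numpy as np

# Two-point random variable in R^2 with equal probabilities (hypothetical values)
vals = np.array([[1.0, 2.0], [3.0, -1.0]])
probs = np.array([0.5, 0.5])
Lam = np.diag([2.0, 0.5])              # diagonal weighting matrix Lambda

def wnorm2(x):
    """Weighted squared norm ||x||_Lam^2."""
    return float(x @ Lam @ x)

mean = probs @ vals                    # E{x}
E_full = sum(p * wnorm2(v) for p, v in zip(probs, vals))
E_centered = sum(p * wnorm2(v - mean) for p, v in zip(probs, vals))

# E ||x||^2_Lam = E ||x - E x||^2_Lam + ||E x||^2_Lam
assert np.isclose(E_full, E_centered + wnorm2(mean))
```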
In order to make claims about \( \E \left \{ {\left \| \widetilde{\w}'{}^{i^{\star}}_{i} \right \|}_{\nabla^2 J(\w_{c, i^{\star}})}^2 | \boldsymbol{\mathcal{F}}_{i^{\star}} \right \} \) by studying \( \E \left \{ {\left \| \check{\w}'{}^{i^{\star}}_{i} \right \|}_{\boldsymbol{\Lambda}_{i^{\star}}}^2 | \boldsymbol{\mathcal{F}}_{i^{\star}} \right \} \), we need to establish a bound on \( {\left \| \E \left \{ \overline{\w}'{}^{i^{\star}}_{i} | \boldsymbol{\mathcal{F}}_{i^{\star}} \right \} \right \|}_{\boldsymbol{\Lambda}_{i^{\star}}}^2 \). We have: \begin{align} &\: {\left \| \E \left \{ \overline{\w}'{}^{i^{\star}}_{i} | \boldsymbol{\mathcal{F}}_{i^{\star}} \right \} \right \|}_{\boldsymbol{\Lambda}_{i^{\star}}}^2 \notag \\ =&\: {\left \| \E \left \{ \boldsymbol{V}_{i^{\star}}^{\T} \widetilde{\w}'{}^{i^{\star}}_{i} | \boldsymbol{\mathcal{F}}_{i^{\star}} \right \} \right \|}_{\boldsymbol{\Lambda}_{i^{\star}}}^2 \notag \\ \stackrel{\eqref{eq:mean_deviation_saddle}}{=}&\:\mu^2 {\left \| \boldsymbol{V}_{i^{\star}}^{\T} \left( \sum_{k=1}^{i}{\left( I - \mu \nabla^2 J( \w_{c, i^{\star}}) \right)}^{k-1} \right) \nabla J(\w_{c, i^{\star}}) \right \|}_{\boldsymbol{\Lambda}_{i^{\star}}}^2 \notag \\ =&\:\mu^2 {\left \| \left( \sum_{k=1}^{i}{\left( I - \mu \boldsymbol{\Lambda}_{i^{\star}} \right)}^{k-1} \right) \overline{\nabla} J(\w_{c, i^{\star}}) \right \|}_{\boldsymbol{\Lambda}_{i^{\star}}}^2 \notag \\ =&\:\mu^2 {\overline{\nabla} J(\w_{c, i^{\star}}) }^{\T} \left( \sum_{k=1}^{i}{\left( I - \mu \boldsymbol{\Lambda}_{i^{\star}} \right)}^{k-1} \right) \boldsymbol{\Lambda}_{i^{\star}} \notag \\ &\: \ \ \ \ \times \left( \sum_{k=1}^{i}{\left( I - \mu \boldsymbol{\Lambda}_{i^{\star}} \right)}^{k-1} \right) \overline{\nabla} J(\w_{c, i^{\star}})\label{eq:intermediate_some_variance} \end{align} {We shall order the eigenvalues of \( \nabla^2 J(\w_{c, i^{\star}}) \), such that its eigendecomposition has a block structure: \begin{equation}\label{eq:hessian_eigendecomposition} 
\boldsymbol{V}_{i^{\star}} = \left[ \begin{array}{cc} \boldsymbol{V}_{i^{\star}}^{\ge0} & \boldsymbol{V}_{i^{\star}}^{< 0} \end{array} \right], \ \ \boldsymbol{\Lambda}_{i^{\star}} = \left[ \begin{array}{cc} \boldsymbol{\Lambda}_{i^{\star}}^{\ge0} & 0\\0 & \boldsymbol{\Lambda}_{i^{\star}}^{< 0} \end{array}\right] \end{equation} with \( \delta I \ge \boldsymbol{\Lambda}_{i^{\star}}^{\ge0} \ge 0 \) and \( \boldsymbol{\Lambda}_{i^{\star}}^{< 0} < 0 \). Note that since \( \nabla^2 J(\w_{c, i^{\star}}) \) is random, the decomposition itself is random as well. Nevertheless, it exists with probability one. We also decompose the transformed gradient vector with appropriate dimensions: \begin{equation} {\overline{\nabla} J(\w_{c, i^{\star}}) } = \mathrm{col}\left \{ {\overline{\nabla} J(\w_{c, i^{\star}}) }^{\ge0}, {\overline{\nabla} J(\w_{c, i^{\star}}) }^{<0} \right \} \end{equation} We can then decompose~\eqref{eq:intermediate_some_variance}: \begin{align} &\: {\left \| \E \left \{ \overline{\w}'{}^{i^{\star}}_{i} | \boldsymbol{\mathcal{F}}_{i^{\star}} \right \} \right \|}_{\boldsymbol{\Lambda}_{i^{\star}}}^2 \notag \\ \ifarx {=}&\:\mu^2 {\overline{\nabla} J(\w_{c, i^{\star}}) }^{\T} \left( \sum_{k=1}^{i}{\left( I - \mu \boldsymbol{\Lambda}_{i^{\star}} \right)}^{k-1} \right) \boldsymbol{\Lambda}_{i^{\star}} \notag \\ &\: \ \ \ \ \times \left( \sum_{k=1}^{i}{\left( I - \mu \boldsymbol{\Lambda}_{i^{\star}} \right)}^{k-1} \right) \overline{\nabla} J(\w_{c, i^{\star}}) \notag \\ \fi \ifarx =&\: \mu^2 {\left({\overline{\nabla} J(\w_{c, i^{\star}}) }^{\ge0}\right)}^{\T} \left( \sum_{k=1}^{i}{\left( I - \mu \boldsymbol{\Lambda}_{i^{\star}}^{\ge0} \right)}^{k-1} \right) \boldsymbol{\Lambda}_{i^{\star}}^{\ge0} \notag \\ &\: \ \ \ \ \times \left( \sum_{k=1}^{i}{\left( I - \mu \boldsymbol{\Lambda}_{i^{\star}}^{\ge0} \right)}^{k-1} \right) \overline{\nabla} J(\w_{c, i^{\star}})^{\ge0} \notag \\ &\:+ \mu^2 {\left({\overline{\nabla} J(\w_{c, i^{\star}}) }^{<0}\right)}^{\T} \left(
\sum_{k=1}^{i}{\left( I - \mu \boldsymbol{\Lambda}_{i^{\star}}^{<0} \right)}^{k-1} \right) \boldsymbol{\Lambda}_{i^{\star}}^{<0} \notag \\ &\: \ \ \ \ \times \left( \sum_{k=1}^{i}{\left( I - \mu \boldsymbol{\Lambda}_{i^{\star}}^{<0} \right)}^{k-1} \right) \overline{\nabla} J(\w_{c, i^{\star}})^{<0} \notag \\ \fi \stackrel{(a)}{\le}&\: \mu^2 {\left({\overline{\nabla} J(\w_{c, i^{\star}}) }^{\ge0}\right)}^{\T} \left( \sum_{k=1}^{i}{\left( I - \mu \boldsymbol{\Lambda}_{i^{\star}}^{\ge0} \right)}^{k-1} \right) \boldsymbol{\Lambda}_{i^{\star}}^{\ge0} \notag \\ &\: \ \ \ \ \times \left( \sum_{k=1}^{i}{\left( I - \mu \boldsymbol{\Lambda}_{i^{\star}}^{\ge0} \right)}^{k-1} \right) \overline{\nabla} J(\w_{c, i^{\star}})^{\ge0} \notag \\ \stackrel{(b)}{\le}&\: \mu^2 {\left({\overline{\nabla} J(\w_{c, i^{\star}}) }^{\ge0}\right)}^{\T} \left( \sum_{k=1}^{\infty}{\left( I - \mu \boldsymbol{\Lambda}_{i^{\star}}^{\ge0} \right)}^{k-1} \right) \boldsymbol{\Lambda}_{i^{\star}}^{\ge0} \notag \\ &\: \ \ \ \ \times \left( \sum_{k=1}^{i}{\left( I - \mu \boldsymbol{\Lambda}_{i^{\star}}^{\ge0} \right)}^{k-1} \right) \overline{\nabla} J(\w_{c, i^{\star}})^{\ge0} \notag \\ \stackrel{(c)}{=}&\: \mu^2 {\left({\overline{\nabla} J(\w_{c, i^{\star}}) }^{\ge0}\right)}^{\T} \left( \mu \boldsymbol{\Lambda}_{i^{\star}}^{\ge0} \right)^{-1} \boldsymbol{\Lambda}_{i^{\star}}^{\ge0} \notag \\ &\: \ \ \ \ \times \left( \sum_{k=1}^{i}{\left( I - \mu \boldsymbol{\Lambda}_{i^{\star}}^{\ge0} \right)}^{k-1} \right) \overline{\nabla} J(\w_{c, i^{\star}})^{\ge0} \notag \\ {=}&\: \mu{\left({\overline{\nabla} J(\w_{c, i^{\star}}) }^{\ge0}\right)}^{\T} \left( \sum_{k=1}^{i}{\left( I - \mu \boldsymbol{\Lambda}_{i^{\star}}^{\ge0} \right)}^{k-1} \right) \overline{\nabla} J(\w_{c, i^{\star}})^{\ge0} \notag \\ \stackrel{(d)}{\le}&\: \mu{\left({\overline{\nabla} J(\w_{c, i^{\star}}) }^{\ge0}\right)}^{\T} \left( \sum_{k=1}^{i}{\left( I - \mu \boldsymbol{\Lambda}_{i^{\star}}^{\ge0} \right)}^{k-1} \right) \overline{\nabla} 
J(\w_{c, i^{\star}})^{\ge0} \notag \\ &\:+ \mu{\left({\overline{\nabla} J(\w_{c, i^{\star}}) }^{<0}\right)}^{\T} \left( \sum_{k=1}^{i}{\left( I - \mu \boldsymbol{\Lambda}_{i^{\star}}^{<0} \right)}^{k-1} \right) \overline{\nabla} J(\w_{c, i^{\star}})^{<0} \notag \\ {\le}&\: \mu{{\overline{\nabla} J(\w_{c, i^{\star}}) }}^{\T} \left( \sum_{k=1}^{i}{\left( I - \mu \boldsymbol{\Lambda}_{i^{\star}}\right)}^{k-1} \right) \overline{\nabla} J(\w_{c, i^{\star}}) \notag \\ {=}&\: \mu{\left \|{\overline{\nabla} J(\w_{c, i^{\star}}) }\right \|}^{2}_{\sum_{k=1}^{i}{\left( I - \mu \boldsymbol{\Lambda}_{i^{\star}}\right)}^{k-1}}\label{eq:centered_error} \end{align} where \( (a) \) follows from \( \boldsymbol{\Lambda}_{i^{\star}}^{<0} < 0 \), \( (b) \) follows from: \begin{equation} \sum_{k=1}^{i}{\left( I - \mu \boldsymbol{\Lambda}_{i^{\star}}^{\ge0} \right)}^{k-1} \le \sum_{k=1}^{\infty}{\left( I - \mu \boldsymbol{\Lambda}_{i^{\star}}^{\ge0} \right)}^{k-1} \end{equation} for \( \mu < \frac{1}{\delta} \).
Step \( (c) \) follows from the formula for the geometric matrix series, and \( (d) \) follows from: \begin{equation} \mu{\left({\overline{\nabla} J(\w_{c, i^{\star}}) }^{<0}\right)}^{\T} \left( \sum_{k=1}^{i}{\left( I - \mu \boldsymbol{\Lambda}_{i^{\star}}^{<0} \right)}^{k-1} \right) \overline{\nabla} J(\w_{c, i^{\star}})^{<0} \ge 0 \end{equation} which holds since \( I - \mu \boldsymbol{\Lambda}_{i^{\star}}^{<0} > I \).} Comparing~\eqref{eq:centered_error} to~\eqref{eq:linear_term_bound}, we find that we can bound: \begin{align} -\E \left\{ {\nabla J(\w_{c, i^{\star}})}^{\T} \widetilde{\w}_{i}'{}^{i^{\star}} | \boldsymbol{\mathcal{F}}_{i^{\star}} \right \} + {\left \| \E \left \{ \overline{\w}'{}^{i^{\star}}_{i} | \boldsymbol{\mathcal{F}}_{i^{\star}} \right \} \right \|}_{\boldsymbol{\Lambda}_{i^{\star}}}^2 \le 0 \end{align} To recap, we can simplify~\eqref{eq:t_step_descent_app} as: \begin{align} &\: \E \left \{ J(\w_{c, i^{\star}+i}') | \boldsymbol{\mathcal{F}}_{i^{\star}} \right \} \notag \\ \le&\: J(\w_{c, i^{\star}}) + \frac{1}{2} \E \left \{ {\left \| \check{\w}'{}^{i^{\star}}_{i} \right \|}_{\boldsymbol{\Lambda}_{i^{\star}}}^2 | \boldsymbol{\mathcal{F}}_{i^{\star}} \right \} + \frac{\rho}{6} \E \left \{ {\left \| \widetilde{\w}_{i}'{}^{i^{\star}} \right \|}^3 | \boldsymbol{\mathcal{F}}_{i^{\star}} \right \}\label{eq:t_step_descent_simplified} \end{align} We proceed with the now simplified quadratic term.
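Before continuing, the geometric matrix series invoked in step \( (c) \) above, namely \( \mu \sum_{k=1}^{\infty} (I - \mu \boldsymbol{\Lambda}_{i^{\star}}^{\ge0})^{k-1} = (\boldsymbol{\Lambda}_{i^{\star}}^{\ge0})^{-1} \) when the eigenvalues are strictly positive and \( \mu \) is small enough, can be spot-checked numerically with hypothetical eigenvalues:

```python
import numpy as np

mu = 0.05                              # hypothetical step size
lam = np.array([0.3, 1.0, 4.0])        # strictly positive eigenvalues, mu*lam < 1
r = 1.0 - mu * lam                     # diagonal entries of I - mu*Lambda

K = 5000                               # truncation level for the infinite series
partial = (r[:, None] ** np.arange(K)).sum(axis=1)  # sum_{k=1}^{K} r^{k-1}

# mu * sum_{k=1}^{infty} (I - mu*Lambda)^{k-1} = Lambda^{-1}, entrywise
assert np.allclose(mu * partial, 1.0 / lam)
```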
\begin{comment} We shall order the eigenvalues of \( \nabla^2 J(\w_{c, i^{\star}}) \), such that its eigendecomposition has a block structure: \begin{equation}\label{eq:hessian_eigendecomposition} \boldsymbol{V}_{i^{\star}} = \left[ \begin{array}{cc} \boldsymbol{V}_{i^{\star}}^{\ge0} & \boldsymbol{V}_{i^{\star}}^{< 0} \end{array} \right], \ \ \boldsymbol{\Lambda}_{i^{\star}} = \left[ \begin{array}{cc} \boldsymbol{\Lambda}_{i^{\star}}^{\ge0} & 0\\0 & \boldsymbol{\Lambda}_{i^{\star}}^{< 0} \end{array}\right] \end{equation} with \( \delta I \ge \boldsymbol{\Lambda}_{i^{\star}}^{\ge0} \ge 0 \) and \( \boldsymbol{\Lambda}_{i^{\star}}^{< 0} < 0 \). Note that since \( \nabla^2 J(\w_{c, i^{\star}}) \) is random, the decomposition itself is random as well. Conditioned on \( \boldsymbol{\mathcal{F}}_{i^{\star}} \), however, it becomes deterministic. We can then decompose: \begin{align} &\: {\left \| \E \left \{ \overline{\w}'{}^{i^{\star}}_{i} | \boldsymbol{\mathcal{F}}_{i^{\star}} \right \} \right \|}_{\boldsymbol{\Lambda}_{i^{\star}}}^2 \notag \\ =&\:\mu^2 {\left\|\overline{\nabla} J(\w_{c, i^{\star}})\right\|}^2_{\left( \sum_{k=1}^{i}{\left( I - \mu \boldsymbol{\Lambda}_{i^{\star}} \right)}^{k-1} \right) \boldsymbol{\Lambda}_{i^{\star}} \left( \sum_{k=1}^{i}{\left( I - \mu \boldsymbol{\Lambda}_{i^{\star}} \right)}^{k-1} \right)} \notag \\ =&\:\mu^2 {\left\|\boldsymbol{V}_{i^{\star}}^{\ge0} {\nabla} J(\w_{c, i^{\star}})\right\|}^2_{\left( \sum_{k=1}^{i}{\left( I - \mu \boldsymbol{\Lambda}_{i^{\star}}^{\ge 0} \right)}^{k} \right)^2 \boldsymbol{\Lambda}_{i^{\star}}^{\ge 0}} \notag \\ &\: + \mu^2 {\left\|\boldsymbol{V}_{i^{\star}}^{<0} {\nabla} J(\w_{c, i^{\star}})\right\|}^2_{\left( \sum_{k=1}^{i}{\left( I - \mu \boldsymbol{\Lambda}_{i^{\star}}^{< 0} \right)}^{k} \right)^2 \boldsymbol{\Lambda}_{i^{\star}}^{< 0}} \notag \\ \le&\:\mu^2 {\left\|\boldsymbol{V}_{i^{\star}}^{\ge0} {\nabla} J(\w_{c, i^{\star}})\right\|}^2_{\left( \sum_{k=1}^{i}{\left( I - \mu 
\boldsymbol{\Lambda}_{i^{\star}}^{\ge 0} \right)}^{k} \right)^2 \boldsymbol{\Lambda}_{i^{\star}}^{\ge 0}} \notag \\ =&\:\mu^2 {\left\|\boldsymbol{V}_{i^{\star}}^{>0} {\nabla} J(\w_{c, i^{\star}})\right\|}^2_{\left( \sum_{k=1}^{i}{\left( I - \mu \boldsymbol{\Lambda}_{i^{\star}}^{> 0} \right)}^{k} \right)^2 \boldsymbol{\Lambda}_{i^{\star}}^{> 0}} \notag \\ \le&\:\mu^2 {\left\|\boldsymbol{V}_{i^{\star}}^{>0} {\nabla} J(\w_{c, i^{\star}})\right\|}^2_{\left( \sum_{k=1}^{\infty}{\left( I - \mu \boldsymbol{\Lambda}_{i^{\star}}^{> 0} \right)}^{k} \right)^2 \boldsymbol{\Lambda}_{i^{\star}}^{> 0}} \notag \\ \le&\:{\left\|\boldsymbol{V}_{i^{\star}}^{>0} {\nabla} J(\w_{c, i^{\star}})\right\|}^2_{\left(\boldsymbol{\Lambda}_{i^{\star}}^{> 0}\right)^{-1}} \notag \\ \end{align} \end{comment} We square both sides of~\eqref{eq:centered_recursive} under an arbitrary diagonal weighting matrix \( \boldsymbol{\Sigma}_i \), deterministic conditioned on \( \w_{c, i^{\star}} \) and \( \w_{c, i^{\star}+i} \), to obtain: \begin{align} &{\left \| \check{\w}'{}^{i^{\star}}_{i+1} \right \|}_{\boldsymbol{\Sigma}_i}^2 \notag \\ =&\: {\left \| \left( I - \mu \boldsymbol{\Lambda}_{i^{\star}} \right) \check{\w}'{}^{i^{\star}}_{i} + \mu \overline{\s}_{i^{\star}+i+1} \right \|}_{\boldsymbol{\Sigma}_{i}}^2 \notag \\ =&\: {\left \| \left( I - \mu \boldsymbol{\Lambda}_{i^{\star}} \right) \check{\w}'{}^{i^{\star}}_{i} \right \|}_{\boldsymbol{\Sigma}_{i}}^2 + \mu^2 {\left \| \overline{\s}_{i^{\star}+i+1} \right \|}_{\boldsymbol{\Sigma}_{i}}^2 \notag \\ &+ 2 \mu { \check{\w}'{}^{i^{\star}}_{i} }^{\T} \left( I - \mu \boldsymbol{\Lambda}_{i^{\star}} \right) \boldsymbol{\Sigma}_{i} \overline{\s}_{i^{\star}+i+1} \end{align} Note that upon conditioning on \( \boldsymbol{\mathcal{F}}_{i^{\star}+i} \), all elements of the cross-term, aside from \( \overline{\s}_{i^{\star}+i+1} \), become deterministic, and as such the term disappears when taking expectations.
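The disappearance of the cross term can be checked exactly with a symmetric two-point noise variable (hypothetical values): for zero-mean noise, the expected weighted square of the sum splits into the two weighted squares.

```python
import numpy as np

mu = 0.1
D = np.diag([0.9, 0.5])                # stand-in for I - mu*Lambda
Sig = np.diag([1.0, 2.0])              # diagonal weighting Sigma_i
x = np.array([1.0, -2.0])              # current centered iterate
s0 = np.array([0.7, 0.3])              # noise takes values +s0 / -s0, prob 1/2

def wnorm2(v):
    """Weighted squared norm ||v||_Sig^2."""
    return float(v @ Sig @ v)

# E || D x + mu*s ||_Sig^2 over the two equally likely noise realizations
lhs = 0.5 * wnorm2(D @ x + mu * s0) + 0.5 * wnorm2(D @ x - mu * s0)
# || D x ||_Sig^2 + mu^2 * E || s ||_Sig^2 : the cross term has averaged out
rhs = wnorm2(D @ x) + mu**2 * wnorm2(s0)

assert np.isclose(lhs, rhs)
```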
We obtain: \begin{align} &\: \E \left \{ {\left \| \check{\w}'{}^{i^{\star}}_{i+1} \right \|}_{\boldsymbol{\Sigma}_{i}}^2 | \boldsymbol{\mathcal{F}}_{i^{\star}+i} \right \} \notag \\ =&\: {\left \| \left( I - \mu \boldsymbol{\Lambda}_{i^{\star}} \right) \check{\w}'{}^{i^{\star}}_{i} \right \|}_{\boldsymbol{\Sigma}_{i}}^2 + \mu^2 \E \left \{ {\left \| \overline{\s}_{i^{\star}+i+1} \right \|}_{\boldsymbol{\Sigma}_{i}}^2 | \boldsymbol{\mathcal{F}}_{i^{\star}+i} \right \} \notag \\ \ifarx =&\: {\left \| \check{\w}'{}^{i^{\star}}_{i} \right \|}_{\boldsymbol{\Sigma}_{i} - 2 \mu \boldsymbol{\Lambda}_{i^{\star}} \boldsymbol{\Sigma}_{i} + \mu^2 \boldsymbol{\Lambda}_{i^{\star}} \boldsymbol{\Sigma}_{i} \boldsymbol{\Lambda}_{i^{\star}}}^2 \notag \\ &\: + \mu^2 \mathrm{Tr}\left( \boldsymbol{V}_{i^{\star}} \boldsymbol{\Sigma}_i \boldsymbol{V}_{i^{\star}}^{\T} \mathcal{R}_{s}\left( \bcw_{i^{\star}+i} \right) \right) \notag \\ \fi =&\: {\left \| \check{\w}'{}^{i^{\star}}_{i} \right \|}_{\boldsymbol{\Sigma}_{i} - 2 \mu \boldsymbol{\Lambda}_{i^{\star}} \boldsymbol{\Sigma}_{i} }^2 + \mu^2 \mathrm{Tr}\left( \boldsymbol{V}_{i^{\star}} \boldsymbol{\Sigma}_i \boldsymbol{V}_{i^{\star}}^{\T} \mathcal{R}_{s}\left( \bcw_{c, i^{\star}} \right) \right) \notag \\ &\: + \mu^2 \mathrm{Tr}\left( \boldsymbol{V}_{i^{\star}} \boldsymbol{\Sigma}_i \boldsymbol{V}_{i^{\star}}^{\T} \left( \mathcal{R}_{s}\left( \bcw_{i^{\star}+i} \right) - \mathcal{R}_{s}\left( \bcw_{c, i^{\star}} \right)\right) \right) \notag \\ &\: + \mu^2 {\left \| \check{\w}'{}^{i^{\star}}_{i} \right \|}_{\boldsymbol{\Lambda}_{i^{\star}} \boldsymbol{\Sigma}_{i} \boldsymbol{\Lambda}_{i^{\star}}}^2 \end{align} \begin{comment} where we defined: \begin{align} &\:\boldsymbol{q}_{i^{\star}+i} \notag \\ \triangleq&\: \mathrm{Tr}\left( \boldsymbol{V}_{i^{\star}} \boldsymbol{\Sigma}_i \boldsymbol{V}_{i^{\star}}^{\T} \left( \mathcal{R}_{s}\left( \bcw_{i^{\star}+i} \right) - \mathcal{R}_{s}\left( \bcw_{c, i^{\star}} \right)\right) \right) \notag 
\\ &\: + {\left \| \check{\w}'{}^{i^{\star}}_{i} \right \|}_{\boldsymbol{\Lambda}_{i^{\star}} \boldsymbol{\Sigma}_{i} \boldsymbol{\Lambda}_{i^{\star}}}^2 \notag \\ \triangleq&\: \mathrm{Tr}\left( \boldsymbol{V}_{i^{\star}} \boldsymbol{\Lambda}_{i^{\star}} {\left( I - \mu \boldsymbol{\Lambda}_{i^{\star}} \right)}^i \boldsymbol{V}_{i^{\star}}^{\T} \left( \mathcal{R}_{s}\left( \bcw_{i^{\star}+i} \right) - \mathcal{R}_{s}\left( \bcw_{c, i^{\star}} \right)\right) \right) \notag \\ &\: + {\left \| \check{\w}'{}^{i^{\star}}_{i} \right \|}_{\boldsymbol{\Lambda}_{i^{\star}} \boldsymbol{\Lambda}_{i^{\star}} {\left( I - \mu \boldsymbol{\Lambda}_{i^{\star}} \right)}^i \boldsymbol{\Lambda}_{i^{\star}}}^2 \end{align} \end{comment} We proceed to bound the last two terms. First, we have: \begin{align} &\:\mathrm{Tr}\left( \boldsymbol{V}_{i^{\star}} \boldsymbol{\Sigma}_i \boldsymbol{V}_{i^{\star}}^{\T} \left( \mathcal{R}_{s}\left( \bcw_{i^{\star}+i} \right) - \mathcal{R}_{s}\left( \bcw_{c, i^{\star}} \right)\right) \right) \notag \\ \overset{(a)}{\le}&\: \left \| \boldsymbol{V}_{i^{\star}} \boldsymbol{\Sigma}_i \boldsymbol{V}_{i^{\star}}^{\T} \right \| \left \| \mathcal{R}_{s}\left( \bcw_{i^{\star}+i} \right) - \mathcal{R}_{s}\left( \bcw_{c, i^{\star}} \right) \right \| \notag \\ \ifarx {\le}&\: \left \| \boldsymbol{V}_{i^{\star}} \boldsymbol{\Sigma}_i \boldsymbol{V}_{i^{\star}}^{\T} \right \| \| \mathcal{R}_{s}\left( \bcw_{i^{\star}+i} \right) - \mathcal{R}_{s}\left( \bcw_{c, i^{\star}+i} \right) \notag \\ &\: \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ + \mathcal{R}_{s}\left( \bcw_{c, i^{\star}+i} \right) - \mathcal{R}_{s}\left( \bcw_{c, i^{\star}} \right) \| \notag \\ \fi {\le}&\: \left \| \boldsymbol{V}_{i^{\star}} \boldsymbol{\Sigma}_i \boldsymbol{V}_{i^{\star}}^{\T} \right \| \left \| \mathcal{R}_{s}\left( \bcw_{i^{\star}+i} \right) - \mathcal{R}_{s}\left( \bcw_{c, i^{\star}+i} \right) \right \| \notag \\ &\: + \left \| \boldsymbol{V}_{i^{\star}} \boldsymbol{\Sigma}_i 
\boldsymbol{V}_{i^{\star}}^{\T} \right \| \left \| \mathcal{R}_{s}\left( \bcw_{c, i^{\star}+i} \right) - \mathcal{R}_{s}\left( \bcw_{c, i^{\star}} \right) \right \| \notag \\ \overset{(b)}{\le}&\: \rho \left( \boldsymbol{\Sigma}_i \right) \beta_R p_{\max} \left( {\left \| \w_{c, i^{\star}+i} - \w_{c, i^{\star}} \right \|}^{\gamma} + {\left \| \bcw_{c, i^{\star}+i} - \bcw_{i^{\star}+i} \right \|}^{\gamma} \right) \notag \\ {=}&\: \rho \left( \boldsymbol{\Sigma}_i \right) \beta_R p_{\max}\left( {\left \| \widetilde{\w}^{{i}^{\star}}_{i}\right \|}^{\gamma} + \left\|\bcw_{c, i^{\star}+i} - \bcw_{i^{\star}+i}\right\|^{\gamma} \right) \end{align} where \( (a) \) follows from {Cauchy-Schwarz, since \( \mathrm{Tr}(A^{\T} B) \) is an inner product over the space of symmetric matrices, and hence,} \( |\mathrm{Tr}(A^{\T} B)| \le \|A \|\|B\| \), and \( (b) \) follows from Lemma~\ref{LEM:CENTROID_COVARIANCE}. For the second term, we have: \begin{align} {\left \| \check{\w}'{}^{i^{\star}}_{i} \right \|}_{\boldsymbol{\Lambda}_{i^{\star}} \boldsymbol{\Sigma}_{i} \boldsymbol{\Lambda}_{i^{\star}}}^2 &\le \rho\left( \boldsymbol{\Lambda}_{i^{\star}} \boldsymbol{\Sigma}_{i} \boldsymbol{\Lambda}_{i^{\star}} \right) {\left \| \check{\w}'{}^{i^{\star}}_{i} \right \|}^2 \notag \\ &\le \delta^2 \rho\left( \boldsymbol{\Sigma}_{i} \right) {\left \| \check{\w}'{}^{i^{\star}}_{i} \right \|}^2 \end{align} We conclude that \begin{align} &\: \E \left \{ {\left \| \check{\w}'{}^{i^{\star}}_{i+1} \right \|}_{\boldsymbol{\Sigma}_{i}}^2 | \boldsymbol{\mathcal{F}}_{i^{\star}} \right \} \notag \\ =&\: \E \left \{ {\left \| \check{\w}'{}^{i^{\star}}_{i} \right \|}_{\boldsymbol{\Sigma}_{i} - 2 \mu \boldsymbol{\Lambda}_{i^{\star}} \boldsymbol{\Sigma}_{i} }^2 | \boldsymbol{\mathcal{F}}_{i^{\star}} \right \} \notag \\ &\: + \mu^2 \mathrm{Tr}\left( \boldsymbol{V}_{i^{\star}} \boldsymbol{\Sigma}_i \boldsymbol{V}_{i^{\star}}^{\T} \mathcal{R}_{s}\left( \bcw_{c, i^{\star}} \right) \right) + \mu^2 \rho\left(
\boldsymbol{\Sigma}_i \right) \E \left \{ \boldsymbol{q}_{i^{\star}+i} | \boldsymbol{\mathcal{F}}_{i^{\star}} \right \} \end{align} where \begin{align} \boldsymbol{q}_{i^{\star}+i} \triangleq \beta_R p_{\max}\left( {\left \| \widetilde{\w}^{{i}^{\star}}_{i}\right \|}^{\gamma} + \left\|\bcw_{c, i^{\star}+i} - \bcw_{i^{\star}+i}\right\|^{\gamma} \right) + \delta^2 {\left \| \check{\w}'{}^{i^{\star}}_{i} \right \|}^2\label{eq:perturbation_definition} \end{align} For brevity, we define \begin{align} \boldsymbol{D} &\triangleq I - 2 \mu \boldsymbol{\Lambda}_{i^{\star}} \\ \boldsymbol{Y} &\triangleq \boldsymbol{V}_{i^{\star}}^{\T} \mathcal{R}_{s}\left( \bcw_{c, i^{\star}} \right) \boldsymbol{V}_{i^{\star}} \end{align} With these substitutions we obtain: \begin{align} &\: \E \left \{ {\left \| \check{\w}'{}^{i^{\star}}_{i+1} \right \|}_{\boldsymbol{\Sigma}_{i}}^2 | \boldsymbol{\mathcal{F}}_{i^{\star}} \right \} \notag \\ =&\: \E \left \{ {\left \| \check{\w}'{}^{i^{\star}}_{i} \right \|}_{\boldsymbol{D} \boldsymbol{\Sigma}_i }^2 | \boldsymbol{\mathcal{F}}_{i^{\star}} \right \} + \mu^2 \mathrm{Tr}\left( \boldsymbol{\Sigma}_i \boldsymbol{Y} \right) \notag \\ &\: + \mu^2 \rho \left( \boldsymbol{\Sigma}_i \right) \E \left \{ \boldsymbol{q}_{i^{\star}+i} | \boldsymbol{\mathcal{F}}_{i^{\star}} \right \} \end{align} At \( i = 0 \), we have: \begin{equation} \check{\w}'{}^{i^{\star}}_{0} = \overline{\w}'{}^{i^{\star}}_{0} - \E \left \{ \overline{\w}'{}^{i^{\star}}_{0} | \boldsymbol{\mathcal{F}}_{i^{\star}} \right \} = 0 - 0 = 0 \end{equation} Letting \( \boldsymbol{\Sigma}_i = \boldsymbol{\Lambda}_{i^{\star}} \boldsymbol{D}^i \), we can iterate to obtain: \begin{align} &\: \E \left \{ {\left \| \check{\w}'{}^{i^{\star}}_{i+1} \right \|}_{\boldsymbol{\Lambda}_{i^{\star}}}^2 | \boldsymbol{\mathcal{F}}_{i^{\star}} \right \} \notag \\ \ifarx =&\: \mu^2 \sum_{n=0}^i \mathrm{Tr}\left( \boldsymbol{\Lambda}_{i^{\star}} \boldsymbol{D}^n \boldsymbol{Y} \right) \notag \\ &\:+ \mu^2 
\sum_{n=0}^i \rho\left( \boldsymbol{\Lambda}_{i^{\star}} \boldsymbol{D}^n \right) \cdot \E \left \{ \boldsymbol{q}_{i^{\star}+n} | \boldsymbol{\mathcal{F}}_{i^{\star}} \right \} \notag \\ \fi =&\: \mu^2 \mathrm{Tr}\left( \boldsymbol{\Lambda}_{i^{\star}} \left( \sum_{n=0}^i \boldsymbol{D}^n \right) \boldsymbol{Y} \right) \notag \\ &\:+ \mu^2 \sum_{n=0}^i \rho\left( \boldsymbol{\Lambda}_{i^{\star}} \boldsymbol{D}^n \right) \cdot \E \left \{ \boldsymbol{q}_{i^{\star}+n} | \boldsymbol{\mathcal{F}}_{i^{\star}} \right \}\label{eq:centered_error_recursion} \end{align} since \( \check{\w}'{}^{i^{\star}}_{0} = 0 \). Our objective is to show that the first term on the right-hand side yields sufficient descent (i.e., will be sufficiently negative), while the second term is small enough to be negligible. To this end, we again make use of the structured eigendecomposition~\eqref{eq:hessian_eigendecomposition}. We have: \ifarx \begin{align} &\: \mu^2 \mathrm{Tr}\left( \boldsymbol{\Lambda}_{i^{\star}} \left( \sum_{n=0}^i \boldsymbol{D}^n \right) \boldsymbol{V}_{i^{\star}}^{\T} \mathcal{R}_{s}\left( \bcw_{c, i^{\star}} \right) \boldsymbol{V}_{i^{\star}} \right) \notag \\ \stackrel{(a)}{=}&\: \mu^2 \mathrm{Tr}\Bigg( \boldsymbol{\Lambda}_{i^{\star}}^{\ge0} \left( \sum_{n=0}^i {\left( I - 2 \mu \boldsymbol{\Lambda}_{i^{\star}}^{\ge0} \right)}^n \right) \notag \\ &\: \ \ \ \ \ \ \ \ \times {\left(\boldsymbol{V}_{i^{\star}}^{\ge0}\right)}^{\T} \mathcal{R}_{s}\left( \bcw_{c, i^{\star}} \right) \boldsymbol{V}_{i^{\star}}^{\ge0} \Bigg) \notag \\ &\:+ \mu^2 \mathrm{Tr}\Bigg( \boldsymbol{\Lambda}_{i^{\star}}^{< 0} \left( \sum_{n=0}^i {\left( I - 2 \mu \boldsymbol{\Lambda}_{i^{\star}}^{< 0} \right)}^n \right) \notag \\ &\: \ \ \ \ \ \ \ \ \ \ \ \ \times {\left(\boldsymbol{V}_{i^{\star}}^{< 0}\right)}^{\T} \mathcal{R}_{s}\left( \bcw_{c, i^{\star}} \right) \boldsymbol{V}_{i^{\star}}^{< 0} \Bigg) \notag \\ \stackrel{(b)}{=}&\: \mu^2
\mathrm{Tr}\Bigg( \boldsymbol{\Lambda}_{i^{\star}}^{\ge0} \left( \sum_{n=0}^i {\left( I - 2 \mu \boldsymbol{\Lambda}_{i^{\star}}^{\ge0} \right)}^n \right) \notag \\ &\: \ \ \ \ \ \ \ \ \times {\left(\boldsymbol{V}_{i^{\star}}^{\ge0}\right)}^{\T} \mathcal{R}_{s}\left( \bcw_{c, i^{\star}} \right) \boldsymbol{V}_{i^{\star}}^{\ge0} \Bigg) \notag \\ &\:- \mu^2 \mathrm{Tr}\Bigg( \left(-\boldsymbol{\Lambda}_{i^{\star}}^{< 0}\right) \left( \sum_{n=0}^i {\left( I - 2 \mu \boldsymbol{\Lambda}_{i^{\star}}^{< 0} \right)}^n \right) \notag \\ &\: \ \ \ \ \ \ \ \ \ \ \ \times {\left(\boldsymbol{V}_{i^{\star}}^{< 0}\right)}^{\T} \mathcal{R}_{s}\left( \bcw_{c, i^{\star}} \right) \boldsymbol{V}_{i^{\star}}^{< 0} \Bigg) \notag \\ \stackrel{(c)}{\le}&\: \mu^2 \mathrm{Tr}\left( \boldsymbol{\Lambda}_{i^{\star}}^{\ge0} \left( \sum_{n=0}^i {\left( I - 2 \mu \boldsymbol{\Lambda}_{i^{\star}}^{\ge0} \right)}^n \right) \right) \notag \\ &\:\times \lambda_{\max}\left({\left(\boldsymbol{V}_{i^{\star}}^{\ge0}\right)}^{\T} \mathcal{R}_{s}\left( \bcw_{c, i^{\star}} \right) \boldsymbol{V}_{i^{\star}}^{\ge0} \right) \notag \\ &\:- \mu^2 \mathrm{Tr}\left( \left(-\boldsymbol{\Lambda}_{i^{\star}}^{< 0}\right) \left( \sum_{n=0}^i {\left( I - 2 \mu \boldsymbol{\Lambda}_{i^{\star}}^{< 0} \right)}^n \right) \right) \notag \\ &\:\times \lambda_{\min}\left({\left(\boldsymbol{V}_{i^{\star}}^{< 0}\right)}^{\T} \mathcal{R}_{s}\left( \bcw_{c, i^{\star}} \right) \boldsymbol{V}_{i^{\star}}^{< 0} \right) \notag \\ \stackrel{(d)}{\le}&\: \mu^2 \mathrm{Tr}\left( \boldsymbol{\Lambda}_{i^{\star}}^{\ge0} \left( \sum_{n=0}^i {\left( I - 2 \mu \boldsymbol{\Lambda}_{i^{\star}}^{\ge0} \right)}^n \right) \right) \sigma_u^2 \notag \\ &\:- \mu^2 \mathrm{Tr}\left( \left(-\boldsymbol{\Lambda}_{i^{\star}}^{< 0}\right) \left( \sum_{n=0}^i {\left( I - 2 \mu \boldsymbol{\Lambda}_{i^{\star}}^{< 0} \right)}^n \right) \right) \sigma_{\ell}^2 \label{eq:previous_term_5646346} \end{align} where in \( (a) \) we decomposed the trace since 
\( \boldsymbol{\Lambda}_{i^{\star}} \left( \sum_{n=0}^i \boldsymbol{D}^n \right) \) is a diagonal matrix, \( (b) \) applies \( - \left( - \boldsymbol{\Lambda}_{i^{\star}}^{< 0} \right) = \boldsymbol{\Lambda}_{i^{\star}}^{< 0} \). \else\begin{align} &\: \mu^2 \mathrm{Tr}\left( \boldsymbol{\Lambda}_{i^{\star}} \left( \sum_{n=0}^i \boldsymbol{D}^n \right) \boldsymbol{V}_{i^{\star}}^{\T} \mathcal{R}_{s}\left( \bcw_{c, i^{\star}} \right) \boldsymbol{V}_{i^{\star}} \right) \notag \\ \stackrel{(a)}{=}&\: \mu^2 \mathrm{Tr}\Bigg( \boldsymbol{\Lambda}_{i^{\star}}^{\ge0} \left( \sum_{n=0}^i {\left( I - 2 \mu \boldsymbol{\Lambda}_{i^{\star}}^{\ge0} \right)}^n \right) \notag \\ &\: \ \ \ \ \ \ \ \ \times {\left(\boldsymbol{V}_{i^{\star}}^{\ge0}\right)}^{\T} \mathcal{R}_{s}\left( \bcw_{c, i^{\star}} \right) \boldsymbol{V}_{i^{\star}}^{\ge0} \Bigg) \notag \\ &\:- \mu^2 \mathrm{Tr}\Bigg( \left(-\boldsymbol{\Lambda}_{i^{\star}}^{< 0}\right) \left( \sum_{n=0}^i {\left( I - 2 \mu \boldsymbol{\Lambda}_{i^{\star}}^{< 0} \right)}^n \right) \notag \\ &\: \ \ \ \ \ \ \ \ \ \ \ \times {\left(\boldsymbol{V}_{i^{\star}}^{< 0}\right)}^{\T} \mathcal{R}_{s}\left( \bcw_{c, i^{\star}} \right) \boldsymbol{V}_{i^{\star}}^{< 0} \Bigg) \notag \\ \stackrel{(b)}{\le}&\: \mu^2 \mathrm{Tr}\left( \boldsymbol{\Lambda}_{i^{\star}}^{\ge0} \left( \sum_{n=0}^i {\left( I - 2 \mu \boldsymbol{\Lambda}_{i^{\star}}^{\ge0} \right)}^n \right) \right) \notag \\ &\:\times \lambda_{\max}\left({\left(\boldsymbol{V}_{i^{\star}}^{\ge0}\right)}^{\T} \mathcal{R}_{s}\left( \bcw_{c, i^{\star}} \right) \boldsymbol{V}_{i^{\star}}^{\ge0} \right) \notag \\ &\:- \mu^2 \mathrm{Tr}\left( \left(-\boldsymbol{\Lambda}_{i^{\star}}^{< 0}\right) \left( \sum_{n=0}^i {\left( I - 2 \mu \boldsymbol{\Lambda}_{i^{\star}}^{< 0} \right)}^n \right) \right) \notag \\ &\:\times \lambda_{\min}\left({\left(\boldsymbol{V}_{i^{\star}}^{< 0}\right)}^{\T} \mathcal{R}_{s}\left( \bcw_{c, i^{\star}} \right) \boldsymbol{V}_{i^{\star}}^{< 0} \right) \notag \\ 
\stackrel{(c)}{\le}&\: \mu^2 \mathrm{Tr}\left( \boldsymbol{\Lambda}_{i^{\star}}^{\ge0} \left( \sum_{n=0}^i {\left( I - 2 \mu \boldsymbol{\Lambda}_{i^{\star}}^{\ge0} \right)}^n \right) \right) \sigma_u^2 \notag \\ &\:- \mu^2 \mathrm{Tr}\left( \left(-\boldsymbol{\Lambda}_{i^{\star}}^{< 0}\right) \left( \sum_{n=0}^i {\left( I - 2 \mu \boldsymbol{\Lambda}_{i^{\star}}^{< 0} \right)}^n \right) \right) \sigma_{\ell}^2 \label{eq:previous_term_5646346} \end{align} where in \( (a) \) we decomposed the trace since \( \boldsymbol{\Lambda}_{i^{\star}} \left( \sum_{n=0}^i \boldsymbol{D}^n \right) \) is a diagonal matrix and applied \( - \left( - \boldsymbol{\Lambda}_{i^{\star}}^{< 0} \right) = \boldsymbol{\Lambda}_{i^{\star}}^{< 0} \). \fi% Step \( (b) \) follows from \( \mathrm{Tr}(A) \lambda_{\min}(B) \le \mathrm{Tr}(AB) \le \mathrm{Tr}(A) \lambda_{\max}(B) \), which holds for \( A = A^{\T} \ge 0 \) and \( B = B^{\T} \), and \( (c) \) follows from the bounded covariance property~\eqref{eq:bounded_covariance} and Assumption~\ref{as:noise_in_saddle}.
For the positive term, we have: \begin{align} &\: \mu^2 \mathrm{Tr}\left( \boldsymbol{\Lambda}_{i^{\star}}^{\ge0} \left( \sum_{n=0}^i {\left( I - 2 \mu \boldsymbol{\Lambda}_{i^{\star}}^{\ge0} \right)}^n \right) \right) \sigma_u^2 \notag \\ \stackrel{(a)}{\le}&\: \mu^2 \mathrm{Tr}\left( \boldsymbol{\Lambda}_{i^{\star}}^{\ge0} \left( \sum_{n=0}^{\infty} {\left( I - 2 \mu \boldsymbol{\Lambda}_{i^{\star}}^{\ge0} \right)}^n \right) \right) \sigma_u^2 \notag \\ \stackrel{(b)}{\le}&\: \mu^2 \mathrm{Tr}\left( \boldsymbol{\Lambda}_{i^{\star}}^{\ge0} {\left( 2 \mu \boldsymbol{\Lambda}_{i^{\star}}^{\ge0} \right)}^{-1} \right) \sigma_u^2 \stackrel{(c)}{\le}\: \frac{\mu}{2} M \sigma_u^2 \end{align} where \( (a) \) follows since \( I - 2 \mu \boldsymbol{\Lambda}_{i^{\star}}^{\ge0} \) is elementwise non-negative for \( \mu \le \frac{1}{2\delta} \), \( (b) \) follows from \( \sum_{n=0}^{\infty} A^n = {\left( I - A \right)}^{-1} \) and \( (c) \) follows since \( \nabla^2 J(\w_{c, i^{\star}}) \) is of dimension \( M \).
For the negative term, we have under expectation conditioned on \( \w_{c, i^{\star}} \in \mathcal{H} \): \begin{align} &\: \E \Bigg \{ \mathrm{Tr}\left( \left(-\boldsymbol{\Lambda}_{i^{\star}}^{< 0}\right) \left( \sum_{n=0}^i {\left( I - 2 \mu \boldsymbol{\Lambda}_{i^{\star}}^{< 0} \right)}^n \right) \right) \sigma_{\ell}^2 \Bigg | \w_{c, i^{\star}} \in \mathcal{H} \Bigg \} \notag \\ \stackrel{(a)}{\ge}&\: \E \left \{ \tau \left( \sum_{n=0}^{i} {\left( 1 + 2 \mu \tau \right)}^n \right) \sigma_{\ell}^2 \Bigg | \w_{c, i^{\star}} \in \mathcal{H} \right \} \notag \\ \stackrel{(b)}=&\: \tau \left( \sum_{n=0}^{i} {\left( 1 + 2 \mu \tau \right)}^n \right) \sigma_{\ell}^2 \stackrel{(c)}{=}\: \tau \frac{1 - {\left( 1 + 2\mu\tau \right)}^{i+1}}{1 - (1 + 2 \mu \tau)} \sigma_{\ell}^2 \notag \\ {=}&\: \frac{1}{2\mu} \left({\left( 1 + 2\mu\tau \right)}^{i+1} - 1 \right) \sigma_{\ell}^2 \end{align} Step \( (a) \) makes use of the fact that \( \left(-\boldsymbol{\Lambda}_{i^{\star}}^{< 0}\right) \left( \sum_{n=0}^i {\left( I - 2 \mu \boldsymbol{\Lambda}_{i^{\star}}^{< 0} \right)}^n \right) \) is a diagonal matrix, where all elements are non-negative. Hence, its trace can be bounded from below by any one of its diagonal elements: \begin{align} &\: \mathrm{Tr}\left( \left(-\boldsymbol{\Lambda}_{i^{\star}}^{< 0}\right) \left( \sum_{n=0}^i {\left( I - 2 \mu \boldsymbol{\Lambda}_{i^{\star}}^{< 0} \right)}^n \right) \right) \notag \\ \stackrel{\eqref{eq:define_h}}{\ge}&\: \tau \left( \sum_{n=0}^{i} {\left( 1 + 2 \mu \tau \right)}^n \right) \end{align} In \( (b) \) we dropped the expectation since the expression is no longer random, and \( (c) \) is the result of a geometric series.
We return to the full expression~\eqref{eq:previous_term_5646346} and find: \begin{align} &\: \mu^2 \E \Bigg \{ \mathrm{Tr}\Bigg( \boldsymbol{\Lambda}_{i^{\star}} \left( \sum_{n=0}^i \boldsymbol{D}^n \right) \notag \\ &\: \ \ \ \ \ \ \ \ \ \ \ \ \ \ \times \boldsymbol{V}_{i^{\star}}^{\T} \mathcal{R}_{s}\left( \bcw_{c, i^{\star}} \right) \boldsymbol{V}_{i^{\star}} \Bigg) | \w_{c, i^{\star}} \in \mathcal{H} \Bigg \} \notag \\ \le&\: \frac{\mu}{2} M \sigma_u^2 - \frac{\mu}{2} \left({\left( 1 + 2\mu\tau \right)}^{i+1} - 1 \right) \sigma_{\ell}^2 \stackrel{(a)}{\le}\:-\frac{\mu}{2}M \sigma_u^2 \end{align} where \( (a) \) holds if, and only if, \begin{align} \:& \frac{\mu}{2} M \sigma_u^2 - \frac{\mu}{2} \left({\left( 1 + 2\mu\tau \right)}^{i+1} - 1 \right) \sigma_{\ell}^2 \le -\frac{\mu}{2}M \sigma_u^2\notag \\ \Longleftrightarrow \:& 2 M \frac{\sigma_u^2}{\sigma_{\ell}^2} + 1 \le {\left( 1 + 2\mu\tau \right)}^{i+1} \notag \\ \Longleftrightarrow \:& \log\left(2 M \frac{\sigma_u^2}{\sigma_{\ell}^2} + 1\right) \le (i+1){\log{\left( 1 + 2\mu\tau \right)}} \notag \\ \Longleftrightarrow \:& \frac{\log\left(2 M \frac{\sigma_u^2}{\sigma_{\ell}^2} + 1\right)}{\log{\left( 1 + 2\mu\tau \right)}} \le {i+1} \notag \\ \Longleftrightarrow \:& \frac{\log\left(2 M \frac{\sigma_u^2}{\sigma_{\ell}^2} + 1\right)}{O(\mu\tau)} \le {i+1} \end{align} where the last line follows from \( \lim_{x \to 0} 1/x\log(1+x) = 1 \).
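The implied escape time can be made concrete with a short numerical sketch; all parameter values below are placeholders chosen only to illustrate the formula, not values taken from the analysis:

```python
import math

M = 10            # problem dimension (placeholder)
sigma_u2 = 1.0    # noise variance in the ascent directions (placeholder)
sigma_l2 = 1.0    # noise variance in the descent directions (placeholder)
mu = 1e-3         # step size (placeholder)
tau = 1.0         # magnitude of the most negative Hessian eigenvalue (placeholder)

# smallest i such that (1 + 2*mu*tau)**(i + 1) >= 2*M*sigma_u2/sigma_l2 + 1
target = 2.0 * M * sigma_u2 / sigma_l2 + 1.0
i_s = math.ceil(math.log(target) / math.log(1.0 + 2.0 * mu * tau)) - 1

assert (1.0 + 2.0 * mu * tau) ** (i_s + 1) >= target  # condition met at i_s
assert (1.0 + 2.0 * mu * tau) ** i_s < target         # ...but not one step earlier
```

Because \( \log(1+2\mu\tau) = O(\mu\tau) \), the resulting iteration count scales like \( \log(M)/(\mu\tau) \), in line with the last display.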
We conclude that there exists a bounded \( i^{s} \) such that: \begin{align} &\: \mu^2 \E \left\{ \mathrm{Tr}\left( \boldsymbol{\Lambda}_{i^{\star}} \left( \sum_{n=0}^{i^s} \boldsymbol{D}^n \right) \boldsymbol{V}_{i^{\star}}^{\T} \mathcal{R}_{s}\left( \bcw_{c, i^{\star}} \right) \boldsymbol{V}_{i^{\star}} \right) \right \} \notag \\ \le&\: - \frac{\mu}{2} M \sigma_u^2 \end{align} Applying this relation to~\eqref{eq:centered_error_recursion} and {taking expectations over \( \w_{c, i^{\star}} \in \mathcal{H} \), we obtain:} \begin{align} &\: \E\left \{ {\left \| \check{\w}'{}^{i^{\star}}_{i^s+1} \right \|}_{\boldsymbol{\Lambda}_{i^{\star}}}^2 | \w_{c, i^{\star}} \in \mathcal{H} \right\} \notag \\ \le&\: \mu^2 \sum_{n=0}^{i^s} \E\left \{ \left( \mathrm{Tr}\left( \boldsymbol{\Lambda}_{i^{\star}} \boldsymbol{D}^n \right) \cdot \E \left \{ \boldsymbol{q}_{i^{\star}+n} | \boldsymbol{\mathcal{F}}_{i^{\star}} \right \} \right) | \w_{c, i^{\star}} \in \mathcal{H} \right\} \notag \\ &\: - \frac{\mu}{2} M \sigma_u^2 \end{align} We now bound the perturbation term: \begin{align} &\:\mu^2 \sum_{n=0}^{i^s} \E\left \{ \left( \rho\left( \boldsymbol{\Lambda}_{i^{\star}} \boldsymbol{D}^n \right) \cdot \E \left \{ \boldsymbol{q}_{i^{\star}+n} | \boldsymbol{\mathcal{F}}_{i^{\star}} \right \} \right) | \w_{c, i^{\star}} \in \mathcal{H} \right \} \notag \\ \le&\:\mu^2 \sum_{n=0}^{i^s} \E \left \{ \left( \rho\left( \delta I {\left( I + 2 \mu \delta I \right)}^n \right) \cdot \E \left \{ \boldsymbol{q}_{i^{\star}+n} | \boldsymbol{\mathcal{F}}_{i^{\star}} \right \} \right) | \w_{c, i^{\star}} \in \mathcal{H} \right \} \notag \\ =&\:\mu^2 \sum_{n=0}^{i^s} \left( \delta {\left( 1 + 2 \mu \delta \right)}^n \cdot \E \left\{ \boldsymbol{q}_{i^{\star}+n} | \w_{c, i^{\star}} \in \mathcal{H} \right\} \right)\notag \\ \stackrel{\eqref{eq:perturbation_definition}}{=}&\:\mu^2 \sum_{n=0}^{i^s} \delta {\left( 1 + 2 \mu \delta \right)}^n \cdot \Bigg( \beta_R p_{\max}\Big( \E \left\{ {\left \| 
\widetilde{\w}^{{i}^{\star}}_{i}\right \|}^{\gamma}| \w_{c, i^{\star}} \in \mathcal{H} \right \} \notag \\ &\: \ \ \ + \E \left \{ \left\|\bcw_{c, i^{\star}+i} - \bcw_{i^{\star}+i}\right\|^{\gamma} | \w_{c, i^{\star}} \in \mathcal{H} \right \} \Big) \notag \\ &\: \ \ \ + \delta^2 \E \left \{ {\left \| \check{\w}'{}^{i^{\star}}_{i} \right \|}^2 | \w_{c, i^{\star}} \in \mathcal{H} \right \} \Bigg) \notag \\ \ifarx \le&\:\mu^2 \sum_{n=0}^{i^s} \delta {\left( 1 + 2 \mu \delta \right)}^n \cdot \left( O(\mu^{\gamma}) + \frac{O(\mu^{\gamma})}{\pi_{i^{\star}}^{\mathcal{H}}} + O(\mu^2) \right) \notag \\ \fi \le&\: \delta \left( \sum_{n=0}^{i^s} {\left( 1 + 2 \mu \delta \right)}^n\right) \left( O(\mu^{2+\gamma}) + \frac{O(\mu^{2+\gamma})}{\pi_{i^{\star}}^{\mathcal{H}}} \right) \notag \\ \stackrel{(a)}{\le}&\: O(\mu^{1+\gamma}) + \frac{O(\mu^{1+\gamma})}{\pi_{i^{\star}}^{\mathcal{H}}} =\: o(\mu) + \frac{o(\mu)}{\pi_{i^{\star}}^{\mathcal{H}}} \end{align} where \( (a) \) follows from~\cite[Lemma 3]{Vlaski19nonconvexP1}.
We conclude: \begin{align} \E \left\{{\left \| \check{\w}'{}^{i^{\star}}_{i^s+1} \right \|}_{\boldsymbol{\Lambda}_{i^{\star}}}^2 | \w_{c, i^{\star}} \in \mathcal{H} \right\} \le - \frac{\mu}{2} M \sigma_u^2 + o(\mu) + \frac{o(\mu)}{\pi_{i^{\star}}^{\mathcal{H}}} \label{eq:centered_deviation_bound} \end{align} Returning to~\eqref{eq:t_step_descent_simplified}, we find: \begin{align} &\: \E \left \{ J(\w_{c, i^{\star}+i}') | \w_{c, i^{\star}} \in \mathcal{H} \right\} \notag \\ \le&\: \E \left \{J(\w_{c, i^{\star}}) | \w_{c, i^{\star}} \in \mathcal{H} \right\} + \frac{1}{2} \E \left \{ {\left \| \check{\w}'{}^{i^{\star}}_{i} \right \|}_{\boldsymbol{\Lambda}_{i^{\star}}}^2 | \w_{c, i^{\star}} \in \mathcal{H} \right \} \notag \\ &\: + \frac{\rho}{6} \E \left \{ {\left \| \widetilde{\w}_{i}'{}^{i^{\star}} \right \|}^3 | \w_{c, i^{\star}} \in \mathcal{H} \right \} \notag \\ \le&\: \E \left \{J(\w_{c, i^{\star}}) | \w_{c, i^{\star}} \in \mathcal{H} \right\} - \frac{\mu}{2} M \sigma_u^2 + o(\mu) + \frac{o(\mu)}{\pi_{i^{\star}}^{\mathcal{H}}} \end{align} \section{Proof of Theorem~\ref{TH:FINAL_THEOREM}}\label{AP:FINAL_THEOREM} The proof follows by constructing a particular telescoping sum and subsequently applying~\cite[Theorem 2]{Vlaski19nonconvexP1} and Theorem~\ref{TH:DESCENT_THROUGH_SADDLE_POINTS}. To begin with, we define the stochastic process: \begin{equation} \mathbf{t}(k+1) = \begin{cases} \mathbf{t}(k) + 1, \ &\mathrm{if} \ \w_{c, \mathbf{t}(k)} \in \mathcal{G}, \\ \mathbf{t}(k) + 1, \ &\mathrm{if} \ \w_{c, \mathbf{t}(k)} \in \mathcal{M}, \\ \mathbf{t}(k) + i^s, \ &\mathrm{if} \ \w_{c, \mathbf{t}(k)} \in \mathcal{H}, \end{cases} \end{equation} where \( \mathbf{t}(0) = 0 \).
We then have: \begin{align} &\: \E \left \{ {J(\w_{c, \mathbf{t}(k)}) - J(\w_{c, \mathbf{t}(k+1)})} | \w_{c, \mathbf{t}(k)} \in \mathcal{G} \right \} \notag \\ =&\: \E \left \{ {J(\w_{c, \mathbf{t}(k)}) - J(\w_{c, \mathbf{t}(k)+1})} | \w_{c, \mathbf{t}(k)} \in \mathcal{G} \right \} \notag \\ \ge&\: \mu^2 \frac{c_2}{\pi} - O(\mu^3) - \frac{O(\mu^3)}{\pi_i^{\mathcal{G}}} \end{align} and \begin{align} &\: \E \left \{ {J(\w_{c, \mathbf{t}(k)}) - J(\w_{c, \mathbf{t}(k+1)})} | \w_{c, \mathbf{t}(k)} \in \mathcal{H} \right \} \notag \\ =&\: \E \left \{ {J(\w_{c, \mathbf{t}(k)}) - J(\w_{c, \mathbf{t}(k)+i^s})} | \w_{c, \mathbf{t}(k)} \in \mathcal{H} \right \} \notag \\ {\ge}&\: \frac{\mu}{2} M \sigma_u^2 - o(\mu) - \frac{o(\mu)}{\pi_i^{\mathcal{H}}} \end{align} Finally, we have: \begin{align} &\: \E \left \{ {J(\w_{c, \mathbf{t}(k)}) - J(\w_{c, \mathbf{t}(k+1)})} | \w_{c, \mathbf{t}(k)} \in \mathcal{M} \right \} \notag \\ \stackrel{(a)}{=}&\: \E \left \{ {J(\w_{c, \mathbf{t}(k)}) - J(\w_{c, \mathbf{t}(k)+1})} | \w_{c, \mathbf{t}(k)} \in \mathcal{M} \right \} \notag \\ {\ge}&\: - \mu^2 c_2 - O(\mu^3) - \frac{O(\mu^3)}{\pi_i^{\mathcal{M}}} \end{align} where \( (a) \) follows since \( \mathbf{t}(k+1) - \mathbf{t}(k) = 1 \) when \( \w_{c, \mathbf{t}(k)} \in \mathcal{M} \).
We can combine these relations to obtain: \begin{align} &\: \E \left \{ {J(\w_{c, \mathbf{t}(k)}) - J(\w_{c, \mathbf{t}(k+1)})} \right \} \notag \\ =&\: \E \left \{ {J(\w_{c, \mathbf{t}(k)}) - J(\w_{c, \mathbf{t}(k+1)})} | \w_{c, \mathbf{t}(k)} \in \mathcal{G} \right \} \cdot \pi_{\mathbf{t}(k)}^{\mathcal{G}} \notag \\ &\:+ \E \left \{ {J(\w_{c, \mathbf{t}(k)}) - J(\w_{c, \mathbf{t}(k+1)})} | \w_{c, \mathbf{t}(k)} \in \mathcal{H} \right \} \cdot \pi_{\mathbf{t}(k)}^{\mathcal{H}} \notag \\ &\:+ \E \left \{ {J(\w_{c, \mathbf{t}(k)}) - J(\w_{c, \mathbf{t}(k+1)})} | \w_{c, \mathbf{t}(k)} \in \mathcal{M} \right \} \cdot \pi_{\mathbf{t}(k)}^{\mathcal{M}} \notag \\ \ge&\: \left( \mu^2 \frac{c_2}{\pi} - O(\mu^3) - \frac{O(\mu^3)}{\pi_i^{\mathcal{G}}} \right) \cdot \pi_{\mathbf{t}(k)}^{\mathcal{G}} \notag \\ &\:+ \left( \frac{\mu}{2} M \sigma_u^2 - o(\mu) - \frac{o(\mu)}{\pi_i^{\mathcal{H}}} \right) \cdot \pi_{\mathbf{t}(k)}^{\mathcal{H}} \notag \\ &\:+ \left( - \mu^2 c_2 - O(\mu^3) - \frac{O(\mu^3)}{\pi_i^{\mathcal{M}}} \right) \cdot \pi_{\mathbf{t}(k)}^{\mathcal{M}} \notag \\ =&\: \mu^2 \frac{c_2}{\pi} \cdot \pi_{\mathbf{t}(k)}^{\mathcal{G}} + \left(\frac{\mu}{2} M \sigma_u^2 - o(\mu) \right) \cdot \pi_{\mathbf{t}(k)}^{\mathcal{H}} \notag \\ &\: - \mu^2 c_2 \cdot \pi_{\mathbf{t}(k)}^{\mathcal{M}} - o(\mu^2) \end{align} Suppose \( \pi_{\mathbf{t}(k)}^{\mathcal{M}} \le 1 - \pi \) for all \( k \).
Then \( \pi_{\mathbf{t}(k)}^{\mathcal{G}} + \pi_{\mathbf{t}(k)}^{\mathcal{H}} \ge \pi \) for all \( k \), and \begin{align} &\: \E \left \{ {J(\w_{c, \mathbf{t}(k)}) - J(\w_{c, \mathbf{t}(k+1)})} \right \} \notag \\ \ge&\: \mu^2 \frac{c_2}{\pi} \cdot \left( \pi - \pi_{\mathbf{t}(k)}^{\mathcal{H}} \right)+ \left( \frac{\mu}{2} M \sigma_u^2 - o(\mu) \right) \cdot \pi_{\mathbf{t}(k)}^{\mathcal{H}} \notag \\ &\: - \mu^2 c_2 \cdot \left( 1 - \pi \right) - o(\mu^2) \notag \\ =&\: \mu^2 {c_2}\pi +\left( \frac{\mu}{2} M \sigma_u^2 - \mu^2 \frac{c_2}{\pi} - o(\mu)\right) \pi_{\mathbf{t}(k)}^{\mathcal{H}} - o(\mu^2) \notag \\ \stackrel{(a)}{\ge}&\: \mu^2 {c_2}\pi - o(\mu^2) \end{align} where \( (a) \) holds whenever \( \frac{\mu}{2} M \sigma_u^2 - \mu^2 \frac{c_2}{\pi} - o(\mu) \ge 0 \), which holds whenever \( \mu \) is sufficiently small. We hence have by telescoping: \begin{align} &\: J(w_{c, 0}) - J^o \notag \\ \ge&\: \E J(w_{c, \mathbf{t}(0)}) - \E J(\w_{c, \mathbf{t}(k)}) \notag \\ =&\: \E J(w_{c, \mathbf{t}(0)}) - \E J(\w_{c, \mathbf{t}(1)}) \notag \\ &\: + \E J(\w_{c, \mathbf{t}(1)}) - \E J(\w_{c, \mathbf{t}(2)}) \notag \\ &\: + \cdots \notag \\ &\: + \E J(\w_{c, \mathbf{t}(k-1)}) - \E J(\w_{c, \mathbf{t}(k)}) \notag \\ \ge&\: \mu^2 c_2 \pi k \end{align} Rearranging yields: \begin{equation} k \le \frac{J(w_{c, 0}) - J^o}{\mu^2 c_2 \pi} \end{equation} We conclude by definition of the stochastic process \( \mathbf{t}(k) \): \begin{equation} i = \mathbf{t}(k) \le k \cdot i^s \le \frac{\left( J(w_{c, 0}) - J^o \right)}{\mu^2 c_2 \pi} i^s \end{equation} \bibliographystyle{IEEEbib}
\section{Introduction} \label{sec:intro} The discovery of neutrino mass is one of the first clear cases of physics beyond the Standard Model. Neutrino oscillation is now firmly established as the leading mechanism for flavor changes in neutrinos~\cite{solar,kl,atm,chooz,k2k,minos,limits}. This has triggered substantial interest in precision measurements of neutrino properties, with a special emphasis on oscillation experiments. A new generation of long baseline~\cite{t2k,nova} and reactor~\cite{reactor} experiments is currently trying to measure $\theta_{13}$. This is considered to be the first step towards a large-scale experimental program of long baseline neutrino experiments which aims at determining the neutrino mass ordering and studying leptonic CP violation, see {\it e.g.}~\cite{iss,issphysics}. Due to experimental constraints on the available neutrino flavors in both source and detector, the most promising oscillation channels are the ones where a muon (anti-)neutrino oscillates into an electron (anti-)neutrino or {\it vice versa}. In spite of using both neutrinos and anti-neutrinos, a serious problem with all long baseline experiments involving these channels arises from discrete degeneracies, which manifest themselves in three forms: the ($\theta_{13},\delta_{\mathrm{CP}}$) intrinsic degeneracy \cite{intrinsic}, the ($sgn(\Delta m^2_{31}),\delta_{\mathrm{CP}}$) degeneracy \cite{minadeg}, and the ($\theta_{23},\pi/2-\theta_{23}$) degeneracy \cite{th23octant}. This leads to an eight-fold degeneracy \cite{eight}, with several degenerate solutions in addition to the true one. The presence of these degenerate solutions can severely limit the sensitivity of an experiment. In this paper, we will consider a beta-beam, as originally proposed in~\cite{zucc}: a flavor-pure beam of electron (anti-)neutrinos from the beta decay of ions of short-lived isotopes, which are accelerated to a Lorentz factor $\gamma>100$.
This idea has attracted a large community and there is a vast literature on the many different options for such a facility~\cite{volpe,book_betabeam,agarwalla,cernmemphys,paper1,shortnote, optimization,pee,two_baseline,rparity,sanjib_vt1,oldpapers,donini130,doninibeta, newdonini,bc1,bc2,fnal,betaoptim,doninialter,boulby}. We present a study of a beta-beam facility at FNAL. A somewhat similar idea has been explored in~\cite{fnal}: there, however, the emphasis was on possible synergies between the new facility and NO$\nu$A. We, on the other hand, focus on a full-fledged, stand-alone beta-beam. We will use $^{18}$Ne~ (for ${\nu_e}$) and $^6$He~ (for ${\bar\nu_e}$) ions. The maximum Lorentz boost factors using the Tevatron are $\gamma_{\rm Ne} = 585$ and $\gamma_{\rm He} = 350$, which yield energy spectra peaking at $2.3\,\mathrm{GeV}$ and $1.4\,\mathrm{GeV}$, respectively. Our detector is located at DUSEL~\cite{dusel,duselwhite,uslongbaseline} at a distance of $L \simeq 1300\,\mathrm{km}$ from FNAL. We propose to use a $300\,\mathrm{kt}$ WC detector or a $50\,\mathrm{kt}$ LArTPC as possible detector candidates. The first and second oscillation maxima for the FNAL - DUSEL baseline are at $2.5\,\mathrm{GeV}$ and $0.8\,\mathrm{GeV}$ for $\Delta m^2_{31} = 2.4 \cdot 10^{-3}\,\mathrm{eV}^2$. Therefore, this set-up provides a unique opportunity to work at the first oscillation maximum using the ${\nu_e}$ beam, while the peak energy of the ${\bar\nu_e}$ beam is very close to the second oscillation maximum. The paper is organized as follows. We begin with a brief description of the FNAL based beta-beam facility in Section~2. In Section~3, we deal with the relevant oscillation probabilities. In the following section (Section~4) we describe the characteristics of the WC detector and the LArTPC; also, we introduce the superbeam which will be used for comparison. We also present the expected event rates for these two detectors.
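The oscillation-maximum energies quoted above follow from the standard vacuum phase condition \( 1.267\, \Delta m^2_{31}[\mathrm{eV}^2]\, L[\mathrm{km}]/E[\mathrm{GeV}] = (2n-1)\pi/2 \); a quick numerical check (an illustrative sketch, independent of the full simulation described later):

```python
import math

DM31 = 2.4e-3      # Delta m^2_31 in eV^2
BASELINE = 1300.0  # km, FNAL - DUSEL

def e_osc_max(n, dm2=DM31, length=BASELINE):
    """Energy in GeV of the n-th vacuum oscillation maximum."""
    return 2.0 * 1.267 * dm2 * length / ((2 * n - 1) * math.pi)

first, second = e_osc_max(1), e_osc_max(2)
assert abs(first - 2.5) < 0.1   # ~2.5 GeV, close to the nu_e spectral peak
assert abs(second - 0.8) < 0.1  # ~0.8 GeV, probed by the nubar_e beam
```

This is why the chosen boosts, with spectra peaking at 2.3 and 1.4 GeV, map naturally onto the first and second maxima, respectively.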
The details of our numerical technique and analysis procedure are presented in Section~5. In Section~6, we present our results and provide a summary. \section{Fermilab based Beta-beam} \label{sec:beta-beam} \begin{figure}[t] \includegraphics[width=0.6\textwidth]{flux.eps} \mycaption{\label{fig:flux} The un-oscillated beta-beam flux spectrum arriving at a detector placed at DUSEL at a distance of $1300\,\mathrm{km}$ from FNAL. The red solid line shows the ${\nu_e}$ spectrum generated from $^{18}$Ne~ with $\gamma = 585$. The black dashed curve depicts the ${\bar\nu_e}$ spectrum originating from $^6$He~ with $\gamma = 350$. The blue dot-dashed and the brown double-dot-dashed vertical lines show the locations of the first and second oscillation maxima for the FNAL - DUSEL baseline corresponding to $\Delta m^2_{31} = 2.4 \cdot 10^{-3}\,\mathrm{eV}^2$.} \end{figure} The concept of a beta-beam was proposed in~\cite{zucc}: a pure, intense, collimated beam of electron neutrinos or their antiparticles from the beta-decay of completely ionized unstable radioactive ions circulating in a decay ring. The first stage of a beta-beam consists of protons with an energy of a few GeV which impinge on a spallation target. The resulting neutrons then interact in a secondary target to produce the desired unstable isotopes, in our case $^{18}$Ne~ and $^6$He~ (see table~\ref{tab:ions}). Both are noble gases and can therefore easily diffuse out of the secondary target, where they are collected, ionized and bunched. They can then be accelerated and injected into a storage ring with long straight sections. The decay of the highly boosted ions in the straight sections then yields an intense, well collimated, flavor-pure electron (anti-)neutrino beam of known flux and spectrum, see {\it e.g.}~\cite{lindroos,betabeampage}. The feasibility of this proposal and its physics reach are being studied in great detail~\cite{book_betabeam}.
The set-up we propose will take full advantage of existing accelerator facilities at FNAL~\cite{fnal}. The main limitation is the maximum energy per nucleon the Tevatron can achieve. The maximum values of $\gamma$ available for the ions considered here, using the Tevatron as accelerator, are $\gamma_{\rm Ne} = 585$ and $\gamma_{\rm He} = 350$. We have also studied the possibility to use $^8$B and $^8$Li. These two isotopes have much larger endpoint energies, of more than $10\,\mathrm{MeV}$, and would thus allow the peak energies of both the neutrino and anti-neutrino spectra to lie around the first oscillation maximum. However, since they would require a much lower $\gamma$, they would yield a considerably smaller neutrino flux. We have performed a full numerical study for these ions for the full range of available $\gamma$ and found that their performance is always worse than for $^6$He~ and $^{18}$Ne~. In a similar fashion, we have optimized the $\gamma$ for $^6$He~ and $^{18}$Ne~, and the values chosen above represent the optimum in terms of physics sensitivities at the given baseline. Since ion production requires a proton beam power of only about $50\,\mathrm{kW}$, the currently available proton intensities may be sufficient. A new decay ring of about $14.7\,\mathrm{km}$ in circumference (assuming $5\,\mathrm{T}$ magnetic field and $36\%$ useful decay fraction), however, needs to be built to store the ions. \begin{table}[t] \begin{center} \begin{tabular}{||c||c||c||c||c||c||c||} \hline \hline Ion & t$_{1/2}$ (s) & E$_0$ (MeV) & Total useful decays & $\gamma$ (Tevatron) & Beam & E$^{\rm peak} _{\rm lab}$ (GeV) \\ \hline $^{18} _{10}$Ne & 1.67 & 3.92 & $5\cdot(1.1 \cdot 10^{18})$ & 585 & $\nu_{e}$ & 2.3 \\ \hline $^6 _2$He & 0.81 & 4.02 & $5\cdot(2.9 \cdot 10^{18})$ & 350 & $\bar\nu_{e}$ & 1.4 \\ \hline \hline \end{tabular} \caption{\label{tab:ions} The beta-decay parameters, half-life t$_{1/2}$ and electron total end-point energy E$_0$, are shown in the first two columns \cite{beta}.
In the third column, we list the total number of useful ion decays considered in this work. The maximum values of $\gamma$ available for these ions using the Tevatron are listed in column four. The peak energies of the ${\nu_e}$ and ${\bar\nu_e}$ spectra in the lab frame are shown in the last column.} \end{center} \end{table} While the shape of the beam spectrum depends on the end-point energy E$_0$ and the Lorentz boost $\gamma$ of the parent ions, the flux normalization is controlled by the number of useful ion decays per year in one of the straight sections of the storage ring, N$_{\beta}$. Table~\ref{tab:ions} summarizes the relevant properties of these ions, the assumed luminosities, and the choices of $\gamma$. We have assumed $1.1 \cdot 10^{18}$ ($\nu_e$) and $2.9\cdot 10^{18}$ ($\bar{\nu}_e$) useful ion decays per year for $^{18}$Ne~ and $^6$He~ ions, respectively~\cite{beamnorm}. Figure~\ref{fig:flux} shows the un-oscillated beta-beam flux reaching a detector placed at the DUSEL site at a distance of $1300\,\mathrm{km}$ from FNAL. We see from this figure that the ${\nu_e}$ (${\bar\nu_e}$) spectrum peaks at $2.3\,(1.4)\,\mathrm{GeV}$. The first and second oscillation maxima for the FNAL - DUSEL baseline are at $2.5\,\mathrm{GeV}$ and $0.8\,\mathrm{GeV}$ for $\Delta m^2_{31} = 2.4 \cdot 10^{-3}\,\mathrm{eV}^2$. Therefore, the ${\nu_e}$ spectrum is well suited for the first oscillation maximum whereas the ${\bar\nu_e}$ flux is sensitive to the second oscillation maximum. \section{The $\mathbf{P_{e\mu}}$ oscillation channel} \label{sec:prob} The simulation work presented in this paper is based on the full three-flavor neutrino oscillation probabilities in matter, using the Preliminary Reference Earth Model for the Earth matter density~\cite{prem}.
However, to explain the nature of neutrino oscillations as a function of baseline and/or neutrino energy, it is crucial to use an approximate analytic expression for $P_{e\mu}$ in matter \cite{msw1,msw2,msw3}, keeping terms only up to second order in the small quantities $\theta_{13}$ and $\alpha \equiv \Delta m^2_{21}/\Delta m^2_{31}$ \cite{golden,freund} \begin{eqnarray} P_{e\mu} &\simeq& {\underbrace{\sin^2\theta_{23} \sin^22\theta_{13} \frac{\sin^2[(1-\hat{A})\Delta]}{(1-\hat{A})^2} + \alpha^2 \cos^2\theta_{23} \sin^22\theta_{12} \frac{\sin^2(\hat{A}\Delta)}{{\hat{A}}^2}}_{T_0}} \nonumber \\ &\pm& {\underbrace{\alpha \sin2\theta_{13} \sin2\theta_{12} \sin2\theta_{23} \sin(\Delta) \frac{\sin(\hat{A}\Delta)}{\hat{A}} \frac{\sin[(1-\hat{A})\Delta]}{(1-\hat{A})}}_{T_-}} \sin\delta_{\mathrm{CP}} \nonumber \\ &+& {\underbrace{\alpha \sin2\theta_{13} \sin2\theta_{12} \sin2\theta_{23} \cos(\Delta) \frac{\sin(\hat{A}\Delta)}{\hat{A}} \frac{\sin[(1-\hat{A})\Delta]}{(1-\hat{A})}}_{T_+}} \cos\delta_{\mathrm{CP}} , \label{eq:pemu} \end{eqnarray} where \begin{eqnarray} \Delta\equiv \frac{\Delta m^2_{31} L}{4E}, ~~ \hat{A} \equiv \frac{A}{\Delta m^2_{31}}, ~~ A=\pm 2\sqrt{2}G_FN_eE. \label{eq:matt} \end{eqnarray} Here, $A$ is the matter potential, expressed in terms of the electron density $N_e$ and the (anti)neutrino energy $E$; the `$+$' sign refers to neutrinos whereas the `$-$' sign refers to anti-neutrinos.
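For illustration, the approximate expression above can be transcribed directly into code. The sketch below uses representative oscillation parameter values and an assumed average matter density for this baseline; these numbers are illustrative only and are not inputs of our simulation, which uses the full three-flavor probabilities:

```python
import math

def p_emu(E, L, dm31=2.4e-3, dm21=7.6e-5, sin2_2th13=0.1, th12=0.59,
          th23=math.pi / 4, delta_cp=0.0, rho_ye=1.4, antinu=False):
    """Approximate P(nu_e -> nu_mu) in matter, 2nd order in alpha and theta_13.

    E in GeV, L in km, mass splittings in eV^2; rho_ye = rho * Y_e in g/cm^3
    (an assumed average of ~1.4 for this baseline).
    """
    sign = -1.0 if antinu else 1.0
    alpha = dm21 / dm31
    delta = 1.267 * dm31 * L / E                 # Delta = dm31 * L / (4E)
    a_hat = sign * 7.63e-5 * rho_ye * E / dm31   # A/dm31, A = 2*sqrt(2)*G_F*N_e*E

    s2t13 = math.sqrt(sin2_2th13)                # sin(2*theta_13)
    s2t12, s2t23 = math.sin(2.0 * th12), math.sin(2.0 * th23)
    f = math.sin((1.0 - a_hat) * delta) / (1.0 - a_hat)
    g = math.sin(a_hat * delta) / a_hat

    t0 = (math.sin(th23) ** 2 * s2t13 ** 2 * f ** 2
          + alpha ** 2 * math.cos(th23) ** 2 * s2t12 ** 2 * g ** 2)
    t_pm = alpha * s2t13 * s2t12 * s2t23 * f * g
    return (t0 + sign * t_pm * math.sin(delta) * math.sin(delta_cp)
            + t_pm * math.cos(delta) * math.cos(delta_cp))

# near the first (nu_e) and second (nubar_e) maxima of the FNAL - DUSEL baseline
assert 0.0 < p_emu(2.5, 1300.0) < 0.2
assert 0.0 < p_emu(1.4, 1300.0, antinu=True) < 0.2
```

Note the sign flip of both the matter potential and the \( \sin\delta_{\mathrm{CP}} \) term for anti-neutrinos, exactly as in the expression above.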
\section{Event Rates at the Detectors at DUSEL} \label{sec:event} \begin{table}[t] \begin{center} \begin{tabular}{||c||c||c||} \hline\hline \multicolumn{1}{||c||}{{\rule[0mm]{0mm}{6mm}\multirow{2}{*}{Detector Characteristics}}} & \multicolumn{1}{|c||}{\rule[-3mm]{0mm}{6mm}{WC}} & \multicolumn{1}{|c||}{\rule[-3mm]{0mm}{6mm}{LArTPC}} \cr & (Both $\mu^{\pm}$ \& $e^{\pm}$) & (Both $\mu^{\pm}$ \& $e^{\pm}$) \cr & (Only QE Sample) & (QE \& IE Sample) \cr \hline\hline Fiducial Mass & $300\,\mathrm{kt}$ & $50\,\mathrm{kt}$ \cr\hline Energy Threshold & WC:~A/WC:~B & 0.2 GeV \cr \hline Detection Efficiency ($\epsilon$) & 80\% & 80\% \cr \hline \multirow{2}{*}{Energy Resolution ($\delta E$) (GeV)}& \multirow{2}{*}{0.085+0.05$\sqrt{\rm E/GeV}$} & 0.085+0.05$\sqrt{\rm E/GeV}$ for QE Sample \cr & & 0.085+0.2$\sqrt{\rm E/GeV}$ for IE Sample \cr \hline Bin Size & 0.2 GeV & 0.2 GeV \cr \hline Background Rejection & WC:~A/WC:~B & 10$^{-3}$/10$^{-4}$ \cr \hline Signal error (syst.) & 2.5\% & 2.5\% \cr \hline Background error (syst.) & 5\% & 5\% \cr \hline\hline \end{tabular} \caption{\label{tab:detector} Detector characteristics used in the simulations. The bin size is kept fixed, while the number of bins is varied according to the maximum energy. We use two different simulation methods (WC:~A \& WC:~B) to treat the backgrounds in WC detector. Details can be found in Section~4.} \end{center} \end{table} Currently, two technologies for large underground neutrino detection are considered for DUSEL: either a $300\,\mathrm{kt}$ WC detector or a $50\,\mathrm{kt}$ LArTPC~\cite{dusel,uslongbaseline}. In the following, we will describe their different properties as far as needed for this work and we summarize them in table~\ref{tab:detector}. We do not consider backgrounds due to atmospheric neutrinos for either detector. 
The timing information of the ion bunches turns out to be sufficient to reduce these backgrounds down to an insignificant level, see, {\it e.g.}, the appendix of~\cite{two_baseline}. \subsection{Water Cherenkov~ Detector} \label{sec:wc} The water Cherenkov~ technology is well understood on a large scale~\cite{atm} and has demonstrated its excellent capability to distinguish muons from electrons. In a beta-beam, the appearance signal consists of muons from charged current $\nu_\mu$ interactions; these have the advantage of being easier to distinguish from neutral current (NC) events than electrons. Nonetheless, NC events, in particular those involving one or several neutral pions, are problematic and constitute the major source of background. Above the pion production threshold, the background level depends on how well neutral pions can be identified and distinguished from muons. Here, we consider only quasi-elastic (QE) charged current events for the appearance signal. We assume 80\% detection efficiency, $\epsilon$, for both muon and electron QE events (see table~\ref{tab:detector}). More precisely, we should consider single ring events, {\it i.e.}, events characterized by only one charged particle being above the Cherenkov~ threshold. For the NC background, this signature can be mimicked if the two photons from a $\pi^0$ decay are so close that the subsequent Cherenkov~ rings cannot be separated. This is more likely to happen for energetic $\pi^0$ since the opening angle between the two photons is determined by the Lorentz $\gamma$ of the parent particle. Note that identifying the quasi-elastic events with the single ring events is a reasonable approximation for the signal events. For the spectral analysis, we include Fermi motion via a constant term of width $85\,\mathrm{MeV}$~\cite{t2k} in the resolution function.
The energy resolution for muons and electrons is $5\%\,\sqrt{\rm E/GeV}$, and the resulting width of the Gaussian energy resolution function is the sum of both terms. As mentioned previously, for NC background events there is no simple, physically accurate approximation to their rate. Therefore, we will resort to two different phenomenological parametrizations. The first one, method A, is based on an actual Super-K based Monte Carlo simulation, whereas the second one, method B, assumes an energy-independent NC rejection efficiency. In method A, labeled as `WC:~A', we follow the results presented in~\cite{ishihara}, where the authors study the performance of a water Cherenkov~ detector in a beta-beam using the current simulation and analysis tools developed for the Super-Kamiokande experiment. Results are derived for a baseline of $700\,(130)\,\mathrm{km}$, taking $\gamma = 350\,(100)$ for both $^{18}$Ne~ and $^6$He~ ions. As far as the background is concerned, the main outcome of this study is that the major background events in the search for the ${\nu_{\mu}}$ appearance signal are indeed NC interactions involving pions. In a NC interaction, the outgoing neutrino carries a large and generally unknown fraction of the incoming neutrino energy. Therefore, those NC events which pass the single ring selection criteria tend to be reconstructed with an energy much lower than the true, incoming neutrino energy. It turns out that, because of this, it is possible to maximize the signal-to-background ratio by imposing a cut in reconstructed energy. Unfortunately, reference~\cite{ishihara} only presents results for $\gamma=100$ and $\gamma=350$; therefore, we extrapolate these results to the values of $\gamma$ of interest for this work.
We assume that the average energy of a mis-reconstructed NC event, {\it i.e.}, one which passes the muon single ring selection, is proportional to the incoming, true neutrino energy \emph{and} that the proportionality constant does not change appreciably over the energy range considered. This intuition is inspired by the form of the differential NC cross section $d\sigma/d y$, where $y=E_q/E$, with $E$ being the energy of the incoming neutrino and $E_q$ the energy transferred to the target. Therefore, we will assume that the energy cut which effectively removes the NC background is proportional to the average true neutrino event energy $\langle E \rangle_{NC}$ \begin{equation} \langle E \rangle_{NC} = {\int \phi(E) E \sigma_{NC}(E) dE \over \int \phi(E) \sigma_{NC}(E) dE} \, , \label{eq:nc2} \end{equation} where $\phi(E)$ is the (anti-)neutrino beta-beam flux produced at the source and $\sigma_{NC}(E)$ is the NC cross-section. We define a so-called threshold factor, $T_f$, which is different for neutrinos and anti-neutrinos, but independent of $\gamma$ \begin{equation} E_T = T_f \langle E \rangle_{NC} \, , \label{eq:nc1} \end{equation} with $E_T$ being the threshold in reconstructed energy above which there is no NC background left. It is straightforward to compute $\langle E \rangle_{NC} = 1.57\,(1.69)\,\mathrm{GeV}$ for $^{18}$Ne~ ($^6$He~) ions with $\gamma = 350$. From reference~\cite{ishihara}, we find that $E_T=1\,\mathrm{GeV}$ is sufficient for $\gamma = 350$ to eliminate all the backgrounds for both neutrinos and anti-neutrinos. Using equation~\ref{eq:nc1}, we obtain \begin{equation} T_f = 0.64\,(0.59) \quad\text{for $^{18}$Ne~ ($^6$He~)} \, . \label{eq:nc3} \end{equation} Reference~\cite{ishihara} also provides results for $\gamma=100$, and we can use these to test our assumption that $T_f$ is independent of $\gamma$. For $\gamma = 100$, we have $\langle E \rangle_{NC} = 0.48\,(0.5)\,\mathrm{GeV}$ for $^{18}$Ne~ ($^6$He~) ions.
Now using the values of $T_f$ given by equation~\ref{eq:nc3}, we get $E_T = 0.3\,\mathrm{GeV}$ for both $^{18}$Ne~ \& $^6$He~ ions, which matches exactly the value of $E_T$ obtained in~\cite{ishihara} for $\gamma = 100$. This nicely demonstrates that the parametrization in terms of $T_f$ and $\langle E \rangle_{NC}$ works over a reasonably large range of energies. With $\gamma = 585$ for $^{18}$Ne~ ions we have $\langle E \rangle_{NC} = 2.62\,\mathrm{GeV}$. Now using the value $T_f = 0.64$ given in equation~\ref{eq:nc3}, we obtain $E_T = 1.7\,\mathrm{GeV}$. Summarizing, for method A, we use a threshold energy of $1.7\,(1)\,\mathrm{GeV}$ for $\gamma = 585\,(350)$ for both neutrinos and anti-neutrinos and assume that there is no NC background above $E_T$. In the second method, labeled as `WC:~B', we take a threshold of $0.2\,\mathrm{GeV}$, which is close to the production threshold for muons, and take the background to be $10^{-3}$ of the NC rate. The shape of this background is identical to $\phi(E)\sigma_\mathrm{NC}(E)$ and, for simplicity, we use the same energy resolution function to smear the NC backgrounds that we use for QE signal events. The two simulation methods (WC:~A \& WC:~B) have quite different impacts on the sensitivity, which we discuss in detail in Section~6. For our superbeam results, which are shown for comparison only, we use the identical setup as in Section~10 of~\cite{uslongbaseline}. \subsection{Liquid Argon Time Projection Chamber} A LArTPC exploits the fact that, by applying an electric field, free electrons created by the passage of an ionizing particle can be drifted over large distances, $\mathcal{O}(\mathrm{m})$, without distortion. This allows one to obtain three projections of the particle track by just reading out the surface of the volume. For this reason, it seems feasible to build very large LArTPCs at a reasonable cost. 
The three projections of the track can be used to reconstruct the three-dimensional path of the particle with an accuracy of a few mm. The largest LArTPC built to date is the ICARUS T600 module~\cite{t600} with a mass of $600\,\mathrm{t}$. It is recognized that scaling T600 up by at least two orders of magnitude requires considerable R\&D. Since this technology is still in its R\&D phase, much less is known about its performance. Thus, what we present here are essentially educated guesses~\cite{bonnie}. We divide the signal events into samples of QE and inelastic (IE) events because energy reconstruction works quite differently for these two event classes\footnote{We assume that QE and IE events are fully uncorrelated with each other and that they can be cleanly separated.}. For QE events there are only very few tracks, typically the muon and the proton, thus it is feasible to perform a full kinematic analysis. Therefore, QE events will have a rather good energy resolution. IE events, on the other hand, will produce a large number of tracks, which the detector will not be able to separate and reconstruct individually. Most likely, only the muon track and a shower-like object can be identified; while the muon is still reconstructed quite well, only summary information, {\it e.g.} the total charge, will be available for the shower. Thus, the shower energy resolution will be much worse than the muon resolution. Therefore, we use two different energy resolution functions for QE and IE events~\cite{bonnie} \begin{eqnarray} \delta E_{QE}(E)&=&\left(0.085+0.05\sqrt{E/\mathrm{GeV}}\right)\,\mathrm{GeV}\quad\text{for QE}\,,\\ \delta E_{IE}(E)&=&\left(0.085+0.2\sqrt{E/\mathrm{GeV}}\right)\,\mathrm{GeV}\quad\text{for IE }\,. \end{eqnarray} We use an 80\% detection efficiency, $\epsilon$, for both muon and electron QE and IE events (see table \ref{tab:detector}). 
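For concreteness, the two resolution functions can be written as a small sketch (energies in GeV; the function names are ours):

```python
import math

def delta_E_QE(E_GeV):
    """QE energy resolution in GeV: 0.085 + 0.05 * sqrt(E/GeV)."""
    return 0.085 + 0.05 * math.sqrt(E_GeV)

def delta_E_IE(E_GeV):
    """IE energy resolution in GeV: 0.085 + 0.2 * sqrt(E/GeV),
    i.e. a four times larger stochastic term for the shower."""
    return 0.085 + 0.2 * math.sqrt(E_GeV)
```

For energies above roughly $1\,\mathrm{GeV}$ the IE smearing is more than twice the QE one, which is why QE events carry most of the spectral information.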
We calculate the sensitivity using two different values, 10$^{-3}$ and 10$^{-4}$, for the background rejection factor. The present status of the simulation study of the LArTPC at the DUSEL site can be found in Section~10 of \cite{uslongbaseline}. As we divide the signal events into two parts, we divide the NC backgrounds into two parts as well. At a given energy $E$, the NC backgrounds, which are relevant to estimate the sensitivity, are calculated in the following way \begin{eqnarray} { \rm (NC)}^E_{total} = {\underbrace{{\rm (NC)}^E_{\rm total} \times \frac{\sigma_{\rm QE}}{\sigma_{\rm total}}}_{{\rm (NC)}^E_{\rm QE}}} + {\underbrace{{\rm (NC)}^E_{\rm total} \times \frac{\sigma_{\rm IE}} {\sigma_{\rm total}}}_{{\rm (NC)}^E_{\rm IE}}} , \label{eq:bkg} \end{eqnarray} where (NC)$^E_{\rm QE}$ and (NC)$^E_{\rm IE}$ are the NC backgrounds\footnote{The (NC)$^E_{\rm QE}$ and (NC)$^E_{\rm IE}$ backgrounds are smeared using the same energy resolution functions that we use for QE and IE signal events respectively. Again, this choice is justified more by its simplicity than by its realism.} applicable for QE and IE events respectively. $\sigma_{\rm QE}$, $\sigma_{\rm IE}$ and $\sigma_{\rm total}$ are the relevant neutrino interaction cross-sections. \subsection{Event Rates} The number of (anti-)muon events\footnote{In principle, both detectors are also sensitive to electron (positron) events. The number of electron events can be calculated using equation~\ref{eq:events} by making appropriate changes to the oscillation probability and cross-sections. However, as was noted in \cite{pee}, electron disappearance has hardly any sensitivity to $\theta_{13}$ and mass ordering at short baselines like the one under discussion. 
Also, it does not depend on $\delta_{\mathrm{CP}}$.} in the $i$-th energy bin in the detector is given by \begin{eqnarray} N_{i} = \frac{T\, n_n\, \epsilon}{4\pi L^2}~ \int_0^{E_{\rm max}} dE \int_{E_{A_i}^{\rm min}}^{E_{A_i}^{\rm max}} dE_A \,\phi(E) \,\sigma_{\nu_{\mu}}(E) \,R(E,E_A)\, P_{e\mu}(E) \, , \label{eq:events} \end{eqnarray} where $T$ is the total running time, $n_n$ is the number of target nucleons in the detector, $\epsilon$ is the detector efficiency and $R(E,E_A)$ is the Gau\ss ian energy resolution function of the detector. For muon (anti-muon) events, $\sigma_{\nu_{\mu}}$ is the neutrino (anti-neutrino) interaction cross-section. The quantities $E$ and $E_A$ are the true and reconstructed (anti-)neutrino energies respectively and $L$ is the baseline. \begin{figure}[t] \includegraphics[width=0.49\textwidth]{18ne_g585_wc.eps} \includegraphics[width=0.49\textwidth]{6he_g350_wc.eps} \vskip0.5cm \includegraphics[width=0.49\textwidth]{18ne_g585_LAr.eps} \includegraphics[width=0.49\textwidth]{6he_g350_LAr.eps} \mycaption{\label{fig:event} Total event rates in five years as a function of $\sin^2 2\theta_{13}$ in the FNAL - DUSEL set-up for $^{18}$Ne~ (${\nu_e}$) with $\gamma = 585$ and $^6$He~ (${\bar\nu_e}$) with $\gamma = 350$ are shown. The upper panels are for a $300\,\mathrm{kt}$ WC detector, while the lower ones are for a $50\,\mathrm{kt}$ LArTPC. Results are depicted for four different values of $\delta_{\mathrm{CP}}$: -90$^\circ$, 0$^\circ$, 90$^\circ$, and 180$^\circ$. Normal hierarchy has been assumed. For all other oscillation parameters we use the values given in equation~\ref{eq:central}.} \end{figure} We simulate the signal event spectrum using equation~\ref{eq:events} for our assumed true\footnote{We distinguish between the ``true'' values of the oscillation parameters, which are used to compute the data, and their fitted values. 
Throughout this paper we denote the true value of a parameter by putting ``(true)'' after the symbol for the parameter.} values for the set of oscillation parameters as given in equation~\ref{eq:central}. The left-hand panels of figure~\ref{fig:event} show the number of neutrino events expected at DUSEL from a five-year exposure to the $^{18}$Ne~ beta-beam from FNAL with $\gamma = 585$, while the right-hand panels show the anti-neutrino events from a five-year run of the $^6$He~ beta-beam with $\gamma = 350$. Results are presented as a function of $\sin^2 2\theta_{13}$ for four different values of $\delta_{\mathrm{CP}}$: -90$^\circ$, 0$^\circ$, 90$^\circ$, and 180$^\circ$, assuming normal mass hierarchy. The upper panels are for a $300\,\mathrm{kt}$ WC detector, while the lower panels are for a $50\,\mathrm{kt}$ LArTPC. The number of events (both for $^{18}$Ne~ and $^6$He~) varies over a wide range with the choice of $\delta_{\mathrm{CP}}$. Out of the four different choices of $\delta_{\mathrm{CP}}$, the maximum (minimum) number of events for neutrinos is obtained with 90$^\circ$ (-90$^\circ$), irrespective of the choice of $\sin^2 2\theta_{13}$. For anti-neutrinos, the same is true with $\delta_{\mathrm{CP}}\rightarrow -\delta_{\mathrm{CP}}$. We can explain this fact with the help of equation~\ref{eq:pemu}. At the FNAL - DUSEL baseline, matter effects are small and hence $\hat{A}\ll 1$ in equation~\ref{eq:matt}. The oscillation probabilities can be expressed as $P_{e\mu} \simeq T_0 + T_- \sin \delta_{\mathrm{CP}} + T_+ \cos \delta_{\mathrm{CP}}$ and $P_{\bar{e}\bar{\mu}} \simeq T_0 - T_- \sin \delta_{\mathrm{CP}} + T_+ \cos \delta_{\mathrm{CP}}$, where $T_0, T_\pm$ are independent of $\delta_{\mathrm{CP}}$, whence the symmetry is manifest. For most energies, $T_-$ is positive assuming normal hierarchy. 
Now for $\delta_{\mathrm{CP}}=90^\circ$ (-90$^\circ$) the term $T_- \sin \delta_{\mathrm{CP}}$ gives a positive (negative) contribution to the probability for neutrinos. The opposite is true for anti-neutrinos. The behavior of the number of events with $\sin^2 2\theta_{13}$ is also understandable. As we increase the value of $\sin^2 2\theta_{13}$ from a near-zero value, the second and third terms in equation~\ref{eq:pemu}, which are linear in $\sin 2\theta_{13}$ and depend on $\delta_{\mathrm{CP}}$, begin to contribute first. Beyond a certain value of $\sin^2 2\theta_{13}$, the first term, which does not depend on $\delta_{\mathrm{CP}}$, takes over, leading to the rise of all curves, but the relative size of CP effects decreases. This is the reason why the discovery of leptonic CP violation does not become any easier at large $\sin^2 2\theta_{13}$. As far as the total number of neutrino events is concerned, the performance of the two detectors differs less than the factor of six difference in detector masses would suggest (see the upper and lower left panels of figure~\ref{fig:event}). The reason is that for the WC detector we consider only QE events, while for the LArTPC we take into account both QE and IE events. At the same time, the IE cross section is larger than the QE one at the relevant energies, see figure~\ref{fig:flux}. \subsection{Reference set-up} \label{sec:wbb} In order to compare the beta-beam set-up to alternative possibilities at FNAL, we introduce the wide-band beam concept. This project has been studied in detail in~\cite{wbb,uslongbaseline}. Here, a conventional neutrino beam will be sent from FNAL to a $300\,\mathrm{kt}$ WC in DUSEL. This beam has a spectrum wide enough to cover the first and second oscillation maxima and can therefore resolve most degeneracies. 
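Where these oscillation maxima lie in energy can be estimated with a simple vacuum-oscillation sketch (matter effects are neglected here, and the function below is our own illustration, not part of the analysis code):

```python
import math

def osc_max_energy(dm31_eV2, L_km, n=1):
    """Energy in GeV of the n-th vacuum oscillation maximum,
    defined by 1.27 * dm31[eV^2] * L[km] / E[GeV] = (2n - 1) * pi / 2."""
    return 1.27 * dm31_eV2 * L_km / ((2 * n - 1) * math.pi / 2)

dm31 = 2.4e-3  # eV^2, the central value used in this work

E1 = osc_max_energy(dm31, 1300, n=1)  # first maximum at the FNAL - DUSEL baseline
E2 = osc_max_energy(dm31, 1300, n=2)  # second maximum
```

For the $1300\,\mathrm{km}$ baseline this gives roughly $2.5\,\mathrm{GeV}$ and $0.8\,\mathrm{GeV}$ for the first and second maxima, which illustrates why the beam spectrum must extend well below its peak to cover both.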
We use the same implementation as in~\cite{wbb}, with the exception of the beam spectrum, which has been updated to correctly represent the beam which would be produced at FNAL~\cite{Bishai:2008zza}. Also, the beam intensity now corresponds to $1.2\,\mathrm{MW}$, which is the maximum beam power that can be achieved at FNAL without Project~X~\cite{fnalprotonplan}. The resulting physics sensitivities have been computed using {\sf GLoBES}~\cite{globes} and are shown as green shaded regions in figure~\ref{fig:sensitivity}. \section{The Numerical Technique} \label{sec:numerics} For all calculations we use as central (true) values \begin{eqnarray} \label{eq:central} \left|\Delta m^2_{31}\right|=2.4\cdot10^{-3}\,\mathrm{eV}^2 \pm 5\%\,,&\quad&\sin^22\theta_{23}=1.0\pm 1\%\,\nonumber\\ \Delta m^2_{21}=7.6\cdot10^{-5}\,\mathrm{eV}^2 \pm 2\%\,,&\quad& \sin^2\theta_{12}=0.32\pm 6\%\,. \end{eqnarray} In all fits, these parameters are allowed to vary within the stated $1\,\sigma$ intervals. The central values are the current best fit values~\cite{Maltoni:2004ei}; the errors on the solar parameters are also taken from~\cite{Maltoni:2004ei}. The errors on the atmospheric parameters correspond to the results which are expected from T2K and NO$\nu$A~\cite{Huber:2009cw}. Here, we describe in detail the numerical procedure adopted to calculate the discovery potential of the FNAL - DUSEL beta-beam set-up. 
For our statistical analysis we use the following $\chi^2$ functions for the WC detector and the LArTPC \begin{eqnarray} (\chi^2_{total})_{\rm WC} &=& \chi^2_{({\nu_e} \rightarrow {\nu_{\mu}})_{\rm QE}} + \chi^2_{(\bar{\nu_e} \rightarrow \bar{\nu_{\mu}})_{\rm QE}} \nonumber \\ &+& \chi^2_{({\nu_e} \rightarrow {\nu_e})_{\rm QE}} + \chi^2_{(\bar{\nu_e} \rightarrow \bar{\nu_e})_{\rm QE}} \nonumber \\ &+& \chi^2_{prior} \label{eq:tot_chisq_wc} \end{eqnarray} and \begin{eqnarray} (\chi^2_{total})_{\rm LArTPC} &=& \chi^2_{({\nu_e} \rightarrow {\nu_{\mu}})_{\rm QE}} + \chi^2_{({\nu_e} \rightarrow {\nu_{\mu}})_{\rm IE}} + \chi^2_{(\bar\nu_e \rightarrow \bar{\nu_{\mu}})_{\rm QE}} + \chi^2_{(\bar\nu_e \rightarrow \bar{\nu_{\mu}})_{\rm IE}} \nonumber \\ &+& \chi^2_{(\nu_e \rightarrow {\nu_e})_{\rm QE}} + \chi^2_{(\nu_e \rightarrow {\nu_e})_{\rm IE}} + \chi^2_{(\bar\nu_e \rightarrow \bar{\nu_e})_{\rm QE}} + \chi^2_{(\bar\nu_e \rightarrow \bar{\nu_e})_{\rm IE}} \nonumber \\ &+& \chi^2_{prior}~. \label{eq:tot_chisq_LArTPC} \end{eqnarray} The contribution $\chi^2_{({\nu_e} \rightarrow {\nu_{\mu}})_{\rm QE}}$ is given by \begin{eqnarray} \chi^2_{({\nu_e} \rightarrow {\nu_{\mu}})_{\rm QE}} = \min_{\xi_s, \xi_b}\left[2\sum^{n}_{i=1} (\tilde{y}_{i}-x_{i} - x_{i} \ln \frac{\tilde{y}_{i}}{x_{i}}) + \xi_s^2 + \xi_b^2\right ]~, \label{eq:chipull} \end{eqnarray} where $n$ is the total number of bins and \begin{eqnarray} \tilde{y}_{i}(\{\omega\},\{\xi_s, \xi_b\}) = N^{th}_i(\{\omega\}) \left[ 1+ \pi^s \xi_s \right] + N^{b}_i \left[1+ \pi^b \xi_b \right]~. \label{eq:rth} \end{eqnarray} Above, $N^{th}_i(\{\omega\})$ is the predicted number of QE events (calculated using equation~\ref{eq:events}) in the $i$-th energy bin for a set of oscillation parameters $\omega$, and $N_i^b$ is the number of background events in bin $i$. The quantities $\pi^s$ and $\pi^b$ in equation~\ref{eq:rth} are the systematic errors on signal and background respectively. 
We consider $\pi^s = 2.5\%$ and $\pi^b = 5\%$ (see table~\ref{tab:detector}). The quantities $\xi_s$ and $\xi_b$ are the pulls due to the systematic errors on signal and background respectively. The data in equation~\ref{eq:chipull} enter through the variable $x_i=N_i^{ex}+N_i^b$, where $N_i^{ex}$ is the number of observed QE signal events in the detector and $N_i^b$ is the background, as mentioned earlier. We simulate the QE signal event spectrum using equation~\ref{eq:events} for our assumed true values of the set of oscillation parameters given in equation~\ref{eq:central}. We consider all values of $\sin^2 2\theta_{13}{\mbox {~(true)}}$ and $\delta_{\mathrm{CP}}{\mbox {~(true)}}$ in their allowed ranges and assume NH as the true hierarchy. In a similar way, we estimate the contributions towards $\chi^2_{total}$ coming from the other oscillation channels and event types (for both neutrino and anti-neutrino modes). In our $\chi^2$ fit we marginalize over {\it all} oscillation parameters as well as the neutrino mass ordering, as applicable. We perform this by allowing all of these to vary freely in the fit and picking the smallest value of the $\chi^2$ function. However, we assume that some of these parameters, which are poorly constrained by this experimental set-up, will be measured better by other experiments. Therefore, we impose a prior, or external constraint, on these parameters through $\chi^2_{prior}$, given by \begin{eqnarray} \chi^2_{prior} &=& \left (\frac{|\Delta m^2_{31}|- |\Delta m^2_{31}{\mbox {~(true)}}|}{\sigma(|\Delta m^2_{31}|)} \right )^2 + \left (\frac{\sin^22 \theta_{23}-\sin^22 \theta_{23}{\mbox {~(true)}}}{\sigma(\sin^22 \theta_{23})} \right )^2\nonumber \\ &+& \left (\frac{\Delta m^2_{21}- \Delta m^2_{21}{\mbox {~(true)}}}{\sigma(\Delta m^2_{21})} \right )^2 + \left (\frac{\sin^2 \theta_{12}-\sin^2 \theta_{12}{\mbox {~(true)}}}{\sigma(\sin^2 \theta_{12})} \right )^2~. 
\label{eq:prior} \end{eqnarray} where the $1\sigma$ errors are given in equation~\ref{eq:central}. We minimize $\chi^2_{total}$ using the same procedure as described in the appendix of~\cite{shortnote}. \section{Results \& Summary} \label{sec:results} \begin{figure}[t] \includegraphics[width=0.6\textwidth]{730km_cpv.eps} \mycaption{\label{fig:730km} CP violation discovery potential of a beta-beam at $730\,\mathrm{km}$, shown as the range of $\delta_{\mathrm{CP}}{\mbox {~(true)}}$ as a function of $\sin^2 2\theta_{13}{\mbox {~(true)}}$, assuming normal hierarchy as the true hierarchy, at the $3\sigma$~ C.L. (1 d.o.f.). Here we take $\gamma = 350$ for both $^{18}$Ne~ \& $^6$He~ ions. The results are shown for a $300\,\mathrm{kt}$ WC detector assuming two different simulation methods. See the text for details.} \end{figure} We evaluate the physics reach of the FNAL - DUSEL beta-beam set-up in terms of its discovery potentials for $\sin^2 2\theta_{13}$, CP violation and the mass hierarchy. These discovery potentials quantify, for any given $\sin^2 2\theta_{13}{\mbox {~(true)}}$, for which range of possible values of $\delta_{\mathrm{CP}}{\mbox {~(true)}}$ the corresponding quantity will be discovered or measured at the chosen confidence level. The discovery reach for $\sin^2 2\theta_{13}$ is defined by the minimum value of $\sin^2 2\theta_{13}{\mbox {~(true)}}$ which allows us to rule out $\sin^2 2\theta_{13}=0$ in the fit. The CP violation discovery potential is defined as the range of $\delta_{\mathrm{CP}}{\mbox {~(true)}}$ as a function of $\sin^2 2\theta_{13}{\mbox {~(true)}}$ for which one can use the data to exclude the CP conserving solutions $\delta_{\mathrm{CP}}=0^\circ$ and $\delta_{\mathrm{CP}}=180^\circ$. The mass hierarchy discovery reach is the limiting value of $\sin^2 2\theta_{13}{\mbox {~(true)}}$ for which the wrong hierarchy can be excluded. 
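To make the pull construction of equation~\ref{eq:chipull} concrete, the following is a minimal sketch that minimizes over the two pulls with a brute-force grid scan; it is an illustration only, whereas the actual analysis uses a proper minimizer and the full event spectra:

```python
import math

def chi2_pull(x, N_th, N_b, pi_s=0.025, pi_b=0.05, span=4.0, steps=201):
    """Poissonian chi^2 with signal and background pulls (xi_s, xi_b),
    minimized by scanning both pulls over [-span, span]."""
    best = float("inf")
    for i in range(steps):
        xi_s = -span + 2.0 * span * i / (steps - 1)
        for j in range(steps):
            xi_b = -span + 2.0 * span * j / (steps - 1)
            chi2 = xi_s ** 2 + xi_b ** 2  # pull penalty terms
            for xd, nth, nb in zip(x, N_th, N_b):
                # rescaled prediction, equation (rth)
                y = nth * (1.0 + pi_s * xi_s) + nb * (1.0 + pi_b * xi_b)
                chi2 += 2.0 * (y - xd - xd * math.log(y / xd))
            best = min(best, chi2)
    return best
```

When the prediction matches the data the minimum is zero; a mismatch that the pulls cannot absorb shows up as a finite $\chi^2$.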
Before we present our results for the FNAL - DUSEL beta-beam set-up, we would like to discuss the CP violation discovery potential of a beta-beam using a $300\,\mathrm{kt}$ WC detector at $730\,\mathrm{km}$, taking $\gamma = 350$ for both $^{18}$Ne~ \& $^6$He~ ions (see figure~\ref{fig:730km}). This setup has already been considered in the literature and has been shown to provide high performance~\cite{bc1}. The first and second oscillation maxima for the $730\,\mathrm{km}$ baseline are at $1.4\,\mathrm{GeV}$ and $0.5\,\mathrm{GeV}$ for $\Delta m^2_{31} = 2.4 \cdot 10^{-3}\,\mathrm{eV}^2$. At this baseline, both the ${\nu_e}$ and the ${\bar\nu_e}$ beams peak at the first oscillation maximum for $\gamma = 350$. Also for this setup, we use two methods to treat the NC background in the water Cherenkov detector, as described in section~\ref{sec:wc}. For method A, we have a threshold energy of $1\,\mathrm{GeV}$ for both $^{18}$Ne~ and $^6$He~ ions and assume no backgrounds above the threshold. Since there is no background left above $1\,\mathrm{GeV}$, it is possible to probe CP violation at very small values of $\sin^2 2\theta_{13}{\mbox {~(true)}}$, as depicted by the dash-dotted blue line of figure~\ref{fig:730km}. The major drawback of this method is that we do not have any access to the second oscillation maximum. Therefore, the ($\mathrm{sgn}(\Delta m^2_{31}),\delta_{\mathrm{CP}}$) degeneracy for large values of $\sin^2 2\theta_{13}{\mbox {~(true)}}$ is fully developed; as a result there is a sizable gap in sensitivity around $\sin^2 2\theta_{13}{\mbox {~(true)}} = 3 \cdot 10^{-2}$. This effect has been described in~\cite{Huber:2002mx} and has been termed $\pi$-transit. For method B, shown as the dash-double-dotted black line in figure~\ref{fig:730km}, the threshold is $0.2\,\mathrm{GeV}$ and a background rejection factor of 10$^{-3}$ is used for both ions. Although the total backgrounds are higher, the second oscillation maximum can nonetheless be used. 
Therefore, the effects of $\pi$-transit are mitigated for most of the parameter space, {\it i.e.} the sensitivity at large $\theta_{13}$ has essentially no gaps. The higher backgrounds result, however, in a reduced reach for small values of $\theta_{13}$. This concludes our comparison with previous results. \begin{figure}[t] \includegraphics[width=0.328\textwidth]{new_th13.eps} \includegraphics[width=0.328\textwidth]{new_cpv.eps} \includegraphics[width=0.328\textwidth]{new_mass.eps} \mycaption{\label{fig:sensitivity} Performance of the FNAL - DUSEL beta-beam set-up at the $3\sigma$~ C.L. (1 d.o.f.): the $\sin^2 2\theta_{13}$, CP violation, and mass hierarchy discovery potentials, shown as the range of $\delta_{\mathrm{CP}}{\mbox {~(true)}}$ as a function of $\sin^2 2\theta_{13}{\mbox {~(true)}}$, assuming normal hierarchy as the true hierarchy. The results are shown for a $300\,\mathrm{kt}$ WC detector (assuming the simulation methods WC:~A \& WC:~B; see Section~4 for details) and a $50\,\mathrm{kt}$ LArTPC (assuming background rejection factors of 10$^{-3}$ and 10$^{-4}$). The green shaded regions show the sensitivity of a wide band beam using the WC detector, as defined in detail in section~\ref{sec:wbb}.} \end{figure} Our results are summarized in figure~\ref{fig:sensitivity}, where the physics sensitivities for $\sin^2 2\theta_{13}$ (left panel), CP violation (middle panel), and the mass hierarchy (right panel) are shown for a baseline of $1300\,\mathrm{km}$. Results are shown at the $3\,\sigma$ confidence level for one degree of freedom. The various line styles are for the different detector options and background levels as given in the legend. The green shaded areas are the corresponding results for a conventional superbeam from FNAL to DUSEL using a $300\,\mathrm{kt}$ WC detector as described in detail in~\cite{wbb}. 
For the WC detector, we observe a distinct difference in sensitivities between the two different schemes to include the background (black, dash-double-dotted line and blue, dash-dotted line). Clearly, using a hard energy cut (WC:~A, blue, dash-dotted line), which eliminates essentially all background and all information from the second oscillation maximum, performs better than WC:~B (black, dash-double-dotted line). The only exception is found for the CP violation reach around $\sin^22\theta_{13}=5\cdot10^{-3}$ for a CP phase of approximately $-120^\circ$. The reason is the loss of information from the second oscillation maximum. Taking WC:~A as our benchmark case, we find that a beta-beam would result in about one order of magnitude improvement over a superbeam for all three measurements for at least one half of the CP phases. The performance of a $50\,\mathrm{kt}$ LArTPC is rather similar to that of a six times larger WC detector using background scheme B. In comparison to the superbeam, shown as green shaded regions and described in section~\ref{sec:wbb}, we see that, using the same $300\,\mathrm{kt}$ WC detector, the gain from a beta-beam is obvious and most pronounced for $0<\delta_{CP}<180^\circ$. This statement also holds for the use of a six times smaller $50\,\mathrm{kt}$ LArTPC. It is left to the judgement of the reader whether the physics gain from a beta-beam is commensurate with the effort of building a decay ring and of running the Tevatron for another decade. We have also shown that, given the performance parameters of the Tevatron, a baseline of around $730\,\mathrm{km}$ would provide a much more compelling physics gain. However, this baseline violates the boundary condition of locating the far detector at DUSEL. In summary, while a beta-beam is a very interesting option to pursue high precision neutrino physics, especially the search for CP violation, it does not seem to fit very well with the existing and planned infrastructure in the US. 
\acknowledgments We would like to thank J.~Link and D.~Mohapatra for useful discussions. In particular we thank M.~Mezzetto for detailed information on backgrounds in a Water Cherenkov detector. We acknowledge support from the U.S. Department of Energy under award number DE-SC0003915.
\section{Introduction} Question Answering (QA) is an important task to evaluate the reading comprehension capacity of an intelligent system and can be directly applied to real applications such as search engines~\citep{kwiatkowski-etal-2019-natural} and dialogue systems~\citep{reddy-etal-2019-coqa,choi-etal-2018-quac}. This paper studies extractive QA, a specific type of QA in which the question is answered using a span from the context~\citep{rajpurkar-etal-2016-squad,fisch-etal-2019-mrqa}. Extractive readers~\citep{Seo2017BidirectionalAF,devlin-etal-2019-bert} are widely used to tackle such a task, where the goal is to classify the start and end positions of the answer in the context. Generative readers~\citep{Raffel2020ExploringTL,Lewis2020RetrievalAugmentedGF,izacard-grave-2021-leveraging} have also shown remarkable performance, where the goal is to generate answers by autoregressively predicting tokens. Both state-of-the-art extractive and generative readers are based on large pretrained language models (PrLMs) and show good performance on different datasets. However, a systematic comparison between them has been largely unexplored. Such a comparison reveals the strengths and weaknesses of each reader, which in turn can provide more principled guidance on which reader and PrLM should be applied in which cases, and can also open up future research opportunities grounded in identified concrete challenges to improve reader models. However, fair comparisons between them have been difficult to perform, mainly because 1) the PrLMs for extractive and generative readers are different, i.e., extractive readers are usually built on top of encoder-only PrLMs while generative ones are based on encoder-decoder PrLMs, and 2) the sizes of generative and extractive readers are not the same, which can greatly affect performance. We design two main sets of controlled experiments to address these challenges and compare extractive and generative readers in a principled manner. 
In the first set of experiments, we compare extractive and generative readers using the same PrLMs. Specifically, the T5~\citep{Raffel2020ExploringTL} generative reader is compared with the T5 extractive reader, and similarly for BART~\citep{lewis2020bart}. This allows a fair comparison of different answer prediction approaches without being affected by different architectures or prior knowledge of PrLMs. Moreover, we challenge the conventional formulation of extractive readers, which are often built upon encoder-only PrLMs, by leveraging the encoder of encoder-decoder PrLMs as a viable alternative. More concretely, we use the encoders of T5 and BART models to explore their capacity as extractive readers and to better understand the effect of different pre-training strategies on the final QA performance. While the aforementioned comparison strategy adopts the same PrLMs, it remains unclear how generative readers compare with the conventional extractive readers that are built upon encoder-only PrLMs. Thus, in the second experiment, we compare PrLMs with different architectures, including T5, BART, ELECTRA~\cite{Clark2020ELECTRAPT} and RoBERTa~\cite{Liu2019RoBERTaAR}, to draw more generalizable and grounded conclusions. All models in this suite of experiments have similar sizes, thus reducing the impact of model size on performance. With these two experiments, we present a systematic comparison of extractive and generative readers using nine readers on the MRQA task~\citep{fisch-etal-2019-mrqa}, a collection of multiple extractive QA datasets. This evaluation results in five insightful findings: \begin{enumerate}[nosep,noitemsep,leftmargin=*] \item The first experiment reveals that the choice of PrLM affects the performance. Specifically, for T5, the generative reader is better than the extractive one, but for BART, extractive readers are better than the generative ones. 
\item The second experiment shows that, on average, extractive readers perform better than generative ones, with the extractive reader built on the encoder of T5 performing the best among the different types of PrLMs. \item Extractive readers perform better on short contexts and generalize better to out-of-domain datasets and rare answers, while generative readers perform better on long contexts. \item The encoders of encoder-decoder PrLMs are also good extractive readers. Extractive readers built on top of the encoder of BART or T5 are better than those built on encoder-only PrLMs such as RoBERTa. \item While the inference length is usually chosen to be the same as at training time, we find that a longer inference length has a positive effect for all PrLMs. Using longer lengths for long contexts leads to greater gains than for short contexts. \end{enumerate} Our work presents an in-depth study of extractive and generative readers for the QA task, an important NLP task toward building intelligent systems. Our findings shed light on key considerations behind reader selection and should be helpful for formulating future research on advancing reader models. \section{Related Work} \paragraph{Pretrained Language Models} Here, we mainly discuss two types of pre-trained models based on the transformer architecture~\citep{Vaswani2017AttentionIA}, autoencoder and encoder-decoder models, which are widely used for QA tasks. Autoencoder models rely only on the encoder part of the original transformer; at pretraining time, the input is a corrupted sentence, for example a sentence with masked tokens, as in BERT~\citep{devlin-etal-2019-bert}, RoBERTa~\citep{Liu2019RoBERTaAR}, and ELECTRA~\citep{Clark2020ELECTRAPT}. Both RoBERTa and ELECTRA have the same architecture as BERT but perform better than BERT on many tasks. RoBERTa mainly benefits from a larger training corpus consisting of news, books, stories, and web text. 
ELECTRA adopts GAN-style training~\citep{Mirza2014ConditionalGA} and aims to detect whether a token has been replaced or comes from the original text. Large ELECTRA is trained on similar data to RoBERTa. BART~\citep{lewis-etal-2020-bart} and T5~\citep{Raffel2020ExploringTL} belong to the encoder-decoder architecture. BART is pretrained on the same data as RoBERTa, while T5 is pre-trained on the Colossal Clean Crawled Corpus as well as multiple downstream tasks. \paragraph{Question Answering Systems} We focus on QA systems that are built upon PrLMs. Extractive QA readers assume that answers can be found in the context and aim to predict the corresponding start and end tokens in the context~\citep{fisch-etal-2019-mrqa,li-etal-2019-net,Clark2020ELECTRAPT,karpukhin-etal-2020-dense}. In contrast, generative QA readers are not restricted to the input context; they can freely generate answers token by token over the entire vocabulary in an autoregressive manner~\citep{Raffel2020ExploringTL}. Generative readers are more often used in open-domain~\cite{Lewis2020RetrievalAugmentedGF,izacard-grave-2021-leveraging,Xiong2021AnsweringCO} and unified settings~\citep{khashabi-etal-2020-unifiedqa,Tafjord2021GeneralPurposeQW}. \citet{Fajcik2021PruningTI} combine extractive and generative readers by adding a classification module that decides which reader predicts the answer. \citet{cheng-etal-2021-unitedqa} propose a unified system of extractive and generative readers in which, different from \citet{Fajcik2021PruningTI}, the output is computed by both readers. \section{Model}\label{sec:model} We study QA models based on PrLMs with extractive and generative approaches. \subsection{Extractive Reader} \label{sec:ext_model} In an extractive reader, an encoder first receives the concatenation of a question $\mathbf{q:}\{q_1, \dots, q_t\}$ and a context $\mathbf{c:}\{c_1, \dots, c_m\}$, where $q_i$ and $c_j$ are tokens in the question and context, respectively. 
Then, it produces $\mathbf{h}: [ h_1 | \cdots | h_m ] \in \mathbb{R}^{d \times m}$, where $h_j$ corresponds to the $d$-dimensional contextual representation of context token $c_j$. We then stack two linear layers on top of the contextual representations to independently predict the probability of each context token being the start and end position of the correct answer. More formally, given a tuple $(\mathbf{q}, \mathbf{c}, \mathbf{a})$, where $\mathbf{a}$ is an answer, the training objective is to minimize the following loss function \begin{align} \mathcal{L}_{\text{Ext}} = -\log(\mathbf{P_{start,s}}) -\log(\mathbf{P_{end,e}}) \end{align} where $\mathbf{P_{start}}, \mathbf{P_{end}} \in \mathbb{R}^{m}$ are defined by \begin{align} \mathbf{P_{start}} &= \text{softmax}(\mathbf{w_{start}}\mathbf{h})\\ \mathbf{P_{end}} &= \text{softmax}(\mathbf{w_{end}}\mathbf{h}) \end{align} where $\mathbf{w_{start}}$ and $\mathbf{w_{end}}$ denote the linear layers that predict the start and end tokens, and ${\mathbf{P_{start,s}}}$ and ${\mathbf{P_{end,e}}}$ denote the probabilities of the ground-truth start and end tokens of answer $\mathbf{a}$, respectively. At test time, the answer span is decoded by $\text{argmax}_{i, j} \{{\mathbf{P_{start,i}}} \times {\mathbf{P_{end,j}}}\}$. In this work, we have two variants of extractive readers. The first uses encoder-only models to obtain the contextual representation of each token. We call this kind of reader an \textbf{E-Extractive reader}. Apart from conventional PrLMs such as RoBERTa and ELECTRA, we also use the encoder part of T5 and BART as E-Extractive readers. The second uses encoder-decoder models, where the decoder obtains the contextual representation of each context token in an autoregressive way (see \S\ref{sec:gen_model}). We use both BART and T5 PrLMs and term this kind of reader an \textbf{ED-Extractive reader}.
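For illustration, the test-time span decoding can be sketched as follows. This is a minimal numpy sketch with toy logits; the variable names are ours, and we additionally constrain $i \le j$ and cap the span length, as is common in practice though not stated explicitly above.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D array of logits.
    e = np.exp(x - x.max())
    return e / e.sum()

def decode_span(start_logits, end_logits, max_answer_len=30):
    """Return (i, j) with i <= j maximising P_start[i] * P_end[j]."""
    p_start = softmax(np.asarray(start_logits, dtype=float))
    p_end = softmax(np.asarray(end_logits, dtype=float))
    best_score, best_span = -1.0, (0, 0)
    m = len(p_start)
    for i in range(m):
        for j in range(i, min(i + max_answer_len, m)):
            score = p_start[i] * p_end[j]
            if score > best_score:
                best_score, best_span = score, (i, j)
    return best_span

# Toy 5-token context: token 1 is the likely start, token 3 the likely end.
start_logits = [0.1, 5.0, 0.2, 0.1, 0.0]
end_logits = [0.0, 0.1, 0.3, 5.0, 0.2]
print(decode_span(start_logits, end_logits))  # (1, 3)
```

In a real reader the logits would come from the linear heads $\mathbf{w_{start}}\mathbf{h}$ and $\mathbf{w_{end}}\mathbf{h}$ over the context tokens only.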
\subsection{Generative Reader} \label{sec:gen_model} We consider a generative reader consisting of an encoder and a decoder, where the decoder is used to generate answers in an autoregressive way. Specifically, the encoder takes a question $\mathbf{q}$ and a context $\mathbf{c}$ as input and outputs the contextual representation $\mathbf{h}$. Then, the decoder takes the previously generated answer tokens as input, performs attention over $\mathbf{h}$, and generates the next token. Formally, given a tuple $(\mathbf{q}, \mathbf{c}, \mathbf{a})$, the training objective is to minimize the following loss function \begin{equation} \mathcal{L}_{\text{Gen}} = -\sum_{i=1}^{K}\log \mathbf{P}(a_i\mid \mathbf{h}, a_{:i}) \end{equation} where $K$ is the number of tokens in answer $\mathbf{a}$, $a_i$ is the $i^{th}$ token in $\mathbf{a}$, and $a_0$ corresponds to a special beginning-of-sequence (\texttt{BOS}) token. At inference time, we use greedy search to autoregressively generate the answer. \section{Experiments} \subsection{Dataset} We conduct experiments on the MRQA benchmark, which provides six in-domain (IID) datasets and six out-of-domain (OOD) datasets for generalization evaluation. MRQA covers different domains (e.g., news and biomedical) and different types of questions (e.g., single-hop and multi-hop). Table \ref{tab:datasets} shows the statistics of each IID and OOD dataset. Some datasets have long contexts and others short ones. More details about MRQA are presented in Appendix \ref{apd:datasets}. \input{dataset} \subsection {Learning Strategy} {\bf Single Task Learning:} we use each IID dataset to train extractive and generative readers. {\bf Multi-Task Learning:} we consider training with all (six) IID datasets as multi-task learning for two reasons. First, \citet{su-etal-2019-generalizing} showed that different IID datasets share low similarity, and therefore they may require different reasoning skills.
In addition, Table \ref{tab:datasets} shows that different datasets have different question and context lengths, which may lead to different difficulties across datasets. \subsection{Experimental Setup} \input{new_tables/models_para} \input{new_tables/diff_infer_length} We use the Huggingface~\citep{wolf-etal-2020-transformers} and Pytorch~\citep{NEURIPS2019_9015} implementations for training each model. All models are trained using a maximum input length of 512, and other details are provided in Appendix \ref{apd:setup}\footnote{While we fix the training hyperparameters for all the models for the sake of experimental efficiency, the performance of our setting is close to the original results.}. In Table \ref{tab:model_size}, we summarize the size of each evaluated model; the sizes of the PrLMs are chosen to be comparable while fitting our available computation power. For example, we choose the T5 base model for the generative reader since large T5 is too large (737M). \noindent\textbf{Input Format}: Given a question {\bf Q} and a context {\bf C}, the input to extractive readers is \{{\bf Q} [SEP] {\bf C}\} and the input to generative readers is \{{\textit{question:} \bf Q} [SEP] \textit{context: }{\bf C}\}. We also considered other input formats, which are reported in Appendix \ref{apd:two_input_format}. \noindent\textbf{Answer Length of Generative Reader}: We set the maximum generated answer length to 16 for the generative reader. Using longer generation lengths (32 and 64) does not yield noticeable improvement, as reported in Appendix \ref{apd:gen_ans_length}. \section{Results and Analysis} \label{sec:result} We first present the study of using different inference lengths for each model, since it guides us in choosing the best-performing setting for each model. Then, we compare the generative and extractive readers using the same PrLMs and using different PrLMs. Last, we present a detailed analysis to diagnose the differences between extractive and generative readers. F1 is used to measure performance.
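For reference, the F1 used here is the token-overlap F1 common to SQuAD/MRQA-style evaluation. The following is a simplified sketch of our own (it omits the official answer normalisation of articles and punctuation, so it is not the exact evaluation script):

```python
from collections import Counter

def token_f1(prediction: str, gold: str) -> float:
    """Token-overlap F1 between a predicted and a gold answer string.

    Simplified: lowercasing and whitespace tokenisation only; official
    evaluation scripts additionally strip articles and punctuation.
    """
    pred_tokens = prediction.lower().split()
    gold_tokens = gold.lower().split()
    # Multiset intersection counts each shared token at most min(count) times.
    common = Counter(pred_tokens) & Counter(gold_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

print(token_f1("the eiffel tower", "eiffel tower"))  # 0.8
```

With multiple gold answers, the score for an example is the maximum F1 over the gold set, and dataset-level F1 is the average over examples.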
Note that since we test each model on 12 datasets, the observations and conclusions we draw are mostly based on the average across all datasets. \subsection{The Effect of Context Length} \label{sec:length_effect} While all models are trained with a $512$ maximum length, the inference length can be longer. We experiment with three lengths: $512$, $1024$, and the full length of the input question and context. Due to the tokenization and pretraining maximum length of each PrLM, ELECTRA only allows a $512$ maximum inference length, RoBERTa and BART allow $1024$, and T5 allows the full input length. We present the average performance of each model on both IID and OOD in Table \ref{tab:diff_infer_length}\footnote{Note that in single-task learning, the performance on OOD is extracted from the best performance of each single-task model on every dataset, and this applies to all other tables in this paper.}, from which three trends are observed. (1) When using a $512$ inference length, ELECTRA is the best model in single-task learning on IID datasets and in multi-task learning on both IID and OOD datasets. (2) Increasing the inference length improves all models' performance. (3) The length affects the T5 models more significantly than the others; for example, in single-task learning, the largest improvement from length $1024$ for the T5 model on IID and OOD datasets is $2.77\%$ and $5.49\%$, while for the other models, the largest improvement of length $1024$ over $512$ is $1.32\%$ and $1.65\%$. The performance of using 512 and 1024 is given in Appendix \ref{apd:infer_length}, and we report the performance of each dataset using the best input length in the following sections. \subsection{Comparison within Same PrLMs} \label{sec:comp_same_prlms} We compare different readers when using the same PrLMs. Two PrLMs, T5 and BART, are considered, where the T5-base model is applied to each T5 reader, and the BART-large model is applied to each BART reader.
We have three comparisons, as there are two types of extractive and one type of generative readers (\S\ref{sec:model}). We present the average performance in each comparison; the detailed performance on each dataset is given in Appendix \ref{apd:same_prlms}. \paragraph{ED-Extractive and E-Extractive} Since the E-Extractive reader only uses the encoder part of the PrLM without the decoder, the size of the E-Extractive reader is smaller than that of the ED-Extractive one. Even under this disadvantage, surprisingly, we find that the encoder part alone performs well on QA tasks. In Figure \ref{fig:compare_ed_e_ext}, the \textcolor{black}{red} and \textcolor{black}{green} bars compare the \textcolor{black}{ED-Extractive} and \textcolor{black}{E-Extractive} readers. For the BART model, the E-Extractive reader outperforms the ED-Extractive reader on average on IID and OOD datasets in single-task as well as multi-task learning. This indicates that the decoder in BART is not crucial for the extractive reader. On the other hand, for T5, the ED-Extractive reader outperforms the E-Extractive reader on average on both IID and OOD datasets. This suggests that the decoder in T5 still plays a role in yielding better performance. However, the performances are similar even though the E-Extractive reader has fewer parameters. \begin{figure}[h!] \noindent\begin{subfigure}[b]{0.25\textwidth} \includegraphics[width=0.99\textwidth]{images/comp_ed_e_single} \end{subfigure}% \noindent\begin{subfigure}[b]{0.25\textwidth} \includegraphics[width=0.99\textwidth]{images/comp_ed_e_multi} \end{subfigure}% \caption{ Left for the single-task and right for the multi-task setting.
For T5, the ED-Ext reader performs better than the E-Ext reader; for BART, E-Ext is better than ED-Ext even though the former has fewer parameters.} \label{fig:compare_ed_e_ext} \end{figure} \paragraph{ED-Extractive and ED-Generative Reader} Here, the model sizes of the extractive and generative readers are almost the same (see Table \ref{tab:model_size}), and the pre-owned knowledge of the two readers is also the same, since both readers use the encoder and decoder parts. In Figure \ref{fig:comp_ed_ext_gen}, the \textcolor{black}{red} and \textcolor{black}{blue} bars compare the \textcolor{black}{ED-Extractive} and \textcolor{black}{ED-Generative} readers. For T5, the generative model performs better than the extractive one in all four cases: IID and OOD datasets under single- and multi-task learning. For the BART PrLM, in single-task learning, the extractive model is much better than the generative model. This probably explains why, in most previous work, when BART is applied to extractive QA tasks it is used as an extractive reader, even though it belongs to the encoder-decoder model family\footnote{The original BART paper uses BART as an extractive reader, and the implementation of BART for QA in the Huggingface library does the same.}. The story for multi-task learning is different: we find that the BART generative reader benefits significantly from multi-task learning and even outperforms the BART ED-Extractive reader on IID datasets. This indicates that the decoder in BART requires larger and more diversified datasets to learn the QA task. \begin{figure}[h!] \noindent\begin{subfigure}[b]{0.25\textwidth} \includegraphics[width=0.99\textwidth]{images/comp_ed_ext_gen_single} \end{subfigure}% \noindent\begin{subfigure}[b]{0.25\textwidth} \includegraphics[width=0.99\textwidth]{images/comp_ed_ext_gen_multi} \end{subfigure}% \caption{ Left for the single-task and right for the multi-task setting.
For T5, ED-Gen performs better than ED-Ext; for BART, ED-Ext is better than ED-Gen in single-task learning, but worse in multi-task learning.} \label{fig:comp_ed_ext_gen} \end{figure} \paragraph{E-Extractive and Generative Reader} In this comparison, the extractive reader is at a disadvantage relative to the generative one, since the decoder has been removed from the E-Extractive reader. In Figure \ref{fig:comp_e_ed_ext_gen}, the \textcolor{black}{green} and \textcolor{black}{blue} bars compare the \textcolor{black}{E-Extractive} and \textcolor{black}{ED-Generative} readers. For the T5 model, the generative reader is better than the extractive one in both single- and multi-task learning and on both IID and OOD datasets. Again, this disadvantage of the extractive reader might come from its smaller model size, as discussed in the previous comparison. For the BART model, the E-Extractive reader outperforms the generative reader significantly on both IID and OOD datasets, and the advantage of the E-Extractive reader is much more significant in the single-task learning scenario. \begin{figure}[h!] \noindent\begin{subfigure}[b]{0.25\textwidth} \includegraphics[width=0.99\textwidth]{images/comp_e_ed_ext_gen_single} \end{subfigure}% \noindent\begin{subfigure}[b]{0.25\textwidth} \includegraphics[width=0.99\textwidth]{images/comp_e_ed_ext_gen_multi} \end{subfigure}% \caption{ Left for the single-task and right for the multi-task setting. For T5, ED-Gen is better than the E-Ext reader; for BART, E-Ext is better than the ED-Gen reader even though the former has fewer parameters.} \label{fig:comp_e_ed_ext_gen} \end{figure} To summarize, \begin{enumerate}[nosep,noitemsep,leftmargin=*] \item The encoder part alone of both T5 and BART can perform well as an extractive reader. \item The comparison among the three types of readers using BART and T5 suggests that although both PrLMs are of encoder-decoder architecture, the three types of readers behave quite differently. This might be caused by different pre-training objectives and knowledge.
\item For the BART model, the E-Extractive reader outperforms the ED-Extractive reader and the generative reader despite having fewer parameters, and thus BART should be used as an extractive reader. \item The BART generative reader requires large and diversified datasets to learn the QA task and thus benefits significantly from multi-task learning. \item For T5, the generative reader consistently outperforms both types of extractive readers. The deficiency of the T5 extractive readers might be caused by their fewer parameters. \end{enumerate} \subsection{Comparison within Different PrLMs} \label{sec:comp_diff_prlms} \input{new_tables/com_diff_prlm} The previous section compares the generative and extractive readers using the same PrLMs, and both PrLMs are encoder-decoder models. On one hand, such a comparison reduces the impact of PrLM architecture and pre-owned knowledge. On the other hand, it raises two concerns. First, are extractive readers using encoder-decoder PrLMs good representatives of extractive readers? After all, encoder-only PrLMs are the more standard choice for extractive readers in most previous work. Second, does the smaller size of the extractive reader cause its deficiency compared to the generative one, particularly since the T5 E-Extractive reader is half the size of the T5 generative reader in the previous comparison? To address the first concern, here we present a comparison across different PrLMs, including standard encoder-only models for extractive readers. To address the second concern, we carefully select the model sizes so that each model is of relatively comparable size. \paragraph{The Selection of Each Model's Size} We use the encoder of the T5 large model for the T5 E-Extractive reader so that it is of similar size to the RoBERTa and ELECTRA extractive readers ($\sim$330M)\footnote{ Note that the T5 PrLM is already trained on SQuAD, while the others are not. However, based on the results on SQuAD, T5 does not have an advantage over the other models on this dataset.}.
When using the BART PrLM for an extractive reader, we only use the BART E-Extractive reader and not the ED-Extractive reader, because the former performs better even though it has fewer parameters (204M) than the latter. The T5 generative reader is also smaller (223M), but this is better than comparing with the T5 large generative reader, which would be far larger than the other readers (737M). The BART generative reader is larger than the other readers (406M). One potential issue with this setting is that, even though we choose the best comparison setting we can, the model sizes still differ, and thus if a model performs worse than the others, this might be due to its smaller size. However, the conclusions we draw below are not affected by this issue. \paragraph{Are Encoder-decoder PrLMs Good for Extractive Readers?} Based on Table \ref{tab:compare_diff_prlm}, we find that encoder-decoder PrLMs outperform encoder-only PrLMs as extractive readers on average. Both T5 and BART E-Extractive readers perform better than RoBERTa and ELECTRA on IID and OOD datasets under single- as well as multi-task learning, despite the smaller sizes of T5 and BART. This observation is exciting: instead of the standard encoder-only PrLMs, encoder-decoder PrLMs are actually the better choice for extractive readers. \paragraph{Which reader generalizes better on OOD?} The extractive readers generalize better on OOD datasets. In both single- and multi-task learning, the T5 E-Extractive reader shows the best performance, even beating the BART generative reader despite the latter having more parameters. The BART E-Extractive reader also generalizes well on OOD, and it too beats the BART generative reader even though it has fewer parameters. \paragraph{Which PrLM is the best?} Based on Table \ref{tab:compare_diff_prlm}, we see that T5 is the best among the four PrLMs in both single- and multi-task learning scenarios on IID as well as OOD datasets.
We observe two advantages of T5 over the other PrLMs. First, T5 is much better than ELECTRA and RoBERTa on the NewsQA data. In both single- and multi-task learning, RoBERTa and ELECTRA achieve around $60\%$ F1 on NewsQA, while both the T5 extractive and generative readers achieve higher than $70\%$ F1, yielding more than $10\%$ improvement. Second, T5 is better on long-context datasets. On the IID datasets TQA and SQA, the T5 ED-Generative reader outperforms the other readers by at least $3.30\%$ and $3.67\%$ in single-task, and $7.05\%$ and $4.43\%$ in multi-task learning. On the OOD datasets TbQA and DuoRC, the T5 E-Extractive reader is better than the others by at least $9.61\%$ and $1.45\%$ in single-task, and $8.61\%$ and $3.06\%$ in multi-task learning. We note that this advantage of T5 is conditioned on using the full inference length; when using a short input length such as 512, the advantage does not appear, as shown in \S\ref{sec:length_effect}. \paragraph{Which PrLM benefits more from Multi-task Learning?} While multi-task learning is in general beneficial for all PrLMs, we find that BART benefits the most, especially for the generative reader. For example, on IID datasets, the BART generative reader improves by more than $8\%$ on average while all the other readers improve by less than $1\%$. Similarly, on OOD datasets, the improvement of multi-task learning for the BART generative reader is more significant than for the other readers. To summarize, \begin{enumerate}[nosep,noitemsep,leftmargin=*] \item Encoder-decoder PrLMs can in fact be used as extractive readers; on average, they are even better than the conventional choice (encoder-only PrLMs). \item Extractive readers perform better than the generative readers on OOD datasets, especially the extractive readers based on encoder-decoder PrLMs. \item T5 is the best among the four PrLMs, as it performs better in the news domain and on long contexts.
This advantage of T5 is conditioned on using the full inference length. \item While multi-task learning in general benefits all PrLMs, the BART PrLM benefits the most. \end{enumerate} \subsection{In-Depth Diagnosis} \label{sec:additional_findings} We investigate the behavior of extractive and generative models on long and short contexts and in predicting answers that include rare characters. The multi-task models in \S\ref{sec:comp_diff_prlms} are chosen for comparison. \subsubsection{Long and Short Context} \label{sec:long_short_length} \begin{figure*} \centering \begin{minipage}{0.45\textwidth} \centering \includegraphics[width=0.9\textwidth]{images/context_length_iid_6} \end{minipage}\hfill \begin{minipage}{0.45\textwidth} \centering \includegraphics[width=0.9\textwidth]{images/context_length_ood_6} \end{minipage} \caption{Comparison between generative and extractive readers on different lengths of the question and context. Left part for IID and right part for OOD datasets. Dashed lines for extractive and solid lines for generative readers.} \label{fig:compare_length} \end{figure*} As discussed in the previous section, generative readers have an advantage over their extractive counterparts on long contexts. To further examine this trend, we divide the testing sets into five subsets, where we count the total words in the question and context and choose five thresholds: 2/4/6/8/10 hundred words. It is worth mentioning that since all extractive readers use the window-stride strategy (i.e., if the input length is longer than the maximum length, the input is segmented into multiple inputs), the entire context is observable to the extractive readers. From Figure \ref{fig:compare_length}, we have two observations.
First, on IID datasets, for questions and contexts with fewer than 600 words, the extractive readers always perform better than the generative ones (the dashed lines are higher than the solid ones), but when the length exceeds 600 words, the generative readers consistently outperform the extractive ones. This suggests that the extractive readers perform better on short contexts while the generative readers perform better on long contexts. Second, on OOD datasets, the T5 generative reader still shows an advantage on long contexts (more than 600 words), while the BART generative reader performs worse than the extractive one on both short and long contexts. However, the gap between the BART generative and extractive readers is smaller on long contexts than on short contexts. This might suggest that the extractive reader has better generalization capacity than the generative one, so the advantage of the generative reader on long contexts is weakened. \subsubsection{Rare Characters in Answer} We find that some answers in the testing sets include rare characters such as \textit{\'{n}} and \textit{ł} (119 are found); thus, we divide the testing sets into two subsets, one being the normal answer set where the answer does not contain rare characters\footnote{Rare characters are any characters which do not belong to the printable characters in the string library of Python. The printable characters include lower- and upper-case alphabets, digits, punctuation, and white-space.}, the other being the set with rare characters. The percentage of rare cases in the IID and OOD datasets is 1.4\% and 2\%, respectively. From Table \ref{tab:rare}, we have two observations. First, in the normal case, the performance of extractive and generative readers is relatively comparable on both IID and OOD datasets, but in the rare case, the extractive readers are better than the generative ones. This again suggests that the extractive readers generalize better than the generative ones.
Second, we see that rare tokens have a worse impact on the T5 than on the BART generative reader on both in- and out-of-domain datasets. Further investigation finds that 94 out of the 119 rare characters cannot be represented by the T5 tokenizer (i.e., the T5 tokenizer uses the `<unk>' special token for these characters), and the model tends to ignore these special characters at generation time, as shown by the two examples in Table \ref{tab:rare_example}. In contrast, the BART tokenizer can represent all rare characters. Improving generative readers' performance in predicting rare answers is an important direction for future work. To summarize, \begin{enumerate}[nosep,noitemsep,leftmargin=*] \item Extractive readers perform better than generative readers on short contexts, but generative ones perform better on long contexts. \item Generative readers perform worse in predicting answers with rare characters, and T5 performs worse than BART. \end{enumerate} \input{new_tables/rare_char} \input{rare_example} \section{Conclusion and Future Work} We systematically compare extractive and generative readers for QA tasks. Two sets of experiments are designed to control for the effects of different PrLMs and model sizes. By conducting experiments on 12 QA datasets, our findings provide guidelines on how to choose between extractive and generative readers given their strengths and weaknesses. Investigating the reasons behind these observations and improving the generative and extractive readers are interesting research questions for future work. \newpage \section{More Details of MRQA Datasets} \label{apd:datasets} MRQA provides six datasets for training and six for out-of-domain evaluation. In Table \ref{tab:dataset_source}, we present the source of each dataset, showing that the domains are diverse. Figures \ref{fig:histogram-in} and \ref{fig:histogram-out} show the histograms of the context lengths of the IID and OOD datasets.
The distributions show that some datasets are mainly short, some are mainly long, and others are a combination of short and long. We use abbreviations for some datasets: TQA: TriviaQA; SQA: SearchQA; HQA: HotpotQA; NQ: NaturalQuestions; TbQA: TextbookQA; RE: RelationExtraction. \begin{table}[h] \centering \small \resizebox{0.97\linewidth}{!}{ \begin{tabular}{@{}p{0.2\linewidth}p{0.75\linewidth}@{}} \toprule Dataset & Source \\ \hline SQuAD & Wikipedia\\ NewsQA & News articles \\ TQA & Trivia and quiz-league websites \\ SQA & Jeopardy! TV show \\ HQA & Wikipedia\\ NQ & Wikipedia \\ DROP & Wikipedia \\ RACE & English reading comprehension exams for middle and high school \\ BioASQ & Science (PubMed) articles \\ TbQA & Lessons from middle school Life Science, Earth Science, and Physical Science textbooks \\ RE & Wikiread \\ DuoRC & Wikipedia \\ \bottomrule \end{tabular} } \caption{The source of each dataset} \label{tab:dataset_source} \end{table} \input{histogram} \section{Training Setup} \label{apd:setup} We use the Huggingface~\cite{wolf-etal-2020-transformers} and Pytorch~\cite{NEURIPS2019_9015} implementations to train each model. All models are trained on 4 GTX1080 GPUs for 4 epochs with a learning rate of 1e-4, a batch size of 128, and random seed 1234. While we fix these hyperparameters for all models, we obtain results similar to the original papers (i.e., the differences in F1 are mostly within 2 percent). In detail, on the SQuAD dataset, RoBERTa in \cite{Liu2019RoBERTaAR} and ours achieve 94.6 and 92.64 F1, respectively; BART in \cite{lewis2020bart} and ours achieve 94.6 and 92.51 F1, respectively; ELECTRA in \cite{Clark2020ELECTRAPT} and ours achieve 94.2 and 93.39 F1, respectively; T5 in \cite{Raffel2020ExploringTL} and ours achieve 80.88 and 82.56 EM, respectively.
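For concreteness, the fixed hyperparameters above can be gathered into a small configuration sketch. The dictionary and helper names here are ours, not from an actual training script, and splitting the global batch evenly across GPUs is an assumption on our part (gradient accumulation could be used instead):

```python
import random

# Fine-tuning hyperparameters shared by all models in this appendix.
TRAIN_CONFIG = {
    "num_epochs": 4,
    "learning_rate": 1e-4,
    "global_batch_size": 128,
    "seed": 1234,
    "max_input_length": 512,  # tokens
    "num_gpus": 4,            # GTX1080 cards
}

def per_gpu_batch_size(global_batch: int, num_gpus: int) -> int:
    """Per-device batch size if the global batch is split evenly (assumption)."""
    assert global_batch % num_gpus == 0, "global batch must divide evenly"
    return global_batch // num_gpus

random.seed(TRAIN_CONFIG["seed"])  # a full setup would also seed numpy/torch
print(per_gpu_batch_size(TRAIN_CONFIG["global_batch_size"],
                         TRAIN_CONFIG["num_gpus"]))  # 32
```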
\section{Two Input Formats}\label{apd:two_input_format} When fine-tuning a generative reader on the question answering task, special words are typically added before the actual input to denote the type of task, whereas for an extractive reader, usually no special words are added. Here, we evaluate these two formats for the T5 and BART generative readers. In particular, given a question {\bf Q} and a context {\bf C}, format 1 adds ``question:'' and ``context:'' in front of the actual question and context, such that the input is \{{\textit{question:} \bf Q} [SEP] \textit{context: }{\bf C}\}; format 2 omits these special words, such that the input is \{{\bf Q} [SEP] {\bf C}\}. To keep the training process efficient, we evaluate on two datasets, SearchQA and HotpotQA, instead of all datasets. Table \ref{tab:two_input_format} shows that format 1 yields slightly better performance for T5 and much better performance for BART on the SQA dataset; thus, we use this format for all generative readers. \begin{table}[h] \centering \renewcommand{\arraystretch}{1.2} { \resizebox{0.9\linewidth}{!}{ \small \begin{tabular}{c|c|cc|cc} \toprule \multirow{2}{*}{Model} & \multirow{2}{*}{Format} & \multicolumn{2}{c}{SQA} & \multicolumn{2}{|c}{HQA}\\ \cmidrule(lr){3-4} \cmidrule(lr){5-6} ~& ~ & EM & F1 & EM & F1 \\ \toprule \multirow{2}{*}{T5} & 1 & 81.07 & 86.21 & 64.04 & 79.89\\ ~ & 2 & 80.65 & 85.76 & 63.23 & 79.42\\ \midrule \multirow{2}{*}{BART} & 1 & 72.86 & 78.89 & 55.77 & 73.22\\ ~ & 2 & 49.28 & 58.00 & 55.72 & 73.20 \\ \bottomrule \end{tabular} } } \caption{Comparison between different input formats on two datasets. Format 1 includes the ``question:'' and ``context:'' prefixes; format 2 does not.} \label{tab:two_input_format} \end{table} \section{Answer Length of Generative Reader} \label{apd:gen_ans_length} For the generative reader, we tried different maximum lengths of the generated answer: 16, 32, and 64.
Table \ref{tab:gen_answer_len} shows that increasing the length of the target does not improve performance; this might be because answers in the testing data are usually short, and thus a length of 16 is sufficient. \begin{table*}[t] \centering \renewcommand{\arraystretch}{1.2} { \resizebox{\linewidth}{!}{ \begin{tabular}{c|ccccccc|ccccccc} \toprule \multirow{2}{*}{Length} & \multicolumn{7}{c}{IID Datasets} & \multicolumn{7}{|c}{OOD Datasets}\\ \cmidrule(lr){2-8} \cmidrule(lr){9-15} ~& SQuAD & NewsQA & TQA & SQA & HQA & NQ & Avg. & DROP & RACE & BioASQ & TbQA & RE & DuoRC & Avg. \\ \toprule 16 &91.41 & 71.29 & 80.01 & 86.46 & 79.7 & 78.09 & 81.16 & 51.2 & 49.66 & 68.72 & 62.9 & 85.84 & 63.76 & 63.68\\ 32 & 91.41 & 71.29 & 80.01 & 86.46 & 79.7 & 78.09 & 81.16 & 51.2 & 49.66 & 68.72 & 62.9 & 85.84 & 63.76 & 63.68\\ 64 & 91.41 & 71.29 & 80.01 & 86.46 & 79.7 & 78.09 & 81.16 & 51.2 & 49.66 & 68.72 & 62.9 & 85.84 & 63.76 & 63.68\\ \midrule 16 & 88.63 & 68.91 & 74.91 & 82.52 & 80.53 & 75.78 & 78.55 & 55.2 & 50.04 & 63.78 & 54.81 & 80.94 & 58.47 & 60.54\\ 32 & 88.72 & 69.05 & 74.91 & 82.52 & 80.56 & 75.93 & 78.61 & 55.21 & 50.05 & 63.74 & 54.82 & 80.92 & 58.49 & 60.54\\ 64 & 88.72 & 69.05 & 74.91 & 82.52 & 80.56 & 75.93 & 78.61 & 55.21 & 50.05 & 63.74 & 54.82 & 80.92 & 58.49 & 60.54\\ \bottomrule \end{tabular} } } \caption{Performance of using different answer lengths for the generative reader. The first block shows results for the T5 model and the second block for the BART model.} \label{tab:gen_answer_len} \end{table*} \section{Inference Length} \label{apd:infer_length} We present the results of using length 512, 1024, and the full length in Tables \ref{tab:table_512}, \ref{tab:table_1024}, and \ref{tab:table_max}, respectively. Note that, due to the tokenization approach adopted by each model, for ELECTRA using 1024 or the full length is the same as using 512, and for RoBERTa and BART, using the full length is the same as using length 1024.
Furthermore, the detailed performance of each single-task model is given in Table \ref{tab:apd_table_f1_best}, using the best inference length of each model, i.e., the full length for T5, 1024 for RoBERTa and BART, and 512 for ELECTRA. \input{new_tables/table512} \input{new_tables/table1024} \input{new_tables/table_max} \section{Detailed Comparison Results for Using Same PrLMs} \label{apd:same_prlms} Table \ref{tab:same_prlms} presents the F1 score of each reader when using the same PrLMs, as discussed in \S\ref{sec:comp_same_prlms}. \input{new_tables/table_same_prlms} \input{new_tables/apd_table_f1_best}
1,314,259,996,226
arxiv
\section{Introduction} {The analysis of the phases of strong interactions presents many fascinating aspects -- mechanisms of confinement, different realisations of the chiral symmetry, the nature of the symmetric phase, the emergence of conformality, and many others. All these topics are under active scrutiny both theoretically and experimentally \cite{SCGT}. While strong interactions spontaneously break chiral symmetry in ordinary QCD at zero temperature, the chiral symmetry is realised either at high temperatures -- in the so-called quark-gluon plasma (QGP) phase -- or at a large number of flavours $N_f > N_f^*$ (even at zero temperature) \cite{Appelquist:1996dq,Miransky:1997,Appelquist:1999hr,Sannino:2009za}. In the latter case, the theory is expected to become not only chirally but also conformally invariant. This is due to the emergence of an infra-red fixed point (IRFP) for $N_f > N_f^*$ at a coupling which is not strong enough to break chiral symmetry. Both physics intuition and phenomenological analysis based on the functional renormalisation group~\cite{BraunGies} and finite temperature holographic QCD \cite{Alho:2012mh} indicate that the conformal phase of cold, many flavour QCD and the high temperature chirally symmetric phase are continuously connected. In particular, the onset of the conformal window coincides with the vanishing of the transition temperature, and the conformal window appears as a zero temperature limit of a possibly strongly interacting QGP.} The analysis of the finite temperature phase transition is a well-established line of research within the lattice community, and our approach will be completely conventional here. According to the Pisarski-Wilczek scenario~\cite{Pisarski:1983ms}, the most likely possibility for $N_f \ge 3$ is a first order chiral transition in the chiral limit, turning into a crossover above a critical mass endpoint, and/or on lattices which are not large enough.
We will identify such a crossover with confidence for a number of flavours ranging from four to eight, and we will complement these results with those of the deconfinement transition in the quenched model. Then, we study the approach to the conformal phase in the light of the chiral phase transition at finite temperature with a variable number of flavours. One problem of this approach is the setting of a common scale among theories which are essentially different. We will propose two alternative possibilities to handle this problem, one evolving from our previous work \cite{Miura:2011mc}, and the other from a recent analysis \cite{Liao:2012tw}. Interestingly, this latter approach analyses the dependence of the confinement parameters on the matter content, and proposes microscopic mechanisms for confinement motivated by such $N_f$ dependence. Further, we will argue that even results in the bare lattice parameters can be used directly to locate the critical number of flavours, thus generalising to finite temperature the Miransky-Yamawaki phase diagram, Ref. \cite{Miransky:1997}. A second zero of the two-loop beta-function of a non-Abelian gauge theory implies, at least perturbatively, the appearance of IRFP conformal symmetry \cite{Caswell:1974gg,Banks:1981nn}. In colour SU($3$) gauge theory with $N_f$ massless fundamental fermions, the second zero appears at $N_f\gtrsim 8.05$, before the loss of asymptotic freedom (LAF) at $N_f^{\mathrm{LAF}}=16.5$. Analytic studies of the conformal transition of strong interactions have produced a variety of predictions for the conformal threshold: the Schwinger-Dyson approach with rainbow resummations \cite{Appelquist:1996dq,Miransky:1997,Appelquist:1999hr} or the functional renormalisation group method \cite{BraunGies} suggest the onset of the conformal window around $N_f^* \sim 12$.
An all-order perturbative beta-function \cite{Ryttov:2007cx} inspired by the NSVZ beta-function of SQCD \cite{Novikov:1983uc} leads to a bound $N_f^* > 8.25$. Instanton studies at large $N_f$ \cite{Velkovsky:1997fe} claimed a qualitative change of behaviour at $N_f=6$. $N_f^{*}$ has also been estimated for different fermion representations \cite{Dietrich:2006cm}. {The sub--critical region, when $N_f$ gets closer and closer to $N_f^*$, is interesting per se: the question is whether the chiral dynamics there shows any difference from the standard QCD dynamics. Significant differences with respect to the QCD dynamics might offer a basis to model builders interested in beyond-the-standard-model theories. The recent discovery of a 125 GeV boson at the LHC poses the question as to whether there are light composite scalars which might be identified with such a boson, as an alternative to a standard model Higgs boson. Pre--conformal dynamics might well help these studies~\cite{Sannino:2009za,Chivukula:2012ug}. In our study, such pre-conformal dynamics could manifest itself either with a clear observation of a separation of scales, or with a manifestation of a critical behaviour when approaching $N_f^*$. One possibility is to observe the Miransky-Yamawaki essential singularity~\cite{Miransky:1997}. Alternatively, in an FRG approach~\cite{BraunGies}, the pseudo-critical line is almost linear with $N_f$ for small $N_f$, and displays a singular behaviour when approaching $N_f^*$, which could be the only observable effect beyond Miransky scaling. A {\em jumping} scenario in which the change from a QCD dynamics to the conformal window is abrupt is also a distinct possibility \cite{Antipin:2012sm}.} Clearly, as in any system undergoing a phase transition, the nature and extent of the critical window are purely dynamical questions whose answer cannot be guessed a priori.
Since the underlying dynamics is completely non-perturbative, lattice calculations are the only tool to perform an ab initio, rigorous study of these phenomena, and many lattice studies have recently appeared~\cite{DelDebbio:2010zz}. This paper is one step of our ongoing program \cite{Miura:xQCD12}--\cite{Deuzeman:2008sc} which aims at elucidating the phase diagram of QCD with fundamental fermions on the lattice, and in the continuum. Further studies either with fundamental fermions \cite{Appelquist:2011dp} -- \cite{Fodor:2009wk} or other representations \cite{Finland:MWT,Svetitsky:sextet,Kogut:2010cz,Fodor:2009ar,Fodor:2012ty} have contributed to our current understanding of this challenging field. However, only a subset of these studies has addressed issues related to pre--conformal dynamics \cite{Appelquist:2009ka}-- \cite{Lucini:MWT}, \cite{Miura:2011mc,Fodor:2012ni,Fodor:2012ty}, which are the main theme of this paper. {The direct inspection of theories at fixed $N_f$ is often inconclusive, especially close to the expected threshold $N_f^*$. Also because of this, we feel it is useful to try to observe directly the approach to conformality by monitoring the evolution of the pre--conformal results as a function of $N_f$.} In this paper, we investigate the thermal chiral phase transition for $N_f=0,4,6,8$ colour SU$(N_c=3)$ QCD by using lattice QCD Monte Carlo simulations with staggered fermions. Here, $N_f=6$ and $8$ are expected to be in the important regime, as suggested by the results in Refs.~\cite{Velkovsky:1997fe,Appelquist:2011dp}. We combine our findings with those of our earlier work for $N_f=6$ and $8$ \cite{Miura:2011mc,Deuzeman:2008sc}. This paper grows out of our earlier study \cite{Miura:2011mc} and extends it in several ways: We have accumulated more statistics and added more parameters, and we present here an extended set of simulations and details.
We develop a new scale setting procedure, so that we can more confidently measure the critical temperature on a common reference scale among theories with different flavour content. Furthermore, we present new estimates of the critical number of flavours $N_f^*$. Partly motivated by the recent work \cite{Liao:2012tw}, we introduce a typical interaction strength $\gTC$ at the critical temperature based on our lattice results, and compare it with a four-loop IRFP coupling ($\gIRFP$)~\cite{Ryttov:2012nt} and a critical coupling ($g_{\mathrm{SD}}$) estimated by using a two-loop Schwinger-Dyson equation~\cite{Appelquist:1998rb}. Further, we introduce and discuss the finite temperature version of the Miransky-Yamawaki phase diagram, and propose a strategy to locate the critical number of flavours motivated by the properties of the lattice step-scaling function in the vicinity of the IRFP \cite{Hasenfratz:2011xn}. Some of the new results presented here have been anticipated in a recent proceeding and in talks~\cite{Miura:xQCD12,confx}. This paper is organized as follows: In the next section, we explain the simulation setup. In Section \ref{sec:result}, we show our results for the chiral crossover at finite $T$, for each $N_f$, and then collect the critical lattice couplings associated with the chiral crossovers at $N_f = 0,\ 4,\ 6,\ 8$. In Section \ref{sec:AS}, we investigate the asymptotic scaling of our critical couplings at each $N_f$. In Section \ref{sec:discuss}, we investigate the $N_f$ dependences of the chiral crossovers, and estimate the lower edge of the conformal window $N_f^*$. Finally, in Section \ref{sec:sum}, we provide concluding remarks. The Appendix is devoted to the summary tables of the simulation parameters and the numerical results obtained by analysing the simulation outputs.
\section{Simulation setup}\label{sec:setup} We investigate finite temperature QCD with different numbers of flavours $N_f = (0,\ 4,\ 6,\ 8)$ by utilising the publicly available MILC code~\cite{MILC}. The temperature $T$ is related to the inverse of the lattice temporal extension, \begin{align} &T\equiv \frac{1}{a(\beta_{\mathrm{L}})\cdot N_t}\ ,\label{eq:T} \end{align} and we control it by varying $\betaL$ at fixed $N_t$. The number of lattice points in the spatial directions $N_s$ is chosen such that the aspect ratio $N_s/N_t \ge 2$ in all our runs. For each $N_f$, we use a single bare fermion mass $ma = 0.02$. The simulation parameters used in this study are summarised in \ref{app:sum_table}. \begin{figure*} \begin{center} \includegraphics[width=6.5cm]{./PBP_traj_6_24_12_xx_002.eps} \includegraphics[width=6.5cm]{./PBP_Jack_6_24_12_xx_002.eps} \caption{ Left: The Monte Carlo trajectories of the chiral condensate (PBP) obtained by using the lattice volume $24^3\times 12$ just before the chiral crossover $\betaL = 5.45 - 5.50$ at $N_f = 6$. Right: The jackknife errors as a function of the bin-size for the trajectories shown in the left panel. } \label{Fig:traj} \end{center} \end{figure*} {\subsection{Action and algorithm}} The setup for the action explained below is the same as the one used for $N_f=8$ in Ref.~\cite{Deuzeman:2008sc} up to the number of flavours.
We use an improved version of the staggered action, the Asqtad action~\cite{Lepage:1998vj}, with a one-loop Symanzik \cite{Bernard:2006nj,LuscherWeisz} and tadpole~\cite{LM1985} improved gauge action, \begin{align} S = -\frac{N_f}{4}\mathrm{Tr}\log M[am,U,u_0] + \sum_{i=p,r,pg}\beta_i(g^2_{\mathrm{L}}) \mathrm{Re}\bigl[1-U_{C_i}\bigr]\ ,\label{eq:action} \end{align} where $\gL$ is the lattice bare coupling, and $\beta_i$ are defined as \begin{align} & \bigl( \beta_p,\beta_r,\beta_{pg} \bigr) = \biggl( \frac{10}{\gL^2}, -\frac{\beta_p(1-0.4805\alpha_s)}{20u_0^2}, -\frac{\beta_p}{u_0^2}0.03325\alpha_s \biggr)\ \label{eq:beta} \\ & \alpha_s=-4\log\frac{u_0}{3.0684}\ ,\quad u_0=\langle U_{C_p}\rangle^{1/4}\ . \end{align} The plaquette coupling $\beta_p=10/\gL^2\equiv \beta_{\mathrm{L}}$ is a simulation input. The $M[am,U,u_0]$ in Eq.~(\ref{eq:action}) denotes the matrix for a single flavour Asqtad fermion with bare lattice mass $am$, and $U_{C_i}$ represents the trace of the ordered product of link variables along $C_i$, for the $1\times 1$ plaquettes ($i=p$), the $1\times 2$ and $2\times 1$ rectangles ($i=r$), and the $1\times 1\times 1$ parallelograms ($i=pg$), respectively, all divided by the number of colours. The tadpole factor $u_0$ is determined by performing zero temperature simulations on the $12^4$ lattice (the second column of Tables~\ref{tab:Nf0_Nt4} - \ref{tab:Nf8_Nt8}), and used as an input for finite temperature simulations. To generate configurations with mass degenerate dynamical flavours, we have used the rational hybrid Monte Carlo algorithm (RHMC)~\cite{Clark:2006wq}, which allows one to simulate an arbitrary number of flavours $N_f$ by varying the number of pseudo-fermions. The quenched ($N_f = 0$) system has been realised by using a large bare fermion mass $ma = 1.0$ in the four flavour system.
The six flavour system has been realised by using two pseudo-fermions in the rational approximation with a quarter root technique, $N_f = 4\cdot 2\cdot 3/4 = 6$. We have assumed that the rooting does not affect the results within the accuracy of our simulation. For the other numbers of flavours ($N_f = 0,\ 4,\ 8$), we do not use rooting. We have adjusted the micro-canonical step length $\delta \tau$ and the step length of a single trajectory $\Delta\tau=20\times \delta\tau$ to realise a $75-80$ percent Metropolis acceptance. Details are reported in the fourth column of Tables~\ref{tab:Nf0_Nt4} - \ref{tab:Nf8_Nt8}. For each parameter set, we have collected a number of trajectories ranging from one thousand to ten thousand, the latter closer to the chiral crossover regime. \subsection{Observables} The focus of this paper is the analysis of the chiral phase transition. The fundamental observable is then the order parameter for chiral symmetry, the chiral condensate: \begin{equation} a^3\langle\bar{\psi}\psi\rangle = \frac{N_f}{4N_s^3N_t} \Big\langle\mathrm{Tr\bigl[M^{-1}\bigr]}\Big\rangle \ ,\label{eq:PBP} \end{equation} where $N_s~(N_t)$ represents the number of lattice sites in the spatial (temporal) direction. We have measured $a^3\langle\bar{\psi}\psi\rangle$ by using a stochastic estimator with 20 repetitions. We have also measured connected and disconnected chiral susceptibilities, \begin{align} a^2\chi_{\mathrm{conn}} &= -\frac{N_f}{4 N_s^3 N_t} \langle \mathrm{Tr} \left[( MM )^{-1}\right ] \rangle \ ,\nonumber\\ a^2\chi_{\mathrm{disc}} &= \frac{N_f^2}{16 N_s^3 N_t} \left [ \langle \mathrm{Tr} \left[M^{-1}\right] ^2\rangle - \langle \mathrm{Tr} \left[M^{-1}\right] \rangle^2\right ]\ . \end{align} Here we have conveniently written the chiral condensate and its susceptibilities in terms of traces of (products of) the staggered fermion matrix $M$.
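As an aside, the stochastic estimation of such traces can be sketched in a few lines. The matrix below is a small dense stand-in for the (large, sparse) staggered operator, and all sizes and parameters are invented for illustration; the MILC implementation differs in detail (e.g. noise type and solver):

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_trace_inv(M, n_noise):
    """Estimate Tr[M^{-1}] as the average of eta^T M^{-1} eta
    over Z2 noise vectors eta (entries +-1)."""
    est = 0.0
    for _ in range(n_noise):
        eta = rng.choice([-1.0, 1.0], size=M.shape[0])  # Z2 noise source
        est += eta @ np.linalg.solve(M, eta)            # one inversion per source
    return est / n_noise

# Small dense stand-in for the (large, sparse) staggered operator.
A = np.diag(np.linspace(1.0, 2.0, 50)) + 0.01 * np.ones((50, 50))
exact = np.trace(np.linalg.inv(A))
noisy = stochastic_trace_inv(A, n_noise=200)
print(exact, noisy)
```

With nearly diagonal matrices the Z2 estimator converges quickly, since its variance is controlled by the off-diagonal elements of $M^{-1}$; for a realistic fermion matrix many more sources, or variance reduction, would be needed.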
We note that the MILC convention for the chiral condensate gives twice the value of Eq.~(\ref{eq:PBP}), as will be noted several times when presenting results in the following sections. We have measured the susceptibilities $a^2\chi_{\mathrm{conn}}$ and $a^2\chi_{\mathrm{disc}}$ separately. The disconnected chiral susceptibility is a non-local quantity which can be estimated from the variance of the bulk behaviour of the chiral condensate. Since we have used the stochastic estimator for the chiral condensate measurements, the variance would automatically include part of the connected contributions through random sources multiplying themselves. Following Bernard et al. \cite{Bernard:1996zw}, we take this effect into account in our estimate of the disconnected part $a^2\chi_{\mathrm{disc}}$ by considering only the off-diagonal elements of the covariance matrix for the random sources. The measurements of $\PBP$ and $a^2\chi_{\mathrm{conn,disc}}$ allow us to construct two physically relevant quantities: the scalar and pseudo-scalar susceptibilities, \begin{align} \chi_\sigma &\equiv \frac {\partial \langle \bar \psi \psi\rangle}{\partial m} = \chi_\mathrm{conn} + \chi_\mathrm{disc}\ ,\label{eq:sus_sig}\\ \chi_\pi &= \frac {\langle \bar \psi \psi\rangle}{m}\ . \end{align} Their associated cumulant \begin{equation} R_\pi \equiv \frac{\chi_\sigma}{\chi_\pi}\ , \label{eq:R_pi} \end{equation} is a probe of chiral symmetry \cite{Deuzeman:2008sc,Kocic:1992is}. This is owing to the fact that $\chi_\sigma$ and $\chi_\pi$ are related through Ward identities to the spacetime volume integral of the scalar ($\sigma$) and pseudo-scalar ($\pi$) propagators. In the chiral limit, the susceptibility ratio $R_{\pi}$ should be one in the chirally symmetric regime due to the degeneracy of the chiral partners, while it should be zero in the spontaneously broken phase. Even for a finite bare fermion mass, $R_{\pi}$ still provides a clear signal of the chiral transition or crossover.
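The two limits of $R_\pi$ quoted above can be checked on a toy parametrisation of the mass dependence of the condensate; all numbers below are invented for illustration and are not lattice data:

```python
# Toy check of the limits of R_pi = chi_sigma / chi_pi.
def R_pi(pbp, dpbp_dm, m):
    chi_sigma = dpbp_dm        # chi_sigma = d<psibar psi>/dm
    chi_pi = pbp / m           # chi_pi = <psibar psi>/m (Ward identity)
    return chi_sigma / chi_pi

m, c = 0.02, 3.0
# Chirally symmetric regime: <psibar psi> = c*m (pure linear mass term) -> R_pi = 1.
R_sym = R_pi(c * m, c, m)
# Broken phase: <psibar psi> = Sigma + c*m with condensate Sigma >> c*m -> R_pi small.
Sigma = 0.5
R_brk = R_pi(Sigma + c * m, c, m)
print(R_sym, R_brk)
```

In this toy model $R_\pi = c\,m/(\Sigma + c\,m)$ in the broken phase, so the ratio interpolates smoothly between zero and one as the condensate melts, which is why its inflection point tracks the crossover.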
{In particular, $R_\pi \sim 1.0$ in the chirally symmetric regime holds true as long as the chiral condensate is dominated by the linear mass term contribution.} It turns out that $R_\pi$ allows the identification of a pseudo-critical coupling $\betaLC$ associated with the chiral crossover, {which, in the cases we have studied, coincides within errors with the pseudo-critical coupling determined from the maximum of the chiral susceptibility.} In the gauge sector, we measure the Polyakov loop, \begin{equation} L = \frac{1}{N_cN_s^3}\sum_{\mathbf{x}} \mathrm{Re} \bigg\langle \mathrm{tr}_c\prod_{t=1}^{N_t}U_{4,t\mathbf{x}} \bigg\rangle \ ,\label{eq:PLOOP} \end{equation} where $\mathrm{tr}_c$ denotes the trace in colour space, and $U_{4,t\mathbf{x}}$ is the temporal link variable. From the variance of $L$, we also evaluate the susceptibility of the Polyakov loop. \subsection{Statistics, and error analysis} In the vicinity of the chiral crossover, we have a long auto-correlation time, and thermalization checks require extra care. Here we explain our analyses by using a typical example: The left panel of Fig. \ref{Fig:traj} displays the evolution of the chiral condensate on the lattice volume $24^3\times 12$ just before the chiral crossover $\betaL = 5.45$ and $5.50$ at $N_f = 6$, one of the most time-consuming examples in our simulations. (In order to shorten the simulation time, we started the evolution from thermalized configurations obtained at $\betaL < 5.45$.) We have computed the ensemble averages by using the last $2500$ ($2000$) trajectories at $\betaL = 5.45$ ($5.50$), and we have confirmed that they are consistent with those obtained by using the last $2000$ ($1500$) trajectories. We have then used the latter trajectories to evaluate the average. In the cases we are considering, this corresponds to the data found to the right-hand side of the vertical green (dashed) lines in the left panel of Fig. \ref{Fig:traj}.
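The thermalization cut and the bin-size dependence of the jackknife error shown in the right panel of Fig. \ref{Fig:traj} can be mimicked on synthetic autocorrelated data; everything below is an illustrative sketch with invented parameters, not our analysis code:

```python
import numpy as np

def jackknife_error(data, bin_size):
    """Binned jackknife error of the mean: average the correlated data into
    bins, then do a delete-one-bin jackknife over the bin averages."""
    n_bins = len(data) // bin_size
    bins = data[:n_bins * bin_size].reshape(n_bins, bin_size).mean(axis=1)
    mean = bins.mean()
    loo = (mean * n_bins - bins) / (n_bins - 1)   # leave-one-bin-out means
    return np.sqrt((n_bins - 1) / n_bins * np.sum((loo - loo.mean()) ** 2))

# Synthetic autocorrelated chain standing in for a condensate history.
rng = np.random.default_rng(1)
x, chain = 0.0, []
for _ in range(4000):
    x = 0.95 * x + rng.normal()        # AR(1) noise: long autocorrelation
    chain.append(0.06 + 0.001 * x)
chain = np.array(chain)[1000:]         # drop the thermalization trajectories

# The error estimate grows with the bin size and then levels off.
errs = {s: jackknife_error(chain, s) for s in (1, 25, 100, 400)}
print(errs)
```

The naive ($s_{\mathrm{bin}}=1$) error underestimates the true uncertainty on correlated data; once the bin size exceeds the autocorrelation time the jackknife error levels off, which is the plateau read off in the right panel of Fig. \ref{Fig:traj}.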
We divide the obtained data set into several bins and utilise the jackknife method in order to take into account the auto-correlation effect in the error estimate. As the bin-size $s_{\mathrm{bin}}$ becomes larger, the jackknife error increases (the right panel of Fig.~\ref{Fig:traj}), which is due to the decrease of the effective number of (uncorrelated) data ($n_{\mathrm{ave}}/s_{\mathrm{bin}}$, where $n_{\mathrm{ave}}$ is the number of trajectories used to calculate the average). For a sufficiently large $s_{\mathrm{bin}}$, the jackknife errors at $\betaL = 5.45$ and $5.50$ level off, giving a reliable error estimate. The results obtained from the above procedure are: \begin{center} \begin{tabular}{ccccc} \hline\hline $\betaL$ & $n_{\mathrm{traj}}$ & $n_{\mathrm{ave}}$ & $s_{\mathrm{bin}}$ & $2\PBP$ \\ \hline $5.45$ & $3100$ & $2000$ & $400$ & $0.0622(2)$ \\ $5.50$ & $4100$ & $1500$ & $375$ & $0.0570(5)$ \\ \hline\hline \end{tabular} \end{center} We have performed the analyses explained here for all values of $\betaL$, $N_f$, and the lattice volumes. The results are summarised in \ref{app:sum_table}. \section{Results on the lattice thermal transition}\label{sec:result} In this section, we show our simulation results on the chiral and deconfinement crossover for the different numbers of flavours $N_f$. \begin{table*} \caption{ Summary of the (pseudo) critical lattice couplings $\betaLC$ for the theories with $N_f=0,~4,~6,~8$, $am=0.02$ and varying $N_t=4,~6,~8,~12$. The entries with $\ast$ are updates of our previous results \protect\cite{Miura:2011mc}. The entries with $\dagger$ are quoted from our previous study of $N_f = 8$ \protect\cite{Deuzeman:2008sc}.
}\label{Tab:bc} \begin{center} \begin{tabular}{c|cccc} \hline\hline $N_f\backslash N_t$ & $4$& $6$& $8$& $12$\\ \hline $0$ & $7.35\pm 0.05$& $7.97^{\ast}\pm 0.07$& $8.26\pm 0.06$& $-$\\ $4$ & $5.65\pm 0.05$& $6.00^{\ast}\pm 0.05$& $6.15\pm 0.15$& $-$\\ $6$ & $4.675^{\ast}\pm 0.05$& $5.025^{\ast}\pm 0.05$& $5.20^{\ast}\pm 0.05$& $5.55^{\ast}\pm 0.1$\\ $8$ & $-$& $4.1125^{\dagger}\pm 0.0125$& $4.275\pm 0.05$& $4.34^{\dagger}\pm 0.04$\\ \hline\hline \end{tabular} \end{center} \end{table*} We have used a common bare fermion mass $ma = 0.02$ for all simulations at finite $N_f$. According to the Pisarski-Wilczek scenario~\cite{Pisarski:1983ms}, the most likely possibility for $N_f \ge 3$ is a first order chiral transition in the chiral limit. Introducing a bare fermion mass, the first order phase transition will eventually turn into a crossover for masses larger than some critical mass. Since the chiral condensate looks smooth in our results, we are most likely above the critical endpoint in all the cases we have studied, and we use the terminology of ``chiral crossover'' in the following. The finite bare mass $ma=0.02$ might have a different physical relevance at each $N_f$, as well as for different bare couplings at a fixed $N_f$. It then remains to be seen how our results would change in the chiral limit, and we hope to come back to this point in a future study. Since we have noted that at strong coupling and small masses the improvement term in the action might be responsible for the spurious phases \cite{Deuzeman:2012ee,daSilva:2012wg} observed also in Refs. \cite{Cheng:2011ic,Deuzeman:2011pa}, we might also consider an unimproved action in a future study. Before entering into details, let us summarise our main results, {\em i.e.} the critical lattice couplings $\betaLC$ associated with the chiral crossover, in Table \ref{Tab:bc}.
For $N_f = (4,6,8)$ we have observed that the peak position of the chiral susceptibility $a^2\chi_{\sigma}$, whenever clearly defined, coincides within the errors with the inflection point of $R_{\pi}$ defined in Eq.~(\ref{eq:R_pi}), as well as with the inflection point of the chiral condensate and that of the Polyakov loop. {This indicates that the crossover region is rather narrow, as different indicators give consistent pseudo-critical points. We then quote the common pseudo-critical coupling, with a conservative error estimate.} For the quenched ($N_f = 0$) case, we have extracted the pseudo-critical coupling from the deconfinement crossover by evaluating the peak position of the Polyakov loop susceptibility. In the following subsections, we present these results in detail, starting from $N_f = 6$ and $8$ in the first two subsections, and continuing with $N_f = 4$ and $N_f=0$. The reader who is not interested in these technical details is advised to skip the rest of this Section and proceed directly to the next one. \subsection{Chiral crossover at $N_f = 6$} \label{subsec:Nf6} We show the $N_f=6$ results for a fixed bare fermion mass $ma = 0.02$. In Fig.~\ref{Fig:Nf6}, the chiral condensate $a^3\langle\bar{\psi}\psi\rangle$ (PBP, red $\bigcirc$), the real part of the Polyakov loop $L$ (Re[PLOOP], blue $\Box$), the chiral susceptibility ($\chi_{\sigma}$, red $+$), and the chiral susceptibility ratio ($R_{\pi}$, blue $\times$) are displayed as a function of the lattice coupling $\betaL = 10/\gL^2$. The first, second, third, and fourth lines in the figure show the results obtained by using temporal extensions $N_t = 4,\ 6,\ 8$, and $12$, respectively. We shall now extract the critical lattice couplings $\betaLC$ associated with the thermal chiral crossover from these results.
\begin{figure*} \begin{center} \includegraphics[width=6.5cm]{PBPL_6_16_4_beta_002_xx.eps} \includegraphics[width=6.5cm]{PBP_sus_6_16_4_beta_002_xx.eps} \includegraphics[width=6.5cm]{PBPL_6_16_6_beta_002_xx.eps} \includegraphics[width=6.5cm]{PBP_sus_6_16_6_beta_002_xx.eps} \includegraphics[width=6.5cm]{PBPL_6_24_8_beta_002_xx.eps} \includegraphics[width=6.5cm]{PBP_sus_6_24_8_beta_002_xx.eps} \includegraphics[width=6.5cm]{PBPL_6_24_12_beta_002_xx.eps} \includegraphics[width=6.5cm]{PBP_sus_6_24_12_beta_002_xx.eps} \caption{The $N_f = 6$ results for a fixed bare fermion mass $ma = 0.02$. The first, second, third, and fourth lines show the results obtained by using temporal extensions $N_t = 4,\ 6,\ 8$, and $12$, respectively. In each line, the left panel shows the chiral condensate in lattice units (PBP, red $\bigcirc$) and the real part of the Polyakov loop (Re[PLOOP], blue $\Box$), and the right panel displays the chiral susceptibility ($\chi_{\sigma}$, red $+$) and the chiral susceptibility ratio ($R_{\pi}$, blue $\times$), as a function of $\betaL$. For $N_t = 4$, the Gaussian fit for the chiral susceptibility has been performed in the range $[4.4,4.9]$ to capture the peak structure. }\label{Fig:Nf6} \end{center} \end{figure*} As shown in the left panel of the first line in Fig.~\ref{Fig:Nf6}, the largest decrease of the chiral condensate (PBP, red $\bigcirc$) (as well as a drastic increase of the real part of the Polyakov loop (Re[PLOOP], blue $\Box$)) is found between $\betaL=4.65$ and $4.70$. Thus, we expect the chiral crossover in this region. As shown in the right panel of the first line in Fig.~\ref{Fig:Nf6}, the chiral susceptibility $a^2\chi_{\sigma}$ (red $+$) has a clear peak at $\betaL = 4.65$. In order to have a practical and coherent procedure to estimate the maximum, we have performed Gaussian fits: the Gaussian fit for the susceptibilities in the range $[4.4,4.9]$ leads to a maximum at a slightly larger $\betaL$ (red dashed line).
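Such a Gaussian peak extraction can be illustrated with a short numerical sketch. The susceptibility scan below is invented for illustration (it is not our data); a parabola is fitted to $\log\chi$ around the maximum, which locates the same peak position as a Gaussian fit in $\chi$:

```python
import numpy as np

# Hypothetical susceptibility scan with a peak near beta ~ 4.65 (invented data).
beta = np.array([4.40, 4.50, 4.60, 4.65, 4.70, 4.80, 4.90])
chi  = np.array([0.55, 0.80, 1.30, 1.55, 1.45, 0.90, 0.60])

# A Gaussian peak in chi is a downward parabola in log(chi); fitting
# log(chi) = a*beta^2 + b*beta + c near the maximum puts the peak at -b/(2a).
i = int(np.argmax(chi))
sel = slice(max(i - 2, 0), i + 3)            # five points around the maximum
a, b, c = np.polyfit(beta[sel], np.log(chi[sel]), 2)
beta_c = -b / (2.0 * a)
print(beta_c)
```

The fitted $\beta_c$ lands between the two highest points, mirroring how the fitted maxima (red dashed lines) can sit slightly off the raw maximum of the scanned values.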
Further, the susceptibility ratio $R_{\pi}$ (blue $\times$) has an inflection point around $\betaL = 4.65 - 4.70$. For larger $\betaL$, the rate of increase of $R_{\pi}$ is significantly reduced, and $R_{\pi}$ approaches unity. Thus, all observables consistently indicate the pseudo-critical coupling to be $\betaLC = 4.675\pm 0.05$ for $(N_f,N_t) = (6,4)$. The error is determined to include the next-to-neighbour data and the maximum of the Gaussian fit. The second line in Fig.~\ref{Fig:Nf6} displays the results for $N_t = 6$. As shown in the left panel, the largest decrease of the chiral condensate (PBP, red $\bigcirc$) (as well as a drastic increase of the real part of the Polyakov loop (Re[PLOOP], blue $\Box$)) is found between $\betaL=5.00$ and $5.05$, and we expect the chiral crossover in this region. As shown in the right panel, the chiral susceptibility $a^2\chi_{\sigma}$ (red $+$) has a peak at $\betaL = 5.05$, and the Gaussian fit for the susceptibilities in the whole range of $\betaL$ has a maximum at a slightly smaller $\betaL = 5.0$ (red dashed line). The susceptibility ratio $R_{\pi}$ (blue $\times$) has an inflection point around $\betaL = 5.00 - 5.05$, and then reaches a plateau. All observables consistently indicate the pseudo-critical coupling to be $\betaLC = 5.025\pm 0.05$ for $(N_f,N_t) = (6,6)$. The error is chosen to comfortably include both $\betaL = 5.0$ and $5.05$. The third line in Fig.~\ref{Fig:Nf6} shows the results for $N_t = 8$. As indicated by the left panel, both the chiral condensate and the Polyakov loop look smooth almost everywhere, and it is difficult to locate the crossover point from them. As shown in the right panel, the chiral susceptibility $a^2\chi_{\sigma}$ (red $+$) has a peak at $\betaL = 5.2$, and the Gaussian fit for the susceptibilities in the whole range of $\betaL$ has a maximum at a slightly smaller $\betaL = 5.17$ (red dashed line).
The susceptibility ratio $R_{\pi}$ (blue $\times$) exhibits the largest variation between $\betaL = 5.15$ and $5.2$, after which the rate of increase of $R_{\pi}$ is reduced and $R_{\pi}$ eventually approaches unity. From the peak position of $a^2\chi_{\sigma}$, we estimate the critical coupling to be around $\betaLC = 5.20\pm 0.05$ for $(N_f,N_t) = (6,8)$. The error is determined to include the next neighbour data, the maximum of the Gaussian fit for $\chi_{\sigma}$, and the $R_{\pi}$ inflection point. Finally, we analyse the results for $N_t = 12$, the largest temporal extension in our $N_f = 6$ simulations. The $\betaL$ dependence of the chiral condensate is found to be particularly smooth over the whole range $\betaL = 4.7 - 5.7$. Note that in this case, the aspect ratio ($N_s/N_t$) is only two, and larger volumes would be required to reach a comparable clarity in the signal. As shown in the left panel of the final line in Fig.~\ref{Fig:Nf6}, the onset for the Polyakov loop at $\betaL = 5.525$ is still appreciable (blue $\Box$). We notice that, for the smaller temporal extensions $N_t = 4 - 8$, the increase of the Polyakov loop has so far been found just before the chiral crossover, even though the Polyakov loop itself is not associated with the chiral dynamics. Based on this experience, we assume that the chiral crossover at $N_t = 12$ is in the vicinity of the onset of the Polyakov loop, and carefully investigate the corresponding region $\betaL = 5.35 - 5.60$. The chiral condensate does not show any clear signal (red $\bigcirc$ in the left panel). As shown in the right panel, the chiral susceptibility $a^2\chi_{\sigma}$ (red $+$) has a small peak-like structure at $\betaL = 5.575$, and a bump-like structure at $\betaL = 5.50$. The chiral susceptibility ratio $R_\pi$ (blue $\times$) has the largest increase between $\betaL = 5.55$ and $5.575$, and tends to be flat for $\betaL \geq 5.575$.
Thus, the critical lattice coupling would be in the range $5.50 \leq \betaLC \leq 5.575$. Here, we employ a conservative estimate $\betaLC = 5.55\pm 0.1$, which covers the whole candidate range. Our collection of $\betaLC$ values at $N_f = 6$ is found in the third line of Table~\ref{Tab:bc}. As will be shown in the next subsection, the $N_t$-dependent nature of $\betaLC$ at $N_f = 6$ (a thermal scaling) is associated with the uniqueness of the physical critical temperature, indicating chiral (non-conformal) dynamics at $N_f = 6$. \subsection{Chiral crossover at $N_f = 8$} \label{subsec:Nf8} In our previous paper~\cite{Deuzeman:2008sc}, we studied the chiral phase transition at $N_f = 8$ by using two lattice temporal extensions: $N_t = 6$ and $12$. One of the main results was that the chiral phase transition at $N_f = 8$ still showed a thermal scaling property, which indicated the existence of a typical scale associated with the chiral dynamics rather than conformality. Here we add data computed at $N_t = 8$, and confirm the thermal scaling at $N_f = 8$, for this largish mass. \begin{figure*} \begin{center} \includegraphics[width=6.6cm]{./PBPL_8_24_8_beta_002_xx.eps} \includegraphics[width=6.6cm]{./PBP_sus_8_24_8_beta_002_xx_II.eps} \caption{The $N_f = 8$ results obtained by using the $24^3\times 8$ lattice volume with $ma = 0.02$: The chiral condensate in lattice units (PBP, red $\bigcirc$ in left panel), the real part of the Polyakov loop (Re[PLOOP], blue $\Box$ in left panel), the chiral susceptibility ($\chi_{\sigma}$, red $+$ in right panel), and the chiral susceptibility ratio ($R_{\pi}$, blue $\times$ in right panel) are shown as a function of $\betaL$. }\label{Fig:Nf8} \end{center} \end{figure*} The left panel of Fig.~\ref{Fig:Nf8} shows the ensemble averages of the chiral condensate $a^3\langle\bar{\psi}\psi\rangle$ (PBP, red $\bigcirc$) and the real part of the Polyakov loop $L$ (Re[PLOOP], blue $\Box$) as a function of $\betaL$.
We observe the largest decrease of the chiral condensate between $\betaL = 4.25$ and $4.30$, while the real part of the Polyakov loop starts growing around $\betaL = 4.25$. Although the error is large, the chiral susceptibility ratio $R_{\pi}$ seems to have a larger increase between $\betaL = 4.25$ and $4.30$. The large error of $R_{\pi}$ at $\betaL = 4.25$ comes from a very long auto-correlation in the Monte Carlo trajectories, which would have required much larger statistics. The long correlation hints at criticality. All observations consistently indicate the critical coupling to be $\betaLC = 4.275\pm0.05$. Combining with the $N_t = 6$ and $12$ data~\cite{Deuzeman:2008sc}, we summarise the critical couplings $\betaLC$ at $N_f = 8$ in the final line of Table~\ref{Tab:bc}. Here we should add some caveats on the $N_f = 8$ results: First, we have not observed a peak-like structure in the chiral susceptibility $\chi_{\sigma}$. We should probably study larger spatial volumes, with aspect ratios similar to those studied in the other cases. The location of the pseudo-critical point might change. For the time being, we rely on the experience with the other systems, and infer the pseudo-critical coupling from the other observables. Second, even in the strong coupling region $\beta < \betaLC$, $R_{\pi}$ shows a relatively large value $\sim 0.7 - 0.8$. We should go to even stronger coupling and smaller masses before observing a clear mass gap. And third, the $N_t$ dependence of $\betaLC$ shows a large deviation from the two-loop asymptotic scaling law, as will be shown in the next subsection. This is not surprising, given that the couplings explored for $N_f=8$ are larger than in the other cases. This could imply something which cannot be captured at two loops, for example, pre-conformal dynamics. Clearly, these caveats call for more detailed and quantitative lattice studies with larger lattice sizes and a smaller bare fermion mass before drawing definite conclusions on $N_f=8$.
We note a recent study claiming that conformality emerges for $N_f=8$ for small enough quark masses \cite{Schaich:2012fr}. \subsection{Chiral crossover at $N_f = 4$} \label{subsec:Nf4} In Fig.~\ref{Fig:Nf4}, the chiral condensate $a^3\langle\bar{\psi}\psi\rangle$ (PBP, red $\bigcirc$), the real part of the Polyakov loop $L$ (Re[PLOOP], blue $\Box$), the chiral susceptibility ($\chi_{\sigma}$, red $+$), and the chiral susceptibility ratio ($R_{\pi}$, blue $\times$) are displayed as a function of the lattice coupling $\betaL = 10/\gL^2$. The first, second, and third lines in the figure show the results obtained by using temporal extensions $N_t = 4,\ 6$, and $8$, respectively. \begin{figure*} \begin{center} \includegraphics[width=6.6cm]{PBPL_4_16_4_beta_002_xx.eps} \includegraphics[width=6.6cm]{PBP_sus_4_16_4_beta_002_xx.eps} \includegraphics[width=6.6cm]{PBPL_4_16_6_beta_002_xx.eps} \includegraphics[width=6.6cm]{PBP_sus_4_16_6_beta_002_xx.eps} \includegraphics[width=6.6cm]{PBPL_4_24_8_beta_002_xx.eps} \includegraphics[width=6.6cm]{PBP_sus_4_24_8_beta_002_xx.eps} \caption{The $N_f = 4$ results for a fixed bare fermion mass $ma = 0.02$. The first, second, and third lines show the results obtained by using temporal extensions $N_t = 4,\ 6$, and $8$, respectively. In each line, the left panel shows the chiral condensate in lattice units (PBP, red $\bigcirc$) and the real part of the Polyakov loop (Re[PLOOP], blue $\Box$), and the right panel displays the chiral susceptibility ($\chi_{\sigma}$, red $+$) and the chiral susceptibility ratio ($R_{\pi}$, blue $\times$), as a function of $\betaL$.
{For $N_t = 4$ and $6$, the Gaussian fits for the chiral susceptibilities have been performed in the ranges $[5.4,5.8]$ and $[5.8,6.2]$, respectively.} }\label{Fig:Nf4} \end{center} \end{figure*} As shown in the left panel of the first line in Fig.~\ref{Fig:Nf4}, the largest decrease of the chiral condensate (PBP, red $\bigcirc$) (as well as a drastic increase of the real part of the Polyakov loop (Re[PLOOP], blue $\Box$)) is found between $\betaL=5.60$ and $5.65$, and we expect the chiral crossover in this region. As shown in the right panel of the first line in Fig.~\ref{Fig:Nf4}, the chiral susceptibility $a^2\chi_{\sigma}$ (red $+$) reaches a maximum at $\betaL = 5.65$. The Gaussian fit for the susceptibilities in the range $[5.4,5.8]$ leads to a maximum at a slightly larger $\betaL$ (red dashed line). Further, the susceptibility ratio $R_{\pi}$ (blue $\times$) has an inflection point around $\betaL = 5.60 - 5.65$. For larger $\betaL$, the rate of increase of $R_{\pi}$ is significantly reduced, and $R_{\pi}$ eventually approaches unity. Thus, all observables consistently indicate the pseudo-critical coupling to be $\betaLC = 5.65\pm 0.05$ for $(N_f,N_t) = (4,4)$. The error is determined to include the next-to-neighbour data and the maximum of the Gaussian fit. The second line in Fig.~\ref{Fig:Nf4} displays the results for $N_t = 6$. As shown in the left panel, the chiral condensate (PBP, red $\bigcirc$) is found to be smooth, and it is difficult to locate the chiral crossover from it. The real part of the Polyakov loop (Re[PLOOP], blue $\Box$) starts increasing around $\betaL = 5.95$. Based on our previous experience, the chiral crossover could be around this region. As shown in the right panel, the chiral susceptibility $a^2\chi_{\sigma}$ (red $+$) has a maximum at $\betaL = 6.00$, and the Gaussian fit for the susceptibilities in the range $[5.8,6.2]$ has a maximum at $\betaL = 6.00$ (red dashed line).
From the maximum position of the chiral susceptibilities, we estimate the pseudo-critical coupling to be $\betaLC = 6.00\pm 0.05$ for $(N_f,N_t) = (4,6)$. The error is determined to include the next-to-neighbour data. The susceptibility ratio $R_{\pi}$ (blue $\times$) increases significantly for $\betaL > 5.9$, and then flattens at $\betaL = 6.1$. This behaviour is consistent with the above estimate $\betaLC = 6.00$. The third line in Fig.~\ref{Fig:Nf4} shows the results for $N_t = 8$. As indicated by the left panel, the chiral condensate as well as the Polyakov loop looks smooth almost everywhere, and it is difficult to locate the crossover point from them. As shown in the right panel, the chiral susceptibility $a^2\chi_{\sigma}$ (red $+$) has a peak at $\betaL = 5.95$. Indeed, the susceptibility ratio $R_{\pi}$ (blue $\times$) also shows a bump structure around $\betaL = 5.95$, implying some kind of instability of the system. However, the value of $R_{\pi}$ at $\betaL = 5.95$ turns out to be at most $0.4$, which indicates that chiral symmetry breaking largely persists. In turn, $R_{\pi}$ keeps increasing until it approximately reaches unity at $\betaL = 6.30$; thus, the first peak at $\betaL = 5.95$ does not capture well the position of the chiral crossover. We postpone the precise determination of the chiral crossover, and just provide a rough estimate of the pseudo-critical coupling: The susceptibility ratio $R_{\pi}$ has a large rate of increase in the range $[6.0,6.3]$. We adopt the intermediate value as the pseudo-critical coupling, with the error covering the whole range $[6.0,6.3]$: $\betaLC = 6.15 \pm 0.15$. This also includes the maximum of the Gaussian fit for the chiral susceptibility, $\betaL = 6.04$. The second line of Table~\ref{Tab:bc} provides a summary of $\betaLC$ for $N_f = 4$.
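The peak extraction used above -- a Gaussian fit to the chiral susceptibility, whose maximum defines the pseudo-critical coupling -- can be sketched in a few lines. The following Python fragment is only a minimal illustration on invented, noise-free data (not the analysis code behind the figures): it uses the fact that the logarithm of a Gaussian is a parabola, so a least-squares quadratic fit to $\log\chi_{\sigma}$ locates the peak in closed form.

```python
import math

def gaussian_peak(betas, chis):
    # log(chi) of a Gaussian is a parabola; a least-squares quadratic fit
    # to log(chi) (centred for numerical stability) gives the peak position.
    m = sum(betas) / len(betas)
    xs = [b - m for b in betas]
    ys = [math.log(c) for c in chis]
    S = [sum(x ** k for x in xs) for k in range(5)]
    T = [sum(y * x ** k for x, y in zip(xs, ys)) for k in range(3)]
    A = [[S[0], S[1], S[2]], [S[1], S[2], S[3]], [S[2], S[3], S[4]]]
    # Solve the 3x3 normal equations A c = T by Gaussian elimination.
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        T[i], T[p] = T[p], T[i]
        for r in range(i + 1, 3):
            f = A[r][i] / A[i][i]
            for c in range(i, 3):
                A[r][c] -= f * A[i][c]
            T[r] -= f * T[i]
    c2 = T[2] / A[2][2]
    c1 = (T[1] - A[1][2] * c2) / A[1][1]
    return m - c1 / (2 * c2)      # vertex of the parabola

# Synthetic susceptibility with a peak placed at beta = 5.65 (invented
# amplitude and width, chosen only for this example).
betas = [5.40 + 0.05 * i for i in range(9)]            # 5.40 ... 5.80
chis = [40.0 * math.exp(-(b - 5.65) ** 2 / (2 * 0.08 ** 2)) for b in betas]
print(round(gaussian_peak(betas, chis), 4))            # recovers 5.65
```

In practice the fit range matters, as the discussion above of the $[6.0,6.3]$ window shows; the sketch simply takes all supplied points.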
\subsection{Deconfinement at $N_f = 0$} \label{subsec:Nf0} In this subsection, we estimate the critical lattice coupling $\betaLC$ for deconfinement in the quenched ($N_f = 0$) system. We note that both the deconfinement and chiral transitions are associated with the thermal phase transition from the hadronic phase to the non-Abelian plasma phase, with a drastic increase of the pressure (degrees of freedom). Our interest is then to probe the system at various $N_f$ in light of such a thermal phase transition. In this sense, we regard the deconfinement crossover at $N_f = 0$ as a continuation of the chiral crossover at finite $N_f$. In our setup, this connection is made explicit by the fact that we realise the quenched system by using a large mass in the four flavour system. It should be noted, in any case, that our result for the (pre-)conformal dynamics does not crucially depend on the quenched data. \begin{figure*} \begin{center} \includegraphics[width=6.6cm]{abs_PLOOP_4_16_4_beta_100.eps} \includegraphics[width=6.6cm]{abs_PLOOP_sus_4_16_4_beta_100.eps} \includegraphics[width=6.6cm]{abs_PLOOP_4_16_6_beta_100.eps} \includegraphics[width=6.6cm]{abs_PLOOP_sus_4_16_6_beta_100.eps} \includegraphics[width=6.6cm]{abs_PLOOP_4_24_8_beta_100.eps} \includegraphics[width=6.6cm]{abs_PLOOP_sus_4_24_8_beta_100.eps} \caption{The quenched results. The first, second, and third lines show the results obtained by using temporal extensions $N_t = 4,\ 6$, and $8$, respectively. In each line, the left panel shows the absolute value of the Polyakov loop ($|L|$, $\Box$), and the right panel displays the susceptibility calculated from the variance of $|L|$ ($\chi_{|L|}$, symbol $+$). The Gaussian fits are particularly poor here; however, the identification of the maximum is clear. For $N_t = 8$, the fit has been performed excluding the data at $\betaL > 8.8$, because, as shown in the left panel, the drastic increase of $|L|$ is found for a much smaller $\betaL$.
}\label{Fig:Nf0} \end{center} \end{figure*} In Fig.~\ref{Fig:Nf0}, the thermalised ensemble averages of the absolute value of the Polyakov loop ($|L|$, blue $\Box$) and its susceptibility ($\chi_{|L|}$, red $+$) are displayed as functions of the lattice coupling $\betaL = 10/\gL^2$. The first, second, and third lines in the figure show the results obtained by using temporal extensions $N_t = 4,\ 6$, and $8$, respectively. As shown in the left panel of the first line in Fig.~\ref{Fig:Nf0}, the largest increase of the absolute value of the Polyakov loop ($|L|$, blue $\Box$) is found between $\betaL=7.30$ and $7.35$, and we expect the deconfinement crossover in this region. As shown in the right panel, the susceptibility for $|L|$ (red $+$) has a clear peak at $\betaL = 7.35$; hence we estimate the pseudo-critical coupling to be $\betaLC = 7.35\pm 0.05$ for $(N_f,N_t) = (0,4)$. The error is determined to include the next-to-neighbour data. The second line in Fig.~\ref{Fig:Nf0} displays the results for $N_t = 6$. As shown in the left panel, the largest increase of the absolute value of the Polyakov loop ($|L|$, blue $\Box$) is found between $\betaL=7.80$ and $7.90$, and we expect the deconfinement crossover in this region. The maximum of the susceptibility evaluated from $|L|$ (red $+$) is observed at $\betaL = 7.9$. The large error indicates the long correlation time of the Monte Carlo trajectories. The Gaussian fit for the susceptibility has a maximum at $\betaL = 7.97$. From this, we estimate the pseudo-critical coupling to be $\betaLC = 7.97\pm 0.07$ for $(N_f,N_t) = (0,6)$. The error is determined to include the next-to-next-to-neighbour data from the maximum point. The third line in Fig.~\ref{Fig:Nf0} represents the results for $N_t = 8$. As shown in the left panel, the absolute value of the Polyakov loop ($|L|$, blue $\Box$) starts increasing around $\betaL = 8.15$, and we expect the deconfinement crossover in this region.
The maximum of the susceptibility evaluated from $|L|$ (red $+$) is observed at $\betaL = 8.25$. Again, we find a large error in the vicinity of the maximum, indicating the long correlation time of the Monte Carlo trajectories. The Gaussian fit for the susceptibility in the range $[7.8,8.6]$ has a maximum at $\betaL = 8.26$. Note that the fit range sufficiently covers the whole region of the drastic increase of $|L|$ shown in the left panel. We adopt the maximum of the Gaussian fit as a critical coupling: $\betaLC = 8.26\pm 0.06$ for $(N_f,N_t) = (0,8)$. The error is determined to include the next-to-next-to-neighbour data from the maximum point. The first line of Table~\ref{Tab:bc} provides a summary of $\betaLC$ for $N_f = 0$. The $N_t$ dependent nature of $\betaLC$ reflects the thermal nature of the crossover. \section{Asymptotic scaling analyses for chiral phase transition} \label{sec:AS} In this section, we investigate the asymptotic scaling of the pseudo-critical temperature $T_c$, \begin{align} &T_c\equiv \frac{1}{a(\betaLC)\cdot N_t}\ ,\label{eq:Tc} \end{align} where $\betaLC$ has been computed in the previous section, and discuss the connection to the continuum physics. In the first Subsection \ref{subsec:TcL}, we introduce the normalised critical temperature $T_c/\Lambda_{\mathrm{L/E}}$ (see e.g. \cite{Gupta:2000hr}), where $\Lambda_{\mathrm{L}}$ ($\Lambda_{\mathrm{E}}$) represents the lattice (E-scheme) Lambda-parameter defined in the two-loop perturbation theory with or without a renormalisation group inspired improvement~\cite{CAllton}. Then, in Subsection \ref{subsec:TcL_Nt}, the asymptotic scaling will be assessed by studying the $N_t$ (in)dependence of $T_c/\Lambda_{\mathrm{L/E}}$ for each $N_f$.
\subsection{Normalised critical temperature $T_c/\Lambda_{\mathrm{L/E}}$}\label{subsec:TcL} We consider the two-loop beta-function \begin{align} &\beta(g) =-(b_0 {g}^3 + b_1 {g}^5)\ ,\label{eq:beta_func}\\ &b_0 = \frac{1}{(4\pi)^2} \Biggl( \frac{11C_2[G]}{3}-\frac{4T[F]N_f}{3} \Biggr)\ ,\label{eq:b0}\\ &b_1 = \frac{1}{(4\pi)^4} \Biggl( \frac{34(C_2[G])^2}{3} -\biggl(\frac{20C_2[G]}{3}+4C_2[F]\biggr)T[F]N_f \Biggr)\ ,\label{eq:b1} \end{align} with $(C_2[G],\,C_2[F],\,T[F])=(N_c,\,(N_c^2-1)/(2N_c),\, 1/2)$. The coupling $g$ can be either the lattice bare coupling $\gL = \sqrt{10/\betaL}$ or the E-scheme renormalised coupling $\gE = \sqrt{3(1-\langle P \rangle(\gL))}$, where $\langle P\rangle(\gL)$ is the zero temperature plaquette value. If the one-loop perturbation theory holds exactly, the E-scheme coincides with the lattice scheme. Integrating Eq.~(\ref{eq:beta_func}), we obtain the well-known two-loop asymptotic scaling relation, \begin{align} R(\gLE)\equiv a(\gLE)\Lambda_{\mathrm{L/E}} = \bigl(b_0\gLE^2\bigr)^{-b_1/(2b_0^2)} \exp\biggl[ \frac{-1}{2b_0\gLE^2} \biggr] \ ,\label{eq:RL} \end{align} where $\Lambda_{\mathrm{L}}$ ($\Lambda_{\mathrm{E}}$) is the lattice (E-scheme) Lambda-parameter. To take into account higher order corrections, we have also considered the renormalisation group inspired improvement~\cite{CAllton} \begin{align} R^{\mathrm{imp}}(\betaLE)= \LEI~a(\betaLE) \equiv \frac{R(\betaLE)}{1+h} \times \Biggl[ 1 + h\ \frac{R^2(\betaLE)}{R^2(\beta_0)} \Biggr]\ , \label{eq:RL_imp} \end{align} where $\betaLE = 10/(\gLE)^2$. The coupling $\beta_0$ can be set arbitrarily, and the parameter $h$ is adjusted so as to minimise the scaling violation. Note that $h = 0$ reproduces the standard asymptotic scaling law Eq.~(\ref{eq:RL}).
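The coefficients $b_0$, $b_1$ and the standard two-loop scaling relation are straightforward to evaluate numerically. The following Python sketch (an illustration, not the analysis code) computes $R(g)=a\Lambda$ and converts a critical coupling into $T_c/\Lambda_{\mathrm{L}} = 1/(N_t\, a\Lambda)$, using $T_c = 1/(a N_t)$ and $\gL = \sqrt{10/\betaL}$; with the $N_t = 12$ critical couplings $\betaLC = 5.55$ ($N_f=6$) and $4.34$ ($N_f=8$) it reproduces the corresponding entries $22.30$ and $34.34$ of Table~\ref{tab:TcL}.

```python
import math

def b0b1(nf, nc=3):
    # Two-loop coefficients for SU(nc) with nf fundamental flavours:
    # C2[G] = nc, C2[F] = (nc^2 - 1)/(2 nc), T[F] = 1/2.
    c2g, c2f, tf = nc, (nc ** 2 - 1) / (2 * nc), 0.5
    b0 = (11 * c2g / 3 - 4 * tf * nf / 3) / (4 * math.pi) ** 2
    b1 = (34 * c2g ** 2 / 3 - (20 * c2g / 3 + 4 * c2f) * tf * nf) / (4 * math.pi) ** 4
    return b0, b1

def a_lambda(g, nf):
    # Two-loop asymptotic scaling R(g) = a(g) * Lambda.
    b0, b1 = b0b1(nf)
    return (b0 * g * g) ** (-b1 / (2 * b0 ** 2)) * math.exp(-1 / (2 * b0 * g * g))

def tc_over_lambda(nf, nt, beta_lc):
    # T_c / Lambda = 1 / (N_t * a * Lambda), with g_L = sqrt(10 / beta_L).
    g = math.sqrt(10 / beta_lc)
    return 1 / (nt * a_lambda(g, nf))

# Asymptotic freedom is lost at nf = 33/2, where b0 changes sign.
print(b0b1(16.5)[0])                            # 0.0
# N_t = 12 critical couplings: beta_Lc = 5.55 (Nf = 6) and 4.34 (Nf = 8).
print(round(tc_over_lambda(6, 12, 5.55), 1))    # ~22.3
print(round(tc_over_lambda(8, 12, 4.34), 1))    # ~34.3
```

The steep exponential makes $T_c/\Lambda$ very sensitive to $\betaLC$ at small $b_0$, which is why the $N_f = 8$ values carry the largest systematic uncertainty.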
\begin{figure} \begin{center} \includegraphics[width=6.6cm,height=6.0cm]{./Tc_L_6_Nt_2_1.eps} \includegraphics[width=6.6cm,height=6.0cm]{./Tc_L_6_Nt_2_1_e.eps} \caption{Scaling at $N_f = 6$ from the $N_t$ dependence of the normalised critical temperature. Left: The bare lattice scheme results. The red symbol $\times$ shows $T_c/\Lambda_{\mathrm{L}}$, and the blue $\Box$ symbols represent $T_c/\LI$ obtained by using the parameters $h = 0.03$ and $\beta_0 = \betaLC(N_f = 6, N_t = 12) = 5.55$ in Eq.~(\protect\ref{eq:RL_imp}). Right: The E-scheme results $T_c/\Lambda_{\mathrm{E}}$.} \label{Fig:TcL_Nf6} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=6.6cm,height=6.0cm]{./Tc_L_8_Nt_2_1.eps} \includegraphics[width=6.6cm,height=6.0cm]{./Tc_L_8_Nt_2_1_e.eps} \caption{Scaling at $N_f = 8$ from the $N_t$ dependence of the normalised critical temperature. Left: The bare lattice scheme results. The red symbol $\times$ shows $T_c/\Lambda_{\mathrm{L}}$, and the blue $\Box$ symbols represent $T_c/\LI$ obtained by using the parameters $h = 1.08$ and $\beta_0 = \betaLC(N_f = 8, N_t = 12) = 4.34$ in Eq.~(\protect\ref{eq:RL_imp}). Right: The E-scheme results. The red symbol $\times$ shows $T_c/\Lambda_{\mathrm{E}}$, and the blue $\Box$ symbols represent $T_c/\EI$ obtained by using the parameters $h = 0.4$ and $\beta_0 = \betaEC(N_f = 8, N_t = 12) \simeq 5.94$ in Eq.~(\protect\ref{eq:RL_imp}).} \label{Fig:TcL_Nf8} \end{center} \end{figure} The asymptotic scaling as described above is valid in the massless limit. In the following, we will use it to analyse results obtained at a finite bare fermion mass $ma = 0.02$, assuming that the shift of the (pseudo-)critical coupling induced by a non-zero mass is smaller than the other errors. This assumption should ultimately be tested in future studies by performing simulations at different masses and extrapolating to the chiral limit.
We now substitute $\betaLEC$ into the temperature definition Eq.~(\ref{eq:Tc}), and insert the scale $\Lambda_{\mathrm{L/E}}$: \begin{align} \frac{1}{N_t}=\frac{T_c}{\Lambda_{\mathrm{L/E}}} \times \Bigl(\Lambda_{\mathrm{L/E}}~a(\betaLEC)\Bigr) \ .\label{eq:T_Lam} \end{align} The left-hand side is a given number, and $\Lambda_{\mathrm{L/E}}~a(\betaLEC)$ on the right-hand side is evaluated by using Eq.~(\ref{eq:RL}) and our critical couplings $\betaLEC$. Thus, Eq.~(\ref{eq:T_Lam}) allows us to convert the critical couplings into the (normalised) critical temperature $T_c/\Lambda_{\mathrm{L/E}}$. When we adopt the improvement Eq.~(\ref{eq:RL_imp}), $T_c/\Lambda_{\mathrm{L/E}}$ is promoted to $T_c/\LEI$. \subsection{Results for $T_c/\Lambda_{\mathrm{L/E}}$ and $T_c/\LEI$ }\label{subsec:TcL_Nt} In this subsection we consider \begin{align} \frac{T_c}{\Lambda_{\mathrm{L/E}}} =\frac{1}{N_t\,R(\gLE)} = \frac{1}{N_t} \bigl(b_0\gLE^2\bigr)^{b_1/(2b_0^2)} \exp\biggl[ \frac{1}{2b_0\gLE^2} \biggr] \ ,\label{eq:TcL} \end{align} where $\gLE$ denotes either the bare lattice coupling or the coupling defined in the E-scheme. In addition, we consider the renormalisation group inspired definition, \begin{align} \frac{T_c}{\LEI} = \frac{1}{N_t\,R^{\mathrm{imp}}(\gLE)} \ ,\label{eq:TcL_imp} \end{align} where $R^{\mathrm{imp}}$ is given by Eq.~(\ref{eq:RL_imp}). The numerical results for $T_c/\Lambda_{\mathrm{L/E}}$ and $T_c/\LEI$ are collected in Table \ref{tab:TcL} and Table \ref{tab:TcLE}. \begin{table*} \caption{ Summary of $T_c/\Lambda_\mathrm{L}$ and $T_c/\LI$ for various $(N_f,N_t)$. The first (second) line at fixed $(N_f,N_t)$ shows the value of $T_c/\Lambda_\mathrm{L}$ ($T_c/\LI$), and the last two columns provide the parameters $h$ and $\beta_0$ appearing in the improved asymptotic scaling Eq.~(\protect\ref{eq:RL_imp}).
}\label{tab:TcL} \begin{center} \begin{tabular}{c|cccc|cc} \hline\hline $N_f\backslash N_t$ & $4$& $6$& $8$& $12$& $h$& $\beta_0$\\ \hline $0$ & $18.11\pm 0.65$& $18.21\pm 0.91$& $16.56\pm 0.71$& $-$& $-$& $-$\\ \quad & $16.29\pm 0.75$& $17.81\pm 1.02$& $16.56\pm 0.78$& $-$& $0.05$& $8.26$\\ \hline $4$& $21.99\pm 1.04$& $19.98\pm 0.95$& $17.12\pm 2.43$& $-$& $-$& $-$\\ \quad & $16.56\pm 1.44$& $18.67\pm 1.38$& $17.12\pm 3.41$& $-$& $0.30$& $6.15$\\ \hline $6$ & $25.41\pm 1.43$& $25.33\pm 1.43$& $22.94\pm 1.29$& $22.30\pm 2.52$& $-$& $-$\\ \quad & $21.66\pm 1.64$& $23.87\pm 1.58$& $22.21\pm 1.40$& $22.30\pm 2.66$& $0.03$& $5.55$\\ \hline $8$ & $-$& $50.05\pm 0.87$& $47.06\pm 3.28$& $34.34\pm 1.91$& $-$& $-$\\ \quad & $-$& $34.32\pm 1.40$& $42.67\pm 6.33$& $34.34\pm 3.90$& $1.08$& $4.34$\\ \hline\hline \end{tabular} \end{center} \end{table*} \subsubsection{$N_f=6$} The left panel of Fig.~\ref{Fig:TcL_Nf6} shows $T_c/\Lambda_{\mathrm{L}}$ as a function of $N_t$ for $N_f = 6$, without (red $\times$ symbols) and with (blue $\Box$ symbols) improvement. The improvement parameter $h=0.03$ is adjusted to minimise the $N_t$ dependence. We have checked that the results are stable against small variations of $h$. Moreover, $\beta_0$ is adjusted to match the results at $\beta_0 = \betaLC (N_t = 12) = 5.55$. Similarly, the right panel of Fig.~\ref{Fig:TcL_Nf6} shows $T_c/\Lambda_{\mathrm{E}}$, which is nearly constant with $N_t$. The overall behaviour suggests that the residual scaling violations at $N_t =12$ are small. \subsubsection{$N_f=8$} In the left panel of Fig.~\ref{Fig:TcL_Nf8}, we show the $N_t$ dependence of the normalised critical temperature in the lattice scheme, without (red $\times$ symbols) and with (blue $\Box$ symbols) improvement. Again, we adjust the improvement parameter $h$ so as to minimise the $N_t$ dependence, and we find that a larger $h \simeq 1.08$ is needed.
This is consistent with the larger scaling violation observed between $N_t=6$ and $N_t=12$. Similar observations can be made in the E-scheme, where the scaling violations, as seen in the $N_t$ dependence, appear to be larger than in the $N_f=6$ case. Introducing the improvement in the E-scheme again requires a largish $h = 0.4$ ($\beta_0 = \betaEC(N_f = 8, N_t = 12) = 5.94$). In summary, the system with $N_f = 8$ shows much larger and less controllable deviations from the two-loop asymptotic scaling than those observed for $N_f = 6$ (and for $N_f=4$ and $N_f=0$, to be discussed next). This might be natural, in view of the largish values of the coupling involved. These observations confirm that in this case the two-loop $\beta$ function cannot offer quantitative guidance to the strongly coupled pre-conformal dynamics. \subsubsection{$N_f=0$ and $N_f=4$} For $N_f=0$, we find about a $10$ percent variation of $T_c/\Lambda_\mathrm{L}$ in the whole range $N_t \in [4,8]$ (in this case $T_c$ represents the critical temperature associated with the deconfinement transition rather than the chiral transition). The $N_t$ dependence can be reduced to less than $10$ percent by using the improved scaling with a small $h=0.05$. Turning to $N_f=4$, we find about a $20$ percent variation of $T_c/\Lambda_\mathrm{L}$ between the $N_t = 4$ and the $N_t = 8$ results. The improved asymptotic scaling Eq.~(\ref{eq:RL_imp}) works well, and the variation is reduced to the $10$ percent level over the whole range $N_t = 4 - 8$. \begin{table*} \caption{ Summary of $T_c/\Lambda_\mathrm{E}$ and $T_c/\LEI$ for $N_f = 6$ and $N_f = 8$. The first (second) line at fixed $(N_f,N_t)$ shows the value of $T_c/\Lambda_\mathrm{E}$ ($T_c/\LEI$), and the last two columns give the parameters $h$ and $\beta_0$ appearing in the improved asymptotic scaling Eq.~(\protect\ref{eq:RL_imp}). For $N_f = 6$, the improvement was not necessary.
}\label{tab:TcLE} \begin{center} \begin{tabular}{c|cccc|cc} \hline\hline $N_f\backslash N_t$ & $4$& $6$& $8$& $12$& $h$& $\beta_0$\\ \hline $6$ & $74.22\pm 5.86$& $75.47\pm 8.17$& $74.56\pm 9.08$& $75.13\pm 10.76$& $-$& $-$\\ \hline $8$ & $-$& $422.54\pm 23.06$& $422.61\pm 38.59$& $316.03\pm 20.06$& $-$& $-$\\ \quad & $-$& $312.16\pm 33.13$& $393.58\pm 60.01$& $316.03\pm 31.52$& $0.40$& $4.34$\\ \hline\hline \end{tabular} \end{center} \end{table*} \subsubsection{Scale separation?} In summary, $T_c/\Lambda$ computed in different schemes ($\Lambda = \Lambda_{\mathrm{L}}$ or $\Lambda_{\mathrm{E}}$) consistently shows an increase with $N_f$, confirming and extending the findings of our earlier work \cite{Miura:2011mc}. As discussed in \cite{Miura:2011mc}, this indicates that $\Lambda_{\mathrm{L/E}}$ vanishes faster than $T_c$ upon approaching the critical number of flavours. Within the various uncertainties discussed here, this can be taken as a qualitative indication of scale separation close to the critical number of flavours. \section{Onset of the conformal window}\label{sec:discuss} In this section, we study the emergence of the conformal phase with increasing $N_f$. In the first Subsection \ref{subsec:MY}, we consider the phase diagram in the space spanned by the bare coupling $\gL$ and the number of flavours $N_f$. We consider the (pseudo-)critical thermal lines which connect the lattice (pseudo-)critical couplings at a fixed $N_t$. We argue that the critical number of flavours $N_f^*$ can be identified with the crossing point of the pseudo-critical thermal lines obtained for various $N_t$'s. In the second Subsection \ref{subsec:gTC}, we introduce the thermal critical coupling $\gTC(N_f)$ as a typical interaction strength at the scale of the critical temperature $T_c(N_f)$.
Since $T_c$ approaches zero as the number of flavours approaches the lower edge of the conformal window $N_f^*$, $\gTC (N_f = N_f^*)$ should be equal to the zero temperature critical coupling $g^c$ (to be specified and estimated in the following). The relation $\gTC(N_f^*) = g^c$ thus implicitly defines the critical number of flavours $N_f^*$. In the final Subsection \ref{subsec:TcM}, we develop an improved version of the approach used in our earlier paper \cite{Miura:2011mc}. We introduce an $N_f$ independent UV reference scale, and compute the critical temperature for each $N_f$ in units of this scale. The infra-red dynamics affecting the critical temperature $T_c$ is then clearly exposed, and $N_f^*$ can be estimated from the vanishing of $T_c$. Before turning to the details, we summarise our results for $N_f^*$: \begin{align} N_f^* \sim \begin{cases} 11.1\pm 1.6\quad & (\text{from the lattice thermal lines})\ ,\\ 12.5\pm 1.6\quad & (\text{from the strength of the coupling at $T_c$})\ ,\\ 10.4\pm 1.2\quad & (\text{from the vanishing of $T_c$})\ . \end{cases} \label{eq:Nf_IRFP} \end{align} \subsection{The critical number of flavours from the lattice thermal lines}\label{subsec:MY} We plot the lattice critical couplings $\gLC(N_f,N_t) = \sqrt{10/\betaLC(N_f,N_t)}$ (Table \ref{Tab:bc}) in the space spanned by the bare coupling $\gL$ and the number of flavours $N_f$, and we consider the lines which connect $\gLC$ at fixed $N_t$: $\gLC(N_f)|_{N_t=\mathrm{fix}}$ (see Fig.~\ref{Fig:MY}). These pseudo-critical thermal lines separate a phase where chiral symmetry (approximately) holds from a phase where chiral symmetry is spontaneously broken \footnote{ It would be of interest to study the interrelation of such lines with the zero temperature first order transition line observed in the conformal window \cite{Deuzeman:2012ee,Deuzeman:2011pa,Deuzeman:2012pv,Deuzeman:2009mh,Schaich:2012fr,Cheng:2011ic,Hasenfratz:2010fi,Hasenfratz:2011xn,Damgaard:1997ut,deForcrand}.}.
The resultant phase diagram may be seen as an extension of the well-known Miransky-Yamawaki phase diagram \cite{Miransky:1997} to finite temperature. We argue here that the critical number of flavours $N_f^*$ can be read off from the crossing point of the thermal lines obtained for different $N_t$. To see this, we consider the well-known step-scaling function: \begin{equation} \Delta\betaL^s = \betaL - {\betaL}^{\prime} \end{equation} where $\betaL$ and $\betaL^{\prime}$ give the same physical scale $\xi$: \begin{align} \xi = a(\betaL)\hat{\xi} = a({\betaL}^{\prime})\hat{\xi}^{\prime} \ .\label{eq:unique_xi} \end{align} Here, $\hat{\xi}$ is the dimensionless lattice correlation length, and $\hat{\xi}/\hat{\xi}^{\prime} = s$. In our case, $\xi = T_c^{-1}$, $\hat{\xi} = N_t, \hat{\xi}^{\prime} = N_t^{\prime}$, and the above relation Eq.~(\ref{eq:unique_xi}) reads \begin{align} T_c^{-1} = N_t\ a(\betaLC) = N_t^{\prime}\ a({\betaLC}^{\prime}) \ .\label{eq:unique_Tc} \end{align} As discussed in Ref.~\cite{Hasenfratz:2011xn}, $\Delta \betaL^s = 0$ holds at the IRFP regardless of the scale factor $s$. In principle, we could then compute the step-scaling function from our numerical results, and try to see where it vanishes. Alternatively, we can look for the intersection of the pseudo-critical thermal lines: obviously, $\Delta \betaL^s = 0$ holds at the intersection point regardless of the value of the scale factor $s$. To demonstrate this procedure, we consider the pseudo-critical lines obtained for $N_t = 6$ and $N_t=12$, as shown in Fig.~\ref{Fig:MY}. Note their positive slope: the lattice critical coupling $\gLC$ is an increasing function of $N_f$. This is a consequence of enhanced fermionic screening for a large number of flavours, as first noted in Ref.~\cite{Kogut:1985pp}. Interestingly, the slope decreases with increasing $N_t$, which allows for a crossing point at a larger $N_f$.
Thus, we estimate the intersection at $(\gLC, N_f^*) = (1.79\pm 0.12, 11.1\pm 1.6)$. We underscore at this stage that the above analysis is merely qualitative: the precise shape of the pseudo-critical thermal lines at fixed $N_t$ is dictated by the beta-function, which is unknown. Hence, a linear extrapolation which uses only two values of $N_t$ serves merely to illustrate a viable strategy, which we plan to pursue further in the future. With this caveat, the agreement of the results found here with the estimates presented below is rather gratifying. \begin{figure} \begin{center} \includegraphics[width=9.5cm]{./MY_fit_select.eps} \caption{(Pseudo-)critical values of the lattice coupling $\gLC=\sqrt{10/\betaLC}$ for theories with $N_f=0,~4,~6,~8$ and for several values of $N_t$ in the Miransky-Yamawaki phase diagram. We have selected $\gLC$ at $N_f = 6$ and $8$, and considered ``constant $N_t$'' lines with $N_t = 6,\ 12$. {If the system is still described by a one-parameter beta-function in this range of couplings, the IRFP could be located at the intersection of the fixed $N_t$ lines -- or equivalently, in the region where the step-scaling function vanishes. To demonstrate the procedure -- as a preliminary example -- we have considered the intersection of the $N_t = 6$ and $N_t = 12$ lines}.} \label{Fig:MY} \end{center} \end{figure} \subsection{The critical number of flavours from the interaction strength at $T_c$} \label{subsec:gTC} In this second subsection, we follow the approach of a recent paper \cite{Liao:2012tw}, and compute the coupling $\gTC (N_f)$ at the scale of the critical temperature for each $N_f$. To obtain the coupling at the scale of the temperature, we evolve the coupling at the scale of the lattice spacing $a$ up to the inverse temperature scale $N_t a$. To this end, we make use of two-loop expressions.
Consider the renormalisation flow: \begin{align} &\bar{R}(\gLC,\gLR) \equiv \frac{M(\gLR)}{a^{-1}(\gLC)} =\exp\biggl[ \int_{\gLC}^{\gLR}\frac{d\gL}{\beta(\gL)} \biggr]\nn\\ &\simeq \Biggl( \frac{(\gLC)^2}{(\gLC)^2b_1 + b_0} \frac{(\gLR)^2b_1 + b_0}{(\gLR)^2} \Biggr)^{-b_1/(2b_0^2)}\nn\\ &\qquad\times \exp\Biggl[ \frac{1}{2b_0} \biggl(\frac{1}{(\gLR)^2} - \frac{1}{(\gLC)^2}\biggr) \Biggr]\ .\label{eq:RV} \end{align} Since we are interested in the thermal critical coupling $\gTC$, we set the reference mass to be the critical temperature itself: $M(\gLR) = 1/(N_t\, a(\gLC))$. Inserting this into Eq.~(\ref{eq:RV}), we see that $\gTC$ is implicitly given by \begin{align} \bar{R}(\gLC,\gTC) = 1/N_t\ , \end{align} where we use the following $\gLC$ from Table \ref{Tab:bc}: \begin{align} \gLC &= \sqrt{10/\betaLC} = \begin{cases} 1.100\pm 0.004\quad (N_f = 0,\ N_t = 8)\\ 1.275\pm 0.040\quad (N_f = 4,\ N_t = 8)\\ 1.342\pm 0.032\quad (N_f = 6,\ N_t = 12)\\ 1.518\pm 0.021\quad (N_f = 8,\ N_t = 12) \end{cases} \ .\label{eq:gc_best} \end{align} Alternative choices corresponding to smaller $N_t$ produce results for $\gTC$ suffering from the modest scaling violations discussed above. The red ($\Box$) symbol in Fig.~\ref{Fig:gTC} shows $\gTC$ as a function of $N_f$. We superimpose a fit obtained by using the ansatz proposed in Ref.~\cite{Liao:2012tw}, \begin{align} N_f(\gTC) = A\cdot \log~ \bigl[B\cdot(\gTC- \gTC|_{N_f=0}) + 1\bigr]\ ,\label{eq:gTC_fit} \end{align} with fit parameters $A$ and $B$; it describes the data well. Since the critical temperature is zero in the conformal phase, the thermal critical coupling $\gTC$ should equal a {\em zero temperature} critical coupling $g^c$ when $N_f = N_f^*$. Of course, $g^c$ is not known exactly and we have to rely on approximations. The first estimate is based on the best available value $\gSD$ obtained by using the two-loop Schwinger-Dyson equation \cite{Appelquist:1998rb}.
In this case, the lower edge of the conformal window $N_f^*$ is defined by the condition $\gTC(N_f^*) = \gSD(N_f^*)$. In Fig.~\ref{Fig:gTC}, $\gSD$ is plotted as a blue solid line. We then estimate the intersection of $\gTC$ and $\gSD$ -- hence the onset of the conformal window as well as the IRFP coupling at $N_f^*$ -- at $(g^*,N_f^*) = (2.79,13.2)\pm (0.13,0.6)$. A second possibility for estimating $N_f^*$ is the following: the conformal phase would emerge when the coupling at the IRFP ($g^{\mathrm{IRFP}}$) is not strong enough to break chiral symmetry, {\em i.e.} $g^{\mathrm{IRFP}} \leq \gTC$. Here, we utilise the four-loop result for $\gIRFP$ \cite{Ryttov:2012nt} as the best available. In Fig.~\ref{Fig:gTC}, we show $\gIRFP$ as magenta $\bigcirc$, with a linear interpolation superimposed. In the plot, we use the results for $\gIRFP$ in the $\overline{\text{MS}}$ scheme. The errors are estimated by considering the scheme dependence \cite{Ryttov:2012nt}, which turns out to be rather mild at four loops. We can then locate the intersection of $\gTC$ and $\gIRFP$, and obtain $(g^*,N_f^*) = (2.51,11.8)\pm (0.15,0.9)$. Ideally, the three lines in Fig.~\ref{Fig:gTC} should meet at a (single) IRFP, if all the quoted results -- including the analytic ones -- were exact. Indeed, the intersections we have estimated are consistent within the largish errors. We then quote the average of the above two estimates as our final result from this analysis, $N_f^*\sim 12.5\pm 1.6$. In addition, we note that $\gTC$ is an increasing function of $N_f$. This indicates that the quark-gluon plasma is more strongly coupled at larger $N_f$, as discussed in Ref.~\cite{Liao:2012tw}.
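The determination of $\gTC$ from the two-loop flow Eq.~(\ref{eq:RV}) amounts to one-dimensional root finding: $\bar{R}(\gLC, g)$ decreases monotonically as $g$ grows beyond $\gLC$, so the condition $\bar{R}(\gLC, \gTC) = 1/N_t$ can be solved by bisection. The Python sketch below is illustrative only (the bracketing interval and iteration count are arbitrary choices, not part of the analysis); it uses the central values of Eq.~(\ref{eq:gc_best}).

```python
import math

def b0b1(nf, nc=3):
    # Two-loop beta-function coefficients for SU(nc), fundamental flavours.
    c2g, c2f, tf = nc, (nc ** 2 - 1) / (2 * nc), 0.5
    b0 = (11 * c2g / 3 - 4 * tf * nf / 3) / (4 * math.pi) ** 2
    b1 = (34 * c2g ** 2 / 3 - (20 * c2g / 3 + 4 * c2f) * tf * nf) / (4 * math.pi) ** 4
    return b0, b1

def flow_ratio(gc, gr, nf):
    # Two-loop renormalisation flow: the scale ratio M(gr) / a^{-1}(gc).
    b0, b1 = b0b1(nf)
    power = ((gc ** 2 / (gc ** 2 * b1 + b0))
             * ((gr ** 2 * b1 + b0) / gr ** 2)) ** (-b1 / (2 * b0 ** 2))
    return power * math.exp((1 / gr ** 2 - 1 / gc ** 2) / (2 * b0))

def thermal_coupling(gc, nt, nf):
    # Solve flow_ratio(gc, g) = 1/nt for g > gc; the flow ratio decreases
    # monotonically with g, so plain bisection suffices.
    lo, hi = gc, 10.0 * gc
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if flow_ratio(gc, mid, nf) > 1.0 / nt:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Central values of the critical couplings; gTc comes out increasing with Nf.
for nf, nt, gc in [(0, 8, 1.100), (4, 8, 1.275), (6, 12, 1.342), (8, 12, 1.518)]:
    print(nf, round(thermal_coupling(gc, nt, nf), 3))
```

Error bars on $\gTC$ would follow by propagating the quoted uncertainties on $\gLC$ through the same root finding.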
\begin{figure} \begin{center} \includegraphics[width=10.5cm]{./gT_2_1.eps} \caption{ The thermal critical couplings (red $\Box$) and the fit to them (dashed red line, with the ansatz Eq.~(\protect\ref{eq:gTC_fit})), together with the values of the zero temperature couplings in the conformal phase from different estimates; see text for details. At the critical number of flavours, the thermal critical coupling should equal the critical coupling associated with the IRFP. The procedure is motivated by a recent study by Shuryak in Ref.~\protect\cite{Liao:2012tw}.} \label{Fig:gTC} \end{center} \end{figure} \subsection{The critical number of flavours and the vanishing critical temperature} \label{subsec:TcM} Finally, we present an estimate of the onset of the conformal window which closely follows our previous approach \cite{Miura:2011mc}, based on the analysis of the $N_f$ dependence of the pseudo-critical temperature in units of a UV dominated scale. In this subsection, we introduce and exploit a new UV reference scale $M$. \begin{figure} \begin{center} \includegraphics[width=10.0cm]{./u0_beta_all_II.eps} \caption{ The $\betaL$ dependence of the tadpole factor $u_0$ at zero temperature ($12^4$ lattice volume). {At each $N_f$, the dashed line represents the fit to the data with the ansatz $u_0 = 1 - A/(1 + B\cdot\betaL^2)$.} We consider a constant $u_0$ ({\em e.g.} $u_0=0.8$ in the figure), and read off the corresponding lattice bare couplings $\betaL$, which are used to define the scale $M$ for each theory with $N_f$ flavours.} \label{Fig:u0_const} \end{center} \end{figure} Before going into the details, we first explain the basic idea, which follows the FRG analysis by Braun and Gies~\cite{BraunGies}. They used the $\tau$ lepton mass $m_{\tau} = 1.777$~GeV as an $N_f$ independent UV reference scale for theories with any number of flavours.
The initial condition of the renormalisation flow has been specified via the strong coupling constant in an $N_f$ independent way: \begin{align} \alpha_s(\mu = m_{\tau}) = 0.322\quad \text{for}\quad {}^{\forall}N_f\ .\label{eq:ini_FRG} \end{align} Starting from the common initial condition Eq.~(\ref{eq:ini_FRG}), the $N_f$ dependence of the critical temperature $T_c(N_f)$ emerges from the $N_f$ dependent renormalisation flow at the chiral phase transition scale $\mu\sim\Lambda_{\mathrm{QCD}}\ll m_{\tau}$. By using an $N_f$ independent UV reference scale much larger than $T_c$, the $N_f$ dependence of $T_c$, as well as its novel non-analytic behaviour in the pre-conformal region, becomes independent of the choice of the reference scale~\cite{BraunGies}. Ideally, we would like to set our UV scale by measuring on the lattice some physical quantity insensitive to IR dynamics -- for instance by fixing the value of $\alpha_s$ in the V-scheme to some appropriate value, as done in the computation of $r_0$ or variations thereof, following Ref.~\cite{Sommer:1993ce} and related applications. These large scale simulations are now starting \cite{future}. Here we design a simplified procedure. In order to determine the reference coupling $\gLR$ which appears in Eq.~(\ref{eq:RV}), we utilise our plaquette results $\langle P \rangle$ (equivalently, the tadpole factor $u_0 = \langle P \rangle^{1/4}$) shown in Fig.~\ref{Fig:u0_const}. Let us consider a constant $u_0$, for instance $u_0 = 0.8$ in the figure, and read off the corresponding bare lattice couplings at each $N_f$. The obtained $\gL(N_f)$ is used as a reference coupling $\gLR$, and the corresponding mass scale $M(\gLR)$ is again computed according to Eq.~(\ref{eq:RV}).
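Reading off $\betaL$ at fixed $u_0$ amounts to inverting the fit ansatz quoted in the caption of Fig.~\ref{Fig:u0_const}, $u_0 = 1 - A/(1 + B\,\betaL^2)$, which can be done in closed form. In the small Python sketch below the parameters $A$ and $B$ are invented for illustration (the fitted values, which differ for each $N_f$, are not quoted in the text).

```python
import math

def beta_at_u0(u0, A, B):
    # Invert u0(betaL) = 1 - A / (1 + B * betaL^2) for betaL at a target u0.
    return math.sqrt((A / (1.0 - u0) - 1.0) / B)

# A and B are hypothetical values chosen only for this example.
A, B = 2.0, 0.12
for u0 in (0.79, 0.80, 0.81):
    print(u0, round(beta_at_u0(u0, A, B), 3))
```

Since $u_0$ increases monotonically with $\betaL$ in this ansatz, a larger target $u_0$ always maps to a larger (more UV) bare coupling, which is what makes the construction usable as a scale setting.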
Some remarks on the aforementioned scale setting are in order: First, we recall the scale setting procedure in the potential scheme, where the measured normalised force $r^2F(r)$ is proportional to the renormalised coupling $\bar{g}$, and the specification $\bar{g}^2\propto r_X^2F(r_X) = {}^\exists X$ sets a scale $r_X^{-1}$. In short, we use our $u_0$ (or equivalently plaquettes) to define $\bar{g}$, and $u_0 = X$ is regarded as the analog of the potential scheme scale setting. Second, at leading order in the perturbative expansion, the renormalised coupling is $N_f$ independent, and proportional to the Wilson loop~\cite{Wong:2005jx} -- a property that we have already exploited in the E-scheme calculation. Hence the use of an $N_f$ independent $u_0$ approximately gives an $N_f$ independent scale setting, similarly to the FRG scale setting method Eq.~(\ref{eq:ini_FRG}). Third, such an $N_f$ independent scale setting can be performed in a sufficiently UV regime $T_c(N_f) \ll M(\gLR)$ by adjusting the value of $u_0$ to satisfy the condition $\gLR \ll \gTC({}^{\forall}N_f)$. \begin{figure} \begin{center} \includegraphics[width=7.5cm,height=5.6cm]{./Tc_M_Nf_2_1_II.eps} \includegraphics[width=7.5cm,height=5.6cm]{./Nfc_II.eps} \caption{ Left:~ The $N_f$ dependence of $T_c/M$ where $M$ is determined to be a UV scale corresponding to $u_0=0.79$ (red box), $0.80$ (blue $\bigcirc$), and $0.81$ (magenta triangle) for each theory with $N_f$ flavours. The dashed lines represent fits to the data assuming the expression Eq.~(\protect\ref{eq:BG_scaling}). Right: The $u_0$ dependence of $N_f^*$. The three data points on the left are determined within the condition $M(\gLR) \lesssim a^{-1}(\gLC)$, while for the others $M(\gLR)$ exceeds the lattice cutoff.
{This more robust procedure confirms our early results, and should ultimately be validated by use of a rigorous lattice scale setting, which is in progress \protect\cite{future}}.} \label{Fig:TcM} \end{center} \end{figure} Note that the lattice cutoff $a^{-1}(\gLC)$ is $N_t\gg 1$ times larger than $T_c$. Then, the scale hierarchy $T_c(N_f) \ll a^{-1}(\gLC(N_f))$ allows us to consider a reference scale much larger than the critical temperature but smaller than the lattice cutoff, $T_c(N_f) \ll M(\gLR) < a^{-1}(\gLC(N_f))$. We find that $u_0\sim 0.8$ meets this requirement. In summary, the use of $\gLR$ given by $u_0\sim 0.8$ is analogous to the FRG scale setting method Eq.~(\ref{eq:ini_FRG}), and is suitable for studying the vanishing of the critical temperature by utilising $T_c/M(\gLR)$. The left panel of Fig.~\ref{Fig:TcM} displays the $N_f$ dependence of $T_c/M(\gLR)$ for $u_0 = 0.79$, $0.80$, and $0.81$. Fitting the data points for $T_c/M(\gLR)$ at $N_f \geq 4$ by using the FRG motivated ansatz, \begin{align} T_c= K|N_f^* - N_f|^{(-2b_0^2/b_1)(N_f^*)} \ ,\label{eq:BG_scaling} \end{align} where $b_{0,1}$ have been defined in Eqs.~(\ref{eq:b0}) and (\ref{eq:b1}), the lower edge of the conformal window is estimated as: $N_f^* = 10.48 \pm 1.01$ ($u_0 = 0.79$), $N_f^* = 10.34 \pm 0.88$ ($u_0 = 0.80$), $N_f^* = 10.23 \pm 0.80$ ($u_0 = 0.81$). The error-bars involve both fit errors and statistical errors of the data. We have further investigated the stability against different choices of $u_0$: As shown in the right panel of Fig. \ref{Fig:TcM}, $N_f^*$ is relatively stable within the range $0.79\leq u_0\leq 0.94$. The scale cannot be pushed further towards the UV because of discretisation errors. On the other hand, a small $u_0 \lesssim 0.7$ leads to $M(\gLR)\sim T_c$ or smaller. In such a case, the reference scale $M(\gLR)$ is affected by infra-red physics and cannot be used to study the vanishing of $T_c$.
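The fit with the ansatz Eq.~(\ref{eq:BG_scaling}) can be sketched numerically as follows. Here the $T_c/M$ points are synthetic, generated from the ansatz itself with assumed parameters, purely to illustrate the extraction of $N_f^*$; the actual analysis fits the measured values at $N_f = 4$, $6$, $8$:

```python
import numpy as np
from scipy.optimize import curve_fit

def exponent(nf_star):
    # Two-loop beta-function coefficients for SU(3) with nf flavours;
    # the common (16*pi^2) normalisations cancel in the ratio b0^2/b1.
    b0 = 11.0 - 2.0 * nf_star / 3.0
    b1 = 102.0 - 38.0 * nf_star / 3.0
    return -2.0 * b0**2 / b1

def tc_ansatz(nf, K, nf_star):
    # T_c/M = K |N_f^* - N_f|^{(-2 b0^2 / b1)(N_f^*)}
    return K * np.abs(nf_star - nf) ** exponent(nf_star)

# Synthetic T_c/M points generated from the ansatz with assumed
# parameters K = 0.02, N_f^* = 10.4 (illustration only).
nf = np.array([4.0, 6.0, 8.0])
tc = tc_ansatz(nf, 0.02, 10.4)

popt, _ = curve_fit(tc_ansatz, nf, tc, p0=[0.01, 10.0])
print(f"K = {popt[0]:.3f}, N_f^* = {popt[1]:.2f}")
```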
Despite these limitations, the window of relative stability is nevertheless reasonably large, and suffices to define an average value for $N_f^*$. We quote the average among the three results obtained for $u_0=(0.79,0.80,0.81)$, i.e. $N_f^* = 10.4 \pm 1.2$. \section{Summary}\label{sec:sum} We have investigated the phase transition (crossover) at finite temperature $T$ in colour SU$(3)$ QCD-like theories with various numbers of flavours $N_f = 0$ (quenched), $4$, $6$, and $8$ by using lattice Monte Carlo simulations. We have used a single bare fermion mass $ma = 0.02$. For all numbers of flavours, we have used the Asqtad action with a one-loop Symanzik and tadpole improved gauge action. The main focus of this paper is to investigate the chiral crossover at finite $T$ as a function of $N_f$, and discuss the possible implication for the (pre-)conformal dynamics at large $N_f$. In Eq.~(\ref{eq:Nf_IRFP}), we provide the summary for the number of flavours $N_f^*$ where the conformal window would emerge. The observables in our simulations were the chiral condensate, the Polyakov loop, and their susceptibilities for various lattice couplings $\betaL$, lattice sizes, and numbers of flavours $N_f$. We have collected the (pseudo) critical lattice coupling $\betaLC$ as a function of ($N_f,N_t$). Table \ref{Tab:bc} provides the summary for $\betaLC$. Our $\betaLC$ are consistent with enhanced fermionic screening at larger $N_f$. The use of several $N_t$ allows us to study the asymptotic scaling of the critical temperature. Further, by utilising $\betaLC$, we have discussed a possible implication for the (pre-)conformal dynamics at large $N_f$. We have estimated $N_f^*$ from the vanishing thermal scaling by extrapolating our critical couplings $\gLC$ to larger $N_f$. This gives $N_f^*\sim 11.1\pm 1.6$.
We have extracted a typical interaction strength $\gTC$ at the scale of the critical temperature $T_c$ by utilising our $\gLC$ and the two-loop beta-function, and compared $\gTC$ to the zero temperature critical couplings ($\gSD$) estimated by the two-loop Schwinger-Dyson equation \cite{Appelquist:1998rb} as well as the IRFP position ($\gIRFP$) of the four-loop beta-function \cite{Ryttov:2012nt}. The coincidence between $\gTC$ and $\gSD$ or $\gIRFP$ indicates the vanishing critical temperature with the emergence of the conformal phase. Based on this reasoning, we have estimated the onset of the conformal window as $N_f^*\sim 12.5\pm 1.6$. We have also confirmed the increase of $\gTC$ at larger $N_f$, which has been discussed in Ref.~\cite{Liao:2012tw} and indicates a more strongly interacting non-Abelian plasma at larger $N_f$. Further, we have examined the $N_f$ dependence of $T_c/M$ by introducing a UV $N_f$ independent reference scale $M$ which is determined by utilising the tadpole factor $u_0$ in a way analogous to the potential scheme scale setting. Then, $T_c/M$ turns out to be a decreasing function of $N_f$, consistently with the FRG observations~\cite{BraunGies}, and the vanishing $T_c/M$ indicates the emergence of the conformal window around $N_f^* \sim 10.4 \pm 1.2$. As a future perspective, we plan to perform more rigorous scale settings, by exploiting state-of-the-art measurements of the lattice potential. It is also mandatory to investigate the chiral limit and the thermodynamic limit at large $N_f$. This, together with a more extended set of flavour numbers, will allow a quantitative analysis of the critical behaviour in the vicinity of the conformal IR fixed point. We expect that our {\em thermodynamic} lattice study for the large $N_f$ non-Abelian gauge theory will play an important role as a new connection between the lattice and the Gauge/Gravity duality \cite{Gursoy:2010fj,Alho:2012mh}.
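A minimal numerical sketch of the two-loop running used above, relating a coupling at the cutoff scale $a^{-1}$ to the coupling at $T_c = (N_t a)^{-1}$, might look as follows; the starting coupling, $N_f$ and $N_t$ values are placeholders, not our measured ones:

```python
import math

def beta_two_loop(g, nf):
    """Two-loop beta function dg/d ln(mu) for SU(3) with nf flavours."""
    b0 = (11.0 - 2.0 * nf / 3.0) / (16.0 * math.pi**2)
    b1 = (102.0 - 38.0 * nf / 3.0) / (16.0 * math.pi**2) ** 2
    return -(b0 * g**3 + b1 * g**5)

def run_coupling(g_uv, nf, log_ratio, steps=10000):
    """Integrate the running downward in scale by log_ratio = ln(mu_uv/mu_ir)
    with simple Euler steps (a sketch; an adaptive integrator would be better)."""
    h = -log_ratio / steps      # negative step: running towards the IR
    g = g_uv
    for _ in range(steps):
        g += h * beta_two_loop(g, nf)
    return g

# e.g. run from the cutoff a^{-1} down to T_c = (N_t a)^{-1} with N_t = 6
g_tc = run_coupling(g_uv=1.5, nf=6.0, log_ratio=math.log(6.0))
print(round(g_tc, 3))
```

As expected for an asymptotically free theory at these flavour numbers, the coupling grows towards the IR.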
\section*{Acknowledgements} We enjoyed discussing these topics with George Fleming, Edward Shuryak, Anna Hasenfratz, Philippe de Forcrand, and Marc Wagner. We also wish to thank Carleton DeTar and Urs Heller for sharing their notes on the normalization of chiral observables in the MILC code. This project is one step in our study of the phase diagram of strong interactions, and we warmly thank Elisabetta Pallante, Albert Deuzeman and Tiago Nunes da Silva for many interesting conversations and a pleasant collaboration. We also thank them for granting access to some of the $u_0$ data at $N_f = 8$ used in this study, and in particular Tiago Nunes da Silva for related communications. This work was in part based on the MILC Collaboration's public lattice gauge theory code~\cite{MILC}. The numerical calculations were carried out on the IBM-SP6 and BG/P at CINECA, Italian-Grid-Infrastructures in Italy, and the Hitachi SR-16000 at YITP, Kyoto University in Japan.
\section{Introduction} In the general framework of machine learning, the learning procedure can be viewed as a system (referred to as \emph{agent} in this paper) that runs an algorithm on a given dataset in order to return a hypothesis for predicting the unseen data. Typically, these algorithms require a large amount of data in order to make predictions with an acceptable level of performance. \citet{chen2016lifelong} termed such a paradigm as \emph{isolated learning}, since it does not consider any other related information or previously learned knowledge. Instead, humans have the ability to continually learn over time by accommodating new knowledge while retaining previously learned experiences. Such a continuous learning procedure has represented a challenging problem for machine learning and for the development of artificial intelligence. This continual learning idea has inspired various machine learning strategies. In \emph{domain adaptation} \citep{pan2010survey}, the goal is to transfer the knowledge from a given task (i.e., source domain) to another task with few labeled or unlabeled observations (i.e., target domain). In the case of \emph{multi-task learning} \citep{zhang2017survey}, the goal is to achieve good performance on different but related tasks simultaneously. Concerning \emph{lifelong learning} \citep{mitchell1993explanation,chen2016lifelong}, the learning procedure can be viewed as a continuous transfer learning procedure over incrementally available tasks from the underlying distribution. \begin{figure}[t] \centering \includegraphics[width=0.5\textwidth]{online_lifelong.pdf} \caption{Two levels of sequences in lifelong online learning: 1) task sequence (blue arrows) and 2) instance sequence in each task (green arrows).
For task $j$ at time $t$, the instances $\mathbf{x}_t$ also arrive sequentially, while in general lifelong learning, each task is processed in a batch manner.} \label{fig:online_lifelong} \end{figure} Lifelong learning has recently received increasing attention due to its implications in autonomous learning agents and robots. However, in most published works on lifelong learning, e.g., \citep{mitchell1993explanation,ruvolo2013ella,pentina2014pac}, the learning procedure for each task is \emph{offline}. That means the algorithm is generally based on the Empirical Risk Minimization (ERM) rule to obtain a good generalization performance for the unseen tasks from the same distribution. Nonetheless, there exist considerable real scenarios in which the learning procedure for each individual task is \emph{interactive}. For example, in the personalized product recommendation system, we suppose that a \emph{knowledge base} from current clients has been constructed, which can predict the preferences of existing clients. For an unknown new client, current lifelong learning algorithms must first collect a batch of data, then train on such a batch with the help of the knowledge base, which cannot effectively handle the interaction demands. From this point of view, we should consider a new paradigm -- \emph{lifelong online learning} in which the agent interacts with users in each task, as shown in Figure~\ref{fig:online_lifelong}. As opposed to the usual theoretical settings of lifelong learning, in lifelong online learning we do not assume a task generation distribution. The tasks are then not necessarily generated in an i.i.d. fashion but are rather arbitrarily or even adversarially generated, which makes this problem more challenging. Moreover, in the batch lifelong learning framework, only the total number of tasks may be unknown to us, as in \citep{ruvolo2013ella}.
In lifelong online learning, by contrast, both the total number of tasks and the number of instances in each task can be unknown to us because both arrive sequentially. Motivated by these practical and theoretical considerations, we develop a new algorithm in which the agent can interactively learn from the observations in each task and also benefit from the contextual information carried by the accumulated knowledge. At each task, the algorithm will predict the arriving data via a combination of two aspects: the predictions provided by the accumulated knowledge and the current classifier constructed by the agent from scratch in the current task. As the interaction times increase, the prediction by the current classifier will gradually play a more dominant role, because the agent has continuously learned from the instances and will be more confident about the current task. This mechanism will be useful when facing long term interactions and for overcoming the possible \emph{negative transfer} from the knowledge base \citep{pan2010survey}. The contribution of this paper can be summarized as follows: \begin{itemize} \item We propose a computationally efficient algorithm for lifelong online classification, which effectively leverages the information from the accumulated knowledge and the classifier which we have constructed for the current task. \item We theoretically provide an upper bound on the cumulative error of the proposed algorithm when facing a new task. We find that under some mild conditions, our approach can still achieve a small cumulative error even under a non-i.i.d. task generation distribution. \item Our empirical results on both synthetic and various real datasets show good performance compared with some baseline approaches. \end{itemize} The rest of the paper is organized as follows. We first introduce the related work in this field. Second, we set up the problem and present the proposed algorithm.
In the third part, we derive the theoretical bound for the algorithm. Finally, the experimental results validate the proposed algorithm. \section{Related works} \paragraph{Batch Lifelong learning} From the practical point of view, the lifelong learning approaches can be categorized into two families: \emph{parameter transfer} and \emph{representation transfer} \citep{chen2016lifelong}. In parameter transfer, the agent uses previously learned model parameters to help learn the model parameters of the current task \citep{evgeniou2004regularized,fei2016learning}. As for representation transfer, each task shares a common representation in a lower dimensional subspace and generally a \emph{sparse dictionary learning} approach is developed, e.g., \citep{ruvolo2013ella,maurer2013sparse,sun2018active}. On the theoretical side, \citet{baxter2000model} proposed a generalization bound based on the VC dimension, while \citet{pentina2014pac} analyzed lifelong learning through PAC-Bayesian theory. As for the non-i.i.d. task generation assumption, \citet{pentina2015lifelong} analyzed lifelong learning with the task environment changing over time, though only in a consistent manner. \citet{pentina2016lifelong} proposed a weighted majority vote algorithm to theoretically prove the sample complexity reduction phenomenon in \emph{lifelong learning}. \paragraph{Online transfer/multi-task/lifelong learning} Online transfer learning and online multi-task learning can also be viewed as sequential learning procedures starting from some \emph{prior} knowledge. In online transfer learning, the knowledge is transferred only once from the source domain to the target domain \citep{zhao2014online,wang2015online}. In online multi-task learning \citep{saha2011online,ruvolo2013active,murugesan2017multi}, the goal is rather to learn the task relationships and to minimize the cumulative errors on all tasks. There are only a few published works on lifelong online learning.
For instance, \citet{alquier2017regret} proposed a similar concept of \emph{online-within-online} lifelong learning based on representation transfer, where the data arrives sequentially in each task. However, this approach is not exactly \emph{online} since the high computational burden and storage requirements (i.e., all observed data must be kept) make it not scalable to real datasets. \citet{denevi2018incremental} proposed an improved algorithm, but applied to linear regression and restricted to i.i.d. task generation. Moreover, the representation transfer based algorithms are still vulnerable when treating the non-i.i.d. task problem. Indeed, a simple adversarial strategy is to generate instances according to two distributions which share the same feature subspace but different labeling distributions, for which a representation-based approach can suffer a large cumulative error. \section{Problem setup} Let us define $\{j\}_{j=1}^\infty $ as an ordered set of tasks. For each task $j$, let us denote by $N_{j}$ the total number of instances $\{(\mathbf{x}_t^{(j)},y_t^{(j)})\}_{t=1}^{N_{j}}$ where $\mathbf{x}_t^{(j)} \in \mathbb{R}^d$, $y_{t}^{(j)}\in \{-1,+1\}$, and by $h_j$ the corresponding classifier. Suppose that we are at task $T+1$ and that the $N_{T+1}$ examples in task $T+1$ will arrive sequentially ($N_{T+1}$ might be unknown to us); we denote by $\mathrm{K}_{T}$ the current \emph{knowledge base} that contains all the historical classifiers up to the last completed task $T$ (i.e., $\mathrm{K}_{T}=\{h_j\}_{j=1}^{T}$). At the beginning of the interactive learning procedure of the current task $T+1$, the classifier $h_{T+1}$ cannot make an accurate decision since it has not observed a sufficient number of instances. However, $\mathrm{K}_{T}$ can provide some contextual information which can help the learner to perform better.
Therefore, the final prediction rule $\mathcal{P}_{T+1}$ of the current task $T+1$ will involve two parts, as shown in the prediction model given in Figure~\ref{fig:online_lifelong}. \section{Proposed algorithm} The proposed lifelong online algorithm proceeds by using the contextual information available in the accumulated knowledge to perform better in the current learning procedure. For the current task $T+1$, two prediction stages are performed. \begin{description}[labelwidth=0in,labelindent=0em,leftmargin=0\labelsep] \item[Prediction from the knowledge base ($O_{1:T}$):] the constructed classifiers $\{h_j\}_{j=1}^{T}$ are evaluated on the new task $T+1$ using: \begin{equation} h_j(\mathbf{x}_t^{(T+1)}) = \langle \mathbf{w}^{(j)}, \mathbf{x}_t^{(T+1)} \rangle, \label{pred_rule} \end{equation} where $\mathbf{w}^{(j)}$ is the parameter vector corresponding to task $j \in [1, T]$. This accumulated knowledge is used through the application of an \emph{expert model} to make predictions over the arriving observations through the pool of the previous models, as shown in~\citep{cesa2006prediction}. \item[Prediction from the current classifier ($O_{T+1}$):] the current model $h_{T+1}$ is interactively updated by the arriving observations $\{\mathbf{x}_t\}_{t=1}^{N_{T+1}}$ and the prediction rule is the same as in Equation \ref{pred_rule}. \end{description} Updates to the current model $h_{T+1}$ are done interactively via \emph{online gradient descent} on the regularized convex loss~\citep{hazan2016introduction}: $$\mathcal{L}(\mathbf{w})= \ell_t(\mathbf{w}) + \frac{\lambda}{2}\|\mathbf{w}\|^2.$$ In the present paper we fix $\ell_t$ to be the \emph{hinge loss}, i.e.: $\ell_t(\mathbf{w}^{(T+1)})~=~\max\{0,1-y_t\langle \mathbf{w}^{(T+1)},\mathbf{x}_t \rangle \},$ where $\mathbf{w}^{(T+1)}$ is the vector of parameters corresponding to task $(T+1)$.
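One such online gradient step on the regularised hinge loss, with the step size $\eta_t = 1/(\lambda t)$ used later in the analysis, can be sketched as follows; the toy data stream below is hypothetical:

```python
import numpy as np

def ogd_hinge_step(w, x, y, t, lam):
    """One online-gradient-descent step on the regularised hinge loss
    L(w) = max(0, 1 - y<w,x>) + (lam/2)||w||^2, with step size eta_t = 1/(lam*t)."""
    eta = 1.0 / (lam * t)
    if y * np.dot(w, x) >= 1.0:                   # margin satisfied: only shrink
        return (1.0 - eta * lam) * w
    return (1.0 - eta * lam) * w + eta * y * x    # shrink and correct

# Hypothetical linearly separable stream, purely for illustration.
rng = np.random.default_rng(0)
w = np.zeros(2)
errors = 0
for t in range(1, 201):
    x = rng.normal(size=2)
    y = 1.0 if x[0] + x[1] > 0 else -1.0
    if np.sign(np.dot(w, x)) != y:
        errors += 1
    w = ogd_hinge_step(w, x, y, t, lam=0.1)
print("mistakes:", errors)
```

With the margin test, the update reduces to the two branches used in Algorithm~\ref{pred_T}.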
We propose to balance between the prediction $O_{T+1}$, using only the current task, and the prediction $O_{1:T}$, using the accumulated knowledge base, through a non-increasing sequence $\{\alpha_t\}_{t=1}^{N_{T+1}}$ with $\forall t,~0\leq\alpha_{t+1}\leq\alpha_{t}\leq 1$, as follows: $$\mathcal{P}_{T+1}(\mathbf{x})=\alpha_t O_{1:T}(\mathbf{x}) + (1-\alpha_t) O_{T+1}(\mathbf{x}).$$ Intuitively, as more and more instances are received, the impact of the knowledge base on the final prediction should decrease gradually. Therefore, the decreasing $\alpha_t$ aims at balancing predictions between the knowledge base and the current classifier $h_{T+1}$. As for the first task, we learn only from the current classifier $h_{1}$ since $\mathrm{K}_0=\emptyset$, i.e., $\mathcal{P}_{1}(\mathbf{x}) = O_1(\mathbf{x}).$ In this case one could see the problem as a classical online learning algorithm, with the predictions being performed by a classifier interactively updated by the arriving observations. For all $T$, to evaluate $\mathcal{P}_{T+1}$, we benefit from the fact that the proposed updating rule for the knowledge base is additive. Algorithms~\ref{pred_T} and~\ref{select} present the approach more formally. Moreover, two distinct approaches (options) are proposed in order to make predictions from the knowledge base for Accumulated Knowledge Lifelong Online (AKLO) learning: \emph{AKLO\,Sum} and \emph{AKLO\,Sample}.
\begin{description}[labelwidth=0in,labelindent=0em,leftmargin=0\labelsep] \item[AKLO\,Sum] Predictions are made by a weighted vote from the models in the knowledge base: \begin{equation} O_{1:T}(\mathbf{x}_t^{(T+1)})= \mathcal{T}_{[-1,1]} \big( \sum_{i=1}^{T} p_t(i) \langle \mathbf{w}^{(i)}, \mathbf{x}_t^{(T+1)} \rangle \big),\label{our_sum} \end{equation} where $\mathcal{T}_{[a, b]}$ is a piecewise truncation function defined as: \begin{equation*} \mathcal{T}_{[a,b]}(x) = \left\{ \begin{array}{ll} a & \quad x\leq a \\ x & \quad a< x<b\\ b & \quad x \geq b \end{array}. \right. \end{equation*} \item[AKLO\,Sample] Predictions are made by sampling a model $i$ from the knowledge base according to the categorical distribution $i \sim \mathrm{Cat}(p_t)$: \begin{equation} O_{1:T}(\mathbf{x}_t^{(T+1)}) = \mathcal{T}_{[-1,1]} \big( \langle \mathbf{w}^{(i)}, \mathbf{x}_t^{(T+1)} \rangle \big), \label{our_sample} \end{equation} \end{description} where the weight vector $p_t$ lies in the $T$-simplex, with $\sum_{i=1}^{T} p_t(i)~=~1$ and $\forall i,~p_t(i)\geq 0$, and is estimated from the models' historical behaviors via $p_t(i) \propto \exp\{-\epsilon_t L_t(i)\}$. $L_t(i) = \sum_{k=1}^t e_k(i)$ represents the cumulative error at task $T+1$ for model $h_i$ ($1\leq i \leq T$) in the interactive learning until time $t$. It is also worth mentioning that since the tasks are not necessarily i.i.d. generated, the performance of the algorithm is measured by the cumulative error on task $T+1$.
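The two options and the blended final prediction can be sketched together as follows; the toy knowledge base of three linear models, the losses and the value of $\alpha_t$ are hypothetical placeholders:

```python
import numpy as np

def clip(z, a=-1.0, b=1.0):
    """The truncation T_[a,b] applied to a confidence score."""
    return float(np.clip(z, a, b))

def expert_weights(L, eps):
    """p_t(i) proportional to exp(-eps * L_t(i)) over the T stored models."""
    z = np.exp(-eps * (L - L.min()))   # shift by the min for numerical stability
    return z / z.sum()

def aklo_sum(W, x, p):
    """AKLO Sum: clipped weighted vote of the knowledge-base models."""
    return clip(np.dot(p, W @ x))

def aklo_sample(W, x, p, rng):
    """AKLO Sample: clip the confidence of one model drawn i ~ Cat(p_t)."""
    i = rng.choice(len(p), p=p)
    return clip(np.dot(W[i], x))

def predict(o_kb, o_cur, alpha):
    """Final prediction: blend knowledge-base and current-classifier confidences."""
    return np.sign(alpha * o_kb + (1.0 - alpha) * o_cur)

# Toy knowledge base of T = 3 hypothetical linear models (illustration only).
W = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, -1.0]])
L = np.array([2.0, 0.5, 6.0])          # cumulative squared errors so far
p = expert_weights(L, eps=1.0)
x = np.array([0.3, 0.4])
y_hat = predict(aklo_sum(W, x, p), o_cur=-0.2, alpha=0.7)
```

The exponential weighting concentrates $p_t$ on the model with the smallest cumulative error, as in the standard expert framework.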
\begin{center} \begin{algorithm}[t] \caption{Prediction $\mathcal{P}_{T+1}$ at task $T+1$} \begin{algorithmic}[1] \REQUIRE $\eta_t >0$, $\forall t \in \{1,\dots,N_{T+1}\}$ \ENSURE $\mathbf{w}^{(T+1)}_{1} = \mathbf{0}$, $p_{1}= \frac{1}{T}\mathbf{1}$ \FOR{$t= 1$~to~$N_{T+1}$} \STATE Observe $\mathbf{x}_t^{(T+1)}$ \STATE Predict using the current classifier $h_{T+1}$ by computing the confidence:\\ $O_{T+1}(\mathbf{x}_t^{(T+1)})=\mathcal{T}_{[-1,1]}\big(\langle \mathbf{w}^{(T+1)}_{t}, \mathbf{x}_t^{(T+1)}\rangle\big)$ \STATE Predict using cumulative knowledge $\mathrm{K}_T$ by applying Equation \ref{our_sum} or \ref{our_sample}, get confidence $O_{1:T}(\mathbf{x}_t^{(T+1)})$ \STATE Final Prediction: $$\hspace{-1em} \hat{y}_t^{(T+1)} =\mathrm{sign}[(1-\alpha_t) O_{T+1}(\mathbf{x}_t^{(T+1)}) + \alpha_t O_{1:T}(\mathbf{x}_t^{(T+1)})]$$ \STATE Receive the real label $y_t^{(T+1)}$ \STATE Update weight $p_t$ by $y_t^{(T+1)}$ with Algorithm \ref{select} \STATE Update $\mathbf{w}^{(T+1)}_{t}$: \IF{$y_t^{(T+1)}\langle \mathbf{w}^{(T+1)}_{t}, \mathbf{x}_t^{(T+1)} \rangle \geq 1$} \STATE $\mathbf{w}^{(T+1)}_{t+1} = (1-\eta_t \lambda)\mathbf{w}^{(T+1)}_{t}$ \ELSE \STATE $\mathbf{w}^{(T+1)}_{t+1} = (1-\eta_t \lambda)\mathbf{w}^{(T+1)}_{t} + \eta_t y_t^{(T+1)}\mathbf{x}_t^{(T+1)} $ \ENDIF \ENDFOR \end{algorithmic} \label{pred_T} \end{algorithm} \end{center} \begin{center} \begin{algorithm}[t] \caption{Updating the knowledge base $\mathrm{K}_{T}$ } \begin{algorithmic}[1] \REQUIRE Knowledge $\mathrm{K}_{T}$, $\epsilon_t >0$, $\forall t\in \{1,\dots,N_{T+1}\}$ \ENSURE $L_{1} = \mathbf{0}$ \FOR{$t= 1$ to $N_{T+1}$} \STATE Receive label $y_t^{(T+1)}$ \STATE Compute error $e_t$ for each model $j\in\{1,\dots,T \}$ in the knowledge base $\mathrm{K}_T$: $$ e_t(h_j) = \Big(\mathcal{T}_{[-1,1]}\big( \langle \mathbf{w}^{(j)},\mathbf{x}^{(T+1)}_{t}\rangle \big) - y_t^{(T+1)}\Big)^2 $$ \STATE Update $L_{t+1} = L_{t} + e_t$ \STATE Update $p_t(i) = \frac{\exp\{-\epsilon_t L_{t}(i)\}}{\sum_{j=1}^{T}\exp\{-\epsilon_t
L_{t}(j)\}}$ \ENDFOR \end{algorithmic} \label{select} \end{algorithm} \end{center} \paragraph{Computational complexity} The computational complexity of the updating rule in Algorithm~\ref{select} is $\mathcal{O}(Td + T)$. Plugging this into Algorithm~\ref{pred_T}, we derive a global complexity of $\mathcal{O}(N_{T+1}(Td+T+d))$, which corresponds to a \emph{linear time} algorithm that is able to efficiently process high-dimensional datasets. \section{Theoretical analysis} In the following, we analyze the error bound of the proposed algorithm at the current task $T+1$. This is consistent with \cite{pentina2016lifelong}, where a batch non-i.i.d. lifelong learning problem is analyzed. Two scenarios are discussed: a known horizon $N_{T+1}$ and a fixed but unknown horizon $N_{T+1}$. \subsection{Known horizon $N_{T+1}$} \paragraph{Theorem 1} Supposing $\|\mathbf{x}\|\leq X$ and $\|\mathbf{w}\|\leq R$, for any $\{\alpha_t\}_{t=1}^{N_{T+1}}$ such that $\alpha_1 = 1$ and $\forall t,~ 0 \leq \alpha_{t+1} \leq \alpha_{t} \leq 1$, we set $\lambda = \frac{X+R}{R}\sqrt{\frac{\log(N_{T+1})+1}{N_{T+1}}} $, $\eta_t = 1/(\lambda t)$, $\epsilon_t = \sqrt{\frac{\log(T)}{8\sum_{t=1}^{N_{T+1}}\alpha_t}}$ and we use Equation~\ref{our_sum} (AKLO\,Sum) as the prediction rule.
If the algorithm predicts $\hat{y}_t$, then the cumulative error $E_{T+1} = \sum_{t=1}^{N_{T+1}}\mathbf{1}\{\hat{y}_t \neq y_t \} $ at task $T+1$ can be bounded by: \begin{equation*} \begin{split} E_{T+1} \leq & \sum_{t=1}^{N_{T+1}} \alpha_t e_t(\mathbf{w}^{\star\star}) + \sum_{t=1}^{N_{T+1}}(1-\alpha_t)\ell_t(\mathbf{w}^{\star})\\ & +4 \sqrt{2\log(T) \sum_{t=1}^{N_{T+1}}\alpha_t}\\ & +\sum_{t=1}^{N_{T+1}}(1-\alpha_t) R(X+R) \sqrt{\frac{\log(N_{T+1})+1}{N_{T+1}}}, \end{split} \end{equation*} where: \begin{align*} \mathbf{w}^{\star} & = \underset{\|\mathbf{w}\|\leq R}{\min} \sum_{t=1}^{N_{T+1}} \ell_t(\mathbf{w}),\\ \mathbf{w}^{\star \star} & = \underset{\mathbf{w}\in\{\mathbf{w}^{(1)},\dots,\mathbf{w}^{(T)}\}}{\min} \sum_{t=1}^{N_{T+1}} e_t(\mathbf{w}). \end{align*} We provide the complete proof in the supplementary material. If we set $\alpha_t \equiv 1$, then the error bound can be simplified as $\sum_{t=1}^{N_{T+1}} e_t(\mathbf{w}^{\star\star}) +4 \sqrt{2 N_{T+1}\log(T)}$, which exactly recovers the error bound of the expert problem \citep{cesa2006prediction}. Furthermore, if we set $\alpha_t \equiv 0$, the error bound can be simplified as $\sum_{t=1}^{N_{T+1}}\ell_t(\mathbf{w}^{\star}) + R(X+R)\sqrt{N_{T+1}(\log(N_{T+1})+1)}$, which recovers the error bound of the Follow The Adaptive Regularized Leader (FTARL) problem \citep{mcmahan2017survey}. We should also point out that the proposed algorithm is not sensitive to the size of the knowledge base because of the $\log(T)$ term in the bound. Besides the direct conclusion from Theorem 1, we can also derive Corollary 1 if the number of instances $N_{T+1}$ is small.
\paragraph{Corollary 1} For $\gamma \in(0,1)$, if $N_{T+1}$ satisfies $N_{T+1} \leq t_0$ with $t_0 = \max\{t|\alpha_t \geq \frac{K}{1+K}\}$, and if we assume that $\zeta = \underset{\mathbf{w}\in\{\mathbf{w}^{(1)},\dots,\mathbf{w}^{(T)}\}}{\min} \sum_{t=1}^{N_{T+1}} e_t(\mathbf{w}) $ is nonzero ($\zeta >0$) and $K \geq \max\{\frac{1+XR}{\gamma\zeta},\frac{R(X+R)}{4\gamma\sqrt{2\log(T)}} \}$, the cumulative error bound $E_{T+1}$ in Theorem 1 can be simplified as: \begin{equation*} E_{T+1} \leq (1+\gamma)\Big(\sum_{t=1}^{N_{T+1}} \alpha_t e_t(\mathbf{w}^{\star\star}) + 4 \sqrt{2\log(T) \sum_{t=1}^{N_{T+1}}\alpha_t} \Big). \end{equation*} The proof of Corollary 1 is also provided in the supplementary material. This corollary reveals an interesting fact: if $N_{T+1}$ is smaller than a predefined threshold, then the best model in the knowledge base $\mathrm{K}_T$ will play a dominant role in the error bound. Therefore, in the current task $T+1$ and even with small interaction times $N_{T+1}$, it is still possible to obtain a good performance although the current classifier $h_{T+1}$ is not well trained. \paragraph{Corollary 2} Supposing all the conditions of Theorem 1 hold and that we now use Equation~\ref{our_sample} (AKLO\,Sample) as the prediction rule. If the algorithm predicts $\hat{y}_t$, then with probability higher than $1-\delta$, for all $\delta \in (0,1)$, the cumulative error $E_{T+1}$ at task $T+1$ can be bounded by: \begin{equation*} \begin{split} E_{T+1} \leq & \sum_{t=1}^{N_{T+1}} \alpha_t e_t(\mathbf{w}^{\star\star}) + \sum_{t=1}^{N_{T+1}}(1-\alpha_t)\ell_t(\mathbf{w}^{\star})\\ & +4 \sqrt{2 \log(T) \sum_{t=1}^{N_{T+1}}\alpha_t} + \sqrt{8\sum_{t=1}^{N_{T+1}}\alpha_t^2 \log(\frac{1}{\delta})}\\ & + \sum_{t=1}^{N_{T+1}}(1-\alpha_t) R(X+R) \sqrt{\frac{\log(N_{T+1})+1}{N_{T+1}}}. \end{split} \end{equation*} The proof of Corollary 2 is based on Theorem 1 and the Hoeffding-Azuma inequality. We also provide the proof in the supplementary material.
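To build intuition for how the $\alpha_t$ schedule trades off the four terms of the Theorem~1 bound, the following sketch evaluates them numerically. The per-step losses and the decay schedule are hypothetical placeholders, used only to illustrate the limiting cases $\alpha_t \equiv 1$ (expert bound) and $\alpha_t \approx 0$ (FTARL bound):

```python
import math

def bound_terms(alphas, T, N, R=1.0, X=1.0, loss_kb=0.3, loss_cur=0.2):
    """Evaluate the Theorem-1 bound for an alpha schedule, assuming constant
    hypothetical per-step losses e_t(w**) = loss_kb and l_t(w*) = loss_cur."""
    s_a = sum(alphas)
    t1 = loss_kb * s_a                                   # knowledge-base loss term
    t2 = loss_cur * (N - s_a)                            # current-task loss term
    t3 = 4.0 * math.sqrt(2.0 * math.log(T) * s_a)        # expert regret term
    t4 = (N - s_a) * R * (X + R) * math.sqrt((math.log(N) + 1.0) / N)  # FTARL regret
    return t1 + t2 + t3 + t4

N, T = 1000, 20
all_kb = bound_terms([1.0] * N, T, N)                  # alpha_t == 1: expert bound
all_cur = bound_terms([1.0] + [0.0] * (N - 1), T, N)   # alpha_t ~= 0: FTARL bound
decay = bound_terms([50.0 / (50.0 + t) for t in range(N)], T, N)
print(all_kb, all_cur, decay)
```

With these assumed losses the decaying schedule interpolates between the two extremes, as the theorem suggests.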
\subsection{Unknown horizon $N_{T+1}$} As we described in the introduction, in lifelong online learning both the number of tasks and the number of examples in each task can be unknown to us. However, the parameters $\eta_t$ and $\epsilon_t$ directly depend on $N_{T+1}$. In the following, we develop some strategies for setting these hyper-parameters without knowledge of $N_{T+1}$. For the current learner $h_{T+1}$, if we directly adjust $\lambda = \frac{X+R}{R}$, we can also derive a bound in this setting, where the whole proof procedure is similar to that of Theorem 1. As for the prediction from Algorithm \ref{select} in the knowledge base, we apply the \emph{doubling trick} for online learning \citep{cesa2006prediction}, which divides the time interval into periods $I_m = [2^m,2^{m+1}-1]$ of length $2^m$, for $m=0,1,\dots$, until the task completes. The modified algorithm is almost the same as Algorithm \ref{select}, the difference being that $L_{t}$ is reset to $\mathbf{0}$ at the beginning of each new interval $I_m$ and that $\epsilon_t = \sqrt{\frac{\log(T)}{8\sum_{j=1}^{2^m}\alpha_j}} $ is used for $t\in I_m$. Based on such a modified algorithm, we can derive the new error bound for the unknown $N_{T+1}$ at task $T+1$. \paragraph{Theorem 2} Supposing $\|\mathbf{x}\|\leq X$ and $\|\mathbf{w}\|\leq R$, for any $\{\alpha_t\}_{t=1}^{N_{T+1}}$ with $\alpha_1 = 1$ and $\forall t,~0 \leq \alpha_{t+1} \leq \alpha_{t} \leq 1$, we set $\lambda = \frac{X+R}{R}$, $\eta_t = 1/(\lambda t)$, $\epsilon_t$ determined by the doubling trick, and we choose Equation~\ref{our_sum} (AKLO\,Sum) as the prediction rule. If the algorithm predicts $\hat{y}_t$, then the cumulative error $E_{T+1}$ at task $T+1$ can be bounded by: \begin{equation*} \begin{split} E_{T+1} \leq & \sum_{t=1}^{N_{T+1}} \alpha_t e_t(\mathbf{w}^{\star\star}) + \sum_{t=1}^{N_{T+1}}(1-\alpha_t)\ell_t(\mathbf{w}^{\star})\\ & + 4 \log(N_{T+1}) \sqrt{2\log(T) \sum_{t=1}^{N_{T+1}}\alpha_t} \\ & + R(X+R) \sum_{t=1}^{N_{T+1}}(1-\alpha_t).
\end{split} \end{equation*} The proof of Theorem 2 is provided in the supplementary material. We should point out that this error bound is worse than the original bound proposed in Theorem 1 since we do not know $N_{T+1}$ in advance. However, in real lifelong learning problems, the value of $N_{T+1}$ is generally not too large for each task, such that $\log(N_{T+1})$ and $\sum_{t=1}^{N_{T+1}}(1-\alpha_t)$ are both small, making the learning procedure effective. \section{Empirical evaluations} We evaluate the empirical performance of the proposed algorithm in the online setting. We test the proposed algorithm on two synthetic and three real datasets. Concerning the real datasets, the data are already separated into different categories (i.e., tasks) and for each task a uniform sampling without replacement is performed on the original data to keep only a portion of the examples. The reason behind this is twofold: 1) in the lifelong learning context the number of examples for each task is generally not large; and 2) keeping a relatively smaller size theoretically emphasizes the effectiveness of the proposed algorithm, as shown in Corollary~1. In the following a further description of the datasets used is given. \subsection{Dataset description} \paragraph{Synthetic data 1 (syn1)} For testing the behavior of the proposed algorithm, particularly under the non-i.i.d. assumption, we create a set of tasks $ \{j\}_1^{50} $ generated by two different distributions, namely $\mathcal{D}_1$ and $\mathcal{D}_2$ (25 tasks for each distribution). Each task $j$ is composed of $N_j = 100$ instances $\{(\mathbf{x}_t^j, y_t^j)\}_{t=1}^{100},$ where $\mathbf{x}_t^j \in \mathbb{R}^2$ and $y_t^j \in \{-1,+1\}$.
For a given task $T$ from the first distribution $\mathcal{D}_1$, $\mathbf{x}_t^T $ is generated from the multivariate normal distribution $\mathbf{x}_t^T \sim \mathcal{N}(\boldsymbol{\mu}_1,\sigma_1\mathbf{I})$ with $\boldsymbol{\mu}_1=[10,10]$ and $\sigma_1 = 1$, and the labeling function is $y_t^T = \mathrm{sign}(\mathbf{a}_{T}^{\top}\mathbf{x}_t)$, where $\mathbf{a}_{T} = [-1+\epsilon_T,1+\epsilon_T]^{\top}$ is the decision function and $\epsilon_T \sim \mathcal{N}(0,10^{-3})$ is the task-specific variation for task $T$. Similarly, for a given task $T^{\prime}$ from the second distribution $\mathcal{D}_2$, the same strategy is applied with different parameter settings: we set $\boldsymbol{\mu}_2 = [20,5]$, $\sigma_2 = 1$, $y_t^{T^{\prime}}= \mathrm{sign}(\mathbf{a}_{T^{\prime}}^{\top}\mathbf{x}_t)$ and the decision boundary $\mathbf{a}_{T^{\prime}} = [-0.25+\epsilon_{T^{\prime}},1+\epsilon_{T^{\prime}}]^{\top}$ with $\epsilon_{T^{\prime}}\sim\mathcal{N}(0,10^{-3})$. The agent has no access to the data structure or to the generation information, and tasks from the two distributions are provided to the agent in arbitrary order. \paragraph{Synthetic data 2 (syn2)} With this dataset, we test the performance of the proposed algorithm in a non-obvious (e.g., adversarial) setting. We adopt settings similar to syn1, with the same number of tasks and the same number of examples per task, and we keep the same generation technique for $\mathcal{D}_1$ and $\mathcal{D}_2$. However, for all the tasks the observations have the same feature generation distribution (i.e., the same marginal distribution) $\mathbf{x}_t \sim \mathcal{N}(\boldsymbol{\mu},\sigma\mathbf{I})$ but totally different labeling functions.
For the first distribution $\mathcal{D}_1$, the labeling function is $y_t = \mathrm{sign}(\mathbf{a}^{\top}\mathbf{x}_t)$, and for the second distribution $\mathcal{D}_2$, it is the adversarial one $y_t = \mathrm{sign}(-\mathbf{a}^{\top}\mathbf{x}_t)$, where $\mathbf{a} = [-1+\epsilon_T,1+\epsilon_T]^{\top}$ and $\epsilon_T$ is the same as in syn1. Such non-obvious generation is common in non-i.i.d. settings. For example, in personalized product recommendation for two groups of clients, a product may have the same features while the two groups have totally different preferences for it. In the experiment, the agent still knows nothing except $(\mathbf{x}_t,y_t)$. \paragraph{Landmine detection\footnote{ \scriptsize\url{http://www.ee.duke.edu/~lcarin/LandmineData.zip}} (Landmine)} This dataset contains 29 binary classification tasks corresponding to 29 geographical regions. For each task, the goal is to detect landmines ($+1$) or clutter ($-1$). Each example contains $9$ features; we add a bias term, resulting in $10$ features during the experiment. We randomly sample $150$ examples for each task. \paragraph{Spam detection\footnote{\scriptsize\url{http://ecmlpkdd2006.org/challenge.html}} (Spam)} We adopt the dataset from the ECML PKDD 2006 Discovery Challenge for the spam detection task of $14$ different users. Each user can be viewed as an individual task, and the goal is to build a personalized mail filtering system. Each task is a binary classification problem with labels spam ($+1$) and non-spam ($-1$). Each example has an extremely high number of features ($\approx 1.5\times 10^5$) representing word occurrence frequencies (bag-of-words model). We randomly select $200$ examples for each task.
\paragraph{Shoes data\footnote{\scriptsize\url{http://vision.cs.utexas.edu/whittlesearch/}} (Shoes)} We use the shoes dataset with attributes from \citet{kovashka2012whittlesearch} and the same setting as in \citep{pentina2015curriculum}. In this experiment, we study the scenario of learning visual attributes that characterize shoes across different shoe models. We have $10$ attributes representing the different tasks in the proposed lifelong online setting (\emph{pointy at the front, open, bright in color, covered with ornaments, shiny, high at the heel, long on the leg, formal, sporty, feminine}) which describe the shoe models. Each attribute (i.e., task) is a binary classification problem with $100$ examples: ($+1$) means the shoe holds the property and ($-1$) means it does not. Each example has $990$-dimensional features extracted from the original image. It is worth mentioning that some tasks are clearly related, such as \emph{high heel} and \emph{shiny}, while others are not, such as \emph{high heel} and \emph{sporty}.
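The two synthetic benchmarks above can be reproduced in a few lines. The sketch below generates syn1-style tasks; it treats the $10^{-3}$ in $\mathcal{N}(0,10^{-3})$ as a variance, which is an assumption, since the text does not state whether it is a variance or a standard deviation:

```python
import numpy as np

def make_syn1_task(dist, rng):
    """Generate one syn1 task of N_j = 100 examples from D1 or D2."""
    if dist == 1:
        mu, a = np.array([10.0, 10.0]), np.array([-1.0, 1.0])   # D1 parameters
    else:
        mu, a = np.array([20.0, 5.0]), np.array([-0.25, 1.0])   # D2 parameters
    eps = rng.normal(0.0, np.sqrt(1e-3))   # task-specific perturbation eps_T
    a_task = a + eps                       # perturbed decision vector
    X = rng.normal(loc=mu, scale=1.0, size=(100, 2))  # sigma_1 = sigma_2 = 1
    y = np.sign(X @ a_task)                # labels in {-1, +1}
    return X, y

# 50 tasks, 25 per distribution, presented in arbitrary (here alternating) order.
rng = np.random.default_rng(0)
tasks = [make_syn1_task(1 + (j % 2), rng) for j in range(50)]
```

For syn2, the same sketch applies with a single shared mean for all tasks and the label sign flipped for tasks drawn from $\mathcal{D}_2$.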
\subsection{Methods and measurements} In our experiments, we compare different baseline approaches to verify that the proposed algorithms can effectively learn from the accumulated knowledge during the lifelong online learning process: \begin{description}[labelwidth=0in,labelindent=0em,leftmargin=0\labelsep] \item[ITOL:] Independent Task Online Learning --- for each task, we train the classifier in an online way without taking into account the accumulated experience from the previous tasks; \item[TOL:] Tasks Online Learning --- a single online classifier is trained on the concatenated data of all the tasks; \item[AKLO\,Sum:] Accumulating Knowledge for Lifelong Online learning using the sum updating rule described in Equation~\ref{our_sum} --- the prediction is the weighted sum of the learned classifiers in the \emph{knowledge base}; \item[AKLO\,Sample:] Accumulating Knowledge for Lifelong Online learning using the sampling updating rule described in Equation~\ref{our_sample} --- the prediction comes from a random sampling w.r.t. the estimated normalized weights; \item[Unif\,Sum:] Uniform sum from the \emph{knowledge base} --- this approach is similar to the proposed algorithm with Equation~\ref{our_sum}, but the prediction is directly the average of the advice from the \emph{knowledge base}, i.e., $p_t(i) = \frac{1}{T}$; \item[Unif\,Sample:] Uniform sampling from the \emph{knowledge base} --- this approach is similar to the proposed algorithm with Equation~\ref{our_sample}, but with a uniform sampling strategy. \end{description} To fix the hyper-parameters $\alpha_t$, $\epsilon_t$ and $\lambda$ of the proposed model, we adopt a simple linear time function for $\alpha_t$: $\alpha_t = 1- \frac{(t-1)}{N_{T+1}}$ and $\epsilon_t = \sqrt{\frac{\log(T)}{8\sum_{j=1}^{N_{T+1}}\alpha_j}}$ as described in Theorem 1. As for $\lambda$, we choose the best value from $[10^{-3},10^{-2},\dots,10^{3}]$ obtained by averaging the performance of ITOL.
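The hyper-parameter schedules just described can be written down directly; a minimal sketch (the natural logarithm is assumed, as in the analysis, with $T>1$ so that $\log(T)>0$):

```python
import math

def alpha_schedule(n_examples):
    """Linear decay alpha_t = 1 - (t-1)/N_{T+1} for t = 1..N_{T+1}."""
    return [1.0 - (t - 1) / n_examples for t in range(1, n_examples + 1)]

def epsilon_from_theorem1(n_tasks, alphas):
    """epsilon_t = sqrt(log(T) / (8 * sum_j alpha_j)), constant over the task."""
    return math.sqrt(math.log(n_tasks) / (8.0 * sum(alphas)))

alphas = alpha_schedule(100)             # alpha_1 = 1, alpha_100 = 0.01
eps = epsilon_from_theorem1(10, alphas)  # with T = 10 previous tasks
```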
For some datasets that already have training and testing sets (such as the Spam and Shoes datasets), we validate the best $\lambda$ on the training set. For the other datasets, for each task we randomly sample a small portion and shuffle the sampled examples. Lastly, the learning rate in Algorithm~\ref{pred_T} is set accordingly as $\eta_t = 1/(\lambda t)$. \subsection{Results and analysis} To measure the performance, we report the average cumulative error (ACE) over the tasks in lifelong learning: $\mathrm{ACE} = \frac{1}{T}\sum_{j=1}^{T}\frac{1}{N_{j}}\sum_{t=1}^{N_{j}}\mathbf{1}\{\hat{y}_t \neq y_t \}$. Table~\ref{error_table} reports results over 10 repetitions obtained by randomly shuffling the task order and the example order within each task. The proposed AKLO\,Sum and AKLO\,Sample methods demonstrate their effectiveness in using the accumulated knowledge compared to the baseline approaches. In particular, the proposed approaches are more effective on datasets where the tasks are not generated i.i.d. (i.e., syn1, syn2). It is worth mentioning that Unif\,Sample is even worse than independent training (ITOL), as \emph{negative transfer} is likely to occur on non-i.i.d. datasets. Comparing Unif\,Sum with AKLO\,Sum reveals that the proposed AKLO\,Sum does not simply leverage information from previous experiences, but is able to make an online selection of related tasks and combine them effectively for prediction, as on the \emph{shoes} dataset: AKLO\,Sum has an error rate about $10\%$ lower than Unif\,Sum, since some tasks are unrelated and Unif\,Sum treats all tasks equivalently. Moreover, on the \emph{landmine} dataset, we find that the performance of the proposed methods is closer to that of, for instance, the Unif\,Sum technique than on the other datasets. A possible reason for this is that \emph{landmine} is generally regarded as an i.i.d.
realization of the task distribution \cite{pentina2014pac}, and Unif\,Sum behaves similarly to a \emph{parameter transfer} approach in this case. Fig.~\ref{result_fig}(a)-(e) show the evolution of the average cumulative error rate over the tasks, following the same trends reported in Table~\ref{error_table}. Fig.~\ref{result_fig}(f) reports the cumulative errors of one realization of the tested algorithms for the last task of the \emph{shoes} dataset; comparisons for the other datasets are provided in the supplementary material. These results support the claim that even in the context of non-i.i.d. tasks and relatively few interactions per task, the proposed algorithms are still able to provide good performance on each task. Table~\ref{time_table} shows the average running time of each algorithm, where the proposed AKLO\,Sample and AKLO\,Sum algorithms demonstrate their time efficiency. We should also point out that for the \emph{Spam} dataset, the execution times are longer than for the other datasets, given the extremely high dimensionality of the observations. Moreover, the Unif\,Sample and AKLO\,Sample approaches take slightly more running time, due to the sampling operation.
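The ACE metric above weights every task equally regardless of its length $N_j$; a minimal sketch of its computation:

```python
import numpy as np

def average_cumulative_error(predictions, labels):
    """ACE = (1/T) * sum_j (1/N_j) * sum_t 1{yhat_t != y_t}.

    `predictions` and `labels` are lists (one entry per task) of equal-length
    label sequences; tasks may have different numbers of examples N_j.
    """
    per_task = [np.mean(np.asarray(p) != np.asarray(y))
                for p, y in zip(predictions, labels)]
    return float(np.mean(per_task))

# Two toy tasks: 1 error out of 4 examples, then 2 errors out of 2.
ace = average_cumulative_error(
    predictions=[[1, -1, 1, 1], [1, 1]],
    labels=[[1, -1, -1, 1], [-1, -1]])
# per-task error rates: 0.25 and 1.0, so ACE = 0.625
```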
\begin{table*}[!tbp] \centering \begin{tabular}{@{}ccccccc@{}} \toprule \textbf{Dataset} & \textbf{ITOL} & \textbf{TOL} & \textbf{Unif\,Sample} & \textbf{Unif\,Sum} & \textbf{AKLO\,Sample} & \textbf{AKLO\,Sum} \\ \midrule \textbf{Syn1} & $41.10 $\tiny$\pm 1.80$ & $22.08 $\tiny$\pm 2.11$ & $40.77 $\tiny$\pm 2.38$ & $35.31$\tiny$\pm 1.87$ & $13.87 $\tiny$\pm 1.57$ & $\mathbf{11.00 } $\tiny$\mathbf{\pm 1.52}$ \\ \textbf{Syn2} & $41.85 $\tiny$\pm 1.35$ & $43.35 $\tiny$\pm 2.95$ & $49.75 $\tiny$\pm 0.92$ & $41.84$\tiny$\pm 1.29$ & $16.06 $\tiny$\pm 2.22$ & $\mathbf{12.91} $\tiny$\mathbf{\pm 2.42}$ \\ \textbf{Landmine} & $18.81 $\tiny$\pm 0.33$ & $14.49 $\tiny$\pm 0.37$ & $21.94 $\tiny$\pm 2.10$ & $11.14$\tiny$\pm 0.91$ & $13.52 $\tiny$\pm 0.76$ & $\mathbf{10.39 } $\tiny$\mathbf{\pm 0.38}$ \\ \textbf{Spam} & $24.53 $\tiny$\pm 1.23$ & $17.47 $\tiny$\pm 0.65$ & $30.02 $\tiny$\pm 2.92$ & $16.68$\tiny$\pm 1.66$ & $18.20 $\tiny$\pm 1.18$ & $\mathbf{14.30 } $\tiny$\mathbf{\pm 1.32}$ \\ \textbf{Shoes} & $31.16 $\tiny$\pm 1.97$ & $29.81 $\tiny$\pm 1.87$ & $43.27 $\tiny$\pm 5.10$ & $31.64$\tiny$\pm 3.99$ & $26.22 $\tiny$\pm 2.96$ & $\mathbf{20.74 } $\tiny$\mathbf{\pm 3.24}$ \\ \bottomrule \end{tabular} \caption{Average cumulative error rate ($\%$) $\pm$ standard deviation ($\%$) over $10$ repetitions.} \label{error_table} \end{table*} \begin{table*}[!tbp] \centering \begin{tabular}{@{}ccccccc@{}} \toprule \textbf{Dataset} & \textbf{ITOL} & \textbf{TOL} & \textbf{Unif\,Sample} & \textbf{Unif\,Sum} & \textbf{AKLO\,Sample} & \textbf{AKLO\,Sum}\\ \midrule \multirow{2}{*}{\textbf{Syn1}}& $0.029$ & $0.023 $ & $0.426 $ & $0.211 $ & $0.425 $ & $0.211 $ \\[-5pt] & \tiny$\pm 1.57\times 10^{-4}$& \tiny$\pm 4.39 \times 10^{-4}$ & \tiny$\pm 3.63\times 10^{-4}$ & \tiny$\pm 7.77\times 10^{-4}$ & \tiny $\pm 6.89\times 10^{-4}$ &\tiny $\pm 5.47\times 10^{-4}$ \\ [4pt] \multirow{2}{*}{\textbf{Syn2}} & $0.028 $ & $0.028 $ & $0.418 $ & $0.205 $ & $0.416 $ & $0.205 $ \\[-5pt] & \tiny$ \pm 1.25\times 
10^{-4}$ & \tiny$ \pm 5.42 \times 10^{-4}$ &\tiny $ \pm 4.93\times 10^{-4}$ & \tiny$ \pm 4.00\times 10^{-4}$ & \tiny $ \pm 3.70\times 10^{-4}$ & \tiny $ \pm 3.47\times 10^{-4}$ \\[4pt] \multirow{2}{*}{\textbf{Landmine}} & $0.022 $ & $0.019 $ & $0.361 $ & $0.179 $ & $0.360 $ & $0.179 $ \\[-5pt] &\tiny $ \pm 7.82\times 10^{-5}$ & \tiny$ \pm 7.92 \times 10^{-5}$ &\tiny $ \pm 2.36\times 10^{-3}$ &\tiny $ \pm 3.30\times 10^{-4}$ &\tiny $ \pm 1.18\times 10^{-3}$ & \tiny$ \pm 6.17\times 10^{-4}$ \\[4pt] \multirow{2}{*}{\textbf{Spam}} & $0.380 $ & $0.398 $ & $2.260 $ & $2.249 $ & $2.467 $ & $2.238 $ \\[-5pt] & \tiny$ \pm 0.012$ &\tiny $ \pm 0.011$ & \tiny$ \pm 0.015$ &\tiny $ \pm 0.019$ & \tiny $ \pm 0.011$ &\tiny $ \pm 0.011$ \\ [4pt] \multirow{2}{*}{\textbf{Shoes}} & $0.006 $ & $0.006 $ & $0.080 $ & $0.043 $ & $0.080 $ & $0.043 $ \\ [-5pt] & \tiny$ \pm 5.73\times 10^{-5}$ &\tiny$ \pm 6.67 \times 10^{-5}$ & \tiny$ \pm 1.44\times 10^{-4}$ &\tiny $ \pm 1.23\times 10^{-4}$ &\tiny $ \pm 1.63\times 10^{-4}$ & \tiny $ \pm 1.02\times 10^{-4}$ \\ \bottomrule \end{tabular} \caption{Average running time (seconds) $\pm$ standard deviation (seconds) over $10$ repetitions.} \label{time_table} \end{table*} \begin{figure*}[!tbp] \centering \subfloat[syn1]{\includegraphics[width=0.33\textwidth]{syn1_evol.pdf}} \subfloat[syn2]{\includegraphics[width=0.33\textwidth]{syn2_evol.pdf}} \subfloat[Landmine]{\includegraphics[width=0.33\textwidth]{landmine_evol.pdf}}\\ \subfloat[Spam]{\includegraphics[width=0.33\textwidth]{spam_evol.pdf}} \subfloat[Shoes]{\includegraphics[width=0.33\textwidth]{shoes_evol.pdf}} \subfloat[Shoes' last task]{\includegraphics[width=0.33\textwidth]{shoesof1.pdf}}\\ \caption{Evolution of performances of tested algorithms on different datasets: (a)-(e) evolution of Average Cumulative Error rate (ACE in $\%$) over the tasks; (f) cumulative errors for the last task in shoes dataset, for one experiment.} \label{result_fig} \end{figure*} \section{Conclusion} We are proposing 
a novel lifelong online learning framework to deal with consecutive online learning tasks by relying on knowledge accumulated from past experience. Specific methods are given to effectively combine the predictions of the current task classifier with those of the models built for previous tasks. A theoretical analysis shows the effectiveness of the proposed method even without assumptions on the data distributions. Experiments on both synthetic and real datasets in different contexts provide an empirical validation of the behaviour of the proposed algorithms.
\section{Introduction}\label{sec:Intro} Since the early work of Sandstr{\"o}m \cite{Sandstrom08} on marine glacial discharges in Norwegian fjords and his pioneering work on ocean circulation driven by differential inputs of fresh/cold and salty/warm water, attempts to link differential heating \cite{Rossby65,Rossby98} and/or salt and fresh water input to a deep meridional circulation capable of driving the world's ocean circulation have led to a series of results predicting that differential buoyancy forcing on a horizontal surface alone cannot explain the deep water cycle that, over a millennial time scale, conveys the world's ocean waters around the globe \cite{defant1961physical}. The amount of circulation, which can be quantified in terms of the dissipation of kinetic energy in natural convection, is bounded by the amount of heat absorbed at the surface \cite{PaparellaY02}. Natural convection driven by a buoyancy gradient along a geopotential iso-surface is a particular flow archetype. Paparella \& Young \cite{PaparellaY02} derived a bound on the amount of dissipation and argued that when viscosity vanishes, the turbulent dissipation in HC also vanishes, unlike Rayleigh-B\'enard convection, where it is expected to reach a finite value. Although this ``anti-turbulence'' theorem has been established, horizontal convection was shown to undergo turbulence \cite{ScottiW11,Gayen14}, but the transport of heat and momentum between the buoyancy sources and sinks is expected to follow scaling exponents that are essentially lower than in Rayleigh-B\'enard convection \cite{ShishkinaGL16} in the Prandtl-Rayleigh landscape (except in the particular case of laminar steady HC flows at high Prandtl numbers \cite{ShishkinaW16}).
This is in part due to the rate at which the energy of the flow is dissipated, which so far has consistently followed laminar-type scaling laws with respect to the magnitude of the forcing \cite{Rossby98,Hughes07,ShishkinaW16}, as long as the buoyancy gradient is unidirectional \cite{griffiths2015turbulent}, and this despite the flow being unstable with respect to two- and three-dimensional perturbations \cite{Gayen14,PassaggiaSW17}. An analogue of Rayleigh-B\'enard theory was recently proposed and applied to horizontal convection with the aim of characterising the regime diagram of laminar and turbulent regimes \cite{ShishkinaGL16}. However, the validity of such a map is currently under investigation \cite{ShishkinaW16,PassaggiaSW18}. In this work, we investigate numerically the low-Prandtl region of this regime diagram. Our aim is to determine which limiting regimes are effectively observed and where turbulent horizontal convection starts to appear. According to the recent work of Shishkina {\it et al.} \cite{ShishkinaGL16}, the transition to the limiting turbulent regime, appearing at sufficiently high Rayleigh numbers, should be observed first at low Prandtl numbers, and we show here that this is the case.\\ Griffiths \& Gayen \cite{griffiths2015turbulent} considered the problem of horizontal convection forced by a spatially periodic forcing. Their results showed that horizontal convection becomes turbulent in the core. Their forcing, localised on a length scale smaller than the depth of the domain and varying in both horizontal directions, produces turbulence throughout the domain, a regime transition to a dominant domain-scale circulation, and a region of logarithmic velocity in the boundary layer.
The same geometry was further analysed by Rosevear {\it et al.} \cite{rosevear2017turbulent}, who observed that the non-dimensional heat flux, denoted by the Nusselt number, had a steeper scaling with respect to the Rayleigh number than the (laminar) Rossby scaling \cite{Rossby65}. Their scaling analysis suggests that for deep enough domains, the flow is fully driven by the core (i.e. the interior) of the flow, located between the boundary layer and the opposite side of the domain. One interesting fact is that despite the existence of a log-layer in their direct numerical simulations, they did not observe log-type corrections in the scaling for the heat transfer. This is relatively surprising since it is now well established in Rayleigh-B\'enard convection that heat transfers are buffered through the log-layer \cite{GrossmannL11,ahlers2012logarithmic,ahlers2014logarithmic}. Recent work by Shishkina \& Wagner \cite{ShishkinaW16} reports a similar exponent in the case of large Prandtl numbers and low Rayleigh numbers. Their study shows that when the boundary layer extends all the way to the bottom of the domain, horizontal convection is highly effective at transporting heat. In addition, they report the Prandtl-number dependence, which follows either their new regime, denoted $I^*_l$ in their study, or the Rossby regime, denoted $I_l$. In our analysis we follow the same nomenclature in an attempt to unify the results of both aforementioned groups. Note that these results are also observed experimentally in the companion paper \cite{Passaggia2019LimitigB}. Very recently, Reiter \& Shishkina \cite{reiter2020classical} analysed classical and symmetrical horizontal convection at Rayleigh numbers up to $10^{12}$ and three Prandtl numbers. They found that for large Rayleigh numbers, near $10^{11}$, and large-aspect-ratio domains, the plume detaches and exhibits low-frequency oscillations while the Nusselt number locally exhibits a steeper scaling.
In this paper, we confirm these results in a different setup and extend the Rayleigh number range by three orders of magnitude, up to $10^{15}$. While the ratio of viscosity to heat diffusivity, taken here as the Prandtl number ($\rm Pr$), is $O(1)$ or larger in atmospheric and oceanic applications, Horizontal Convection (HC) at low Prandtl numbers has interesting geophysical applications, such as in the highly thermally conductive part of the mantle. Although much attention has been devoted to Rayleigh-B\'enard Convection (RBC) for the outer core's dynamics, it is only very recently that HC has attracted the attention of planetary scientists \cite{alboussiere:12}. For example, Takehiro \cite{takehiro:11} suggests that it could be a potential mechanism driving zonal heat and momentum transport near the inner core through the Joule effect due to Earth's magnetic field. At the edge of Earth's inner core, horizontal regions of thermally stable (crystallising) and unstable (melting) stratified layers explain the East-West asymmetry of the inner core \cite{alboussiere:12}. However, only very little is known about the properties of the turbulent horizontal flows generated in these regions, and HC appears to be an interesting candidate for analysing such flows.\\ In this study, we report Direct Numerical Simulation (DNS) results on how the Reynolds number ($\rm{Re}$) and the Nusselt number ($\rm{Nu}$) depend on the Rayleigh number ($\rm{Ra}$) and the Prandtl number ($\rm{Pr}$) in turbulent HC at low to intermediate $\rm{Pr}$, for values characteristic of convection in gases, where $0.1<\rm{Pr}<1$ \cite{Roche02,Taylor13}, and in liquid metals, where $\rm{Pr}=\mathcal{O}(10^{-2})$ (see ref. \cite{takehiro:11}). The results are in agreement with the scaling power laws recently derived by Shishkina {\it et al.} \cite{ShishkinaGL16} based on the original work of Grossmann \& Lohse \cite{GL00} (GL) and with the numerical simulations of Takehiro \cite{takehiro:11}.
Furthermore, we provide evidence that the regime observed by Rosevear {\it et al.} \cite{rosevear2017turbulent} generalises to horizontal convection over a monotonic temperature profile, with a turbulent log-layer that indeed acts as a buffer to heat transfer and slightly decreases the exponent previously reported. This also provides, for the first time, a connection between the GL theory and the plume-driven dynamics derived by Hughes {\it et al.} \cite{Hughes07}. Our simulations cover the laminar Rossby regime $I_l$ (see ref. \cite{Rossby65}), the high-$\rm{Pr}$ laminar regime $I^*_l$ recently reported by Shishkina \& Wagner \cite{ShishkinaW16}, and a new low-$\rm{Pr}$ turbulent limiting regime named $II_l$ (see refs. \cite{shishkina2017scaling,GL00} for theoretical predictions for HC and RBC), reported here for the first time in HC. We also observe the plume-dominated flow regime of Hughes {\it et al.} (see ref. \cite{Hughes07}), which we name $II_u$ according to the SGL theory, and we report the existence of the turbulent interior-dominated regime $IV_u$ at high Rayleigh number, amended with the appropriate log-type corrections. An important contribution of our work is that these results agree with and extend the regime diagram of horizontal convection proposed by Hughes \& Griffiths (see ref. \cite{hughes2008horizontal}) to fit within the theoretical prediction of Shishkina {\it et al.} \cite{ShishkinaGL16}, blending all known regimes of horizontal convection (see the companion paper \cite{Passaggia2019LimitigB}). In the final section, we further explore the relation between the Reynolds number characterising the magnitude of the overturning flow and the turbulent dissipation. This analysis allows for condensing this complex regime-transition diagram into a more traditional laminar, transitional, soft-turbulence, and hard-turbulence diagram, solely dependent on the Reynolds number.
There, we further confirm that a hard-turbulence regime cannot be observed for the Prandtl numbers considered in this work. This claim is justified theoretically by a bound on the minimum Richardson number in the stably stratified layer, which cannot even approach the threshold for instabilities. Similarly to Shishkina \& Wagner \cite{ShishkinaW16}, we exploit the idea that in turbulent thermal convection, the time- and volume-averaged thermal and viscous dissipation rates are determined to leading order by their bulk or Boundary Layer (BL) contributions. For ease of comparison, we follow the same presentation as Shishkina \& Wagner \cite{ShishkinaW16}. \section{Problem description} We consider here the problem of convection in the Boussinesq limit, where the density difference $\Delta\rho=\rho_{max}-\rho_{min}$ across the horizontal surface is a small deviation from the reference density $\rho_{min}$. In this limit, the equations of fluid motion are \begin{subeqnarray} \frac{D \mathbf{u}}{D t} &=& -\nabla p + b\mathbf{e_z} + \left(\frac{\rm{Pr}}{\rm{Ra}}\right)^{1/2}\nabla^2 \mathbf{u},\\ \nabla\cdot\mathbf{u}&=&0, \\ \frac{D b}{D t} &=& \left(\rm{Pr}\,\rm{Ra}\right)^{-1/2}\nabla^2 b, \label{NS} \end{subeqnarray} where $D/Dt$ denotes the material derivative, $\mathbf{u}=(u,v,w)^T$ is the velocity vector, $b=-g(\rho-\rho_{min})/\rho_{min}$ is the buoyancy, $g$ is the acceleration of gravity along the vertical unit vector $\mathbf{e}_z$, and $p$ is the hydrodynamic pressure. The Prandtl number is $\rm{Pr}=\nu/\kappa$, where $\nu$ and $\kappa$ are the viscosity and the diffusivity of the stratifying agent, respectively. The Rayleigh number is defined as ${\rm Ra}=\Delta L^3/(\nu\kappa)$, where $L$ is the horizontal length scale of the domain and $\Delta=-g(\rho_{max}-\rho_{min})/\rho_{min}$.
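The two control parameters are simple combinations of the physical inputs; the sketch below evaluates them for illustrative values (not taken from the simulations), using the magnitude of $\Delta$:

```python
def control_parameters(g, rho_max, rho_min, L, nu, kappa):
    """Return (Ra, Pr) with Ra = Delta L^3 / (nu kappa) and Pr = nu / kappa.

    Delta is taken as the magnitude g (rho_max - rho_min) / rho_min of the
    imposed buoyancy difference defined in the text.
    """
    delta = g * (rho_max - rho_min) / rho_min
    return delta * L**3 / (nu * kappa), nu / kappa

# Illustrative values only: a 1 % density contrast over a 1 m domain.
ra, pr = control_parameters(g=9.81, rho_max=1010.0, rho_min=1000.0,
                            L=1.0, nu=1e-6, kappa=1e-5)
```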
The computational domain is a parallelepiped of aspect ratio $\Gamma=L/H=4$ with dimensions $[L,W,H]=[1,1/8,1/4]$, where $W$ is the width of the computational domain \cite{Scotti08}. A buoyancy profile is imposed at the surface $z=H$, where $H$ is the height of the domain, such that $b(x)|_{z=H}=(1+\tanh(9.5x))/2$, a smoothed version of the sharp profile used in our previous calculations \cite{ScottiW11,PassaggiaSW17}. This proved necessary to keep the numerical strategy stable in the forthcoming numerical simulations. The equations are non-dimensionalized using the length of the box $L$ as the reference length and the buoyancy difference imposed along the non-insulating horizontal boundary, such that \begin{equation} \mathbf{x}=\mathbf{x}^*/L, \;\; b=b^*/\Delta , \;\; \mathbf{u}=\mathbf{u}^*/\sqrt{L\Delta}. \end{equation} In what follows, we define integral quantities linked to the control parameters $\rm Ra$ and $\rm Pr$: the magnitude of the large-scale flow is characterised by the Reynolds number $\rm{Re}={(\overbar{\mathbf{u}\cdot\mathbf{u}})}^{1/2}L/\nu$, where the overbar denotes the spatio-temporal average over the computational domain; similarly, the P\'eclet number is $\rm{Pe}={(\overbar{\mathbf{u}\cdot\mathbf{u}})}^{1/2}L/\kappa$. For the Nusselt number, we use ${\rm Nu}=\overbar{\partial b/\partial z}|_{z=H,b=1}/\Phi_c$, where $\Phi_c=\overbar{\partial b_c/\partial z}|_{z=H,b_c=1}$ is the average gradient in the purely conducting case (i.e. when ${\rm Ra}<10^3$) \cite{siggers2004bounds}, though other definitions have been considered \cite{rocha2019heat}. Its value depends only on the geometry of the domain and on the boundary conditions. For the geometry considered here, its value was found numerically to be $\Phi_c=0.53\, \Delta L$.
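The integral diagnostics above translate directly into array reductions; a minimal sketch (the surface gradient is averaged over the whole conducting boundary here, rather than only over the $b=1$ portion, and the spatio-temporal average is assumed to have been taken already):

```python
import numpy as np

def diagnostics(u, b, L, nu, kappa, dz, phi_c):
    """Compute (Re, Pe, Nu) from gridded fields following the definitions above.

    u : array (..., 3), velocity components on a uniform grid;
    b : buoyancy field whose last axis is z, with z = H at index -1;
    phi_c : surface gradient of the purely conducting solution.
    """
    speed = np.sqrt(np.mean(np.sum(u**2, axis=-1)))  # (overbar{u.u})^{1/2}
    re = speed * L / nu                              # Reynolds number
    pe = speed * L / kappa                           # Peclet number
    dbdz = (b[..., -1] - b[..., -2]) / dz            # one-sided gradient at z = H
    return re, pe, np.mean(dbdz) / phi_c             # Nu normalised by phi_c
```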
\begin{figure*}[t] \centering \scalebox{0.7}{\input{schematics.tex}}\put(-187,4.5){\large$\Gamma$}\put(-343.5,111.5){\large\rotatebox{90}{$\Gamma$}}% \caption{Schematic of the present setup using a snapshot of a simulation performed at $\rm Ra=6.4\times 10^{14}$ and $\rm Pr=0.1$, where the colour shows the value of the buoyancy $b$, the solid line is the counter-clockwise rotating part of the streamfunction, and the red dashed line represents the clockwise rotating part. To the right, the different length scales of the thermal BL $\lambda_b$ and the kinetic BL $\lambda_u$ are shown together with the full depth $H$ and the large overturning scale $h$ used for the theoretical prediction.} \label{schematic} \end{figure*} \section{Previously known regimes of laminar and turbulent HC}\label{sec:preregimes} In this section, we review the existing scaling laws for heat and momentum exchanges in horizontal convection. The parts of the landscape previously explored using direct numerical simulations and experiments are reported together with the parts of the Rayleigh-Prandtl map investigated in this paper and its companion Part II \cite{Passaggia2019LimitigB}. The following subsections introduce these known scaling exponents, tied to the Paparella \& Young \cite{PaparellaY02} inequality, which relates the mean mechanical dissipation of the system to the input of heat through the horizontal boundary and opens the door to modelling choices about which length scales drive convection in the different regimes. \subsection{Rossby's (1965) original idea} H. Rossby \cite{Rossby65} explored horizontal convection induced by differential heating in a parallelepipedic container with aspect ratio $L/H=2.5$ and derived a scaling law relating the Nusselt number to the Rayleigh number. In his original work, Rossby analysed temperature measurements from his experiments to derive a scaling relationship between the thickness of the boundary layer and the streamfunction (cf. pp.
13 in \cite{Rossby65}). Taking the curl of eq. (\ref{NS}a), neglecting the nonlinear terms, and defining the streamfunction $\psi$ such that $\psi_x=-w$ and $\psi_z=u$, the two-dimensional Navier-Stokes equations reduce to \begin{subeqnarray} \left(\frac{\rm{Pr}}{\rm{Ra}}\right)^{1/2}\nabla^4 \psi &=& \partial_x b\\ -\partial_x\psi\partial_zb &=& \left(\rm{Pr}\,\rm{Ra}\right)^{-1/2}\nabla^2 b. \label{stream_func} \end{subeqnarray} Near the conducting wall, the flow is governed by the laminar boundary layer, whose thickness is denoted $\lambda$, and eqs. (\ref{stream_func}a),(\ref{stream_func}b) reduce at leading order to \begin{equation} \left(\frac{\rm{Pr}}{\rm{Ra}}\right)^{1/2} \frac{\psi}{\lambda^4} \sim \frac{\Delta}{L}, \quad \mbox{and} \quad \frac{\psi \Delta}{\lambda L} \sim \left(\rm{Pr}\,\rm{Ra}\right)^{-1/2} \frac{\Delta}{\lambda^2}. \label{Rossby_scale} \end{equation} Combining these equations, Rossby obtained the relation \begin{equation} \lambda \sim L\, {\rm Ra}^{-1/5}. \end{equation} What Rossby had not recognised in his original work was that the thickness of the boundary layer $\lambda$ could be defined using either the thermal boundary-layer thickness, denoted by the subscript $_b$, or the kinetic boundary-layer thickness, denoted by the subscript $_u$, which is one of the important aspects of this study. While this has no implication for the $\rm Ra$-dependence, as shown in the next subsection, the Prandtl-number dependence may not be predicted accurately for different values of the Rayleigh number $\rm Ra$ and varying $\rm Pr$. \subsection{Paparella \& Young (2002) inequality} Horizontal and Rayleigh-B\'enard convection are both closed systems driven by the heat/buoyancy flux imposed through their boundaries. Paparella and Young (PY) \cite{PaparellaY02} first performed a spatio-temporal average of the kinetic-energy equation (i.e.
$\overbar{\mathbf{u}\cdot(\mbox{\ref{NS}a})}$) leading to the equality \begin{equation} \overbar{\epsilon_u} \;=\; \overbar{wb}, \label{epsu1} \end{equation} where $\overbar{\epsilon_u}$ is the mean kinetic-energy-dissipation rate $\overbar{\epsilon_u}\equiv\nu\sum_{i,j}(\partial u_j/\partial x_i)^2$. Another condition follows from the spatio-temporal average of eq. (\ref{NS}c); integrating over $z$ leads to \begin{equation} \overbar{wb}=\kappa\langle \partial b/\partial z\rangle_H, \end{equation} where $\langle\rangle_{H}$ denotes the surface and time average at $z=H$. This equality can be recast into an inequality for the buoyancy between the top and the bottom of the domain, which reads \begin{equation} \overbar{wb}\leq\kappa(\langle b\rangle_{z=H}-\langle b\rangle_{z=0})/H = B(\Gamma/2)\kappa\Delta/L, \label{PYbound} \end{equation} where $0<B<1$ is a constant which depends on the domain geometry and boundary conditions \cite{ShishkinaW16}. The PY inequality thus reads \begin{equation} \overbar{\epsilon_u} \;\leq\; B(\Gamma/2)\nu^{3}L^{-4}\rm{Ra}\rm{Pr}^{-2}, \label{epsu2} \end{equation} which, once combined with the original idea of Rossby, opens possibilities for relating the dissipation in the boundary layer or the core to the heat transfer coefficient near the horizontal boundary.\\ One interesting consequence of PY's inequality is that as $\rm Ra$ increases while $\rm Pr$ and $\Gamma$ are kept constant, the flow becomes progressively confined under the conducting boundary. This effect is also known as the anti-turbulence theorem and implies that beyond a certain point, the overturning depth scale becomes $$h<H,$$ and a zone of stratified fluid nearly at rest forms on the insulating boundary adjacent to the conducting horizontal boundary.
This follows Sandstr\"om's\cite{Sandstrom16} inference that at large $\rm Ra$ or for high $\rm Pr$, the flow becomes confined to a progressively thinner surface layer and the core becomes a stagnant pool of stratified water\cite{defant1961physical}. Although such regimes were only observed in direct numerical simulations of laminar HC \cite{ilicak2012simulations} at high Pr and theoretically by \cite{chiu2008very} for the same regimes, experiments by Wang \& Huang\cite{wang2005experimental} show the onset of such behaviour at intermediate $\rm Pr$ and relatively low $\rm Ra$.\\ \subsection{Rossby's laminar regime $I_l$} Rossby's laminar regime can be recast to obtain a more accurate prediction for the Prandtl-number dependence. The idea is to start with the steady thermal boundary-layer equation, which is obtained from eq. (\ref{NS}c), and write an advection-diffusion balance in the boundary layer \begin{equation} u b_x + w b_z = \kappa b_{zz}. \end{equation} The dominant terms in this expression balance as $U\Delta/L \sim \kappa \Delta/\lambda_b^2$, where $\lambda_b$ is the thickness of the thermal BL, which scales as $\lambda_b/L \sim \rm{Nu}^{-1}$. Combining the above leads to the well-known thermal-laminar BL scaling \begin{equation} \rm Nu\sim Re^{1/2}Pr^{1/2}, \label{Nu} \end{equation} and provides a relation tying $\rm{Nu}$, $\rm{Re}$ and $\rm{Pr}$. In laminar regimes, the buoyancy variance is essentially concentrated in the boundary layer and reads \begin{equation} \overbar{\epsilon_{b, \mathrm{BL}}} \sim \kappa \frac{\Delta^{2}}{\lambda_{b}^{2}} \frac{\lambda_{b}}{h}=\kappa \frac{\Delta^{2}}{h^{2}} \frac{\lambda_{u}}{\lambda_{b}} \operatorname{Re}^{1 / 2}, \label{epsbbl_lam} \end{equation} where the dependence on the aspect ratio $\Gamma$ was omitted.
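The exponent algebra in eq. (\ref{Nu}) can be checked mechanically with exact rational arithmetic; the short Python sketch below (an illustration added here, not part of the original analysis) encodes the balance $U\Delta/L \sim \kappa\Delta/\lambda_b^2$ and recovers ${\rm Nu}\sim{\rm Re}^{1/2}{\rm Pr}^{1/2}$:

```python
from fractions import Fraction as F

# Advection-diffusion balance in the thermal BL: U*Delta/L ~ kappa*Delta/lambda_b^2
# =>  (lambda_b/L)^2 ~ kappa/(U*L) = Pe^{-1}, so Nu ~ L/lambda_b ~ Pe^{1/2}.
# Represent a scaling quantity by its exponents {symbol: power}; Pe = Re*Pr.
pe = {"Re": F(1), "Pr": F(1)}
nu_exp = {k: v / 2 for k, v in pe.items()}       # Nu ~ Pe^{1/2}
assert nu_exp == {"Re": F(1, 2), "Pr": F(1, 2)}  # i.e. Nu ~ Re^{1/2} Pr^{1/2}
```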
Noting that the thickness of the laminar boundary layer scales as $\lambda_u/H \sim Re^{-1/2}$, the scaling for the mean dissipation in the particular case of a laminar BL\cite{Landau87} is \begin{equation} \overbar{\epsilon_{u,BL}}\sim\nu\frac{U^2}{\lambda^2_u}\frac{\lambda_u}{h}=\nu^3h^{-4}{\rm Re}^{5/2}. \label{epsu_lam} \end{equation} Combining (\ref{Nu}), (\ref{epsu2}) and (\ref{epsu_lam}), and assuming that $h=H$, one recovers the laminar scaling \cite{Rossby65,Rossby98,Gayen14,ShishkinaGL16} \begin{subeqnarray} \rm{Re} &\sim& \rm{Ra}^{2/5}\rm{Pr}^{-4/5}, \\ \rm{Nu} &\sim& \rm{Ra}^{1/5}\rm{Pr}^{1/10}. \label{lam_scal} \end{subeqnarray} By analogy with the notation in the GL theory for RBC \cite{GL00,ShishkinaGL16}, this scaling regime is denoted as $I_l$, where the subscript $l$ stands for low-$\rm{Pr}$ fluids. \subsection{Hughes {\it et al.}'s (2007) laminar boundary-layer/turbulent plume regime $II_u$} Increasing $\rm{Ra}$ at intermediate $\rm{Pr}$, the kinetic boundary layer becomes progressively thinner while the thermal boundary layer remains relatively thick in comparison. In this case, it is the thermal boundary layer that drives the dynamics and leads to a turbulent plume, detached from the bottom [see Fig. \ref{3D}(b)]. This particular case was theorised by Hughes {\it et al.}\cite{Hughes07} with a plume model inside a filling box. Here we recast their model according to the SGL theory (i.e. see the plume model definition eq.
(2.15)-(2.20) in ref.\cite{Hughes07}), and the dissipation in the boundary layer is weighted by the ratio between the thermal and the kinetic boundary-layer thicknesses $\lambda_b/\lambda_u$, which reads \begin{equation} \overbar{\epsilon_{u}}_{HGM} \sim\nu\frac{U^2}{\lambda^2_u}\frac{\lambda_u}{h}\frac{\lambda_b}{\lambda_u}= \nu^{3}h^{-4}\rm{Re}^{5/2}\rm{Pr}^{-1/2}, \label{epsuh} \end{equation} where the dissipation now scales with the thickness of the thermal BL rather than the kinetic BL, where $h=H$ is enforced, and is given by \begin{equation} \overbar{\epsilon_{u}}_{HGM}\sim\nu U^2/(\lambda_b L). \end{equation} Now combining (\ref{Nu}), (\ref{epsu2}) and (\ref{epsuh}), the heat and momentum exchanges are given by \begin{subeqnarray} \rm{Re}&\sim&\rm{Ra}^{2/5}\rm{Pr}^{-3/5}, \\ \rm{Nu}&\sim&\rm{Ra}^{1/5}\rm{Pr}^{1/5}, \label{NuRe1/5h} \end{subeqnarray} which is denoted as $II_u$ and was first observed in the experiments of Mullarney {\it et al.}\cite{mullarney2004convection} and Wang \& Huang\cite{wang2005experimental}, and later confirmed in the direct numerical simulations of Gayen {\it et al.}\cite{Gayen14}.\\ \subsection{Shishkina \& Wagner (2016) laminar regime $I^*_l$} At low $\rm{Ra}$ and for large $\rm{Pr}$ and/or large aspect ratio $\Gamma$, the BL thickness $\lambda_u$ saturates and reaches the depth of the domain, which gives $\lambda_u\approx h=H$, and eq. (\ref{epsu1}) becomes equivalent to the dissipation in a pressure-driven laminar channel flow, which reads \begin{equation} \overbar{\epsilon_{u}}_{SW} \sim\nu\frac{U^2}{H^2}= \nu^{3}H^{-4}\rm{Re}^{2}.
\label{epsu_Shishkina} \end{equation} Combining (\ref{Nu}), (\ref{epsu2}) and (\ref{epsu_Shishkina}), one obtains the laminar scaling derived in Shishkina \& Wagner \cite{ShishkinaW16} \begin{subeqnarray} \rm{Re}&\sim&\rm{Ra}^{1/2}\rm{Pr}^{-1}, \\ \rm{Nu}&\sim&\rm{Ra}^{1/4}\rm{Pr}^{0}, \label{NuRe1/4} \end{subeqnarray} denoted as $I^*_l$ and first observed by Beardsley \& Festa \cite{beardsley1972numerical} in their numerical simulations. Note that Rossby \cite{Rossby98} also observed a steeper scaling than $\rm Nu\sim {\rm Ra}^{1/5}$ in his numerical simulations for low $\rm Ra$ (see page 248 in \cite{Rossby98}). More recent numerical simulations performed by Ramme \& Hansen \cite{ramme2019transition} at infinite Prandtl number suggest a similar scaling. However, it should be noted that in both studies this scaling is barely observed across a single decade of Rayleigh numbers. It is important to stress that in this regime, the circulation is assumed to span the entire box; this particular aspect will be further explored in the present and companion paper \cite{Passaggia2019LimitigB}.\\ \subsection{Turbulent regimes and associated bounds} Most of the existing work on HC highlighted laminar-type flows, dominated by the behaviour of the boundary layer, with the exception of an analogue of HC studied by Griffiths \& Gayen (2015)\cite{griffiths2015turbulent} and Rosevear {\it et al.}\cite{rosevear2017turbulent}. They considered a spatially periodic forcing at the conducting boundary with a short wavelength compared to the depth of the domain. In this particular setup, Rosevear {\it et al.} were able to show that $Nu\sim Ra^{1/4}$ with a turbulent core driven by inertia. They were also the first to report turbulent boundary layers, with a log-type layer developing along the conducting wall.
This feature is important since it is a necessary condition for turbulent convection scaling to arise in Rayleigh-B\'enard convection\cite{GL00,GrossmannL11}.\\ Siggers {\it et al.} (2004)\cite{siggers2004bounds} theorised a fully turbulent regime based on the assumption that the boundary-layer thickness scales as ${\rm Ra}^{-1/3}$ and thus the Nusselt number as $\rm Nu\sim Ra^{1/3}$. Recent work by Rocha {\it et al.} refined their original results and showed that short oscillations with large amplitudes of the forcing boundary were favourable to observing such a regime when $\rm Ra\rightarrow \infty$. However, such a regime is unlikely to exist for a forcing such as a step-like boundary condition or a linear profile, where the stably stratified layer remains undisturbed over a sufficiently long span, as suggested by Passaggia {\it et al.}\cite{passaggia2016global}, who recast the PY inequality with eq. (\ref{PYbound}) to obtain a bound on the Richardson number in the stable layer underneath the conducting boundary, that is \begin{equation} {\rm Ri} \equiv \left(\frac{N}{\partial_z U}\right)^2 \sim \frac{\Delta}{\lambda_b}\left(\frac{\lambda_u}{U}\right)^2 \sim \frac{\Delta L^3}{\nu^2}\,\frac{H}{\lambda_b}\,\frac{\nu^2}{U^2L^2}\,\frac{\lambda_u^2}{L^2}\,\frac{L}{H} \sim \Gamma\,{\rm Ra\,Pr^{-1}\,Nu\,Re^{-2}}\frac{\lambda_u^2}{L^2}. \end{equation} For a fully turbulent regime, the boundary layers must scale as\cite{siggers2004bounds} $\rm Ra^{-1/3}$, thus $\rm Nu \sim Ra^{1/3}$ (see also Rocha {\it et al.} \cite{rocha2020improved} for an improved estimate). Thus \begin{equation} {\rm Ri}\sim \Gamma\,\rm Pr^{-1}\,Ra^{2/3}\,Re^{-2}. \end{equation} In order for $\rm Ri$ to become asymptotically small (at fixed $\rm Pr$), we must have $\rm Re\sim Ra^\alpha$, with $\alpha>1/3$. However, under these assumptions, the normalised dissipation $\overbar{\epsilon_u} L/U^3<\rm Ra^{1-3\alpha}$ (see eq.(\ref{epsu2})) would decay asymptotically, violating the hypothesis of fully turbulent flow.
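The argument above can be made explicit by tracking the ${\rm Ra}$ exponents at fixed $\rm Pr$: with ${\rm Re}\sim{\rm Ra}^{\alpha}$, one has ${\rm Ri}\sim{\rm Ra}^{2/3-2\alpha}$ while $\overbar{\epsilon_u}L/U^3\lesssim{\rm Ra}^{1-3\alpha}$, and no choice of $\alpha$ lets the first decay without the second decaying as well. A minimal Python sketch of this bookkeeping (an illustration, not part of the original derivation):

```python
from fractions import Fraction as F

def ri_exponent(alpha):   # Ri ~ Ra^{2/3 - 2*alpha} at fixed Pr
    return F(2, 3) - 2 * alpha

def eps_exponent(alpha):  # normalised dissipation eps_u*L/U^3 < Ra^{1 - 3*alpha}
    return 1 - 3 * alpha

# Ri -> 0 would require alpha > 1/3 ...
assert ri_exponent(F(1, 3)) == 0
# ... but any such alpha makes the normalised dissipation decay as well,
# contradicting the fully turbulent hypothesis:
for alpha in (F(34, 100), F(2, 5), F(1, 2)):
    assert ri_exponent(alpha) < 0 and eps_exponent(alpha) < 0
```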
This means that the stability of the shear layer is not controlled by the Rayleigh number. Note that such a behaviour was observed, for instance, by Whitehead \& Wang \cite{whitehead2008laboratory} in their laboratory experiments where a forcing was added. Consequently, the Richardson number can only increase with increasing $\rm{Ra}$. We finally underline that this condition is only valid for non-rotating horizontal convection and would not hold in the case where rotation is taken into account \cite{barkan2013rotating,vreugdenhil2017geostrophic}. \section{Numerical calculations}\label{sec:numerics} \begin{figure*}[t!] \centering \hspace{-0mm} \hspace{-2mm}\scalebox{0.2}{\Huge\input{snapshot_flow_field.tex}}% \put(-530,139){$1/4$}\put(-500,150){$1/16$}\put(-526,65){$0$}\put(-330,-10){$1$} \put(-536,104){$z$}\put(-455,39){$x$}\put(-307,5){$y$} \put(-279,107){$z$}\put(-190,37){$x$}\put(-44,4){$y$} \put(-525,0){(a)}\put(-255,0){(b)} \put(-290,70){$b$}\put(-25,70){$b$}\\ (c)\hspace{-2mm}\scalebox{0.7}{\Large\input{spectra_2}}% \vspace{-3mm} \caption{Snapshot of the iso-contours of the $\Lambda_2=-\rm{Pr}^{-1}/8$ criterion (blue) and buoyancy $b$ (background) at (a) $\rm Ra=6.4\times 10^{13}$, $\rm Pr=0.01$ showing the new regime ($II_l$) and (b) $\rm{Ra}=6.4\times 10^{13}$, $Pr=1$ corresponding to Hughes' regime \cite{Hughes07} ($II_u$). (c) Turbulent spectra $E(\mathcal{K}_x)$ at $\rm Ra=6.4\times 10^{13}$ and $\rm{Pr}=0.05$ (red), and $E(\mathcal{K}_x)/2$ at $\rm{Pr}=1$ (blue) for the same $\rm{Ra}$.} \label{3D} \end{figure*} The Navier-Stokes equations are solved numerically on a Cartesian grid, stretched near the upper boundary, using a standard second-order in space and time projection method.
Since we are interested in turbulence-dominated regimes where the scaling is determined neither by the buoyancy-forcing profile, nor the aspect ratio \cite{ShishkinaGL16,Sheard2011}, nor the type of boundary condition \cite{Rossby98,chiu2008very,beardsley1972numerical}, free-slip boundary conditions are used for the velocity on the upper, lower and end walls at $x=\pm L/2$ (see ref.\cite{ScottiW11}), while the domain is assumed periodic in the transverse direction $y$. This is in contrast with Shishkina \& Wagner\cite{ShishkinaW16}, who used no-slip boundary conditions and end walls in the transverse direction. Our approach avoids the numerical difficulties involved in resolving the no-slip BL and a finite domain in the transverse direction, and allows us to efficiently explore Rayleigh numbers as high as $1.92\times10^{15}$ for a wide range of Prandtl numbers. Snapshots of the flow are shown in figure \ref{3D}(a-b) by means of the $\Lambda_2$ criterion, defined by the second-largest eigenvalue of the matrix $S^2+\Omega^2$, where $S$ and $\Omega$ are the symmetric and anti-symmetric parts of the velocity gradient tensor, respectively. \begin{figure*}[t!] \hspace{-2mm}\scalebox{1.25}{\input{Ra_Pr_All}}% \caption{Sketch of the phase diagram in the $(\rm{Ra}, \rm{Pr})$ plane for the laminar regimes $I_l$ and $I^*_l$ together with the turbulent scaling $II_{l}$ and the conducted DNS. The yellow stripes show the transitions from $I^*_l$ to $I_{l}$ and from $I_l$ to $II_{l}$, with a slope $\rm Pr\approx Ra^{1/2}$; the transition from $II_l$ to $II_u$ follows a slope $\rm Pr\approx Ra^{-1}$. Symbols reflect the computational meshes in $(x,y,z)$ used in the DNS: $512\times256\times256$ (circles), $1024\times384\times128$ (squares), and $2048\times256\times256$ (triangles). The values ($\alpha,\beta$) in each region provide the exponents $\rm{Nu}\sim \rm{Ra}^{\alpha}\rm{Pr}^{\beta}$ measured in the DNS and derived in the theory.
The transition between regimes is given as follows: from $I^*_l$ to $I_l$ by $\rm Pr \sim 3\times10^{-5}\, Ra^{1/2}$, from $I_l$ to $II_l$ by $\rm Pr \sim 3\times10^{-6}\, Ra^{1/2}$, from $II_u$ to $IV_u$ by $\rm Pr \sim 3\times10^{-8}\, Ra^{1/2}$, and from $II_l$ to $II_u$ by $\rm Pr \sim 3\times10^{11}\, Ra^{-1}$. The black dashed lines show when the boundary layer becomes turbulent and follow from $II_u$ to $IV_u$ by $\rm Pr \sim 3\times10^{-8}\, Ra^{1/2}$ and from $II_l$ to $IV_u$ by $\rm Ra \approx 3\times10^{12}$.} \label{Ra_Pr} \end{figure*} The turbulent scalings for momentum and buoyancy transport are computed using Direct Numerical Simulations (DNS) in the range $\rm Ra=[6.4\times10^5,1.92\times10^{15}]$ and $0.002\leq \rm Pr\leq 2$. For $\rm Ra<10^8$ and $0.5\leq \rm Pr\leq 2$, the HC flows are steady \cite{ShishkinaW16,PassaggiaSW17}. With increasing $\rm{Ra}$ and/or decreasing values of $\rm{Pr}$, HC flows become increasingly unsteady, leading to turbulence [as shown in figure \ref{3D}(c)], and the mesh size is decreased in order to resolve the Kolmogorov length scale (see ref.\cite{ScottiW11} for details about turbulent HC). In the case of homogeneous turbulence, the Kolmogorov length scale is given by $\eta\approx(\nu^3/\epsilon_u)^{1/4}$. Using the PY inequality, an approximation yields $\eta/L\approx Pr^{1/2}/(\Gamma B Ra)^{1/4} \gtrsim 10^{-4}$ for the largest value of $\rm{Ra}$ and smallest $\rm{Pr}$ considered in this work. These estimates are valid for homogeneous turbulence. In the present case, a substantial amount of the dissipation is located in the boundary layer, where the mesh is refined up to $\Delta z/L = 10^{-4}$ in the vertical direction and $\Delta y/L = 10^{-3}$ in the horizontal direction, which should ensure that most scales are captured throughout our DNS. Mesh sizes are reported in Fig.\ref{Ra_Pr}(b) in the $(\rm{Ra},\rm{Pr})$ plane along with the different regimes reported later in this manuscript.
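The resolution estimate $\eta/L\approx{\rm Pr}^{1/2}/(\Gamma B\,{\rm Ra})^{1/4}$ can be tabulated directly; in the sketch below the values of $B$ and $\Gamma$ are illustrative placeholders, and the assertions check only the $-1/4$ and $+1/2$ power dependencies implied by the formula:

```python
def eta_over_L(Ra, Pr, B=0.5, Gamma=16.0):
    """Kolmogorov-scale estimate eta/L ~ Pr^{1/2}/(Gamma*B*Ra)^{1/4} from the PY bound."""
    return Pr**0.5 / (Gamma * B * Ra)**0.25

# eta/L decreases as Ra^{-1/4} and increases as Pr^{1/2}:
assert abs(eta_over_L(16 * 6.4e13, 0.05) - eta_over_L(6.4e13, 0.05) / 2) < 1e-12
assert abs(eta_over_L(6.4e13, 4 * 0.05) - 2 * eta_over_L(6.4e13, 0.05)) < 1e-12
```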
Note that turbulence in HC for moderate values of $\rm{Pr}$ is confined to a narrow region located under the cooling/heavy boundary, consisting of the plume and the BL where the fluid is statically unstable (cf. Fig.\ref{3D}) \cite{Gayen14,ScottiW11}. Decreasing values of $\rm{Pr}$ increase the volume of fluid subject to turbulence (see Fig.\ref{3D}) and decrease the depth of the circulation.\\ \begin{figure*}[t!] \begin{minipage}[b]{\linewidth} \scalebox{0.7}{\Large\input{NuRa1e025_Ra_L}} \scalebox{0.7}{\Large\input{NuPr-03_Pr_L}} \put(-510,10){(a)}\put(-250,10){(b)}\\ \scalebox{0.7}{\Large\input{ReRa-03_Ra_L}} \scalebox{0.7}{\Large\input{RePr06_Pr_L}} \put(-510,10){(c)}\put(-250,10){(d)} \end{minipage} \vspace{-7mm} \caption{(a),(c) $\rm{Ra}$ dependencies and (b),(d) $\rm{Pr}$ dependencies of (a),(b) the Nusselt number and (c),(d) the Reynolds number, as obtained in the DNS for (a),(c) $\rm{Pr}=1$ (squares), $\rm{Pr}=0.1$ (circles), $\rm{Pr}=0.01$ (triangles) and for (b),(d) $\rm{Ra}=6.4\times10^{10}$ (diamonds) and $\rm{Ra}=1.92\times10^{12}$ (pentagons). The DNS results support the scaling in the regime $I_l$ (solid lines) [Eqs. (\ref{lam_scal}a) and (\ref{lam_scal}b)], the transition to $II_l$ (dotted lines) [Eqs. (\ref{NuRe1/6}a),(\ref{NuRe1/6}b) and (\ref{NuRe1/6corr}a),(\ref{NuRe1/6corr}b)], and the transition to $II_u$ (dotted lines) [Eqs. (\ref{NuRe1/5h}a) and (\ref{NuRe1/5h}b)].} \label{Nu_Re} \end{figure*} \begin{figure*}[t!]
\scalebox{0.7}{\Large\input{NuRe05_Ra_L}}% \scalebox{0.7}{\Large\input{NuRe05_Pr_L}}% \put(-510,10){(a)}\put(-250,10){(b)}\\ \scalebox{0.7}{\Large\input{L4nu-3epsilonRa-1_Ra_L}}% \scalebox{0.7}{\Large\input{L4nu-3epsilonRa-1_Pr_L}}% \put(-510,10){(c)}\put(-250,10){(d)} \caption{(a),(c) $\rm{Ra}$ dependencies and (b),(d) $\rm{Pr}$ dependencies of (a),(b) $\rm{Nu}\rm{Re}^{-1/2}$ and (c),(d) $L^4\nu^{-3}\bar{\epsilon_u}\rm{Ra}^{-1}$, as obtained in the DNS for (a),(c) $\rm{Pr}=1$ (squares), $\rm{Pr}=0.1$ (circles), $\rm{Pr}=0.01$ (triangles) and for (b),(d) $\rm{Ra}=10^9$ (diamonds) and $\rm{Ra}=2\times10^{10}$ (pentagons). The upper figures support the estimates in eq. (\ref{Nu}) and eq. (\ref{NuRe1/6corr}b), while the lower figures illustrate eq. (\ref{PYbound}).} \label{Re_eps} \end{figure*} \section{Results and scaling analysis}\label{ssec:low_Pr} The regimes observed in our numerical simulations are summarised in Fig. \ref{Ra_Pr}, together with the exponents computed by fitting power laws to the data [Fig.\ref{Nu_Re}(a-d)]. The colours shown in Fig. \ref{Ra_Pr} correspond to the colours shown in Fig.\ref{Nu_Re}(a-d), and the two new regimes, $II_l$ (brown) and $IV_u$ (purple), are shown in Fig. \ref{Ra_Pr} for the turbulent scaling laws.\\ The dependences of $\rm{Nu}$ and $\rm{Re}$ on $\rm{Ra}$ and $\rm{Pr}$ are summarised in Fig. \ref{Nu_Re}(a-d). The Nusselt number obeys a scaling law $\rm{Nu}\sim Ra^{\alpha}$ [see Fig. \ref{Nu_Re}(a)], with the exponent $\alpha$ depending on $\rm Ra$ and $\rm Pr$ as follows: \begin{itemize} \item $\alpha=1/4$, the enhanced laminar scaling, for low $\rm{Ra}$ and higher $\rm{Pr}$, \item $\alpha=1/5$, the classical laminar scaling, for small $\rm{Ra}$, \item $\alpha=1/6$, for small $\rm{Pr}$, \item $\alpha=1/5$, the entrainment-type regime, at high $\rm{Ra}$ and not too small $\rm{Pr}$, \item $\alpha\approx 1/4.4$ for large $\rm{Ra}$ and small $\rm{Pr}$.
\end{itemize} We observe the laminar scalings $\rm Re\sim \rm{Ra}^{\gamma}$ with $\gamma=1/2$ (see ref. \cite{ShishkinaW16}) and $\gamma=2/5$ (see ref.\cite{Rossby65}). At higher $\rm{Ra}$, the new scaling $\gamma=1/3$ is also observed, which changes back to $\gamma=2/5$ (see ref.\cite{Hughes07}) [Fig. \ref{Nu_Re}(c)]. Similarly, when $\rm{Pr}<1$ and $\rm Ra$ is fixed, we observe a scaling relationship $\rm{Nu}\sim \rm{Pr}^{\beta}$ with: \begin{itemize} \item $\beta=0$ for higher $\rm{Pr}$ and low $\rm{Ra}$ (see ref. \cite{ShishkinaW16}), \item $\beta=1/10$ for $\rm{Ra}<10^{11}$ (see ref.\cite{Rossby65}), \item $\beta=1/3$ at low $\rm{Pr}$, \item $\beta=1/5$ for $\rm{Ra}>5\times10^{11}$ (see ref.\cite{Hughes07}), \item $\beta \approx 2/3-1/4$ at large $\rm{Ra}$ and low $\rm{Pr}$. \end{itemize} The Reynolds number follows $\rm{Re} \sim \rm{Pr}^{\delta}$, with $\delta=-2/3$ for the smallest values of $\rm{Pr}$, then $\delta=-1$ for $10^{-2}\lesssim\rm{Pr}\lesssim0.2$, changing to $\delta=-4/5$ for increasing $\rm{Ra}$ at all $\rm{Pr}$, and increasing at high $\rm{Ra}$ to the HGM scaling $\delta=-3/5$ for the largest values of $\rm{Ra}$ [Fig.\ref{Nu_Re}(d)].\\ The fact that our simulations recover the scalings of the SGL\cite{shishkina2017scaling} and HG\cite{hughes2008horizontal} theories validates them and gives confidence in the scaling regimes occurring in the lower part of parameter space, with $\alpha=\beta=1/6$, and $\alpha\approx 1/4.4$, $\beta\approx 1/2.4$, which are new to this study and are the focus of the following sections. \subsection{The low-Prandtl core-driven flow $II_l$} \subsubsection{The limiting regime} In the low-Prandtl number regime, the flow transitions from the Rossby $I_l$ regime to the $II_l$ regime as $\rm Ra$ increases. A snapshot of the flow [Fig. \ref{3D}(a)] shows that, unlike the Rossby regime, the flow here is clearly turbulent in the core.
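Several of the exponent pairs listed above follow from a single algebraic pattern: a dissipation estimate $\overbar{\epsilon_u}\sim\nu^3L^{-4}{\rm Re}^{p}{\rm Pr}^{q}$ is equated with the PY bound $\nu^3L^{-4}{\rm Ra}\,{\rm Pr}^{-2}$ and combined with ${\rm Nu}\sim{\rm Re}^{1/2}{\rm Pr}^{1/2}$. A rational-arithmetic sketch (an illustration added here, using exact fractions) verifies the $I^*_l$, $I_l$, $II_u$ and bulk-driven $\alpha=1/6$ exponents:

```python
from fractions import Fraction as F

def regime(p, q):
    """Solve Re^p Pr^q ~ Ra Pr^{-2}, then apply Nu ~ Re^{1/2} Pr^{1/2}."""
    re = (F(1) / p, (F(-2) - q) / p)       # (Ra-, Pr-) exponents of Re
    nu = (re[0] / 2, re[1] / 2 + F(1, 2))  # (Ra-, Pr-) exponents of Nu
    return re, nu

# I*_l: eps_u ~ Re^2               -> Re ~ Ra^{1/2} Pr^{-1},   Nu ~ Ra^{1/4}
assert regime(F(2), F(0)) == ((F(1, 2), F(-1)), (F(1, 4), F(0)))
# I_l:  eps_u ~ Re^{5/2}           -> Re ~ Ra^{2/5} Pr^{-4/5}, Nu ~ Ra^{1/5} Pr^{1/10}
assert regime(F(5, 2), F(0)) == ((F(2, 5), F(-4, 5)), (F(1, 5), F(1, 10)))
# II_u: eps_u ~ Re^{5/2} Pr^{-1/2} -> Re ~ Ra^{2/5} Pr^{-3/5}, Nu ~ Ra^{1/5} Pr^{1/5}
assert regime(F(5, 2), F(-1, 2)) == ((F(2, 5), F(-3, 5)), (F(1, 5), F(1, 5)))
# bulk-driven: eps_u ~ Re^3        -> Re ~ Ra^{1/3} Pr^{-2/3}, Nu ~ Ra^{1/6} Pr^{1/6}
assert regime(F(3), F(0)) == ((F(1, 3), F(-2, 3)), (F(1, 6), F(1, 6)))
```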
In this low-Prandtl number regime, the buoyancy flux provided through the boundary is large but the thermal and kinetic boundary layers remain thick, hence laminar, and eq. (\ref{Nu}) still holds. With decreasing $\rm{Pr}$ and/or increasing $\rm{Ra}$, the bulk dynamics then dominates the dissipation, with a large-scale overturning flow occupying the entire domain and whose horizontal length scale is $L$. In this case, it is the large-scale velocity $U$ which drives the dissipation of kinetic energy, and the latter is given by \begin{equation} \overbar{\epsilon_u} \sim \nu^{3}L^{-4}\rm{Re}^3. \label{epsub} \end{equation} From (\ref{Nu}), (\ref{epsu2}) and (\ref{epsub}), it follows that low-$\rm{Pr}$ HC exhibits dependencies of the form \begin{subeqnarray} \rm{Re}&\sim&\rm{Ra}^{1/3}\rm{Pr}^{-2/3}, \\ \rm{Nu}&\sim&\rm{Ra}^{1/6}\rm{Pr}^{1/6}, \label{NuRe1/6} \end{subeqnarray} where this scaling regime is denoted as $II_l$ [see Fig. \ref{Ra_Pr}(b) and ref.\cite{ShishkinaGL16}]. Note that these scalings are only observed for the Rayleigh-number dependence; the Prandtl-number dependence is clearly underestimated for both $\rm Nu$ and $\rm Re$. The boundary-layer scaling observed thus far was consistent with $$ \rm{Nu}\sim \rm{Pe}^{1/2}. $$ In the following, we show that the balance in the boundary layer has to be modified in order to take into account either core-size modifications or turbulent boundary-layer effects. \subsubsection{Modification induced by the variable turbulent depth $h$ for the $II_l$ regime} In low-Prandtl number regimes, for $\rm{Pr} < 10^{-1}$, turbulence is confined between the plume and the left part of the domain, under the statically unstable boundary layer, whose depth is denoted by $h<H$. The PY inequality provides a bound for the dissipation which can be used to relate the depth $h$ occupied by the turbulent core with the Reynolds number $\rm{Re}$.
However, dissipation in horizontal convection is bounded by the dissipation in the laminar kinetic boundary layer, located under the warming (statically stable) boundary. This imbalance between the dissipation in the bulk and in the boundary layer is the first occurrence of a turbulent regime subject to two regions with different dissipation rates at low Prandtl numbers. At low values of ${\rm Pr}$, the thermal boundary layer providing the available energy drives the dynamics near the forcing boundary, and its dissipation rate is given in eq. (\ref{epsu_lam}). The core sustains a higher dissipation rate, given by eq. (\ref{epsub}), and therefore dissipates the energy faster than it is supplied by the forcing boundary. As a consequence, the bulk size $h$ must decrease with respect to the full depth of the domain $H$ in order to account for this effect. \\ A similar observation was made by Chiu-Webster {\it et al.}\cite{chiu2008very} in the case of infinite Prandtl numbers, where the core dissipates less than the boundary layer, leaving the Nusselt-number scaling independent of the core dynamics. Here we demonstrate that the turbulent core modifies the Prandtl-number dependence by relating the dissipation and buoyancy variance in both the boundary layer and the bulk. The idea is to allow for the ratio $h/H$ to appear, provide the Rayleigh- and Prandtl-number dependencies, and correct the above estimate.\\ The bulk size is assumed to balance the ratio between the thermal dissipation in the BL and in the bulk, which has to equal the ratio between the kinetic-energy dissipation in the BL and in the bulk. This balance reads \begin{equation} \left(\frac{\overbar{\epsilon_{b,bulk}}}{\overbar{{\epsilon}_{b,BL}}}\right) \sim \left(\frac{\overbar{\epsilon_{u,bulk}}}{\overbar{{\epsilon}_{u,BL}}}\right). \label{eq:ratios_eps} \end{equation} We now substitute the expressions for the dissipation and the buoyancy variance in both regions.
For $\overbar{\epsilon_{u,bulk}}$, $\overbar{{\epsilon}_{u,BL}}$ and $\overbar{{\epsilon}_{b,BL}}$, the values are expressed in terms of the Reynolds and Prandtl numbers in eqs. (\ref{epsub}), (\ref{epsu_lam}), and (\ref{epsbbl_lam}), respectively. The buoyancy variance in the bulk is \begin{equation} \overbar{\epsilon_{b,bulk}} \sim \frac{U \Delta^2}{h}\frac{h-\lambda_b}{h} \sim \frac{U \Delta^2}{h} \sim \frac{\kappa\Delta^2}{h^2} {\rm Pe}. \end{equation} Rearranging the terms in eq. (\ref{eq:ratios_eps}), the ratio of bulk-to-domain depth appears naturally and provides the following scaling \begin{equation} \frac h H \sim \left({\rm Re^{1/2}}{\rm Pe^{-1/2}}\right)^{-1/2} \sim {\rm Pr}^{1/4}. \label{bulk_decrease} \end{equation} This decrease of the bulk size is depicted in Fig. \ref{streamlines}(a-c), where the plume depth behaves according to the above scaling. The reduction in $h$ also implies that in this transition regime between $I_l$ and $II_l$ or $II_u$ and $II_l$, eq.(\ref{epsub}) has to take into account the fact that $\rm Re$ is now based not on $H$ but on $h$, the overturning depth. In this respect, the overturning dissipation may be rescaled such that \begin{equation} \overbar{\epsilon_u} \sim \frac{\nu^{3}}{H^{4}}{\rm Re}^3\left(\frac h H\right)^3, \label{epsub2} \end{equation} in order to take into account the reduction of the bulk size $h$ as $\rm{Pr}$ decreases. \begin{figure*}[t!]
\centering \hspace{-0mm} \includegraphics[width=0.8\textwidth]{streamlines_Ra_19e14_Pr_1.eps}\put(-390,10){(a)}\put(-184,0){\scriptsize$\Gamma$}\put(-360,60){\rotatebox{90}{\scriptsize$\Gamma$}}\\ \includegraphics[width=0.8\textwidth]{streamlines_Ra_19e14_Pr_01.eps}\put(-390,10){(b)}\put(-184,0){\scriptsize$\Gamma$}\put(-360,60){\rotatebox{90}{\scriptsize$\Gamma$}}\\ \includegraphics[width=0.8\textwidth]{streamlines_Ra_19e14_Pr_001.eps} \put(-390,10){(c)}\put(-184,0){\scriptsize$\Gamma$}\put(-360,60){\rotatebox{90}{\scriptsize$\Gamma$}}\\ \caption{Time-averaged iso-contours of the streamfunction $\psi=[-0.1, -0.075, -0.05, -0.025, 0, 2, $ $ 4, 6, 8, 10, 12, 14, 16]/8\times 10^{-4}$ for $\rm{Ra}=1.92\times10^{15}$; (a) $\rm{Pr}=1$, (b) $\rm{Pr}=0.1$, and (c) $\rm{Pr}=0.01$, showing the narrowing of the circulation as $\rm{Pr}$ decreases and the circulation within the core narrowing beneath the differentially heated surface at $z\Gamma=1$.} \label{streamlines} \end{figure*} The above argument can also be expressed through the heat transport in the laminar boundary layer, where the bulk modification, similarly to eq. (\ref{epsub2}), is now tied to the amount of heat transport such that \begin{equation} U \frac \Delta L \sim \frac{\kappa\Delta}{\lambda_b^2}\left(\frac H h\right). \label{Nu2} \end{equation} Substituting eq. (\ref{bulk_decrease}) into eq. (\ref{epsub2}), the modified dissipation leads to a new Prandtl-number dependence for the $II_l$ regime such that \begin{subeqnarray} {\rm Re} &\sim& {\rm Ra}^{1/3}{\rm Pr}^{-2/3}\left(\frac{h}{H}\right)^{-1} \sim {\rm Ra}^{1/3}{\rm Pr}^{-11/12},\\ {\rm Nu} &\sim& {\rm Re}^{1/2} {\rm Pr}^{1/2}\left(\frac{h}{H}\right) \quad \sim {\rm Re}^{1/2} {\rm Pr}^{3/4}, \label{epsub2mod} \end{subeqnarray} which is verified empirically in our DNS [see Fig. \ref{Nu_Re}(d) and Fig. \ref{Re_eps}(b)]. Combining eq. (\ref{epsub2mod}a) and eq.
(\ref{epsub2mod}b) provides a correction for this $\rm{Pr}$ transition in the $II_l$ regime \begin{subeqnarray} \rm{Re}&\sim&\rm{Ra}^{1/3}\rm{Pr}^{-11/12}, \\ \rm{Nu}&\sim&\rm{Ra}^{1/6}\rm{Pr}^{7/24}, \label{NuRe1/6corr} \end{subeqnarray} found for $\rm{Pr}\lesssim 0.2$ [see Fig. \ref{Nu_Re}(b,d)]. Hence, the advection-diffusion balance in the boundary layer, eq. (\ref{Nu}), is modified according to $$ \rm{Nu} \sim \rm{Pe}^{1/2}\rm{Pr}^{1/4} \sim \rm{Re}^{1/2}\rm{Pr}^{3/4}, $$ which now takes into account variable-depth effects through $h$, decoupled from the domain's depth $H$. This particular scaling is the last one controlled by the laminar boundary layer as ${\rm Ra}$ increases. As noted by Shishkina {\it et al.}\cite{shishkina2017scaling}, further increasing the Rayleigh number may eventually lead to core-driven dynamics, as previously observed by Griffiths \& Gayen \cite{griffiths2015turbulent} for a somewhat different boundary condition. In the following subsection, we report the transition to a similar regime, which marks the transition to the limiting regime of horizontal turbulent convection for large $\rm{Ra}$. In particular, we leverage the same scaling analysis for the variable size of the turbulent core with depth $h$ to recover the Prandtl-number dependencies. \subsection{The limiting turbulent boundary-layer regime at large Rayleigh numbers $IV_u$} \begin{figure*}[t!] \scalebox{0.69}{\large\input{turb_BL.tex}} \scalebox{0.69}{\large\input{turb_BL_b.tex}} \put(-500,10){(a)}\put(-250,10){(b)} \caption{(a) Mean turbulent boundary-layer profiles and (b) mean turbulent buoyancy profiles measured at $x=-0.325$, rescaled using the slip velocity at the wall $u_0=u(x=-0.75,z=H)$ and the maximum velocity at this particular $x$ location. Note that we are not in the presence of a free-slip-type turbulent boundary layer, but we recover a log-type zone, as shown by the black line, for our largest values of $\rm{Ra}$, small $\rm{Pr}$, and thus largest $\rm{Re}$.
The origin of this log-type boundary layer is further discussed in the text.} \label{fig:log-BL} \end{figure*} The modification of the circulation depth at higher Rayleigh numbers was first investigated by Griffiths \& Gayen \cite{griffiths2015turbulent,rosevear2017turbulent}, who considered a periodic forcing at the surface and very small aspect ratios $\Gamma$. They showed that a laminar-type scaling for the dissipation in the bulk was responsible for the transition to a core-dominated turbulence regime. The scaling obtained from the vorticity equation is linked to the previous analysis and balances the dissipation in the core (or interior), scaling as $\rm Re\sim Ra^{1/2}Pr^{-1/2}$, against the dissipation in the boundary layer, obtained from the boundary-layer relation (\ref{Nu}). This is in contrast with the regime observed previously ($II_l$), as shown in Figs. \ref{Nu_Re} and \ref{Re_eps}, where both the $\rm Ra$ and $\rm Pr$ dependencies do not support the above scaling. There is a clear departure from the laminar scaling obtained in eq. (\ref{Nu}), which suggests that turbulent boundary layers, characterised by a log-type profile and modified heat-transfer coefficients \cite{GrossmannL11}, may be expected. Note that such boundary-layer profiles were already observed in Rosevear {\it et al.}\cite{rosevear2017turbulent}, but they did not affect the heat- nor the momentum-transfer scaling obtained in their analysis.
Here we report different results and show that log-type profiles do influence heat and momentum transfers, in a similar way to results obtained in Rayleigh-B\'enard convection \cite{GrossmannL11,van2015logarithmic}.\\ \subsubsection{Incompatibility with a laminar boundary-layer scaling} Low-Prandtl number flows are particularly interesting with respect to the study of the transition to the limiting regime in HC, since the transition to a turbulence-dominated flow is first observed in the $II_l$ regime, where the kinetic-energy dissipation driving the dynamics scales as $\rm \epsilon_u \sim \rm Re^{3}$. Therefore, increasing $\rm{Ra}$ may eventually trigger turbulent boundary layers as well. However, increasing $\rm{Ra}$ leads to thinner boundary layers and a thinner bulk. Following this logic, once turbulence is triggered in the boundary layer, the thermal boundary layer becomes embedded into the kinetic one and the boundary layers exhibit log-type profiles. As recently observed in Reiter \& Shishkina\cite{reiter2020classical}, the velocity of the flow, which carries the temperature in the bulk, reduces from $U$ to $U(\lambda_{b}/\lambda_u)$ and the buoyancy-variance dissipation rate\cite{ShishkinaGL16} becomes \begin{equation} \overbar{\epsilon_{b,bulk}}\sim(\Gamma/2)\kappa\Delta^2 h^{-2}{\rm Pr\; Re^{3/2}\; Nu^{-1}}, \label{epsb} \end{equation} whereas the total volume $V$ has a buoyancy variance of \begin{equation} \overbar{\epsilon_{b,V}}=(\Gamma/2) \kappa\Delta^2 L^{-2}{\rm Nu}. \label{epsbV} \end{equation} Equating eqs. (\ref{epsb}) and (\ref{epsbV}) gives the expression for the Nusselt number such that \begin{equation} {\rm Nu}\sim {\rm Re}^{3/4}{\rm Pr}^{1/2}\left(\frac h H\right). \label{NuRe075} \end{equation} With increasing $\rm{Ra}$, the bulk dynamics is driven by the large-scale overturning flow whose length scale is $h$, as confirmed by our simulations in the previous subsection.
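Equating eqs. (\ref{epsb}) and (\ref{epsbV}) amounts to ${\rm Nu}^2\sim{\rm Pr}\,{\rm Re}^{3/2}$ up to the depth-ratio factor, which gives the ${\rm Re}^{3/4}{\rm Pr}^{1/2}$ prefactor of eq. (\ref{NuRe075}); as a quick exponent check (an illustration with exact fractions):

```python
from fractions import Fraction as F

# Pr * Re^{3/2} * Nu^{-1} ~ Nu  =>  Nu^2 ~ Pr Re^{3/2}
nu = {"Re": F(3, 2) / 2, "Pr": F(1) / 2}
assert nu == {"Re": F(3, 4), "Pr": F(1, 2)}   # Nu ~ Re^{3/4} Pr^{1/2}
```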
In this case, the dissipation of kinetic energy in the bulk is essentially dependent on the large-scale velocity $U$ and the Reynolds number is again modified from ${\rm Re}$ based on $H$ to ${\rm Re}$ based on $h$ such that \begin{equation} \overbar{\epsilon_{u,bulk}} \sim \frac{\nu^{3}}{H^{4}}{\rm Re}^3\left(\frac h H\right)^3, \label{epsub4u} \end{equation} while the Rayleigh- and Prandtl-number dependencies of the bulk now yield \begin{equation} \left(\frac{\overbar{\epsilon_{b,bulk}}}{\overbar{{\epsilon}_{b,BL}}}\right) \sim \left(\frac{\overbar{\epsilon_{u,bulk}}}{\overbar{{\epsilon}_{u,BL}}}\right) \equiv \frac h H \sim {\rm Re}^{-1/8} {\rm Pr}^{1/4}. \label{eq:bulk_reduce_high_Ra} \end{equation} In the above, the buoyancy variance in the bulk still follows $\overbar{\epsilon_{b,bulk}}\sim h^{-2}{\rm Pe}$, but in the boundary layer, the buoyancy variance follows eq. (\ref{NuRe075}) and $\overbar{{\epsilon}_{b,BL}}\sim L^{-2}{\rm Re}^{3/4}{\rm Pr}^{1/2}$ (see ref.\cite{reiter2020classical}). The turbulent kinetic energy dissipation is given by eq. (\ref{epsub4u}) and reads $\overbar{\epsilon_{u,bulk}}\sim L^{-4}{\rm Re}^3$, while in the boundary layer, dissipation is bounded by the stably stratified layer, $\overbar{\epsilon_{u,BL}}\sim L^{-4}{\rm Re}^{5/2}$. Combining (\ref{epsb}) and (\ref{epsub4u}), together with the bulk-reduction effect $\left(h/H\right)$ for the Nusselt number as in eq. (\ref{epsub2mod}b), the relation for the Nusselt number reads \begin{equation} {\rm Nu}\sim {\rm Re}^{3/4}{\rm Pr}^{1/2}\left(\frac h H\right) \sim {\rm Re}^{5/8}{\rm Pr}^{3/4}, \label{NuRe075_h} \end{equation} which agrees with the results shown in Fig. \ref{Re_eps}(a,b).
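The exponent bookkeeping leading to eq. (\ref{NuRe075_h}) can be checked with exact rational arithmetic. A minimal sketch (Python; the exponents are those quoted above, hard-coded for illustration):

```python
from fractions import Fraction as F

# Nu ~ Re^(3/4) Pr^(1/2) * (h/H), with h/H ~ Re^(-1/8) Pr^(1/4)
nu_re, nu_pr = F(3, 4), F(1, 2)    # exponents from the buoyancy-variance balance
hH_re, hH_pr = F(-1, 8), F(1, 4)   # bulk-reduction exponents

# combine the two power laws by adding exponents
re_exp = nu_re + hH_re
pr_exp = nu_pr + hH_pr
print(re_exp, pr_exp)  # 5/8 3/4, i.e. Nu ~ Re^(5/8) Pr^(3/4)
```

The same bookkeeping can be repeated for any of the regime scalings discussed here.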
Combining (\ref{epsb}), (\ref{epsu2}), (\ref{epsub4u}a) and (\ref{epsub4u}b), one obtains \begin{subeqnarray} \rm{Re}&\sim&\rm{Ra}^{8/21}\rm{Pr}^{-22/21},\\ \rm{Nu}&\sim&\rm{Ra}^{5/21}\rm{Pr}^{17/96}, \label{NuRe1/4_turb} \end{subeqnarray} which slightly overestimates the $\rm Ra$ dependence of the Nusselt number (i.e. $\approx 0.225$ from the DNS vs. $\approx 0.238$ from the theory) but clearly underestimates the Prandtl-number dependence ($\approx 0.41$ for the DNS vs. $\approx 0.17$ for the theory) in that particular region of the $(\rm{Ra},\rm{Pr})$ plane; this regime is denoted as $IV_u$ in Fig. \ref{Ra_Pr}(a,b) and \ref{Re_eps}(a,b).\\ This scaling analysis shows that the effect of turbulence must play a role. In particular, the dissipation scaling has to take into account the turbulent boundary-layer characteristics, and log-type corrections eventually have to be reintroduced in order to predict accurately both the Prandtl- and Reynolds-number dependencies observed in the simulations. \subsubsection{A fully turbulent boundary-layer scaling} The above scaling overestimates the heat flux: the observed exponent $\rm Nu\sim {\rm Ra}^{0.225}$ lies close to, but below, both the predicted ${\rm Nu}\sim{\rm Ra}^{5/21}\sim{\rm Ra}^{0.238}$ scaling and the turbulent ${\rm Nu}\sim{\rm Ra}^{1/4}$ scaling (Fig. \ref{Nu_Re}(a)), which suggests a new correction for the dissipation in what appear to be turbulent thermal and kinetic boundary layers. In addition, neither of the Prandtl-number dependencies found in the numerical simulations is predicted by eqs. (\ref{NuRe1/4_turb}a) and (\ref{NuRe1/4_turb}b), which suggests that turbulence plays a non-trivial role in the above scaling. For small $\rm Pr$ and large $\rm Ra$, the statically unstable boundary layer indeed becomes turbulent: for decreasing ${\rm Pr}$, the Reynolds number increases [Fig.~\ref{Nu_Re}(c)], which causes the boundary layer to transition to turbulence.
The dissipation of kinetic energy may be split between a viscous sub-layer $\overbar{\epsilon_{vs}}$ and a log layer $\overbar{\epsilon_{ll}}$ such that $\overbar{\epsilon_{u,BL}}=\overbar{\epsilon_{vs}}+\overbar{\epsilon_{ll}}$.\cite{GrossmannL11} In the log layer, the dissipation writes \begin{equation} {\epsilon_{u,ll}}(z) = \frac{u_*^3}{C_\kappa z}, \end{equation} which may also be rearranged as \begin{equation} {\epsilon_{u,ll}}(z) = \nu^3 L^{-4} \frac{\rm{Re}^3}{C_\kappa z} \left(\frac{u_*}{U}\right)^3. \end{equation} The mean kinetic energy dissipation in the log layer can therefore be obtained by integrating the above such that \begin{equation} \overbar{\epsilon_{u,ll}} = \int_{z_*}^{L_2}\epsilon_{u,ll}(z) \mbox{d}z, \end{equation} where $z_* = \nu/u_*$ and $L_2$ corresponds to the edge of the logarithmic zone. The dissipation in the log layer thus acts as a buffer to heat exchanges and induces a log-type correction, denoted ${\cal L}(\cdot)$, which depends on the Reynolds number such that \begin{equation} \overbar{\epsilon_{u,ll}} := \nu^{3}L^{-4}{\rm Re}^3\left(\frac{u_*}{U}\right)^3 \frac{2}{C_\kappa}\log\left({\rm Re}\frac{u_*}{U}\frac{1}{2}\right), \label{logll} \end{equation} where the length scale is now $L$ (i.e. the length of the domain), $C_\kappa\approx0.4$ is the von K\'arm\'an constant, and $u_*=\sqrt{\overline{u'w'}}$ is the typical velocity fluctuation scale\cite{GrossmannL11}. Indeed, in the present simulations, the friction directly at the wall is null because of the free-slip boundary condition, and the friction velocity $u_*$ does not refer to the wall shear stress but to the turbulent shear stress, induced by turbulent fluctuations of the buoyancy flux $\overline{w'b'}$ originating from the statically unstable buoyancy profile in this near-wall region. The log profiles for both the velocity and the buoyancy are shown in Fig. \ref{fig:log-BL}(a) and Fig. \ref{fig:log-BL}(b).
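As a consistency check (assuming the log layer extends to $L_2=L/2$, averaging over that depth, and using the log-layer dissipation ${\epsilon_{u,ll}}(z)=u_*^3/(C_\kappa z)$ together with ${\rm Re}=UL/\nu$ and $z_*=\nu/u_*$), the integral evaluates directly to the form quoted in eq. (\ref{logll}):

```latex
\overbar{\epsilon_{u,ll}}
= \frac{2}{L}\int_{z_*}^{L/2}\frac{u_*^3}{C_\kappa z}\,\mbox{d}z
= \frac{2 u_*^3}{C_\kappa L}\log\left(\frac{L}{2 z_*}\right)
= \nu^{3}L^{-4}{\rm Re}^3\left(\frac{u_*}{U}\right)^{3}
\frac{2}{C_\kappa}\log\left({\rm Re}\,\frac{u_*}{U}\,\frac{1}{2}\right),
```

since $\nu^{3}L^{-4}{\rm Re}^{3}=U^{3}/L$ and $L/(2z_*)={\rm Re}\,(u_*/U)/2$.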
For all turbulent profiles, a logarithmic region can be observed in the velocity profile. In addition, the profile appears to be self-similar, at least for the three profiles reported in the figure. The source of turbulent stresses here is suggested by the extended log profile, measured over two decades in Fig. \ref{fig:log-BL}(b) for the buoyancy profile at $\rm Ra = 1.92\times 10^{15}$ and $\rm Pr=10^{-2}$. \begin{figure*}[t!] \begin{minipage}[b]{\linewidth} \scalebox{0.7}{\Large\input{logBLRe_Ra}} \scalebox{0.7}{\Large\input{logBLRe_Pr}} \put(-510,10){(a)}\put(-250,10){(b)} \end{minipage} \vspace{-7mm} \caption{Dependencies of $\rm{Nu}$ (red) and $\rm{Re}$ (blue) with respect to $\rm Ra$ (a) and $\rm Pr$ (b) showing the modification and weak variations of both $\rm{Nu}$ and $\rm{Re}$ in the regime considered here. The continuous lines show the predictions from eqs. \ref{NuRe1/3ll}(a,b) while the dashed lines show the measurements from Fig. \ref{Nu_Re}(a-d).} \label{fig:log-scaling} \end{figure*} The buoyancy variance in the boundary layer can also be expressed using a similar analogy. As shown in Fig. \ref{fig:log-BL}(b), the thermal layer displays a log-type layer, which is a common feature of turbulent statically stable and unstable boundary layers, \begin{equation} {\epsilon_{b,ll}}(z) = \frac{\kappa b_*^2}{C_\kappa z^2}, \end{equation} where $b_*= \overbar{w'b'}/u_*$. The fluctuating velocity $u_*$ can be connected to the outer velocity $U$ or $\rm Re$, and equivalently for the buoyancy scale, by \begin{equation} \frac{u_*}{U} = \frac{c_\kappa}{\log\left({\rm Re}\frac{u_*}{U}\frac{1}{\theta}\right)} \quad \mbox{and} \quad \frac{b_*}{\Delta} = \frac{c_\kappa}{\log\left({\rm Re}\frac{u_*}{U}\frac{1}{\theta}\right)}.
\label{eq:epsilon_exp} \end{equation} In the above, we assumed that the flux Richardson number $R_f=\overline{w'b'}/(\overline{u'w'}\, \partial u/\partial z)$ is constant, which should hold provided that the structure of the turbulent boundary layer remains self-similar in $\rm Ra$ and $\rm Pr$ (see Fig. \ref{fig:log-scaling}(a,b)). The empirical constant $\theta$ depends on the system geometry; along a plate it is empirically found to be equal to $0.13$ (see ref.\cite{GrossmannL11}). Again, the mean buoyancy variance can be integrated such that \begin{equation} \overbar{\epsilon_{b,ll}} = \int_{z_*}^{L/2}\epsilon_{b,ll}(z) \mbox{d}z, \end{equation} which reduces to \begin{equation} \overbar{\epsilon_{b,ll}} = \frac{2 \kappa b^2_*}{C_\kappa L^2} {\rm Re}\frac{u_*}{U}, \end{equation} and using eq. (\ref{eq:epsilon_exp}), the expression becomes \begin{equation} \overbar{\epsilon_{b,ll}} = \kappa \Delta^2 L^{-2} {\rm Re}\left(\frac{u_*}{U}\right)\frac{2}{C_\kappa }{\log\left({\rm Re}\frac{u_*}{U}\frac{1}{\theta}\right)^{-2}}. \end{equation} In the above expressions, the unknown ratio $u_*/U$ can be computed using Lambert's W-function as $u_*/U=c_\kappa/W({\rm Re}\, c_\kappa/\theta)$. The dissipation in the turbulent regime is thus modified from eq. (\ref{epsub4u}a) and becomes \begin{equation} \overbar{\epsilon_{u,ll}} \sim \nu^{3}L^{-4}\rm{Re}^3{{\cal L}(Re)}, \label{logu} \end{equation} where ${{\cal L}(Re)}$ is given by eq. (\ref{logll}). The bulk, however, still dissipates at the rate expressed in eq. (\ref{NuRe1/4_turb}a) above (see Figs. \ref{Nu_Re}(c,d)). One can think of this correction as a decrease in heat transfer through the boundary layer, which is responsible for an even faster modification of the relative depth of the recirculation region $(h/H)$. The latter is estimated using eq.
(\ref{eq:bulk_reduce_high_Ra}) which writes $$ \left(\frac{\overbar{\epsilon_{b,bulk}}}{\overbar{{\epsilon}_{b,BL}}}\right) \sim \left(\frac{\overbar{\epsilon_{u,bulk}}}{\overbar{{\epsilon}_{u,BL}}}\right) \equiv \frac h H \sim {\rm Re}^{-1/8} {\rm Pr}^{1/4}. $$ Thus the scaling for the Nusselt number becomes \begin{equation} {\rm Nu}={\rm Re}^{3/4}{\rm Pr}^{1/2}\left(\frac h H\right)\sim\rm{Re}^{5/8}\rm{Pr}^{3/4}{\cal L}(Re)^{1/2}, \label{NuRe075_turb} \end{equation} and from (\ref{epsub4u}b), (\ref{epsb}), (\ref{epsu2}), and (\ref{logu}) it follows that \begin{subeqnarray} \rm{Re}&\sim&\rm{Ra}^{8/21}\rm{Pr}^{-22/21}{\cal L}(Re)^{-8/21}, \\ \rm{Nu}&\sim&\rm{Ra}^{5/21}\rm{Pr}^{17/96}{\cal L}(Re)^{-5/27}. \label{NuRe1/3ll} \end{subeqnarray} These scalings are verified in Fig. \ref{fig:log-scaling}(a,b) for both the Nusselt number and the Reynolds number with respect to $\rm Ra$. The Prandtl-number prediction is also improved compared with eqs. \ref{NuRe1/4_turb}(a,b). The Prandtl exponent for the Reynolds number is found to be $\rm Re \sim Pr^{-1}$ in the DNS while the log-corrected prediction is $\rm Re \sim Pr^{-1.075}$. The Nusselt-number dependence is $\rm Nu \sim Pr^{0.41}$ in the DNS while the log-corrected scaling gives $\rm Nu \sim Pr^{0.2}$, which hints at a possible additional Prandtl-number dependence due to the plume dynamics, as observed in Rayleigh-B\'enard convection \cite{GrossmannL11,ni2011local}. To the best of our knowledge, this is the first time that such log-type corrections are applied and verified for the Prandtl-number dependence. The log regions of the time-averaged turbulent boundary layers are shown in Fig. \ref{fig:log-BL} for two different Prandtl numbers and two different Rayleigh numbers in the turbulent regime, in support of our assumption and analysis. These scaling laws thus support the evidence of a new regime, which can be considered as the limiting regime of horizontal convection.
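The implicit relations (\ref{eq:epsilon_exp}) can be solved numerically without special functions. A minimal sketch (Python; the values ${\rm Re}=10^6$, log-law constant $0.4$ and $\theta=0.13$ are illustrative, taken from the constants quoted above), using a fixed-point iteration equivalent to the Lambert-W expression for $u_*/U$:

```python
import math

def ustar_over_U(Re, c_kappa=0.4, theta=0.13, tol=1e-12):
    """Solve x = c_kappa / log(Re * x / theta) by fixed-point iteration.

    Substituting x = c_kappa / w turns the relation into w * exp(w) =
    Re * c_kappa / theta, i.e. x = c_kappa / W(Re * c_kappa / theta)
    with W the Lambert W-function.
    """
    x = c_kappa / math.log(Re)          # rough starting guess
    for _ in range(200):
        x_new = c_kappa / math.log(Re * x / theta)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

x = ustar_over_U(1e6)
residual = x - 0.4 / math.log(1e6 * x / 0.13)   # should vanish at the fixed point
```

The ratio $u_*/U$ obtained this way feeds directly into the log-correction ${\cal L}({\rm Re})$.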
It is worth pointing out that as $\rm{Ra}$ further increases, the $\alpha$ exponent progressively reaches the value of $5/21$, where log-corrections become less significant. Note that this exponent is also in very good agreement with the recent study of Reiter \& Shishkina\cite{reiter2020classical}. The transition from the $II_l$ regime to the $IV_u$ regime in Fig. \ref{Ra_Pr} is marked by the vertical black dotted line and is obtained by matching the Reynolds number between each region. Equating eq. (\ref{NuRe1/3ll}a) with eq. (\ref{NuRe1/6corr}), the transition is found very close to a constant at $\rm Ra \approx 3\times 10^{12}$. \\ The next section investigates whether this last transition can be thought of in terms of turbulent regimes. As one expects a transition to the limiting regime of convection, the question arises whether turbulence in the present flow depends on viscosity, and whether we are reaching the asymptotic regime known as the ultimate regime, where dissipation does not depend on viscosity. Note that a viscosity-independent turbulent regime would make the present results relevant for large-scale geophysical applications. \section{``Hard'' or ``Soft'' turbulence?}\label{sec:Ko} The picture drawn in the previous section provides an understanding of the heat- and momentum-transfer transitions leading to the ultimate regime of horizontal convection. Although the above picture is rather convincing, with successive scaling transitions in agreement with what was previously observed in Rayleigh-B\'enard convection, it remains complex, with two control parameters and at least five different regimes spanning ten orders of magnitude in $\rm{Ra}$.
We propose to reanalyse our data in the framework of Kolmogorov turbulence, which frames the above analysis differently than the Grossmann \& Lohse\cite{GL00} picture and provides a valuable tool for diagnosing the state of turbulence and how these regimes and their transitions may be further analysed.\\ \begin{figure*}[t!] \begin{minipage}[b]{\linewidth} \scalebox{0.7}{\Large\input{Ko_Re}} \scalebox{0.7}{\Large\input{Ko_Pe}} \put(-510,10){(a)}\put(-250,10){(b)}\\ \scalebox{0.7}{\Large\input{Ko_Pr_Re}} \scalebox{0.7}{\Large\input{Ko_Pr_Pe}} \put(-510,10){(c)}\put(-250,10){(d)} \end{minipage} \vspace{-7mm} \caption{(a) $\rm{Re}$ dependencies and (b) $\rm{Pe}$ dependencies of the Kolmogorov number $\rm{Ko}$ for variations with respect to $\rm{Ra}$. Same for (c) and (d) but for variations with respect to $\rm{Pr}$ (refer to figure \ref{Ra_Pr} for colour code).} \label{fig:Ko} \end{figure*} We first define what we denote the Kolmogorov number, obtained by rescaling the Paparella \& Young constraint on dissipation using $U$ and $L$, giving \begin{equation} \rm{Ko} = {\rm Re}^{-3} {\rm Ra} {\rm Pr}^{-2}. \end{equation} The scaling law relating $\rm{Ko}$ to $\rm{Re}$ and $\rm{Pe}$ provides a new way to analyse whether the flow is laminar, transitional, or driven by ``soft'' or ``hard'' turbulence. In the laminar case, the Kolmogorov number, and hence the dissipation, is solely caused by the vertical gradient of velocity, so that $\rm{Ko}\sim\rm{Re}^{-1}$. On the contrary, hard turbulence achieves complete similarity with respect to parameters that contain viscosity\cite{vassilicos2015dissipation}, that is, we should expect $\rm{Ko}=Cst$. An intermediate regime $\rm{Ko}\sim\rm{Re}^{-1/2}$ may also be expected if boundary layers dominate dissipation, since the latter may not achieve complete similarity. The evolution of $\rm{Ko}$ with respect to both $\rm{Re}$ and $\rm{Pe}$ is shown in Fig.
\ref{fig:Ko}, where for laminar flows we obtain, for all $\rm{Pr}$, $\rm{Ko}\sim\rm{Re}^{-1}$. Across the $I_l$ and $I^*_l$ regimes, the dependency of $\rm{Ko}$ exhibits a $\rm{Re}^{1}$ transition, where dissipation is enhanced throughout this core-driven mixing regime. A $\rm{Ko}\sim\rm{Re}^{-1/2}$-type regime is then recovered for all values of $\rm{Pr}$ in the $IV_u$ regime (see Fig. \ref{fig:Ko}(a)). The same observation can be made for the scaling with respect to $\rm{Pe}$, where the same conclusions arise (see Fig. \ref{fig:Ko}(b)). Variations with respect to the Prandtl number follow the same rationale. Transitions between regimes when varying the Prandtl number are found at constant $\rm{Ko}$ (see Fig. \ref{fig:Ko}(c)), whereas in the $II_l$ and $II^*_l$ regimes, variations with respect to $\rm{Pr}$ occur along $\rm{Ko}\sim \rm{Re}^{-1}$. At higher $\rm{Ra}$ and in the $IV_u$ regime, conclusions are still hard to draw, but we may expect a $\rm{Ko}\sim \rm{Re}^{-1/2}$ scaling. It is straightforward to conclude that although we have reached the limiting regime of horizontal convection at large $\rm{Ra}$, the transition to ``hard'' turbulence, or the ``ultimate'' regime of turbulent convection, in natural horizontal convection does not seem attainable using our numerical simulations. In addition, it may not arise even at extremely high $\rm{Ra}$. This agrees with Sandstr\"om's inference, which may be explained by the bound on the Richardson number that holds for all regimes reported in this paper and by the scaling laws available in the literature. As shown in ref.\cite{toppaladoddi2017roughness}, such a regime may be achievable by introducing appropriate roughness elements along the statically unstable boundary, optimising the distribution of the forcing boundary\cite{rocha2020improved}, adding active forcing in this same region, or, for instance, through radiative heat transfer \cite{lepot2018radiative}.
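In practice, the regime diagnostic used above, reading the local log-log slope of $\rm Ko$ against $\rm Re$ ($-1$ laminar, $-1/2$ boundary-layer dominated, $0$ hard turbulence), can be sketched as follows (a minimal Python illustration; the data are synthetic placeholders following $\rm Ko\sim Re^{-1}$, not the DNS values):

```python
import math

def loglog_slope(x_vals, y_vals):
    """Least-squares slope of log(y) versus log(x)."""
    xs = [math.log(x) for x in x_vals]
    ys = [math.log(y) for y in y_vals]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    cov = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    var = sum((x - xbar) ** 2 for x in xs)
    return cov / var

def classify(slope, tol=0.1):
    for name, target in (("laminar", -1.0), ("boundary-layer", -0.5), ("hard", 0.0)):
        if abs(slope - target) < tol:
            return name
    return "transitional"

# synthetic laminar branch Ko ~ Re^{-1} (placeholder for DNS values)
re_vals = [10.0 ** p for p in range(2, 7)]
ko_vals = [5.0 / r for r in re_vals]
slope = loglog_slope(re_vals, ko_vals)   # -> -1, hence "laminar"
```

On DNS data, one would evaluate the slope over each regime's $\rm Re$ range separately, since the exponent changes across the transitions.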
Such a strategy may also allow for observing the $IV_l$ regime predicted by Siggers {\it et al.}\cite{siggers2004bounds,rocha2020improved}. \section{Conclusions}\label{sec:conclusion} In conclusion, we report evidence of two new turbulent regimes in horizontal convection, based on scaling arguments at low Prandtl numbers, and show that they correspond to limiting regimes. For asymptotically small Prandtl numbers, we highlight a regime where the core is driven by turbulence but the boundary layers remain laminar, and name this regime $II_l$ following the nomenclature of Shishkina, Grossmann \& Lohse \cite{ShishkinaGL16}. The second regime is characterised by both a turbulent core and turbulent boundary layers. It is found to be the limiting regime for asymptotically large Rayleigh numbers and is called $IV_u$ following SGL's nomenclature. Our results support and integrate previous evidence from Shishkina \& Wagner \cite{ShishkinaW16} and the model of Hughes {\it et al.} (see ref. \cite{Hughes07}) into the SGL theory of HC (see refs. \cite{GL00,ShishkinaGL16}).\\ In the $II_l$ regime, we observe a new scaling where the modification of the turbulent bulk size alters the Prandtl-number dependence of both the Reynolds- and Nusselt-number scalings. This reduction of the bulk size is found to be essentially Prandtl-number dependent, with the bulk decreasing in size as $\rm{Pr}$ decreases.\\ The transition to the turbulent limiting regime denoted $IV_u$ is also observed. In this particular regime, the flow becomes turbulent, that is, both the boundary layer and the core follow turbulent-type scaling laws. Similarly to the study of Rosevear {\it et al.}\cite{rosevear2017turbulent}, the turbulent flow is essentially located beneath the horizontal forcing and progressively clusters underneath as $\rm{Ra}$ increases and $\rm{Pr}$ decreases.
According to Shishkina {\it et al.}\cite{ShishkinaGL16}, this last regime marks the final transition at large $\rm{Ra}$. The log-corrections allow for recovering the correct $\rm{Nu}$ and $\rm{Re}$ dependencies with respect to $\rm{Ra}$ and improved estimates with respect to $\rm{Pr}$. However, further work needs to be dedicated to the exact dependence of the Nusselt number on the Prandtl number, which may be attributed to plume dynamics\cite{GrossmannL11,ni2011local}.\\ We also propose a new analysis, based on the Kolmogorov number $\rm Ko$, a rescaled dissipation rate, defined such that $\rm{Ko}\sim{\rm Re}^{-3}\rm{Ra}\,\rm{Pr}^{-2}$. The analysis confirms that the flow transitions from laminar to soft turbulence, but also shows that the flow never transitions to hard turbulence, which would be akin to the $IV_l$ regime of Siggers {\it et al.}\cite{siggers2004bounds,ShishkinaGL16,rocha2020improved}. The ultimate regime $IV_l$, if it exists, thus has yet to be observed (see refs. \cite{ShishkinaGL16,rocha2020improved}). It is therefore of particular interest to study new types of horizontal convection where the turbulence can be strong enough to overcome the effect of boundary layers and trigger purely inertial, core-driven, turbulent horizontal-convection regimes. Such a regime would be of particular importance for geophysical applications such as the overturning circulation.\\ The companion paper, Part II\cite{Passaggia2019LimitigB}, gathers the results from this study together with an experimental study at large Prandtl number. In particular, a regime diagram is provided which highlights all known limiting regimes of horizontal convection.\\ The authors acknowledge the support of the National Science Foundation Grant Numbers OCE--1155558 and OCE--1736989. \bibliographystyle{plain}
\section{Introduction} While the basis sets describing atoms and molecules have been extensively studied and optimized~\cite{Jensen,Szabo}, significant opportunities exist for the improvement of molecule-optimized basis sets and Hamiltonians for the acceleration of electronic structure computations, especially in the presence of strong electron correlation. Standard atomic-orbital basis sets are optimized only to minimize electronic energies of the constituent atoms rather than the total electronic energy of the molecule~\cite{D89}. While the concept of atoms forming molecules has a critical role throughout chemistry, such atom-centered basis sets do not capitalize on opportunities for greater efficiency arising from the nature of the bonding. Molecule-optimized basis sets and Hamiltonians accelerate correlation-energy calculations by minimizing the size (rank) of the orbital basis set in the description of the correlated Hamiltonian and wave function (or reduced density matrix). Such basis sets realize the intuitive idea that the optimal basis set for a molecule at a stretched geometry is different from the optimal basis set at the equilibrium geometry. Molecule-optimized orbitals, largely based on approximate natural orbitals~\cite{RDM07,CB00,L55,M60, C63,SL59,BD66,CSM79,W60,K64,DJ62,D72,B68,HL59,S59,RD66,RD69,HS63,LS69, K69,S69,BLS65,NH63,TRPB77,BS67,FS03,MZP93,LS85,SH74,L64,L69,ALS75, H73,M73,H73,MYN96,BP89,LK73} or frozen natural orbitals~\cite{BD70, EK66,SGT89,AS04,TB05,KO06,TB08,LKD10,HS10,NHL09,SKS11,RK11,GBM11,DS13, KSX13,AB87,PNK06,NPU05,A10,PAH11,KPH12,AS03,BR09,ATG09,LM12,CP12,TB05, LKD10,HS10,DS13}, have been extensively studied in the context of perturbation methods about the Hartree-Fock reference wave function, but their study has been much more limited for strongly correlated molecular systems. 
The aim of the present paper is to generate a molecule-optimized double-zeta basis set and Hamiltonian in which quantum chemistry calculations of strongly correlated systems can be performed. We (i) form the molecule-optimized double-zeta basis set and Hamiltonian from an {\em approximate} solution of the anti-Hermitian contracted Schr{\"o}dinger equation (ACSE)~\cite{AC06, *AC07B,*AC07C,*AC07D,ACMR,*GM09b,*RFM09,ACV07,*ACV09,AC08,FRM09,GM10, SM10,*SM11,*G11,*SM11,*SM12} in a higher (triple-zeta) basis set and (ii) apply the molecule-optimized double-zeta basis set and Hamiltonian to solving the ACSE. The molecule-optimized double-zeta orbitals capture strong correlation, if present, because they are generated from an ACSE calculation starting with an initial multi-configuration self-consistent-field (MCSCF) two-electron reduced density matrix (2-RDM). In the theory section we present a general formulation for molecule-optimized basis sets and Hamiltonians in terms of unitary transformations of the Hamiltonian operator. Optimized orbitals can be viewed as one-electron unitary transformations of the Hamiltonian operator. Within this framework more general unitary transformations can potentially be explored. Illustrative applications are made to the potential energy curves of hydrogen fluoride and diatomic nitrogen. The ACSE-computed molecule-optimized basis sets significantly improve the non-parallelity errors in both curves. Although we specifically use the optimized basis orbitals and Hamiltonians in the ACSE, they can be more generally employed with any electronic structure method. \section{Theory} Molecule-optimized orbitals are generally developed from approximate natural orbitals in section~\ref{sec:no}. The construction of a set of natural orbitals from the ACSE that are suitable for treating strongly correlated many-electron molecular systems is described in section~\ref{sec:acse}.
In section~\ref{sec:ham} we recast the acceleration of convergence as a unitary transformation of the molecular Hamiltonian and suggest extensions of the approximate natural-orbital transformations. \subsection{Approximate natural orbitals} \label{sec:no} The ``best'' molecule-optimized molecular orbitals are the natural orbitals, the eigenfunctions of the 1-RDM. The optimality of the natural orbitals follows from a mathematical theorem derived by E. Schmidt in 1907~\cite{C63,CB00,E07}. While finding the exact natural orbitals in a large standard atom-optimized basis set might require the same computational cost as solving the correlation problem in that large basis set, significant cost savings can be achieved by identifying an approximate set of natural orbitals and then solving the correlation problem in a truncated set of these orbitals. Approximate natural orbitals can be obtained from a low-cost correlation-energy calculation and then employed after truncation in a higher cost correlation-energy calculation. Examples of the strategy from the literature include the early use of natural orbitals from perturbation theory~\cite{L64,BLS65,EK66,H73,M73,SH74} or iterative refinement~\cite{D72,TRPB77} in configuration interaction and the recent use of natural orbitals from second-order many-body perturbation theory in coupled cluster calculations~\cite{TB05,LKD10,HS10,DS13}. Most previous calculations differ from the general approach to the optimal natural orbitals adopted here in two respects: (1) they typically employ a truncation scheme for the natural orbitals based on a threshold for their occupations and (2) they usually determine approximate natural orbitals either from or for single-reference electron correlation methods. 
Taube and Bartlett~\cite{TB05,TB08} have truncated their natural orbitals according to a predefined percentage, and Roos and co-workers~\cite{ATG09} have employed approximate natural orbitals in complete-active-space second-order perturbation theory. In this work we generate molecule-optimized basis sets that use a truncation of the natural orbitals based on the rank of the orbitals (see also Ref.~\cite{AS03} for a truncation by basis-set size). For example, the approximate natural orbitals are obtained from a low-cost method in a large standard atom-optimized basis set \begin{equation} \label{eq:D1eig} ^{1} D v_{i} = n_{i} v_{i}, \end{equation} where ${}^{1} D$ denotes the 1-RDM, $n_{i}$ are the natural occupation numbers ordered from largest to smallest, and $v_{i}$ are the eigenvectors whose components denote the expansion coefficients of the natural orbitals in terms of the initial molecular-orbital basis set. Then the set of natural orbitals $\{ v_{i} \}$ is truncated to share the rank $M$ of the smaller standard atom-optimized basis set. In accordance with the Schmidt theorem, the largest $M$ of the $n_{i}$ are retained to generate the optimal set of $M$ orbitals. The compact molecule-optimized basis set can then be employed in a higher cost method for more accurate and more efficient description of the molecule's electron correlation. Truncation by basis-set rank has a different philosophy from truncation by threshold. In truncation by threshold the aim of the calculation is to reproduce the accuracy of the larger basis set within a given tolerance (threshold), but in truncation by basis-set rank the aim of the calculation is to attain some of the accuracy of the larger basis set at the significantly reduced cost of a smaller basis set. 
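Concretely, the rank-$M$ truncation of eq.~(\ref{eq:D1eig}) amounts to diagonalizing the 1-RDM and keeping the $M$ eigenvectors of largest occupation. A minimal self-contained sketch (Python; pure-stdlib Jacobi diagonalization, with toy 1-RDM entries that are illustrative only, not data from this work):

```python
import math

def jacobi_eig(A, sweeps=50, tol=1e-12):
    """Eigendecomposition of a real symmetric matrix by cyclic Jacobi rotations.

    Returns (eigenvalues, V), where column i of V is the i-th eigenvector.
    """
    n = len(A)
    A = [row[:] for row in A]                      # work on a copy
    V = [[float(i == j) for j in range(n)] for i in range(n)]
    for _ in range(sweeps):
        off = math.sqrt(sum(A[i][j] ** 2
                            for i in range(n) for j in range(n) if i != j))
        if off < tol:
            break
        for p in range(n - 1):
            for q in range(p + 1, n):
                if abs(A[p][q]) < tol:
                    continue
                th = 0.5 * math.atan2(2.0 * A[p][q], A[q][q] - A[p][p])
                c, s = math.cos(th), math.sin(th)
                for k in range(n):                 # rows: A <- G^T A
                    Apk, Aqk = A[p][k], A[q][k]
                    A[p][k] = c * Apk - s * Aqk
                    A[q][k] = s * Apk + c * Aqk
                for k in range(n):                 # columns: A <- A G
                    Akp, Akq = A[k][p], A[k][q]
                    A[k][p] = c * Akp - s * Akq
                    A[k][q] = s * Akp + c * Akq
                for k in range(n):                 # accumulate V <- V G
                    Vkp, Vkq = V[k][p], V[k][q]
                    V[k][p] = c * Vkp - s * Vkq
                    V[k][q] = s * Vkp + c * Vkq
    return [A[i][i] for i in range(n)], V

# toy 1-RDM in a rank-4 orbital basis (illustrative entries, trace = 4)
D1 = [[1.95, 0.05, 0.00, 0.02],
      [0.05, 1.60, 0.10, 0.00],
      [0.00, 0.10, 0.40, 0.03],
      [0.02, 0.00, 0.03, 0.05]]
occ, V = jacobi_eig(D1)

# keep the M natural orbitals of largest occupation (Schmidt theorem)
M = 2
order = sorted(range(len(occ)), key=lambda i: -occ[i])[:M]
kept = [(occ[i], [V[k][i] for k in range(len(occ))]) for i in order]
```

In practice the 1-RDM would come from the low-cost correlated calculation, and only the retained eigenvectors define the compact molecule-optimized basis.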
For larger molecules where the computational cost of the larger basis set is prohibitive, the strategy of truncation by basis-set rank has important advantages because {\em the basis-set rank can be chosen to remain within existing computational resources}. Furthermore, the generation of molecule-optimized basis sets which mimic traditional atom-optimized basis sets shares some of the advantages of atom-optimized basis sets, such as correlation consistency and systematic extrapolation to the complete-basis-set limit. Secondly, as discussed in section~\ref{sec:acse}, we aim to develop molecule-optimized molecular orbitals that can be employed in multi-reference calculations for the description of strongly correlated electrons. We generate approximate natural orbitals through a partial solution of the ACSE, starting with a 2-RDM from an MCSCF calculation. The resulting natural orbitals from the partial ACSE solution have a natural ordering with respect to correlation effects that inherently require multiple many-electron configurations in the reference wave function. Natural orbitals from single-reference theories typically do not reflect the multi-reference correlation in the wave functions of highly correlated atoms and molecules. Furthermore, as shown in the results, canonical orbitals from MCSCF, ordered by their canonical energies, do not provide a suitable ordering for accelerating convergence with respect to basis-set size. \subsection{ACSE natural orbitals} \label{sec:acse} Solution of the anti-Hermitian contracted Schr{\"o}dinger equation (ACSE)~\cite{AC06,*AC07B,*AC07C,*AC07D,ACMR,*GM09b,*RFM09,ACV07,*ACV09}, the anti-Hermitian part of the contracted Schr{\"o}dinger equation (CSE)~\cite{V93,*V93b,NY96,*NY97,M98a,*M98b}, for the 2-RDM and its energy can be tuned for single-reference or multi-reference electron correlation through the choice of the initial 2-RDM.
The 2-RDM can be chosen from an initial mean-field (Hartree-Fock) or a correlated calculation such as a multi-configuration self-consistent-field (MCSCF) calculation~\cite{AC06,*AC07B,*AC07C,*AC07D,ACMR, *GM09b,*RFM09}. The ACSE method is applicable to both ground and excited states as well as arbitrary spin states~\cite{ACMR,*GM09b,*RFM09}. It has been applied to studying multi-reference correlation in excited states and conical intersections in the photoexcitation of {\em gauche}-1,3-butadiene to form bicyclobutane~\cite{SM11}, the tautomerization of vinyl alcohol to acetaldehyde~\cite{SM12}, and the reaction of firefly luciferin for bioluminescence~\cite{G11}. In a finite basis set the contracted Schr{\"o}dinger equation (CSE) as well as its anti-Hermitian part (ACSE) can be expressed in second quantization as \begin{equation} \label{eq:CSE2} \langle \Psi_{n} | {\hat a}^{\dagger}_{i} {\hat a}^{\dagger}_{j} {\hat a}_{l} {\hat a}_{k} {\hat H} | \Psi_{n} \rangle = E_{n} \hspace{1mm} {}^{2} D^{i,j}_{k,l} \end{equation} and \begin{eqnarray} \label{eq:ACSE2} \frac{1}{2} \langle \Psi_{n} | [ {\hat a}^{\dagger}_{i} {\hat a}^{\dagger}_{j} {\hat a}_{l} {\hat a}_{k}, {\hat H} ] | \Psi_{n} \rangle & = & 0 , \end{eqnarray} where each index $i$, $j$, $k$, and $l$ denotes a one-electron spin orbital that is a product of a spatial orbital and a spin function $\sigma$ equal to either $\alpha$ (+1/2) or $\beta$ (-1/2) and the elements of the 2-RDM \begin{equation} \label{eq:2rdm} {}^{2} D^{i,j}_{k,l} = \langle \Psi_{n} | {\hat a}^{\dagger}_{i} {\hat a}^{\dagger}_{j} {\hat a}_{l} {\hat a}_{k} | \Psi_{n} \rangle \end{equation} follow from the expectation value of the 2-RDO with respect to $| \Psi_{n} \rangle$. In second quantization the creation operator $a^{\dagger}_{i}$ generates an electron in the $i^{\rm th}$ spin orbital while the annihilation operator $a_{k}$ destroys an electron in the $k^{\rm th}$ spin orbital.
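The definitions in eqs.~(\ref{eq:CSE2})--(\ref{eq:2rdm}) can be made concrete with a toy Fock-space calculation. A minimal sketch (Python; states are occupation bitmasks over four spin orbitals, and the two-electron state used here is an illustrative superposition, not a system treated in this work), which builds the 2-RDM and checks the trace identity $\sum_{i,j}{}^{2}D^{i,j}_{i,j}=N(N-1)$:

```python
def create(i, state, amp):
    """Apply a^dagger_i to an occupation bitmask; return (state, amp) or None."""
    if state >> i & 1:
        return None  # Pauli exclusion: orbital already occupied
    sign = (-1) ** bin(state & ((1 << i) - 1)).count("1")
    return state | (1 << i), sign * amp

def annihilate(i, state, amp):
    """Apply a_i to an occupation bitmask; return (state, amp) or None."""
    if not (state >> i & 1):
        return None
    sign = (-1) ** bin(state & ((1 << i) - 1)).count("1")
    return state & ~(1 << i), sign * amp

def apply_ops(ops, psi):
    """Apply an operator string (written left to right) to a dict state -> amp."""
    for func, i in reversed(ops):  # rightmost operator acts first
        new = {}
        for state, amp in psi.items():
            hit = func(i, state, amp)
            if hit is not None:
                s, a = hit
                new[s] = new.get(s, 0.0) + a
        psi = new
    return psi

def rdm2(psi, n_orb):
    """2-RDM elements D[i,j,k,l] = <psi| a+_i a+_j a_l a_k |psi>."""
    D = {}
    for i in range(n_orb):
        for j in range(n_orb):
            for k in range(n_orb):
                for l in range(n_orb):
                    phi = apply_ops([(create, i), (create, j),
                                     (annihilate, l), (annihilate, k)], psi)
                    D[i, j, k, l] = sum(a * psi.get(s, 0.0)
                                        for s, a in phi.items())
    return D

# toy two-electron state |psi> = (|01> + |23>)/sqrt(2) over 4 spin orbitals
psi = {0b0011: 2 ** -0.5, 0b1100: 2 ** -0.5}
D = rdm2(psi, 4)
trace = sum(D[i, j, i, j] for i in range(4) for j in range(4))  # = N(N-1) = 2
```

The same machinery contracts to the 1-RDM with the $1/(N-1)$ factor used in the text.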
For a quantum many-electron system the Hamiltonian is expressible as \begin{equation} \label{eq:H} {\hat H} = \sum_{p,s}{ {}^{1} K^{p}_{s} {\hat a}^{\dagger}_{p} {\hat a}_{s} } + \sum_{p,q,s,t}{{}^{2} V^{p,q}_{s,t} {\hat a}^{\dagger}_{p} {\hat a}^{\dagger}_{q} {\hat a}_{t} {\hat a}_{s} }, \end{equation} where the one- and two-electron reduced Hamiltonian matrices ${}^{1} K$ and ${}^{2} V$ contain the one- and two-electron integrals respectively. By rearranging the creation and annihilation operators according to the anti-commutation relations for fermions, we can write the CSE in terms of the elements of the 2-, 3-, and 4-RDMs and the ACSE in terms of the elements of the 2- and 3-RDMs. Explicit expressions for these contracted equations in terms of the spin-orbital elements of the reduced Hamiltonians and RDMs are given elsewhere~\cite{AC06,*AC07B,*AC07C,*AC07D,ACMR,*GM09b,*RFM09,ACV07,*ACV09}. The ACSE can be solved by propagating the following initial-value differential equation as a function of the parameter $\lambda$, which serves as an imaginary time: \begin{equation} \label{eq:dD} \frac{d \hspace{1mm} {}^{2} D^{i,j}_{k,l}}{d \lambda} = \langle \Psi(\lambda) | [ {\hat a}^{\dagger}_{i} {\hat a}^{\dagger}_{j} {\hat a}_{l} {\hat a}_{k}, {\hat S}(\lambda) ] | \Psi(\lambda) \rangle, \end{equation} where the two-body operator ${\hat S}$ \begin{equation} \label{eq:S} \hat{S}(\lambda) = \sum_{p,q,s,t} {}^{2} S^{p,q}_{s,t} (\lambda) {\hat a}^{\dagger}_{p} {\hat a}^{\dagger}_{q} {\hat a}_{t} {\hat a}_{s} \end{equation} depends upon a two-particle reduced matrix $^{2} S$ equal to the residual of the ACSE \begin{equation} \label{eq:S2} {}^{2} S^{p,q}_{s,t}(\lambda) = \langle \Psi(\lambda) | [ {\hat a}^{\dagger}_{p} {\hat a}^{\dagger}_{q} {\hat a}_{t} {\hat a}_{s}, {\hat H} ] | \Psi(\lambda) \rangle, \end{equation} where ${\hat H}$ is the Hamiltonian operator.
The dependence of the above equations on the three-electron reduced density matrix (3-RDM) is removed by reconstructing the 3-RDM as a cumulant functional of the lower 1- and 2-RDMs~\cite{M98b,*M98c,*M99a,*KM99}. The 2-RDM is propagated until either the energy or the residual of the ACSE ceases to decrease. Because the ACSE can treat multi-reference correlation, it can serve as a general platform for creating an approximate set of natural orbitals. The 1-RDM is obtainable from the 2-RDM by contraction \begin{equation} ^{1} D^{i}_{k} = \frac{1}{N-1} \sum_{j}{ {}^{2} D^{i,j}_{k,j} }, \end{equation} and the natural orbitals and their occupations are readily obtained from Eq.~(\ref{eq:D1eig}). A family of approximate orbitals can be systematically generated from the ACSE by evolving the 2-RDM over a short length in the parameter $\lambda$. By choosing the distance in $\lambda$ to be a small fraction of the total distance $\lambda^{*}$ required for the solution of the ACSE, we can obtain an approximate set of natural orbitals at low computational cost. The evolution over the short distance in $\lambda$ can be performed in a large standard atom-centered basis set. From the 1-RDM obtained, a truncated set of natural orbitals sharing the rank $M$ of the smaller standard atom-centered basis set can be employed for solving the ACSE to convergence. Hence, through the choice of the evolution distance in $\lambda$ we are able to generate both low-cost and higher-cost methods for electron correlation directly within a common ACSE framework. For convenience, we diagonalize only the virtual-virtual block of the 1-RDM to obtain natural orbitals in terms of the virtual MCSCF orbitals; in this manner, we can truncate these approximate natural orbitals without changing the original MCSCF 1-RDM at $\lambda=0$.
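The contraction and diagonalization steps above can be sketched in a few lines of NumPy (an illustrative sketch, not the implementation used in this work; the index convention ${}^{2}D[i,j,k,l] = {}^{2}D^{i,j}_{k,l}$ is an assumption):

```python
import numpy as np

def contract_to_1rdm(D2, N):
    """Contract the 2-RDM to the 1-RDM for an N-electron system:
    ^1D^i_k = 1/(N-1) * sum_j ^2D^{i,j}_{k,j}."""
    return np.einsum('ijkj->ik', D2) / (N - 1)

def natural_orbitals(D1):
    """Natural orbitals and occupation numbers: eigenvectors and eigenvalues
    of the (Hermitian) 1-RDM, sorted by decreasing occupation."""
    occ, C = np.linalg.eigh(D1)
    order = np.argsort(occ)[::-1]
    return occ[order], C[:, order]
```

For a single Slater determinant the 2-RDM is the antisymmetrized product of 1-RDMs, and the contraction recovers the idempotent 1-RDM exactly, with occupations of 1 for the $N$ occupied natural orbitals and 0 for the rest.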
These approximate natural orbitals are similar in spirit to those created from the Hartree-Fock virtual-virtual block of the 1-RDM in single-reference methods, which have been called frozen natural orbitals~\cite{EK66, BD70,SGT89,DS13}. Importantly, because the approximate natural orbitals obtained from the ACSE are molecule optimized, they incorporate important features of the molecule's electron density and chemical bonding that are not present in the standard atom-centered basis sets of the same size (or rank). \subsection{Molecule-optimized Hamiltonians} \label{sec:ham} The generation of molecule-optimized basis sets through the use of natural orbitals and Schmidt's theorem can also be viewed as a unitary transformation of the Hamiltonian in the original larger basis set ${\hat H}_{0}$ to produce a more compact Hamiltonian ${\hat H}_{1}$ whose non-negligible elements can be captured in a smaller basis set \begin{equation} {\hat H}_{1}= e^{-{\hat S}_{1}} {\hat H_{0}} e^{{\hat S}_{1}}, \end{equation} where ${\hat S}_{1}$ is a one-body anti-Hermitian operator \begin{equation} \label{eq:S1} \hat{S}_{1} = \sum_{p,s} {}^{1} S^{p}_{s} {\hat a}^{\dagger}_{p} {\hat a}_{s} . \end{equation} Similar anti-Hermitian operators arise in the unitary transformations underlying contracted Schr{\"o}dinger theory~\cite{RDM07,V93,*V93b,NY96, *NY97, M98a,*M98b} including the solution of the ACSE~\cite{AC06, *AC07B,*AC07C,*AC07D, ACMR,*GM09b,*RFM09,ACV07,*ACV09,AC08,FRM09, GM10,SM10,*SM11, *G11,*SM11,*SM12}. In the ACSE we employ unitary transformations from not only one-body anti-Hermitian operators but also such transformations from two-body anti-Hermitian operators, which are critical to capturing important many-body correlation effects. 
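At the orbital level, the one-body transformation amounts to rotating the one-electron integral matrix by the unitary $U = e^{s}$, where $s$ is the coefficient matrix of ${}^{1}S^{p}_{s}$; because $s$ is anti-Hermitian, $U$ is unitary and the spectrum of the one-electron part is preserved. A hedged matrix-level sketch (the series-based `expm` helper and the restriction to the one-body part are illustrative assumptions):

```python
import numpy as np

def expm(A, nterms=40):
    """Matrix exponential by truncated Taylor series; adequate for the
    small, moderately scaled matrices used in this sketch."""
    out = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, nterms):
        term = term @ A / k
        out = out + term
    return out

def one_body_transform(K, S):
    """Transform the one-electron integrals by U = exp(S) with S anti-Hermitian:
    the matrix analogue of H1 = exp(-S1) H0 exp(S1) restricted to one-body terms."""
    U = expm(S)
    return U.conj().T @ K @ U
```

Since the transformation is unitary, the eigenvalues of the one-electron matrix are unchanged; only its representation becomes more (or less) compact.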
The molecule-optimized Hamiltonian from a one-body unitary transformation can be generalized to a molecule-optimized Hamiltonian from a two-body unitary transformation \begin{equation} {\hat H}_{2}= e^{-{\hat S}_{2}} {\hat H_{0}} e^{{\hat S}_{2}}, \end{equation} where ${\hat S}_{2}$ is a two-body anti-Hermitian operator. As in the previous case, the two-body transformation produces a more compact Hamiltonian ${\hat H}_{2}$ whose non-negligible elements can be captured in a smaller basis set. Because the two-body unitary transformations contain the one-body unitary transformations, the set of potential Hamiltonian operators $\{{\hat H}_{2}\}$ is larger than the set of potential Hamiltonian operators $\{{\hat H}_{1}\}$. Consequently, the two-body transformations generalize the set of molecule-optimized Hamiltonians obtainable from approximate natural orbitals. Unlike the one-body transformations, the two-body transformations generate three-body Hamiltonians whose expectation values depend upon the three-electron RDM (3-RDM). As in contracted Schr{\"o}dinger theory, however, these three-body Hamiltonians can be readily approximated as two-body Hamiltonians through cumulant reconstruction of the three-electron reduced density operators~\cite{M98b, *M98c,*M99a,*KM99}. While these extensions of the natural-orbital transformations are not pursued in the present work, the ACSE theory~\cite{AC06, *AC07B,*AC07C,*AC07D,ACMR,*GM09b} in the previous section provides a useful framework for (i) approximating suitable ${\hat S}_{2}$ operators and (ii) recasting the Hamiltonians ${\hat H}_{2}$ as two-body operators through cumulant reconstruction. Recently, related two-body transformations of the Hamiltonian with cumulant reconstruction have been employed in the context of an explicit $r_{12}$ theory~\cite{YS12}.
\section{Applications} After a brief discussion of the computational methodology, we apply the molecule-optimized basis sets described in the previous section to generating potential energy curves for hydrogen fluoride and diatomic nitrogen. \subsection{Computational methodology} The initial MCSCF 2-RDM is computed with the wave function from an MCSCF calculation in the GAMESS package for electronic structure~\cite{GAMESS}. The ACSE calculations are performed with the code developed by one of the authors in Refs.~\cite{AC06, *AC07B,*AC07C,*AC07D,ACMR,*GM09b,*RFM09}. Approximate sets of natural orbitals are generated from ACSE calculations in the correlation-consistent polarized valence triple-zeta (TZ) basis sets~\cite{D89}. Unless stated otherwise, the 2-RDM is evolved from $\lambda = 0.0$ to $\lambda = 0.01$. This evolution is a small fraction of the total evolution in $\lambda$ from $0.0$ to $\lambda^{*}$ required for satisfying the ACSE method's convergence criteria. As shown in previous work~\cite{AC06}, convergence typically occurs at a value $\lambda^{*}$ between 1 and 10. The resulting natural orbitals are then truncated based on orbital occupations to produce a molecule-optimized basis set whose rank $M$ equals that of the standard correlation-consistent polarized valence double-zeta (DZ) basis set~\cite{D89}. The ACSE is then evolved in this molecule-optimized basis set until convergence. \subsection{Hydrogen fluoride} \begin{figure}[htp!] \includegraphics[scale=0.4]{fig_1_hf_pes_r1.eps} \caption{The potential energy curve in the molecule-optimized basis set from the ACSE with $\lambda$ equal to 0.01 (TZ/DZ[0.01]) is compared to those from solving the ACSE in the standard correlation-consistent basis sets, DZ and TZ, the nonstandard DZ basis set derived from the energy-ordered orbitals of MCSCF in the TZ basis set (TZ/DZ[MCSCF]) as well as the molecule-optimized basis set with $\lambda$ evolved over its full distance $\lambda^{*}$ to convergence (TZ/DZ[full]).
Relative to TZ, the nonparallelity error (NPE) of 0.0154~a.u. from TZ/DZ[0.01] is significantly better than the error of 0.0252~a.u. from DZ or the error of 0.02411~a.u. from TZ/DZ[MCSCF].} \label{f:hf} \end{figure} \begin{table*}[ht!] \caption{Relative to TZ, the table reports the energy errors of the ACSE in the standard atom-optimized basis set DZ as well as in a series of molecule-optimized basis sets for $\lambda$ equal to 0.01, 0.05, 0.10, and $\lambda^{*}$, where $\lambda^{*}$ represents the full $\lambda$ trajectory to convergence. The results show that the error relative to TZ continues to decrease as the approximate set of natural orbitals is improved through longer $\lambda$ evolutions. The molecule-optimized basis set from $\lambda$ equal to 0.01 (TZ/DZ[0.01]) offers an improvement in accuracy at a computational cost that is not significantly different from that of the standard DZ calculation.} \label{t:t} \begin{ruledtabular} \begin{tabular}{ccccccc} Bond & Energy (a.u.) & \multicolumn{5}{c}{Energy Errors (a.u.)} \\ \cline{2-2} \cline{3-7} Distance (\AA) & TZ & DZ & TZ/DZ[0.01] & TZ/DZ[0.05] & TZ/DZ[0.10] & TZ/DZ[full] \\ \hline 0.8 & -100.328456 & 0.119026 & 0.084854 & 0.077534 & 0.065513 & 0.049063 \\ 1.0 & -100.341785 & 0.116656 & 0.083603 & 0.077410 & 0.063675 & 0.047857 \\ 1.2 & -100.298315 & 0.114687 & 0.086913 & 0.075308 & 0.060738 & 0.046282 \\ 1.4 & -100.248797 & 0.112944 & 0.085235 & 0.079852 & 0.059025 & 0.044956 \\ 1.8 & -100.175789 & 0.107404 & 0.078613 & 0.072371 & 0.061938 & 0.041952 \\ 2.2 & -100.144837 & 0.103344 & 0.075038 & 0.069548 & 0.058482 & 0.036774 \\ 2.8 & -100.132043 & 0.101452 & 0.072117 & 0.066604 & 0.055794 & 0.037054 \\ 3.4 & -100.130323 & 0.101302 & 0.071648 & 0.066151 & 0.055370 & 0.036750 \\ \end{tabular} \end{ruledtabular} \end{table*} The hydrogen fluoride molecule with its highly polarized chemical bond has contributions from multiple configurations in the dissociative region of its potential energy curve.
Here we generate the potential energy curve in the molecule-optimized basis set from the ACSE with $\lambda$ equal to 0.01. In Fig.~1 this potential energy curve is compared to those from solving the ACSE in the standard correlation-consistent basis sets, DZ and TZ, the nonstandard DZ basis set derived from the energy-ordered orbitals of MCSCF in the TZ basis set (TZ/DZ[MCSCF]), as well as the molecule-optimized basis set with $\lambda$ evolved over its full distance $\lambda^{*}$ to convergence (TZ/DZ[full]). Importantly, even though the molecule-optimized basis set with $\lambda=0.01$ has a rank equal to that of the polarized basis set DZ, it improves the energies from DZ by 20\% relative to the TZ basis set. Furthermore, it has a nonparallelity error (NPE) of 0.0154~a.u. relative to TZ which is significantly better than the error of 0.0252~a.u. from DZ or the error of 0.02411~a.u. from TZ/DZ[MCSCF]. The NPE is defined as the difference between the maximum and minimum errors in the potential energy curve. The NPE of the molecule-optimized basis set with $\lambda=\lambda^{*}$ at 0.0136~a.u. is not much different from that of the basis set with $\lambda=0.01$. Table~1 reports the energy errors of the ACSE relative to TZ in the standard atom-optimized basis set DZ as well as in a series of molecule-optimized basis sets for $\lambda$ equal to 0.01, 0.05, 0.10, and $\lambda^{*}$, where $\lambda^{*}$ represents the full $\lambda$ trajectory to convergence. The results show that the error relative to TZ continues to decrease as the approximate set of natural orbitals is improved through longer $\lambda$ evolutions. Qualitatively, the space spanned by the $M$ most occupied natural orbitals improves the energy with increasing $\lambda$ because it better represents the part of the one-electron Hilbert space that describes the electron density of the molecule. This increasing accuracy, however, comes at the price of increasing computational cost.
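The NPE defined above is straightforward to evaluate from a list of pointwise energy errors along the curve; a small helper (hypothetical, but matching the definition in the text):

```python
def nonparallelity_error(errors):
    """Nonparallelity error: the difference between the maximum and minimum
    errors along the potential energy curve, where
    errors[i] = E_approx(R_i) - E_reference(R_i)."""
    return max(errors) - min(errors)
```

Applied, for example, to the eight DZ errors listed in Table~1, this returns the NPE of that subset of geometries (the 0.0252~a.u. value quoted in the text refers to the full curve).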
The molecule-optimized basis set from $\lambda$ equal to 0.01 (TZ/DZ[0.01]) offers an improvement in accuracy at a computational cost that is {\em not significantly different from that of the standard DZ calculation}. In this case, the TZ/DZ[0.01] calculation is {\em more than an order of magnitude faster} than the TZ calculation. \subsection{Diatomic nitrogen} \begin{figure}[htp!] \includegraphics[scale=0.4]{fig_2a_n2_pes_v2.eps} \includegraphics[scale=0.4]{fig_2b_n2_err_v2.eps} \caption{For the dissociation of diatomic nitrogen with the ACSE the figure compares the shape of the potential energy curve in the molecule-optimized basis set with $\lambda$ equal to 0.01 (TZ/DZ[0.01]) to the shapes of potential curves from the standard DZ and TZ basis sets. The curves DZ and TZ/DZ[0.01] in part (a) are shifted by $-0.115426$ and $-0.094708$~a.u. respectively to agree with the energy from TZ at $1.1$~\AA. Relative to TZ, the TZ/DZ[0.01] curve better approximates both the curvature about equilibrium and the dissociation energy than DZ; it significantly improves the nonparallelity error of 0.115~a.u. of DZ to 0.024~a.u. Part (b) shows the energy errors from DZ and TZ/DZ[0.01] relative to TZ.} \label{f:n2} \end{figure} Breaking the triple bond of diatomic nitrogen provides a challenging problem for single-reference methods and a benchmark problem for multi-reference methods. Here we generate the potential energy curve for the nitrogen dissociation from the ACSE in the molecule-optimized basis set with $\lambda$ equal to 0.01. The shape of this potential energy curve is compared to the shapes of those from the standard DZ and TZ basis sets in Fig.~2a. The curves DZ and TZ/DZ[0.01] in Fig.~2a are shifted by $-0.115426$ and $-0.094708$~a.u. respectively to agree with the energy from TZ at $1.1$~\AA. Even though the molecule-optimized basis set has the same computational cost as the DZ basis set, it significantly improves the nonparallelity error relative to TZ from 0.115~a.u. 
(DZ) to 0.024~a.u. The curve from the molecule-optimized basis set (TZ/DZ[0.01]) better approximates both the curvature about equilibrium and the dissociation energy relative to TZ. Figure~2b shows the energy errors from DZ and the molecule-optimized basis set TZ/DZ[0.01] relative to TZ. In terms of absolute energies, TZ/DZ[0.01] improves the energies from DZ at bond lengths in the vicinity of the equilibrium geometry; however, for bond lengths greater than 1.6~\AA\ the molecule-optimized basis set yields energies that are higher than those from the standard correlation-consistent DZ basis set. This result can be understood by recalling that the standard basis sets are optimized to minimize atomic energies in the configuration interaction singles-doubles method. Upon dissociation the nitrogen molecule breaks up into two nitrogen atoms, and hence, the standard basis set is highly optimized in this region of the potential energy surface. Examining the errors in the cc-pVDZ basis set relative to the cc-pVTZ basis set, however, reveals that the errors at short bond lengths are significantly larger than the errors at longer bond lengths. This discrepancy in accuracy contributes to a large nonparallelity error. The molecule-optimized basis set significantly decreases this error by improving the energies in the equilibrium region while sacrificing the accuracy of energies in the dissociation region. The basis set that is optimized for the molecule provides a more balanced description of the molecule's electron correlation throughout the potential energy surface. \section{Discussion and conclusions} Molecule-optimized basis sets have been presented for accelerating the convergence of electron correlation calculations. As in previous work, the definition of the molecule-optimized basis set depends upon the generation of an approximate set of natural orbitals.
Significant computational acceleration can be achieved because the natural orbitals provide the optimal one-electron basis set for the convergence of the many-electron wave function (or two-electron reduced density matrix)~\cite{L55,C63}. In contrast to most previous work~\cite{SGT89,AS04,TB05,KO06,TB08,LKD10,HS10,NHL09, SKS11,RK11,GBM11,DS13,KSX13,AB87,PNK06,NPU05,A10,PAH11,KPH12}, the molecule-optimized basis sets (1) are defined by truncation of the natural orbitals to a fixed rank that equals the rank of a standard correlation-consistent polarized basis set and (2) are optimized by a low-cost multi-reference calculation that can capture important contributions from strong electron correlation in their definition. With regard to (2), the present work does have important connections to the early refinement of the natural orbitals through iterative configuration interaction~\cite{D72,TRPB77} and the recent truncation of natural orbitals in both configuration interaction calculations~\cite{AS03,BR09} and second-order complete-active-space perturbation theory~\cite{ATG09}. While the approach is quite general, here we study the generation of molecule-optimized basis sets from the solution of the ACSE~\cite{AC06,*AC07B,*AC07C,*AC07D,ACMR,*GM09b,*RFM09}. By evolving the ACSE in a large standard atom-centered basis set for a short distance in the imaginary time-like parameter $\lambda$, we can generate an approximate 1-RDM whose eigenfunctions provide approximate natural orbitals. Selection of a smaller set of natural orbitals based on the occupation numbers generates a molecule-optimized basis set. We can choose the rank of this new basis set equal to that of a smaller standard atom-optimized basis set which can then be employed to solve the ACSE until convergence. In this fashion we can generate systematic sets of molecule-optimized basis sets that significantly accelerate the solution of multi-reference methods like the ACSE. 
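Re-solving the ACSE in the truncated natural-orbital basis requires transforming the one- and two-electron integrals by the rectangular coefficient matrix of the $M$ retained orbitals. A minimal sketch of this step into the molecule-optimized basis (names and array layout are assumptions):

```python
import numpy as np

def transform_to_truncated_basis(K1, V2, C):
    """Transform one-electron integrals K1 (r x r) and two-electron integrals
    V2 (r x r x r x r) into the truncated natural-orbital basis given by the
    (r x M) coefficient matrix C of the M most occupied natural orbitals."""
    K1_no = C.T @ K1 @ C
    V2_no = np.einsum('pi,qj,pqst,sk,tl->ijkl', C, C, V2, C, C, optimize=True)
    return K1_no, V2_no
```

When $C$ consists of the first $M$ columns of the identity, the transformation simply extracts the corresponding sub-blocks, which provides a convenient consistency check.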
These basis sets incorporate important features of chemical bonding and correlation of the molecule that are not present in the standard atom-optimized basis sets. Importantly, these molecule-optimized orbitals can be employed to accelerate any multi-reference quantum chemistry method. Illustrative applications of the ACSE molecule-optimized basis sets to the potential energy curves of hydrogen fluoride and diatomic nitrogen show significant improvements in the nonparallelity errors. For diatomic nitrogen a molecule-optimized double-zeta-like basis set yields a nonparallelity error of 0.024~a.u., relative to the TZ basis set, which significantly improves upon the 0.115~a.u. error in the DZ basis set. For hydrogen fluoride the nonparallelity errors from the ACSE's approximate natural orbitals ordered by occupation numbers are much better than those from the MCSCF's canonical orbitals ordered by orbital energies. Significantly, the correlation of the active space with the inactive space, as performed with the approximate solution of the ACSE, is critical to generating a suitable set of natural orbitals. While the improvement in absolute energies is not as substantial as the improvement in the nonparallelity errors, the present results provide a foundation for future work that may further improve these results. In section~\ref{sec:ham} we show that basis-set truncation by approximate natural orbitals can be viewed as a one-electron unitary transformation of the Hamiltonian operator and suggest an extension of approximate natural-orbital truncations through {\em two-electron} unitary transformations of the Hamiltonian operator, similar to those employed in the ACSE method. 
In future work we plan to study larger molecules in larger basis sets, a variety of approaches for computing approximate sets of natural orbitals, extrapolations of molecule-optimized basis sets to the complete-basis-set limit, and extensions of natural-orbital truncations through two-electron unitary transformations of the Hamiltonian operator. The acceleration of the ACSE method for multi-reference correlation can be applied to extending recent applications of the ACSE to the study of ground- and excited-state chemical reactions~\cite{FRM09, GM10} including conical intersections~\cite{SM10,G11,SM11,SM12} in vinyl alcohol, {\em gauche}-1,3-butadiene, and firefly luciferin. Often improvements in nonparallelity errors rather than absolute errors are more important for the accurate prediction of reaction and excitation energies and other energy differences studied in the above examples. The present work can also be applied to correlation methods that use natural orbitals as their basic variables such as natural-orbital functional theory~\cite{P07,PML10,P13,SDL08,SDS13, VGG13}, geminal functional theory~\cite{GFT00, GFT01}, the precursors of the projected quasi-variational theory~\cite{SJH11}, and the natural-orbital solution of the contracted Schr{\"o}dinger equation~\cite{M02bb}. The exploitation of molecule-optimized orbitals and Hamiltonians in electronic structure calculations has the potential for decreasing computational cost while maintaining computational accuracy. \begin{acknowledgments} DAM gratefully acknowledges the National Science Foundation under Grant No. CHE-1152425, the Army Research Office under Grant No. W911NF-11-5041-0085, and Microsoft Corporation for their generous financial support. G.G. is supported by an award from the Research Corporation for Science Advancement and a grant to Gonzaga University from the Howard Hughes Medical Institute through the Undergraduate Science Education Program. \end{acknowledgments}
\section{Introduction} Lagrangian particle methods approximate the solution of flow equations using a cloud of points which move with the flow. Examples are vortex methods \cite{seibold::Chorin1973}, smoothed particle hydrodynamics (SPH) \cite{seibold::Lucy1977,seibold::GingoldMonaghan1977}, or generalized SPH methods \cite{seibold::Dilts1999}. The latter are typically based on generalized meshfree finite difference schemes \cite{seibold::LiszkaOrkisz1980}. An example is the finite pointset method (FPM) \cite{seibold::KuhnertTiwari2002}. Moving the computational nodes with the flow velocity $\vec{v}$ allows the discretization of the governing equations in their more natural Lagrangian frame of reference. The material derivative \mbox{$\frac{D}{Dt} = \pd{}{t}+\vec{v}\cdot\nabla$} becomes a simple time derivative. For a conservation law, the natural velocity is the characteristic velocity. In a frame of reference which is moving with this velocity, the equation states that the function value remains constant. Of course, this is only valid where the solution is smooth. In this case, characteristic particle methods are very simple solution methods for conservation laws. In spite of the obvious advantages of particle methods, almost all numerical methods for conservation laws operate on a fixed Eulerian grid, even though significant work has to be invested to solve even a simple advection problem preserving sharp features and without creating oscillations. Leaving aspects of implementation complexity aside, two main reasons favor fixed grid methods: First, a fixed grid allows an easy generalization to higher space dimensions using dimensional splitting. Second, in particle methods one has to deal with the interaction of characteristics. While the former point remains admittedly present, the latter aspect is addressed in this contribution. 
Most methods which use the characteristic nature of the conservation law circumvent the problem of crossing characteristics by remeshing. Before any particles can interact, the numerical solution is interpolated onto ``nicely'' distributed particles, for instance onto an equidistant grid -- in which case the approach is essentially a fixed grid method. The CIR-method \cite{seibold::CourantIsaacsonRees1952} is an example. Such approaches incur multiple drawbacks: First, the shortest interaction time determines the global time step. Second, the error due to the global interpolation may yield numerical dissipation and dispersion. Finally, such schemes are not conservative when shocks are present. In practice, finite volume approaches, such as Godunov methods with appropriate limiters \cite{seibold::VanLeer1974}, or ENO \cite{seibold::HartenEngquistOsherChakravarthy1987}/WENO \cite{seibold::LiuOsherChan1994} schemes are used to compute weak entropy solutions that exhibit neither spurious oscillations nor excessive numerical dissipation. With moving particles, two fundamental problems arise: On the one hand, neighboring particles may depart from each other, resulting in poorly resolved regions. On the other hand, a particle may (if left unchecked) overtake a neighbor, which results in a ``breaking wave'' solution. The former problem can be remedied by inserting particles. The latter has to be resolved by merging particles. When characteristic particles interact (i.e.~one overtakes the other) one is dealing with a shock, thus particles must be merged. In this contribution, we present a local and conservative particle management (inserting and merging particles) that yields no numerical dissipation (where solutions are smooth) and correct shock speeds (where they are not). The particle management is based on exact conservation properties between neighboring particles, which are derived in Sect.~\ref{seibold::sec:conservation_laws}.
In Sect.~\ref{seibold::sec:method_description}, we outline our numerical method. The heart of our method, the particle management, is derived in Sect.~\ref{seibold::sec:interp_particle_management}. There, we also show that the method is TVD. In Sect.~\ref{seibold::sec:entropy}, we prove that the numerical solutions satisfy the Kru\v zkov entropy condition, thus showing that the solutions we find are entropy solutions for any convex entropy function. In Sect.~\ref{seibold::sec:numerical_results}, we apply the method to examples and compare it to traditional finite volume methods using CLAWPACK. In Sect.~\ref{seibold::sec:inflection_points} we present how non-convex flux functions can be treated. Finally, in Sect.~\ref{seibold::sec:outlook}, we outline possible extensions and conclusions. These include applications of the 1D solver itself as well as possible extensions beyond the 1D scalar case. \section{Scalar Conservation Laws} \label{seibold::sec:conservation_laws} Consider a one-dimensional scalar conservation law \begin{equation} u_t+\prn{f(u)}_x = 0, \quad u(x,0) = u_0(x) \label{seibold::eq:conslaw} \end{equation} with $f'$ continuous. As long as the solution is smooth, it can be obtained by the method of characteristics \cite{seibold::Evans1998}. The function $u(x(t), t)$ is constant along the characteristic curve \begin{equation} x(t) = x(0)+f'(u(x(0),0))\,t \;. \label{seibold::eq:characteristic} \end{equation} For nonlinear functions $f$ the characteristic curves can ``collide'', resulting in a shock, whose speed is given by the Rankine-Hugoniot condition \cite{seibold::Evans1998}. Discontinuities are shocks only if the characteristic curves run into them. Other discontinuities become rarefaction waves, i.e.~continuous functions which attain every value between the left and the right states. If the flux function $f$ is convex or concave between the left and right state of a discontinuity, then the solution is either a shock or a rarefaction. 
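For a smooth solution, the characteristic equation \eqref{seibold::eq:characteristic} is itself a numerical method: each particle carries its function value unchanged while moving with its own speed $f'(u)$. A minimal sketch, using Burgers' flux $f(u)=u^{2}/2$ as a hypothetical test case (not tied to any particular implementation):

```python
import numpy as np

def advect_particles(x, u, fprime, dt):
    """Move each characteristic particle with its own speed f'(u_i); the
    carried value u_i is unchanged (valid while the solution stays smooth,
    i.e. before any characteristics cross)."""
    return x + fprime(u) * dt, u
```

For Burgers' equation with the increasing initial data $u_0(x)=x$, characteristics never cross and the exact solution is $u(x,t)=x/(1+t)$; the advected particles reproduce it exactly.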
If $f''$ switches sign between the two states, then a combination of a shock and a rarefaction occurs. These physical solutions are defined by a weak formulation of \eqref{seibold::eq:conslaw} accompanied by an entropy condition. \subsection{Conservation Properties} \label{seibold::subsec:cons_properties} Conservation laws conserve the total area under the solution \begin{equation} \frac{d}{dt}\int\limits_{-\infty}^\infty u(x,t)\ud{x} = 0 \;. \label{seibold::eq:cons_law_area} \end{equation} The change of area between two \emph{moving} points $b_1(t)$ and $b_2(t)$ is given by \begin{align*} \frac{d}{dt}\int_{b_1(t)}^{b_2(t)} u(x,t)\ud{x} = b_2'(t)\,u(b_2(t),t)-b_1'(t)\,u(b_1(t),t)+\int_{b_1(t)}^{b_2(t)} u_t(x,t)\ud{x} \\ = \prn{b_2'(t)\,u(b_2(t),t)-b_1'(t)\,u(b_1(t),t)}-\prn{f(u(b_2(t),t))-f(u(b_1(t),t))} \;. \end{align*} If $x_1(t)$ and $x_2(t)$ are \emph{characteristic} points, that is, points following the characteristics of a smooth solution as in equation \eqref{seibold::eq:characteristic}, we have that $x_1'(t)=f'(u_1)$ and $x_2'(t)=f'(u_2)$. Therefore, the change of area between $x_1$ and $x_2$ is \begin{equation} \prn{f'(u_2)u_2-f'(u_1)u_1}-\prn{f(u_2)-f(u_1)} = \brk{f'(u)u-f(u)}_{u_1}^{u_2} \;, \label{seibold::eq:area_deriv} \end{equation} where $\brk{g(x)}_a^b=g(b)-g(a)$. Equation \eqref{seibold::eq:area_deriv} implies that the change of area between two characteristic points does \emph{not} depend on the positions of the points, only on the left state $u_1$ and right state $u_2$ and the flux function. Since the two states do not change as the points move, the area between the two points changes linearly, as does the distance between them: \begin{equation} \frac{d}{dt}(x_2(t)-x_1(t)) = x_2'(t)-x_1'(t) = f'(u_2)-f'(u_1) = \brk{f'(u)}_{u_1}^{u_2} \;. \label{seibold::eq:distance_deriv} \end{equation} If the two points move at different speeds, then there is a time $t_0$ (which may be larger or smaller than $t$) at which they have the same position.
Thus at time $t=t_0$ the distance between them and the area between them both equal zero. From \eqref{seibold::eq:area_deriv} and \eqref{seibold::eq:distance_deriv} we have that \begin{align*} \int_{x_1(t)}^{x_2(t)} u(x,t)\ud{x} &= (t-t_0) \cdot \brk{f'(u)u-f(u)}_{u_1}^{u_2} \;, \\ x_2(t)-x_1(t) &= (t-t_0) \cdot \brk{f'(u)}_{u_1}^{u_2} \;. \end{align*} In short, the area between two Lagrangian points can be written as \begin{equation} \int_{x_1(t)}^{x_2(t)} u(x,t)\ud{x} = (x_2(t)-x_1(t))\,a(u_1,u_2) \;, \end{equation} where $a(u_1, u_2)$ is the nonlinear average function \begin{equation} a(u_1,u_2) = \frac{\brk{f'(u)u-f(u)}_{u_1}^{u_2}}{\brk{f'(u)}_{u_1}^{u_2}} = \frac{\int_{u_1}^{u_2}f''(u)\,u \ud{u}}{\int_{u_1}^{u_2}f''(u) \ud{u}} \;. \label{seibold::eq:nonlinear_average} \end{equation} The integral form shows that $a$ is indeed an average of $u$, weighted by $f''$. This last observation needs one additional assumption: that the points $x_1$ and $x_2$ remain characteristic points between $t$ and $t_0$. That is, that a shock does not develop between the two points before $t_0$. Our numerical method relies heavily on the nonlinear average $a(\cdot,\cdot)$. \begin{lemma} \label{seibold::thm:average_properties} Let $f$ be strictly convex or concave in $[u_L,u_U]$, that is $f''<0$ or $f''>0$ in $(u_L,u_U)$. Then for all $u_1, u_2\in [u_L,u_U]$, the average \eqref{seibold::eq:nonlinear_average} is\dots \begin{enumerate} \item the same for $f$ and $-f$. Thus we assume $f''>0$ WLOG; \item symmetric, $a(u_1,u_2) = a(u_2,u_1)$. Thus we assume $u_1\le u_2$ WLOG; \item an average, i.e.~$a(u_1,u_2)\in(u_1,u_2)$, for $u_1\neq u_2$; \item strictly increasing in both $u_1$ and $u_2$; and \item continuous at $u_1=u_2$, with $a(u_1,u_1)=u_1$. \end{enumerate} \end{lemma} \noindent \begin{proof} We only include here the proof of 4. We show that $a(u_1,u_2)$ is strictly increasing in the second argument. Let $u_1<u_2<u_3$, $u_i\in [u_L, u_U]$.
Then \begin{align*} a(u_1, u_3) &=\frac{\int_{u_1}^{u_2}f''(u)u\ud{u}+\int_{u_2}^{u_3}f''(u)u\ud{u}} {\int_{u_1}^{u_3}f''(u)\ud{u}} \\ &>\frac{a(u_1,u_2)\int_{u_1}^{u_2}f''(u)\ud{u}+a(u_1,u_2)\int_{u_2}^{u_3}f''(u)\ud{u}} {\int_{u_1}^{u_3}f''(u)\ud{u}} = a(u_1, u_2) \;. \end{align*} Similar arguments show the result for the first argument.\smartqed \end{proof} \section{Description of the Particle Method} \label{seibold::sec:method_description} The first step is to approximate the initial function $u_0$ by a finite number of points $x_1,\dots,x_m$ with function values $u_1,\dots,u_m$. A straightforward strategy is to place $x_1,\dots,x_m$ equidistant on the interval of interest and assign $u_i = u_0(x_i)$. More efficient adaptive sampling strategies can be used, since our method does not impose any requirements on the point distribution. For instance, one can choose $x_i$ and $u_i$ to minimize the $L^1$ error, using the specific interpolation introduced in Sect.~\ref{seibold::sec:interp_particle_management}. This strategy is the topic of future work. The points are ordered so that $x_1<\dots<x_m$. The evolution of the solution is found by moving each point $x_i$ with speed $f'(u_i)$. This is possible as long as there are no ``collisions'' between points. Two neighboring points $x_i(t)$ and $x_{i+1}(t)$ collide at time \mbox{$t+\varDelta t_i$}, where \begin{equation} \varDelta t_i = -\frac{x_{i+1}-x_i}{f'(u_{i+1})-f'(u_i)} \;. \label{seibold::eq:intersection_time} \end{equation} A positive $\varDelta t_i$ indicates that the two points will eventually collide. Thus, $t+\varDelta t_{\text s}$ is the time of the next particle collision\footnote{If the set $\{i|\varDelta t_i\ge 0\}$ is empty, then $\varDelta t_{\text s}=\infty$.}, where \begin{equation*} \varDelta t_{\text s} = \min_i\{\varDelta t_i | \varDelta t_i\ge 0\} \;. 
\end{equation*} For any time increment $\varDelta t\le \varDelta t_{\text s}$ the points can be moved directly to their new positions $x_i+f'(u_i)\varDelta t$. Thus, we can step forward an amount $\varDelta t_{\text s}$, and move all points accordingly. Then, at least one particle will share its position with another. To proceed further, we merge each such pair of particles. If the collision time $\varDelta t_i$ is negative, the points depart from each other. Although at each of the points the correct function value is preserved, after some time their distance may become undesirably large, since the error introduced by a merge grows with the size of the gaps to the neighboring particles. To avoid this, we insert new points into large gaps between points \emph{before} merging particles. In Sect.~\ref{seibold::subsec:particle_management} we derive positions and values of the new particles that ensure that the method is conservative, TVD, and entropy diminishing. \section{Interpolation and Particle Management} \label{seibold::sec:interp_particle_management} The movement of the particles is given by a fundamental property of the conservation law \eqref{seibold::eq:conslaw}: its characteristic equation \eqref{seibold::eq:characteristic}. We derive particle management to satisfy another fundamental property: the conservation of area \eqref{seibold::eq:cons_law_area}. Using the conservation principles derived in Sect.~\ref{seibold::sec:conservation_laws}, the function value of an inserted or merged particle is chosen, such that area is conserved exactly. A simple condition on the particles guarantees that the entropy does not increase. In addition, we define an interpolating function between two neighboring particles, so that the change of area satisfies relation \eqref{seibold::eq:area_deriv}. Furthermore, this interpolation is an analytical solution to the conservation law.
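To make the time stepping concrete, the following minimal Python sketch (illustrative, not the authors' reference implementation; all function names are ours) advances a set of particles to the next collision and merges the colliding pair. It is specialized to Burgers' flux $f(u)=u^2/2$, for which $f'(u)=u$ and, since $f''$ is constant, the nonlinear average \eqref{seibold::eq:nonlinear_average} reduces to the arithmetic mean, so the merged value follows explicitly from the area condition with $x_2=x_3=x_{23}$.

```python
# Minimal sketch (illustrative, not the authors' reference code) of one step of
# the particle method, specialized to Burgers' flux f(u) = u^2/2.  Then
# f'(u) = u, and since f'' is constant the nonlinear average a(u1, u2) is the
# arithmetic mean, so the merged value u_23 follows explicitly from the
# area condition with x_2 = x_3 = x_23.

def fprime(u):
    """Characteristic velocity f'(u) for Burgers' flux."""
    return u

def collision_times(x, u):
    """Pairwise collision times Delta t_i = -(x_{i+1}-x_i)/(f'(u_{i+1})-f'(u_i))."""
    dts = []
    for i in range(len(x) - 1):
        dv = fprime(u[i + 1]) - fprime(u[i])
        dts.append(-(x[i + 1] - x[i]) / dv if dv != 0.0 else float("inf"))
    return dts

def step_to_next_collision(x, u):
    """Move all particles along characteristics until the first collision,
    then merge the colliding pair (assumed interior) conserving area."""
    dts = collision_times(x, u)
    finite = [dt for dt in dts if dt >= 0.0 and dt != float("inf")]
    if not finite:
        return x, u, float("inf")       # particles only separate: no collision
    dt = min(finite)                    # Delta t_s, the next collision time
    x = [xi + fprime(ui) * dt for xi, ui in zip(x, u)]
    i = dts.index(dt)                   # colliding pair (i, i+1): x[i] == x[i+1]
    # Area condition with x_2 = x_3 = x_23 and a(.,.) the arithmetic mean:
    #   u_23 = ((x_2 - x_1) u_2 + (x_4 - x_2) u_3) / (x_4 - x_1),
    # a convex combination of u_2 and u_3 (consistent with the TVD property).
    x1, x2, x4 = x[i - 1], x[i], x[i + 2]
    u23 = ((x2 - x1) * u[i] + (x4 - x2) * u[i + 1]) / (x4 - x1)
    return x[:i] + x[i + 1:], u[:i] + [u23] + u[i + 2:], dt
```

For example, the data $x=(0,1,2,3)$, $u=(1,1,-1,-1)$ collide at $\varDelta t_{\text s}=0.5$ and merge into the particle $(1.5,0)$, representing the stationary shock between the states $1$ and $-1$.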
\subsection{Conservative Particle Management} \label{seibold::subsec:particle_management} Consider four neighboring particles located at \mbox{$x_1<x_2\le x_3<x_4$} with associated function values $u_1$, $u_2$, $u_3$, $u_4$. Assume that the flux $f$ is strictly convex or concave on the range of function values $[\min_i(u_i),\max_i(u_i)]$. If $u_2\neq u_3$, the particles' velocities must differ $f'(u_2)\neq f'(u_3)$, which gives rise to two possible cases that require particle management: \begin{itemize} \item \textbf{Inserting:} The two particles deviate, i.e.~$f'(u_2)<f'(u_3)$. If the distance $x_3-x_2$ is larger than a predefined maximum distance $d_{\text max}$, we insert a new particle $(x_{23},u_{23})$ with $x_2<x_{23}<x_3$ and $u_{23}$ chosen so that the area between $x_2$ and $x_3$ is preserved by the insertion: \begin{equation} (x_{23}-x_2)\,a(u_2,u_{23})+(x_3-x_{23})\,a(u_{23},u_3) = (x_3-x_2)\,a(u_2,u_3) \;. \label{seibold::eq:area_cond_insert} \end{equation} This condition defines a function, connecting $(x_2,u_2)$ with $(x_3,u_3)$, on which the new particle has to lie. This function is the interpolation defined in Sect.~\ref{seibold::subsec:interpolation} and illustrated in Fig.~\ref{seibold::fig:def_interp}. \item \textbf{Merging:} The two particles collide, i.e.~$f'(u_2)>f'(u_3)$. If the distance $x_3-x_2$ is smaller than a preset value $d_{\text min}$ ($d_{\text min}=0$ is possible), we replace both with a new particle $(x_{23},u_{23})$. The position of the new particle $x_{23}$ is chosen with \mbox{$x_2<x_{23}<x_3$} and $u_{23}$ is chosen so that the total area between $x_1$ and $x_4$ is unchanged: \begin{align} (x_{23}-x_1)\,a(u_1,u_{23})+(x_4-x_{23})\,a(u_{23},u_4)& \nonumber \\ = (x_2\!-\!x_1)\,a(u_1,u_2)+(x_3\!-\!x_2)\,a(u_2,u_3)&+(x_4\!-\!x_3)\,a(u_3,u_4) \;. \label{seibold::eq:area_cond_merge} \end{align} Any particle $(x_{23},u_{23})$ with \mbox{$x_2<x_{23}<x_3$} that satisfies \eqref{seibold::eq:area_cond_merge} would be a valid choice. 
We choose $x_{23}=\frac{x_2+x_3}{2}$, and then obtain $u_{23}$ such that \eqref{seibold::eq:area_cond_merge} is satisfied. Figure~\ref{seibold::fig:merging} illustrates the merging step. \end{itemize} Observe that inserting and merging are similar in nature. Conditions \eqref{seibold::eq:area_cond_insert} and \eqref{seibold::eq:area_cond_merge} for $u_{23}$ are nonlinear (unless $f$ is quadratic, see Remark~\ref{seibold::remark:quadratic_flux}). For most cases $u_{23}=\frac{u_2+u_3}{2}$ is a good initial guess, and the correct value can be obtained by a few Newton iteration steps. The next few claims attest that we can find a unique value $u_{23}$ that satisfies \eqref{seibold::eq:area_cond_insert} and \eqref{seibold::eq:area_cond_merge}. \begin{lemma} The function value $u_{23}$ for the particle at $x_{23}$ is unique. \end{lemma} \begin{proof} We show the case for merging. The argument for insertion is similar. From Lemma~\ref{seibold::thm:average_properties} we have that both $a(u_1,\cdot)$ and $a(\cdot, u_4)$ are strictly increasing. Thus, the LHS of \eqref{seibold::eq:area_cond_merge} is strictly increasing, and cannot be the same for different values of $u_{23}$.\smartqed \end{proof} \begin{lemma} \label{seibold::lem:existence_u23} If $x_2 = x_3 = x_{23}$, there exists $u_{23}\in [u_2,u_3]$ which satisfies \eqref{seibold::eq:area_cond_merge}. \end{lemma} \begin{proof} WLOG we assume that $u_2\le u_3$. First, we define \begin{align*} A\phantom{(u)}&=(x_2-x_1)\,a(u_1,u_2)+(x_4-x_2)\,a(u_3,u_4),\quad \text{and} \\ B(u) &= (x_2-x_1)\,a(u_1,u)+(x_4-x_2)\,a(u,u_4)\;. \end{align*} Equation \eqref{seibold::eq:area_cond_merge} is now simply $B(u_{23})=A$. The monotonicity of $a$ implies that \begin{equation} B(u_2)\le A \le B(u_3) \;. 
\end{equation} Since $a$ is continuous, so is $B$, and the existence of $u_{23}$ follows from the intermediate value theorem.\smartqed \end{proof} \begin{corollary} If particles are merged only according to Lemma~\ref{seibold::lem:existence_u23}, then the total variation of the solution is either the same as before the merge, or smaller. \end{corollary} Merging points only when $x_2=x_3$ can be too restrictive. Fortunately, the following claim allows for a little more freedom. \begin{theorem} \label{seibold::thm:merging_TVD} Consider four consecutive particles $(x_i,u_i) \ \forall i=1,\ldots,4$. Merging particles 2 and 3 so that $x_{23}=\frac{x_2+x_3}{2}$ yields $u_{23}\in[u_2,u_3]$ if \begin{equation} \frac{x_3-x_2}{\abs{u_3-u_2}} \le\frac{1}{16}\prn{\frac{\min\abs{f''}}{\max\abs{f''}}}^6 \frac{\min\prn{x_4-x_2,x_3-x_1}}{\abs{\max(u_3,u_2)-\min(u_4, u_1)}} \;. \end{equation} \end{theorem} Here the $\min$ and $\max$ of $f''$ are taken over the maximum range of $u_1,\dots,u_4$. This condition is naturally satisfied if $x_2=x_3$. \begin{proof}[outline] The full proof will be given in a future paper. The idea is to merge in two steps: First, we find a value $\tilde u$ such that setting $u_2=u_3=\tilde u$ (while leaving $x_2$ and $x_3$ unchanged) results in the same area. Then, we merge the two points to $u_{23}$. In the first step we bound $\tilde u$ \emph{away} from $u_2$ and $u_3$ (but inside $[u_2,u_3]$), and in the second step we bound $\abs{u_{23}-\tilde u}$ from above.\smartqed \end{proof} \begin{theorem} \label{seibold::thm:arbitrary_times} The particle method can run to arbitrary times. \end{theorem} \begin{proof} Let $u_L=\min_i u_i$, $u_U=\max_i u_i$, and $C=\max_{[u_L,u_U]}|f''(u)|\cdot(u_U-u_L)$. For any two particles, one has $|f'(u_{i+1})-f'(u_i)|\le C$. Define $\varDelta x_i=x_{i+1}-x_i$.
After each particle management, the next time increment (as defined in Sect.~\ref{seibold::sec:method_description}) is at least $\varDelta t_{\text s}\ge \frac{\min_i{\varDelta x_i}}{C}$. If we do not insert particles, then in each merge one particle is removed. Hence, a time evolution beyond any given time is possible, since the increments $\varDelta t_{\text s}$ will increase eventually. When a particle is inserted (whenever two points are at a distance more than $d_{\text max}$), the created distances are at least $\frac{d_{\text max}}{2}$, preserving a lower bound on the following time increment.\smartqed \end{proof} \begin{figure} \centering \begin{minipage}[t]{.47\textwidth} \centering \includegraphics[width=0.99\textwidth]{seiboldfig_conservation} \caption{Merging two particles} \label{seibold::fig:merging} \end{minipage} \hfill \begin{minipage}[t]{.495\textwidth} \centering \includegraphics[width=0.99\textwidth]{seiboldfig_interpolation} \caption{Definition of the interpolation} \label{seibold::fig:def_interp} \end{minipage} \end{figure} \subsection{Conservative Interpolation} \label{seibold::subsec:interpolation} The particle management does not require an interpolation between points. As it stands, it complements the characteristic movement to yield a full particle method for the conservation law \eqref{seibold::eq:conslaw} that can run for arbitrarily long times. Yet, for plotting the solution and interpreting approximation properties, it is desirable to define an interpolation that is compatible with the conservation principles of the underlying partial differential equation. We define such an interpolation between each two neighboring points $(x_1,u_1)$ and $(x_2,u_2)$. In the case $u_1=u_2$, we define the interpolation to be a constant. In the following, we describe the case $u_1\neq u_2$. Assume that $f$ is strictly convex or concave in $[u_1,u_2]$. Therefore $f'(u_1)\neq f'(u_2)$. 
Hence, as derived in Sect.~\ref{seibold::subsec:cons_properties}, the solution either came from a discontinuity (i.e.~it is a rarefaction) or it will become a shock. The time $\varDelta t_1$ until this discontinuity happens is given by \eqref{seibold::eq:intersection_time}. At time $t+\varDelta t_1$ the points have the same position $x_1=x_2=x_{\text{sh}}$, as shown in Fig.~\ref{seibold::fig:def_interp}. At this time the interpolation must be a straight line connecting the two points, representing a discontinuity at $x_{\text{sh}}$. We require any point of the interpolating function $(x,u)$ to move with its characteristic velocity $f'(u)$ in the time between $t$ and $t+\varDelta t_1$. This defines the interpolation uniquely as \begin{equation} x(u) = x_1-\varDelta t_1\prn{f'(u)-f'(u_1)} = x_1+\frac{f'(u)-f'(u_1)}{f'(u_2)-f'(u_1)}(x_2-x_1) \;. \label{seibold::eq:interpolation_function} \end{equation} Defining $x$ as a function of $u$ is in fact an advantage, since at a discontinuity characteristics for all intermediate values $u$ are defined. Thus, rarefaction fans arise naturally if $f'(u_1)<f'(u_2)$. Since $f''$ does not change sign between $u_1$ and $u_2$, the function $x(u)$ is strictly monotone, and thus the inverse function $u(x)$ exists. However, it is only required at a single point for particle management. For plotting purposes we can always plot $x(u)$ instead. \begin{lemma} The interpolation \eqref{seibold::eq:interpolation_function} is a solution of the conservation law \eqref{seibold::eq:conslaw}. \end{lemma} \begin{proof} Using that \mbox{$\dot x_i(t)=f'(u_i)$} for~$i=1,2$ one obtains \begin{align*} \frac{\partial x}{\partial t}(u,t)&= \dot x_1 +\frac{f'(u)-f'(u_1)}{f'(u_2)-f'(u_1)}(\dot x_2-\dot x_1)\\ &=f'(u_1)+\frac{f'(u)-f'(u_1)}{f'(u_2)-f'(u_1)}(f'(u_2)-f'(u_1)) =f'(u) \;.
\end{align*} Thus every point on the interpolation $u(x,t)$ satisfies the characteristic equation \eqref{seibold::eq:characteristic}.\smartqed \end{proof} \begin{corollary}[exact solution property] \label{seibold::cor:exact_solution} Consider characteristic particles with $x_1(t)<x_2(t)<\dots<x_n(t)$ for $t\in [t_1,t_2]$. At any time consider the function defined by the interpolation \eqref{seibold::eq:interpolation_function}. This function is a classical (i.e.~continuous) solution to the conservation law \eqref{seibold::eq:conslaw}. In particular, it satisfies the conservation properties given in Sect.~\ref{seibold::subsec:cons_properties}. \end{corollary} \begin{theorem}[TVD] With the assumptions of Theorem~\ref{seibold::thm:merging_TVD}, the particle method is total variation diminishing. \end{theorem} \begin{proof} Due to Corollary~\ref{seibold::cor:exact_solution}, the characteristic movement yields an exact solution, thus the total variation is constant. Particle insertion simply refines the interpolation, thus preserves the total variation. Due to Theorem~\ref{seibold::thm:merging_TVD}, merging of particles yields a new particle with a function value $u_{23}$ between the values of the removed particles. Thus, the total variation is the same as before the merge or smaller.\smartqed \end{proof} \begin{remark} \label{seibold::remark:quadratic_flux} The method is particularly efficient for quadratic flux functions. In this case the interpolation \eqref{seibold::eq:interpolation_function} between two points is a straight line, since $f'$ is linear. Furthermore, the average \eqref{seibold::eq:nonlinear_average} is the arithmetic mean $a(u_1,u_2) = \frac{u_1+u_2}{2}$, since $f''$ is constant. Consequently, the function values for particle insertion and merging can be computed explicitly. \end{remark} \begin{remark} The method has some similarity to \emph{front tracking} by Holden et al.~\cite{seibold::HoldenHoldenHeghKrohn1988}, and some fundamental differences. 
In front tracking, one approximates the flux function by a piecewise linear function, and the solution by a piecewise constant function. Shocks are moved according to the Rankine-Hugoniot condition. In comparison, our method uses the wave solutions. Hence, in front tracking everything is a shock; in the particle method, everything is a wave. \end{remark} \section{Entropy} \label{seibold::sec:entropy} We have argued in Sect.~\ref{seibold::subsec:interpolation} that due to the constructed interpolation the particle method naturally distinguishes shocks from rarefaction fans. In this section, we show that the method in fact satisfies the entropy condition \begin{equation} \eta(u)_t+q(u)_x\le 0 \label{seibold::eq:entropy_condition} \end{equation} if a technical assumption on the resolution of shocks is satisfied. We consider the Kru\v zkov entropy pair $\eta_k(u) = \abs{u-k}$, $q_k(u) = {\rm sign}(u-k)(f(u)-f(k))$. As argued by Holden et al.~\cite{seibold::HoldenRisebro2002}, if \eqref{seibold::eq:entropy_condition} is satisfied for $\eta_k$, then it is satisfied for any convex entropy function. Relation \eqref{seibold::eq:entropy_condition} implies that the total entropy $\int\eta_k(u(x))\ud{x}$ does not increase in time for all values of $k$. Using the interpolation \eqref{seibold::eq:interpolation_function} we show that the numerical solution obtained by the particle method satisfies this condition. \begin{lemma}[entropy for merging] \label{seibold::lem:entropy_merging} Consider four particles located at $x_1<x_2=x_3<x_4$, with the middle two to be merged. We consider the case \mbox{$f''>0$}, i.e.~\mbox{$u_2>u_3$} WLOG.\footnote{For the case $f''<0$, all following inequality signs must be reversed.} If the resulting value $u_{23}$ satisfies \mbox{$u_1\ge u_{23}\ge u_4$}, then the Kru\v zkov entropy does not increase due to the merge. \end{lemma} \begin{proof} We consider the segment $[x_1,x_4]$.
Let $u(x)$ and $\tilde u(x)$ denote the interpolating function before resp.~after the merge. The area under the function is preserved. We present the proof for $k\le u_{23}$. For $k\ge u_{23}$ the proof is similar. The interpolating function $u$ is monotone in the value of its endpoints, thus $u(x)\le\tilde u(x)$ for $x\in [x_2,x_4]$. Since $\abs{x}=x-2x\varTheta(-x)$, where $\varTheta(x)$ is the Heaviside step function, we can write \begin{align*} \int_{x_1}^{x_4}\! \abs{u\!-\!k}\ud{x} &= \int_{x_1}^{x_4}\!\prn{u\!-\!k}\ud{x} -2\int_{x_1}^{x_4}\!(u\!-\!k)\varTheta(k\!-\!u)\ud{x} \\ &= \int_{x_1}^{x_4}\!\prn{\tilde u\!-\!k}\ud{x} -2\int_{x_2}^{x_4}(u\!-\!k)\varTheta(k\!-\!u)\ud{x} \\ &\ge\int_{x_1}^{x_4}\!\prn{\tilde u\!-\!k}\ud{x} -2\int_{x_2}^{x_4}(\tilde u\!-\!k)\varTheta(k\!-\!u)\ud{x} \ge\int_{x_1}^{x_4}\!\abs{\tilde u\!-\!k}\ud{x} \;. \end{align*} Thus, the entropy does not increase due to the merge.\smartqed \end{proof} The assumption of Lemma~\ref{seibold::lem:entropy_merging} implies that shocks must be reasonably well resolved before the points defining them are merged. It is satisfied if the points to the left and right of a shock are not too far away. In the method, it can be ensured by an \emph{entropy fix}: A merge is rejected \emph{a posteriori} if the resolution condition is not satisfied. Then, points are inserted near the shock, and the merge is re-attempted. It remains to show in future work that with this procedure Theorem~\ref{seibold::thm:arbitrary_times} still holds. \begin{theorem} \label{seibold::thm:entropy} The presented particle method yields entropy solutions. \end{theorem} \begin{proof} During the characteristic movement of the points the entropy is constant, since due to Corollary~\ref{seibold::cor:exact_solution} the interpolation is a classical solution to the conservation law. Particle insertion does not change the interpolation, thus it does not change the entropy.
Merging does not increase the entropy if the conditions of Lemma~\ref{seibold::lem:entropy_merging} are satisfied.\smartqed \end{proof} \section{Numerical Results} \label{seibold::sec:numerical_results} The particle method is particularly well suited for initial conditions that are composed of similarity solutions. By construction, the movement of the particles yields the exact solution as long as the solution is smooth. General initial conditions can be approximated by the interpolation \eqref{seibold::eq:interpolation_function}. Good strategies of sampling initial conditions shall be addressed in future work. Figure~\ref{seibold::fig:u4} shows a smooth initial function $u_0(x) = \exp\prn{-x^2}\cos\prn{\pi x}$ and its time evolution under the flux function $f(u)=\frac{1}{4}u^4$. The curved shape of the interpolation is due to the nonlinearity in $f'$. At time $t=0.25$ the solution (obtained by CLAWPACK using 80000 points) is still smooth, and thus represented exactly on the particles. At time $t=8$ shocks and rarefactions have occurred and interacted. Although the numerical solution uses only a few points, it represents the true solution well. \begin{figure} \centering \begin{minipage}[t]{.32\textwidth} \centering \includegraphics[width=0.99\textwidth]{seiboldfig_comp_u4_t0_x20} \end{minipage} \hfill \begin{minipage}[t]{.32\textwidth} \centering \includegraphics[width=0.99\textwidth]{seiboldfig_comp_u4_t0_25_x20} \end{minipage} \hfill \begin{minipage}[t]{.32\textwidth} \centering \includegraphics[width=0.99\textwidth]{seiboldfig_comp_u4_t8_x20} \end{minipage} \caption{The particle method for $f(u)=\tfrac{1}{4}u^4$ before and after a shock} \label{seibold::fig:u4} \end{figure} The accuracy of the particle method is measured numerically. We consider the flux function and initial conditions as used in Fig.~\ref{seibold::fig:u4}. For a sequence of resolutions $h$, the initial data are sampled, and the particle method is applied ($d_{\text max}=1.9h$). 
Figure~\ref{seibold::fig:error} shows the $L^1$-error to the correct solution (obtained by a computation with much higher resolution, verified with CLAWPACK). While the solution is smooth ($t=0.25$), the method is second order accurate, as is sampling the initial data. After a shock has occurred ($t=0.35$), the approximate solution (dots) becomes only first order accurate, since the shock has just been treated by particle management, and thus an error of the order height$\times$width of the shock is made. A postprocessing step (squares) can recover the second order accuracy: At merged particles, discontinuities are placed so that the total area is preserved. \begin{figure} \centering \begin{minipage}[t]{.48\textwidth} \centering \includegraphics[width=.99\textwidth]{seiboldfig_error_smooth} \end{minipage} \hfill \begin{minipage}[t]{.48\textwidth} \centering \includegraphics[width=.99\textwidth]{seiboldfig_error_shock} \end{minipage} \caption{Error to the correct solution before and after a shock} \label{seibold::fig:error} \end{figure} \section{Non-Convex Flux Functions} \label{seibold::sec:inflection_points} So far we have only considered flux functions with no inflection points (i.e.~$f''$ has always the same sign) on the region of interest. In this section, we generalize to flux functions for which $f''$ has a finite number of zero crossings \mbox{$u^*_1<\dots<u^*_k$}. Between two such points \mbox{$u\in [u^*_i,u^*_{i+1}]$} the flux function is either convex or concave. We impose the following requirement for any set of particles: Between any two neighboring particles for which $f''$ has opposite sign, there must be an \emph{inflection particle} $(x,u_i^*)$. Thus, between two neighboring particles, $f$ never has an inflection point, and most results from the previous sections apply. The interpolation between any two particles is uniquely defined by \eqref{seibold::eq:interpolation_function}. It has infinite slope at the inflection points, but this is harmless.
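The requirement above can already be enforced when sampling the initial data: whenever two neighboring samples straddle a zero $u^*$ of $f''$, an inflection particle $(x,u^*)$ is inserted where the initial datum crosses $u^*$. A minimal Python sketch (the helper name and bisection tolerance are ours, not from the paper), assuming $u_0$ is continuous with a single crossing of $u^*$ between the two samples:

```python
# Illustrative helper (not from the paper): insert an inflection particle
# (x, u_star) between any two neighboring particles whose values straddle a
# zero u_star of f''.  The position is located by bisection on the initial
# datum u0, assumed continuous with a single crossing between the samples.

def insert_inflection_particles(x, u, u0, u_star, tol=1e-12):
    xs, us = [x[0]], [u[0]]
    for i in range(len(x) - 1):
        if (u[i] - u_star) * (u[i + 1] - u_star) < 0.0:
            a, b = x[i], x[i + 1]            # u0 - u_star changes sign on (a, b)
            while b - a > tol:
                m = 0.5 * (a + b)
                if (u0(a) - u_star) * (u0(m) - u_star) <= 0.0:
                    b = m                    # crossing lies in (a, m]
                else:
                    a = m                    # crossing lies in (m, b)
            xs.append(0.5 * (a + b))
            us.append(u_star)
        xs.append(x[i + 1])
        us.append(u[i + 1])
    return xs, us
```

For instance, for $f(u)=u^3/3$ (so $f''(u)=2u$ and $u^*=0$) and the datum $u_0(x)=1-x$ sampled at $x=0$ and $x=2$, the helper inserts the inflection particle at $x=1$.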
The characteristic movement of particles is the same as for flux functions without inflection points. The only complication is merging of particles when an inflection particle is involved: The standard approach, as presented in Sect.~\ref{seibold::subsec:particle_management}, removes two colliding points and replaces them with a point of a different function value. If an inflection particle is involved in a collision, we must merge points in a different way so that an inflection particle remains. \begin{figure} \centering \begin{minipage}[t]{.32\textwidth} \centering \includegraphics[width=0.99\textwidth]{seiboldfig_inflection1} \end{minipage} \hfill \begin{minipage}[t]{.32\textwidth} \centering \includegraphics[width=0.99\textwidth]{seiboldfig_inflection2} \end{minipage} \hfill \begin{minipage}[t]{.32\textwidth} \centering \includegraphics[width=0.99\textwidth]{seiboldfig_inflection3} \end{minipage} \caption{Particle management around an inflection particle ($f''(u_3)=0$)} \label{seibold::fig:particle_management_inflection} \end{figure} We present one such special merge for dealing with a single inflection point (we do not consider here the interaction of two inflection points). Also, we only consider a collision where the positions are exactly the same. Since the inflection particle must remain (although its position may change), we consider five neighboring particles and not four as before. Let $(x_i,u_i), i=1,\ldots,5$ be these particles, so that $x_2=x_3$, $f''(u_3)=0$, and (WLOG) $f'''>0$, i.e.~the inflection particle is the slowest. The other cases are simple symmetries of this situation. We present three successive steps to finding the final configuration of the particles. Each next step is attempted if the previous one failed. \begin{enumerate} \item Remove particle 2 and increase $x_3$ to satisfy the area condition. This fails if $x_3$ needs to be increased beyond $x_4$.
\item Remove particle 2, set \mbox{$x_3=x_4$} and increase both to satisfy the area condition. This fails if $x_3, x_4$ need to be increased beyond $x_5$. \item Remove particle 4, set \mbox{$x_3=x_5$} and find $u_2$ to satisfy the area condition. This cannot fail if the previous two have failed. \end{enumerate} Formally, the resulting configuration could require another, immediate, merge (since \mbox{$x_3=x_4$} or \mbox{$x_3=x_5$}). However, we need not merge these points as they move away from each other. The five point particle management guarantees that in each merging step one particle is removed, thus Theorem~\ref{seibold::thm:arbitrary_times} holds. \begin{figure} \centering \begin{minipage}[t]{.32\textwidth} \centering \includegraphics[width=1.02\textwidth]{seiboldfig_comp_bucklev_t10_x20} \end{minipage} \hfill \begin{minipage}[t]{.32\textwidth} \centering \includegraphics[width=1.02\textwidth]{seiboldfig_comp_bucklev_t30_x20} \end{minipage} \hfill \begin{minipage}[t]{.32\textwidth} \centering \includegraphics[width=1.02\textwidth]{seiboldfig_comp_bucklev_t80_x20} \end{minipage} \caption{Numerical results for the Buckley-Leverett equation} \label{seibold::fig:results_inflection} \end{figure} As numerical evidence of the performance, we apply our method to the Buckley-Leverett equation (see LeVeque \cite{seibold::LeVeque2002}), defined by the flux function $f(u) = \frac{u^2}{u^2+\frac{1}{2}(1-u)^2}$. It is a simple model for two-phase fluid flow in a porous medium. We consider piecewise constant initial data with a large downward jump crossing the inflection point, and a small upward jump. The large jump develops a shock at the bottom and a rarefaction at the top, the small jump is a pure rarefaction. Around $t=0.2$, the two similarity solutions interact, thus lowering the separation point between shock and rarefaction. Figure~\ref{seibold::fig:results_inflection} shows the reference solution (solid line, by CLAWPACK using 80000 points). 
The solution obtained by our particle method (dots) is compared to a second order CLAWPACK solution (circles) of about the same resolution. While the finite volume scheme loses the downward jump very quickly, the particle method captures the behavior of the solution almost precisely. Inaccuracies are visible only directly near the shock, and these are due to the crude resolution. The solution away from the shock is nearly unaffected by the error at the shock. Note that although we impose a special treatment only at the inflection point, the switching point between shock and rarefaction is identified correctly. \section{Conclusions and Outlook} \label{seibold::sec:outlook} We have presented a particle method for 1D scalar conservation laws, which is based on the characteristic equations. The associated interpolation yields an analytical solution wherever the solution is smooth. Particle management resolves the interaction of characteristics locally while conserving area. Thus, shocks are resolved without creating any numerical dissipation away from shocks. The method is TVD and entropy decreasing, and shows second-order accuracy. It deals well with non-convex flux functions, as the results for the Buckley-Leverett equation show. The particle method serves as a good alternative to fixed grid methods whenever 1D scalar conservation laws have to be solved with few degrees of freedom, but exact conservation and sharp shocks are desired. An application, which we plan to investigate in future work, is nonlinear flow in networks (e.g.~traffic flow in road networks). For large networks, only a small number of unknowns can be devoted to the numerical solution on each edge. We regard the current work as a first step towards a more general particle method.
Future work will focus on three main directions: \begin{itemize} \item\textbf{Source terms:} Source terms in an equation $u_t+\prn{f(u)}_x = g(x,u)$ could be handled using a fractional step method: In each time step, we would first move the particles according to $u_t+\prn{f(u)}_x = 0$ (including particle management), then change their values according to an integral formulation of $u_t = g(x,u)$. In the latter step, the constructed interpolation can be used. \item\textbf{Systems of conservation laws:} The particle method is based on similarity solutions of the conservation law. For simple systems, such as the shallow water equations in 1D, the analytical solutions to Riemann problems are known. Two complications arise in the generalization of the method: \begin{itemize} \item To connect two general states in a hyperbolic system, intermediate states have to be included. \item For a general system it is not clear at which velocity to move the particles. \end{itemize} \item\textbf{Higher space dimensions:} Scalar conservation laws in higher space dimensions can be reduced to 1D problems to be solved by fractional steps. In principle, this dimensional splitting can be used with the particle method. However, remeshing would be required between the different spatial directions, thus the benefits of the meshfree approach would be lost. For the generalization to a fundamentally meshfree approach in higher space dimensions, the following problem has to be overcome: In 1D one is never truly meshfree, since the points have a natural ordering. The method uses this in the interpolation and to detect shocks. In 2D/3D shocks can occur without particles colliding, as they can move past each other. Other meshfree methods, such as FPM applied to the Euler equations, circumvent this issue by using the pressure to regulate shocks. \end{itemize} \section*{Acknowledgments} We would like to thank R.~LeVeque for helpful comments and suggestions. \bibliographystyle{amsplain}
\section{Introduction} The ongoing COVID-19 pandemic has witnessed a multi-fold increase in face mask use for protection against viral infection, with many countries now mandating face masks in public areas \cite{CDCFMask}. This sudden demand surge has created a scarcity of face masks, necessitating homemade cloth mask fabrication \cite{CDCFMask}. However, neither homemade cloth masks nor surgical masks are designed to protect against the SARS-CoV-2 virion; only respirators conforming to the N95 or a higher standard are rated for such protection. This capability of N95 filtering facepiece respirators (FRs) is owed primarily to an electrocharged filtration layer, deemed the most efficient among the various particle filtration methods employed in face masks, along with other notable design features. Taken together, these features permit filtration of $\ge 95\%$ of particles of size $\ge 0.3 \mu$m under test standards designated by the US National Institute for Occupational Safety and Health (NIOSH) \cite{NIOSH-N95} for N95 (US: NIOSH-42C-FR84) and its counterparts, including FFP2 (Europe: EN149-2001), KN95 (China: GB2626-2006), P2 (Australia and New Zealand: AS/NZ1716:2012), Korea 1st class (South Korea: KMOEL-2017-64), and DS2 (Japan: JMHLW-Notification214 from year 2018) facepiece respirators \cite{3M}. Unfortunately, the electrocharged polymer filtration layers used in these FRs are manufactured through industrially sophisticated processes that are hard to duplicate using commonly available materials or methods. This article details a process to fabricate electrocharged polymer-based fabric using commonly available materials and easily replicable methods. The fabrication setup is based on the Cotton Candy (CC) principle, also known as the rotary jet spinning or centrifugal spinning method \cite{RSpinReview}.
The primary control parameters for the CC method \cite{RJet2} are translated to practical design rules for either the construction of a fabric manufacturing setup from common parts, or the minimal modification of commercial cotton candy machines. Practical solutions to tune the control parameters for fabrication of electrocharged polymer fabrics are also specified. Finally, characterization of electrocharged fabrics made using the CC method, for structural as well as filtration properties using two mask designs, is presented. \section{Filtration Principles} \subsection{Basic Mechanisms} Face mask filtration mechanisms must optimize between two competing requirements. On one hand, the mask's filtration layers must possess an average pore diameter small enough to trap particles and prevent them from being inhaled; on the other hand, too small a pore diameter prevents the user from breathing comfortably \cite{Revoir1995}. For this reason, face mask filtration layers cannot be fabricated below a certain pore diameter. Furthermore, mechanisms for filtration of large particles differ from those for small particles, thereby requiring a range of filtration strategies to be adopted. These strategies commonly involve three physical mechanisms, {\it viz.} inertial impaction, diffusion, and electrostatic attraction \cite{NAS2006}. Large particles with diameters $\ge 1 \mu$m possess enough inertia to deviate from aerodynamic streamlines, collide with filtration fibers, and get caught in the filter mesh. Small particles of typical diameters $\le 0.1 \mu$m, which follow streamlines while undergoing diffusion, execute a complex, meandering trajectory through the tortuous porous matrix and get trapped in the filtration layers.
These two mechanisms are easily achieved in any cloth-based or commercial (surgical or PM2.5) masks, but they do not incorporate the third mechanism of electrostatic filtration \cite{Konda, Davies}, which traps particles of intermediate sizes in the range $0.1 - 1 \mu$m and is considered the most effective of the three mechanisms. When an electrostatically charged layer is embedded among standard mask filtration layers, oppositely charged particles (both small and large in diameter) are attracted by the long-range electrostatic Coulomb force towards the electrocharged layer. Once caught, the particles are held in place through van der Waals forces. \subsection{Electrocharged Filtration} It is known from common experience, especially in cold climates with low ambient humidity, that when two dissimilar fabrics rub against each other they gain static electricity, a phenomenon known as triboelectric charging. Fabrics woven from natural fibers like wool or cotton, which possess high roughness, and even synthetic fabrics like Nylon are common examples of fabrics with high triboelectric charging ability. The idea of exploiting charged fabrics to aid in filtration goes back nearly five decades \cite{Frederick1974}, and indeed some early face mask designs incorporating electrocharged filtration employed wool or felt fibers, with resin additives to enhance filtration efficiency many times over that achieved with basic fabric materials alone \cite{NAS2006}. However, resin additives degrade upon exposure to airborne oil aerosol droplets, which can shield electrostatic charges. Consequently, over time, synthetic electret fabrics such as plastic fibers (e.g. polypropylene and polystyrene) with high electrostatic charge characteristics were found to resist the shielding effect of oil aerosols quite effectively and came to be adopted as the material of choice for electrocharged filtration fabrics.
An additional advantage of plastic electret fabrics was that they were non-woven, thus saving fabrication time. Finally, the disordered fabric pattern in non-woven electret fabrics, as opposed to knitted fabric with interleaving fiber strands in a grid, assures a highly irregular porous medium, ensuring particles follow tortuous streamlines through the porous matrix. \section{Electrocharged material fabrication methods} All plastic electrets or electrocharged fabrics are manufactured by bonding polymer fibers in a porous mesh. In order to generate polymer fibers, one starts with liquid polymer either from solution (polymer dissolved in solvent) or polymer melt. The bulk liquid must be forced through a tiny orifice to overcome capillary forces, and quickly accelerated to stretch or extend the viscoelastic solution into fibers of diameter ranging from 100s of nanometers to 10s of $\mu$m. This dispersity in fiber diameters helps provide large surface area exposure (relative to bulk volume) to attract particles. Additionally, the bonding of non-woven fibers of such disperse diameters increases fabric disorder and results in a tortuous porous matrix. These fibers either evaporate their solvent or, if derived from melt, cool as they traverse from the orifice to a surface where they are collected and solidify into an enmeshed fabric. The various fabrication methods therefore vary in the forcing mechanisms employed to overcome the capillary force and thence accelerate the polymer. The three primary methods, {\it viz.} Electrospinning, Melt Blowing, and Rotary Jet Spinning, are briefly reviewed as they inform the design rules to follow subsequently.
\begin{figure*} \begin{center} \includegraphics[width= 1.75\columnwidth]{Fig1.pdf} \end{center} \caption{(Color online) Schematic of (a) a generic electrospinning setup and (b) cross-sectional view of a Melt Blowing setup.} \label{fig1} \end{figure*} {\it Electrospinning:} Electrospinning is a widely used platform for generating polymer fibers \cite{ESpinRev1, ESpinRev2, ESpinRev3, ESpinRev4, ESpinRev5}. In this method, polymer solution (or, in fewer settings, polymer melt \cite{MESpin1, MESpin2}) is forced out of a container (emitter) with a tiny orifice using a piston, such as a syringe pump. This emitter is connected to the positive terminal of a high voltage DC source ($\sim$ few to 10s of kV) and a flat plate or drum placed at a distance (collector) is connected to the voltage source's negative terminal, thereby setting up a high voltage DC electric field between the emitter and collector. The piston pressure competes with surface tension to generate a polymer jet, whereas the DC electric field accelerates the jet from emitter towards the collector and stretches it into fibers. The fibers are deposited on the collector where they evaporate their solvent to result in the electret fabric. A schematic representation of the electrospinning principle is shown in fig.~\ref{fig1}a. Whereas polymers used in electrocharged fabrics possess embedded charges, the electric field between the emitter and collector aids in orienting dipoles of the polymer melt during droplet stretching, thereby further enhancing the material's electrocharging properties. The electrospinning method is agnostic to the type of polymer material, with material dependent parameters, e.g. melting temperature and DC field voltage, being easy to adjust for each material once the primary setup is in place. However, the electrospinning process suffers from two disadvantages.
Firstly, electrospun polymer throughput scales linearly with the number of orifices or syringes, requiring several syringes in parallel for increased throughput. Secondly, the high voltage DC electric field is expensive and requires additional operational safety features, and is hence not suitable outside laboratory and industrial settings, which is precisely where the method outlined below is intended to find its primary use. {\it Melt Blown Process:} The most common method for manufacture of electrocharged filtration fabrics is Melt Blowing \cite{MBlowing1, MBlowing2, MBlowing3}. In this method, jets of molten polymer are generated by injecting the melt through a conical die, wedged in a gap within an air knife that converges at the die tip; Fig.~\ref{fig1}b shows a cross-sectional view of the cylindrical geometry for a generic Melt Blowing setup. The hot air accelerates and stretches the polymer jet into fiber. Cold air sprayed at the polymer strands as they depart the die tip solidifies them as they land on a conveyor belt or drum \cite{MBlowBook1, MBlowBook2}. Like electrospinning, the melt blown process too can employ a variety of polymers by controlling the temperatures of the die (for polymer melting) and hot air jets. Melt blowing offers a higher throughput relative to electrospinning and does not require high DC fields. However, melt blowing setups are inherently suited for large volume manufacturing, requiring a dedicated source of high pressure air at large flow rates, and are thus not easily amenable to construction from commonly available parts. The quality of the fabric is also highly sensitive to die geometry and the shaping of the high velocity hot air currents, thus requiring time-consuming fine tuning of the process.
\begin{figure*} \begin{center} \includegraphics[width= 1.75\columnwidth]{Fig2.pdf} \end{center} \caption{(Color online) (a) Top and (b) Side view schematic of Rotary Jet Spinning (CC method).} \label{fig2} \end{figure*} {\it Rotary Jet Spinning:} In recent times, Rotary Jet or Centrifugal Spinning process has emerged as an attractive alternative platform \cite{RJet1, RJet2}. In this method (see schematic in Fig.~\ref{fig2}), a central cylindrical container (emitter) holds the polymer and has several orifices along its wall. The emitter is heated to melt the polymer, but high viscosity prevents it from flowing out of the orifices under static conditions. However, when the emitter is spun at several 1000s of revolutions per minute (rpm), the molten polymer is forced out of the orifices in jets, which are stretched into fibers by centrifugal force generated from fast spinning. A cylindrical drum enclosure (collector) surrounds the emitter and as the stretched polymer jets traverse the distance from emitter to collector in a spiral trajectory as shown in Fig.~\ref{fig2}a, they are cooled and deposited on the collector surface. Rotary jet spinning employs the same forcing scheme to generate as well as to accelerate and stretch polymer jets. It neither requires a high voltage electric field as employed in electrospinning \cite{ES-RSComp} nor hot air jets used in melt blowing. Furthermore, since the emitter wall has several orifices, rotary jet spinning offers higher throughput relative to electrospinning \cite{RSpinReview}. In fact, rotary jet spinning has been long known in regular life through the cotton candy machine and for this reason, it is also called the Cotton Candy (CC) method \cite{CCandy1, CCandy2}. Replacing sugar with polymer and tuning the temperature and emitter rpm offers an easy design one can construct from commonly available parts. 
\section{Fabrication Setup} The ability to construct a low-cost fabrication setup from commonly available parts and materials forms the primary consideration behind the design strategy detailed in this article. For this reason, some design choices were made at the very outset to keep the design rules accessible to the layperson. Firstly, non-woven electret polymer fabrics can also be manufactured by dissolving polymers in suitable organic solvents, as opposed to melting them at high temperatures. However, organic solvents are not commonly available whereas heat sources are universally accessible. For this reason, methods involving liquid polymers dissolved in solvents are not explored here and will form part of a future study. Secondly, as already discussed, the CC method offers easy construction from common materials as opposed to electrospinning and melt blowing. The design strategy therefore relies heavily on the CC method (Fig.~\ref{fig2}), with hybrid characteristics that adopt some aspects of electrospinning and melt blowing wherever practicable. \subsection{Control Parameters} As discussed earlier, electret fabrication involves forcing in two stages; the order-of-magnitude analysis presented here is borrowed from Ref.~\cite{RJet2}, and the terms defined below are also shown in the schematic in Fig.~\ref{fig2}. The first stage concerns droplet generation in order to initiate a jet by overcoming the capillary force $F_{\sigma} \sim \sigma r_O$, where $\sigma$ is the surface tension of the polymer melt and $r_O$ is the orifice radius.
Hydrostatic pressure being much smaller in magnitude than the centrifugal force $F_{\omega} \sim \rho \omega^2 R_E r_O$ ($\rho$ being polymer melt density, $\omega$ the emitter's angular speed, and $R_E$ the emitter radius), balancing the inertial force $\rho \omega^2 R_E r_O^3$ ($F_{\omega}$ acting over the orifice area $r_O^2$) with $F_{\sigma}$ provides the threshold angular speed $\omega_{th}$ for droplet generation and jet initiation: \begin{equation} \omega_{th} \sim \sqrt{\frac{\sigma}{r_O^2 R_E \rho}} \label{omegath} \end{equation} The second stage of droplet acceleration or jet elongation concerns a competition between the centrifugal force $F_{\omega}$ and the viscous force $F_{\mu} \sim \mu v/x$, where $v$ is the jet velocity at a distance $x$ from the orifice and $\mu = \sqrt{\sigma R_E \rho}$ is its extensional viscosity, since the polymer melt is a viscoelastic fluid and the droplet stretching represents a case of extensional rheology. Applying mass conservation between matter ejected at the orifice with speed $u$ and the elongated jet of radius $r$ arriving at the collector at a distance $R_C$ from the emitter provides the mean fiber radius: \begin{equation} r \sim \frac{r_O u^{1/2} \nu^{1/2}}{R_C^{3/2}\omega}, \label{frad} \end{equation} where $\nu = \mu/\rho$ is the extensional kinematic viscosity; it is assumed $R_C \gg R_E$. Whereas equations \ref{omegath} and \ref{frad} resulting from the scaling analysis \cite{RJet2} provide appropriate control parameters for the CC method, some of them are invariant. For instance, surface tension $\sigma$ does not vary significantly with temperature $T$ and may be assumed constant, and since the solution route, where viscosity is controlled by polymer concentration, is not explored here, the extensional viscosity $\mu$ varies only with temperature through the density $\rho$. Therefore the dominant parameters that control our design are the orifice radius $r_O$, emitter radius $R_E$, collector radius $R_C$, angular speed $\omega$, and heating temperature $T$ through which $\rho$ (and $\mu$) is varied.
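The sensitivity of the mean fiber radius to each control parameter follows directly from the exponents in Eq.~\ref{frad}, $r \propto r_O\, \nu^{1/2} R_C^{-3/2} \omega^{-1}$. As a minimal illustrative sketch (the scaling factors below are assumed, not measured), one can tabulate the multiplicative effect of changing any one parameter:

```python
# Exponents of the controllable parameters in Eq. (2): r ~ r_O * nu^(1/2) * R_C^(-3/2) / omega
exponents = {"r_O": 1.0, "nu": 0.5, "R_C": -1.5, "omega": -1.0}

def radius_factor(param: str, scale: float) -> float:
    """Multiplicative change in the mean fiber radius r when one
    parameter is scaled by `scale`, all others held fixed."""
    return scale ** exponents[param]

# Doubling the angular speed halves r; doubling the collector radius
# shrinks r by a factor 2^(3/2), i.e. roughly 2.8x.
print(radius_factor("omega", 2.0))  # → 0.5
print(radius_factor("R_C", 2.0))    # → ~0.354
```

Such a back-of-the-envelope table makes clear why, in the design rules that follow, the collector radius and emitter rpm are the preferred levers for reducing fiber radius.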
It may then be surmised that the material dependence enters only through the temperature. Armed with these parameters, practical design rules may now be developed. \subsection{Design Rules} \subsubsection{Materials} The material of choice in the manufacture of electrocharged filtration fabrics is usually polypropylene (PP) of high molecular weight, but polystyrene (PS) and poly(4-methylpent-1-ene) also possess high electrocharging characteristics. Commonly available materials being the primary goal, PP and PS become the natural materials of choice as they can be easily sourced as raw material, or are commonly available through plastic containers. This has implications for the manufacturing process in that PS, being glassy, requires better temperature control, but cooling is less difficult. On the other hand, PP viscosity is nearly insensitive to temperature above its melting point, but does require cooling well below its melting point in order to crystallise. In using PP, it is known that low molecular weight ($M_W$) isotactic PP ($M_W < 12000$) tends to form brittle fibers that easily break up \cite{Strength}. If working with PP pellets ordered from regular suppliers, high molecular weight isotactic PP is preferred. In the following, fabrics were fabricated from isotactic PP with material characteristics: $M_W = 250000$, melting temperature $T_M = 160 - 165^{\circ}$C, and density $\rho = 900$ kg/m$^{3}$; this is in fact the preferred material for the manufacture of N95 FR electrocharged layers. Another commonly available polymer used in electrocharged filtration fabrics is polystyrene, owing to its high charge retention property, as is known from common experience with styrofoam packaging material which easily sticks to surfaces due to static charge.
PS material was employed with the following properties: molecular weight $M_W = 35000$, glass transition temperature $T_G = 100^{\circ}$C (with a reasonable minimum processing temperature in the range $T = 123 - 128^{\circ}$C), and density $\rho \sim 1060$ kg/m$^3$. One can also use discarded PP and PS plastic containers, but care should be taken not to use expanded PS, which is generally available in the form of packaging material, but rather regular PS, known as General Purpose Polystyrene in industry and used in the fabrication of plastic containers through injection moulding. When using discarded PP and PS containers, they were crushed into powder using a commonly available blender. However, PP containers are often manufactured from low crystallinity PP, which results in a very dense fabric mesh due to high cohesive properties, and makes breathing uncomfortable \cite{StrucFilt}. Additionally, low crystallinity PP was found not to possess high charge relative to isotactic PP. However, when mixed with PS from discarded containers, the resulting fabric is more compliant, less dense allowing easier respiration, and retains excellent electrocharging characteristics. A mixture of 80\% low crystallinity PP with 20\% PS (PP-PS blend) gave very good results. It is noted that PP and PS are not miscible, and that by a blend it is merely implied that both powders were melted together. The melting and extrusion process of PP or PP-PS would likely inactivate most biological material (in particular bacteria, fungi and viruses). As a comparison, autoclaves are typically run at comparable temperatures (160 - 190$^{\circ}$C) on a dry cycle for 15 - 120 minutes. For compatible materials, sterilization is usually done in a wet cycle at 120$^{\circ}$C and fairly high pressure ($\sim$100 kPa), as the steam helps to break open cells and irreversibly denature proteins and nucleic acids.
Quickly melting polymer powder of contaminated bottles passed through a blender may not result in destruction of 100\% of the infectious properties of potential contaminants. This is of particular concern when using the resulting material for face protection. It is therefore advisable that plastic containers be cleaned in a domestic pressure cooker at high steam for 20 minutes before crushing them in a food processor to turn them into powdery material. \subsubsection{Jet Generation} {\it Temperature:} Tests on the appropriate range of temperatures were heavily informed by Ref.~\cite{CCandy2}. Although the transition temperature for PS is $T_T = 123 - 128^{\circ}$C and that of PP is $T_T = 160 - 165^{\circ}$C, they do not readily flow at these temperatures but merely soften into highly viscous fluids. Whether working with pure PP or PP-PS, heating the emitter to higher temperatures in the range $T = 175 - 200^{\circ}$C reduces the viscosity, but the jets tend to break up during extension and do not result in fibers. We found that fibers were indeed generated in the temperature range $T = 200 - 250^{\circ}$C but resulted in beaded structures. Increasing the emitter angular speed $\omega$ did result in continuous fibers, in accord with published literature \cite{RJet2}. However, temperatures $T > 280^{\circ}$C reduced viscosity sufficiently to give continuous fibers even at lower angular velocities. Note that Eq.~\ref{frad} shows the mean fiber radius scales as $r \sim \nu^{1/2}$ and is inversely proportional to $\omega$. It is therefore desirable to tune the heating temperature $T$ to match the maximum rpm achievable by the motor employed to spin the emitter, as discussed below. Small commercial cotton candy machines usually employ electrical heating elements whereas larger ones are gas fired.
Irrespective of the heating method, most cotton candy machines operate at temperatures around $T = 160 - 175^{\circ}$C, which falls below the temperature range desirable for generating jets of pure PP or PP-PS ($T = 280 - 340^{\circ}$C). If working with commercial machines, tweaking the heating elements to achieve the desirable temperature range is suggested. If building one's own machine, developing one's own electrical heating element is desirable if one has the working knowledge, since electrical heating elements provide precise control. A simpler alternative is to use a gas torch and tweak the torch flame and its distance from the emitter by checking the emitter temperature with an ordinary thermometer used for home baking. {\it Emitter Motor:} The choice of emitter motor for the CC method is dictated by Eq.~\ref{omegath}, as it sets the lower bound $\omega_{th}$ on the required revolutions per minute (rpm). Taking the surface tension of PP to be $\sigma_{PP} \sim 20 \times 10^{-3}$ N/m, $\rho = 900$ kg/m$^3$, $r_O$ of order 100s of microns ($\sim 10^{-4}$ m), and $R_E$ of order a few cm ($\sim 10^{-2}$ m) yields $\omega_{th} \sim$ 470 s$^{-1}$ or 4500 rpm. In practice however, fibers emerge at around 2500 rpm, which is easily achieved with most commercial cotton candy machines as they operate between 3000 - 4500 rpm. If constructing one's own machine, high rpm DC motors capable of going up to 15000 rpm are suggested. A simple alternative for spinning the emitter is to repurpose an electrical drill, as they commonly achieve up to 4500 rpm, or a Dremel drill with variable speeds in the range 3000 - 37000 rpm. Drills can be easily connected to an emitter container with a suitably threaded steel rod, nut, and washer. Two simple alternatives exist in the event a high rpm motor is unavailable, as presented below when discussing emitter geometry.
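The arithmetic behind the threshold estimate above is easy to verify; the short sketch below evaluates Eq.~\ref{omegath} with the representative order-of-magnitude values just quoted and converts the result to rpm.

```python
import math

# Representative order-of-magnitude values quoted in the text:
sigma = 20e-3   # surface tension of molten PP, N/m
rho = 900.0     # PP melt density, kg/m^3
r_O = 1e-4      # orifice radius, m (order 100s of microns)
R_E = 1e-2      # emitter radius, m (order a few cm)

# Threshold angular speed for jet initiation, Eq. (1)
omega_th = math.sqrt(sigma / (r_O**2 * R_E * rho))   # rad/s
rpm_th = omega_th * 60.0 / (2.0 * math.pi)           # rad/s -> rev/min

print(f"omega_th ~ {omega_th:.0f} 1/s  (~{rpm_th:.0f} rpm)")
```

The output, roughly 470 s$^{-1}$ or 4500 rpm, agrees with the estimate in the text; since Eq.~\ref{omegath} is a scaling relation, the observed onset of fibers at lower rpm is consistent with an undetermined order-unity prefactor.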
\begin{figure*} \begin{center} \includegraphics[width=1.5\columnwidth]{Fig3.pdf} \end{center} \caption{(Color online) Commercial candy machine emitters usually come with (a) vertical or diagonal slats or (b) a wire mesh for orifices.} \label{fig3} \end{figure*} {\it Emitter:} The emitter follows a standard cylindrical geometry of radius $R_E \sim 5$ cm (0.05 m) and height of roughly 0.1 m. The side wall of the cylindrical emitter is dotted with several tiny orifices through which the polymer jets are forced out when the emitter is spun. Since the orifice radius $r_O$ and emitter radius $R_E$ are the system design parameters entering Eq.~\ref{omegath} ($r_O$ also enters Eq.~\ref{frad}), they should be chosen to match the material parameters $\sigma$ and $\rho$ in order to obtain a comfortable $\omega_{th}$ within the operational rpm range of one's chosen emitter motor. The smaller the orifice radius $r_O$, the higher the required $\omega_{th}$, as they are inversely proportional, thus setting a lower bound on the choice of $r_O$ for a given motor. At the same time, the mean fiber radius $r$ scales linearly with $r_O$, which constrains the upper bound through Eq.~\ref{frad}. If one has access to a high rpm motor, then it is advisable to go to lower $r_O$ of order a few 100 microns. Commercial cotton candy machines either use an emitter mesh with large orifice dimensions of order 1 - 2 mm or vertical/diagonal slats of width $\sim 1$ mm, see fig.~\ref{fig3} for an exemplar. Replacing such emitters with home-built emitter cylinders of lower orifice radii is suggested. Alternatively, if constructing one's own fabrication system, an easy way to construct the emitter is using a soda or beer can, as they come readily manufactured to the right radial dimension and aluminum is an excellent thermal conductor. Cutting or shearing a soda or beer can in half and using the bottom half gives a readymade emitter.
Folding the top open edge along the wall perimeter and drilling holes of a few 100 micron radius along the emitter wall provides satisfactory results. For instance, the present study used drill bits of gauge 87 to obtain $r_O = 0.254$ mm, with a Dremel drill motor connected to the bottom half of a soda can with a nut and washer to drive the emitter, which gave very satisfactory results. In the event one does not have a high rpm motor, a hybrid design combining rotary jet spinning (CC method) with melt blowing works just as well. Recognizing that Eq.~\ref{omegath} results from a balance between surface tension and inertial forces because hydrostatic pressure is low, one could consider sealing the top of the emitter and pumping compressed air at roughly 0.2 - 0.3 MPa, while spinning the emitter at lower rpm, to overcome the surface tension force and generate polymer jets. This is simply the melt blowing principle in disguise: the applied pressure acting over each orifice cross-section adds a constant offset to the inertial force, thus bringing down the $\omega_{th}$ value. Of course, as a consequence, the lower $\omega$ would lead to a larger mean fiber radius since $r \sim 1/\omega$, but this can be overcome by decreasing $r_O$ and/or increasing $R_C$, the latter usually being easier to accomplish. \subsubsection{Fluid Acceleration and Stretching} Fluid acceleration and stretching is the stage at which the fibers are formed. Once the jets are generated, they are flung radially outwards and execute a spiral trajectory as shown in Fig.~\ref{fig2}a before being deposited on the collector surface. From Eq.~\ref{frad}, the mean fiber radius scales as $r \sim R_C^{-3/2}$, linearly with $r_O$, and inversely with $\omega$, which together form the system design parameters controlling fiber radius. It is therefore advantageous to have as large a collector radius as possible to achieve fibers of ever smaller radii.
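The payoff from a larger collector can be quantified from the $r \sim R_C^{-3/2}$ scaling in Eq.~\ref{frad}. As a sketch (the smaller radius below is an assumed value typical of commercial machines), moving from a commercial-scale collector to the larger radius used later in this study shrinks the mean fiber radius by a fixed ratio, all other parameters held equal:

```python
# Relative change in mean fiber radius from Eq. (2): r ~ R_C^(-3/2),
# with r_O, u, nu, and omega held fixed.
R_C_small = 0.15   # assumed typical commercial candy-machine collector radius, m
R_C_large = 0.35   # collector radius used in this study, m

ratio = (R_C_large / R_C_small) ** (-1.5)   # r_large / r_small
print(f"mean fiber radius shrinks to ~{ratio:.2f}x of its value")
```

For these radii the fiber radius drops to roughly 0.28 of its original value, i.e. a better than threefold reduction from collector geometry alone.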
These system design parameters are complemented by the polymer's kinematic extensional viscosity $\nu = \mu/\rho$, which enters as $r \sim \nu^{1/2}$. For this reason, emitter temperatures in the range $T = 280 - 340^{\circ}$C are suggested so that the viscosity is low enough for the polymer melt to flow, but still high enough to result in continuous fibers of lower radii. The droplet ejection velocity $u$ is not a controllable parameter since it is a resultant of the competing (viscous, surface tension, and centrifugal) forces. Typical collector radii for commercial candy machines fall in the range $R_C \sim 0.1 - 0.3$ m. Naturally, the larger the $R_C$, the finer the fiber radii generated, by virtue of Eq.~\ref{frad}. If constructing one's own fabrication system, one has the freedom to set $R_C$ to larger radii. We have found $R_C \sim 0.25 - 0.4$ m worked best, as it gave sufficient distance for fibers to stretch and cool as they traverse from emitter to collector. Furthermore, if one's emitter motor is unable to achieve high rpm, we suggest a simple hybrid solution borrowed from electrospinning to circumvent the problem. It was observed that at low speeds of $\sim 3000 - 4000$ rpm, and $R_C \sim 0.3$ m, connecting a car battery or laboratory power supply at 12 - 24 V DC (negative terminal to emitter and positive terminal to collector, with insulation between the two) generates a DC electric field that adds sufficient droplet acceleration to stretch the fibers to the desired radii. Irrespective of acceleration needs, the electric field also plays an indirect role in enhancing the fabric's surface charge characteristics, as discussed later. \subsubsection{Electrocharging} Electrets -- dielectric materials which exhibit an external electric field in the absence of an applied field -- can be broadly classified into two varieties, {\it viz.} space-charge or real-charge electrets and dipolar or oriented-dipole electrets \cite{Electrets}.
Real-charge electrets possess an injected or embedded excess charge (of one or both polarities) within the dielectric volume or at the surface; they are usually manufactured via direct charge injection into the dielectric material. Dipolar electrets are formed by dipole orientation (polar groups) within the dielectric material and are usually formed, or rather polarized, by applying an electric field to the material either at ambient temperature or by heating the material above its transition temperature $T_T$ where dipoles become mobile. An alternative method is to employ charge injection techniques on dipolar electrets, where the embedded charge causes dipole reorientation. The fundamental limitation to all electret-charging methods is dielectric breakdown of the polymer material or external medium, which depends on the dielectric strength of a given material (typically of order a few MV/cm for polymers, with charge densities of a few tenths of $\mu$C cm$^{-2}$). The primary methods for electric charging of electrets include triboelectric charging, thermal charging, isothermal charging, electron and ion beam charging, and photoelectret charging. \begin{figure*} \begin{center} \includegraphics[width= 1.95\columnwidth]{Fig4.pdf} \end{center} \caption{(Color online) Fabrication flow process: Fibers generated in a modified cotton candy machine (see movie M1.mov in supplementary material for a brief movie of the process) could be scooped out as a concentric fiber fabric, which was sandwiched and cut into individual sheets. The sheets were then subjected to isothermal charging with an air ionizer 1 cm from the fabric for a 10 minute duration to improve the fabric's static charge characteristics.} \label{fig4} \end{figure*} As the name suggests, triboelectric charging occurs from charge transfer due to frictional contact between dissimilar dielectric materials.
Not only is this method very unreliable, as it requires intimate contact between the two surfaces being charged, but the present study also does not employ two different materials; hence triboelectric charging does not apply here. Thermal charging involves application of an external electric field to the polymer at elevated temperatures, as occurs with the electrospinning process. Even though the PP and PP-PS materials used here fall under the real-charge electret classification, application of a weak (12 - 24 V) DC electric field as described in the previous subsection did enhance surface charge characteristics, as shown later. Electron and ion beam charging involves a low energy secondary electron cascade resulting from scattering of the primary beam within the dielectric bulk and is not generally efficient for non-woven polymer fabrics of the kind explored here. On the other hand, high energy electron beams cause chemical damage to most dielectric materials, and hence this is not a suitable process for present needs. The photoelectret process only applies to charging of photoactive polymers and is not relevant to the current study. The best results were obtained by isothermal charging \cite{coronachg1, coronachg2}. In this method the polymer fabric is placed between two electrodes maintained at high potentials (typically kilovolts, with currents of 100s of $\mu$A). Although this method may seem complicated due to the requirement of high voltages, it is in fact the easiest to achieve \cite{CoronaPP1, CoronaPP2}. Ionizing air purifiers used in homes and offices operate on the isothermal charging principle, where they apply high voltage to ionize or electrically charge air molecules to attract charged dust particles, bacteria, and viruses. Air ionizers come in two varieties -- those generating negative ions (anions) and electrostatic discharge (ESD) ionizers (balanced ion generators).
ESD ionizers should not be used for isothermal charging because not only do they not impart charge, they in fact neutralize existing charges on surfaces. The relevant ionizers suitable for electrocharging fabrics are anion generators. The present study used an ionizing air purifier for home settings (Model NIP-6E from Mystic Marvels LLC) operating at 9 kV and 160 $\mu$A. Exposing the fabricated polymer fabric to isothermal charging for 10 minutes at a distance of 1 cm (0.01 m) substantially enhanced their surface charge characteristics. Details of the charge characterization are presented in the next section. \subsection{Fabrication Process} \subsubsection{Setup and Material} Having outlined the primary control parameters for the CC method and various ways to optimize them in the design process, the basic fabrication process is now explained. The dimensions and operating ranges naturally vary by user; the fabrication setup employed in this study had emitter radius $R_E = 0.055$ m, collector radius $R_C = 0.35$ m, and orifice radius $r_O = 254~\mu$m ($2.54 \times 10^{-4}$ m). The emitter heating element could be adjusted to achieve temperatures up to $T = 400^{\circ}$C, but the operating temperature in this study was $T = 285^{\circ}$C for PP-PS and $T = 300^{\circ}$C for PP. The emitter motor could achieve a maximum of 35000 rpm, but the emitter was run at 10000 - 12000 rpm, capable of yielding a throughput of approximately 0.65 kg of fibers per hour. The basic process steps are outlined in Fig.~\ref{fig4}; also see movie M1.mov in the supplementary material for a brief video of the fiber generation process. Each fabrication run used 12 grams of polymer as input, and the resulting fibers were available in less than a minute.
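As a quick consistency check on the quoted throughput, 12 g of polymer spun out per run is compatible with the stated figure of approximately 0.65 kg of fibers per hour; the per-run cycle time below is an assumption (only "less than a minute" of spinning is stated, so a small allowance for loading and scooping is added):

```python
mass_per_run_g = 12.0   # polymer input per fabrication run, g (from the text)
cycle_time_min = 1.1    # assumed total cycle time per run, min (spin time plus handling)

# Runs per hour times mass per run, converted from g to kg
throughput_kg_per_hr = mass_per_run_g * (60.0 / cycle_time_min) / 1000.0
print(f"~{throughput_kg_per_hr:.2f} kg/h")
```

With this assumed cycle time the estimate lands near 0.65 kg/h, in line with the quoted throughput.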
The fibers scooped out from the collector wall had a concentric circular profile; these were then sandwiched between two flat and clean surfaces at high pressure (this study employed 1 cm thick float glass plates), resulting in thin fabric sheets of 0.2 - 0.3 mm thickness. The values vary with the collector dimensions, the quantity of material used per fabrication run, and the pressure applied to generate the final fabric sheet, and are best worked out through trial-and-error by each end user. Characterisation details for the fabrics are provided in the next section. \subsubsection{Face Masks} Two approaches were followed to turn the fabricated material into face masks. Lacking in-house capability to stitch fabric layers into masks, in the first approach a sheet of the electrocharged fabric manufactured by the CC method was added to the inner surface of surgical and PM2.5 face masks available commercially, as shown in Fig.~\ref{fig5} (a-c). Three different sets of tests were performed on these masks. The first, a control test, was performed on the surgical masks in the condition they were procured, to measure their baseline filtration quality. In the second test, a sheet of the manufactured fabric was added after intentionally bleeding the sheet of its electrostatic charge with a static eliminator. The reason for this second test was to isolate the filtration improvement arising purely from the presence of an additional porous layer without electrocharged filtration capability. The final test was performed on a surgical mask with an electrocharged fabric layer added to its inner surface. In all three cases, the static electricity on the masks before and after the filtration tests was measured with a non-contact electrostatic potentiometer (KSS-2000 Digital Electrostatic Potentiometer, Manufacturer: Kasuga Denki Inc.).
Whereas these filtration tests did show improvement in filtration quality due to the addition of the electrocharged fabric layer, as is well known, the lack of a tight facial fit left gaps through which particles could easily pass unfiltered. \begin{figure*} \begin{center} \includegraphics[width= 1.9\columnwidth]{Fig5.pdf} \end{center} \caption{(Color online) (a) Standard surgical masks (typical dimensions 9 cm $\times$ 16 cm) were (b) reinforced with an electrocharged filtration layer (5 cm $\times$ 10 cm) fabricated in-house and attached to the mask's inner surface to improve filtration efficiency upon (c) wearing the mask. Montana Mask holders with tight facial fit with (d) placement design for the Objet 500 3D printer and the (e) front and (f) back view of the finished Montana Mask, which holds square (5 cm $\times$ 5 cm) patches of electrocharged filtration fabric (3 layers per mask) to achieve N95 filtration quality.} \label{fig5} \end{figure*} In the second approach, face mask holders were 3D printed from an open-source design known as the Montana Mask \cite{MMask}, as shown in Fig.~\ref{fig5}(d-f), to overcome the deficiency of standard surgical masks in providing a tight facial fit. The Stereolithography (STL) format design files for the Montana Mask are publicly available for download from Ref.~\cite{MMask}. One of the design features that allows N95 FRs to achieve superior filtration efficiency of $\ge 95\%$ is their ability to provide a tight facial fit and prevent air from leaking through the interstitial gap between the mask and skin during respiration. Mask holders (fig.~\ref{fig5}d) were manufactured on an Objet 500 3D printer from the open source STL files \cite{MMask}. The filter holder shown in fig.~\ref{fig5}e-f (front and back views, respectively) with a square grid could hold a square patch roughly 5 cm $\times$ 5 cm in dimensions.
Adding up to three electrocharged filtration layers resulted in the desired N95 filtration quality, but respiration became difficult with 5 layers because the fabricated layers were denser than the layers present in commercial N95 FRs. These 3D printed Montana masks were therefore limited to 4 electrocharged filtration layers for optimal respiratory comfort without sacrificing filtration quality. Table~\ref{table1} lists the mean densities (g.cm$^{-3}$) obtained by dividing the measured weight by the dimensions of the samples, for commercial fabrics and ones manufactured in-house. The standard deviation is quoted over measurements of ten samples of each material, except for the N95 FR for which only one sample was used. All values are rounded off to the second decimal place, and the raw data is available at Ref.~\cite{BandiN95-Data1}. The variability is higher in fabrics manufactured in-house; that is to be expected, since the strict quality control possible in industrial methods was not feasible in the in-house fabrication method. \begin{table*} \caption{Electrocharged fabric densities, data available at Ref.~\cite{BandiN95-Data1}} \begin{tabular}{cc} \hline Material & Density (g.cm$^{-3}$): Mean $\pm$ Stdev.\\ \hline N95 FR & 0.45\\ Surgical mask & 0.38 $\pm$ 0.03\\ Isotactic PP & 0.73 $\pm$ 0.09\\ PP-PS & 0.63 $\pm$ 0.09\\ \hline \end{tabular} \label{table1} \end{table*} \begin{figure*} \begin{center} \includegraphics[width= 1.95\columnwidth]{Fig6.pdf} \end{center} \caption{(Color online) Scanning electron micrographs for electrocharged fabrics from (a) a commercial N95 FR, (b) isotactic Polypropylene (PP) fabricated in-house, and (c) Low crystallinity Polypropylene - Polystyrene (PP-PS) blend fabricated in-house.
High resolution images available at Ref.~\cite{BandiN95-Data1}} \label{fig6} \end{figure*} \section{Characterisation} Following the details of the design rules and fabrication process, characterisation studies of the structural, charge retention, and filtration properties of the electrocharged fabrics and face masks developed in-house are now presented. Recognizing that knowledge of these properties was not meaningful by itself, the same tests were performed on a NIOSH-certified commercial N95 FR to serve as the benchmark against which to compare the quality of our fabrics and masks. \subsection{Structure} Structural properties of the electrocharged filtration fabrics were studied using scanning electron microscopy (SEM). As a preparatory step, a platinum-palladium sputter coating was deposited on the fabric sample surfaces for SEM visualization, followed by interrogation under a scanning electron microscope (Quanta 250 FEG, Manufacturer: FEI Thermo Fisher) at 2 kV acceleration voltage. Figure~\ref{fig6} presents scanning electron micrographs for the electrocharged fabrics. In qualitative terms, the N95 electrocharged fabric layer (fig.~\ref{fig6}a) appears structurally similar to the PP (fig.~\ref{fig6}b) and PP-PS (fig.~\ref{fig6}c) fabrics, in that all three form a heterogeneous, non-woven fabric of enmeshed fibers. This heterogeneous structure results from fluctuations in fiber trajectories arising from the individual forcing conditions, {\it viz.} acceleration under a DC electric field in electrospinning, a hot air jet in the case of melt blowing, and centrifugal forcing for the CC method. The qualitative similarity in fabric heterogeneity obtained by the CC method (fig.~\ref{fig6}b and c) relative to the N95 electrocharged fabric (fig.~\ref{fig6}a), presumably manufactured via melt blowing, was therefore very encouraging.
A cursory inspection suggests the commercial N95 FR fabric's fibers were slightly more tortuous than the PP fibers, and less tortuous than the PP-PS fibers. To understand this qualitative difference in fiber tortuosity, it is recalled that PP melt forms crystalline fibers whereas PS is a glass. PP is therefore expected to yield relatively linear, crystalline fibers, whereas PS fibers freeze into tortuous structures as the molten fluid undergoes glass transition under cooling and its viscosity abruptly shoots up. Within PP, isotactic PP is more crystalline than the low crystallinity PP obtained from discarded plastic containers. It is therefore important to ascertain how the degree of crystallinity affects tortuosity or other structural characteristics. Fiber tortuosity is expected to impact the fabric's porosity. Though porosity could not be measured directly, filtration tests did present a small measurable difference between isotactic PP and PP-PS fabrics, which may be attributed to charge characteristics rather than porosity. Figure~\ref{fig7} presents scanning electron micrographs for low crystallinity PP (fig.~\ref{fig7}a) and PS (fig.~\ref{fig7}b) fabrics. Firstly, a comparison of isotactic PP (fig.~\ref{fig6}b) and low crystallinity PP (fig.~\ref{fig7}a) fabrics manufactured under similar conditions shows both fibers are relatively linear. However, low crystallinity PP fibers possess a more uniform and thicker radius and are more linear than isotactic PP fibers. When manufactured under similar conditions, mean fiber radii for low crystallinity PP were almost twice those of isotactic PP (see Table~\ref{table2}). This implies that the presence of crystalline order, be it low or high, is sufficient to obtain relatively linear fibers, whereas the degree of crystallinity determines the average fiber radius under identical fabrication conditions (temperature and rpm).
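Tortuosity is compared only qualitatively above. Should one wish to quantify it from fiber centerlines traced in the micrographs (e.g. with ImageJ), a common proxy is the ratio of arc length to end-to-end distance, equal to 1 for a perfectly straight fiber. The sketch below assumes centerlines digitized as lists of (x, y) points; it is illustrative and not part of the analysis actually performed in this study.

```python
import math

def tortuosity(points):
    """Arc length divided by end-to-end distance of a digitized fiber centerline.

    `points` is a list of (x, y) coordinates along the fiber; a perfectly
    straight fiber gives 1.0, and larger values indicate more tortuous fibers.
    """
    arc = sum(math.dist(p, q) for p, q in zip(points, points[1:]))
    chord = math.dist(points[0], points[-1])
    return arc / chord

straight = [(0, 0), (1, 0), (2, 0)]
bent = [(0, 0), (1, 1), (2, 0)]
print(tortuosity(straight))  # 1.0
print(tortuosity(bent))      # sqrt(2) ~ 1.41
```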
A comparison of scanning electron micrographs for low crystallinity PP (fig.~\ref{fig7}a) and PS (fig.~\ref{fig7}b) shows PS fibers are far more tortuous owing to their glassy nature, albeit with less variability in fiber radius (see Table~\ref{table2}). Combining the two materials in different proportions helps control the fiber tortuosity, and therefore the porosity, as done in this study, which used 80\% low crystallinity PP with 20\% PS by weight (fig.~\ref{fig6}c). Commercial N95 FR fabrics most likely use a proprietary mix of polymers to control porosity and charge retention, but the specific materials and their percentages are not available in the public domain. The mean and standard deviation of the fiber radii, obtained from image analysis of the scanning electron micrographs using the open source software ImageJ, are listed in Table~\ref{table2} for all material combinations investigated. High resolution images for the scanning electron micrographs presented in fig.~\ref{fig6} and fig.~\ref{fig7} are available at Ref.~\cite{BandiN95-Data1}. \begin{figure*} \begin{center} \includegraphics[width= 1.5\columnwidth]{Fig7.pdf} \end{center} \caption{(Color online) Scanning electron micrographs for electrocharged fabrics manufactured in-house with (a) Low crystallinity Polypropylene and (b) Polystyrene. High resolution images available at Ref.~\cite{BandiN95-Data1}} \label{fig7} \end{figure*} \begin{table*} \caption{Electrocharged fabric fiber radii} \begin{tabular}{cc} \hline Material & Radius: Mean $\pm$ Stdev ($\mu$m)\\ \hline N95 FR & 4.1 $\pm$ 4.7\\ Isotactic PP & 4.54 $\pm$ 6.2\\ PP-PS & 4.9 $\pm$ 5.1\\ Low crystallinity PP & 9.63 $\pm$ 2.4\\ PS & 18.5 $\pm$ 2.8\\ \hline \end{tabular} \label{table2} \end{table*} \subsection{Filtration} {\it Setup:} Filtration tests for face mask certification are usually performed on specialized equipment such as the Portacount Respirator Fit-Tester and MITA 8120, both from TSI Inc., or the AccuFIT 9000 from Accutec-IHS Inc.
Lacking access to such specialized testing systems, which were also unaffordable, a filtration testing system was designed in-house as shown in fig.~\ref{fig8}. A manikin head used in retail store fronts was drilled with a hole from its mouth to the back of its head. The face mask under test was then mounted onto the manikin head's face and placed in a confining box. An inexpensive piezoelectric atomizer (APGTEK Aluminum Mist Maker) usually employed in home decoration was submerged in Sodium Chloride solution (5\% by weight NaCl in de-ionized water) to generate aerosol particles. The generated mist was exposed to a negative ion air purifier to charge the aerosol particles for some of the tests. The mist could pass through a pipe with a second connecting pipe open to ambient air as shown in fig.~\ref{fig8}, and both pipes had valves to help control the total aerosol concentration in the air entering the confining box. A portable PM2.5 air quality monitor (Manufacturer: Dienmern) normally used for home and office air quality monitoring was placed in the confining box (Monitor A in fig.~\ref{fig8}) to measure the particle concentration within the box. By reading this PM2.5 monitor, the two inlet valves for aerosol mist and ambient air were adjusted to control the aerosol concentration in the confining box. The back of the manikin head was connected to a pipe which exited the confining box and terminated in a box containing a second PM2.5 air quality monitor (Monitor B in fig.~\ref{fig8}). This monitor gave a reading of the particles that had passed through the fabric and manikin mouth, and allowed measurement of filtration quality. This box containing the second PM2.5 monitor was, in turn, connected to a vacuum pump as shown in fig.~\ref{fig8}.
When the vacuum pump was turned on, a suction pressure was created in the confining box; aerosol particles mixed with ambient air were drawn into the confining box, passed through the face mask into the drilled hole in the manikin head, and exited the confining box. By controlling the vacuum pump valve, one was able to simulate flow rates for normal (30 liters per minute) and high (85 liters per minute) respiration rates. \begin{figure} \begin{center} \includegraphics[width=\columnwidth]{Fig8.pdf} \end{center} \caption{(Color online) Filtration test setup schematic.} \label{fig8} \end{figure} The general ambient conditions under which all filtration tests were conducted are now specified, before explaining some of the design shortcomings of the home-designed filtration test setup. The lab temperature during all tests was maintained at 23 $^{\circ}$C $\pm$ 2$^{\circ}$C with a relative humidity of 43\%. The relative humidity within the filtration test setup's confining box was, however, higher at 58\% due to the aerosol presence. The area of fabric samples used for surgical masks (SM) was 9 cm $\times$ 16 cm with the electrocharged filtration layer occupying an area of 5 cm $\times$ 10 cm, whereas the fabric area in the 3D printed Montana mask was 5 cm $\times$ 5 cm. The aerosol number concentration at 30 L/min flow rate was $1.7 \times 10^{8}$ particles per cm$^{3}$ and at 85 L/min flow rate was $1.25 \times 10^{8}$ particles per cm$^{3}$, providing a sufficiently high aerosol concentration. Two tests were performed to study time to failure, i.e. the total duration from test commencement after which filtration started to deteriorate; this varied between 14.5 hours (for PP fabrics) and 17 hours (for the PP-PS blend). By comparison, commercial N95 FRs failed after 22 hours. The NIOSH certification standard for N95 FRs requires use of neutrally charged sodium chloride solution (10\% NaCl in water).
However, the filtration tests in the current study had a dual role: firstly, to demonstrate the efficacy of electrocharged filtration, and secondly, to test the filtration efficiency itself. To demonstrate the efficacy of electrocharged filtration, independent tests were conducted with aerosol particles that were charged and charge neutralized for the N95 FR and the 3D printed Montana Mask design. {\it Design shortcomings: }It is emphasized that this filtration test setup does not conform to some of the stringent testing specifications employed in face piece respirator certification. For instance, the NIOSH 42 CFR Part 84 standard for N95 FRs requires filter performance of $\ge 95\%$ with NaCl test agent at 85 liters per minute flow rate, an inhalation resistance (maximum pressure drop across the mask) of $\le 343$ Pa, and an exhalation resistance of $\le 245$ Pa \cite{3M}. The filtration test setup developed in-house had no means to measure the pressure drop, nor could one simulate the oscillatory respiratory air flow from inhalation and exhalation; the scheme could only generate steady suction flow. An important quantity in face mask filtration quality testing is the Most Penetrating Particle Size (MPPS); the MPPS for N95 FRs is 300 nm or 0.3 $\mu$m. An important shortcoming of the filtration test setup detailed here is that it cannot provide a size distribution of the aerosol particles generated by the relatively inexpensive piezoelectric atomizer. Secondly, commercial respirator testing systems use laser-based particle counter sensors that are sensitive to particles below the MPPS value, down to about 100 nm. The PM2.5 air quality monitors, as their name suggests, are nominally rated for particulate matter up to 2.5 $\mu$m in diameter. Be that as it may, PM2.5 monitors also employ the same laser-based particle counter sensors and do hold the capability to detect particles down to 0.3 $\mu$m.
Although a proper verification was not possible, it is reasonable to assume the PM2.5 monitors could detect particles at least down to 0.3 $\mu$m diameter. \begin{figure} \begin{center} \includegraphics[width=\columnwidth]{Fig9.pdf} \end{center} \caption{Filtration test results showing penetration P(t) (\%) versus time (hours) over a 12 hour duration for (a) N95 FR with charged (open black symbols) and neutral (filled black symbols) aerosols at 30 L/min (circles) and 85 L/min (squares) flow rate and (b) commercial surgical mask (SM) taped (filled black symbols) and not taped (open black symbols) around the manikin head at 30 L/min (upright triangles) and 85 L/min (inverted triangles).} \label{fig9} \end{figure} {\it Test Results:} FR filtration efficiency is usually measured in terms of the penetration percentage (P), defined as the percentage of particles present in the environment that pass through the FR, and is usually quoted against particle diameter. Since particle sizes could not be measured by the PM2.5 monitors, the penetration is instead defined as: \begin{equation} P(t) = \frac{C_B(t)}{C_A(t)} \times 100\% \label{penetration} \end{equation} where $C_A(t)$ and $C_B(t)$ are the particle concentrations of PM2.5 monitors A and B respectively at time $t$. The penetration as a function of time was tracked to detect any deterioration in filtration properties. The PM2.5 monitors A and B were connected to a laptop and programmed to record concentration values at 15 minute intervals over a duration of 12 hours. Raw time series data for all filtration tests presented in fig.~\ref{fig9}, fig.~\ref{fig10}, and Table~\ref{table3} are available at Ref.~\cite{BandiN95-Data1}. The first tests were performed on a commercial N95 FR and surgical masks (SMs) to obtain baseline calibration readings on the filtration test setup designed in-house, which would then form the comparative standard for tests performed on filtration fabrics manufactured with the CC method.
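Equation~\ref{penetration} amounts to an elementwise ratio of the two monitors' logged time series. A minimal sketch of this reduction, using illustrative (not measured) readings and a function name of our own choosing, is:

```python
def penetration_series(c_a, c_b):
    """Penetration P(t) = C_B(t) / C_A(t) * 100% for paired monitor readings.

    c_a: particle concentrations from Monitor A (inside the confining box)
    c_b: particle concentrations from Monitor B (downstream of the mask)
    """
    return [100.0 * b / a for a, b in zip(c_a, c_b)]

# Illustrative readings (not measured data): ambient vs downstream counts
c_a = [1000.0, 1050.0, 980.0]
c_b = [14.0, 15.5, 13.9]
print(penetration_series(c_a, c_b))  # first entry: 14/1000 -> 1.4 percent
```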
Since a tight facial fit for the N95 FR is often emphasized, the test N95 FR was taped with mask tape onto the manikin face, and separate tests were performed using aerosol particles that were both charged and charge neutral. On the other hand, baseline SM calibration tests were performed only with neutrally charged aerosol particles, but with and without mask tape applied for tight facial fit. Figure~\ref{fig9} shows the results for the baseline commercial N95 FR and SM tests. A few details that are well known within the filtration research community are immediately apparent. Firstly, the N95 FR filters out more than 95\% of the particles at both 30 and 85 L/min flow rates (see fig.~\ref{fig9}a). Secondly, a small but measurable difference in penetration percentage is clearly observable between charged (1 - 2\% penetration) and neutral ($\sim 3\%$ penetration) aerosol particles. Finally, no measurable dependence on flow rate could be observed for the N95 FR. The SM results were in stark contrast with the N95 FR. Firstly, the importance of tight facial fit becomes immediately apparent from fig.~\ref{fig9}b. When no tape was applied to close the interstitial gap between the face and mask, the penetration was nearly 50\% at 85 L/min and around 40\% at 30 L/min, implying substantial leakage. Taping the mask around the manikin face drastically brought down the penetration percentage values to around 12\% at both flow rates, thereby underscoring the importance of tight facial fit in filtration masks and respirators. Secondly, even after applying tape to close the gap between face and mask, the penetration value of around 12\% was still higher than the N95 standard of $\le 5\%$. The reason for this discrepancy lies in the fact that SM fabrics do not employ electrocharged layers and rely entirely on inertial impaction and diffusion to achieve filtration, thus underscoring the importance of the electrocharged filtration mechanism.
\begin{figure*} \begin{center} \includegraphics[width=1.75\columnwidth]{Fig10.pdf} \end{center} \caption{Penetration P(t) (\%) versus time (hours) for (a) PP fabric mounted on a taped SM, (b) PP fabric mounted on a 3D printed Montana mask holder, (c) PP-PS fabric mounted on a taped SM, and (d) PP-PS fabric mounted on a 3D printed Montana mask holder.} \label{fig10} \end{figure*} Following these baseline tests, results are presented for tests performed on the two mask designs using isotactic PP fabrics. Figure~\ref{fig10}a presents results for the first design, where a filtration fabric layer manufactured in-house was placed on the inner side of a commercial surgical mask (SM). Tests were performed using an electrocharged layer as well as a fabric layer whose charge was depleted with a static eliminator as described earlier. All tests were performed by taping the SM to the manikin head. From fig.~\ref{fig10}a it is seen that the commercial SM with a non-electrocharged layer exhibits the same penetration percentage of around 12\% at both flow rates (30 and 85 L/min), and the results are statistically identical to the ones shown in fig.~\ref{fig9}b for taped masks. However, adding the electrocharged filtration layer leads to a marked decrease in the penetration of aerosol particles, down to around 7\% at both flow rates. Unfortunately, at 7\% penetration, it still falls short of the N95 FR requirement of $\le 5\%$. Be that as it may, at just 7\% penetration the proposed solution of an electrocharged fabric layer in a commercial SM provides ample protection for people using face masks in non-critical settings, but falls short of the protection deemed necessary for, say, emergency response and healthcare services. Figure~\ref{fig10}b presents filtration test results for the 3D printed Montana mask design meant to achieve the N95 FR tight facial fit. Four electrocharged filtration fabric layers of 0.2 - 0.3 mm thickness each were employed here.
Charged aerosol particles exhibited slightly less penetration at 3.6\% (30 L/min) and 4.35\% (85 L/min) relative to neutral aerosols with 4.6\% (30 L/min) and 5.2\% (85 L/min). This trend is similar to the one observed for the commercial N95 FR in fig.~\ref{fig9}a, but the penetration values for fabrics manufactured using the CC method were unfortunately slightly higher than the commercial N95 FR. The penetration values do fall within the 5\% requirement of the N95 standard for all cases except neutral aerosols at 85 L/min, which misses the target by a meagre 0.16\%. However, in natural work settings, one expects to be exposed to both charged and neutral aerosols. Therefore, when taken together, the mean penetration for all aerosols at 30 L/min flow rate is 4.12\% and at 85 L/min is 4.86\%, i.e. just below the 5\% penetration requirement. Ergo, the 3D printed Montana mask design fit with electrocharged filtration fabrics barely manages to meet the N95 standard, but it is emphasized once again that the pressure drop across the filtration fabrics could not be measured, nor could the oscillatory inhalation-exhalation cycles present in normal respiration be simulated with the in-house filtration test setup. Filtration tests were also performed under identical conditions using PP-PS fabrics. Figure~\ref{fig10}c shows results for SMs with and without a PP-PS electrocharged fabric layer and should be compared against the PP results in fig.~\ref{fig10}a. The penetration percentages were identical at both flow rates for the non-electrocharged layer, suggesting the slightly higher tortuosity in the PP-PS fabric (fig.~\ref{fig6}c) relative to the PP fabric (fig.~\ref{fig6}b) had no measurable impact on filtration quality. However, the SM with a PP-PS electrocharged filtration layer performed marginally better by $\sim 1\%$ (fig.~\ref{fig10}c) compared to the identical test with PP fabric (fig.~\ref{fig10}a).
A similar improvement in penetration percentage was observed for the 3D printed Montana mask design (see fig.~\ref{fig10}d). Tests for both charged and neutral aerosols at 30 as well as 85 L/min flow rate yielded penetration values in the same range of 3 - 4\%, modulo the variability one observes in the data (fig.~\ref{fig10}d). Ergo, the PP-PS fabrics surpassed the N95 requirement to within the testing limitations. Since the minor structural differences (tortuosity) between PP and PP-PS produced no measurable difference in the tests, it can only be surmised that the higher performance of PP-PS fabrics (relative to PP) comes from electrocharging, as is confirmed in the following subsection. The mean penetration and standard deviation are listed for all our tests in Table~\ref{table3}, with all values rounded off to the second decimal place. \begin{table*} \caption{Penetration percentage values measured from filtration tests. Raw time series for all data are available at Ref.~\cite{BandiN95-Data1}} \begin{tabular}{cc} \hline Material & Penetration (\%): Mean $\pm$ Stdev\\ \hline N95 FR Charged (30 L/min) [Fig.~\ref{fig9}a] & 1.43 $\pm$ 0.03\\ N95 FR Neutral (30 L/min) [Fig.~\ref{fig9}a] & 3.29 $\pm$ 0.02\\ N95 FR Charged (85 L/min) [Fig.~\ref{fig9}a] & 1.54 $\pm$ 0.03\\ N95 FR Neutral (85 L/min) [Fig.~\ref{fig9}a] & 3.37 $\pm$ 0.03\\ SM No Tape (30 L/min) [Fig.~\ref{fig9}b] & 40.72 $\pm$ 0.85\\ SM Taped (30 L/min) [Fig.~\ref{fig9}b] & 11.98 $\pm$ 0.73\\ SM No Tape (85 L/min) [Fig.~\ref{fig9}b] & 47.18 $\pm$ 0.45\\ SM Taped (85 L/min) [Fig.~\ref{fig9}b] & 13.13 $\pm$ 0.57\\ PP: SM + Non-Electrocharged Layer (30 L/min) [Fig.~\ref{fig10}a] & 12.04 $\pm$ 0.26\\ PP: SM + Non-Electrocharged Layer (85 L/min) [Fig.~\ref{fig10}a] & 12.26 $\pm$ 0.31\\ PP: SM + Electrocharged Layer (30 L/min) [Fig.~\ref{fig10}a] & 7.19 $\pm$ 0.32\\ PP: SM + Electrocharged Layer (85 L/min) [Fig.~\ref{fig10}a] & 7.3 $\pm$ 0.37\\ PP: 3D Charged (30 L/min) [Fig.~\ref{fig10}b] & 3.62 $\pm$ 0.13\\ PP: 3D
Neutral (30 L/min) [Fig.~\ref{fig10}b] & 4.65 $\pm$ 0.2\\ PP: 3D Charged (85 L/min) [Fig.~\ref{fig10}b] & 4.34 $\pm$ 0.17\\ PP: 3D Neutral (85 L/min) [Fig.~\ref{fig10}b] & 5.14 $\pm$ 0.27\\ PP-PS: SM + Non-Electrocharged Layer (30 L/min) [Fig.~\ref{fig10}c] & 12.31 $\pm$ 0.31\\ PP-PS: SM + Non-Electrocharged Layer (85 L/min) [Fig.~\ref{fig10}c] & 12.1 $\pm$ 0.24\\ PP-PS: SM + Electrocharged Layer (30 L/min) [Fig.~\ref{fig10}c] & 6.67 $\pm$ 0.19\\ PP-PS: SM + Electrocharged Layer (85 L/min) [Fig.~\ref{fig10}c] & 6.22 $\pm$ 0.22\\ PP-PS: 3D Charged (30 L/min) [Fig.~\ref{fig10}d] & 2.97 $\pm$ 0.22\\ PP-PS: 3D Neutral (30 L/min) [Fig.~\ref{fig10}d] & 3.53 $\pm$ 0.25\\ PP-PS: 3D Charged (85 L/min) [Fig.~\ref{fig10}d] & 3.17 $\pm$ 0.29\\ PP-PS: 3D Neutral (85 L/min) [Fig.~\ref{fig10}d] & 3.48 $\pm$ 0.25\\ \hline \end{tabular} \label{table3} \end{table*} {\it Nanoparticle filtration test: }An important question that arose was whether the masks were capable of trapping individual viral particles, such as the SARS-Cov-2 virion. The SARS-Cov-2 virion is estimated to have a diameter between 50 - 200 nm: Ref.~\cite{Kim} places the SARS-Cov-2 virion at 70 - 90 nm and Ref.~\cite{Chen} places it at 50 - 200 nm. Although the MPPS for N95 FRs is set at 300 nm (0.3 $\mu$m), some studies have suggested the MPPS for N95 FRs occurs in the range of 40 - 60 nm \cite{Balazy, Rengasamy}. In particular, experiments by Balazy et al \cite{Balazy} using the MS2 virion -- a bacteriophage with single-stranded RNA comprising 3569 nucleotides that infects male {\it E. coli} bacteria -- with an approximate diameter of 27.5 nm show that N95 FRs can achieve superlative filtration for particle diameters smaller than their rated MPPS. This indicates that N95 FRs would perform equally well at trapping an individual SARS-Cov-2 virion, whose lower bound on diameter is nearly twice that of MS2.
Having placed the approximate size range for the SARS-Cov-2 virion, the next question is whether the SARS-Cov-2 virion possesses a surface electric charge that permits it to be attracted to an electrocharged filtration layer. The net charge would be determined by the sum of the charges exposed on the virion surface and can be calculated from the protein structure(s) at the surface. However, in order for the virus to be trapped by the electrocharged filters, there are two small issues to contend with. Firstly, individual SARS-Cov-2 virions are not likely to be airborne. Instead, like most enveloped viruses, the virion is hydrated or in solution and is transmitted by aerosol droplets. When dehydrated, the virion's lipid membrane collapses and its proteins become denatured within a certain time depending on temperature etc., i.e., it is rendered inactive. Despite the low probability of encountering individual SARS-Cov-2 virions, it was still a question worthy of an investigative test. Secondly, the net charge of a protein in solution (water, typically) depends on the pH. Every ionizable amino acid has a pKa (the negative log of the acid dissociation constant), an equilibrium constant indicating the pH at which the protonated and deprotonated forms of that group are balanced. There are pKa values for each chemical group. If the pH is lower than the pKa, that amino acid becomes protonated, if it can. At low pH, there is an abundance of positively charged protons, and even some acids can become protonated. One can approximate the net charge of a linear peptide from its sequence by providing a pH for the solvent to tools on the web (e.g. see Ref.~\cite{protcalc}), arriving at an estimate for the surface charge. However, this does not take into account that structures are folded in 3D and some charges are hidden inside. Although it is only a rough value, it does still provide an estimate because residues buried inside are often hydrophobic, i.e. not charged.
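The sequence-based estimate just described can be sketched with the standard Henderson-Hasselbalch treatment, in which each ionizable group contributes a fractional charge set by its pKa and the solvent pH. The pKa values below are common textbook figures (different web tools use slightly different sets), so, as cautioned above, the result is only a rough estimate:

```python
# Fractional charges from Henderson-Hasselbalch: basic groups carry
# +1/(1+10^(pH-pKa)); acidic groups carry -1/(1+10^(pKa-pH)).
# pKa values are typical textbook figures; tools differ slightly.
PKA_BASIC = {"K": 10.5, "R": 12.5, "H": 6.0, "Nterm": 9.0}
PKA_ACID = {"D": 3.9, "E": 4.1, "C": 8.3, "Y": 10.1, "Cterm": 2.0}

def net_charge(seq: str, ph: float) -> float:
    """Approximate net charge of a linear peptide at a given pH."""
    q = 1.0 / (1.0 + 10.0 ** (ph - PKA_BASIC["Nterm"]))   # N-terminus
    q -= 1.0 / (1.0 + 10.0 ** (PKA_ACID["Cterm"] - ph))   # C-terminus
    for aa in seq:
        if aa in PKA_BASIC:
            q += 1.0 / (1.0 + 10.0 ** (ph - PKA_BASIC[aa]))
        elif aa in PKA_ACID:
            q -= 1.0 / (1.0 + 10.0 ** (PKA_ACID[aa] - ph))
    return q

# An aspartate-rich peptide is net negative at neutral pH (value near -3):
print(round(net_charge("DDKDD", 7.0), 2))
```

This is the flat-sequence approximation discussed in the text; it ignores 3D folding and buried residues, exactly the caveat noted above.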
There are several online tools available to calculate net charges of folded protein structures as well as their surface charge distribution, e.g. DELPHI \cite{delphi}. The isoelectric point pI is the pH at which the net charge is neutral: below the pI the net charge is positive, above it negative. Ref.~\cite{Michen} lists some examples of viruses, most of their pIs being $< 7$, suggesting that most viruses would be net negatively charged at neutral pH; this suggests that the SARS-Cov-2 virion is also net negative. The main surface protein for SARS-Cov-2 is the spike protein \cite{spike}. It is glycosylated, thus has some amino acids, mostly asparagines, chemically attached to sugars that can be charged, like sialic acid, which would also contribute to an overall negative charge. When the sequence of the spike 6VXX is input into Ref.~\cite{isoelectric}, a pI of 5.8 is obtained, thus net negative at neutral pH. Commercial filtration test systems permit detection of individual virion penetration as these systems can identify particle diameters. In the absence of such a system, and lacking expertise in handling even inactivated virions, a surrogate technique using fluorescently tagged nanoparticles was employed. Polystyrene nanospheres fluorescently tagged with Dragon Green (480 nm absorption wavelength, 520 nm emission wavelength) with mean diameter 50 nm $\pm$ 10 nm (Manufacturer: Bangs Labs, Catalog No. FSDG001, Lot No. 14092) were used as a substitute for the SARS-Cov-2 virion. Since the polystyrene nanospheres were to mimic virions, the aqueous solution was first dried and the nanoparticle dust was exposed to a brief burst of corona discharge. Unsure whether the corona discharge exposure would dissociate fluorescent markers from the surface of the nanoparticles, a quick fluorescence test was performed, which confirmed that the fluorescent nanoparticles performed to specification.
The filtration test was performed on the 3D mask design with nanoparticle dust replacing the mist generator in fig.~\ref{fig8}. Measurement of penetration was meaningless because the PM2.5 monitors could also detect dust particles from ambient air entering the confining box. Instead, the test was conducted at 30 L/min flow rate for a one hour duration, and the electrocharged filtration layers were then interrogated under a confocal microscope (Nikon Andor Revolution WD spinning disk confocal, laser wavelength 455 nm) for fluorescence signals from the trapped nanoparticles, if any. Figure~\ref{fig11} presents the fluorescence emission signal from polystyrene particles trapped along fiber surfaces of the electrocharged layer as measured under the confocal at a depth of 500 nm from the outer surface (exposed to the environment). It is observed that the fluorescence signal occurs in clusters because the nanoparticles were not dispersed in aqueous solution as usually done in experiments; instead the solution was dried to obtain nanodust that comprised clumps of particles. Alternatively, it is possible the charged particles were preferentially attracted to charge centers that acted as traps, which begs further investigation. This was a proof-of-principle test to check whether the electrocharged fabrics were capable of trapping charged particles of dimensions comparable to the SARS-Cov-2 virion, and further quantitative analysis was not undertaken for the purposes of the current study.
\begin{figure} \begin{center} \includegraphics[width=\columnwidth]{Fig11.pdf} \end{center} \caption{(Color online) Confocal image of fluorescence emission signal from polystyrene nanospheres trapped along fiber surfaces at a depth of 500 nm from the outer surface of the electrocharged filtration layer exposed to the environment.} \label{fig11} \end{figure} \subsection{Charge} Electrostatic charge measurements were made with a non-contact electrostatic potentiometer (KSS-2000 Digital Electrostatic Potentiometer, Manufacturer: Kasuga Denki Inc.) with a piezoelectric transducer capable of measuring electrostatic voltage at a distance of 50 - 100 nm from the sample. The measurement readings for the various fabric samples are listed in Table~\ref{table4}. The mean and standard deviation are taken over a set of 100 measurements conducted at various locations on several samples of each fabric material. \begin{table*} \caption{Electrostatic potential of various fabric samples.} \begin{tabular}{cc} \hline Material & Electrostatic potential (kV)\\ \hline N95 FR & 11.1 $\pm$ 0.9\\ SM & 0.4 $\pm$ 0.3\\ PP & 5.3 $\pm$ 0.44\\ Isothermal charged PP & 7.4 $\pm$ 0.23\\ PS & 8.1 $\pm$ 0.28\\ Isothermal charged PS & 10.8 $\pm$ 0.41\\ PP-PS & 8.4 $\pm$ 0.36\\ Isothermal charged PP-PS & 10.1 $\pm$ 0.3\\ \hline \end{tabular} \label{table4} \end{table*} The commercial N95 FR fabric material, which serves as the comparative standard for this study, clearly holds the highest electrostatic potential at 11.1 $\pm$ 0.9 kV (see Table~\ref{table4}). The electrostatic potential quoted here for the commercial N95 FR was measured after extracting the electrocharged fabric layer sandwiched between the neutral outer layers. For this reason, the value quoted here will be higher than the actual potential an aerosol or dust particle experiences, because the outer neutral layers form a dielectric barrier between the electrocharged layer and the ambient environment.
Be that as it may, for the purposes of this study, the electrostatic potential of the commercial N95 FR's bare electrocharged layer forms the right comparative standard for fabrics manufactured with the CC method. SM fabrics had the lowest electrostatic potential at 0.4 $\pm$ 0.3 kV (see Table~\ref{table4}) and were practically neutral, given that the standard deviation (0.3 kV) nearly equals the mean (0.4 kV) electrostatic potential. With the commercial N95 FR's electrocharged layer and SM fabrics as the two bounds of the comparative standard, it becomes possible to establish a correspondence between the electrostatic potential and filtration efficiency for all the fabrics studied here. Firstly, the measured electrostatic potential listed in Table~\ref{table4} shows that PS fabrics possess more charge than isotactic PP fabrics, consistent with prior independent measurements \cite{PPPScharge}. Although both PP and PS are poor conductors in pure form, PS molecules contain benzene rings which improve charge concentration \cite{PolyHBook}; this is not so for PP molecules, which lack the benzene ring. However, several extraneous factors such as purity and thermal treatment (recrystallization in PP versus physical aging in PS) also influence charge retention. Indeed, it is known that N95 FR electrocharged layers embed dopants \cite{dopePP1, dopePP2} for charge enhancement, particularly in polypropylene. Since fabrication with simple materials and methods was the design strategy, such charge enhancement using dopants was not explored in the present work. Secondly, isothermal charging of the manufactured fabrics increases the electrostatic potential of all samples by roughly 25\% (for PP-PS) to 40\% (for isotactic PP), thereby establishing the efficacy of the secondary isothermal charging step employed in the present study.
Finally, a one-to-one correspondence is observed between electrostatic potential and the filtration efficiency obtained from the penetration tests. The difference in filtration efficiency observed between the PP and PP-PS samples could not be attributed to the slight structural difference in tortuosity of the porous fiber matrix; rather, PP-PS fabrics hold a higher electrostatic potential than PP fabrics. This explains why PP-PS 3D printed Montana mask respirators consistently met the N95 requirement of $\le 5\%$ penetration, whereas PP 3D printed Montana masks could barely meet the requirement, with neutral particles in particular falling slightly above it. \section{Conclusions} In summary, a set of general design principles has been presented for constructing one's own fabrication setup and manufacturing electrocharged filtration fabrics using the CC principle, together with some potential designs for using these filtration fabrics in face masks. In doing so, attention has been paid to utilising easily replicable methods and two commonly available candidate materials, {\it viz.} PP and PS, and their blends. Electrocharged filtration fabrics must meet two requirements: structural heterogeneity for filtration through inertial impaction and diffusion, and high charge retention for electrocharged filtration. Fabric heterogeneity comes in several forms, including a non-woven fabric comprised of fibers enmeshed in a random manner, fiber tortuosity, which affects the fabric porosity, and variability of fiber radii, which controls the surface-to-volume ratio. The CC method can easily achieve a disordered non-woven fabric of enmeshed fibers, but tortuosity and variability of fiber radii are material-dependent parameters that enter {\it via} the operating temperature.
Whereas isotactic PP fabrics exhibited all three forms of heterogeneity (fiber radius 4.54 $\pm$ 6.2 $\mu$m), they possessed less charge (5.3 $\pm$ 0.44 kV without and 7.4 $\pm$ 0.23 kV with isothermal charging) relative to PS fabrics (8.1 $\pm$ 0.28 kV without and 10.8 $\pm$ 0.41 kV with isothermal charging). On the other hand, PS fabrics resulted in thicker fibers of lower radius variability (18.5 $\pm$ 2.8 $\mu$m), albeit with higher tortuosity than PP fabrics. Achieving an optimal mix of heterogeneity and charge retention by blending PP and PS (fiber radii 4.9 $\pm$ 5.1 $\mu$m and charge of 8.4 $\pm$ 0.36 kV without and 10.1 $\pm$ 0.3 kV with isothermal charging) resulted in fabrics that were able to meet the N95 FR standard (fiber radii 4.1 $\pm$ 4.7 $\mu$m and electrostatic potential 11.1 $\pm$ 0.9 kV), at least within the limitations of the filtration test setup designed and constructed in-house. The current work maintained an exclusive focus on providing a proof of principle for the process and underlying mechanics of the CC principle as a viable method for fabricating electrocharged filtration fabrics. Further work to be explored in the future includes researching more commonly available fabric materials, such as polyethylene terephthalate, universally available as PET bottles for beverages, and biaxially-oriented polyethylene terephthalate (BoPET), better known as Mylar and available as plastic bags and food wrapping. A more detailed characterisation of the structural and electrocharging properties of fabrics obtained via the CC method is also needed. Another avenue that begs exploration is the introduction of metallic nanoparticles at the fabrication stage to embed them into the fabrics for charge enhancement, and possibly even decontamination, without endangering users, as these fabrics are meant to be worn around the mouth and nose for safe respiration.
Although the COVID-19 pandemic provided the impetus for the current effort, face masks may well become a mainstay of human social interactions going forward. On the order of two viruses jump from animals to humans per year \cite{VJump}, with most animals exhibiting high viral richness showing a propensity for close human contact \cite{V2}. If even a small fraction of those viruses results in asymptomatic viral shedding in human exhalation (breathing, coughing, or sneezing) \cite{V1}, face mask protection at the population level becomes a necessary means of protection. Decentralized local manufacture of face masks with high filtration efficiency from commonly available materials and simple designs could potentially alleviate global supply chain disruptions during such times, as witnessed during the COVID-19 pandemic \cite{manup}. It is hoped that this effort will help communities with face mask protection during such pandemics. \acknowledgments This work was supported by the Nonlinear and Non-equilibrium Physics Unit, OIST Graduate University. MMB first learned about the cotton candy method from a passing remark by Dr. L. Mahadevan in 2010. MMB acknowledges advice from Dr. M. Wolf on COVID-19 surface charge characteristics, Dr. S. Ghosh on using PM2.5 air quality monitors for filtration testing, and help from Dr. K. Deasy with 3D printing, N. Ishizu with scanning electron microscopy, Dr. H. B. Kang with characterisation, and the OIST Imaging Section with confocal microscopy. Dr. S. Velankar is gratefully acknowledged for critical reading of the manuscript and several helpful suggestions for its improvement.
\section{Introduction} With the sheer volume of online information, much attention has been given to data-driven recommender systems. Such systems automatically guide users to discover products or services of personal interest from a large pool of possible options. Numerous recommendation techniques have been developed; the three main categories are collaborative filtering methods, content-based methods and hybrid methods~\cite{bobadilla2013recommender,lu2015recommender}. In this paper, we aim to develop a method that produces a ranked list of $n$ movies for a user at a given moment (top-$n$ movie recommendation) by exploiting both historical user-movie interactions and the content information of movies. Matrix factorization (MF) \cite{koren2009matrix} is one of the most successful techniques in the practice of recommendation due to its simplicity, attractive accuracy and scalability. It has been used in a broad range of applications such as recommending movies, books, web pages, relevant research and services. The matrix factorization technique is usually effective because it discovers the latent features underpinning the multiplicative interactions between users and movies. Specifically, it models the user preference matrix approximately as a product of two lower-rank latent feature matrices representing user profiles and movie profiles respectively. Despite its appeal, matrix factorization does not explicitly consider the temporal variability of data~\cite{wu2017recurrent}. Firstly, the popularity of a movie may change over time. For example, movie popularity booms or fades, which can be triggered by external events such as the appearance of an actor in a new movie. Secondly, users may change their interests and baseline ratings over time. For instance, a user who tended to rate an average movie as ``4 stars'' may now rate such a movie as ``3 stars''.
Recently, the recurrent neural network (RNN)~\cite{hochreiter1997long} has gained significant attention by considering such temporal dynamics for both users and movies, and has achieved high recommendation quality~\cite{wu2016joint,wu2017recurrent}. The basic idea of these RNN-based methods is to formulate recommendation as a sequence prediction problem: they take the latest observations as input, update the internal states, and make predictions based on the newly updated states. As shown in~\cite{devooght2016collaborative}, such prediction based on short-term dependencies is likely to improve the recommendation diversity. More recent work~\cite{wu2017recurrent} reveals that matrix factorization based and RNN-based recommendation approaches perform well for reasons that are complementary to each other. Specifically, matrix factorization approaches make movie predictions based on users' long-term interests, which change very slowly with respect to time. On the contrary, RNN approaches predict which movie the user will consume next, respecting the dynamics of users' behaviors and movies' attributes in the short term. This motivates us to devise a joint approach that takes advantage of both matrix factorization and RNN, exploiting both long-term and short-term associations among users and movies. Furthermore, most existing recommender systems take into account only the users' past behaviors when making recommendations. Compared with the tens of thousands of movies in the corpus, the historical rating set is too sparse to learn a well-performing model. It is therefore desirable to exploit the content information of movies for recommendation. For example, movie posters reveal a great amount of information for understanding movies and users, as demonstrated in~\cite{zhao2016matrix}. Such a poster is usually the first contact that a user has with a movie, and plays an essential role in the user's decision to watch it or not.
When a user is watching a movie presented with cold, blue and mysterious visual effects, he/she may be interested in receiving recommendations for movies with similar styles, rather than others with the same actors or subject~\cite{zhao2016matrix}. These visual features of movies are usually captured by the corresponding posters. In this paper, we propose a novel LSIC model, which leverages \textbf{L}ong and \textbf{S}hort-term \textbf{I}nformation in \textbf{C}ontent-aware movie recommendation using adversarial training. The LSIC model employs an adversarial framework to combine the MF and RNN based models for top-$n$ movie recommendation, taking the best of each to improve the final recommendation performance. In the adversarial process, we simultaneously train two models: a generative model $G$ and a discriminative model $D$. In particular, the generator $G$ takes user $i$ and time $t$ as input, and predicts the recommendation list for user $i$ at time $t$ based on the historical user-movie interactions. We implement the discriminator $D$ via a siamese network that incorporates the long-term and session-based ranking models in a pair-wise scenario. The two point-wise networks of the siamese network share the same set of parameters. The generator $G$ and the discriminator $D$ are optimized with a minimax two-player game. The discriminator $D$ tries to distinguish the real high-rated movies in the training data from the recommendation lists generated by the generator $G$, while the training procedure of the generator $G$ is to maximize the probability of $D$ making a mistake. This adversarial process can eventually adjust $G$ to generate plausible and high-quality recommendation lists. In addition, we integrate poster information of movies to further improve the performance of movie recommendation, which is especially valuable when few ratings are available.
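The minimax game just described can be sketched as a toy alternating-update loop. This is an illustrative stand-in, not the paper's implementation: the generator is reduced to a softmax policy over a small movie set, the siamese discriminator to a single score per movie trained on (real, generated) pairs, and the generator is updated by policy gradient with the discriminator's score as reward; all sizes and learning rates are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
M = 20                       # toy movie corpus size
true_pos = {2, 5, 11}        # movies the user actually rated highly

theta = np.zeros(M)          # generator: softmax policy over movies
phi = np.zeros(M)            # discriminator: one score per movie

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

for step in range(2000):
    # --- D step: push a real high-rated movie up, G's sample down ---
    probs = softmax(theta)
    fake = rng.choice(M, p=probs)           # movie proposed by G
    real = rng.choice(list(true_pos))       # ground-truth positive
    margin = 1.0 / (1.0 + np.exp(phi[real] - phi[fake]))  # pairwise logistic grad
    phi[real] += 0.1 * margin
    phi[fake] -= 0.1 * margin
    # --- G step: REINFORCE, D's score acts as the reward ---
    reward = phi[fake]
    grad = -probs
    grad[fake] += 1.0                        # d log pi(fake) / d theta
    theta += 0.05 * reward * grad
```

After training, the generator's probability mass concentrates on the true high-rated movies, while the discriminator scores them above the rest, mirroring the intended equilibrium of the two-player game.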
We summarize our main contributions as follows: \begin{itemize} \item To the best of our knowledge, we are the first to use the GAN framework to leverage the MF and RNN approaches for top-$n$ recommendation. This joint model adaptively adjusts how the contributions of the long-term and short-term information of users and movies are mixed together. \item We propose hard and soft mixture mechanisms to integrate MF and RNN. We use the hard mechanism to calculate the mixing score straightforwardly and explore several soft mechanisms to learn the temporal dynamics with the help of the long-term profiles. \item Our model uses reinforcement learning to optimize the generator $G$ for generating highly rewarded recommendation lists. Thus, it effectively bypasses the non-differentiable task metric issue by directly performing policy gradient updates. \item We automatically crawl the posters of the given movies, and explore the potential of integrating poster information to improve the accuracy of movie recommendation. The release of the collected posters would push forward research on integrating content information into movie recommender systems. \item To verify the effectiveness of our model, we conduct extensive experiments on two widely used real-life datasets: Netflix Prize Contest data and MovieLens data. The experimental results demonstrate that our model consistently outperforms the state-of-the-art methods. \end{itemize} The rest of the paper is organized as follows. In Section 2, we review the related work on recommender systems. Section 3 presents the proposed adversarial learning framework for movie recommendation in detail. In Section 4, we describe the experimental data, implementation details, evaluation metrics and baseline methods. The experimental results and analysis are provided in Section 5. Section 6 concludes this paper. \section{Related Work} Recommender systems are an active research field. The authors of~\cite{bobadilla2013recommender,lu2015recommender} describe most of the existing techniques for recommender systems. In this section, we briefly review the following major approaches for recommender systems that are most related to our work. \paragraph{Matrix factorization for recommendation} Modeling the long-term interests of users, the matrix factorization method and its variants have grown to become dominant in the literature \cite{rennie2005fast,koren2008factorization,koren2009matrix,hernando2016non,he2016fast}. In standard matrix factorization, the recommendation task can be formulated as inferring the missing values of a partially observed user-item matrix \cite{koren2009matrix}. Matrix factorization techniques are effective because they are designed to discover the latent features underlying the interactions between users and items. \citeauthor{srebro2005maximum} \shortcite{srebro2005maximum} suggested the Maximum Margin Matrix Factorization (MMMF), which used low-norm instead of low-rank factorizations.
\citeauthor{mnih2008probabilistic} \shortcite{mnih2008probabilistic} presented the Probabilistic Matrix Factorization (PMF) model, which characterized the user preference matrix as a product of two lower-rank user and item matrices. The PMF model was especially effective at making better predictions for users with few ratings. \citeauthor{he2016fast} \shortcite{he2016fast} proposed a new MF method which considers the implicit feedback for on-line recommendation. In \cite{he2016fast}, the weights of the missing data were assigned based on the popularity of items. To exploit the content of items and alleviate the sparsity issue in recommender systems, \cite{zhao2016matrix} presented a model for movie recommendation that uses additional visual features (e.g., posters and still frames) to better understand movies, further improving the performance of movie recommendation. \paragraph{Recurrent neural network for recommendation} The traditional MF methods for recommender systems are based on the assumption that user interests and movie attributes are nearly static, which is, however, not consistent with reality. \citeauthor{koren2010collaborative} \shortcite{koren2010collaborative} discussed the effect of temporal dynamics in recommender systems and proposed a temporal extension of SVD++ (called TimeSVD++) to explicitly model the temporal bias in data. However, the features used in TimeSVD++ were hand-crafted and computationally expensive to obtain. Recently, there has been increasing interest in employing recurrent neural networks to model temporal dynamics in recommender systems. For example, \citeauthor{hidasi2015session} \shortcite{hidasi2015session} applied a recurrent neural network (i.e., GRU) to session-based recommender systems. This work treats the first item a user clicked as the initial input of the GRU. Each follow-up click of the user would then trigger a recommendation depending on all of the previous clicks.
\citeauthor{wu2016recurrent} \shortcite{wu2016recurrent} proposed a recurrent neural network to perform time-heterogeneous feedback recommendation. \citeauthor{wu2017recurrent} \shortcite{wu2017recurrent} used an LSTM autoregressive model for the user and movie dynamics and employed matrix factorization to model the stationary components that encode fixed properties. Different from their work, we use the GAN framework to leverage the MF and RNN approaches for top-$n$ recommendation, aiming to generate plausible and high-quality recommendation lists. To address the cold-start problem in recommendation, \cite{cui2016visual} presented a visual and textural recurrent neural network (VT-RNN), which simultaneously learned the sequential latent vectors of users' interests and captured the content-based representations that contributed to addressing the cold start. \paragraph{Generative adversarial network for recommendation} In parallel, previous work has demonstrated the effectiveness of the generative adversarial network (GAN) \cite{goodfellow2014generative} in various tasks such as image generation \cite{reed2016generative,arjovsky2017wasserstein}, image captioning \cite{chen2017show}, and sequence generation~\cite{yu2017seqgan}. The most related work to ours is \cite{wang2017irgan}, which proposed a novel IRGAN mechanism to iteratively optimize a generative retrieval component and a discriminative retrieval component. IRGAN reported impressive results on the tasks of web search, item recommendation, and question answering. Our approach differs from theirs in several aspects. First, we combine the MF approach and the RNN approach with GAN, exploiting the performance contributions of both approaches. Second, IRGAN does not attempt to estimate future behavior, since its experimental data is split randomly; in effect, it uses future trajectories to infer historical records, which is of little use in real-life applications.
Third, we incorporate poster information of movies to deal with the cold-start issue and boost the recommendation performance. \section{Our Model} \begin{table}[t] \small \centering \caption{Notation list. We use superscript $u$ to annotate parameters related to a user, and superscript $m$ to annotate parameters related to a movie.} \label{tab:symbol} \begin{tabular}{l|l} \toprule Symbol & Description\\ \midrule $R$ & the user-movie rating matrix\\ $U$, $M$ & the number of users and movies\\ $r_{ij}$ & rating score of user $i$ on movie $j$\\ $r_{ij,t}$ & rating score of user $i$ on movie $j$ at time $t$\\ $\mathbf e^{u}_{i}$ & MF user factors for user $i$\\ $\mathbf e^{m}_{j}$ & MF movie factors for movie $j$\\ $b^{u}_{i}$ & bias of user $i$ in MF and RNN hybrid calculation \\ $b^{m}_{j}$ & bias of movie $j$ in MF and RNN hybrid calculation\\ $\mathbf h^{u}_{i,t}$ & LSTM hidden-vector at time $t$ for user $i$\\ $\mathbf h^{m}_{j,t}$ & LSTM hidden-vector at time $t$ for movie $j$\\ $\mathbf z^{u}_{i,t}$ & the rating vector of user $i$ at time $t$ (LSTM input)\\ $\mathbf z^{m}_{j,t}$ & the rating vector of movie $j$ at time $t$ (LSTM input)\\ $\alpha^{i}_{t}$ & attention weight of user $i$ at time $t$\\ $\beta^{j}_{t}$ & attention weight of movie $j$ at time $t$\\ $m_{+}$ & index of a positive (high-rating) movie drawn from the \\ & entire positive movie set $\mathcal M_+$ \\ $m_{-}$ & index of a negative (low-rating) movie randomly chosen from the \\
&entire negative movie set $\mathcal M_-$\\ $m_{g,t}$ & index of a movie chosen by generator $G$ at time $t$\\ \bottomrule \end{tabular} \vspace{-0.3cm} \end{table} Suppose there is a sparse user-movie rating matrix $R$ that consists of $U$ users and $M$ movies. Each entry $r_{ij,t}$ denotes the rating of user $i$ on movie $j$ at time step $t$. The rating is represented by numerical values from 1 to 5, where a higher value indicates a stronger preference. Instead of predicting the rating of a specific user-movie pair as is done in~\cite{adomavicius2005toward,mcnee2006being}, the proposed LSIC model aims to provide users with ranked lists of movies (top-$n$ recommendation) \cite{liu2008eigenrank}. In this section, we elaborate on each component of the LSIC model for content-aware movie recommendation. The main notations of this work are summarized in Table \ref{tab:symbol} for clarity. The LSIC model employs an adversarial framework to combine the MF and RNN based models for top-$n$ movie recommendation. An overview of our proposed architecture and its data flow is illustrated in Figure \ref{fig:1}. In the adversarial process, we simultaneously train two models: a generative model $G$ and a discriminative model $D$. \begin{figure*}[h] \centering \includegraphics[ width=5.25in]{figures/architecture-crop.pdf} \caption{The architecture of the long-term and session-based ranking model with adversarial network. } \label{fig:1} \end{figure*} \subsection{Matrix Factorization (MF)} The MF framework \cite{mnih2008probabilistic} models the long-term states (global information) of both users ($\mathbf e^u$) and movies ($\mathbf e^m$). In its standard setting, the recommendation task can be formulated as inferring the missing values of a partially observed user-movie rating matrix $R$.
The formulation of MF is given by: \begin{equation} \label{eq:mf} \argmin_{\mathbf e^u,\mathbf{e}^m} \sum_{i,j} I_{ij}\left(r_{ij}-\rho\left((\mathbf e^{u}_i)^T \mathbf e^{m}_{j}\right)\right)^2 +{\lambda^{u}}\|\mathbf{e}^u\|_{F}^2 +{\lambda^{m}}\|\mathbf{e}^m\|_{F}^2 \end{equation} where $\mathbf{e}^{u}_{i}$ and $\mathbf{e}^{m}_{j}$ represent the user and movie latent factors in a shared $d$-dimensional space, respectively. $r_{ij}$ denotes user $i$'s rating on movie $j$. $I_{ij}$ is an indicator function that equals 1 if $r_{ij}>0$, and 0 otherwise. $\lambda^{u}$ and $\lambda^{m}$ are regularization coefficients, and $\rho(\cdot)$ is a logistic scoring function that bounds the range of outputs. In most recommender systems, matrix factorization techniques \cite{koren2009matrix} recommend movies based on estimated ratings. Even though the predicted ratings can be used to rank the movies, it is known that they do not provide the best results for top-$n$ recommendation, because minimizing the squared-error objective does not perfectly align with the goal of optimizing the ranking order. In this paper, we apply MF to ranking prediction (top-$n$ recommendation) directly, similar to \cite{wang2017irgan}. \subsection{Recurrent Neural Network (RNN)} The RNN-based recommender system focuses on modeling session-based trajectories instead of global (long-term) information \cite{wu2017recurrent}. It predicts future behaviors and provides users with a ranking list given the users' past history. The main purpose of using an RNN is to capture the time-varying states of both users and movies. In particular, we use the LSTM cell as the basic RNN unit. Each LSTM unit at time $t$ consists of a memory cell $c_t$, an input gate $i_t$, a forget gate $f_t$, and an output gate $o_t$.
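The MF objective of Eq.~(\ref{eq:mf}) can be minimized by stochastic gradient descent over the observed entries. The NumPy sketch below is illustrative only: ratings are synthetic and normalized to $(0,1)$ so that the logistic $\rho$ can match them, a single regularization coefficient stands in for $\lambda^u$ and $\lambda^m$, and all sizes are toy values.

```python
import numpy as np

rng = np.random.default_rng(1)
U, M, d = 8, 12, 4        # users, movies, latent dimension
lam = 0.01                # regularization (lambda^u = lambda^m here)

# Sparse observed ratings (i, j, r), r in 1..5; rho's output lives in
# (0, 1), so targets are scaled by 1/5 below.
obs = [(i, int(j), float(rng.integers(1, 6)))
       for i in range(U)
       for j in rng.choice(M, size=4, replace=False)]

eu = 0.1 * rng.standard_normal((U, d))   # user factors e^u
em = 0.1 * rng.standard_normal((M, d))   # movie factors e^m

def rho(x):                               # logistic scoring function
    return 1.0 / (1.0 + np.exp(-x))

def loss():
    err = sum((r / 5.0 - rho(eu[i] @ em[j])) ** 2 for i, j, r in obs)
    return err + lam * (np.sum(eu ** 2) + np.sum(em ** 2))

lr = 0.5
before = loss()
for _ in range(300):                      # SGD over observed entries
    for i, j, r in obs:
        p = rho(eu[i] @ em[j])
        e = (r / 5.0 - p) * p * (1 - p)   # chain rule through rho
        gi = -2 * e * em[j] + 2 * lam * eu[i]
        gj = -2 * e * eu[i] + 2 * lam * em[j]
        eu[i] -= lr * gi
        em[j] -= lr * gj
after = loss()
```

The per-entry updates follow directly from differentiating the squared-error term plus the Frobenius-norm penalties with respect to $\mathbf e^u_i$ and $\mathbf e^m_j$.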
These gates are computed from the previous hidden state $\mathbf h_{t-1}$ and the current input $\mathbf x_t$: \begin{equation} \left[f_{t},i_{t},o_{t}\right] = \text{sigmoid}(W[\mathbf h_{t-1}, \mathbf x_{t}]) \end{equation} The memory cell $c_t$ is updated by partially forgetting the existing memory and adding a new memory content $\mathbf l_{t}$: \begin{align} \mathbf l_{t}&=\tanh(V[\mathbf h_{t-1}, \mathbf x_{t}])\\ \mathbf c_{t}&=f_{t}\odot \mathbf c_{t-1}+i_{t}\odot \mathbf l_{t} \end{align} Once the memory content of the LSTM unit is updated, the hidden state at time step $t$ is given by: \begin{equation} \mathbf h_{t}=o_{t}\odot \tanh(\mathbf c_{t}) \end{equation} For simplicity of notation, the update of the LSTM hidden state at time step $t$ is denoted as $\mathbf h_{t}={\rm LSTM}(\mathbf h_{t-1}, \mathbf x_{t})$. Here, we use $\mathbf z^{u}_{i,t} \in \mathbb{R}^{U}$ and $\mathbf z^{m}_{j,t} \in \mathbb{R}^{M}$ to represent the rating vectors of user $i$ and movie $j$ at time $t$, respectively. Both $\mathbf z^{u}_{i,t}$ and $\mathbf z^{m}_{j,t}$ serve as the input to the LSTM layer at time $t$ to infer the new states of the user and the movie: \begin{align} \mathbf h^{u}_{i,t} &= {\rm LSTM}(\mathbf h^{u}_{i,t-1}, \mathbf z^{u}_{i,t})\label{eq:lstm-user}\\ \mathbf h^{m}_{j,t} &= {\rm LSTM}(\mathbf h^{m}_{j,t-1}, \mathbf z^{m}_{j,t})\label{eq:lstm-movie} \end{align} Here, $\mathbf h^{u}_{i,t}$ and $\mathbf h^{m}_{j,t}$ denote the hidden states of user $i$ and movie $j$ at time step $t$, respectively. In this work, we explore the potential of integrating posters of movies to boost the performance of movie recommendation. Inspired by recent advances of CNNs in computer vision, we map the poster to the same space as the movie by using a CNN.
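The LSTM update above can be sketched in NumPy with the same simplification the text makes: one stacked weight matrix $W$ produces all three gates from $[\mathbf h_{t-1}, \mathbf x_t]$, $V$ produces the candidate memory $\mathbf l_t$, and bias terms are omitted. The dimensions and toy inputs below are made up for illustration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(h_prev, c_prev, x, W, V):
    """One update h_t = LSTM(h_{t-1}, x_t) following the text's equations."""
    hx = np.concatenate([h_prev, x])
    d = h_prev.size
    gates = sigmoid(W @ hx)                 # stacked [f_t, i_t, o_t]
    f, i, o = gates[:d], gates[d:2 * d], gates[2 * d:3 * d]
    l = np.tanh(V @ hx)                     # new memory content l_t
    c = f * c_prev + i * l                  # c_t = f*c_{t-1} + i*l_t (elementwise)
    h = o * np.tanh(c)                      # h_t = o*tanh(c_t) (elementwise)
    return h, c

# Toy run: a movie-side rating vector z^m_{j,t} drives the movie LSTM.
rng = np.random.default_rng(0)
d, n_in = 3, 5
W = 0.1 * rng.standard_normal((3 * d, d + n_in))
V = 0.1 * rng.standard_normal((d, d + n_in))
h = np.zeros(d)
c = np.zeros(d)
for t in range(4):
    z_t = rng.standard_normal(n_in)         # stand-in for z^m_{j,t}
    h, c = lstm_step(h, c, z_t, W, V)
```

Because the output gate lies in $(0,1)$ and $\tanh$ is bounded, every component of the resulting hidden state stays strictly inside $(-1, 1)$.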
More concretely, we encode each image into an FC-2k feature vector with ResNet-101 (101 layers), resulting in a 2048-dimensional vector representation. The poster $P_j$ of movie $j$ is input only once, at $t=0$, to inform the movie LSTM about the poster content:
\begin{equation}
\label{eq:lstm-cnn}
\mathbf z^{m}_{j,0}=CNN(P_j).
\end{equation}
\subsection{RNN and MF Hybrid}
While the session-based model deals with the temporal dynamics of the user and movie states, we further incorporate the long-term preferences of users and the fixed properties of movies. To exploit their advantages together, similar to \cite{wu2017recurrent}, we define the rating prediction function as:
\begin{equation}
r_{ij,t} = g(\mathbf e^{u}_{i}, \mathbf e^{m}_{j}, \mathbf h^{u}_{i,t}, \mathbf h^{m}_{j,t})
\end{equation}
where $g(\cdot)$ is a score function, $\mathbf e^{u}_{i}$ and $\mathbf e^{m}_{j}$ denote the global latent factors of user $i$ and movie $j$ learned by Eq.~(\ref{eq:mf}), and $\mathbf h^{u}_{i,t}$ and $\mathbf h^{m}_{j,t}$ denote the hidden states at time step $t$ of the two RNNs learned by Eq.~(\ref{eq:lstm-user}) and Eq.~(\ref{eq:lstm-movie}), respectively. In this work, we study four strategies to calculate the score function $g$, integrating MF and RNN. The details are described below.
\begin{figure*}
\begin{minipage}{0.25\linewidth} \centerline{\includegraphics[width=4.0cm]{figures/hard-crop.pdf}} \centerline{(a) LSIC-V1: Hard mechanism} \end{minipage}
\begin{minipage}{0.24\linewidth} \centerline{\includegraphics[width=4.0cm]{figures/soft1-crop.pdf}} \centerline{(b) LSIC-V2: Prior initialization} \end{minipage}
\begin{minipage}{0.24\linewidth} \centerline{\includegraphics[width=4.0cm]{figures/soft2-crop.pdf}} \centerline{(c) LSIC-V3: Static context} \end{minipage}
\begin{minipage}{0.24\linewidth} \centerline{\includegraphics[width=4.0cm]{figures/soft3_hard-crop.pdf}} \centerline{(d) LSIC-V4: Attention model} \end{minipage}
\caption{Four strategies to calculate the score function $g$, integrating MF and RNN.}
\label{fig:mixture}
\end{figure*}
\paragraph{LSIC-V1}
This is a hard mechanism that calculates the mixing score from MF and RNN in a simple way:
\begin{align}
r_{ij,t} &=g(\mathbf e^{u}_{i}, \mathbf e^{m}_{j}, \mathbf h^{u}_{i,t}, \mathbf h^{m}_{j,t})=\frac{1}{1+{\rm exp}(-\mathbf s)} \label{eq:hard}\\
\mathbf s&=\mathbf e^{u}_{i} \cdot \mathbf e^{m}_{j} + \mathbf h^{u}_{i,t} \cdot \mathbf h^{m}_{j,t} +b^{u}_{i}+b^{m}_{j} \label{eq:hard-score}
\end{align}
where $b^{u}_{i}$ and $b^{m}_{j}$ are the biases of user $i$ and movie $j$, and $\mathbf h^{u}_{i,t}$ and $\mathbf h^{m}_{j,t}$ are computed by Eq.~(\ref{eq:lstm-user}) and Eq.~(\ref{eq:lstm-movie}). LSIC-V1 does not exploit the global factors in learning the temporal dynamics. We therefore also design a soft mixture mechanism and provide three strategies that account for the global factors $\mathbf e^{u}_i$ and $\mathbf e^{m}_j$ in learning $\mathbf h^{u}_{i,t}$ and $\mathbf h^{m}_{j,t}$, as described below (i.e., LSIC-V2, LSIC-V3 and LSIC-V4).
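For concreteness, the hard mixing score of Eq.~(\ref{eq:hard}) and Eq.~(\ref{eq:hard-score}) can be sketched as follows. This is a toy NumPy illustration with made-up dimensions and bias values, not the trained model:

```python
import numpy as np

def lsic_v1_score(e_u, e_m, h_u, h_m, b_u, b_m):
    """Hard mixture: sum the MF dot product, the RNN-state dot product and
    the two biases, then squash the score into (0, 1) with a sigmoid."""
    s = e_u @ e_m + h_u @ h_m + b_u + b_m
    return 1.0 / (1.0 + np.exp(-s))

rng = np.random.default_rng(1)
e_u, e_m = rng.standard_normal(3), rng.standard_normal(3)  # global factors (d=3)
h_u, h_m = rng.standard_normal(4), rng.standard_normal(4)  # LSTM hidden states
r = lsic_v1_score(e_u, e_m, h_u, h_m, b_u=0.1, b_m=-0.2)
```

Note that the two dot products may have different dimensionalities, since the latent factors and the LSTM states live in separate spaces; only their scalar scores are summed.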
\paragraph{LSIC-V2}
We use the latent factors of user $i$ ($\mathbf e^{u}_i$) and movie $j$ ($\mathbf e^{m}_j$) pre-trained by the MF model to initialize the hidden states of the LSTM cells $\mathbf h^{u}_{i,0}$ and $\mathbf h^{m}_{j,0}$, respectively, as depicted in Figure \ref{fig:mixture}(b).
\paragraph{LSIC-V3}
As shown in Figure \ref{fig:mixture}(c), we extend LSIC-V2 by treating $\mathbf e^{u}_i$ (for user $i$) and $\mathbf e^{m}_j$ (for movie $j$) as static context vectors and feeding them as an extra input into the computation of the temporal hidden states of users and movies by the LSTM. At each time step, this context information assists the inference of the hidden states of the LSTM model.
\paragraph{LSIC-V4}
This method uses an attention mechanism to compute a weight for each hidden state by exploiting the global factors. The mixing score at time $t$ can be reformulated as:
\begin{align}
r_{ij,t}&=g(\mathbf e^{u}_{i}, \mathbf e^{m}_{j}, \mathbf h^{u}_{i,t-1}, \mathbf h^{m}_{j,t-1}, \mathbf c^{u}_{i,t}, \mathbf c^{m}_{j,t})=\frac{1}{1+{\rm exp}(-\mathbf s)} \label{eq:soft}\\
\mathbf s&=\mathbf e^{u}_{i} \cdot \mathbf e^{m}_{j} + \mathbf h^{u}_{i,t} \cdot \mathbf h^{m}_{j,t} +b^{u}_{i}+b^{m}_{j}\label{eq:soft-score}
\end{align}
where $\mathbf c^{u}_{i,t}$ and $\mathbf c^{m}_{j,t}$ are the context vectors at time step $t$ for user $i$ and movie $j$, and $\mathbf h^{u}_{i,t}$ and $\mathbf h^{m}_{j,t}$ are the hidden states of the LSTMs at time step $t$, computed by
\begin{align}
\mathbf h^{u}_{i,t} &= {\rm LSTM}(\mathbf h^{u}_{i,t-1}, \mathbf z^{u}_{i,t}, \mathbf c^{u}_{i,t})\label{eq:lstm-soft-user}\\
\mathbf h^{m}_{j,t} &= {\rm LSTM}(\mathbf h^{m}_{j,t-1}, \mathbf z^{m}_{j,t}, \mathbf c^{m}_{j,t})\label{eq:lstm-soft-movie}
\end{align}
The context vectors $\mathbf c^{u}_{i,t}$ and $\mathbf c^{m}_{j,t}$ act as an extra input in the computation of the LSTM hidden states so that every time step of the LSTMs has full access to the context (long-term information).
The context vectors $\mathbf c^{u}_{i,t}$ and $\mathbf c^{m}_{j,t}$ are dynamic representations of the relevant long-term information for user $i$ and movie $j$ at time $t$, calculated by
\begin{equation}
\label{eq:context}
\mathbf c^{u}_{i,t}=\sum_{k=1}^U\alpha^{i}_{k,t} \mathbf e^{u}_{k} ; \hspace{0.3cm} \mathbf c^{m}_{j,t}=\sum_{p=1}^M\beta^{j}_{p,t} \mathbf e^{m}_{p}
\end{equation}
where $U$ and $M$ are the numbers of users and movies. The attention weights $\alpha^{i}_{k,t}$ and $\beta^{j}_{p,t}$ for user $i$ and movie $j$ at time step $t$ are computed by
\begin{align}
\alpha^{i}_{k,t}=\frac{\exp(\sigma(\mathbf h^{u}_{i,t-1}, \mathbf e^{u}_{k}))}{\sum_{k'=1}^U \exp(\sigma(\mathbf h^{u}_{i,t-1}, \mathbf e^{u}_{k'}))}\label{eq:lstm-attention-user}\\
\beta^{j}_{p,t}=\frac{\exp(\sigma(\mathbf h^{m}_{j,t-1}, \mathbf e^{m}_{p}))}{\sum_{p'=1}^M \exp(\sigma(\mathbf h^{m}_{j,t-1}, \mathbf e^{m}_{p'}))}\label{eq:lstm-attention-movie}
\end{align}
where $\sigma$ is a feed-forward neural network that produces a real-valued score. The attention weights $\alpha^{i}_{k,t}$ and $\beta^{j}_{p,t}$ together determine which user and movie factors should be selected to generate $r_{ij,t}$.
\subsection{Generative Adversarial Network (GAN) for Recommendation}
Generative adversarial networks (GANs)~\cite{goodfellow2014generative} consist of a generator $G$ and a discriminator $D$ that compete in a two-player minimax game: the discriminator tries to distinguish real high-rated movies in the training data from the recommendation list predicted by $G$, while the generator tries to fool the discriminator by generating (predicting) a well-ranked recommendation list. Concretely, $D$ and $G$ play the following game on $V(D,G)$:
\begin{equation}
\label{eq:gan}
\begin{aligned}
\min_{G}\max_{D}V(D,G)=&\mathbb{E}_{x \sim P_{true}(x)}[{\rm log} D(x)] + \\
&\mathbb{E}_{z\sim P_{z}(z)}[{\rm log}(1-D(G(z)))]
\end{aligned}
\end{equation}
Here, $x$ is the input data from the training set, and $z$ is a noise variable sampled from a normal distribution. We propose an adversarial framework to iteratively optimize two models: the generative model $G$, which predicts the recommendation list given historical user-movie interactions, and the discriminative model $D$, which predicts the relevance of the generated list. Like standard GANs~\cite{goodfellow2014generative}, our model optimizes the two models with a minimax two-player game, in which $G$ maximizes the probability of $D$ making a mistake. Hopefully, this adversarial process can eventually adjust $G$ to generate plausible and high-quality recommendation lists. We elaborate on the generator and discriminator below.
\vspace{-5pt}
\subsubsection{Discriminative Model}
As depicted in Figure 1 (right side), we implement the discriminator $D$ via a Siamese network that incorporates the long-term and session-based ranking models in a pair-wise scenario. The discriminator $D$ has two symmetrical point-wise networks that share parameters and are updated by minimizing a pair-wise loss.
The objective of the discriminator $D$ is to maximize the probability of correctly distinguishing the ground-truth movies from the generated recommendation movies. For fixed $G$, we can obtain the optimal parameters of the discriminator $D$ with the following formulation:
\vspace{-5pt}
\begin{equation}
\begin{aligned}
\label{eq:update_d}
\theta^*= \argmax_{\theta}\sum_{i \in \mathcal{U}}\Big(\mathbb{E}_{m_{+},m_{-} \sim p_{true}}\left[{\log} D_{\theta}(u_{i},m_{-},m_{+}|t)\right]+ \\
\mathbb{E}_{m_{+}\sim p_{true},m_{g,t}\sim G_{\phi}(m_{g,t}|u_i,t)}\left[{\log}(1-D_{\theta}(u_{i},m_{g,t},m_{+}|t))\right]\Big)\\
\end{aligned}
\end{equation}
where $\mathcal{U}$ denotes the user set, $u_{i}$ denotes user $i$, $m_{+}$ is a positive (high-rating) movie, $m_{-}$ is a negative movie randomly chosen from the entire negative (low-rating) movie space, $\theta$ and $\phi$ are the parameters of $D$ and $G$, and $m_{g,t}$ is the movie generated by $G$ at time $t$. Here, we adopt the hinge loss as our training objective since it performs better than the alternatives in our experiments. The hinge loss is widely adopted in various learning-to-rank scenarios; it penalizes examples that violate the margin constraint:
\begin{equation}
\begin{aligned}
\label{eq:reward}
D(u_{i},m_{-}, m_{+}|t)=\max\Big\{0,\epsilon - g(\mathbf e^{u}_{i}, \mathbf e^{m}_{m_+}, \mathbf h^{u}_{i,t}, \mathbf h^{m}_{m_{+},t})\\
+g(\mathbf e^{u}_{i}, \mathbf e^{m}_{m_{-}}, \mathbf h^{u}_{i,t}, \mathbf h^{m}_{m_{-},t})\Big\}
\end{aligned}
\end{equation}
where $\epsilon$ is a hyper-parameter determining the margin of the hinge loss, and we compress the outputs to the range $(0,1)$.
\subsubsection{Generative Model}
Similar to the conditional GANs proposed in~\cite{mirza2014conditional}, our generator $G$ takes the auxiliary information (user $u_{i}$ and time $t$) as input and generates the ranking list for user $i$. Specifically, when $D$ is optimized and fixed after computing Eq.
\ref{eq:update_d}, the generator $G$ can be optimized by minimizing the following formulation:
\begin{equation}
\label{eq:update_g}
\begin{aligned}
\phi^* = \argmin_{\phi}\sum_{i \in \mathcal{U}}\big(\mathbb{E}_{m_{g,t} \sim G_{\phi}(m_{g,t}|u_i,t)}[{\rm log}(1-D(u_{i},m_{g,t},m_{+}|t))]\big)
\end{aligned}
\end{equation}
where the sum runs over the user set $\mathcal{U}$. As in~\cite{goodfellow2014generative}, instead of minimizing ${\rm log}(1-D(u_{i},m_{g,t},m_{+}|t))$, we train $G$ to maximize ${\rm log}(D(u_{i},m_{g,t},m_{+}|t))$.
\subsubsection{Policy Gradient}
Since the sampling of the recommendation list by the generator $G$ is discrete, it cannot be directly optimized by gradient descent as in the standard GAN formulation. Therefore, we use a policy-gradient-based reinforcement learning algorithm~\cite{sutton2000policy} to optimize the generator $G$ so that it generates highly rewarded recommendation lists. Concretely, we have the following derivation:
\begin{equation}
\begin{aligned}
\label{eq:rl}
\nabla_{\phi} J^{G}(u_{i})&=\nabla_{\phi}\mathbb{E}_{m_{g,t}\sim G_{\phi}(m_{g,t}|u_i,t)}[\log D(u_{i},m_{g,t},m_{+}|t)] \\
&=\sum_{m \in \mathcal{M}}\nabla_{\phi}G_{\phi}(m|u_{i},t)\log D(u_{i},m,m_{+}|t)\\
&=\sum_{m \in \mathcal{M}} G_{\phi}(m|u_{i},t)\nabla_{\phi}\log G_{\phi}(m|u_{i},t)\log D(u_{i},m,m_{+}|t)\\
&=\mathbb{E}_{m_{g,t}\sim G_{\phi}(m|u_{i},t)}[\nabla_{\phi}{\rm log}G_{\phi}(m_{g,t}|u_{i},t)\log D(u_{i},m_{g,t},m_{+}|t)]\\
&\approx \frac{1}{K}\sum_{k=1}^K\nabla_{\phi}{\rm log}G_{\phi}(m_k|u_{i},t)\log D(u_{i},m_k,m_{+}|t)\\
\end{aligned}
\end{equation}
where $\mathcal{M}$ denotes the movie set, $K$ is the number of movies sampled by the current version of the generator, and $m_k$ is the $k$-th sampled movie. In reinforcement learning terminology, we treat the term ${\rm log}D(u_{i},m_{k},m_{+}|t)$ as the reward at time step $t$, and take an action $m_k$ at each time step. To accelerate convergence, the rewards within a batch are normalized with a Gaussian distribution to make their differences more pronounced.
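The sampled gradient estimate in Eq.~(\ref{eq:rl}), together with the batch reward normalization described above, can be sketched as follows. This is a self-contained toy with a softmax policy over 10 movies and a hand-crafted 0/1 reward standing in for $\log D$; all names and values are illustrative, not the actual model:

```python
import numpy as np

def reinforce_grad(logits, reward_fn, K, rng):
    """Monte-Carlo estimate of the policy gradient: sample K movies from the
    softmax policy and weight grad log G(m_k) by its (normalized) reward."""
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    samples = rng.choice(len(probs), size=K, p=probs)
    rewards = np.array([reward_fn(m) for m in samples], dtype=float)
    # Normalize the rewards within the batch, as described in the text.
    rewards = (rewards - rewards.mean()) / (rewards.std() + 1e-8)
    grad = np.zeros_like(logits)
    for m, r in zip(samples, rewards):
        one_hot = np.zeros_like(logits)
        one_hot[m] = 1.0
        grad += r * (one_hot - probs)  # grad of log-softmax at sampled index
    return grad / K

rng = np.random.default_rng(0)
logits = np.zeros(10)  # uniform initial policy over 10 candidate movies
reward = lambda m: 1.0 if m in (2, 7) else 0.0  # stand-in for log D(u, m, m+|t)
for _ in range(300):
    logits += 0.5 * reinforce_grad(logits, reward, K=16, rng=rng)
probs = np.exp(logits - logits.max())
probs /= probs.sum()
```

After a few hundred updates, the policy concentrates its probability mass on the two rewarded movies, which is the behavior the adversarial training relies on.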
\begin{algorithm}
\label{alg:alg}
\caption{Long and Session-based Ranking Model with Adversarial Network}
\textbf{Input:} generator $G_{\phi}$, discriminator $D_{\theta}$, training data $S$.\\
Initialize models $G_{\phi}$ and $D_{\theta}$ with random weights, and pre-train them on the training data $S$.\\
\textbf{repeat}\\
\For{g-steps} {
Generate a recommendation list for user $i$ at time $t$ using the generator $G_{\phi}$.\\
Sample $K$ candidates from the recommendation list.\\
\For{$k\in\{1,...,K\}$} {
Sample a positive movie $m_{+}$ from $S$.\\
Compute the reward ${\rm log}D(u_{i},m_{k},m_{+}|t)$ with Eq.~(\ref{eq:reward}).\\
}
Update the generator $G_{\phi}$ via the policy gradient of Eq.~(\ref{eq:rl}).\\
}
\For{d-steps} {
Use the current $G_{\phi}$ to generate a negative movie and combine it with a positive movie sampled from $S$.\\
Update the discriminator $D_{\theta}$ with Eq.~(\ref{eq:update_d}).\\
}
\textbf{until} convergence
\end{algorithm}
The overall procedure is summarized in Algorithm 1. During the training stage, the discriminator and the generator are trained alternately in an adversarial manner via Eq.~(\ref{eq:update_d}) and Eq.~(\ref{eq:rl}), respectively.
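The pair-wise hinge objective of Eq.~(\ref{eq:reward}), whose value supplies the reward in the g-steps of Algorithm 1, reduces to a one-line function. A minimal sketch with an illustrative margin $\epsilon=0.2$, where the scores stand in for the outputs of $g$:

```python
def hinge_pair_loss(score_pos, score_neg, eps=0.2):
    """Pairwise hinge: penalize a pair only when the positive movie fails to
    beat the negative (or generated) one by at least the margin eps."""
    return max(0.0, eps - score_pos + score_neg)

# The positive movie clearly wins: the margin is satisfied, so zero loss.
ok = hinge_pair_loss(0.9, 0.1)
# The negative movie outscores the positive one: loss is about 0.3.
bad = hinge_pair_loss(0.5, 0.6)
```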
\section{Experimental Setup}
\subsection{Datasets}
\begin{table}[h]
\small
\centering
\caption{Characteristics of the datasets.}
\label{tab:data}
\resizebox{1.0\columnwidth}{!} {\footnotesize
\begin{tabular}{l|lll}
\toprule
Dataset & MovieLens-100K & Netflix-3M & Netflix-Full \\
\midrule
Users & 943 & 326,668& 480,189 \\
Movies & 1,682 &17,751& 17,770 \\
Ratings & 100,000 & 16,080,980 & 100,480,507\\
\midrule
Train Data & 09/97-03/98 &9/05-11/05 & 12/99-11/05\\
Test Data & 03/98-04/98& 12/05 & 12/05\\
Train Ratings & 77,714 & 13,675,402& 98,074,901\\
Test Ratings & 21,875 & 2,405,578& 2,405,578\\
\midrule
Density & 0.493 & 0.406 & 0.093\\
Sparsity & 0.063 & 0.003 & 0.012\\
\bottomrule
\end{tabular}
}
\end{table}
To evaluate the effectiveness of our model, we conduct experiments on two widely used real-life datasets: MovieLens-100K and Netflix (called ``Netflix-Full''). To evaluate the robustness of our model, we also conduct experiments on a 3-month Netflix dataset (called ``Netflix-3M''), which is a small version of Netflix-Full with a different training and testing period. For each dataset, we split the whole data into several training and testing intervals based on time, as is done in~\cite{wu2017recurrent}, to simulate the actual situation of predicting the future behaviors of users given only the data that occurred strictly before the current time step. Each testing interval is then randomly divided into a validation set and a testing set. We remove the users and movies that do not appear in the training set from the validation and test sets. The detailed statistics are presented in Table \ref{tab:data}\footnote{``Density'' shows the average number of 5-star ratings per user per day. ``Sparsity'' shows the filling rate of the user-movie rating matrix, as used in \cite{wu2017recurrent}.}. Following \cite{wang2017irgan}, we treat ``5-star'' ratings in Netflix, and ``4-star'' and ``5-star'' ratings in MovieLens-100K, as positive feedback and all others as unknown (negative) feedback.
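The feedback binarization and chronological split described above can be sketched as follows. The records and dates are toy values of our own invention; the thresholds mirror the ones in the text (5 for Netflix, 4 for MovieLens-100K):

```python
# Hypothetical rating records: (user, movie, star rating, timestamp).
records = [
    (1, 10, 5, "2005-09-03"),
    (1, 11, 3, "2005-10-21"),
    (2, 10, 4, "2005-11-30"),
    (2, 12, 5, "2005-12-05"),
]

def to_feedback(records, threshold, test_start):
    """Binarize ratings (>= threshold -> positive, else unknown/negative) and
    split chronologically so the test period lies strictly after training."""
    train, test = [], []
    for user, movie, stars, ts in records:
        label = 1 if stars >= threshold else 0
        # ISO date strings compare correctly in lexicographic order.
        (test if ts >= test_start else train).append((user, movie, label))
    return train, test

# Netflix-style threshold: only 5-star ratings count as positive feedback.
train, test = to_feedback(records, threshold=5, test_start="2005-12-01")
```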
\subsection{Implementation Details}
\paragraph{Matrix Factorization.}
We use matrix factorization with 5 and 16 latent factors for MovieLens and Netflix, respectively \cite{wang2017irgan}. The parameters are randomly initialized from a uniform distribution over [-0.05, 0.05]. We apply gradient clipping to restrict the gradients to the range [-0.2, 0.2]. L2 regularization (with $\lambda = 0.05$) is applied to the weights and biases of the user and movie factors.
\paragraph{Recurrent Neural Network.}
We use a single-layer LSTM with 10 hidden neurons, 15-dimensional input embeddings, and 4-dimensional dynamic states, where each state covers 7 days of user/movie behavioral trajectories. That is, we take one month as the length of a session. The parameters are initialized in the same way as in MF. L2 regularization (with $\lambda = 0.05$) is applied to the weights and biases of the LSTM layer to avoid over-fitting.
\paragraph{Generative Adversarial Nets.}
We pre-train $G$ and $D$ on the training data in a pair-wise scenario, and use the SGD algorithm with learning rate $1 \times 10^{-4}$ to optimize their parameters. The number of sampled movies is set to 64 (i.e., $K=64$). In addition, we use the matrix factorization model to generate 100 candidate movies, and then re-rank these movies with the LSTM. In all experiments, we conduct mini-batch training with batch size 128.\\
\subsection{Evaluation Metrics}
To quantitatively evaluate our method, we adopt rank-based evaluation metrics to measure the performance of top-$n$ recommendation \cite{liu2008eigenrank,cremonesi2010performance}, including Precision@N, Normalized Discounted Cumulative Gain (NDCG@N), Mean Average Precision (MAP), and Mean Reciprocal Rank (MRR).
\subsection{Comparison to Baselines}
In the experiments, we evaluate and compare our models with several state-of-the-art methods.\\
\begin{table*}[t]
\centering
\caption{Movie recommendation results (MovieLens).
}\label{tab:cf-perf} \label{tab:movielens} \vspace{-8pt} \resizebox{1.85\columnwidth}{!}{ \begin{tabular}{l|c|c|c|c|c|c|c|c} \toprule & \textbf{Precision@3} & \textbf{Precision@5} & \textbf{Precision@10} & \textbf{NDCG@3} & \textbf{NDCG@5} & \textbf{NDCG@10} & \textbf{MRR} & \textbf{MAP} \\ \midrule BPR & 0.2795 & 0.2664 & 0.2301 & 0.2910 &0.2761 & 0.2550 &0.4324 & 0.3549\\ PRFM & 0.2884 & 0.2699 & 0.2481 & 0.2937 & 0.2894 & 0.2676 &0.4484 & 0.3885 \\ LambdaFM & 0.3108 & 0.2953 & 0.2612 & 0.3302 & 0.3117 &0.2795 &0.4611 & 0.4014 \\ RRN & 0.2893& 0.2740& 0.2480 & 0.2951 & 0.2814 & 0.2513 & 0.4320 & 0.3631 \\ IRGAN & 0.3022 & 0.2885 & 0.2582 & 0.3285 & 0.3032 &0.2678 &0.4515 & 0.3744 \\ \midrule LSIC-V1 & 0.2946 &0.2713 & 0.2471&0.2905& 0.2801& 0.2644 & 0.4595 & 0.4066 \\ LSIC-V2 & 0.3004 & 0.2843 & 0.2567 &0.3122 & 0.2951 & 0.2814 & 0.4624 & 0.4101\\ LSIC-V3 &0.3105& 0.3023& 0.2610 &0.3217& 0.3086& 0.2912 & 0.4732& 0.4163\\ LSIC-V4 & \textbf{0.3327} & \textbf{0.3173} & \textbf{0.2847} & \textbf{0.3512} & \textbf{0.3331} & \textbf{0.2939} & \textbf{0.4832} & \textbf{0.4321} \\ \midrule Impv & 7.05\% & 7.45\% & 9.00\% & 6.36\% & 6.87\% & 5.15\% & 4.79\% & 7.65\%\\ \bottomrule \end{tabular} } \end{table*} \begin{table*}[t] \centering \caption{Movie recommendation results (Netflix-3M). 
}\label{tab:netflix-3m} \resizebox{1.85\columnwidth}{!}{ \vspace{-8pt} \begin{tabular}{l|c|c|c|c|c|c|c|c} \toprule & \textbf{Precision@3} & \textbf{Precision@5} & \textbf{Precision@10} & \textbf{NDCG@3} & \textbf{NDCG@5} & \textbf{NDCG@10} & \textbf{MRR} & \textbf{MAP} \\ \midrule BPR & 0.2670 & 0.2548 & 0.2403 & 0.2653 & 0.2576 & 0.2469 & 0.3829 & 0.3484 \\ PRFM & 0.2562 & 0.2645 & 0.2661 & 0.2499 & 0.2575 &0.2614 &0.4022 & 0.3712 \\ LambdaFM & 0.3082 & 0.2984 & 0.2812 & 0.3011 & 0.2993 &0.2849 &0.4316 & 0.4043 \\ RRN & 0.2759 & 0.2741 & 0.2693 & 0.2685 & 0.2692 & 0.2676 & 0.3960& 0.3831 \\ IRGAN & 0.2856 &0.2836 & 0.2715 & 0.2824 & 0.2813 & 0.2695 & 0.4060& 0.3718 \\ \midrule LSIC-V1 & 0.2815 & 0.2801 & 0.2680&0.2833 & 0.2742 & 0.2696 & 0.4416& 0.4025 \\ LSIC-V2 & 0.2901 & 0.2883 & 0.2701 &0.2903 & 0.2831 & 0.2759 & 0.4406& 0.4102 \\ LSIC-V3 &0.3152 & 0.3013 & 0.2722 &0.2927 & 0.2901 & 0.2821 & 0.4482& 0.4185\\ LSIC-V4 & \textbf{0.3221} & \textbf{0.3193} & \textbf{0.2921} & \textbf{0.3157} & \textbf{0.3114} & \textbf{0.2975} & \textbf{0.4501}& \textbf{0.4247} \\ \midrule Impv & 4.51\% & 7.00\% & 3.88\% & 4.85\% & 4.04\% & 4.42\% & 4.29\% & 5.05\%\\ \bottomrule \end{tabular} } \end{table*} \begin{table*}[t] \centering \caption{Movie recommendation results (Netflix-Full). 
}\label{tab:netflix-full} \vspace{-8pt} \resizebox{1.85\columnwidth}{!}{ \begin{tabular}{l|c|c|c|c|c|c|c|c} \toprule & \textbf{Precision@3} & \textbf{Precision@5} & \textbf{Precision@10} & \textbf{NDCG@3} & \textbf{NDCG@5} & \textbf{NDCG@10} & \textbf{MRR} & \textbf{MAP} \\ \midrule BPR & 0.3011 & 0.2817 & 0.2587 & 0.2998 & 0.2870 & 0.2693 & 0.3840 & 0.3660 \\ PRFM & 0.2959 & 0.2837 & 0.2624 & 0.2831 & 0.2887 & 0.2789 &0.4060 & 0.3916 \\ LambdaFM & 0.3446 & 0.3301 & 0.3226 & 0.3450 & 0.3398 &0.3255 &0.4356 & 0.4067 \\ RRN & 0.3135 & 0.2954 & 0.2699 & 0.3123 & 0.3004 & 0.2810 & 0.3953& 0.3768 \\ IRGAN & 0.3320 &0.3229 & 0.3056 & 0.3319 &0.3260 & 0.3131 & 0.4248& 0.4052 \\ \midrule LSIC-V1 & 0.3127 & 0.3012 & 0.2818&0.3247 & 0.3098 & 0.2957 & 0.4470& 0.4098 \\ LSIC-V2 & 0.3393 & 0.3271 & 0.3172 &0.3482 & 0.3401 & 0.3293 & 0.4448& 0.4213 \\ LSIC-V3 &0.3501 & 0.3480 & 0.3291 &0.3498 & 0.3451& 0.3321 & 0.4503& 0.4257\\ LSIC-V4 & \textbf{0.3621} & \textbf{0.3530} & \textbf{0.3341} & \textbf{0.3608} & \textbf{0.3511} & \textbf{0.3412}& \textbf{0.4587}& \textbf{0.4327} \\ \midrule Impv & 5.08\% & 6.94\% & 3.56\% & 4.58\% & 3.33\% & 4.82\% & 5.30\% & 6.39\%\\ \bottomrule \end{tabular} } \end{table*} \begin{table*}[t] \centering \caption{Ablation results for Netflix-3M dataset. 
}\label{tab:ablation}
\vspace{-8pt}
\resizebox{1.85\columnwidth}{!}{
\begin{tabular}{l|c|c|c|c|c|c|c|c}
\toprule
&\textbf{Precision@3} & \textbf{Precision@5} & \textbf{Precision@10} & \textbf{NDCG@3} & \textbf{NDCG@5} & \textbf{NDCG@10} & \textbf{MRR} & \textbf{MAP} \\
\midrule
LSIC-V4 & \textbf{0.3221} & \textbf{0.3193} & \textbf{0.2921} & \textbf{0.3157} & \textbf{0.3114} & \textbf{0.2975} & \textbf{0.4501}& \textbf{0.4247} \\
w/o RL & 0.3012 & 0.2970 & 0.2782 &0.2988 & 0.2927 & 0.2728 & 0.4431& 0.4112 \\
w/o poster &0.3110 & 0.3012 & 0.2894 & 0.3015 & 0.3085 & 0.2817 & 0.4373& 0.4005\\
\bottomrule
\end{tabular}
}
\end{table*}
\begin{table*}[t]
\small
\centering
\caption{The recalled movies from the top-10 candidates on the Netflix-3M dataset.}\label{tab:cases}
\vspace{-8pt}
\resizebox{2.2\columnwidth}{!}{
\begin{tabular}{l|l|l|l|l|l}
\toprule
&\textbf{Groundtruth} & \textbf{IRGAN}~\cite{wang2017irgan} & \textbf{RRN}~\cite{wu2017recurrent} & \textbf{LambdaFM}~\cite{yuan2016lambdafm} & \textbf{LSIC-V4} \\
\midrule
Userid: 1382 & \makecell[l]{ 9 Souls\\ The Princess Bride\\ Stuart Saves His Family\\ The Last Valley\\ Wax Mask\\ After Hours \\ Session 9\\ Valentin \\ } & \makecell[l]{ \textbf{[1]} The Beatles: Love Me Do \\ \textbf{[2]} Wax Mask \checkmark \\ \textbf{[3]} Stuart Saves His Family \checkmark \\ \midrule \textbf{[4]} After Hours \checkmark \\ \textbf{[5]} Top Secret! \\ \midrule \textbf{[6]} Damn Yankees \\ \textbf{[7]} Dragon Tales: It's Cool to Be Me! \\ \textbf{[8]} Play Misty for Me \\ \textbf{[9]} The Last Round: Chuvalo vs. Ali' \\ \textbf{[10]} La Vie de Chateau \\ } & \makecell[l]{ \textbf{[1]} Falling Down \\ \textbf{[2]} 9 Souls \checkmark\\ \textbf{[3]} Wax Mask \checkmark\\ \midrule \textbf{[4]} After Hours \checkmark\\ \textbf{[5]} Stuart Saves His Family \checkmark \\ \midrule \textbf{[6]} Crocodile Dundee 2 \\ \textbf{[7]} The Princess Bride \checkmark \\ \textbf{[8]} Dragon Tales: It's Cool to Be Me!
\\ \textbf{[9]} They Were Expendable \\ \textbf{[10]} Damn Yankees\\ } & \makecell[l]{\textbf{[1]} The Avengers '63 \\ \textbf{[2]} Wax Mask \checkmark \\ \textbf{[3]} The Boondock Saints \\ \midrule \textbf{[4]} Valentin \checkmark\\ \textbf{[5]} 9 Souls \checkmark \\ \midrule \textbf{[6]} The Princess Bride \checkmark \\ \textbf{[7]} After Hours \checkmark \\ \textbf{[8]} Tekken \\ \textbf{[9]} Stuart Saves His Family \checkmark \\ \textbf{[10]} Runn Ronnie Run } & \makecell[l]{\textbf{[1]}9 Souls \checkmark \\ \textbf{[2]}The Princess Bride \checkmark\\ \textbf{[3]}Stuart Saves His Family \checkmark\\ \midrule \textbf{[4]} The Last Valley \checkmark \\ \textbf{[5]} Wax Mask \checkmark \\ \midrule \textbf{[6]} Session 9 \checkmark \\ \textbf{[7]} Dragon Tales: It's Cool to Be Me! \\ \textbf{[8]} Damn Yankees\\ \textbf{[9]} After Hours \checkmark \\ \textbf{[10]} Valentin \checkmark }\\ \midrule Userid: 8003 & \makecell[l]{9 Souls \\ Princess Bride} & \makecell [l]{ \textbf{[1]} Cheech Chong's Up in Smoke\\ \textbf{[2]} Wax Mask \\ \textbf{[3]} Damn Yankees\\ \midrule \textbf{[4]} Dragon Tales: It's Cool to Be Me! 
\\ \textbf{[5]} Top Secret!\\ \midrule \textbf{[6]} Agent Cody Banks 2: Destination London\\ \textbf{[7]} After Hours\\ \textbf{[8]} Stuart Saves His Family \\ \textbf{[9]} 9 Souls \checkmark\\ \textbf{[10]} The Beatles: Love Me Do \\ } & \makecell [l]{ \textbf{[1]} Crocodile Dundee 2\\ \textbf{[2]} Session 9\\ \textbf{[3]} Falling Down\\ \midrule \textbf{[4]} Wax Mask \\ \textbf{[5]} After Hours\\ \midrule \textbf{[6]} Stuart Saves His Family\\ \textbf{[7]} 9 Souls \checkmark\\ \textbf{[8]} The Princess Bride \checkmark\\ \textbf{[9]} Dragon Tales: It's Cool to Be Me!\\ \textbf{[10]} Scream 2\\ } & \makecell [l]{ \textbf{[1]} The Insider\\ \textbf{[2]}A Nightmare on Elm Street 3\\ \textbf{[3]} Dennis the Menace Strikes Again\\ \midrule \textbf{[4]} Civil Brand \\ \textbf{[5]} 9 Souls \checkmark\\ \midrule \textbf{[6]} Falling Down\\ \textbf{[7]} The Princess Bride \checkmark\\ \textbf{[8]} Radiohead: Meeting People\\ \textbf{[9]} Crocodile Dundee 2\\ \textbf{[10]} Christmas in Connecticut\\ } & \makecell[l]{\textbf{[1]} 9 Souls \checkmark \\ \textbf{[2]}The Princess Bride \checkmark \\ \textbf{[3]}The Last Valley \\ \midrule \textbf{[4]}Stuart Saves His Family \\ \textbf{[5]}Wax Mask \\ \midrule \textbf{[6]}Dragon Tales: It's Cool to Be Me! \\ \textbf{[7]}Session 9 \\ \textbf{[8]}Crocodile Dundee 2 \\ \textbf{[9]}Damn Yankees \\ \textbf{[10]}Cheech Chong's Up in Smoke } \\ \bottomrule \end{tabular} } \end{table*} \vspace{-5pt} \paragraph{Bayesian Personalised Ranking (BPR)} Given a positive movie, BPR uniformly samples negative movies to resolve the imbalance issue and provides a basic baseline for top-N recommendation ~\cite{rendle2009bpr}. \paragraph{Pairwise Ranking Factorization Machine (PRFM)} This is one of the state-of-the-art movie recommendation algorithms, which applies Factorization Machine model to microblog ranking on basis of pairwise classification \cite{qiang2013exploiting}. 
We use the same settings as in \cite{qiang2013exploiting} in our experiments.
\paragraph{LambdaFM}
This is a strong baseline for recommendation, which directly optimizes rank-biased metrics~\cite{yuan2016lambdafm}. We run the LambdaFM model with the publicly available code\footnote{https://github.com/fajieyuan/LambdaFM}, and use the default settings for all hyperparameters.
\paragraph{Recurrent Recommender Networks (RRN)}
This model supplements matrix factorization with a recurrent neural network via a hard mixture mechanism~\cite{wu2017recurrent}. We use the same settings as in~\cite{wu2017recurrent}.
\paragraph{IRGAN}
This model trains the generator and discriminator alternately with MF in an adversarial process~\cite{wang2017irgan}. We run the IRGAN model with the publicly available code\footnote{https://github.com/geek-ai/irgan}, and use the default settings for all hyperparameters.
\section{Experimental Results}
In this section, we compare our model with the baseline methods quantitatively and qualitatively.
\subsection{Quantitative Evaluation}
We first evaluate the performance of top-$n$ recommendation. The experimental results are summarized in Tables \ref{tab:movielens}, \ref{tab:netflix-3m} and \ref{tab:netflix-full}. Our model substantially and consistently outperforms the baseline methods by a noticeable margin on all the experimental datasets. In particular, we have explored several versions of our model with different mixture mechanisms. As anticipated, LSIC-V4 achieves the best results across all evaluation metrics and all datasets. For example, on the MovieLens dataset, LSIC-V4 improves Precision@5 by $7.45\%$ and NDCG@5 by $6.87\%$ over the baseline methods. The main strength of our model comes from its capability of prioritizing both long-term and short-term information in content-aware movie recommendation. In addition, our mixture mechanisms (hard and soft) are also effective at integrating MF and RNN.
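The rank-based metrics reported in the tables can be computed as follows. This is a minimal sketch of Precision@N and binary-relevance NDCG@N on toy data; the exact evaluation protocol may differ in details:

```python
import math

def precision_at_n(ranked, relevant, n):
    """Fraction of the top-n recommended movies that are relevant."""
    return sum(1 for m in ranked[:n] if m in relevant) / n

def ndcg_at_n(ranked, relevant, n):
    """Binary-relevance NDCG@n: DCG of the list divided by the ideal DCG,
    using the standard log2(position + 1) discount."""
    dcg = sum(1.0 / math.log2(i + 2)
              for i, m in enumerate(ranked[:n]) if m in relevant)
    idcg = sum(1.0 / math.log2(i + 2)
               for i in range(min(len(relevant), n)))
    return dcg / idcg if idcg > 0 else 0.0

ranked = ["A", "B", "C", "D", "E"]  # a model's top-5 list (toy data)
relevant = {"A", "C", "F"}          # ground-truth positive movies
p5 = precision_at_n(ranked, relevant, 5)  # 2 hits out of 5 -> 0.4
```

NDCG rewards placing the hits near the top of the list, which is why it can disagree with plain Precision@N on which model ranks better.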
To better understand the adversarial training process, we visualize the learning curves of LSIC-V4, as shown in Figure \ref{fig:mrr}. Due to limited space, we only report the Precision@5 and NDCG@5 scores, as in~\cite{wang2017irgan}; the other metrics exhibit a similar trend. As shown in Figure \ref{fig:mrr}, after about 50 epochs, both Precision@5 and NDCG@5 converge, and the winning player is the generator, which is used to generate the recommendation list for our final top-$n$ movie recommendation. The performance of the generator $G$ improves with the effective feedback (reward) from the discriminator $D$. On the other hand, once we have a set of high-quality recommended movies, the performance of $D$ deteriorates gradually during training and it begins to make prediction mistakes. In our experiments, we use the generator $G$ with the best performance to predict the test data.
\subsection{Ablation Study}
To analyze the effectiveness of the different components of our model for top-$n$ movie recommendation, in this section we report an ablation test of LSIC-V4 by discarding the poster information (w/o poster) and by replacing reinforcement learning with Gumbel-Softmax \cite{kusner2016gans} (w/o RL), respectively. Gumbel-Softmax is an alternative method to address the non-differentiability problem so that $G$ can be trained straightforwardly. Due to limited space, we only report the experimental results for the Netflix-3M dataset, which is widely used in movie recommendation (see Table \ref{tab:ablation}). Generally, both factors contribute, and reinforcement learning contributes most. This is within our expectation, since discarding reinforcement learning makes the adversarial learning inefficient: with Gumbel-Softmax, $G$ does not benefit from the reward of $D$, so we do not know which movies sampled by $G$ are good and should be reproduced. Not surprisingly, the poster information also contributes to movie recommendation.
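The Gumbel-Softmax relaxation used in the ``w/o RL'' variant draws differentiable approximations of one-hot samples. A minimal sketch, where the temperature and logits are illustrative values rather than the trained model's:

```python
import numpy as np

def gumbel_softmax(logits, tau, rng):
    """Relaxed categorical sample: add Gumbel(0, 1) noise to the logits and
    apply a temperature-tau softmax; low tau approaches a one-hot vector."""
    u = rng.uniform(size=logits.shape)
    g = -np.log(-np.log(u + 1e-20) + 1e-20)  # Gumbel(0, 1) noise
    y = (logits + g) / tau
    y = np.exp(y - y.max())
    return y / y.sum()

rng = np.random.default_rng(0)
sample = gumbel_softmax(np.array([2.0, 0.5, 0.1]), tau=0.5, rng=rng)
```

Because the sample is a smooth function of the logits, gradients can flow through it directly, which is what lets $G$ be trained without the policy-gradient machinery, at the cost of losing the discriminator's reward signal on discrete picks.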
\subsection{Case Study} In this section, we further demonstrate the advantages of our model through several representative examples. In Table 7, we provide the recommendation lists generated by three state-of-the-art baseline methods (i.e., IRGAN, RNN, LambdaFM) as well as by the proposed LSIC-V4 model for two users randomly selected from the Netflix-3M dataset. Our model ranks the positive movies in higher positions than the other methods. For example, the ranking of the movie ``\emph{9 souls}'' for user ``8003'' increases from the 5th position (by LambdaFM) to the 1st position (by LSIC-V4). Meanwhile, emerging movies such as ``\emph{Session 9}'' and ``\emph{The Last Valley}'', which are truly attractive to user ``1382'', are recommended by our model but ignored by the baseline methods. In fact, our model includes all positive movies in the top-10 list for user ``1382'' and in the top-3 list for user ``8003''. Our model benefits from the fact that both dynamic and static knowledge are incorporated into it with adversarial training. \begin{figure}[h] \centering \subfigure{\includegraphics[width=0.49\columnwidth]{figures/curveP5_crop.pdf}\label{fig:percision5}} \subfigure{\includegraphics[width=0.49\columnwidth]{figures/curveNDCG5_crop.pdf}\label{fig:curveMRRquadratic}} \caption{Learning curves of GAN on Netflix-3M}\label{fig:mrr} \vspace{-5pt} \end{figure} \subsection{Re-rank Effect} From our experiments, we observe that it can be time-consuming for the RNN to score all movies. In addition, users may be interested in only a few movies, which follow a long-tailed distribution. Motivated by these observations, we adopt a re-ranking strategy as used in \cite{covington2016deep}. Specifically, we first generate $N$ candidate movies with MF, and then re-rank these candidate movies with our model. In this way, the inference time can be greatly reduced. Figure \ref{fig:rerank} illustrates the performance curves over the number of candidate movies (i.e., $N$) generated by MF. We only report the Precision@5 and NDCG@5 results due to limited space; the other metrics exhibit a similar trend. As shown in Figure \ref{fig:rerank}, when the number of candidate movies is small (i.e., $N\leq 100$ for the Netflix-3M dataset), Precision@5 rises gradually as the number of candidate movies increases. Nevertheless, the performance drops rapidly once $N\geq 100$.
This suggests that the long-tail candidates generated by MF are inaccurate, and these candidate movies deteriorate the overall performance of the re-ranking strategy. \begin{figure}[h] \centering \subfigure{\includegraphics[width=0.49\columnwidth]{figures/P5_rerankcubic-crop.pdf}\label{fig:p5_rerank}} \subfigure{\includegraphics[width=0.49\columnwidth]{figures/NDCG5_rerankcubic-crop.pdf}\label{fig:ndcg5_rerank}} \caption{Sensitivity of the candidate scale on Netflix-3M}\label{fig:rerank} \vspace{-5pt} \end{figure} \subsection{Session Period Sensitivity} The experimental results above have shown that the session-based (short-term) information indeed improves the performance of top-$n$ recommendation. We conduct an experiment on Netflix-3M to investigate how the session period influences the final recommendation performance. As shown in Figure \ref{fig:session}, the purple bars (below) describe the basic performance of the MF component, while the red ones (above) represent the extra improvement brought by LSIC-V4. The RNN plays an insignificant role in the very early sessions, since it lacks enough historical interaction data. In later sessions, the RNN component tends to be more effective, and our joint model achieves a clear improvement over the model with only the MF component. \vspace{-5pt} \begin{figure}[h] \centering \subfigure{\includegraphics[width=0.49\columnwidth]{figures/session_P5-crop.pdf}} \subfigure{\includegraphics[width=0.49\columnwidth]{figures/session_NDCG5-crop.pdf}} \caption{Sensitivity of the session period on Netflix-3M}\label{fig:session} \vspace{-5pt} \end{figure} \section{Conclusion} In this paper, we proposed a novel adversarial process for top-$n$ recommendation. Our model incorporated both matrix factorization and a recurrent neural network to exploit the benefits of long-term and short-term knowledge. We also integrated poster information to further improve the performance of movie recommendation.
Experiments on two real-life datasets demonstrated the superior performance of our model. \bibliographystyle{ACM-Reference-Format}
\section{Rashba Spin-Orbit Interaction} The Rashba spin-orbit interaction has been shown \cite{Schapers1998,Lange.1996} to consist of two parts: an electric field contribution and a valence band offset contribution. The electric field component can be understood as the electric field averaged over the effective mass wavefunction $\psi(z)$, weighted by a bandstructure-determined factor. This contribution takes the form: \begin{equation}\label{eqn:a_e} \vec \alpha_E = \vec \alpha_0 \int\limits_{-\infty}^{+\infty}{\left|\psi(z)\right|^2\frac{d\phi(z)}{dz} \sum\limits_{b=7,8}{\frac{\left(-1\right)^{b+1}}{\left(E-E_{\Gamma_b}(z)-\phi(z)\right)^{2}}}}dz \end{equation} where $\vec \alpha_0 = \hat z \frac{\hbar^2 E_p}{6 m_0}$, $E_{\Gamma_b}(z)$ is the band edge of the $\Gamma_b$ band through the nanostructure, $\phi$ is the electrostatic potential energy from applied and internal self-consistent electric fields, $\psi(z)$ is the effective mass wavefunction, $E_p$ is the $k \cdot p$ interband coupling parameter, and $m_0$ is the bare mass of the electron. The other contribution to the Rashba spin-orbit interaction comes from the valence band offsets at the quantum well interfaces. At a perfectly sharp interface between two materials, these offsets form a Dirac delta function at the interface. When integrated, this becomes the change in the valence bands across the interface times an average of the valence band weighting term across the interface. This contribution takes the form: \begin{equation}\label{eqn:a_i} \vec \alpha_I = \vec \alpha_0 \sum\limits_{i=0}^{N_I} \sum\limits_{b=7,8;\,s=\pm} \frac{\left(-1\right)^{b+1}\left(E_{\Gamma_b}^+(z_i) - E_{\Gamma_b}^-(z_i)\right)}{2\left(E-E_{\Gamma_b}^s(z_i)-\phi(z_i)\right)^{2}}\left|\psi(z_i)\right|^2 \end{equation} where $N_I$ is the number of interfaces and $z_i$ is the location of interface $i$.
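Equation~\ref{eqn:a_e} lends itself to direct numerical evaluation once $\psi(z)$, $\phi(z)$, and the band-edge profiles are discretized on a real-space grid. The sketch below shows the structure of such a calculation with arbitrary toy profiles and $\vec\alpha_0$ set to unity; it is not a calculation with real material parameters.

```python
import numpy as np

def alpha_E(psi, phi, E7, E8, E, z, alpha0=1.0):
    """Evaluate the electric-field Rashba term of Eq. (1) on a grid.

    psi    : normalized effective-mass wavefunction psi(z)
    phi    : electrostatic potential energy phi(z)
    E7, E8 : Gamma_7 and Gamma_8 band-edge profiles on z
    E      : the energy entering the denominators (e.g. E_F)
    """
    dphi_dz = np.gradient(phi, z)
    # (-1)^(b+1) is +1 for the Gamma_7 band and -1 for Gamma_8
    band_factor = 1.0 / (E - E7 - phi) ** 2 - 1.0 / (E - E8 - phi) ** 2
    integrand = np.abs(psi) ** 2 * dphi_dz * band_factor
    dz = np.diff(z)
    # Trapezoidal integration over the grid
    return alpha0 * float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * dz))

# Toy example: square-well ground state in a weak constant field.
z = np.linspace(0.0, 1.0, 2001)
psi = np.sqrt(2.0) * np.sin(np.pi * z)   # normalized on [0, 1]
phi = 0.05 * z                            # linear potential, dphi/dz > 0
E7 = np.full_like(z, -1.0)                # toy band edges (arbitrary units)
E8 = np.full_like(z, -2.0)
print(alpha_E(psi, phi, E7, E8, E=0.0, z=z))
```

The interface term of Eq.~\ref{eqn:a_i} replaces the integral with a sum of the same weighting factor over the interface sites.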
The interface contribution can be understood as the difference between the {\it valence} band edges at each interface, weighted by the average inverse-square of the electrostatic-potential-offset band edge and by the probability density at the interface. In the case of a continuously-alloyed well, the single interface becomes a continuous sequence of infinitesimal changes. When used on a real-space grid, as in this work, the material is constant within each gridsite. This yields a discrete alloy with steps of one gridsite. Examining a square well with a constant electric field (fig.~\ref{fig:square_well}) provides some intuition. \begin{figure}[h!] \begin{center} \includegraphics[width=0.5 \columnwidth]{single-well-iface-illustration.pdf} \caption{ Cartoon of a wavefunction in the conduction band of a square well. A constant electrostatic field $\phi(z) = kz$ has been applied to the structure. Valence bands are not shown due to space considerations, but are also square. } \label{fig:square_well} \end{center} \end{figure} In this square well, the valence band differences at the left and right interfaces are of equal magnitude but opposite sign. The inverse-square band offset weighting terms are also the same on the left and the right, if we neglect the electric field term. If a non-uniform electrostatic potential were introduced, for instance from the internal electric field between the electrons in the well and the dopant layer, a small difference would arise. A much larger contribution comes from the difference in probability density at the left and right interfaces. This will in general be the largest contribution to the interface term. It should be noted that these equations do not account for Dirac renormalization\cite{Winkler2003}, and thus neglect e.g.\ the Darwin contribution to the overall Hamiltonian. This is not a major impediment to the current discussion, as the Dirac renormalization terms have only a modest effect on the above equations.
Much more interesting is the energy $E$ above. The effective mass wavefunction perturbation methods used to derive equations \ref{eqn:a_e} and \ref{eqn:a_i} rely on a projection operator formalism described by L\"owdin.\cite{Lowdin.JMP.1962} Ultimately, the $E$ term in equations \ref{eqn:a_e} and \ref{eqn:a_i} comes from the solution to the full N-band Schr\"odinger equation for {\em all} of the carriers in the well. Both aspects are important to fully understand the Rashba spin-orbit interaction in doped quantum wells: the fact that $E$ comes from a solution to the full Schr\"odinger equation, and the fact that this solution must include every carrier in the well. One approach is to use the energy of the particular subband being examined. In this approach, the single-particle, effective mass ground state eigenenergy ($E_0$) is used for $E$. The drawback of this approach is that it neglects the carriers at higher $k$, at energies up to the Fermi energy. This approach is taken by some groups\cite{Esmerindo.PRL.2007}, although they approximate $E_0$ by the energy of the conduction band edge in the well. Although $E_0$ is readily defined in triangular or more general quantum wells, the value to take for the conduction band edge in triangular or more complicated wells is unclear. Alternatively, a second, clear approach is to take the Fermi energy ($E_F$) for $E$. This can be qualitatively understood as examining only the highest occupied state. This approach has also been taken by a number of groups\cite{Engels.PRB.1997,Koga.PRL.2002,Nitta.JAP.2009}; like the first approach, some\cite{Engels.PRB.1997} approximate $E_F$ with the conduction band edge in the well, while other groups use $E_F$ itself. In addition, using the Fermi energy instead of the conduction band edge in the well\cite{Zhou.APL.2008} should qualitatively reproduce the trend of decreasing Rashba spin-orbit coupling with increasing carrier density (i.e., increasing $E_F$) due to the inverse-$E^2$ dependence of $\alpha$, a trend which is harder to explain by other means. In this paper, we use the latter definition of $E$; that is, we simply replace $E$ with $E_F$. However, simply introducing $E_F$ is somewhat ad hoc, and a more rigorous formalism is certainly called for to clarify the ambiguity between these methods. Koga et al.\cite{Koga.PRL.2002} compared the approximation for $E$ used by Engels et al.\cite{Engels.PRB.1997} (the conduction band edge in the well) and found approximately a 20-30\% increase in $\alpha$ when using $E_F$ instead of the band edge in InAlAs/InGaAs/InAlAs wells (70-90 meV). The change when using the ground state energy $E_0$, which necessarily lies below $E_F$ and above the conduction band edge, should fall somewhere below that. The difference in InSb/InAlSb is likely even larger, due to the smaller bandgap of InSb. Using the $E_0$ or conduction-band-edge approximation of $E$ should therefore change only the scale of our results, not the results themselves. To compare digitally-, discretely-, and continuously-alloyed triangular wells, we have calculated the Rashba spin-orbit coupling in doped quantum wells of each alloying type without an applied electric field. The wavefunctions within the conduction band were calculated using a real-space self-consistent effective mass method, with an inhomogeneous, material-dependent effective mass. As the abrupt change in effective mass between the well and barrier in InSb/AlInSb quantum wells causes discontinuities in the first derivative of the wavefunction, we additionally calculated the same well types in the InAs/GaInAs quantum well system, where the effective masses in the well and barrier are much closer. To prevent infinite loops in the self-consistent calculation, both the second-to-last and the previous wavefunctions were compared to the current wavefunction to determine convergence, with no potential mixing being performed.
An alternate method would have been to mix a fraction of the previous and current potential values.\cite{Ando.RMP.1982} After the effective mass wavefunctions were calculated, equations \ref{eqn:a_e} and \ref{eqn:a_i} were used to calculate the Rashba spin-orbit coupling for the conduction electrons in the ground state subband. The Rashba couplings at a variety of dopings were thus calculated and are plotted in figure \ref{fig:total_alpha}. The electric field and interface contributions are plotted in figures \ref{fig:efield_alpha} and \ref{fig:interface_alpha}, respectively. Doping was varied between $2\times 10^{17} \text{cm}^{-3}$ and $1.2\times 10^{18} \text{cm}^{-3}$ across 2.5nm in the InSb/$\text{In}_{f(z)}\text{Al}_{1-f(z)}$Sb quantum wells, and between $4\times 10^{16} \text{cm}^{-3}$ and $2\times 10^{17} \text{cm}^{-3}$ across 2.5nm in the InAs/$\text{In}_{f(z)}\text{Ga}_{1-f(z)}$As quantum wells. The doping regions in both cases began 15nm to the right of the sharp interface of the triangular wells. The wells were all 28.8nm wide, using a grid spacing of 0.4nm for the continuously and discretely alloyed wells and 0.1nm for the digitally alloyed well. Due to the similarity in wavefunction shape, the electric field term can be expected to be very similar for digitally- and continuously-alloyed wells. This expectation is borne out in our calculations, shown in subfigures \ref{sfig:insb_efield_alpha} and \ref{sfig:gainas_efield_alpha}. 
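The convergence criterion described above (comparing the current wavefunction against both of the two preceding iterates, with no potential mixing) guards against period-2 oscillations of the self-consistent loop. A minimal sketch follows, with a toy linear fixed-point map standing in for the actual self-consistent Schr\"odinger--Poisson update.

```python
import numpy as np

def self_consistent_loop(update, psi0, tol=1e-10, max_iter=1000):
    """Iterate psi -> update(psi), stopping when the new iterate matches
    either of the two previous ones.  Comparing against *both* catches
    period-2 oscillations that would otherwise loop forever."""
    prev2, prev1 = None, psi0
    for i in range(max_iter):
        cur = update(prev1)
        for old in (prev1, prev2):
            if old is not None and np.max(np.abs(cur - old)) < tol:
                return cur, i + 1
        prev2, prev1 = prev1, cur
    raise RuntimeError("self-consistent loop did not converge")

# Toy contraction standing in for the Schrodinger-Poisson update:
A = np.array([[0.5, 0.1], [0.1, 0.5]])
b = np.array([1.0, 2.0])
psi, n_iter = self_consistent_loop(lambda p: A @ p + b, np.zeros(2))
print(psi, n_iter)
```

The toy map converges to the fixed point of $p \mapsto Ap + b$; in the real calculation `update` would solve the effective-mass Schr\"odinger equation with the potential generated by the previous charge density.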
\begin{figure}[h!] \begin{center} \subfigure{\includegraphics[width=0.49 \columnwidth]{InSb_efield_alpha.pdf}\label{sfig:insb_efield_alpha}} \hfill \subfigure{\includegraphics[width=0.49 \columnwidth]{GaInAs_efield_alpha.pdf}\label{sfig:gainas_efield_alpha}} \caption{ Electric field contribution to the Rashba spin-orbit coupling in InSb/$\text{Al}_{0.2}\text{In}_{0.8}\text{Sb}$ (left plot) and InAs/$\text{In}_{0.2}\text{Ga}_{0.8}\text{As}$ (right plot) continuously-alloyed (solid line), discretely-alloyed (dotted line), and digitally-alloyed (dashed line) triangular quantum wells. Results for discrete and continuous alloys are nearly indistinguishable.\label{fig:efield_alpha} } \end{center} \end{figure} There is a strong increase in the magnitude of the Rashba spin-orbit interaction as doping is increased for all three alloying types. This is due to the increasing electric field between the carriers in the well and the dopant layer as doping is increased, and is expected. \begin{figure}[h!] \begin{center} \subfigure{\includegraphics[width=0.49 \columnwidth]{InSb_interface_alpha.pdf}\label{sfig:insb_iface_alpha}} \hfill \subfigure{\includegraphics[width=0.49 \columnwidth]{GaInAs_interface_alpha.pdf}\label{sfig:gainas_iface_alpha}} \caption{ Interface contribution to the Rashba spin-orbit coupling in InSb/$\text{Al}_{0.2}\text{In}_{0.8}\text{Sb}$ (left plot) and InAs/$\text{In}_{0.2}\text{Ga}_{0.8}\text{As}$ (right plot) continuously-alloyed (solid line), discretely-alloyed (dotted line), and digitally-alloyed (dashed line) triangular quantum wells.\label{fig:interface_alpha} } \end{center} \end{figure} In a continuously alloyed well, given that one side is a sharp interface, the other side of the well provides a continuous (or monotonically changing) set of interfaces. In a digitally alloyed well, by contrast, the sequence of square wells presents both a strong positive and a strong negative interface at the subwell boundaries.
This can be seen in figure \ref{fig:wavefunctions}, where the continuously-alloyed and digitally-alloyed wells and wavefunctions are superimposed (without the internal potential) for InSb and InAs triangular quantum wells. Thus, instead of one large change in the valence band fighting a large set of small changes, a digitally alloyed well has two large changes in opposition. In addition, figure \ref{fig:wavefunctions} shows an increasing probability density in the first subwell (going right to left) which is then countered by a series of subwells with decreasing probability density. Because the valence band changes are identical in each of the subwells, this increase in probability density in the first subwell followed by a decrease across the subsequent subwells serves to counter itself, in addition to the small changes within each individual subwell. \begin{figure}[h!] \begin{center} \subfigure{\includegraphics[width=0.49 \columnwidth]{InSb_total_alpha.pdf}\label{sfig:insb_total_alpha}} \hfill \subfigure{\includegraphics[width=0.49 \columnwidth]{GaInAs_total_alpha.pdf}\label{sfig:gainas_total_alpha}} \caption{ Total Rashba spin-orbit coupling in InSb/$\text{Al}_{0.2}\text{In}_{0.8}\text{Sb}$ (left plot) and InAs/$\text{In}_{0.2}\text{Ga}_{0.8}\text{As}$ (right plot) continuously-alloyed (solid line), discretely-alloyed (dotted line), and digitally-alloyed (dashed line) triangular quantum wells. Continuous and discrete alloys are almost indistinguishable.\label{fig:total_alpha} } \end{center} \end{figure} There is a moderate increase in the magnitude of the interface contribution to the Rashba spin-orbit interaction as doping is increased. This has its origin in two parts: the first is the increase in electrostatic potential on one side of the well due to the internal electric field between the carriers in the well and the dopant layer, leading to a larger asymmetry across the wells in the valence band.
The second part is the increase in wavefunction tunneling into the barrier as the internal electric field increases, leading to another increase in the asymmetry of the well and therefore an increasing interfacial Rashba contribution. The sum of the two contributions is plotted for the quantum wells in figure \ref{fig:total_alpha}. Because of the almost complete absence of the interface contribution in the digital alloys, combined with an electric field contribution similar to that found in discrete or continuous quantum wells, the total Rashba spin-orbit interaction in the digitally alloyed wells is considerably different from that of the continuous and discrete counterparts. This is perhaps surprising, given the similarity of the digital and continuous alloy wavefunctions, and it further reinforces the importance of the interface term in Rashba calculations. Aside from the offset due to the differences in the interface terms, the similarity of the electric field terms in digital alloying compared to discrete and continuous alloying causes the high-doping regimes, where the electric field term dominates, to show similar trends. This has its origin in the greater sensitivity of the electric field contribution to the internal electric field, compared to the interface contribution, as doping is increased. To summarize, digitally alloyed quantum wells have substantially different Rashba spin-orbit couplings compared to continuously or discretely alloyed wells, even changing sign in some doping regimes. This has its origin in the near absence of the interface contribution in digital alloys. As the electric field contribution is more sensitive to doping than the interface contribution, the trend as a function of doping density in the high-doping regime in digital alloys is similar to the trend in continuously or discretely alloyed wells. This is likely to be an important consideration in the further experimental growth of novel quantum wells for topological insulators and other applications.
\begin{acknowledgements} Thanks to J. Carlos Egues, P. M. Koenraad, M. B. Santos, and S. Q. Murphy for several helpful conversations. Thanks to Craig E. Pryor for collecting the band parameters. This project was supported by the US National Science Foundation under Grant~\mbox{MRSEC DMR-0080054}. \end{acknowledgements}
\section{Introduction} Event correlation reasoning is critical for AI systems since it benefits many downstream tasks. Figure~\ref{fig:intro_case} lists several examples of event-related reasoning tasks. Figure~\ref{fig:intro_case}(left) is an example of abductive reasoning, which aims to infer the most plausible explanation for incomplete observations. Given the two observations, ``\textit{Andrew was very drowsy}'' and ``\textit{Now he is very alert}'', we could infer that ``\textit{he took a long nap}'' is more plausible than ``\textit{he stayed up a long time}''. Another typical task, in Figure~\ref{fig:intro_case}(right), is script reasoning. Given a sequence of events, ``\textit{A frog was hungry. But it was late in the day, and the bugs were not to be found}'', we need to infer a potential subsequent event -- ``\textit{The frog was still hungry}''. Although formulated as different tasks, they all require event correlation\footnote{In this paper, ``correlation'' refers to associated or discourse relations (including causal and temporal relations) among events if not specified.} reasoning, i.e., inferring whether a paragraph containing multiple events conforms to human common sense. Here an \textit{event} refers to a span involving a predicate with its arguments, describing either states of entities/things or how they act in the real world, e.g., ``\textit{A frog was hungry}'', ``\textit{The bugs were not to be found}'', and ``\textit{The frog was still hungry}'' in Figure~\ref{fig:intro_case}(right). \begin{figure}[t] \centering \includegraphics[width=0.95\linewidth]{figs/intro_case_slim.pdf} \caption{\small Examples of two downstream event-related reasoning tasks, i.e., abductive reasoning (left) and script reasoning (right), as well as our created corpus (bottom). } \label{fig:intro_case} \end{figure} A major challenge for event correlation reasoning is that it requires large amounts of diverse commonsense knowledge describing correlations among events.
Although such information may exist in the large-scale unlabeled corpora from which modern pre-trained language models (i.e., BERT \citep{Devlin2019BERT}, RoBERTa \citep{Liu2019RoBERTa}, SpanBERT \citep{Joshi2020SpanBERT}) are learned, the text data is often noisy and thus cannot be used directly. First, events in the same paragraph do not necessarily have strong correlations. For example, although there are multiple events in the paragraph ``\textit{She was referring to several large cellular towers at one corner of the Sheriff's complex. Beside the towers was a large, power generator—or, at least, what we assumed to be the power generator.}'' (from the book \textit{Days Alone}), there is no strong logical correlation among them. Moreover, there is no explicit labeling of event boundaries, so traditional masked language models (MLM) or span-based models are not aware of events. Another paradigm is to leverage human-curated paragraphs with related events. \citet{sap2019atomic} collected 9 types of if-then event-based commonsense knowledge by crowd-sourcing, but this paradigm is not scalable enough in either diversity or corpus size due to cost limitations. In this paper, we propose EventBERT, a pre-trained model for general event correlation commonsense reasoning from unlabeled text. We propose a novel approach to automatically collect a large event correlation corpus. Specifically, we first identify natural language paragraphs that contain multiple events with strong correlations with each other. We then explicitly mark event spans in those paragraphs. Figure~\ref{fig:intro_case}(bottom) shows an example paragraph with three identified events (i.e., ``\textit{Jeter got control of a Major League Baseball team}'', ``\textit{he retired}'', and ``\textit{owning a relatively small percentage}'').
Furthermore, we present three novel self-supervised contrastive learning objectives, i.e., correlation-based event ranking, contradiction event tagging and discourse relation ranking, to learn correlations among events effectively. EventBERT has three advantages: 1) Since the training corpus stems from a large corpus of unlabeled data, it provides large amounts of diverse event correlation knowledge and can be applied to a variety of downstream tasks. 2) The recognition of events in paragraphs enables the pre-trained model to be aware of the events and to better understand the correlations among them. 3) The event-based self-supervised learning objectives force the model to focus on event correlation instead of token-level co-occurrence as in MLM. We evaluate EventBERT on multiple downstream tasks, including script reasoning, abductive commonsense reasoning, narrative incoherence detection, and the story cloze test. While only continually pre-trained on the mined \textsc{BookCorpus} (which has been used in previous pre-training) with a small number of updates (180k steps within 500 GPU hours), it outperforms strong baselines on 4 benchmark datasets and achieves state-of-the-art (SoTA) results on most of them. EventBERT outperforms existing pre-trained models by 6.5\%\textasciitilde23\% in a zero-shot setting. Moreover, with only a small fraction (e.g., 10\% for the story cloze test) of the task-specific training data, it can achieve performance similar to the baseline trained on all data. All these results demonstrate that EventBERT encapsulates rich eventuality knowledge for event-correlation reasoning. \section{Related Work} \paragraph{Masked Language Modeling. } Most recent works \citep{Devlin2019BERT,Liu2019RoBERTa,He2020DeBERTa} pre-train an encoder via self-supervised masked language modeling (MLM) \citep{Devlin2019BERT}, but word-level masking alone makes the model focus on local context.
Hence, this masking scheme has been extended to other text units beyond words, e.g., phrases \citep{Sun2019ERNIE1}, named entities \citep{Sun2019ERNIE1} and random spans \citep{Joshi2020SpanBERT}. It is intuitive to further extend it to an event-level scheme to capture event correlation, e.g., CoCoLM \citep{Yu20CoCoLM}, but there are two major obstacles. First, training objectives with MLM still focus more on token-level co-occurrence. For example, given the context [\textit{Andrew was very drowsy, so ?, and now he is very alert}], both candidates ``\textit{he took a long nap}'' and ``\textit{he stayed up a long time}'' are assigned similar perplexities following \citet{Davison2019Mining}. Second, an event is expressed with a sentence or a clause, so masking such an extremely long span hinders MLMs from learning the context from both sides. \paragraph{Sentence-level Pre-training. } Several objectives learn inter-sentence semantics in text, e.g., next sentence prediction (NSP) \citep{Devlin2019BERT}, sentence-order prediction (SOP) \citep{Lan2020AlBERT}, and sentence permutation \citep{Lewis2020BART}. Although NSP and SOP share a very high-level inspiration with our work in terms of training example composition, they are unlikely to explicitly acquire event correlations. First, event correlations scarcely exist within a single sentence from open-domain corpora, not to mention that these models are unaware of event boundaries. Second, they operate at the sentence level but ignore that a discourse relation (e.g., \textit{but}, \textit{then}), which couples with event(s) to compose a sentence and serves as an indispensable component of cross-event correlation, should be considered individually. Third, negatives are sampled corpus-wide (NSP) or intra-document (SOP), and are therefore trivial and less diverse. \paragraph{Commonsense-centric Pre-training.} Some works learn concept-centric commonsense knowledge. One paradigm is designing specific continual pre-training objectives.
\citet{Shen2020GLM} leverage ConceptNet \citep{Speer2017ConceptNet} to guide span masking for MLM. Similar to the denoising in BART, \citet{Zhou21Pre} consider only token-level concepts (e.g., verbs and nouns) and aim to recover the original sentence given a set of concepts or a disordered sentence. Another paradigm is transfer learning from similar commonsense tasks, such as Unicorn \citep{Lourie21UNICORN} and UnifiedQA \citep{Khashabi20UnifiedQA}, which needs extra human-labeled data. \paragraph{Event-based Pre-training.} Similarly, some approaches inject pre-trained models with rich event-related knowledge, focusing more on event correlations than the commonsense-centric ones. First, some approaches like COMET \citep{Bosselut2019COMET,hwang2020atomic2020} conduct continual pre-training on triple facts from a human-curated knowledge graph (KG) to build neural knowledge models. The KG is usually related to eventuality, e.g., ATOMIC \citep{sap2019atomic}, which depends on crowd-sourcing and thus limits the model's scalability. Moreover, the knowledge model, learned on triple facts (i.e., a pair of events with a prompt), focuses merely on pairwise correlation, thus degrading the general-purpose pre-trained model. Second, some other approaches mine event information in raw corpora instead of KGs. DEER \citep{Han2020DEER} performs continual pre-training via temporal and event masking predictions, i.e., new masking schemes for MLM, to focus on event temporal relations. \citet{Lin2020Conditional} leverage the BART structure and propose to recover a temporally-disordered or event-missing sequence to the original one with a denoising autoencoder. \citet{Wang20CLEVE} use AMR structures to design self-supervised objectives but focus only on the event detection task. In contrast, we focus on long-term event correlations underlying raw corpora and target correlation-based reasoning tasks.
\section{EventBERT} \label{sec:meth} This section begins with data collection (\S\ref{sec:meth-train_data_collect}), followed by self-supervised objectives for training (\S\ref{sec:meth-eventbert_pretrain} and Figure~\ref{fig:model_illustration}). Lastly, we detail two knowledge transfer paradigms (\S\ref{sec:meth-eventbert_eval}). \subsection{Training Data Collection} \label{sec:meth-train_data_collect} \paragraph{Text Corpus and Filtering.} We leverage \textsc{BookCorpus} \citep{Zhu2015bookcorpus}, which contains books from 16 different sub-genres (e.g., romance, historical), as our training corpus. Compared to other text corpora stemming from encyclopedias (e.g., \textsc{Wikipedia}), news \citep{Nagel2016Ccnews} and web crawls \citep{Gokaslan2019Openwebtext}, \textsc{BookCorpus} is a large collection of books (11K books with 74M sentences and 1B words) and is likely to contain diverse event correlations and rich eventuality knowledge. We then conduct corpus filtering to ensure that our training data is rich in event correlations. The filtering unit is set to the ``paragraph'', since a paragraph usually contains a complete storyline or reasoning logic (for learning efficacy) while consisting of only a few sentences (for filtering efficiency). Besides basic filtering strategies, such as removing unreadable and meaningless text pieces, we propose to filter paragraphs and keep those with strong event correlations according to discourse relation keywords (e.g., \textit{however}, \textit{while}). Specifically, we filter out paragraphs without any of the pre-defined connectives in PDTB\footnote{\url{https://www.seas.upenn.edu/~pdtb/}}, based on our observation that paragraphs with these connectives are prone to contain strong correlations among events and vice versa.
Considering that an event is usually centered on a verb (e.g., ``\textit{Andrew was very drowsy}'', ``\textit{he stayed up a long time}''), we further require that these keywords be adjacent to a verb in the dependency parsing tree, to avoid false positive cases where these keywords do not express connectives. We detail our discourse-verb association filtering strategy in Algorithm~\ref{alg:paragraph_filter}. After filtering, we get 199.9M (out of 1B) words in 3.9M paragraphs. \begin{algorithm}[!t] \small \SetKwInOut{Input}{Input} \SetKwInOut{Output}{Output} \Input{~\textsc{BookCorpus}, Discourse Relations ${\mathbb{K}}$ from PDTB.} \Output{~Filtered paragraphs, ${\mathbb{F}}$.} ${\mathbb{F}} \leftarrow \{\}$; \tcp{To store filtered paragraphs} \For{Para $\in$ \textsc{BookCorpus} \tcp{Traverse paragraph}}{ ${\mathbb{M}} \leftarrow \{\}$; \tcp{To store meta information} ${\mathcal{G}} \leftarrow \text{Dep-Parsing}(\textit{Para})$ \par ${\mathbb{V}} \leftarrow \text{Verbs in \textit{Para} according to PoS-Tagging}(\textit{Para})$ \par ${\mathbb{L}} \leftarrow$ Locate all discourse relations in \textit{Para} through lexicon-based matching with ${\mathbb{K}}$;\par \For{$r\in{\mathbb{L}}$ \tcp{Traverse discourse rel in \textit{Para}}}{ ${\mathbb{N}}\leftarrow~\text{Neighbor words of}~r~\text{over}~{\mathcal{G}}$; \par \For{$v\in {\mathbb{N}} \!\cap\! {\mathbb{V}}$ \tcp{Discourse-verb connection}}{ ${\mathbb{M}} \leftarrow {\mathbb{M}} \cup \{(r, v)\}$ } } \lIf{$|{\mathbb{M}}| > 0$}{ ${\mathbb{F}} \leftarrow {\mathbb{F}} \cup \{(\textit{Para}, {\mathbb{M}})\}$ } } \caption{\small Discourse-verb association filtering} \label{alg:paragraph_filter} \end{algorithm} \paragraph{Event Extraction and Training Set Building.} Next, it is essential to extract events in the text, since explicit event annotations facilitate the design of unsupervised learning objectives for event correlation reasoning.
Particularly, event extraction aims to find an event trigger (e.g., a predicate) and then retrieve all arguments (e.g., subject, adverb, preposition) of the trigger. Previous works usually employ supervised event extraction models \citep{Wadden2019EventExtract} or semantic role labeling \citep{Huang2018SRL4EventEx}, but both suffer from high overheads. Thus, for high efficiency, we focus only on verb-triggered events and resort to low-level syntactic features of a sentence, namely the dependency parse tree \citep{Chen2014DepParse}, which describes modification relations between words. Coupled with Part-of-Speech (PoS) tagging, we can readily extract events from a sentence: We first take a word with PoS tag ``\textit{Verb}'' as an event trigger and then retrieve the sub-tree rooted at the trigger from the dependency parse tree. Since the dependency tree is ordered, the retrieved verb-rooted sub-tree can be mapped to a span of words and regarded as an extracted event. We finally generate training examples for EventBERT as detailed in Algorithm~\ref{alg:train_set_gen}. Consequently, each training example $(x, e, r)$ denotes a paragraph, $x$, containing an event $e$ with a discourse relation $r$ ($r\in x \wedge r\notin e$). It is worth mentioning that (1) $e$ is a word span in $x$, so $x$ can also be written as $x=[x^{(fw)}, e, x^{(bw)}]$, and (2) $r$ is usually a single word but occasionally a sequence of consecutive words.
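The verb-rooted sub-tree extraction described above can be sketched in a few lines. The toy parse, the head-index encoding of the dependency tree, and the helper names below are our own illustration under stated assumptions; the paper does not publish an implementation.

```python
# Sketch of verb-triggered event extraction from a dependency parse.
# A parse is encoded as head indices (heads[i] = index of token i's head,
# -1 for the root). The event is the contiguous span covered by the
# verb-rooted sub-tree, exploiting that the tree is ordered.

def subtree_indices(heads, root):
    """Collect indices of all tokens dominated by `root` (inclusive)."""
    children = {}
    for i, h in enumerate(heads):
        children.setdefault(h, []).append(i)
    stack, out = [root], []
    while stack:
        node = stack.pop()
        out.append(node)
        stack.extend(children.get(node, []))
    return sorted(out)

def extract_event(tokens, heads, pos_tags):
    """Return (trigger, span_text) for the first verb-triggered event."""
    for i, tag in enumerate(pos_tags):
        if tag == "VERB":
            idx = subtree_indices(heads, i)
            # Map the ordered sub-tree to a contiguous word span.
            return tokens[i], " ".join(tokens[idx[0]: idx[-1] + 1])
    return None, None

tokens = ["Andrew", "was", "very", "drowsy", "yesterday"]
heads  = [1, -1, 3, 1, 1]          # "was" is root; "very" modifies "drowsy"
pos    = ["PROPN", "VERB", "ADV", "ADJ", "ADV"]
trigger, event = extract_event(tokens, heads, pos)
print(trigger, "->", event)        # was -> Andrew was very drowsy yesterday
```

In practice an off-the-shelf parser (e.g., spaCy) would supply the heads and PoS tags; the span logic itself stays the same.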
\begin{algorithm}[!t] \small \SetKwInOut{Input}{Input} \SetKwInOut{Output}{Output} \Input{~Filtered corpus ${\mathbb{F}}$.} \Output{~EventBERT Training Set ${\mathbb{D}}$.} ${\mathbb{D}} \leftarrow \{\}$; \tcp{To store training examples} \For{(Paragraph, ${\mathbb{M}}$) $\in {\mathbb{F}}$}{ ${\mathcal{G}} \leftarrow \text{Dep-Parsing}(\textit{Paragraph})$ \par \For{$(r, v) \in {\mathbb{M}}$ \tcp{discourse relation and verb}} { ${\mathcal{G}}' \leftarrow$~$v$-rooted sub-tree from ${\mathcal{G}}$; \tcp{extract event} $e \!\leftarrow\!$~Map~${\mathcal{G}}'$ to span in \textit{Paragraph}; \!\!\tcp{\!\!\!extract event} $x \leftarrow \textit{Paragraph}$ \tcp{Define context} ${\mathbb{D}} \leftarrow {\mathbb{D}}\cup \{(x, e, r)\}$; } } \caption{\small Training set building with event extraction}\label{alg:train_set_gen} \end{algorithm} \subsection{Self-supervised Learning Objectives}\label{sec:meth-eventbert_pretrain} In line with most pre-trained models \citep{Devlin2019BERT}, we use the Transformer encoder \citep{Vaswani2017Transformer} to produce contextualized embeddings. Instead of learning from scratch, we adopt an MLM-based pre-trained Transformer encoder (e.g., BERT \citep{Devlin2019BERT}, RoBERTa \citep{Liu2019RoBERTa}) and focus on further injecting event correlation knowledge. For this purpose, we propose three self-supervised contrastive learning objectives, i.e., \textit{Correlation-based Event Ranking}, \textit{Contradiction Event Tagging} and \textit{Discourse Relation Ranking}. The former two teach the model to distinguish the correct event from negative ones based on event correlations within paragraphs, while the last one helps the model identify subtle differences among discourse relations. However, it remains non-trivial to generate challenging, diverse negative events.
\paragraph{Event-based Negative Sampling.} Instead of negative sampling with a generative model like GPT \citep{Radford2019GPT2}, we retrieve diverse events from ${\mathbb{D}}$ to avoid the pattern gap between human-written and machine-generated texts. To this end, we build an event pool ${\mathbb{Q}}=\{e|\forall (x,e,r)\in{\mathbb{D}}\}$ and propose three heuristic schemes to derive candidate negative events for $e$ (e.g., $e=$``\textit{he looks very worried}'') in $(x,e,r)$: \begin{enumerate} \item \textit{Lexicon-based Retrieval}: We retrieve events from ${\mathbb{Q}}$ according to lexicon overlap with $e$, leading to nuanced negative events ${\mathbb{Q}}^{(lb)}$ (e.g., ``\textit{he dies}'') and thus challenging distractors. \item \textit{PoS-based Retrieval}: We retrieve $\bar e$ from ${\mathbb{Q}}$ according to the overlap of PoS tags between $e$ and $\bar e$, leading to syntax-similar negative events ${\mathbb{Q}}^{(pb)}$ (e.g., ``\textit{it tastes pretty good}''). \item \textit{In-domain Retrieval}: We retrieve neighbor events within five paragraphs of $e$, leading to in-domain negative events ${\mathbb{Q}}^{(id)}$ (e.g., ``\textit{he moves to my side}''). \end{enumerate} We remove $e$ itself from every ${\mathbb{Q}}^{(*)}$ and limit the size of each ${\mathbb{Q}}^{(*)}$ to $N$ ($N=3$). Then, for the positive event $e$, we sample a negative one $\bar e$ from ${\mathbb{Q}}^{(lb)}$, ${\mathbb{Q}}^{(pb)}$ and ${\mathbb{Q}}^{(id)}$ with probabilities of $20\%$, $60\%$ and $20\%$, respectively. The intuition behind these probabilities is that lexicon-based retrieval may return false negatives, whereas in-domain retrieval is likely to return trivial negatives. Lastly, we extract $M$ negative samples $\{(\bar x, \bar e, r)\}_{k=1}^M$ for $(x, e, r)$, where $\bar x\coloneqq [x^{(fw)}, \bar e, x^{(bw)}]$ and $M=5$ in our experiments.
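The sampling mixture above can be sketched as follows. The pool contents, the paragraph pieces, and the function names are hypothetical illustrations of the $20$/$60$/$20$ scheme and of the span-replacement corruption, not the authors' implementation.

```python
import random

# Sketch of event-based negative sampling: one of the three retrieval
# pools (lexicon-based, PoS-based, in-domain) is drawn with weights
# 20% / 60% / 20%, then a candidate event is drawn from that pool.
# Pool contents below are toy examples.

def sample_negatives(e, pools, probs, m=5, rng=None):
    """Draw m negative events for positive event `e` from retrieval pools."""
    rng = rng or random.Random(0)
    negatives = []
    while len(negatives) < m:
        pool = rng.choices(pools, weights=probs, k=1)[0]
        cand = rng.choice(pool)
        if cand != e:                  # never sample the positive itself
            negatives.append(cand)
    return negatives

def corrupt(x_fw, e_bar, x_bw):
    """Rebuild the paragraph x = [x_fw, e, x_bw] with the event replaced."""
    return x_fw + [e_bar] + x_bw

e = "he looks very worried"
pools = [["he dies"],                          # Q^(lb): lexicon-based
         ["it tastes pretty good"],            # Q^(pb): PoS-based
         ["he moves to my side"]]              # Q^(id): in-domain
negs = sample_negatives(e, pools, probs=[0.2, 0.6, 0.2], m=5)
bar_x = corrupt(["Before the exam ,"], negs[0], ["but he says nothing ."])
```

With a fixed seed the draw is reproducible, which is convenient when regenerating the corrupted paragraphs for the three objectives.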
\begin{figure}[t] \centering \includegraphics[width=0.45\textwidth]{figs/model_illustration_slim.pdf} \caption{\small An overview of our proposed EventBERT with three self-supervised contrastive learning objectives.} \label{fig:model_illustration} \end{figure} \paragraph{Correlation-based Event Ranking. } Based on our training example $(x,e,r)$ and the corresponding negative samples $\{(\bar x, \bar e, r)\}_{k=1}^M$, we propose the first self-supervised contrastive learning objective, named correlation-based event ranking (CER), to rank the correlation-correct paragraph $x$ against the paragraph $\bar x$ corrupted by a distracting event $\bar e$. Formally, we first pass $x=[w_1,\dots,w_n]$ and $\bar x=[\bar w_1,\dots,\bar w_m]$ into the Transformer encoder to generate contextualized embeddings, i.e., \begin{align} &{\bm{H}} = [{\bm{h}}_1, \dots, {\bm{h}}_n] \coloneqq \transformerenc( x; \theta^{(ptm)}), \label{eq:transformer_enc_pos} \\ & \bar{\bm{H}}^{(k)} = [\bar{\bm{h}}^{(k)}_1, \dots, \bar{\bm{h}}^{(k)}_m] \label{eq:transformer_enc_neg} \\ &~~~~~~~~\notag \coloneqq \transformerenc(\bar x^{(k)}; \theta^{(ptm)}), \\ &~~~~~~~~~~\notag \forall k\in[1,M], \end{align} where ${\bm{H}}\in\mathbb{R}^{d\times n}$, $\bar{\bm{H}}^{(k)}\in\mathbb{R}^{d\times m}$, the superscript $k$ in parentheses indicates the index of negative samples, and $x$ ($\bar x^{(k)}$ also) has been prefixed and suffixed with the special tokens \texttt{[CLS]} and \texttt{[SEP]}, respectively. Note the above two Transformer encoders are parameter-tied and parameterized by $\theta^{(ptm)}$.
Then, we pool a sequence of word-level contextualized embeddings (i.e., ${\bm{H}}$ or $\bar{\bm{H}}^{(k)}$) into a sequence-level representation: \begin{align} \notag {\bm{v}} \coloneqq& {\bm{h}}_{\texttt{[CLS]}} = \pool({\bm{H}})\in\mathbb{R}^{d},~~\text{and}\\ \bar{\bm{v}}^{(k)} \coloneqq& \bar{\bm{h}}^{(k)}_{\texttt{[CLS]}} = \pool(\bar{\bm{H}}^{(k)})\in\mathbb{R}^{d},~\forall k\in[1,M], \label{eq:transformer_pooling} \end{align} where $\pool(\cdot)$, as defined by \citet{Devlin2019BERT}, denotes collecting the embedding of \texttt{[CLS]} to represent the entire sequence. Next, the sequence-level representation is passed into a one-way multi-layer perceptron (MLP) to derive a correlation score of the targeted event in a paragraph, \begin{align} s =& \mlp({\bm{v}}; \theta^{(cs)})~~\text{and}~~\bar s^{(k)}=\mlp(\bar{\bm{v}}^{(k)}; \theta^{(cs)}), \label{eq:correlation_score} \end{align} where $\forall k\in[1,M]$, $s$ and $\bar s^{(k)}\in\mathbb{R}$, and $\mlp(\cdot; \theta^{(cs)})$ denotes our correlation scoring module. Lastly, to facilitate transfer to downstream multi-choice question answering, we regard this ranking problem as a contrastive classification objective and define a negative log-likelihood loss, i.e., \begin{align} \notag {\mathcal{L}}^{(\text{CER})} =& - \sum\nolimits_{\mathbb{D}} \log\!P^{(cer)}(x |x, \{\bar x^{(k)}\}_{k=1}^M), \\ P^{(cer)}(x |x, \{\bar x^{(k)}\}_{k=1}^M) \coloneqq& \mathrm{softmax}([s; \bar s^{(1)}; \dots; \bar s^{(M)}])_1, \label{eq:loss_cer} \end{align} where the subscript ``$1$'' indicates the first dimension of the $\mathrm{softmax}$ distribution, corresponding to the positive $x$. \paragraph{Contradiction Event Tagging. } Moreover, we take a closer look at the corrupted paragraphs and determine whether a word belongs to a negative-sampled event or the original one. Therefore, we define another self-supervised learning objective based on contrastive examples, called contradiction event tagging (CET), to build a binary classifier at the word level.
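Numerically, the CER objective reduces to a softmax cross-entropy over the $M{+}1$ correlation scores, with the positive paragraph in the first slot. A minimal numeric sketch (the scores below are made-up values, not model outputs):

```python
import math

# Numeric sketch of the CER loss: the positive paragraph's score s
# competes with M corrupted-paragraph scores in a softmax, and the
# loss is the negative log-probability of the first (positive) slot.

def cer_loss(s, s_bar):
    """-log softmax([s; s_bar_1; ...; s_bar_M])_1 for one training example."""
    logits = [s] + list(s_bar)
    z = max(logits)                               # stabilize the softmax
    log_norm = z + math.log(sum(math.exp(l - z) for l in logits))
    return -(s - log_norm)

loss = cer_loss(2.0, [0.5, -1.0, 0.3, 0.0, 1.2])  # M = 5 negatives
```

The loss shrinks toward zero as the positive score dominates the negatives, which is the ranking behavior the objective rewards.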
Formally, we first adopt the sequences of contextualized embeddings from Eq.(\ref{eq:transformer_enc_pos}-\ref{eq:transformer_enc_neg}), i.e., ${\bm{H}}= [{\bm{h}}_1, \dots, {\bm{h}}_n]$ and $\bar{\bm{H}}^{(k)}=[\bar{\bm{h}}^{(k)}_1, \dots, \bar{\bm{h}}^{(k)}_m]$. Then, we employ another one-way MLP to generate, from each word-level contextualized embedding, a probability that measures whether the word is contradictory to its context in the paragraph, i.e., \begin{align} P^{(cet)}\left({\bm{h}}\right) =& \mysigmoid(\mlp({\bm{h}} ; \theta^{(cet)})), \end{align} $\forall{\bm{h}}\in \{{\bm{H}}\}\cup\{\bar{\bm{H}}^{(k)}\}_{k=1}^M$. Finally, we define a binary cross-entropy loss for the contradiction event tagging objective: \begin{align} \notag {\mathcal{L}}^{(\text{CET})} &= - \sum_{(x,e,r)\in{\mathbb{D}}} \biggl( \sum_{i\in e}\log \left(1-P^{(cet)}\left({\bm{h}}_i\right)\right) \\ &+ \sum_{\{(\bar x, \bar e, r)\}_{k=1}^M}\sum_{i\in\bar e}\log P^{(cet)}\left(\bar{\bm{h}}^{(k)}_i\right) \biggr). \label{eq:loss_cet} \end{align} \paragraph{Discourse Relation Ranking. } To further exploit the event-correlation information underlying paragraphs, it is promising to consider another kind of negative sampling, from the perspective of ``discourse relation''. That is, we can sample several negative discourse relations $\bar r$ to corrupt $r$ in the paragraph $x$, and then require our model to distinguish them. To this end, we propose the third self-supervised contrastive learning objective, discourse relation ranking (DRR), whose target is very similar to that of CER -- ranking the original paragraph over the corrupted ones. We therefore share the learnable parameters of DRR with those of CER to improve correlation-based ranking and enhance the correlation scoring module. Specifically, we first sample $M$ negative relations $\bar r$ from the PDTB keywords ${\mathbb{K}}$ for $(x,e,r)$.
To avoid false negatives, we remove the keywords with the same category\footnote{A taxonomy of discourse relations (e.g., `\textit{after}' for temporal, `\textit{so}' for causal) is defined in Appendix~C of the PDTB manual.} as $r$ from ${\mathbb{K}}$. Also, we denote the $\bar r$-corrupted paragraph as $\bar x$, where the corruption is achieved by \textit{word} (sometimes \textit{phrase}) replacement. Then, we pass $x$ and $\bar x$ individually into the Transformer encoder $\theta^{(ptm)}$ as in Eq.(\ref{eq:transformer_enc_pos}-\ref{eq:transformer_pooling}), followed by the correlation scoring module $\theta^{(cs)}$ as in Eq.(\ref{eq:correlation_score}). Lastly, identical to Eq.(\ref{eq:loss_cer}) in CER, we can define DRR's training loss as \begin{align} {\mathcal{L}}^{(\text{DRR})} = - \sum\nolimits_{\mathbb{D}} \log P^{(drr)}(x |x, \{\bar x^{(k)}\}_{k=1}^M). \label{eq:loss_drr} \end{align} Consequently, the learning objective of EventBERT is to minimize ${\mathcal{L}} = {\mathcal{L}}^{(\text{CER})}+{\mathcal{L}}^{(\text{CET})}+{\mathcal{L}}^{(\text{DRR})}$. \subsection{Knowledge Transfer to Downstream Tasks} \label{sec:meth-eventbert_eval} After pre-training, EventBERT can be readily transferred to a wide range of event-centric downstream tasks. Since EventBERT is equipped with rich event-correlation knowledge via contrastive learning, the transfer can be achieved by either zero-shot evaluation or supervised fine-tuning. \paragraph{Zero-shot Evaluation. } Zero-shot evaluation of a pre-trained model on downstream tasks is of vital significance since it can verify whether the targeted knowledge was successfully injected into the pre-trained model. Empowered by the well-trained correlation scoring module $\theta^{(cs)}$, it is straightforward to adapt EventBERT to a zero-shot scenario for tasks in various formats, because $\theta^{(cs)}$ coupled with $\theta^{(ptm)}$ can assign a text a score that measures whether the text correctly expresses event correlations.
Take multi-choice or Cloze-type question answering as an example. It aims at choosing the answer $a^{*}$ from a candidate set ${\mathbb{A}}$ that best fits into a given context $[x^{(fw)}, ?, x^{(bw)}]$ -- i.e., assigning the highest score to $[x^{(fw)}, a^{*}, x^{(bw)}]$. In this case, EventBERT can directly derive plausibility scores for all candidate answers and then return the answer with the highest score, i.e., \begin{align}\label{eq:event_zero_shot} \notag a^* = \arg&\max\nolimits_{a\in{\mathbb{A}}} \mlp\Bigl(\pool\bigl(\transformerenc\\ &~~~~~~([x^{(fw)}, a, x^{(bw)}];\theta^{(ptm)})\bigr);\theta^{(cs)}\Bigr). \end{align} In contrast, traditional MLM-based pre-training cannot be directly applied to such a task, but, following \citet{Davison2019Mining}, we can adopt an autoregression-like operation to greedily generate the words masked by a candidate answer, where ``autoregression-like'' means we only recover a single \texttt{[MASK]} back to an original word in each feed-forward process and then repeat, and ``\textit{greedily}'' means the recovered word is assigned the highest MLM probability in each process. Thus, an MLM-based pre-trained model can perform zero-shot evaluation on this task by \begin{align}\label{eq:roberta_zero_shot} \notag a^* = \arg\max\nolimits_{a\in{\mathbb{A}}} P^{(mlm)}&(a~~|~~[x^{(fw)}, \texttt{[MASK]},\\ & \dots, \texttt{[MASK]}, x^{(bw)}]), \end{align} where the number of \texttt{[MASK]} tokens equals the number of words in the corresponding $a$. \paragraph{Supervised Fine-tuning. } Following the common practice in most pre-trained Transformers \citep{Devlin2019BERT,Liu2019RoBERTa}, we can fine-tune $\theta^{(ptm)}$ from EventBERT with a task-specific prediction module $\theta^{(task)}$ on a task in a supervised manner.
For example, given a multi-choice question answering task again, we pass each candidate $a\in{\mathbb{A}}$ with the context $[x^{(fw)}, ?, x^{(bw)}]$ into $\pool(\transformerenc(\cdot;\theta^{(ptm)}))$ to produce a sequence-level representation, and then feed the representation into an MLP-based scorer, $\mlp(\cdot;\theta^{(task)})$, for a plausibility score of the candidate. Here, $\theta^{(task)}$ is randomly initialized and then updated together with $\theta^{(ptm)}$ towards the task-specific learning objective during fine-tuning. \begin{table}[t]\small \centering \resizebox{\columnwidth}{!}{ \begin{tabular}{lc} \toprule \textbf{Method} & \textbf{ACC (\%)} \\ \midrule SGNN + Int\&Senti \citep{Ding2019Event} & 56.03 \\ \midrule Random & 20.00 \\ PMI \citep{Chambers2008Unsupervised} & 30.52 \\ Event-Comp \citep{Wilding2016What} & 49.57 \\ SGNN \citep{li2018constructing} & 52.45 \\ RoBERTa-base \citep{Liu2019RoBERTa} & 56.23 \\ RoBERTa-large \citep{Liu2019RoBERTa} & 61.53 \\ RoBERTa + knwl. \citep{Lv2020Integrating} & 58.66 \\ EventBERT (\textbf{ours}) & \textbf{63.50} \\ \bottomrule \end{tabular}} \caption{\small Fine-tuning results on script reasoning. The first part involves extra human-crafted data, e.g., a curated KG or transfer from other tasks. } \label{tab:sr} \end{table} \section{Experiments} \paragraph{Datasets.} We evaluate on 4 datasets for 4 downstream tasks, i.e., MCNC \citep{li2018constructing} for script reasoning, \emph{{\usefont{T1}{pzc}{m}{n} ART}{}} \citep{bhagavatula2019abductive} for abductive commonsense reasoning, ROCStories \citep{mori2020finding} for narrative incoherence detection, and the story cloze test \citep{mostafazadeh2016corpus}. All of them are independent of our pre-training corpus. \begin{itemize} \item \textbf{Multi-choice narrative cloze (MCNC).} Given an event chain, it aims to select the most plausible subsequent event from 5 candidates. We follow the official data split with 140,331/10,000/10,000 samples in the training/dev/test sets.
\item \textbf{\emph{{\usefont{T1}{pzc}{m}{n} ART}{}}.} We evaluate the abductive NLI task on the \emph{{\usefont{T1}{pzc}{m}{n} ART}{}} dataset. Given two observations about the world, it aims to select the most plausible explanatory hypothesis from two choices. We also follow the official data split with 17,801/1,532/3,059 examples for training/dev/test. \item \textbf{ROCStories.} ROCStories is a well-organized story corpus. We follow \citet{mori2020finding} in using it for narrative incoherence detection, where one random sentence is removed from each five-sentence story. The goal is to predict the missing position. We use the same data split as \citet{mori2020finding}, with 78,528/9,816/9,817 examples in the training/dev/test sets. \item \textbf{Story Cloze Test.} Based on ROCStories, it aims to choose the right ending from two alternative endings for a four-sentence context. We follow the official data split by \citet{mostafazadeh2016corpus}, with 98,161/1,871/1,871 examples in the training/dev/test sets. \end{itemize} \begin{table}[t]\small \centering \begin{tabular}{lc} \toprule \textbf{Method} & \textbf{ACC (\%)} \\ \midrule McQueen* \citep{Mitra2019Exploring} & 84.18 \\ UNICORN \citep{Lourie21UNICORN} & 78.30 \\ \midrule Random* & 50.41 \\ BERT-base* \citep{Devlin2019BERT} & 63.62 \\ BERT-large* \citep{Devlin2019BERT} & 66.75 \\ RoBERTa-large* \citep{Liu2019RoBERTa} & 83.91 \\ HighOrderGN* & 82.04 \\ CALM \citep{Zhou21Pre} & 77.12 \\ EventBERT (\textbf{ours}) & \textbf{84.51} \\ \bottomrule \end{tabular} \caption{\small Fine-tuning results on abductive commonsense reasoning. *from the leaderboard.
The first part involves extra human-crafted data.} \label{tab:acr} \end{table} \begin{table}[t] \centering \resizebox{0.95\columnwidth}{!}{ \begin{tabular}{lc} \toprule \textbf{Method} & \textbf{ACC (\%)} \\ \midrule Random & 20.00 \\ RoBERTa-large \citep{Liu2019RoBERTa} & 73.94 \\ Max-pool Context \citep{Yusuke2020Finding} & 35.00 \\ GRU Context \citep{Yusuke2020Finding} & 52.20 \\ EventBERT (\textbf{ours}) & \textbf{75.03} \\ \bottomrule \end{tabular}} \captionof{table}{\small Fine-tuning results on narrative incoherence detection. } \label{tab:nid} \end{table} \begin{table}[t] \centering \resizebox{\columnwidth}{!}{ \begin{tabular}{lc} \toprule \textbf{Method} & \textbf{ACC (\%)} \\ \midrule Random & 50.00 \\ HCM \citep{chaturvedi2017story} & 77.60 \\ val-LS-skip \citep{srinivasan2018simple} & 76.50 \\ Finetuned Transformer LM \citep{radford2018improving} & 86.50 \\ RoBERTa-large \citep{Liu2019RoBERTa} & 87.10 \\ EventBERT (\textbf{ours}) & \textbf{91.33} \\ \bottomrule \end{tabular}} \captionof{table}{\small Fine-tuning on story cloze test.} \label{tab:sct} \end{table} \begin{figure}[t] \begin{center} \centering \begin{minipage}[t]{\linewidth} \centering \includegraphics[width=0.47\linewidth]{figs/event_results.pdf} \includegraphics[width=0.51\linewidth]{figs/zero-shot.pdf} \caption{\small \textbf{Left}: fine-tuning results on SR, ACR, NID and SCT with averages and variances over 10 runs, which yield small p-values ($<10^{-7}$) and verify significant improvements over the base model. \textbf{Right}: zero-shot evaluation compared to DeBERTa and RoBERTa. } \label{fig:result_var} \label{fig:zero} \end{minipage}% \end{center} \end{figure} \paragraph{Setups.} For continual pre-training, we use RoBERTa-large \citep{Liu2019RoBERTa} as our base model. We adopt Adam \citep{KingmaB2014Adam} with a learning rate of 1e-4. The maximum training step and the linear warmup step of the learning rate are set to 200K and 5K, respectively. The dropout ratio and batch size are 0.1 and 200.
Each sample includes a positive example, 5 negatives for correlation-based event ranking and 5 negatives for discourse relation ranking. The weight decay is set to 0.01. The maximum sequence length is 128. Pre-training is conducted on 8$\times$A100 GPUs and takes $\sim 3$ days. For downstream fine-tuning, we run 10 random seeds and keep the fine-tuned model with the best dev accuracy for testing. We use the Adam optimizer with a learning rate of 1e-5 and a linear warmup step of 1K. The dropout, batch size, weight decay and gradient clipping are 0.1, 32, 0.01 and 1.0, respectively. \subsection{Main Evaluation and Quantitative Analysis} \paragraph{Supervised Fine-tuning. } We take accuracy as the evaluation metric. Tables \ref{tab:sr}, \ref{tab:acr}, \ref{tab:nid} and \ref{tab:sct} compare supervised fine-tuning results on task-specific training data for the four datasets. EventBERT significantly outperforms RoBERTa-large on all datasets (means, variances and p-values are in Figure~\ref{fig:result_var}(left)). For example, it outperforms RoBERTa by 2\% on script reasoning, and the improvement on the story cloze test is even larger (i.e., 4\%). EventBERT achieves state-of-the-art performance on script reasoning, narrative incoherence detection, and the story cloze test, and it outperforms all baselines of the same model size that use no additional data. This superiority demonstrates that EventBERT is a general event-based correlation model that can be applied to a wide range of downstream tasks. \begin{figure*}[t] \centering \includegraphics[width=\textwidth]{figs/case_study.pdf} \caption{\small Case study (left) and error analysis (right).}\label{fig:case_study} \end{figure*} \begin{figure}[t] \begin{center} \centering \begin{minipage}[t]{\linewidth} \centering \includegraphics[width=0.52\linewidth]{figs/table_ablation_1.pdf} \includegraphics[width=0.46\linewidth]{figs/few-shot.pdf} \caption{\small \textbf{Left}: ablation on story cloze test.
\textbf{Right}: impact of training data size.} \label{fig:few} \label{fig:abl} \end{minipage} \end{center} \end{figure} \begin{figure}[t] \begin{center} \centering \begin{minipage}[t]{\linewidth} \centering \includegraphics[width=0.52\linewidth]{figs/glue.pdf} \includegraphics[width=0.46\linewidth]{figs/pretrained_data.pdf} \caption{\small \textbf{Left}: results on natural language understanding tasks. \textbf{Right}: CER/DRR accuracy on a held-out test set (a random split of 2\%, about 77K examples) from our pre-training data.} \label{fig:glue} \label{fig:data_quality} \end{minipage} \end{center} \end{figure} \paragraph{Zero-shot Evaluation.} We apply EventBERT and its competitors to downstream tasks without fine-tuning (as in \S\ref{sec:meth-eventbert_eval}). Figure~\ref{fig:zero}(right) shows results on script reasoning (SR), abductive commonsense reasoning (ACR), and the story cloze test (SCT). RoBERTa/EventBERT is inapplicable to narrative incoherence detection in the zero-shot setting because an extra MLP is needed. RoBERTa performs similarly to random guessing, while EventBERT surpasses both baselines by a large margin: RoBERTa only gets 52.6\% accuracy, and EventBERT outperforms it by 23\% in absolute value, verifying EventBERT's effectiveness when no task-specific training data is available. \paragraph{Ablation Study.} We conduct an ablation study on the story cloze test in Figure \ref{fig:abl}(left) to investigate the effect of each component. We first remove each of the three objectives in \S\ref{sec:meth-eventbert_pretrain} during pre-training. The correlation-based event ranking (CER) objective plays a critical role, and the accuracy drops by 2.36\% without it. Besides, ablating contradiction event tagging (CET) and discourse relation ranking (DRR) also leads to decreases of 0.99\% and 1.37\%, respectively. We also replace the three proposed objectives with an existing span-based masked language model objective for pre-training, which is denoted as ``Event-Span MLM''.
Compared to our proposed event-correlation-based objectives, the traditional span-based masked language model cannot fully leverage the event-based corpus, and EventBERT outperforms it by 3\%. \paragraph{Impact of Training Data Size.} Figure \ref{fig:few}(right) plots the accuracy of RoBERTa and EventBERT on the story cloze test with various sizes of training data. First, when more training data is used, the accuracy of both increases since they can learn more domain-/task-specific knowledge. Second, EventBERT outperforms RoBERTa by a large margin (i.e., 4\%\textasciitilde6\%), and the margin is consistent, which verifies that EventBERT can conduct event-based reasoning better with less task-specific data. \paragraph{Capability Retention for Natural Language Understanding (NLU) Tasks.} To verify that EventBERT remains competitive in NLU after continual pre-training with our objectives, we fine-tune RoBERTa and EventBERT on several NLU tasks (natural language inference and relatedness), and report the average and variance over 5 runs on the dev sets in Figure \ref{fig:glue}(left). \paragraph{Insight into the Built Corpus. } To check the quality of our created corpus ${\mathbb{D}}$ and verify the incompetence of MLM at event learning, we conduct evaluations on CER and DRR in Figure~\ref{fig:data_quality}(right). We use accuracy (equivalent to Hits@$1$) as the metric, and these evaluations follow \S\ref{sec:meth-eventbert_eval}. MLM-based RoBERTa (already pre-trained on the whole \textsc{BookCorpus}) underperforms on correlation-based masking, even on (almost) word-level discourse relation masking, demonstrating that RoBERTa is unlikely to have learned event-correlation knowledge and that our built data is non-trivial and challenging for prior pre-trained models. Moreover, even after fine-tuning RoBERTa on our corpus via MLM (i.e., Event-Span MLM), the performance is still far from satisfactory, verifying that MLM is incompetent at correlation learning.
\subsection{Case Study, Error Analysis and Limitations} \paragraph{Case Study.} Figure~\ref{fig:case_study}(left) shows examples where EventBERT finds the correct ending for a story while the baseline selects a wrong option. The main reason might be that RoBERTa is more concerned with token-level co-occurrence, while EventBERT takes events as units and focuses on the correlations among them. For example, RoBERTa chooses ``\textit{I burned it in a fire}'' for the 2nd case, which might be due to the strong correlation between \textit{dry} and \textit{burn}. In contrast, EventBERT understands that ``\textit{I found a flower in a field that I've never seen before}'' and ``\textit{I dried out the flower head}'', and infers that it is more plausible that ``\textit{I flattened it for my journal}''. \paragraph{Error Analysis.} We also show some error cases in Figure~\ref{fig:case_study}(right). As we can see, it is difficult to select the correct option for some examples. Take the first case as an example: although according to the context we can infer that it is more likely to choose ``\textit{The cat was really slow}'', the option itself is somewhat contradictory to common sense, since a cat is usually fast, leading to confusion during inference. We leave such complex situations for future work. \paragraph{Limitations.} First, EventBERT focuses on correlation-based event reasoning and is not general enough to cover every event reasoning task (e.g., event temporal reasoning as in \citep{Han2020DEER,Lin2020Conditional}). Second, we evaluate EventBERT on deterministic tasks, e.g., multi-choice and Cloze-type question answering, due to their stable metrics and widely available baselines. Third, we only have access to limited computational resources and perform continual pre-training from RoBERTa, while previous works \citep{Gu2020PubMedBERT} verify that pre-training from scratch with task-specific corpora and objectives yields great improvements.
\section{Conclusion} We propose to pre-train a general model for event correlation reasoning from unlabeled text. To achieve that, we create a corpus by filtering out paragraphs without strong event correlation and further extracting events for the remaining ones. Then we present three correlation-based self-supervised objectives for pre-training. The derived model, EventBERT, outperforms strong baselines on 4 downstream tasks in both zero-shot and supervised fine-tuning settings. \bibliographystyle{acl_natbib}
\section{Introduction} The chiral quark model (CQM) \cite{1,2,Polosa:2000ym} gives a good phenomenological description of chiral symmetry breaking and reasonable values for the Gasser-Leutwyler coefficients, but it does not describe meson states with masses $\sim 1 $ GeV and does not provide a model for chiral symmetry breaking; it simply assumes that this takes place and incorporates the lowest dimensional operators compatible with the symmetry breaking pattern. The Nambu--Jona-Lasinio model (NJL) \cite{3,7,brz,Pallante:1993jg} does provide a specific model for chiral symmetry breaking by assuming strong attractive forces in the scalar channel. It predicts a light narrow scalar particle, the elusive $\sigma$ particle, the would-be chiral partner of the pion. But unitarization studies combined with the large $N_c$ limit \cite{pelaez} suggest that such a particle is a dynamical resonance and not a genuine narrow QCD resonance. Thus this simple model of chiral symmetry breaking is clearly disfavoured. The possibility that the phenomenology of low energy QCD can be captured by a hybrid model, where some features of both models are retained, was investigated in \cite{aet,ea}. The aim of these works was to write a very general low-energy model of QCD containing all possible operators compatible with the symmetries of the model and then let phenomenology decide the respective importance of the different terms. The model is understood to be valid in the chirally broken phase (so, as in the CQM, no specific model of chiral symmetry breaking is assumed). In this model the pion stands alone, and the partner of the $\sigma$ particle (which is identified with a well-established resonance, the $f_0$(980), in the isoscalar channel) is the $\pi^\prime$(1300) in the isovector channel. The authors named this model the Extended Chiral Quark Model (ECQM). In this work we shall explore some of the phenomenological consequences of the ECQM in the realm of vector and axial-vector decays.
We shall argue later what is the phenomenological interest of understanding these decays. For us they are basically a testing ground of the ECQM. It will be of interest to us also to compare the predictions of the ECQM to those of the NJL model. As we shall see the comparison is interesting to understand the criticality of the models on various parameters. We shall first review the extended chiral quark model of \cite{aet,ea}. After we will present our derivation of the effective lagrangian for vector and axial-vector mesons, then present our numerical predictions for the low-energy constants and conclusions. \section{The effective chiral quark model} The extended chiral quark model, ECQM, was introduced in \cite{aet,ea}. The reader is referred to these works for further details as we here present a succint description only. In Euclidean conventions its lagrangian consists of three different terms \be{ECQM} {\cal L}_{ECQM} = {\cal L}_{ch} + {\cal L}_{\cal M} + {\cal L}_{vec}, \end{equation} where \bea{ECQM1} {\cal L}_{ch} &=& {\cal L}_0 + i\bar Q \left( \not\!\! 
D + M_0 \right) Q + i \frac{4 \delta f_0}{\Lambda^2} \bar{Q} a_\mu a_\mu Q\cr &&+\frac{G_{S0}}{4N_{c} \Lambda^2}\ (\bar{Q}_L Q_R + \bar{Q}_R Q_L)^2 - \frac{G_{P1}}{4N_{c} \Lambda^2} ( - \bar{Q}_L \vec\tau Q_R + \bar{Q}_R \vec\tau Q_L)^2\cr &&+\frac{G_{S1}}{4N_{c} \Lambda^2} (\bar{Q}_L\vec\tau Q_R + \bar{Q}_R\vec\tau Q_L)^2 - \frac{G_{P0}}{4N_{c} \Lambda^2} ( - \bar{Q}_L Q_R + \bar{Q}_R Q_L)^2, \end{eqnarray} \bea{massi} {\cal L}_{\cal M} &=& i (\frac12 + \epsilon) \left(\bar Q_R {\cal M} Q_L + \bar Q_L {\cal M}^\dagger Q_R \right) \cr && + i (\frac12 - \epsilon) \left( \bar Q_R {\cal M}^\dagger Q_L + \bar Q_L {\cal M} Q_R\right)\cr && + \langle c_0\left({\cal M} + {\cal M}^\dagger\right) + c_5 ({\cal M} +{\cal M}^\dagger)a_\mu a_\mu + c_8 \left({\cal M}^2 + \left({\cal M}^\dagger\right)^2\right)\rangle , \end{eqnarray} and \bea{vect} {\cal L}_{vec} =&-& \frac{G_{V1}}{4N_c \Lambda^2} \bar Q \vec\tau \gamma_\mu Q \bar Q \vec\tau \gamma_\mu Q - \frac{G_{A1}}{4N_c \Lambda^2} \bar Q \vec\tau \gamma_5\gamma_\mu Q \bar Q \vec\tau \gamma_5\gamma_\mu Q \cr &-& \frac{G_{V0}}{4N_c \Lambda^2} \bar Q \gamma_\mu Q \bar Q \gamma_\mu Q - \frac{G_{A0}}{4N_c \Lambda^2} \bar Q \gamma_5\gamma_\mu Q \bar Q \gamma_5\gamma_\mu Q\cr &+& c_{10} \langle U \bar L_{\mu\nu} U^\dagger \bar R_{\mu\nu}\rangle. \end{eqnarray} The notation we have used is the following: $Q$ are the quark fields written in the 'constituent' or 'rotated' basis, \be{constituent} Q_L=u q_L,\qquad Q_R=u^\dagger q_R,\qquad u^2=U=\exp(2i\pi/F_0), \qquad {\cal M}= u^\dagger m u^\dagger \end{equation} $m$ is the quark mass matrix, $\not\!\!D$ is the covariant derivative defined as \be{va} \not\!\! D \equiv \not\!\partial + \not\! v - \gamma_5 \tilde g_A \not\! 
a, \end{equation} with the (antihermitian) fields \be{vector} v_\mu = \frac12 \left( u^\dagger \partial_\mu u - \partial_\mu u u^\dagger + u^\dagger \bar V_\mu u + u \bar V_\mu u^\dagger - u^\dagger \bar A_\mu u + u \bar A_\mu u^\dagger \right) \end{equation} \be{axial} a_\mu = \frac12 \left( - u^\dagger \partial_\mu u - \partial_\mu u u^\dagger - u^\dagger \bar V_\mu u + u \bar V_\mu u^\dagger + u^\dagger \bar A_\mu u + u \bar A_\mu u^\dagger \right), \end{equation} where $\bar A$ and $\bar V$ are external axial and vector fields. The parameter $M_0$ is the so-called `constituent' mass. $G_{S0}$, $G_{P1}$, $G_{V1}$ and $G_{A1}$ are constants parametrizing the four-fermion interactions (indices denote the corresponding $J,I$ channels). These couplings will eventually be reduced and fixed by comparing with the physical values of vector meson masses. $\Lambda$ is a physical UV cut-off identified with the scale of chiral symmetry breaking ($\simeq 1.4$ GeV). The reader has by now undoubtedly noticed that ${\cal L}_{ch}$ contains the usual operators of the CQM plus some four-quark operators (reminiscent of the NJL). However, we intend to describe physics in the chirally broken phase and our fields include the pion matrix $u$, as befits an effective theory that should retain only the light degrees of freedom. Also, unlike in the NJL model, the quark degrees of freedom appearing in (\ref{ECQM}) are from the very beginning 'constituent' quarks, i.e. quarks dressed by pions below the chiral symmetry breaking scale. The scalar and pseudoscalar four-quark couplings in (\ref{ECQM1}) need not be equal in order to preserve chiral symmetry (again unlike in NJL models). An additional operator is allowed by symmetry: ${\cal L}_{\cal M}$ contains the dependence on the current quark masses. Again, because of the possibility of including the $u$ field, the structure of this term is quite rich. Finally, ${\cal L}_{vec}$ includes four-quark operators in the vector and axial-vector channels.
The term \be{bare} {\cal L}_0=-\frac{f_0^2}{4} \langle a_\mu a^\mu\rangle \end{equation} and the operators whose coefficients are $c_0$, $c_5$, $c_8$ and $c_{10}$ contain contributions from those degrees of freedom with masses $\le \Lambda\simeq 1.4$ GeV. These ($c_0$, $c_5$, $c_8$ and $c_{10}$) contributions are typically small, the bulk of the contribution coming from the light resonances. They are unimportant for our present discussion, as is $\delta f_0$ in (\ref{ECQM1}). The effective lagrangian in eq. (\ref{ECQM1}) is the most general\footnote{Except for the fact that, for simplicity, not all isospin channels are included.} one compatible with the principles of gauge and chiral invariance, $CP$ invariance and locality that one can build out of quarks and pions up to, and including, operators of dimension six. It contains four-fermion pieces somewhat reminiscent of NJL, but the philosophy is different here: these terms typically will not have large coefficients to trigger chiral symmetry breaking. No specific mechanism is assumed for the latter; we just write an effective lagrangian that is compatible with it. The vector field $\bar V$ contains a piece that commutes with $u$ describing the residual gluon interactions that ultimately ensure confinement. Some of the constants and terms are somewhat non-standard. For instance, the na\"{i}ve QCD value for the parameter $\epsilon$ is $\epsilon= 0.5$, but its actual value in the low energy theory is largely unconstrained. We shall return to this later. After introducing auxiliary fields in all four channels, the effective lagrangian (\ref{ECQM}) becomes bilinear in the quark fields.
The four-fermion interaction is replaced by \be{HS} \bar{Q}\left[i\widetilde\Sigma - \gamma_5 \widetilde\Pi + \frac12 \gamma_\mu \widetilde V_\mu + \frac12 \gamma_\mu \gamma_5 \widetilde A_\mu\right] Q + 2 N_{c} \Lambda^2\left[\frac{\widetilde\Sigma ^2}{G_{S0}} + \frac{(\widetilde\Pi^a)^2}{G_{P1}} + \frac{ \left(\widetilde V^{a}_\mu\right)^2}{4G_{V1}} + \frac{\left(\widetilde A^{a}_\mu\right)^2}{4G_{A1}}\right] \end{equation} and we include an integration over the real auxiliary variables $\widetilde\Sigma, \widetilde\Pi^a, \widetilde V^{a}_\mu, \widetilde A^{a}_\mu$, defined by $\widetilde\Pi \equiv \widetilde\Pi^a \tau^a/\sqrt{2}$, $\widetilde V_\mu = \widetilde V^{a}_\mu \tau^a/\sqrt{2}$, etc. (note that the fields $\widetilde V^{a}_\mu$ and $\widetilde A^{a}_\mu$ are hermitian). This operation amounts to the replacement \be{replace} v_\mu \to {\cal V}_\mu =v_\mu - \frac12 i\widetilde V_\mu,\qquad \tilde g_A a_\mu \to {\cal A}_\mu = \tilde g_A a_\mu - \frac12 i\widetilde A_\mu, \end{equation} and to the addition of scalar ($\Sigma$) and pseudoscalar ($\Pi$) fields in the Dirac operator \be{dirac} \Sigma =M_0 + \widetilde\Sigma + \frac12 \left({\cal M} + {\cal M}^\dagger \right) +\frac{4 \delta f_0}{\Lambda^2} a_\mu a_\mu, \qquad \Pi = \widetilde\Pi + i \epsilon \left({\cal M}^\dagger - {\cal M} \right), \end{equation} which becomes \be{extdirac} \hat D=\not\!\!\partial +{\cal \not\!\! V}-\gamma_5\tilde g_A {\cal \not\!\! A} + \Sigma + i\gamma_5 \Pi . \end{equation} Since the lagrangian is now bilinear in the quark fields, we can integrate them out and solve for the mass gap. In the weak coupling regime the solution becomes \be{chibreaking} \Sigma_0\simeq M_0 + m, \end{equation} $m$ being the current quark mass. In practice the constituent mass is large enough so that a derivative expansion in inverse powers of $\Sigma_0$ makes sense at least for some range of energies. We can thus write the full quark-loop effective action.
Retaining only the logarithmically enhanced part we get \cite{aet,ea} \bea{log} {\cal L}_{1-loop} &\simeq& \frac{N_c}{16\pi^2} \ln\frac{\Lambda^2}{\Sigma_0^2}\, \langle (\Sigma^2 + \Pi^2)^2 + (\partial_\mu \Sigma)^2 + [D^{\cal V}_\mu, \Pi]^2\cr &&- 4 ({\cal A}_\mu)^2 \Sigma^2 - \{{\cal A}_\mu, \Pi\}^2 - 4i [D^{\cal V}_\mu, \Pi] \ {\cal A}_\mu \ \Sigma\ + 2i \partial_\mu \Sigma \{{\cal A}_\mu, \Pi\} \cr &&- \frac16 \left( (F_{\mu\nu}^L)^2 + (F_{\mu\nu}^R)^2\right)\rangle. \end{eqnarray} $F_{L,R}$ are field strengths constructed with ${\cal V}\pm {\cal A}$ and $D^{\cal V}$ is the covariant derivative associated to the connection ${\cal V}_\mu$. In addition, we have the mass terms for the fields $\widetilde\Sigma$, $\widetilde\Pi$, $\widetilde V_\mu$ and $\widetilde A_\mu$ coming from (\ref{HS}). In the axial channel there is some mixing between $a_\mu$ and $\tilde A_\mu$; the corresponding mass term reads \be{diag} \frac{N_{c} I_0 \Sigma_0^2}{4} \langle \frac{1}{\bar G_{A}} \widetilde A_\mu^2 + \left(i 2 \tilde g_A a_\mu + \widetilde A_\mu\right)^2\rangle. \end{equation} The coupling $\bar G_A$ is introduced so as to give a natural scale for the four-fermion terms (they turn out to be $\sim 0.1$) \be{GA} \bar G_A=2G_{A1} I_0\frac{\Sigma_0^2}{\Lambda^2}\qquad I_0=\frac{1}{4\pi^2}\ln\frac{\Lambda^2}{\Sigma_0^2}. \end{equation} This mass term can be diagonalized by defining \be{diag2} i 2 \tilde g_A a_\mu + \widetilde A_\mu = i 2 g_A a_\mu + \frac{1}{\lambda_{-}} A_\mu, \end{equation} with \be{gadef} g_A = \frac{\tilde g_A}{1 + \bar G_{A}}. \end{equation} We refer to \cite{ea} for details. $A_\mu$ is, finally, the physical axial-vector field. In the vector channel there is no mixing. Of course we have to bear in mind that we are still in Euclidean space-time. These expressions differ from the related expressions in the extended NJL model due to the presence of the bare constant $\tilde g_A$.
The constant $\lambda_-$ is determined by requiring proper normalization of the kinetic term for the $A_\mu$ field. One proceeds likewise for $\widetilde V_\mu$, finding that the properly normalized field is $V_\mu =\lambda_+ \widetilde V_\mu$. Furthermore one finds that \be{normal} \lambda^2_{+} =\lambda^2_{-} = \frac{N_c I_0}{6}. \end{equation} The values of the physical masses of the axial and vector mesons in terms of the parameters of the model can be found in Ref. \cite{ea}. Ref. \cite{ea} concentrated on the implications of the model for two-point correlators. There it was seen that, after implementing the short distance constraints coming from QCD via the Operator Product Expansion, in spite of its relatively large number of parameters, the model could be well constrained and some clear predictions emerged, comparing very favourably with the data. All the parameters in the ECQM can thus be determined (with one exception to be mentioned below). \begin{table} \caption{The parameters of the ECQM as determined in reference \cite{ea}} \label{tab:parameters} \begin{center} \begin{tabular}{|c||c|} \hline $\Lambda$ & 1.3 GeV \\ \hline $\Sigma_0$ & 200 MeV \\ \hline $g_A$ & 0.55 \\ \hline $\epsilon$ & 0.05 / -0.51 \\ \hline \end{tabular} \end{center} \end{table} There are two possible values for $\epsilon$ that are compatible with the fit of the two-point correlators and their subsequent matching to the OPE. This ambiguity will be resolved in this work. It was also seen in \cite{aet,ea} that the description of the low energy phenomenology that the extended chiral quark model provides is clearly superior to that of the NJL model. \section{Vector and axial-vector phenomenological lagrangians} In what follows we want to explore other phenomenological consequences of the extended chiral quark model by deriving the effective lagrangian relevant for the decay of vector and axial-vector mesons.
All predictions will be essentially parameter-free, as the model is rigidly fixed from the two-point correlators, as we have just indicated. The predictions for vector meson decays at order $p^3$ are actually contained in the first term in the expansion of the determinant of the generalized Dirac operator (\ref{log}). Let us introduce some notation and relations \be{defin} \nabla_{\mu} \, X \, \equiv \, \partial_{\mu} \, X \, + \, \left[ v_{\mu} \, , \, X \, \right],\qquad X_{\mu\nu}\equiv \nabla_\mu X_\nu -\nabla_\nu X_\mu, \end{equation} where $X=V,A$. \be{defin2} v^{\mu\nu}\equiv \partial ^{\mu} \, v ^{\nu} \, -\partial ^{\nu} \, v^{\mu} \, + \left[ v^{\mu} \, , \, v ^{\nu} \, \right] \end{equation} \be{defin22} -\Frac{i}{2} f_{+}^{\mu \nu}\equiv v^{\mu\nu}- \Frac{1}{4}[ \, u^{\mu} \, , \, u^{\nu} \, ] \end{equation} \be{defin3} f_{-}^{\mu \nu}\equiv \nabla ^{\mu} u^{\nu} - \nabla ^{\nu} u^{\mu} \end{equation} where $u_\mu= -2i a_\mu$ is introduced to conform to the standard notation. Let us now consider the most general strong lagrangian linear in the vector field and up to ${\cal O}(p^3)$ assuming nonet symmetry. It reads \cite{EGL89,PradesZ} \begin{eqnarray} {\cal L}_V \, & = & \, - \, \Frac{f_V}{2 \sqrt{2}} \langle \, V_{\mu \nu} \, f_{+}^{\mu \nu} \, \rangle \, - \, \Frac{i g_V}{2 \sqrt{2}} \, \langle V_{\mu \nu} \, [ \, u^{\mu} \, , \, u^{\nu} \, ] \rangle \nonumber \\ & & \, + \, i \alpha_V \, \langle \, V_{\mu} \, [ \, u_{\nu} \, , \, f_{-}^{\mu \nu} \, ] \, \rangle \, + \, \beta_V \, \langle \, V_{\mu} \, [ \, u^{\mu} \, , \, \chi_{-} \, ] \, \rangle \end{eqnarray} In the above expression \be{massterm} \chi_\pm=2B_0(u^\dagger m u^\dagger \pm u m^\dagger u)\qquad B_0(1 {\rm GeV})\simeq 1.5\ {\rm GeV} \end{equation} We do not include the odd-parity part in the above lagrangian (proportional to $\epsilon^{\alpha\beta\mu\nu}$).
For axial-vector fields \begin{eqnarray} {\cal L}_A \, & = & \, - \, \Frac{f_A}{2 \sqrt{2}} \, \langle \, A_{\mu \nu} \, f_{-}^{\mu \nu} \, \rangle \, + \, i \alpha_A \, \langle \, A_{\mu} \, [ \, u_{\nu} \, , \, f_{+}^{\mu \nu} \, ] \, \rangle \, + \, \gamma_1 \, \langle \, A_{\mu} \, u_{\nu} \, u^{\mu} \, u^{\nu} \, \rangle \, \nonumber \\ & & \, + \, \gamma_2 \, \langle \, A_{\mu} \, \{ \, u^{\mu} \, , \, u^{\nu} \, u_{\nu} \, \} \, \rangle \, + \, \gamma_3 \, \langle \, A_{\mu} \, u_{\nu} \, \rangle \, \langle \, u^{\mu} \, u^{\nu} \, \rangle \, + \, \gamma_4 \, \langle \, A_{\mu} \, u^{\mu} \, \rangle \, \langle \, u_{\nu} \, u^{\nu} \, \rangle \end{eqnarray} Again the terms containing $\varepsilon_{\mu \nu \rho \sigma}$ will not be considered in this work. Note that there is some ambiguity in the choice of overall signs. We eventually choose the sign of the axial field so as to conform to the usual conventions (implying a positive $f_A$). In order to make contact with phenomenology, we have to Wick rotate the Euclidean effective lagrangian we have obtained. Using the previous expressions, the following predictions emerge from the extended chiral quark model in the vector and axial-vector meson sector \be{result} f_V^2=N_cI_0/6,\quad\quad g_V=f_V\Frac{1-g_A^2}{2},\quad\quad \alpha _V=f_V\Frac{g_A^2}{2\sqrt{2}},\quad\quad \beta _V=f_V \Frac{3g_A M_0 \epsilon}{2\sqrt{2} B_0}, \end{equation} \be{result2} f_A=f_V g_A,\quad\quad \alpha _A=f_V\Frac{g_A}{2\sqrt{2}},\quad\quad \gamma _A ^1 =-f_V \Frac{g_A(1-g_A^2)}{2\sqrt{2}},\quad\quad \gamma _A ^2 =f_V \Frac{g_A(1-g_A^2)}{4\sqrt{2}}. \end{equation} These are the predictions of the ECQM.
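As a numerical cross-check, the relations (\ref{result}) and (\ref{result2}) can be evaluated with the parameter set of Table \ref{tab:parameters}. The sketch below is illustrative only: it assumes $f_V=0.20$ taken as input (as in Table \ref{tab:predV}), approximates $M_0\simeq\Sigma_0$, and uses $\epsilon=-0.51$ and $B_0=1.5$ GeV.

```python
# Evaluate the ECQM predictions (result)-(result2) with the 'best fit'
# parameters of Table 1.  Assumptions (not fixed by the equations alone):
# f_V = 0.20 is taken as input, M_0 is approximated by Sigma_0, and the
# solution epsilon = -0.51 is used.
import math

Nc = 3
Lam = 1.3        # UV cut-off (GeV)
Sigma0 = 0.200   # constituent mass scale (GeV)
gA = 0.55
eps = -0.51
B0 = 1.5         # GeV, at 1 GeV
fV = 0.20        # taken as input

I0 = math.log(Lam**2 / Sigma0**2) / (4 * math.pi**2)

# vector couplings, Eq. (result)
gV = fV * (1 - gA**2) / 2
alphaV = fV * gA**2 / (2 * math.sqrt(2))
betaV = fV * 3 * gA * Sigma0 * eps / (2 * math.sqrt(2) * B0)

# axial-vector couplings, Eq. (result2)
fA = fV * gA
alphaA = fV * gA / (2 * math.sqrt(2))
gamma1 = -fV * gA * (1 - gA**2) / (2 * math.sqrt(2))
gamma2 = fV * gA * (1 - gA**2) / (4 * math.sqrt(2))
```

With these inputs the couplings round to the values quoted in Tables \ref{tab:predV} and \ref{tab:predA}; note also that the predicted $f_V=\sqrt{N_c I_0/6}\simeq 0.22$ is itself close to the experimental 0.20.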
When comparing to the predictions of the NJL model \cite{PradesZ}, we note that, although the detailed expressions of our results and those of the NJL model obviously differ, at the level of the leading term in NJL there is an overall change of sign in the axial-vector couplings ($\alpha_A, \gamma_A^1,\gamma_A^2$) and also in $\beta _V$ and $\alpha _V$, perhaps due to different conventions in Minkowski and Euclidean space. An overall change of sign everywhere is of course undetectable. \section{Numerical analysis and conclusions} The previous predictions, making use of the 'best fit' presented in Table \ref{tab:parameters}, lead to the set of numerical values quoted in Tables \ref{tab:predV} and \ref{tab:predA}. While there is no need to stress here the relevance of $g_V$, $f_V$ and $f_A$, it is worth emphasizing the phenomenological impact of the other couplings. Vector meson dominance in weak non-leptonic kaon decays has been studied accurately \cite{Ecker:1992de,D'Ambrosio:1997tb}. In fact, it has been shown in Ref. \cite{D'Ambrosio:1997tb} that there are several cases, such as $K\to 2\pi /3 \pi$ and $K\to\pi ^+\pi ^0 \gamma $, where the remaining couplings are particularly interesting because the contributions of $g_V$ and $f_V$ vanish. The couplings in ${\cal L}_V$ and ${\cal L}_A$ can be determined, in principle, from the phenomenology of vector meson decays. $|f_V|$ and $|\alpha_V|$ could be obtained from the experimental widths \cite{Eidelman:2004wy} of $\rho^0 \rightarrow e^+ e^-$, $\omega \rightarrow \pi^0 \gamma$, $\omega \rightarrow \pi \pi \pi$ and $\rho \rightarrow \pi \pi \gamma$, respectively, while $g_V$ and $\beta_V$ enter in $\rho \rightarrow \pi \pi$. As for the axial-vector couplings, they can be determined from the decays $a_1^+\rightarrow 3 \pi$ ($\gamma _1 $, $\gamma _2 $, $\gamma _3 $ and $\gamma _4 $) and $a_1^+\rightarrow \pi^+ \gamma $ ($ f_A $ and $\alpha _A $) \cite{PradesZ}.
Unfortunately, however, the data are not precise enough to go beyond a good determination of $f_A$. In Tables \ref{tab:predV} and \ref{tab:predA} we collect the experimental determinations (when available). As we have emphasized, the predictions are absolutely rigid, as all the free parameters in the model are fixed beforehand from the two-point functions. \begin{table} \caption{The predictions of the ECQM for the vector couplings compared to experiment} \label{tab:predV} \begin{tabular}{|c||c|c|} \hline & ECQM & Experiment\\ \hline \hline $f_V$ & input & 0.20 \\ \hline $g_V$ & 0.07 & 0.09 \\ \hline $\alpha_V$ & 0.02 & - \\ \hline $\beta_V$ & -0.008 &-0.018 \\ \hline \end{tabular} \end{table} \begin{table} \caption{The predictions of the ECQM for the axial-vector couplings} \label{tab:predA} \begin{tabular}{|c|c|c|} \hline & ECQM & Experiment \\ \hline \hline $f_A$ & 0.11 & 0.097 \\ \hline $\alpha_A$ & 0.04 & - \\ \hline $\gamma_1$ & -0.03 & - \\ \hline $\gamma_2$ & 0.01 &- \\ \hline $\gamma_{3,4}$ & ${\cal O}(1/\sqrt{N_c})$ &- \\ \hline \end{tabular} \end{table} Comparing the experimental value for $\beta_V$ with the theoretical prediction of the model favours the value $\epsilon=-0.5$. That solves the ambiguity in the determination of $\epsilon$ we alluded to before and completely fixes the leading coupling constants of the ECQM. It is unfortunate that, except for $g_V$ and $\beta_V$, the other couplings in this parity-even sector have not been measured yet. For some of them we get results that clearly differ numerically from the predictions of the NJL model; they therefore provide a clear test of the mechanisms of chiral symmetry breaking, and their measurement is clearly interesting. Even though we have restricted ourselves to the study of non-anomalous vector and axial couplings, some interesting conclusions on the NJL model and the ECQM can be drawn.
Our numerical values certainly differ from the NJL ones, and thus measuring the low energy constants related to meson decays into pseudoscalars can be particularly telling about the mechanisms of chiral symmetry breaking in QCD and its modelization. We have been able to resolve the ambiguity in the determination of the $\epsilon$ parameter in the ECQM. Particularly useful are some VMD couplings which could be measured in the near future and might be phenomenologically relevant in K-meson decays. \section*{Acknowledgements} This work was partially supported by IHP-RTN, EC contract No.\ HPRN-CT-2002-00311 (EURIDICE). G.D. thanks the Universitat de Barcelona for its warm hospitality; D.E. would like to acknowledge the support of the grant FPA2004-04582 and the hospitality of INFN-Naples. We would like to thank A.~Andrianov for a long discussion concerning conventions and Wick rotation and J.~Portoles and J.~Prades for several comments.
\section{Introduction} \input{introduction} \subsection{Model and Definitions} \label{sec:definition} \input{definitions} \subsection{Our Results} \input{results} \subsection{Related Work} \label{sec:related} \input{related_work} \section{A Robust Algorithm for Online Packing LPs} \label{sec:algo} \input{lp_algo} \subsection{High Capacities} \label{sec:high-b} \input{lp_proof_high} \subsection{Small Capacities} \label{sec:small-b} \input{lp_proof_small} \section{Extensions and Variants} \subsection{Truthfulness} \input{truthful} \subsection{Improved Bounds for Fixed Low Capacities} \input{small_tailored} \subsection{Online Generalized Assignment Problem} \input{gap_proof} \bibliographystyle{plain}
\section{Introduction} Image states, arising from the polarization-induced interaction between an electron and a surface, offer the possibility for electron trapping at a surface. Since their original prediction\cite{CC69} for the surface of liquid and solid He, they have been extensively studied for metallic surfaces.\cite{DAG84,SH84,Fauster94,HSR97,Hoefer99} But image states also exist for dielectric surfaces provided the electron affinity of the dielectric is negative, that is, the vacuum level falls inside the gap between the valence and the conduction band. Image states are then the lowest unoccupied states and should hence allow for temporary trapping of external electrons. So far image states at a dielectric surface have only been observed for graphite,\cite{LMT99} but they are expected for other dielectrics with negative electron affinity as well, for instance, boron nitride\cite{LSG99} and the alkaline earth oxides.\cite{BKP07} Based on the idea of a two-dimensional electron surface plasma,\cite{EC87,EC88,BBD97,GMB02} electron trapping in image states has been suspected for a long time to be responsible for the build-up of surface charges at plasma walls. We have therefore recently proposed to consider the charging of a plasma wall as an electron physisorption process.\cite{BFKD08,BDF09} Indeed, for plasma walls with negative electron affinity image states should contribute to the very beginning of the charging process, when the wall carries no charges yet and the image states thus fall inside the energy gap of the wall. Only with increasing surface charge are image states expected to play a less important role, because the Coulomb barrier due to the electrons already residing on the wall shifts image states to an energy range where they are destabilized by unoccupied bulk states.
The later stages of charge collection most probably occur via surface resonances or empty volume states.~\cite{BHMF10} Regardless of its importance for charge collection at dielectric plasma walls, the electron kinetics in the image states of a dielectric surface is an interesting phenomenon in its own right. In addition, it is relevant in other physical contexts as well. For instance, (i) in electron emitters, such as cesium-doped silicon oxide films with negative electron affinity, electron emission via image states reduces the operational voltage considerably,\cite{GDK05} (ii) in gallium arsenide based heterostructures surface charging can be used for the contactless gating of field devices,\cite{BGY05} and (iii) for the alkaline earth oxides, studied in the field of heterogeneous catalysis,\cite{Hattori95,Hattori03,WE00,Freund07} the electronic surface states provide the environment for catalytic reactions. Some situations encompass electronic transitions from bulk to surface states, as is the case for electron emitters, while for others, the electron does not penetrate into the bulk and the electron kinetics takes place only in surface states. Interesting questions in this case are the probability for temporary trapping in these states, the mechanism of electron energy relaxation at the surface, and the time after which a trapped electron is released. This is the concluding paper in a series of three on the phonon-mediated physisorption of an electron in the image states of a dielectric surface. As in our previous work, Refs.~\onlinecite{HBF10a,HBF10b} (hereafter referred to as I and II), we investigate adsorption and desorption of an electron at finite temperatures assuming an acoustic longitudinal bulk phonon to control energy relaxation at the surface.
For the dielectric material we are considering, the level spacing of the lowest two bound states typically exceeds the Debye energy, implying that multi-phonon processes have to be taken into account. In I and II, we have studied desorption and sticking using an expansion of the energy-dependent T matrix,\cite{BY73,GKT80b,AM91} which allows one to calculate one- and two-phonon transition probabilities. This approach is however limited to very few materials, for instance, graphite and MgO. In the following we will adopt a different strategy, calculating multi-phonon transition probabilities due to the nonlinear electron-phonon coupling non-perturbatively. This allows us to calculate the desorption time and the sticking coefficient for the deeper surface potentials of CaO, \(\text{Al}_2\text{O}_3\) and \(\text{SiO}_2\). The remainder of the paper is structured as follows. In Sec. \ref{Electron kinetics} we briefly recall the quantum-kinetic approach to physisorption. In Sec. \ref{Electron-surface interaction and transition rates}, we calculate the multi-phonon state-to-state transition probabilities. In Sec. \ref{Results} we present our results for the desorption time and the prompt and kinetic energy-resolved and energy-averaged sticking coefficient. In this section we also discuss the time evolution of the bound state occupancy and the energy-resolved desorption flux. Section \ref{Discussion: Two state system} is devoted to the analytic treatment of a simplified two-state model, used to identify two generic physisorption scenarios into which we can classify the results of this paper as well as our previous results, before we conclude in Sec. \ref{Summary}.
\section{Electron kinetics} \label{Electron kinetics} As in I~\cite{HBF10a} and II~\cite{HBF10b} we describe the time evolution of the occupancy of the bound surface states with a quantum-kinetic rate equation.\cite{GKT80a,KG86} It captures all three characteristic stages of physisorption:~\cite{IN76,Brenig82} initial trapping, subsequent relaxation and desorption. The time dependence of the occupancies of the bound states is given by\cite{GKT80a,KG86} \begin{align} \frac{\mathrm{d}}{\mathrm{d}t}n_n(t)=&\sum_{n^\prime} \left[W_{n n^\prime} n_{n^\prime}(t) - W_{n^\prime n} n_n(t) \right] \nonumber \\ & -\sum_k W_{k n} n_n(t) +\sum_k \tau_t W_{nk} j_k(t) \label{fullrateeqn} \\ =& \sum_{n^\prime} T_{nn^\prime} n_{n^\prime}(t)+\sum_k \tau_t W_{nk} j_k(t) \text{ ,} \label{fullrateeqn2} \end{align} where \(W_{n^\prime n}\) is the probability per unit time for a transition from a bound state \(n\) to another bound state \(n^\prime\), \(W_{kn}\) and \(W_{nk}\) are the probabilities per unit time for a transition from the bound state \(n\) to the continuum state \(k\) and vice versa and \(\tau_t=2L/v_z\) is the transit time through the surface potential of width \(L\), which, in the limit \(L\rightarrow \infty\), can be absorbed into the transition probability. The matrix \(T_{nm}\) is defined implicitly by the above equation. The last term in Eqs. (\ref{fullrateeqn}) and (\ref{fullrateeqn2}), respectively, gives the increase in the bound state occupancy due to trapping of an electron in bound surface states. The probability for an approaching electron in the continuum state \(k\) to make a transition to any of the bound states is given by the prompt energy-resolved sticking coefficient, \begin{align} s_{e,k}^\text{prompt}=\tau_t\sum_n W_{nk} \text{.} \end{align} Treating the incident electron flux as an externally specified parameter, the solution of Eq. (\ref{fullrateeqn}) describes the subsequent relaxation and desorption. 
It is given by \begin{align} n_n(t)=\sum_{\kappa} e^{-\lambda_\kappa t} \int_{-\infty}^t \mathrm{d}t^\prime e^{\lambda_\kappa t^\prime} e_n^{(\kappa)} \sum_{kl} \tilde{e}_l^{(\kappa)} \tau_t W_{lk} j_k(t^\prime) \text{ ,} \label{rateqensolution} \end{align} where \(e_n^{(\kappa)} \) and \(\tilde{e}_n^{(\kappa)}\) are the right and left eigenvectors to the eigenvalue \(-\lambda_\kappa\) of the matrix \(\mathbf{T}\). If the modulus of one eigenvalue, \(\lambda_0\), is considerably smaller than the moduli of the other eigenvalues, \(\lambda_\kappa\), a unique desorption time and a unique sticking coefficient can be identified.\cite{KG86} In this case \(\lambda_0\) governs the long time behavior of the equilibrium occupation of the bound states, \(n_q^\mathrm{eq} \sim e^{- E_q/k_BT_s}\), and its inverse can be identified with the desorption time, \(\lambda_0^{-1}=\tau_e\). Then the bound state occupancy \(n_n(t)\) splits into a slowly varying part \(n_n^0(t)\) given by the \(\kappa =0\) summand in Eq. (\ref{rateqensolution}) and a quickly varying part \(n_n^f(t)\) given by the sum over \(\kappa \neq0\) in Eq. (\ref{rateqensolution}). The ``adsorbate'', i.e. the fraction of the trapped electron remaining in the surface states for times on the order of the desorption time, is given by the slowly varying part only, \(n^0(t)=\sum_n n_n^0(t)\). Differentiating \(n^0(t)\) with respect to the time, \begin{align} \frac{d}{dt}{n}^0(t)=\sum_k s_{e,k}^\text{kinetic} j_k(t) - \lambda_0 n^0(t) \text{ ,} \end{align} we can, following Brenig\cite{Brenig82}, identify the kinetic energy-resolved sticking coefficient \begin{align} s_{e,k}^\text{kinetic}=\tau_t \sum_{n,n^\prime} e_{n^\prime}^{(0)} \tilde{e}_n^{(0)} W_{n k} \text{ ,} \end{align} giving the probability for both initial trapping and subsequent relaxation.
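The separation of time scales behind Eq. (\ref{rateqensolution}) can be illustrated with a toy two-bound-state version of the matrix \(\mathbf{T}\). The rates below are made up for illustration only (the realistic ones follow from the microscopic model of the next section); they merely show how a small eigenvalue \(\lambda_0\) emerges whose inverse plays the role of the desorption time:

```python
# Toy two-bound-state rate matrix T_{nn'}: gains on the off-diagonal,
# losses (including the escape to the continuum) on the diagonal.
# All rates are hypothetical, in arbitrary inverse-time units.
import numpy as np

W_up   = 1e-6   # lower -> upper bound state (thermally suppressed)
W_down = 1e-3   # upper -> lower bound state
W_k1   = 1e-7   # lower bound state -> continuum
W_k2   = 1e-2   # upper bound state -> continuum

T = np.array([[-(W_up + W_k1), W_down],
              [W_up, -(W_down + W_k2)]])

lam = np.sort(-np.linalg.eigvals(T).real)  # the lambda_kappa of the text
lam0, lam1 = lam
tau_e = 1.0 / lam0                         # desorption time lambda_0^{-1}
```

The fast eigenvalue \(\lambda_1\) describes the internal relaxation among the bound states, while \(\lambda_0^{-1}=\tau_e\) controls the slow decay of the adsorbate.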
If the incident unit electron flux corresponds to an electron with Boltzmann-distributed kinetic energies, the prompt or kinetic energy-averaged sticking coefficient is given by \begin{align} s_e^{\dots}=\frac{\sum_k s_{e,k}^{\dots} k e^{-\beta_e E_k}}{\sum_k k e^{-\beta_e E_k}} \text{ ,} \end{align} where \(\beta_e^{-1}=k_B T_e\) is the mean electron energy. The desorption flux, that is, the flux due to an electron that is not instantly reflected at the boundary but sticks to the surface for a finite time, can also be calculated from the occupancy of the bound surface states. From Eq. (\ref{fullrateeqn}), we infer that the losses of the bound state occupancy increase the continuum state occupancy by \begin{align} \frac{\mathrm{d}n_k}{\mathrm{d}t}=\sum_n W_{kn} n_n(t) \text{ .} \end{align} As the electron remains in the surface potential for the time it needs to travel through it, the occupancy of the continuum state \(k\) is given by \(n_k=\tau_t \dot{n_k} \). To obtain the energy-resolved desorption flux we multiply the occupancy of the continuum state \(k\) with the flux \(j_k^\text{box}\), associated with the box-normalized state \(|\phi_k\rangle\).\cite{HBF10a} Thus, the energy-resolved desorption flux is given by \begin{align} j_k(t)=\tau_t j_k^\text{box} \sum_n W_{kn} n_n(t) \text{ ,} \label{eresdes} \end{align} which is well defined in the limit \(L\rightarrow \infty\). \section{Transition probabilities} \label{Electron-surface interaction and transition rates} The kinetic equations presented in the last section rely on the knowledge of the transition probabilities. They have to be calculated from a microscopic model for the electron-surface interaction. For a dielectric surface, the transitions are driven by phonons, whose maximum energy is, within the Debye model, the Debye energy \(\hbar \omega_D\).
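Before turning to the transition probabilities, note that the thermal average of the previous section is straightforward to evaluate numerically. The sketch below uses a made-up, monotonically decreasing energy-resolved sticking coefficient, not the one computed later in the paper; all units are model units:

```python
# Boltzmann average of an energy-resolved sticking coefficient,
# s_e = sum_k s_k k exp(-E_k/kT_e) / sum_k k exp(-E_k/kT_e).
# The model s_{e,k} below is purely illustrative.
import math

kB_Te = 0.1                                  # mean electron energy (model units)
ks  = [0.1 * (i + 1) for i in range(100)]    # discretized wave numbers
Es  = [k ** 2 for k in ks]                   # E_k ~ k^2 (prefactors absorbed)
s_k = [0.5 / (1.0 + E / 0.05) for E in Es]   # made-up s_{e,k}, decreasing with E

w     = [k * math.exp(-E / kB_Te) for k, E in zip(ks, Es)]
s_avg = sum(s * wi for s, wi in zip(s_k, w)) / sum(w)
```

Because the Boltzmann weight suppresses high energies, the average is dominated by the slow part of the incident flux, where sticking is most likely.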
Measuring energies in units of the Debye energy, important dimensionless parameters characterizing the potential depth are \begin{align} \epsilon_n=\frac{E_n}{\hbar \omega_D} \quad \text{ and } \quad \Delta_{nn^\prime}=\frac{E_n-E_{n^\prime}}{\hbar \omega_D} \text{ ,} \end{align} where \(E_n<0\) is the energy of the \(\text{n}^\text{th}\) bound state. In I, we introduced the following classification for the potential depth. If \(-n+1> \Delta_{12} >- n\), we call the potential n-phonon deep. For the calculations in I and II, we considered only one- or two-phonon deep potentials, for which one- and two-phonon transition probabilities are sufficient. Dielectrics with two-phonon deep potentials, such as graphite or MgO, are however an exception. Many dielectrics, for instance, \(\text{Al}_2\text{O}_3\), CaO, GaAs, or \(\text{SiO}_2\), have more than two-phonon deep potentials. Hence, the more relevant situation is physisorption in deep surface potentials for which multi-phonon transition probabilities are required. Before calculating multi-phonon transition probabilities for the one-dimensional microscopic model used in I and II, we briefly recall its main features. In short, for a dielectric surface, the main source of the attractive static electron-surface potential is the coupling of the electron to a dipole-active surface phonon.\cite{EM73} Far from the surface the surface potential arising from this coupling merges with the classical image potential and is thus \(\sim 1 / z\). Close to the surface, however, the surface potential is strongly modified by the recoil energy resulting from the momentum transfer parallel to the surface when the electron absorbs or emits a surface phonon. Taking this effect into account leads to a recoil-corrected image potential \(\sim 1 /(z + z_c)\) with \(z_c\) a cut-off parameter defined in I. Transitions between the eigenstates of the recoil-corrected image potential are due to dynamic perturbations of the surface potential.
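The depth classification introduced above is easy to automate. In the sketch below the bound state energies are placeholder numbers, not the actual values for any of the materials mentioned:

```python
# Classify a surface potential as n-phonon deep: -n+1 > Delta_12 > -n,
# with Delta_12 the spacing of the two lowest bound states in units of
# the Debye energy.  Energies here are placeholders, not material data.
import math

def phonon_depth(E1, E2, debye_energy):
    """Smallest phonon number n bridging the gap between the lowest two states."""
    delta12 = (E1 - E2) / debye_energy  # Delta_12 < 0 since E1 is the deeper state
    return math.ceil(-delta12)          # integer -Delta_12 sits on a depth threshold

# placeholder level spacing of 2.3 Debye energies -> three-phonon deep
depth = phonon_depth(E1=-3.3, E2=-1.0, debye_energy=1.0)
```

The threshold cases, where \(-\Delta_{12}\) is exactly an integer, are precisely the resonant situations for which the simple classification breaks down.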
The surface potential is very steep near the surface. A particularly strong perturbation arises therefore from the longitudinal-acoustic bulk phonon perpendicular to the surface which causes the surface plane to oscillate. The Hamiltonian from which we calculate the transition probabilities was introduced in I where all quantities entering the Hamiltonian are explicitly defined. It is given by \begin{align} H=H_{e}^\text{static}+H_{ph}+H_{e-ph}^\text{dyn} \text{ ,} \end{align} where \begin{align} H_e^\text{static}=\sum_q E_q c_q^\dagger c_q \end{align} describes the electron in the recoil-corrected image potential, \begin{align} H_{ph}=\sum_Q \hbar \omega_Q b_Q^\dagger b_Q \end{align} describes the free dynamics of the bulk longitudinal acoustic phonon responsible for transitions between surface states, and \begin{align} H_{e-ph}^\text{dyn}=\sum_{q,q^\prime} \langle q^\prime | V_p(u,z) |q\rangle c_{q^\prime}^\dagger c_q \end{align} denotes the dynamic coupling of the electron to the bulk phonon. The perturbation \(V_p(u,z)\) can be identified as the difference between the displaced surface potential and the static surface potential. It reads, after the transformation \(z \rightarrow z- z_c\), \begin{align} V_p(u,z)=-\frac{e^2 \Lambda_0}{z+u}+\frac{e^2\Lambda_0}{z} \label{compactperturb} \text{ ,} \end{align} where \(\Lambda_0=(\epsilon_s-1)/4(\epsilon_s+1)\) with \(\epsilon_s\) the static dielectric constant. 
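It is instructive to make the multi-phonon vertices contained in Eq. (\ref{compactperturb}) explicit. Expanding in powers of the displacement \(u\) (a geometric series, valid for \(|u|<z\)),

```latex
\begin{align}
V_p(u,z) &= e^2\Lambda_0\left[\frac{1}{z}-\frac{1}{z+u}\right]
          = e^2\Lambda_0\sum_{n\ge 1}(-1)^{n+1}\,\frac{u^n}{z^{n+1}} \nonumber \\
         &= e^2\Lambda_0\left[\frac{u}{z^2}-\frac{u^2}{z^3}+\frac{u^3}{z^4}-\dots\right] \text{ .}
\end{align}
```

The term of order \(u^n\) is the vertex for an n-phonon process in lowest order perturbation theory; it is these nonlinear terms that are summed non-perturbatively in the approximation adopted in this work.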
In general, multi-phonon processes can arise both from the nonlinearity of the electron-phonon coupling \(H_{e-ph}^\text{dyn}\) and from the successive actions of \(H_{e-ph}^\text{dyn}\) encoded in the T matrix equation, \begin{align} T=H_{e-ph}^\text{dyn}+H_{e-ph}^\text{dyn} G_0 T \text{ ,} \end{align} where \(G_0\) is given by \begin{align} G_0=\left(E-H_e^\text{static}-H_{ph}+i\epsilon\right)^{-1} \text{ .} \end{align} The transition probability per unit time from an electronic state \(q\) to an electronic state \(q^\prime\), encompassing both types of processes, is given by~\cite{BY73} \begin{align} W_{q^\prime q}=&\frac{2\pi}{\hbar} \sum_{s,s^\prime} \frac{e^{-\beta_s E_s}}{\sum_{s^{\prime\prime}} e^{-\beta_s E_{s^{\prime \prime}}} } \left|\langle s^\prime,q^\prime | T | s,q\rangle \right|^2 \nonumber \\ & \times \delta (E_s -E_{s^\prime} +E_q -E_{q^\prime}) \text{ ,} \label{generalTR} \end{align} where \(\beta_s = (k_B T_s)^{-1}\), with \(T_s\) the surface temperature and \(|s\rangle\) and \(|s^\prime \rangle\) the initial and final phonon states. Since we are only interested in the transitions between electronic states, it is natural to average in Eq. (\ref{generalTR}) over all phonon states. The delta function guarantees energy conservation. In our previous work, we have used an expansion of the T matrix to calculate multi-phonon transition rates. In principle, this ensures that both linear and nonlinear terms in the interaction as well as successive actions of the perturbation are taken into account up to a certain order of the phonon process. However, even for a two-phonon deep potential, taking all two-phonon processes into account is nearly impossible. The calculation becomes feasible if two-phonon processes are only taken into account for transitions not already enabled by a one-phonon process. This amounts to computing only the lowest required phonon order for a given transition, neglecting higher-order corrections to it.
For higher-order phonon processes even this simplified strategy becomes infeasible. A different approach is thus needed. From I and II we know qualitatively how relevant the different types of multi-phonon processes are for particular electronic transitions. For continuum-bound state transitions, for instance, one-phonon processes are sufficient at low electron energies. We will therefore compute the transition probability between bound and continuum states in the one-phonon approximation. For transitions between bound states, we found that multi-phonon processes due to the nonlinearity of the electron-phonon coupling tend to be more important than the multi-phonon processes due to the iteration of the T matrix, unlike what we found for bound state--continuum transitions (see I) or what \v{S}iber and Gumhalter~\cite{SG03,SG05,SG08} found in the context of atom-surface scattering. Indeed, multi-phonon processes from the iteration of the T matrix give a minor contribution, unless resonances arising from the T matrix become relevant. This happens whenever the energy difference between two bound states is a multiple of the Debye energy. Resonances then smooth out the abrupt steps in the transition probability at the depth thresholds. Since the electronic matrix element between the first and the second bound state is the largest one, this effect is most pronounced for \(|\Delta_{12}|=n\). In view of the above discussion, we expect an approximation which takes only the nonlinearity of the electron-phonon interaction non-perturbatively into account to give an acceptable first estimate for the multi-phonon transition rates. We denote this approximation the nonlinear multi-phonon approximation. In particular, it should be sufficient for the identification of the generic behavior of multi-phonon-mediated adsorption and desorption.
Calculating multi-phonon processes due to nonlinear terms in the interaction potential\cite{Manson91} amounts to a distorted-wave Born approximation with the full interaction potential. Thus, the transition probability per unit time is given by \begin{align} W_{q^\prime q}=&\frac{2\pi}{\hbar} \sum_{s,s^\prime} \frac{e^{-\beta_s E_s}}{\sum_{s^{\prime \prime}} e^{-\beta_s E_{s^{\prime \prime}}}} \left|\langle q,s| H_{e-ph}^\text{dyn} |s^\prime, q^\prime \rangle\right|^2 \nonumber \\ & \times \delta(E_s+E_q-E_{q^\prime}-E_{s^\prime})~. \label{mphTR} \end{align} To evaluate the multi-phonon transition probability, we use \(H_{e-ph}^\text{dyn}\) in the form of Eq. (\ref{compactperturb}). The transition matrix element in Eq. (\ref{mphTR}) is then given by \begin{align} &\langle q,s| H_{e-ph}^\text{dyn} |q^\prime, s^\prime \rangle\nonumber \\ &\quad =\langle s| \int_{z_c}^\infty \mathrm{d}z \phi_q^\ast(z) \left[ v(z+u)-v(z)\right] \phi_{q^\prime}(z)|s^\prime \rangle \text{ ,} \end{align} where \(v(z)=-(e^2\Lambda_0)/z\).
Introducing dimensionless variables \(x=z/a_B\), the Fourier transform of the static potential \begin{align} v(p)=\int_{x_c}^{\infty} \mathrm{d}x e^{ipx}v(x) \text{ ,} \end{align} and the state-to-state matrix element \begin{align} f_{q q^\prime}(p)=\int_{x_c}^\infty \mathrm{d}x \phi_q^\ast(x)e^{-ipx} \phi_{q^\prime}(x) ~, \end{align} the transition probability can be rewritten as \begin{widetext} \begin{align} W_{q^\prime q}=&\frac{2\pi}{\hbar} \sum_{s,s^\prime} \frac{e^{-\beta_s E_s}}{\sum_{s^{\prime\prime}}e^{-\beta_s E_{s^{\prime\prime}}}} \int_{-\infty}^\infty \frac{\mathrm{d}p}{2\pi} \int_{-\infty}^\infty \frac{\mathrm{d}\tilde{p}}{2\pi} v(p) v^\ast(\tilde{p}) f_{q q^\prime}(p) f_{q q^\prime}^\ast(\tilde{p}) \langle s | \left[e^{-i\frac{p}{a_B}u}-1\right] |s^\prime \rangle \nonumber \\ & \times \langle s^\prime | \left[ e^{i\frac{\tilde{p}}{a_B}u} -1 \right] | s \rangle \delta(E_s+E_q-E_{s^\prime}-E_{q^\prime}) \text{ .} \end{align} Using the identity \(\delta (x)=1/(2\pi) \int_{-\infty}^\infty \mathrm{d}t\text{ }e^{ixt}\) and employing \(\langle s|e^{i E_s t/\hbar}=\langle s| e^{iH_{ph} t/\hbar}\), the above expression becomes \begin{align} W_{q^\prime q}=\frac{1}{\hbar^2}\int_{-\infty}^\infty \frac{\mathrm{d}p}{2\pi} \int_{-\infty}^\infty \frac{\mathrm{d}\tilde{p}}{2\pi} v(p) v^\ast(\tilde{p}) f_{q q^\prime}(p) f_{q q^\prime}^\ast(\tilde{p}) \int_{-\infty}^\infty \mathrm{d}t\text{ } e^{i(E_q-E_{q^\prime})t/\hbar} \langle\langle \left[e^{-i\frac{p}{a_B}u(0)}-1\right] \left[e^{i\frac{\tilde{p}}{a_B} u(t)} -1\right] \rangle \rangle \label{Wqq} \end{align} with \(\langle\langle \dots \rangle \rangle = \sum_s e^{-\beta_s E_s} \langle s| \dots |s\rangle / \sum_{s^{\prime\prime}} e^{-\beta_s E_{s^{\prime \prime}}}\) the average over phonon states. 
This average can be evaluated for \(q\neq q^\prime\) employing Glauber's theorem\cite{Glauber55} which yields \begin{align} \langle \langle \left[e^{-i\frac{p}{a_B}u(0)}-1\right] \left[e^{i\frac{\tilde{p}}{a_B}u(t)}-1\right] \rangle\rangle = e^{-\frac{1}{2a_B^2}p^2\langle\langle u(0)^2 \rangle \rangle } e^{-\frac{1}{2a_B^2} \tilde{p}^2 \langle\langle u(t)^2 \rangle \rangle} e^{\frac{1}{a_B^2}p\tilde{p} \langle\langle u(0) u(t) \rangle \rangle } \label{GlauberTrick} \end{align} with \begin{align} \langle\langle u(0) u(t) \rangle \rangle = \sum_Q \frac{\hbar}{2 \mu N_s \omega_Q} \left[\left(1+n_B(\hbar\omega_Q)\right)e^{-i\omega_Q t} + n_B(\hbar\omega_Q)e^{i\omega_Q t} \right] \end{align} \end{widetext} the correlation function of the displacement field \begin{align} u=\sum_Q \sqrt{\frac{\hbar}{2\mu\omega_Q N_s}}\left( b_Q+b_{-Q}^\dagger \right)~, \end{align} where \(\mu\) is the mass of the unit cell of the lattice and $N_s$ is the number of unit cells. As in I and II we use for calculational convenience a bulk Debye model for the longitudinal acoustic phonon, although it is less justified for the high energy part of the spectrum which also enters our calculation. 
Sums over phonon momenta are thus replaced by \begin{align} \sum_Q \dots = \frac{3 N_s}{\omega_D^3}\int_0^{\omega_D} \mathrm{d}\omega \text{ } \omega^2 \dots \text{ .} \end{align} In terms of the dimensionless variables \begin{align} x=\frac{\omega}{\omega_D} \text{ , } \delta=\frac{\hbar \omega_D}{k_B T_s} \text{ , and } \tau=\omega_D t \text{ ,} \label{deltadef} \end{align} the phonon correlation function becomes \begin{align} \langle \langle u(0) u(\tau) \rangle \rangle = \frac{3 \hbar}{2 \mu \omega_D} \int_0^1 \mathrm{d}x x \left[\frac{e^{-ix\tau}}{1-e^{-\delta x}} +\frac{e^{ix\tau}}{e^{\delta x}-1} \right] \text{ .} \end{align} Hence, for the transition probability per unit time we obtain \begin{widetext} \begin{align} W_{q^\prime q}=\frac{e^4 \Lambda_0^2}{\hbar^2 \omega_D a_B^2} \int_{-\infty}^\infty \frac{\mathrm{d}p}{2 \pi} \int_{-\infty}^\infty \frac{\mathrm{d}\tilde{p}}{2 \pi} v(p) v^\ast(\tilde{p}) f_{q q^\prime}(p) f_{q q^\prime}^\ast(\tilde{p}) e^{-\frac{1}{2} \gamma p^2 q(0)} e^{-\frac{1}{2}\gamma \tilde{p}^2 q(0)} \int_{-\infty}^{\infty} \mathrm{d}\tau e^{i\Delta_{q q^\prime}\tau +\gamma p \tilde{p} q(\tau)} \label{fullmphrates} \text{ ,} \end{align} where \begin{align} q(\tau)= \int_0^1 \mathrm{d}x x \left[ \frac{e^{-ix\tau}}{1-e^{-\delta x}}+\frac{e^{ix\tau}}{e^{\delta x}-1} \right] \qquad \text{and} \qquad \gamma = \frac{3 \hbar}{2\mu a_B^2 \omega_D} \text{ .} \end{align} \end{widetext} The transition probability (\ref{fullmphrates}) contains two Debye-Waller factors, \(\exp(-\gamma p^2 q(0)/2)\) and \(\exp(-\gamma \tilde{p}^2 q(0)/2)\), governing the reduction of the transition probability with increasing surface temperature. It also contains phonon processes of all orders, as can be seen most easily from the Taylor expansion \begin{align} e^{\gamma p \tilde{p} q(\tau)}=1+\gamma p \tilde{p}q(\tau) +\frac{1}{2} \left[ \gamma p \tilde{p} q(\tau) \right]^2 +\dots~.
\label{mphexpansion} \end{align} Clearly, the second term on the right-hand side represents the one-phonon and the third term the two-phonon process. From I we know that two-phonon processes are much weaker than one-phonon processes. We expect therefore lower-order phonon processes to dominate their higher-order corrections, so that the expansion (\ref{mphexpansion}) converges quickly. As higher-order phonon processes are small compared to lower-order processes, we take, for a given \(\Delta_{qq^\prime}\), only the leading term of \(\exp( \gamma p \tilde{p} q(\tau)) \) into account, that is, the lowest-order phonon process that enables a transition between the states \(q\) and \(q^\prime\). The Fourier transformation of powers of \(q(\tau)\), however, required when (\ref{mphexpansion}) is used in (\ref{fullmphrates}), cannot be evaluated in closed form, making it necessary to construct an approximation for \(q(\tau)\). To derive an approximation for \(q(\tau)\) subject to the constraint \begin{align} \int_{-\infty}^\infty \mathrm{d}\tau e^{i\Delta_{qq^\prime} \tau} q^n(\tau)=0 \quad \text{ for } |\Delta_{qq^\prime}|>n \text{ ,} \label{vanishcondition} \end{align} which states that an n-phonon process yields a non-vanishing transition probability only for \(-n<\Delta<n\) and vanishes otherwise, we split \(q(\tau)=q^s(\tau)+q^i(\tau)\) into a contribution arising from spontaneous phonon emission, \(q^s(\tau)\), and a contribution from induced phonon emission or absorption, \(q^i(\tau)\). They are respectively given by \begin{align} q^s(\tau)=\int_0^1 \mathrm{d}x \text{ }xe^{-ix\tau} \text{ and } q^i(\tau)=2 \int_0^1 \mathrm{d}x\text{ }x \frac{\cos(x\tau)}{e^{\delta x}-1} \text{ .} \end{align} The former can be evaluated, giving \begin{align} q^s(\tau)=\frac{\cos \tau -1}{\tau^2}+i\frac{\tau \cos \tau- \sin \tau}{\tau^2} +\frac{\sin \tau}{\tau} \text{ .} \end{align} For \(q^i(\tau)\) we need to find an approximation.
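The closed form of \(q^s(\tau)\) is easily verified numerically; the following short script (an illustrative sketch, not part of the original calculation) compares it with a direct quadrature of the defining integral:

```python
import numpy as np
from scipy.integrate import quad

def qs_closed(tau):
    """Closed form of the spontaneous-emission part q^s(tau)."""
    return ((np.cos(tau) - 1.0) / tau**2 + np.sin(tau) / tau
            + 1j * (tau * np.cos(tau) - np.sin(tau)) / tau**2)

def qs_numeric(tau):
    """Direct quadrature of q^s(tau) = int_0^1 dx x exp(-i x tau)."""
    re, _ = quad(lambda x: x * np.cos(x * tau), 0.0, 1.0)
    im, _ = quad(lambda x: -x * np.sin(x * tau), 0.0, 1.0)
    return re + 1j * im

for tau in (0.3, 1.0, 5.0, 20.0):
    assert abs(qs_closed(tau) - qs_numeric(tau)) < 1e-9
print("closed form of q^s(tau) confirmed")
```

With \(q^s(\tau)\) under control, it remains to approximate \(q^i(\tau)\).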
For that purpose we look at the Fourier transform of \(q^i\), \begin{align} \int_{-\infty}^\infty \mathrm{d}\tau e^{i \Delta \tau} q^i(\tau) =\left\lbrace \begin{matrix} 2\pi \frac{|\Delta|}{e^{\delta |\Delta|}-1} & \text{for } -1 <\Delta<1 \\ 0 & \text{else} \end{matrix} \right.~. \end{align} Expanding the Fourier transform for small \(\delta|\Delta|\), \begin{align} 2\pi \frac{|\Delta|}{e^{\delta |\Delta|}-1} \approx 2 \pi \left[\frac{1}{\delta}-\frac{1}{2}|\Delta| +\frac{1}{12}\delta |\Delta|^2+\mathcal{O}(\delta^2) \right] ~, \end{align} yields a high-temperature approximation which converges quickly for the temperatures we are interested in and guarantees at the same time that the one-phonon contribution can only bridge energy differences up to \(|\Delta_{qq^\prime}|=1\). Applying the inverse transformation gives \begin{align} q^i(\tau)=&\left(\frac{2}{\delta}-1+\frac{\delta}{6} \right)\frac{\sin(\tau)}{\tau}+\frac{1}{\tau^2} \nonumber \\ &+\left(-1+\frac{\delta}{3}\right) \frac{\cos(\tau)}{\tau^2}-\frac{\delta}{3}\frac{\sin(\tau)}{\tau^3}+\mathcal{O}(\delta^2) \text{ ,} \label{htexpans} \end{align} which satisfies Eq. (\ref{vanishcondition}). Using this approximation, the Fourier transform of powers of \(q(\tau)\) can be carried out analytically. As the \(n\)-phonon process gives a vanishing transition probability at \(|\Delta|=n\), we take the maximum of the \(n\)-phonon and the \((n+1)\)-phonon process to obtain a better approximation in the vicinity of \(|\Delta|=n\). Then the Fourier transformation of \(\exp(\gamma p \tilde{p} q(\tau))\) in leading non-vanishing order is given by \begin{widetext} \begin{align} \int_{-\infty}^\infty \mathrm{d}\tau e^{i \Delta \tau +\gamma p \tilde{p} q(\tau)}\approx\left\lbrace \begin{matrix} \max (A_n,A_{n+1}) & \text{ for } n-1 < \Delta <n \\ \max (B_n,B_{n+1}) & \text{ for } -n < \Delta <-n+1 \end{matrix} \right.
\text{ ,} \label{expqap} \end{align} where \begin{align} A_n=- 2\pi \frac{\left(\gamma p \tilde{p}\right)^n}{n!} \sum_{k=0}^n {n \choose k} \sum_{j=0}^k { k \choose j} \left(-1\right)^{n+j} \left(\frac{1}{\delta}+\frac{1}{2}+\frac{\delta}{12}\right)^{n-k}\left(\frac{1}{2}+\frac{\delta}{6} \right)^{k-j}\left(-\frac{\delta}{6} \right)^j \frac{\left(\Delta-n\right)^{n+k+j-1}}{\left(n+k+j-1\right)!}+\mathcal{O}(\delta^2) \end{align} and \begin{align} B_n=2\pi \frac{\left(\gamma p \tilde{p}\right)^n}{n!} \sum_{k=0}^n {n \choose k} \sum_{j=0}^k {k \choose j} \left(-1\right)^{n+j} \left(-\frac{1}{\delta}+\frac{1}{2}-\frac{\delta}{12}\right)^{n-k}\left(-\frac{1}{2}+\frac{\delta}{6}\right)^{k-j} \left(\frac{\delta}{6}\right)^j \frac{\left(\Delta+n \right)^{n+k+j-1}}{\left(n+k+j-1\right)!}+ \mathcal{O}(\delta^2) \text{ .} \label{Bn} \end{align} \end{widetext} The approximation given by Eq. (\ref{expqap}) allows an efficient numerical evaluation of the transition probabilities (\ref{fullmphrates}). Equations (\ref{htexpans})--(\ref{Bn}) are first order in \(\delta\), which is sufficient for the materials and temperatures we are interested in. Note, however, that the expansion can be continued to higher orders in \(\delta\). The Fourier transformation of \(\exp(\gamma p \tilde{p} q(\tau) )\) is then still a polynomial in \(\Delta\) and thus amenable to numerical calculation. \section{Results} \label{Results} We now use the multi-phonon transition probability to study the electron kinetics in front of a \(\text{CaO}\), \(\text{Al}_2\text{O}_3\), and \(\text{SiO}_2\) surface. All three materials have three-phonon deep surface potentials, that is, the energy difference of the two lowest image states is between two and three Debye energies. The material parameters required for the numerical computation are summarized in Table \ref{materialtable}. All numerical results were obtained for these parameters.
Where indicated, we varied the Debye temperature to simulate different potential depths. Furthermore, the multi-phonon calculation for \(\text{CaO}\), \(\text{Al}_2\text{O}_3\), and \(\text{SiO}_2\) is compared to the one- and two-phonon calculations from I and II which are applicable to graphite and MgO. \begin{table}[t] \caption{Material parameters for the numerical results.} \centering \begin{tabular}{l l l l l} & & CaO & \(\text{Al}_2\text{O}_3\) & \(\text{SiO}_2\) \\ \hline Debye temperature \(T_D\) & \(\quad\) & \(648\) K &\(980\) K &\(470\) K \\ Dielectric constant \(\epsilon_s\) & \(\quad\) &\(12.01\) &\(9.9\) & \(3.78\) \\ TO-phonon frequency \(\hbar\omega_T\) & \(\quad\) & \(41\) meV & \(79\) meV & \(133\) meV \\ \hline \end{tabular} \label{materialtable} \end{table} \subsection{Desorption} To judge the quality of the nonlinear multi-phonon approximation derived in the previous section, we first compare in Fig.~\ref{figure1} the inverse desorption time obtained from it with the inverse desorption time obtained from our previous one- and two-phonon approximation. Shown is the dependence of \(\tau_e^{-1}\) on the Debye temperature \(T_D\), which is tuned to vary the potential depth. The dimensionless inverse temperature \(\delta\), as defined in Eq. (\ref{deltadef}), is kept constant to keep the level of phonon excitation the same while the Debye temperature is varied. The nonlinear multi-phonon approximation can, of course, only be compared with the two-phonon approximation in the range of Debye temperatures for which the potential is two-phonon deep. Calculated in the multi-phonon approximation, \(\tau_e^{-1}\) changes very little over the range of two-phonon depth, but shows steep jumps at the threshold to one- and three-phonon depth. For \(\tau_e^{-1}\) calculated in the two-phonon approximation, which is based on an iteration of the T matrix with the nonlinear electron-phonon coupling, these thresholds are washed out by the resonances.
Nevertheless, \(\tau_e^{-1}\) is of the same order of magnitude in both approximations. The main effect of the neglected resonances is the rounding-off of the drops at the thresholds. Since the steps, an artefact of taking only nonlinear multi-phonon processes into account, become less steep for deeper potentials, the nonlinear multi-phonon approximation might be even more appropriate there. The thin dotted vertical line in Fig.~\ref{figure1} corresponds to \(\text{Al}_2\text{O}_3\). Unfortunately, the potential depth is just below the threshold between two- and three-phonon depth, so that the value for \(\tau_e^{-1}\) is most likely underestimated. \begin{figure} \includegraphics[width=\linewidth]{figure1.eps} \caption{(Color online) Inverse desorption time \(\tau_e^{-1}\) as a function of the Debye temperature \(T_D\) for \(\delta=1\) (surface temperature \(T_s=T_D/\delta\)) calculated in the nonlinear multi-phonon approximation (solid red line) and the two-phonon approximation from I for two-phonon depth (dashed blue line). The surface potential is one-phonon deep for \(T_D >2000K \), two-phonon deep for \( 2000K> T_D >1000K \), three-phonon deep for \( 1000K> T_D >666K \) and four-phonon deep for \(666K > T_D >500K \). Data for \(T_D=980K\) apply to \(\text{Al}_2\text{O}_3\) (thin vertical line).} \label{figure1} \end{figure} We now move on to the study of the dependence of \(\tau_e^{-1}\) on the surface temperature. Figure~\ref{figure2} shows the inverse desorption time \(\tau_e^{-1}\) as a function of the surface temperature for graphite, MgO, \(\text{Al}_2\text{O}_3\), CaO, and \(\text{SiO}_2\). For graphite and MgO, both two-phonon deep, \(\tau_e^{-1}\) was calculated in the two-phonon approximation; for \(\text{Al}_2\text{O}_3\), CaO and \(\text{SiO}_2\), all of them three-phonon deep, the nonlinear multi-phonon approximation has been used. For all materials \(\tau_e^{-1}\) increases significantly with the surface temperature.
Comparing in Fig.~\ref{figure2} \(\tau_e^{-1}\) for \(\text{Al}_2\text{O}_3\), CaO, and \(\text{SiO}_2\), we notice that \(\tau_e^{-1}\) increases with decreasing \(\epsilon_s\) (see Table \ref{materialtable}), in accordance with the fact that a smaller \(\epsilon_s\) implies a shallower surface potential and thus faster desorption. From Fig.~\ref{figure2} we also see that for high surface temperatures desorption from the two-phonon deep potentials of graphite and MgO is, as expected, quicker than from the three-phonon deep potentials of \(\text{Al}_2\text{O}_3\), CaO and \(\text{SiO}_2\). For low surface temperature, however, \(\tau_{e}^{-1}\) for graphite decreases much more steeply than for the other materials. This might be due to the high Debye temperature of graphite: at room temperature the dimensionless inverse temperature \(\delta=T_D/T_s\), which controls the phonon excitation, is already in the low-temperature regime, where downward transitions due to spontaneous phonon emission remain constant while upward transitions are extremely temperature dependent, causing the desorption time to be equally temperature dependent. This peculiarity leads to the surprising fact that at low temperatures desorption from two-phonon deep potentials can in some cases be slower than desorption from three-phonon deep potentials. \begin{figure} \includegraphics[width=\linewidth]{figure2.eps} \caption{(Color online) Inverse desorption time \(\tau_e^{-1}\) as a function of the surface temperature \(T_s\) for graphite, MgO, \(\text{CaO}\), \(\text{Al}_2\text{O}_3\), and \(\text{SiO}_2\). } \label{figure2} \end{figure} Figure~\ref{figure2} also gives insight into the validity of the high-temperature expansion (\ref{htexpans}). Up to first order in \(\delta\), it is valid for \(\delta < 3\). This corresponds to a surface temperature of 300K for \(\text{Al}_2\text{O}_3\) and 225K for CaO.
The small upward bends at these temperatures indicate that for lower surface temperatures the expansion given by Eq. (\ref{htexpans}) should be continued to higher orders in \(\delta\). \subsection{Sticking} In II we found that one-phonon processes give much higher contributions to the sticking coefficient than two-phonon processes. For this reason we calculate the transition probabilities for continuum-bound state transitions only in the one-phonon approximation. The effect of multi-phonon processes on sticking lies in the relaxation from the state in which the electron is initially trapped to the lowest bound state. This is captured by the kinetic sticking coefficient. Before we address this question in more detail, we take a look at the prompt sticking coefficient. The prompt sticking coefficient for \(\text{Al}_2\text{O}_3\), CaO and \(\text{SiO}_2\) is presented in Fig.~\ref{figure3}. First we consider the prompt energy-resolved sticking coefficient shown in the inset. Note that the quadratic phonon density of states of the Debye model translates into an energy-resolved sticking coefficient which, apart from the discontinuities, is proportional to the electron energy. The steep jumps in the energy-resolved sticking coefficient reflect level accessibility. When the energy difference between the approaching electron and a bound state exceeds the Debye energy, one-phonon processes no longer enable sticking to this level. Since the lowest two bound states of the image potential of \(\text{Al}_2\text{O}_3\) and \(\text{SiO}_2\) have energies \(\epsilon_n <-1\), they cannot be reached by one-phonon processes from the continuum. Thus, the lowest bound state contributing to prompt sticking is the third bound state. Due to the differences in the Debye energy, the energy-resolved sticking coefficient for \(\text{SiO}_2\) is larger and increases faster for low electron energies than the sticking coefficients for \(\text{Al}_2\text{O}_3\) and CaO (not explicitly shown).
Compared to \(\text{Al}_2\text{O}_3\) and CaO, the energy-resolved sticking coefficient for \(\text{SiO}_2\) is thus strongly peaked at low electron energies. As a result, the energy-averaged sticking coefficient shown in the main panel of Fig.~\ref{figure3} is much larger for \(\text{SiO}_2\) than for \(\text{Al}_2\text{O}_3\) and CaO. \begin{figure} \includegraphics[width=\linewidth]{figure3.eps} \caption{(Color online) Prompt energy-averaged sticking coefficient for \(\text{CaO}\), \(\text{Al}_2\text{O}_3\), and \(\text{SiO}_2\) as a function of the mean energy of the electron at a surface temperature of \(T_s=300 K\). Inset: Prompt energy-resolved sticking coefficient for \(\text{Al}_2\text{O}_3\) and \(\text{SiO}_2\) as a function of the electron energy for \(T_s=300K\).} \label{figure3} \end{figure} Figure~\ref{figure4} shows the prompt and kinetic energy-averaged sticking coefficients for \(\text{SiO}_2\) as a function of the mean electron energy and the surface temperature. The prompt sticking coefficient increases slightly with temperature due to the increased contribution of induced phonon emission responsible for continuum-bound state transitions. The kinetic sticking coefficient is smaller than the prompt sticking coefficient by four to five orders of magnitude and decreases with temperature, as a higher surface temperature favors quick transitions back into the continuum after initial trapping. Depending on whether transitions from the upper bound states to the lowest state or to the continuum are more likely, the electron trickles through after initial trapping or desorbs before relaxing to the lowest bound state. For the three-phonon deep surface potentials of \(\text{Al}_2\text{O}_3\), CaO and \(\text{SiO}_2\), trickling through is suppressed, leading to a considerable reduction of the kinetic compared to the prompt sticking coefficient.
\begin{figure} \includegraphics[width=\linewidth]{figure4.eps} \caption{(Color online) Prompt (full line) and kinetic (dashed line) energy-averaged sticking coefficient for \(\text{SiO}_2\) as a function of the mean energy of the electron and the surface temperature \(T_s\).} \label{figure4} \end{figure} \subsection{Electron kinetics} So far we have calculated from the kinetic rate equation (\ref{fullrateeqn2}) the prompt and kinetic sticking coefficients and the desorption time. The rate equation contains, however, more information. For a specified electron influx or initial condition, the time evolution of the bound state occupancy and the energy resolution of the desorption flux can be calculated as well. To address the first question, we plot in Fig.~\ref{figure5} the time evolution of the bound state occupancy. Our aim is to identify the stages of physisorption and to relate them to the eigenvalues of the matrix \(T_{nm}\). The situation we are considering is a three-phonon deep potential, that is, \(-3<\Delta_{12} < -2\), with the second bound state lying more than one Debye energy below the continuum, that is, \(\epsilon_2 <-1\), so that electron trapping, due to one-phonon processes, can occur only in the third and higher bound states. The bound states \(i\ge 2\), which we call upper bound states, are linked by one-phonon transitions. To obtain the time evolution of the bound state occupancy after trapping of an electron at \(t=0\), we solve the rate equation with the initial condition \(n_i(0)=\tau_t W_{ik}\), for a specified \(k\), which is the probability that the electron is trapped in state \(i\). In Fig.~\ref{figure5}, \(k\) corresponds to an electron energy of \(E=0.05 eV\). Due to the high electron energy the third state cannot be reached by a one-phonon process. Thus trapping occurs in the fourth and higher bound states.
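The staged relaxation described in the following can be reproduced qualitatively by propagating the rate equation \(\dot{n}=\mathbf{T}n\) for a small model system. The sketch below uses purely illustrative rates (hypothetical numbers, not the actual \(\text{Al}_2\text{O}_3\) matrix elements), chosen only to separate the time scales \(\lambda_2^{-1}\ll\lambda_1^{-1}\ll\lambda_0^{-1}\):

```python
import numpy as np
from scipy.linalg import expm

# Illustrative transition matrix T for three bound states (1, 2, 3):
# slow multi-phonon coupling 1 <-> 2, fast one-phonon coupling 2 <-> 3,
# and one-phonon loss to the continuum from states 2 and 3.
# All rates are hypothetical, in arbitrary units.
W12, W21 = 1e-3, 1e-6        # 2 -> 1 and 1 -> 2 (multi-phonon, slow)
W23, W32 = 1.0, 0.5          # 3 -> 2 and 2 -> 3 (one-phonon, fast)
Wc2, Wc3 = 1e-3, 2e-3        # 2 -> continuum and 3 -> continuum
T = np.array([[-W21,               W12,          0.0],
              [ W21, -(W12 + W32 + Wc2),        W23],
              [ 0.0,               W32, -(W23 + Wc3)]])

# Decay rates lambda_0 < lambda_1 < lambda_2 set the three stages
lam = np.sort(-np.linalg.eigvals(T).real)
assert lam[2] > 100.0 * lam[1] and lam[1] > 100.0 * lam[0]

# Electron trapped at t = 0 in the highest bound state
n0 = np.array([0.0, 0.0, 1.0])
n_fast = expm(T / lam[2]) @ n0   # after the fast intra-upper relaxation
n_slow = expm(T / lam[0]) @ n0   # after the desorption time scale
assert n_slow.sum() < n_fast.sum() < n0.sum()   # continuum loss throughout
```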
In a first stage after trapping of the electron, the fast one-phonon transitions between the upper bound states dominate the electron kinetics. Due to the trapping of the electron in the fourth and higher bound states, the occupancy of the upper bound states is out of equilibrium. Over the time scale set by \(\lambda_2^{-1}\), the inverse of the third eigenvalue of \(\mathbf{T}\), the occupancy of the upper bound states relaxes towards its equilibrium value. The electron trickles through from the fourth and higher bound states to the second bound state, as can be seen from the increase in the occupancy of the second bound state \(n_2\) and the reduction of the occupancy of the third and higher bound states \(n_{q\ge3}\). \begin{figure} \includegraphics[width=\linewidth]{figure5.eps} \caption{(Color online) Time evolution of the bound state occupancy of a single electron trapped at \(t=0\) in the upper bound states of \(\text{Al}_2\text{O}_3\) at \(T_s=300K\). The thin vertical lines correspond to \(\lambda_0^{-1}\), \(\lambda_1^{-1}\), and \(\lambda_2^{-1}\), respectively, where \(-\lambda_i\) are the three lowest eigenvalues of the matrix \(\mathbf{T}\).} \label{figure5} \end{figure} Then the strong one-phonon transitions between the upper bound states and the continuum, occurring over the time scale set by \(\lambda_1^{-1}\), empty the upper bound states. The weak multi-phonon transitions from the upper states to the lowest bound state are only a small perturbation to the electron kinetics in the upper bound states, so that \(\lambda_1^{-1}\) corresponds to the desorption time for the system of the upper bound states without the lowest bound state. Until the upper bound states are emptied, a small fraction of the occupancy reaches the lowest bound state, as can be seen from the difference between the initial occupancy of the upper bound states \(n_{q \ge 2}\) and the maximum occupancy of \(n_1\).
This difference corresponds to the reduction of the kinetic with respect to the prompt sticking coefficient in Fig.~\ref{figure4}. The lowest bound state remains occupied for a much longer time, until desorption takes place at times on the order of \(\tau_{e}=\lambda_0^{-1}\). Figure~\ref{figure6} finally shows the energy-resolved desorption flux at \(t=10^{-4} s\) (given by Eq. (\ref{eresdes})) after trapping of the electron under the same conditions as in Fig.~\ref{figure5}. The final transition that sets the electron free is a one-phonon transition from one of the upper bound states to the continuum. From each bound state \(i\), one-phonon transitions are only possible to continuum states with an energy \(E\le E_i+\hbar \omega_D\). Hence, the energy-resolved desorption flux exhibits the same discontinuities as the energy-resolved sticking coefficient shown in Fig.~\ref{figure3}, located at electron energies for which one-phonon transitions between bound states and the continuum cease to be operational. \begin{figure} \includegraphics[width=\linewidth]{figure6.eps} \caption{(Color online) Energy-resolved desorption flux at \(t=10^{-4}s\) for an electron trapped at \(t=0\) in the upper bound states under the same conditions as in Fig.~\ref{figure5}. For \(\text{Al}_2\text{O}_3\) one-phonon transitions between bound and continuum states are only possible from the third and higher bound states. The small numbers give the bound state from which transitions to the continuum are no longer possible at the respective energy.} \label{figure6} \end{figure} \section{Two-state system: Discussion} \label{Discussion: Two state system} To clarify the generic behavior of electron physisorption at dielectric surfaces, and to put the results presented in the previous section and in I and II into perspective, we study a simple model of two bound states coupled to a continuum of states.
Electron physisorption occurs in the image potential, which supports a deep lowest bound state, well separated from a group of relatively closely packed upper bound states. Since the upper bound states are strongly coupled by one-phonon processes, they can be subsumed under an effective upper bound state. The effective state is then weakly coupled to the lowest bound state via multi-phonon processes and strongly coupled to the continuum via one-phonon processes. The left panel of Fig.~\ref{figure7} schematically shows the system of the two surface states. Since we are interested in deep potentials, we include only transitions between the two bound states and between the upper state and the continuum. The matrix \(\mathbf{T}\) defined by Eq. (\ref{fullrateeqn}) reads for this system \begin{align} \mathbf{T}= \begin{pmatrix} -W_{21} & W_{12} \\ W_{21} & -W_{12}-W_{c2} \end{pmatrix} \text{ ,} \end{align} where \(W_{12}\) and \(W_{21}\) are the transition probabilities from the second to the first bound state and vice versa, and \(W_{c2}\) is the transition probability from the second bound state to the continuum. For the two-state model the eigenvalues \(-\lambda_{\kappa}\) and the right and left eigenvectors \(\mathbf{e}^{(\kappa)}\) and \(\tilde{\mathbf{e}}^{(\kappa)}\) can be calculated analytically. The eigenvalues are given by \begin{align} -\lambda_{0,1}=&-\frac{1}{2}\left(W_{21}+W_{12}+W_{c2}\right) \nonumber \\ & \pm \frac{1}{2}\sqrt{\left(W_{21}+W_{12}+W_{c2}\right)^2-4 W_{21} W_{c2}} \text{ .} \end{align} Parameters of physical interest can also be obtained analytically. The desorption time, for instance, is the negative inverse of the lowest eigenvalue, \(\tau_e=\lambda_0^{-1}\). Since only the upper bound state can be reached from the continuum, prompt sticking arises solely from trapping in the upper bound state. The prompt sticking coefficient is thus given by \(s_{e,k}^\text{prompt}=\tau_t W_{2 k}\).
In the two-state model, the kinetic sticking coefficient is moreover related to the prompt sticking coefficient by \(s_e^\text{kin}=\tilde{e}_2^{(0)}s_e^\text{prompt}\). Hence, the probability for the electron to trickle through from the upper to the lower bound state is \(\tilde{e}_2^{(0)}\). For many dielectrics the weakest transitions are from the lowest bound state to the upper bound states. They are typically triggered by more than two phonons. To mimic this situation within the two-state model, we set \(W_{21} \ll W_{12},W_{c2}\). The inverse of the desorption time becomes in this limit \begin{align} \tau_e^{-1}=\frac{W_{c2}}{W_{12}+W_{c2}}W_{21} \label{2statel} \end{align} and the ratio between kinetic and prompt sticking coefficient becomes \begin{align} \frac{s_e^\text{kin}}{s_e^\text{prompt}}=\frac{W_{12}}{W_{12}+W_{c2}} \text{ .} \label{2statee} \end{align} \begin{figure} \includegraphics[width=\linewidth]{figure7.eps} \caption{(Color online) Left panel: Schematic drawing of the two-state model discussed in the main text. Middle panel: Physisorption scenario of type A. A trapped electron has a high chance to drop to the bottom; it then cycles between the two bound states until it desorbs. Right panel: Physisorption scenario of type B. Due to a relaxation bottleneck the electron is unlikely to drop to the lowest state. Transitions that the electron makes once per temporary trapping event are represented by a thin line, a bold line represents transitions made more than once, and dashed lines represent transitions that are made with a very low probability.} \label{figure7} \end{figure} The physical behavior of the two-state model therefore depends on the ratio between \(W_{12}\) and \(W_{c2}\) and thus on the potential depth and the surface temperature. Two extreme cases are possible and represent different physisorption scenarios.
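The limit \(W_{21}\ll W_{12},W_{c2}\) behind Eq. (\ref{2statel}) can likewise be checked numerically. The sketch below (again with illustrative rates of our choosing) compares the exact smallest decay rate of \(\mathbf{T}\) with the limiting expression.

```python
import numpy as np

# Illustrative rates with weak bound-bound coupling, W21 << W12, Wc2.
W12, Wc2 = 1.5, 3.0
W21 = 1e-5 * min(W12, Wc2)

T = np.array([[-W21,  W12],
              [ W21, -W12 - Wc2]])
lam0_exact = -np.max(np.linalg.eigvals(T).real)  # smallest decay rate
lam0_limit = W21 * Wc2 / (W12 + Wc2)             # limiting formula

# The exact rate approaches the limiting formula as W21 -> 0.
assert abs(lam0_exact - lam0_limit) / lam0_limit < 1e-4
```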
For \(W_{c2} \ll W_{12}\), \(\tau_e^{-1}\cong (W_{21}/W_{12}) W_{c2}\), which, using detailed balance, can be brought into the Arrhenius form \(\tau_e^{-1}=e^{-\beta(E_2-E_1)}W_{c2}\). Kinetic and prompt sticking coefficients moreover coincide in this parameter range. Hence, an electron trapped in the upper state drops to the lowest state before desorption. Desorption from the lowest state occurs then via a cascade, that is, a series of fast transitions \(1 \rightarrow 2 \rightarrow 1 \rightarrow 2 \rightarrow 1 \dots\) until eventually the transition \(2\rightarrow \text{cont.}\) removes the electron from the bound states. This physisorption scenario, which we call the type A scenario, is illustrated in the middle panel of Fig.~\ref{figure7}. Recalling that the upper level stands for a manifold of strongly coupled bound states, it resembles the physisorption of neutral particles via cascades originally proposed and investigated by Gortel and coworkers.~\cite{GKT80a} In the other limit, \(W_{12} \ll W_{c2}\), the inverse of the desorption time and the ratio between kinetic and prompt sticking coefficient are given by \(\tau_e^{-1}\cong W_{21}\) and \(s_e^\text{kin}/s_e^\text{prompt}\cong W_{12}/W_{c2}\), respectively. The physisorption scenario is now dramatically different from the one discussed before because the desorption time is solely determined by the transition probability from the lower to the upper bound state. As a result, desorption does not occur via a cascade, but as a one-way process \(1\rightarrow 2 \rightarrow \text{cont.}\), where the second transition is so fast that it basically does not affect the desorption time. Hence, in this scenario, which we call type B, the upper bound state can be considered as de-facto belonging to the continuum and desorption as basically equivalent to desorption from a single deep state. For sticking, the type B scenario moreover exhibits a relaxation bottleneck.
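The type A reduction \(\tau_e^{-1}\cong (W_{21}/W_{12})W_{c2}\) for \(W_{c2}\ll W_{12}\) can be verified in the same spirit; the rates below are illustrative, chosen only to satisfy the stated hierarchy.

```python
import numpy as np

# Type A hierarchy: W21, Wc2 both much smaller than W12 (illustrative).
W12 = 2.0
W21 = 1e-4 * W12
Wc2 = 1e-4 * W12

T = np.array([[-W21,  W12],
              [ W21, -W12 - Wc2]])
lam0 = -np.max(np.linalg.eigvals(T).real)   # exact smallest decay rate

# Type A limiting form: 1/tau_e = (W21 / W12) * Wc2.
assert abs(lam0 - (W21 / W12) * Wc2) / lam0 < 1e-3
```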
An electron trapped in the upper state is very unlikely to drop to the lowest bound state, as schematically shown in the right panel of Fig.~\ref{figure7}. Within the limits set by the model for the electron-surface interaction introduced in I and briefly recalled in section \ref{Electron-surface interaction and transition rates}, the two-state model contains the essential physics of electron physisorption. For potentials with \(\epsilon_2>-1\), where a direct one-phonon transition from the second bound state to the continuum is possible, the two-state model can be applied directly. Calculating the desorption time within the two-state model shows very good agreement with the results for graphite obtained in I. For the ratio between kinetic and prompt sticking coefficient \(s_e^\text{kin} / s_e^\text{prompt}\), which is given in the two-state model by \(\tilde{e}_2^{(0)}\), the agreement is poorer but qualitatively correct, reproducing, for instance, the temperature-dependent transition between type A and type B. For potentials with \(\epsilon_2<-1\) no one-phonon process from the second bound state to the continuum is possible and the two-state model cannot be applied directly. For physisorption of type B, however, the electron kinetics in the upper bound states is only marginally perturbed by transitions to and from the lowest bound state. The time it takes an electron to get from the second bound state into the continuum is then the desorption time of the system of the upper bound states alone, that is, the negative inverse of the smallest eigenvalue \(-\lambda_0^\text{up}\) of the matrix \(T_{nm}^\text{up}\), which is the matrix \(T_{nm}\) defined in Eq. (\ref{fullrateeqn2}) with \(n,m > 1\). In the two-state model \(\lambda_0^\text{up}\) can be regarded as an effective transition rate between the second state and the continuum.
Hence, to apply the two-state model to potentials where the second bound state does not couple by one-phonon processes to the continuum, we simply replace \(W_{c2}\) in Eqs. (\ref{2statel}) and (\ref{2statee}) by \(\lambda_0^\text{up}\). Let us finally look at the results obtained in the previous section and in I and II from the perspective of the two-state model. In a one-phonon deep potential, the transitions from the upper bound states to the lowest bound state and from the upper bound states to the continuum are enabled by one-phonon processes. In this case the downward transitions are always more likely than the upward transitions, so that one-phonon deep potentials always give rise to physisorption of type A. Hence, they show no relaxation bottleneck and prompt and kinetic sticking coefficients coincide. Two- or more-phonon deep potentials can lead to physisorption of either type A or type B, depending on the surface temperature. In this case one-phonon transitions from the upper bound states to the continuum compete with multi-phonon transitions from the upper bound states to the lowest bound state. Since a transition from the upper states to the continuum requires phonon absorption, proportional to \(n_B\), whereas a transition from the upper state to the lower one requires phonon emission, proportional to \(1+n_B\), we expect that for sufficiently low temperatures physisorption is always of type A, even for multi-phonon deep potentials. At sufficiently high temperatures, however, all two- or more-phonon deep potentials are of type B. In this case a relaxation bottleneck results in the discrepancy between prompt and kinetic sticking coefficients (see Fig.~\ref{figure4}). The electron kinetics is primarily determined by the one-phonon transitions among the upper states (see Fig.~\ref{figure5}). The temperature at which type A merges into type B depends on the potential depth and the Debye temperature.
For room temperature the three-phonon deep potentials of \(\text{Al}_2\text{O}_3\), CaO and \(\text{SiO}_2\) and the two-phonon deep potential of MgO are all of type B. For the two-phonon deep potential of graphite the crossover between type A and type B occurs at room temperature (see Fig.~5 of our previous work II). \section{Conclusions} \label{Summary} Within a simplified one-dimensional model for the polarization-induced interaction between an external electron and a dielectric surface with a sufficiently large energy gap and a sufficiently negative electron affinity, we investigated phonon-induced adsorption and desorption of an electron at a dielectric surface. The required electron energy relaxation, inducing transitions between the eigenstates of the surface potential, which we approximated by a recoil-corrected image potential, is due to the coupling to an acoustic bulk phonon. The majority of dielectrics of interest have a surface potential that is three- or more-phonon deep, that is, the energy difference between the two lowest bound states is more than two Debye energies of the bulk phonon. In our previous work,~\cite{HBF10a,HBF10b} we took multi-phonon processes into account using a T matrix approach, which is, however, only feasible for one- and two-phonon deep potentials, as is, for instance, the case for graphite. To overcome this limitation we derived in this work a non-perturbative expression for the multi-phonon transition probability arising solely from the nonlinearity of the electron-phonon interaction. In view of our previous results for one- and two-phonon deep potentials we expect this approximation to give an acceptable order-of-magnitude estimate for the multi-phonon transition probability involving more than two phonons, despite the neglect of resonant processes stemming from the iteration of the T matrix.
We presented numerical results for the electron desorption time for graphite, MgO, CaO, \(\text{Al}_2\text{O}_3\), and \(\text{SiO}_2\) and the prompt and kinetic energy-resolved and energy-averaged electron sticking coefficients for CaO, \(\text{Al}_2\text{O}_3\), and \(\text{SiO}_2\). In addition, we calculated the energy-resolved desorption flux and investigated the time evolution of the bound state occupancy after initial trapping of an electron, revealing the characteristic stages of electron physisorption: initial trapping, relaxation in the upper bound states, trickling through to the lowest bound state, and desorption. Ultrafast electron spectroscopy at surfaces with stable image states~\cite{Fauster94,HSR97,Hoefer99} should be able to resolve these different stages experimentally. Using a simple two-state model, we finally identified two vastly different scenarios of electron physisorption, depending on potential depth and surface temperature, and put our results, including those of our previous work,~\cite{HBF10a,HBF10b} into perspective. For almost all dielectrics of practical interest, only at very low temperatures, well below room temperature, does the trapped electron have a significant chance to trickle through to the lowest bound state. In this case desorption proceeds via a cascade between the first and second bound states, until the electron eventually makes a transition from the second bound state to the continuum. The shallow bound states, albeit important for adsorption, play a minor role for desorption. The second bound state is the most important one; it acts as a relay state. At room temperature, however, a relaxation bottleneck prevents the trapped electron from falling to the lowest bound state. The electron physisorption kinetics is thus dominated by fast one-phonon transitions in the upper bound states.
Only a small fraction of trapped electrons trickles through to the lowest bound state and resides there for a very long time until it makes a one-step desorbing transition to the continuum. {\it Acknowledgments.} This work was supported by the Deutsche Forschungsgemeinschaft (DFG) through the transregional collaborative research center SFB/TRR 24.
\section{Introduction} Clustering techniques are common in multivariate data analysis, data mining, machine learning, and so on. The goal of the clustering or partitioning problem is to find groups such that entities within the same group are similar and different groups are dissimilar. In the graph-partitioning problem, much attention is given to finding precise criteria for obtaining a good partition. Clustering methods that use eigenvalues and eigenvectors of matrices associated with graphs are called spectral clustering methods and are widely used in graph-partitioning problems. In particular, eigenvalues and eigenvectors of Laplacian matrices play a vital role in graph-partitioning problems. In 1973, Fiedler defined the second smallest eigenvalue $\lambda_2$ of a difference Laplacian matrix as the algebraic connectivity of a graph \cite{fielder:1973}. In 1975, he showed that a graph $G$ can be decomposed into two connected components by using only the sign structure of an eigenvector related to the second smallest eigenvalue \cite{fiedler:1975}. In 2001, Fiedler's investigation was extended by Davies using the discrete nodal domain theorem \cite{Davis:2001}. Laplacian, normalized Laplacian, and adjacency matrices with negative entries can be used with the nodal domain theorem. This theorem is useful for identifying the number of connected sign graphs of a given graph on the basis of their eigenvectors and eigenvalues. In 1984, Buser \cite{buser:1984} investigated the graph invariant quantity $\displaystyle i(G)=\min_U \frac{|\partial U|}{|U|}$, which considers the relationship between the size of a cut and the size of the separated subset $U$. He defined the isoperimetric number $i(G)$, and the optimal bisection is given by the minimum of $i(G)$.
Guattery and Miller \cite{step:1995,step:1998} considered two spectral separation algorithms that partition the vertices on the basis of the values of their corresponding entries in the second eigenvector and, in 1995, they provided some counterexamples for which each of these algorithms produces poor separators. They used an eigenvector based on the second smallest eigenvalue of a difference Laplacian matrix as well as a specified number of eigenvectors corresponding to the smallest eigenvalues. Finally, they extended it to the generalized version of spectral methods that allows for the use of more than a constant number of eigenvectors and showed that there are some graphs for which the performance of all the above spectral algorithms is poor. We follow their methods, especially in the cases of graph automorphism and the even-odd eigenvector theorem, for concrete classes of graphs such as roach graphs, double-trees, and double-tree cross paths. We prefer to use a normalized Laplacian matrix rather than a difference Laplacian matrix, and describe these properties in terms of formal graph notation. In 1997, Fan Chung \cite{Fan:1997} discussed the most important theories and properties regarding eigenvalues of normalized Laplacian matrices and their applications to graph separator problems. She considered the partitioning problem using Cheeger constants and derived fundamental relations between the eigenvalues and Cheeger constants. In 2000, Shi and Malik \cite{shi:2000} proposed a measure of disassociation, called the normalized cut, for image segmentation. This measure computes the cut cost as a fraction of the total edge connections. The normalized cut is used to minimize the disassociation between groups and maximize the association within groups. However, minimization of the normalized cut criterion is a non-deterministic polynomial-time hard (NP-hard) problem. Therefore, approximate discrete solutions are required.
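Although minimizing the normalized cut is NP-hard in general, on very small graphs the minimum can be found by exhaustive search. The following NumPy sketch computes $Ncut(A,B)=cut(A,B)/vol(A)+cut(A,B)/vol(B)$ for every bipartition of a 6-vertex test graph (two triangles joined by a single bridge edge, our own illustrative example, not one from the literature).

```python
import itertools
import numpy as np

# Two triangles joined by one bridge edge (illustrative example).
W = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    W[i, j] = W[j, i] = 1.0
d = W.sum(axis=1)

def ncut(A):
    """Normalized cut of the bipartition (A, complement of A)."""
    B = [v for v in range(6) if v not in A]
    c = sum(W[i, j] for i in A for j in B)
    return c / d[list(A)].sum() + c / d[B].sum()

best = min((frozenset(A) for r in range(1, 6)
            for A in itertools.combinations(range(6), r)), key=ncut)
assert best in ({0, 1, 2}, {3, 4, 5})   # cutting the bridge is optimal
assert abs(ncut(best) - 2 / 7) < 1e-12  # cut = 1, vol = 7 on each side
```

Exhaustive search over all $2^{n-1}-1$ bipartitions is of course only feasible for tiny $n$; this is precisely why spectral relaxations are used.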
The solution to the minimization problem of the normalized cut is given by the second smallest eigenvector of the generalized eigensystem, $(D-W)y=\lambda D y$, where $D$ is the diagonal matrix of vertex degrees and $W$ is a weighted adjacency matrix. Shi and Malik used the minimum normalized cut value as a splitting point and found a bisection using the second smallest eigenvector. They found that the eigenvectors are well separated and that this type of splitting point is very reliable. The normalized cut introduced by Shi and Malik \cite{shi:2000} is useful in several areas. This measure is of interest not only for image segmentation but also for network theories and statistics \cite{Andrew:2001,sara:2008,sara1:2008,du:2004,sound:2003}. In this study, we review the known results regarding the difference, normalized, and signless Laplacian matrices. Then, we give uniform proofs for the eigenvalues and eigenvectors of paths and cycles. Next, we analyze the minimum normalized cut from the viewpoint of connectivity of graphs and compare the results with those of the spectral bisection method. Special emphasis is given to classifying the graphs that perform poorly under spectral bisection using normalized Laplacian matrices. We use the term $Mcut(G)$ to represent the minimum normalized cut and $Lcut(G)$ to represent the normalized cut of the bipartition created by the second smallest eigenvector of the normalized Laplacian based on the sign pattern. Finding $Mcut(G)$ for a graph is NP-hard. However, we derive a formula for $Mcut(G)$ for some basic classes of graphs such as paths, cycles, complete graphs, double-trees, cycle cross paths, and some complex graphs like lollipop type graphs $LP_{n,m}$, roach type graphs $R_{n,k}$, and weighted paths $P_{n,k}$. Next, we present characteristic polynomials of the normalized Laplacian matrices ${\mathcal L}(P_{n,k})$ and ${\mathcal L}(R_{n,k})$.
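A minimal NumPy sketch of this spectral relaxation: the generalized problem $(D-W)y=\lambda Dy$ is solved via the normalized Laplacian, and the sign pattern of $y$ yields the bipartition. The 6-vertex test graph (two triangles joined by a bridge) is an illustrative choice of ours, not a graph from the paper.

```python
import numpy as np

# Two triangles joined by one bridge edge (illustrative example).
W = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    W[i, j] = W[j, i] = 1.0

d = W.sum(axis=1)
D_isqrt = np.diag(1.0 / np.sqrt(d))
L_norm = np.eye(6) - D_isqrt @ W @ D_isqrt    # normalized Laplacian

vals, vecs = np.linalg.eigh(L_norm)
y = D_isqrt @ vecs[:, 1]     # y = D^{-1/2} x solves (D - W) y = lambda D y

A = [v for v in range(6) if y[v] < 0]         # sign-based bipartition
B = [v for v in range(6) if y[v] >= 0]
cut = sum(W[i, j] for i in A for j in B)
assert cut == 1.0            # the split severs exactly the bridge edge
```

For this particular graph the sign pattern recovers the two triangles, so $Lcut(G)$ coincides with $Mcut(G)$; section 5 constructs graphs where the two differ.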
We provide counterexample graphs, based on the graph $R_{n,k}$, on which $Mcut(G)$ and $Lcut(G)$ have different values. This paper is organized as follows. In section 2, we present basic terminologies and key results related to the difference, normalized, and signless Laplacian matrices. In particular, we summarize the upper and lower bounds of the second smallest eigenvalues. We also define graphs that are used in other sections using formal notation. In section 3, we review the properties of the $Mcut(G)$ of graphs and derive formulae for the $Mcut(G)$ of some basic classes of graphs and some complex graphs such as $R_{n,k}$, $P_{n,k}$, and $LP_{n,m}$. In section 4, we consider the eigenvalues and eigenvectors of paths and cycles for the three types of Laplacian matrices introduced above. In particular, we review the eigenvalue formulae for the three types of Laplacian matrices using circulant matrices and then review an alternative proof for the eigenvalues of adjacency matrices of paths and cycles using Chebyshev polynomials. We also give concrete formulae for the characteristic polynomials of the normalized Laplacian matrices ${\mathcal L}(P_{n,k})$ and ${\mathcal L}(R_{n,k})$. In section 5, we provide counterexample graphs for which spectral techniques perform poorly compared with the normalized cut. Specifically, we find the conditions under which $Mcut(G)$ and $Lcut(G)$ have different values on the $R_{n,k}$ graph. \section{Preliminaries} An undirected graph is an ordered pair $G=(V(G),E(G))$, where $V(G)$ is a finite set whose elements are called vertices, and we write $V(G)=\{v_1,v_2,\ldots,v_n\}$. $E(G)$ is a set of two-element subsets of $V(G)$, called edges. Conventionally, we denote an edge $\{v_i,v_j\}$ by $(v_i,v_j)$ in this paper. Two vertices $v_i$ and $v_j$ of $G$ are called adjacent if $(v_i,v_j) \in E(G)$. For simplicity, we sometimes use $V$ instead of $V(G)$ and $E$ instead of $E(G)$.
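These conventions translate directly into code. The following NumPy sketch (the 4-vertex path is an arbitrary running example of ours) represents an undirected graph by a symmetric adjacency matrix, with the degree $d_i$ as the $i$-th row sum.

```python
import numpy as np

V = [0, 1, 2, 3]
E = [(0, 1), (1, 2), (2, 3)]          # the path P4

A = np.zeros((len(V), len(V)))
for i, j in E:
    A[i, j] = A[j, i] = 1.0           # undirected: symmetric entries

d = A.sum(axis=1)                     # degrees d_i
assert list(d) == [1.0, 2.0, 2.0, 1.0]
assert d.sum() == 2 * len(E)          # vol(G) = 2|E| (handshake lemma)
```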
The number of vertices in $G$ is the order of $G$ and the number of edges is the size of $G$. For a given subset $S\subseteq V$, $|S|$ denotes the cardinality of the set $S$. For a subset $A \subseteq V$, we write the set of vertices not belonging to $A$ as $V\setminus A= \{ v_i \ | \ v_i \notin A \}$. A graph of order 1 is called a trivial graph. A graph with two or more vertices is called a nontrivial graph. A graph of size 0 is called an empty graph. We assume that all graphs in this paper are finite and undirected and have edge weight 1. \begin{definition}[Adjacency matrix] Let $G=(V,E)$ be a graph and $|V|=n$. The adjacency matrix $A(G)=(a_{ij})$ of an undirected graph $G$ is an $n \times n$ matrix whose entries are given by \[ a_{ij}= \left \{ \begin{array}{ll} 1 & \mbox{ if $(v_i,v_j) \in E$,}\\ 0 & \mbox{otherwise.} \end{array} \right. \] \end{definition} \begin{definition}[Degree] The degree $d_i$ of a vertex $v_i$ of a graph $G$ is defined as $\displaystyle d_i= \sum_{j=1}^n a_{ij}$. The minimum and maximum degrees of a graph $G$ are denoted by $\delta(G)$ and $\bigtriangleup(G)$, respectively. \end{definition} \begin{definition}[Degree Matrix] The degree matrix of a graph $G$ is the diagonal matrix $D(G)=diag(d_1,d_2,\ldots,d_n)$, where $d_i$ is the degree of the vertex $v_i$. \end{definition} Note: For simplicity, we sometimes use $D$ instead of $D(G)$. \begin{definition}[Volume] The volume of a graph $G=(V,E)$, denoted by $\displaystyle vol(G)=\sum_{i =1}^{|V|} d_i$, is the sum of the degrees of the vertices in $V$. The volume of a subset $A\subset V$ is denoted by $\displaystyle vol(A)=\sum_{i \in A} d_i$. \end{definition} \begin{definition}[Edge Connectivity] The edge connectivity of a graph $G$, denoted by $\kappa'(G)$, is the minimum number of edges whose removal disconnects the graph. A graph is called $k$-edge connected if every disconnecting set has at least $k$ edges. A $1$-edge connected graph is called a connected graph.
\end{definition} \begin{definition}[Cartesian product] The Cartesian product of graphs $G$ and $H$ is denoted by $G \Box H =( V(G \Box H),E(G \Box H))$, where $\displaystyle V(G \Box H)= V(G) \times V(H)$ and $\displaystyle E(G \Box H)= \{(u_1,v_1),(u_2,v_2) \ $ $| \ u_1=u_2 \ and \ (v_1,v_2) \in E(H) $ $\ or \ v_1=v_2 \ and \ (u_1,u_2) \in E(G) \}$. \end{definition} We note that $\displaystyle G_1 \Box G_2 \cong G_2 \Box G_1 $, $\displaystyle \delta (G_1 \Box G_2) =\delta(G_1) + \delta(G_2)$, and $\displaystyle \kappa'(G \Box H) =\min \{ \kappa'(G)|V(H)|, \kappa'(H)|V(G)|, \delta(G)+\delta(H) \}$. \begin{definition}[Path] Let $G=(V,E)$ be a graph. A path in a graph is a sequence of vertices such that from each of its vertices there is an edge to the next vertex in the sequence. It is denoted by $P=(u=v_0,v_1,\ldots,v_k=v)$, where $(v_i,v_{i+1}) \in E$ for $0 \le i \le k-1$. The length of the path is the number of edges encountered in $P$. \end{definition} \begin{definition}[Shortest Path] Let $G=(V,E,w)$ be a weighted graph. Let $\it{P}$ be the set of paths from vertex $i$ to $j$, and let $\ell (p)$ denote the length of a path $p \in \it{P}$. Then $p$ is a shortest path if $\ell (p)=\min_{p' \in \it{P}} \ell (p')$. \end{definition} \begin{definition}[Distance] The distance between two vertices $i,j \in V$ of the graph $G$, denoted by $dist(i,j)$, is the length of a shortest path between vertices $i$ and $j$. \end{definition} \begin{definition}[Diameter] The diameter of a graph $G=(V,E)$ is given by $\displaystyle diam(G)=\max \{dist(i,j) \ | \ i,j \in V\}$. \end{definition} \begin{definition}[Permutation matrix] Let $G=(V,E)$ be a graph. The permutation $\phi$ defined on $V$ can be represented by a permutation matrix $\displaystyle P=(p_{ij})$, where \[ p_{ij}= \left \{ \begin{array}{cc} 1 & \mbox{if $v_i=\phi(v_j)$,}\\ 0 & \mbox{otherwise. } \end{array} \right. \] \end{definition} \begin{definition}[Automorphism] Let $G=(V,E)$ be a graph.
A bijection $\phi :V\rightarrow V$ is an automorphism of $G$ if $(v_i, v_j) \in E$ implies $(\phi(v_i),\phi(v_j)) \in E$. In other words, automorphisms of $G$ are the permutations of the vertex set $V$ that map edges onto edges. \end{definition} \begin{proposition}[Biggs \cite{bigg:1993}] Let $A(G)$ be the adjacency matrix of a graph $G=(V,E)$, and $P$ be the permutation matrix of a permutation $\phi$ defined on $V$. Then $\phi$ is an automorphism of $G$ if and only if $PA=AP$. \end{proposition}\hfill\qed \begin{definition}[Weighted graph] A weighted graph is denoted by $G=(V,E,w)$, where $w: E\rightarrow \Re$. \end{definition} \begin{definition}[Weighted adjacency matrix] The weighted adjacency matrix $W=(w_{ij})$ is defined as \[ w_{ij}= \left \{ \begin{array}{ll} w(i,j) & \mbox{if $(i,j)\in E$,} \\ 0 & \mbox{otherwise.} \end{array} \right. \] \end{definition} The degree $d_i$ of a vertex $v_i$ of a weighted graph is defined by $\displaystyle d_i =\sum_{j=1}^{n}w_{ij}$. Unweighted graphs are the special case in which all edge weights are 0 or 1. \begin{definition}[Graph cut] A subset of edges which disconnects the graph is called a graph cut. Let $G=(V,E,w)$ be a weighted graph and $W=(w_{ij})$ the weighted adjacency matrix. Then for $A,B \subset V$ with $A \cap B = \emptyset$, the graph cut is denoted by $\displaystyle cut(A,B)=\sum_{i \in A, j \in B} w_{ij}$. \end{definition} \begin{definition}[Isoperimetric number] The isoperimetric number $i(G)$ of a graph $G$ of order $n \geq 2$ is defined as $$ i(G) = \min \{ \frac{cut(S,V\setminus S)}{|S|},S\subset V, 0 < |S| \leq \frac{n}{2} \}. $$ \end{definition} \begin{definition}[Cheeger Constant-edge expansion] Let $G=(V,E)$ be a graph. For a nonempty subset $S\subset V$, define\\ $\displaystyle h_G(S)=\frac{cut(S,V\setminus S)}{\min(vol(S),vol(V\setminus S))}$. The Cheeger constant (edge expansion) $h_G$ is defined as $\displaystyle h_G=\min_S h_G(S)$.
\end{definition} \begin{definition}[Cheeger constant-vertex expansion] Let $G=(V,E)$ be a graph. For a nonempty subset $S\subset V$, define\\ $\displaystyle g_G(S)=\frac{vol(\delta S)}{\min(vol( S),vol(V\setminus S))}$, where $\displaystyle \delta S=\{v \notin S :(u,v)\in E, u \in S\}$. Then the Cheeger constant (vertex expansion) $g_G$ is defined as $\displaystyle g_G=\min_S g_G(S)$. \end{definition} \begin{definition}[Weighted difference Laplacian] The {\bf weighted difference Laplacian} $L(G)=(l_{ij})$ is defined as \[ l_{ij}= \left \{ \begin{array}{lll} d_i-w_{ii} & \mbox{if $v_i=v_j$,} \\ -w_{ij} & \mbox{if $v_i$ and $v_j$ are adjacent and $v_i \neq v_j$,} \\ 0 & \mbox{otherwise.} \end{array} \right. \] This can be written as $L(G)=D(G)-W(G)$. \end{definition} \begin{definition}[Weighted normalized Laplacian] The {\bf weighted normalized Laplacian} $\mathcal{L}(G)=(\ell_{ij})$ is defined as \[ \ell_{ij}= \left \{ \begin{array}{lll} 1-\frac{w_{jj}}{d_j} & \mbox{if $v_i=v_j$,} \\ -\frac{w_{ij}}{\sqrt{d_i d_j}} & \mbox{ if $v_i$ and $v_j$ are adjacent and $v_i \neq v_j$,} \\ 0 & \mbox{otherwise.} \end{array} \right. \] \end{definition} \begin{lemma} Let $G$ be a graph of order $n$, $A=(w_{ij})$ a weighted adjacency matrix of $G$, $\lambda$ an eigenvalue of ${\mathcal L}(G)$, and $x$ an eigenvector corresponding to $\lambda$ with $x^Tx=1$. Then, $$ \lambda = \frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^n \left(\frac{x_i}{\sqrt{d_i}}-\frac{x_j}{\sqrt{d_j}}\right)^2 w_{ij}. $$ \end{lemma} \begin{proof} Let $D$ be the degree matrix of $G$. The normalized Laplacian matrix ${\mathcal L}(G)$ is defined by $D^{-\frac{1}{2}}(D-A)D^{-\frac{1}{2}}$. Let $y$ be a vector of size $n$ and $x=D^{\frac{1}{2}} y$.
Then $x^T{\mathcal L}(G) x$ $\displaystyle = \left(D^{\frac{1}{2}}y\right)^T{\mathcal L}(G) \left(D^{\frac{1}{2}}y\right)$ $\displaystyle =y^TD^{\frac{1}{2}}{\mathcal L}(G)D^{\frac{1}{2}}y$ $=y^T(D-A)y$ $\displaystyle =\sum_{i=1}^n y_i^2d_i-\sum_{i=1}^n\sum_{j=1}^n y_iy_jw_{ij}$ $\displaystyle =\frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^n \left(y_i-y_j\right)^2 w_{ij}$ $\displaystyle =\frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^n \left(\frac{x_i}{\sqrt{d_i}}-\frac{x_j}{\sqrt{d_j}}\right)^2 w_{ij}$. Since $x$ is an eigenvector of ${\mathcal L}(G)$ corresponding to $\lambda$ and $x^T x=1$, we have $\displaystyle \lambda =\frac{x^T(\lambda x)}{x^Tx}$ $\displaystyle =\frac{x^T({\mathcal L}(G)x)}{x^Tx}$ $\displaystyle =\frac{1}{2} \sum_{i=1}^n \sum_{j=1}^n \left(\frac{x_i}{\sqrt{d_i}}-\frac{x_j}{\sqrt{d_j}}\right)^2w_{ij}$. \end{proof} Several bounds on the second eigenvalue $\lambda_2$ are known. \begin{proposition}[Mohar\cite{mohar:1989}] Let $G=(V,E)$ be a graph and $\lambda_2$ be the second smallest eigenvalue of $L(G)$. Then, $$ \frac{\lambda_2}{2} \leq i(G) \leq \sqrt{(2\triangle(G) -\lambda_2)\lambda_2}. $$ \end{proposition}\hfill\qed \begin{proposition}[Chung\cite{Fan:1997}] Let $G$ be a connected graph and $h_G$ the Cheeger constant of $G$. Then, \begin{enumerate} \item $\displaystyle \frac{2}{vol(G)} < h_G$, \item $\displaystyle 1- \sqrt{1-h^2_G} < \lambda_2 $, and \item $\displaystyle \frac{h^2_G}{2} < \lambda_2 \leq 2h_G$. \end{enumerate} \end{proposition}\hfill\qed \begin{definition}[Signless Laplacian] The {\bf weighted signless Laplacian} $SL(G)=(sl_{ij})$ is defined as \[ sl_{ij}= \left \{ \begin{array}{lll} d_i+w_{ii} & \mbox{if $v_i=v_j$,} \\ w_{ij} & \mbox{if $v_i$ and $v_j$ are adjacent and $v_i \neq v_j$,} \\ 0 & \mbox{otherwise.} \end{array} \right. \] This can be written as $SL(G)=D+W$.
\end{definition} \begin{definition}[Path graph] A path graph $P_n=(V_n,E_n)$ consists of a vertex set $\displaystyle V_n=\{v_l \ | \ l \in \mathbb{Z^+}, l \leq n \}$ and an edge set $\displaystyle E_n= \{(v_l,v_{l+1}) \ | \ 1 \leq l <n \}$. \end{definition} \begin{example} Table~\ref{tab:matrices} shows an adjacency matrix and the three Laplacian matrices discussed above for the path graph $P_4$. \begin{table} {\small \begin{tabular}{|l|l|} \hline Matrix $M$ & $M(P_4)$ \\ \hline \begin{tabular}{l} Adjacency \\ $A(P_4)$ \end{tabular} & $ \left( \begin{array}{cccc} 0 & 1 & 0 & 0 \\ 1 & 0 & 1 & 0 \\ 0 & 1 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{array} \right) $ \\ \hline \begin{tabular}{l} Difference Laplacian \\ $L(P_4)$ \end{tabular} & $\left( \begin{array}{cccc} 1 & -1 & 0 & 0 \\ -1 & 2 & -1 & 0 \\ 0 & -1 & 2 & -1 \\ 0 & 0 & -1 & 1 \end{array} \right) $ \\ \hline \begin{tabular}{l} Normalized Laplacian \\ $\mathcal{L}(P_4)$ \end{tabular} & $\left( \begin{array}{cccc} 1 & -\frac{1}{\sqrt{2}} & 0 & 0 \\ -\frac{1}{\sqrt{2}} & 1 & -\frac{1}{2} & 0 \\ 0 & -\frac{1}{2} & 1 & -\frac{1}{\sqrt{2}} \\ 0 & 0 & -\frac{1}{\sqrt{2}} & 1 \end{array} \right) $\\ \hline \begin{tabular}{l} Signless Laplacian \\ $SL(P_4)$ \end{tabular} & $\left( \begin{array}{cccc} 1 & 1 & 0 & 0 \\ 1 & 2 & 1 & 0 \\ 0 & 1 & 2 & 1 \\ 0 & 0 & 1 & 1 \end{array} \right)$\\ \hline \end{tabular} \caption{Matrices associated with graphs.} \label{tab:matrices} } \end{table} \end{example} \begin{lemma} \label{lema1} Let $G=(V,E,w)$ be a weighted graph. Then the eigenvalues of $\mathcal{L}(G)$ and $D^{-1}L(G)$ are equal. \end{lemma} \begin{proof} $\displaystyle D^{-1}L = D^{-\frac{1}{2}}\left(D^{-\frac{1}{2}} L D^{-\frac{1}{2}}\right) D^{\frac{1}{2}} = D^{-\frac{1}{2}}\,\mathcal{L}(G)\,D^{\frac{1}{2}}$. Therefore $D^{-1}L(G)$ is similar to $\mathcal{L}(G)$ and hence has the same spectrum.\end{proof} \begin{definition}[Regular graph] A graph $G=(V,E)$ is called $r$-regular if $d_i=r \ ( i=1,\ldots,|V|)$.
\end{definition} \begin{lemma} Let $\mu_i \ (i=1, \ldots, n)$ be the eigenvalues of the difference Laplacian matrix $L(G)=D(G)-A(G)$. Then for any regular graph of degree $r$, the normalized Laplacian eigenvalues are $\displaystyle \lambda_i= \frac{\mu_i}{r} \ (i=1, \ldots, n)$. \end{lemma} \begin{proof} $\displaystyle L=(D-A)=rI-A$. Then $\displaystyle \mathcal{L}(G)= D^{-1/2}LD^{-1/2}=\frac{I}{r^{1/2}}(rI-A)\frac{I}{r^{1/2}}=I-\frac{A}{r}$. Hence $\displaystyle r \mathcal{L}(G)=L(G)$, so if $\mu_i$ is an eigenvalue of $L$ then it is an eigenvalue of $r\mathcal{L}(G)$. This shows that the eigenvalues of $\mathcal{L}(G)$ are $\displaystyle \lambda_i = \frac{\mu_i}{r} \ (i=1,\ldots, n)$.\end{proof} \begin{proposition} Let $\mathcal{L}(G)$ be the normalized Laplacian matrix of a graph $G$ and $P$ the permutation matrix corresponding to an automorphism $\phi$ defined on $V$. If $U$ is an eigenvector of $\mathcal{L}(G)$ with eigenvalue $\lambda$, then $PU$ is also an eigenvector of $\mathcal{L}(G)$ with the same eigenvalue. \label{prop1} \end{proposition} \begin{proof} From the definition of an automorphism, $P^T\mathcal{L}(G)P=\mathcal{L}(G)$. Then $\mathcal{L}(G)U=\lambda U$ implies that $(P^T\mathcal{L}(G)P)U=\lambda U$. Since $PP^T=I$, we get $ \mathcal{L}(G)PU=\lambda (PU)$. Thus if $U$ is an eigenvector of $\mathcal{L}(G)$ with eigenvalue $\lambda$, then $PU$ is also an eigenvector with the same eigenvalue.\end{proof} {\bf{Remarks.}} This result holds for any matrix associated with a graph under an automorphism defined on the vertex set. \begin{definition}[Odd-even vectors] Let $G=(V,E)$ be a graph and $\phi:V\rightarrow V$ an automorphism of order 2. A vector $x$ is called an even vector if $x_i =x_{\phi(i)}$ for all $ 1\leq i \leq n$, and a vector $y$ is called an odd vector if $y_i =-y_{\phi(i)}$ for all $ 1\leq i \leq n$, where $n=|V|$. \end{definition} \begin{proposition} \label{prop2} Let $G$ be a graph, $\phi$ an automorphism of $G$ of order 2, and $P$ a permutation matrix of $\phi$.
If an eigenvalue of $\mathcal{L}(G)$ is simple then the corresponding eigenvector is odd or even with respect to $\phi$. \end{proposition} \begin{proof} Let $\lambda$ be an eigenvalue and $U$ a corresponding eigenvector of $\mathcal{L}(G)$. If $\lambda$ is simple then $ PU$ and $U$ are linearly dependent, so there exists a constant $c$ such that $PU=cU$. Since $P^2 =I$ for an automorphism of order 2, $\displaystyle IU =c PU=c^2U$ and $c= \pm 1$. Then $PU=U$ or $PU=-U$. Hence the eigenvector $U$ is odd or even with respect to $\phi $. \end{proof} \begin{definition} Let $G=(V,E)$ be a graph, $V=\{v_i \ |\ 1\le i \le n\}$ $(n=|V|)$ and $U=(u_1,u_2,\ldots,u_n) \in \Re^n$ a vector. We define three subsets of $V$ as follows: \begin{eqnarray*} V^+(U) &=& \{ v_i \in V \ | \ u_i >0 \},\\ V^-(U) &=& \{ v_i \in V \ | \ u_i <0 \}, \ and \\ V^0(U) &=& \{ v_i \in V \ | \ u_i =0 \}. \end{eqnarray*} \end{definition} \begin{lemma} \label{lemma4} Let $\mathcal{L}(G)$ be the normalized Laplacian of graph $G$ and $U=(u_i),(i=1,\ldots,n)$ the second eigenvector. If $U \neq \mathbf{0}$ then $V^+(U) \neq \emptyset$ and $V^-(U)\neq \emptyset $. \end{lemma} \begin{proof} The vector $D^{1/2}\vec{1}$ is an eigenvector corresponding to the zero eigenvalue. Since the second eigenvector $U$ is orthogonal to $D^{\frac{1}{2}}\vec{1}$, $(D^{\frac{1}{2}}\vec{1})^TU=0$ and $\sum_i \sqrt{d_i}u_i =0$. Since $d_i >0$ for all $i$ and $U \neq \mathbf{0}$, there exist indices $i \neq j$ such that $u_i>0 $ and $u_j <0$. Hence $V^+(U) \neq \emptyset$ and $V^-(U)\neq \emptyset $.\end{proof} \begin{lemma} \label{lemma5} Let $G$ be a graph with an automorphism $\phi$ of order 2. Let $U= (u_1,u_2,\ldots,u_n)$ be an eigenvector and $\displaystyle \phi (U)= (u_{\phi(1)},u_{\phi(2)},\ldots,u_{\phi(n)})$. If $U \neq \mathbf{0}$ and $\phi(U)= -U$ then $V^+(U) \neq \emptyset $ and $V^-(U)\neq \emptyset$. \end{lemma} \begin{proof} Assume $V^+(U)=\emptyset$, that is, $u_i \leq 0$ for $i=1,\ldots,n$. Since $U \neq \mathbf{0}$, there is an index $i$ with $u_i < 0$, and $\phi(U)= -U$ implies that $u_{\phi(i)} >0$.
This contradicts $V^+(U)=\emptyset $. Similarly, if we assume that $V^-(U)=\emptyset $, so that $u_i \geq 0$ for $i=1,\ldots,n$, then for an index $i$ with $u_i > 0$, $\phi(U)= -U$ implies that $u_{\phi(i)} <0$, which contradicts $V^-(U)=\emptyset $. Finally, if $u_i =0$ for $i=1,\ldots,n$, then $U = \mathbf{0}$, contradicting $U \neq \mathbf{0}$. \end{proof} \begin{proposition}[Guattery et al.\cite {step:1995}] \label{prop5} Let $P_n$ be a weighted path graph and $\mathcal{L}(P_n)$ be its normalized Laplacian matrix. For any eigenvector $X=(x_1,x_2,\ldots,x_n)$, \begin{enumerate} \item $x_1=0$ implies $X=\mathbf{0}$, \item $x_n=0$ implies $X=\mathbf{0}$ and, \item $x_i=x_{i+1}=0$ for some $i$ implies that $X=\mathbf{0}$. \end{enumerate} \end{proposition}\hfill\qed \begin{lemma}[Guattery et al.\cite{step:1995}] \label{lemasimple} For a path graph $P_n$, $\mathcal{L}(P_n)$ has $n$ simple eigenvalues. \end{lemma} \begin{proof} Let $U=(u_1,u_2,\ldots,u_n)$ and $\bar{U}=(\bar{u}_1,\bar{u}_2,\ldots,\bar{u}_n)$ be two eigenvectors of $\mathcal{L}(P_n)$ with eigenvalue $\lambda$. From Proposition~\ref{prop5}, we have $u_n \neq 0$ and $\bar{u}_n \neq 0$. Let $\displaystyle \alpha =\frac{\bar{u}_n}{u_n}$, so that $ \alpha \neq 0$. The vector $\alpha U -\bar{U}$ satisfies $\displaystyle \mathcal{L}(P_{n})(\alpha U -\bar{U})= \lambda ( \alpha U -\bar{U})$, and its $n$-th element is $\alpha u_n-\bar{u}_n=0$. By Proposition~\ref{prop5}, $\alpha U -\bar{U}=\mathbf{0}$, that is, $\displaystyle \alpha U =\bar{U}$. Thus $U$ and $\bar{U}$ are linearly dependent and hence $\lambda$ is simple.\end{proof} \begin{proposition} \label{prop3} Let $P_n$ be the path graph and $\phi$ the automorphism of order 2 defined on $V(P_n)$. Then any second eigenvector $U_2$ of $\mathcal{L}(P_n)$ is an odd vector.
\end{proposition}\hfill\qed \begin{example} Let \[ M= \left( \begin{array}{cccccc} 1 & -1 & 0 & 0 &0 & 0\\ -1 & 2 & -1 & 0& 0& 0\\ 0 & -1 & 2 & -1 & 0& 0\\ 0& 0&-1& 2&-1& 0\\ 0 &0&0&-1 &2&-1 \\ 0 & 0& 0& 0& -1 & 1 \end{array} \right) \] and \[ P= \left( \begin{array}{cccccc} 0 & 0 & 0 & 0 &0 & 1\\ 0 & 0 & 0 & 0& 1 & 0\\ 0 & 0 & 0 & 1 & 0 & 0\\ 0& 0&1& 0& 0& 0\\ 0 &1 &0&0& 0& 0\\ 1 & 0& 0& 0& 0& 0 \end{array} \right). \] If $U_M$ is a second eigenvector of $M$ then by Proposition~\ref{prop1}, $PU_M$ is also a second eigenvector. By Proposition~\ref{prop2}, $PU_M= U_M$ or $PU_M=-U_M$. By Proposition~\ref{prop3}, $U_M$ is an odd vector and $PU_M=-U_M$.\end{example} \begin{definition}[Weighted Path] For $n$ ($n \ge 1$) and $k$ ($k \ge 1$), the adjacency matrix $(P_{ij})$ of a weighted path $P_{n,k}=(V,E)$ is the $(n+k) \times (n+k)$ matrix such that \[ P_{ij} = \left \{ \begin{array}{cc} 0 & \mbox{($ i=j$ and $i \leq n$) or ($i \neq j$, $i \neq j+1$ and $j \neq i+1$),}\\ 1 & \mbox{($i=j$ and $n+1 \leq i$) or ($i = j+1$ or $j = i+1$).} \end{array} \right. \] That is, $V=\{x_1,x_2,\ldots, x_n, x_{n+1}, \ldots, x_{n+k}\}$ and $E=\{(x_i,x_j)\ |\ P_{ij}=1, 1 \le i,j \le n+k \}$. \end{definition} Let $\Sigma$ be an alphabet and $\Sigma^*$ a set of strings over $\Sigma$ including the empty string $\epsilon $. We denote the length of $w \in \Sigma^*$ by $|w|$. Let $\Sigma^{< n}=\{w \in \Sigma^* | |w| < n\}$ and $\displaystyle \Sigma_1^{< n}=\{w \in \Sigma^* | 1 \le |w| < n \}$. In this paper, we assume $\displaystyle \Sigma=\{0,1\}$. \begin{definition}[Complete binary tree] A complete binary tree $T_n=(V,E)$ of depth $n$ is defined as follows. \begin{eqnarray*} V &=&\Sigma^{< n},\\ E &=& \{ (w,wu) \ | \ w \in \Sigma^{< (n-1)}, u \in \Sigma \}. \end{eqnarray*} \end{definition} \begin{definition}[Double tree] A double tree $DT_n=(V,E)$, where $n$ is the depth of the tree, consists of two complete binary trees whose roots are joined by an edge. We define the double tree as follows.
\begin{eqnarray*} V&=&\{x(w) \ | \ w\in \Sigma^{< n}\} \cup \{y(w) \ | \ w\in \Sigma^{< n}\},\\ E_1&=&\{(x(w),x(wu))\ | \ w \in \Sigma^{< (n-1)}, u \in \Sigma\}, \\ E_2&=&\{(y(w),y(wu))\ | \ w \in \Sigma^{< (n-1)}, u \in \Sigma\}, \\ E&=& E_1 \cup E_2 \cup \{(x(\epsilon ),y(\epsilon ))\}. \end{eqnarray*} \end{definition} \begin{definition}[Cycle] A cycle $C_n=(V_n,E_n)$ consists of a vertex set $\displaystyle V_n=\{ v_l \ | \ l \in \mathbb{Z^+}, l \leq n \}$ and an edge set $\displaystyle E_n= \{(v_l,v_{l+1})\ | \ 1 \leq l < n \} \cup \{(v_1,v_n)\}$. \end{definition} \begin{definition}[Complete graph] A complete graph $K_n=(V_n,E_n)$ consists of a vertex set $\displaystyle V_n =\{ v_i \ | \ 1 \leq i \leq n \}$ and an edge set $\displaystyle E_n = \{ (v_i,v_j) \ | \ i \neq j \ and \ 1 \leq i \leq n,1 \leq j \leq n \} $. \end{definition} \begin{definition}[Graph $R_{n,k}$] The graph $R_{n,k}(n \ge 1, k \ge 2)$ is a bounded degree planar graph with a vertex set $V=V_1 \cup V_2 $ and an edge set $E=E_1 \cup E_2 \cup E_3$. \begin{eqnarray*} V_1 &=& \{ x_i \ | \ 1 \leq i \leq n+k \},\\ V_2 &=& \{ y_i \ | \ 1 \leq i \leq n+k \}, \\ E_1 &=& \{ ( x_i,x_{i+1}) \ | \ 1 \leq i \leq n+k-1 \}, \\ E_2 &=& \{ ( y_i,y_{i+1}) \ | \ 1 \leq i \leq n+k-1 \}, \\ E_3 &=& \{ ( x_i,y_{i}) \ | \ n+1 \leq i \leq n+k \}. \end{eqnarray*} \end{definition} \begin{definition}[Cycle cross paths $C_m \Box P_n$] Let $C_m$ be a cycle with $\displaystyle V= \{c_i \ | \ 1 \leq i \leq m \}$ and $\displaystyle E=\{(c_i,c_{i+1})\ | \ 1 \leq i < m \} \cup \{(c_1,c_m) \}$. Let $P_n$ be a path with $\displaystyle V= \{p_i \ | \ 1 \leq i \leq n\}$ and $\displaystyle E=\{(p_i,p_{i+1})\ | \ 1 \leq i < n \}$. Graph $C_m \Box P_n$ has $n$ copies of the cycle $C_m$, each corresponding to one vertex of the path graph. A vertex set $V $ and an edge set $E=E_1 \cup E_2 \cup E_3$ of $C_m \Box P_n$ are defined as follows.
\begin{eqnarray*} V &=& \{ (c_i,p_j) \ | \ 1 \leq i \leq m, 1 \leq j \leq n \},\\ E_1 &=& \bigcup_{i=1}^m\{ (( c_i,p_j),(c_i,p_{j+1})) \ | \ 1 \leq j \leq n-1 \}, \\ E_2 &=& \bigcup_{j=1}^n\{ (( c_i,p_j),(c_{i+1},p_j)) \ | \ 1 \leq i \leq m-1 \}, \\ E_3 &=& \{ (( c_1,p_i),(c_m,p_i)) \ | \ 1 \leq i \leq n \}, \\ E&=& E_1 \cup E_2 \cup E_3. \end{eqnarray*} \end{definition} \begin{example} Double tree $DT_3$ shown in Figure~\ref{fig:figure4-a} has a vertex set $V=\{ x(\epsilon),x(0),x(1),y(\epsilon),y(0),y(1),x(00),x(01),$ $x(10),x(11),y(00), y(01),y(10),y(11)\}$ and an edge set $E=\{(x(\epsilon),y(\epsilon)),(x(\epsilon),x(0)),(x(\epsilon),x(1)),(y(\epsilon),y(0)),(y(\epsilon),y(1)),$ $(x(0),x(00)),(x(0),x(01)),(x(1),x(10)),(x(1),x(11)),(y(0),y(00)),$ $(y(0),y(01)),(y(1),y(10)),(y(1),y(11))\}$. Graph $R_{5,5}$ shown in Figure~\ref{fig:figure4-b} has a vertex set $V =\{ x_1,x_2,x_3,x_4,x_5,x_6,x_7,$ $x_8,x_9,x_{10},y_1,y_2,y_3,y_4,y_5,y_6,y_7,y_8,y_9,y_{10} \}$ and an edge set consisting of the path edges $(x_i,x_{i+1})$ and $(y_i,y_{i+1})$ for $1 \leq i \leq 9$ together with the edges $\{(x_6,y_6),(x_7,y_7),$ $(x_8,y_8),(x_9,y_9),(x_{10},y_{10}) \}$. \end{example} \begin{figure}[htb] \begin{center} \subfigure[Double Tree $DT_3$]{\label{fig:figure4-a}\includegraphics[scale=0.5]{doubletree.pdf}} \subfigure[$R_{5,5}$]{\label{fig:figure4-b}\includegraphics[scale=0.5] {Rnk.pdf}} \caption{Double tree $DT_3$ and graph $R_{n,k}(n=5,k=5)$.} \label{fig:figure4} \end{center} \end{figure} \begin{definition}[Lollipop graph $LP_{n,m}$] \label{def:lp} The lollipop graph $LP_{n,m},(n \ge 3, m \ge 1)$ is obtained by connecting a vertex of $K_n$ to the end vertex of $P_m$ as shown in Figure~\ref{fig:figureLP6}. We start vertex numbering from the end vertex of the path. Define $LP_{n,m}=(V,E)$ as follows. \begin{eqnarray*} V&=&\{x_1,x_2,\ldots,x_m,y_1,\ldots,y_n\},\\ E&=& \{ (x_i,x_{i+1}) \ | \ 1 \leq i \leq m-1 \} \cup \{ (y_i,y_{j}) \ | \ i \neq j,\\ &~&1 \leq i \leq n, 1 \leq j \leq n \} \cup \{(x_m,y_1)\}.
\end{eqnarray*} \end{definition} \begin{figure}[htb] \begin{center} \includegraphics[scale=0.5]{cmpm1.pdf} \caption{Graph $LP_{n,m}(n=10,m=2)$} \label{fig:figureLP6} \end{center} \end{figure} \section{Minimum normalized cut of graphs} We use the term $Mcut(G)$ to represent the minimum normalized cut. In this section, we review the basic properties of $Mcut(G)$ and its relation to the connectivity and the second smallest eigenvalue of the normalized Laplacian. We derive $Mcut(G)$ of basic classes of graphs such as paths, cycles, double trees, cycle cross paths, complete graphs and other graphs such as $R_{n,k}$, $P_{n,k}$ and $LP_{n,m}$. \subsection{Properties of minimum normalized cut $Mcut(G)$} \begin{definition}[Normalized cut] Let $G=(V,E)$ be a connected graph. Let $A, B \subset V$, $A\neq \emptyset$, $B\neq \emptyset$ and $A \cap B =\emptyset $. Then the normalized cut $Ncut(A,B)$ of $G$ is defined by $$ Ncut(A,B)= cut(A,B)\left( \frac{1}{vol(A)}+ \frac{1}{vol(B)} \right). $$ \end{definition} \begin{definition}[$Mcut(G)$] Let $G=(V,E)$ be a connected graph. Then $Mcut(G)$ is defined by $$ Mcut(G)= \min \{Mcut_j(G) \mid j=1,2,\dots \}, $$ where {\small $$ Mcut_j(G)=\min \{ Ncut(A,V\setminus A) \mid cut(A,V\setminus A)=j , A \subset V\}. $$ } \end{definition} \begin{example} Graph $G=(V,E)$ shown in Figure~\ref{fig:ncutcom} has vertex set $V=\{1,2,3,4,5,6,7\}$ and edge set $ E=\{(1,2),(2,3),(3,1),$ $(3,4),(1,4),(1,5),(3,6),(6,5),(7,5),(7,6)\}$. The volume of the graph is 20. We compute the normalized cut for the following cases.\\ $Case(1) \ A=\{1,2,3,4\},B= \{5,6,7\},vol(A)=12,vol(B)=8,$ $ cut(A,B)=2$ and $Ncut(A,B)=0.417 $.\\ $Case(2) \ A=\{1,2,3\},B= \{4,5,6,7\},vol(A)=10, vol(B)=10,$ $ cut(A,B)=4$ and $Ncut(A,B)=0.8 $.\\ $Case(3) \ A=\{1,3,4,5,6,7\},B= \{2\},vol(A)=18, vol(B)=2,$ $ cut(A,B)=2$ and $Ncut(A,B)=1.1111 $. Comparing the above three cases, the minimum is attained in Case~(1), which gives $Mcut(G)$.
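For concreteness, the Case~(1) value follows directly from the definition of the normalized cut:
$$
Ncut(A,B)=cut(A,B)\left(\frac{1}{vol(A)}+\frac{1}{vol(B)}\right)=2\left(\frac{1}{12}+\frac{1}{8}\right)=\frac{5}{12}\approx 0.417.
$$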
\begin{figure}[htb] \begin{center} \subfigure[$G=(V,E)$]{\label{fig:ncutcom-a}\includegraphics[scale=0.4] {ncut.pdf}}\subfigure[$Case(1)$]{\label{fig:ncutcom-b}\includegraphics[scale=0.4] {ncut1.pdf}} \subfigure[$Case(2)$ ]{\label{fig:ncutcom-c}\includegraphics[scale=0.4] {ncut2.pdf}}\subfigure[$Case(3)$ ]{\label{fig:ncutcom-d}\includegraphics[scale=0.4]{ncut4.pdf}} \caption{Normalized cut example.} \label{fig:ncutcom} \end{center} \end{figure} \end{example} \begin{lemma} \label{minvol} Let $G=(V,E)$ be a connected graph and $A \subseteq V$ nonempty. Then $\displaystyle \left( \frac{1}{vol(A)} + \frac{1}{vol(V \setminus A)} \right )$ is minimum when $\displaystyle vol(A)=vol(V \setminus A)=\frac{vol(G)}{2}$. \end{lemma}\hfill\qed \begin{proposition} Let $G=(V,E)$ be a connected graph, $A \subseteq V$ and $\triangle(G)$ the maximum degree of $G$. Then \begin{enumerate} \item $\displaystyle cut(A,V \setminus A) \geq \kappa'(G) $, \item $\displaystyle Mcut(G) \geq \frac{4\kappa'(G)}{\triangle(G) |V|}$ and \item If $\displaystyle cut(A,V\setminus A)= \kappa'(G)$ and $\displaystyle 2vol(A)=vol(G)$ then $\displaystyle Mcut(G) = \frac{4\kappa'(G)}{vol(G)}$. \end{enumerate} \label{prop4} \end{proposition} \begin{proof} \noindent \begin{enumerate} \item Since $\kappa'(G)$ is the edge connectivity, $cut(A,V\setminus A) \geq \kappa'(G) $ for any $A \subseteq V$. \item From Lemma~\ref{minvol}, $\displaystyle \left( \frac{1}{vol(A)} + \frac{1}{vol(V\setminus A)} \right)$ is minimized when $\displaystyle vol(A)=vol(V\setminus A)=\frac{vol(G)}{2}$, in which case it equals $\displaystyle \frac{4}{vol(G)}$; hence $\displaystyle \left( \frac{1}{vol(A)} + \frac{1}{vol(V\setminus A)} \right) \geq \frac{4}{vol(G)}$. Since $\displaystyle vol(G)= \sum_{i=1}^{|V|} d_{i} \leq |V|\triangle(G) $, $\displaystyle Ncut(A,V\setminus A) = cut(A,V\setminus A)\left( \frac{1}{vol(A)} + \frac{1}{vol(V\setminus A)} \right) \geq \frac{4\kappa'(G)}{ \triangle(G)|V|}$.
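As an illustration of this bound, consider the cycle $C_n$: here $\kappa'(C_n)=2$ and $\triangle(C_n)=2$, so the bound reads $\displaystyle Mcut(C_n) \geq \frac{4\cdot 2}{2n}=\frac{4}{n}$, and Theorem~\ref{propmcut} below shows that this value is attained when $n$ is even.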
\item If $cut(A,V\setminus A) = \kappa'(G) $ and $\displaystyle 2vol(A)=vol(G)$ then it is clear that, $\displaystyle Mcut(G)= \frac{4\kappa'(G)}{vol(G)}$. \end{enumerate} \end{proof} \begin{proposition}[Luxburg \cite{luxburg:2007}] \label{mcutlema} Let $G=(V,E)$ be a connected graph and $A \subset V$. Let $\lambda_1 \leq \lambda_2 \leq \cdots\leq \lambda_n$ be eigenvalues of $\mathcal{L}(G)$. Then $\displaystyle Mcut(G) \geq \lambda_2(\mathcal{L}(G))$. \end{proposition} \begin{proof} Let $V= \{1,2,\ldots,n\}$. Let $A\subset V$, $g=(g_1,\ldots,g_n) \in \Re^n $ an eigenvector and $g=D^{1/2}f$. Define $f_i$ as \[ f_i= \left \{ \begin{array}{ll} a & \mbox{if $i \in A$,}\\ -b & \mbox{if $i \notin A$.}\end{array} \right. \] Then \begin{eqnarray*} \frac{\sum_{i=1}^n \sum_{j=1}^n (f_i-f_j)^2 w_{ij}}{2 \sum_{i=1}^n f_i^2 d_i} &=& \frac{2cut(A,(V \setminus A))(a+b)^2}{2(a^2vol(A)+b^2vol(V \setminus A))}. \end{eqnarray*} Let this as $X$. Now let $a=vol(V \setminus A)$ and $b=vol(A)$. Then we have \begin{eqnarray*} X &=& \frac{cut(A,(V \setminus A))(vol(G))^2}{vol(V \setminus A)^2vol(A)+vol(A)^2vol(V \setminus A)}\\ &=& \frac{cut(A,(V \setminus A))(vol(G))^2}{vol(V \setminus A)vol(A)(vol(A)+vol(V \setminus A)}\\ &=& \frac{cut(A,(V \setminus A))(vol(G))}{vol(V \setminus A)vol(A)}\\ &=& cut(A,(V \setminus A))\left(\frac{1}{vol(V \setminus A)}+\frac{1}{vol(A)}\right)\\ &=& Ncut(A,V \setminus A). \end{eqnarray*} With the choice of $f,a,b$ we have, $\displaystyle (D\vec{1})^T f= \sum_{i=1}^n d_i f_i = \sum_{i \in A} d_i a - \sum_{i \notin A} d_i b =0$. So $f \perp D \vec{1}$. Since $\displaystyle \lambda_2= \inf _{f \perp D \vec{1}} \frac{\sum_{i=1}^n \sum_{j=1}^n (f_i-f_j)^2 w_{ij}}{2 \sum_{i=1}^n f_i^2 d_i}$, we have $\displaystyle \lambda_2 \leq \frac{\sum_{i=1}^n \sum_{j=1}^n (f_i-f_j)^2 w_{ij}}{2 \sum_{i=1}^n f_i^2 d_i}= \min_A Ncut(A,(V \setminus A))=Mcut(G)$.\end{proof} \begin{lemma}\label{lemma:mcut4} Let $G=(V,E)$ be a connected graph, $A$ a nonempty subset of $V$. 
Then {\small \begin{enumerate} \item[(i)] $ \displaystyle Ncut(A,V\setminus A)=\frac{4 cut(A,V\setminus A) \cdot vol(V)}{(vol(V))^2 - (vol(A)-vol(V\setminus A))^2} $, and \item[(ii)] $\displaystyle Mcut_j(G)= \frac{4 j vol(V)}{ vol(V)^2 - X_j }$, where \\ $X_j=\min \{(vol(A)-vol(V\setminus A))^2 \ | \ cut(A,V\setminus A)=j, \ A \subset V\}$. \end{enumerate} } \end{lemma} \begin{proof} \noindent (i) Let $s=vol(V)$, $j=cut(A,V\setminus A)$, $s_A=vol(A)$ and $s_{\bar{A}}=vol(V\setminus A)$. Since $s=s_A+s_{\bar{A}}$, we have $s_A-s_{\bar{A}}$ $=2s_A-s$ and $s^2 - (s_A-s_{\bar{A}})^2$ $=4 s_A s_{\bar{A}}$. \begin{eqnarray*} Ncut(A,V\setminus A) &=& j \cdot \left(\frac{1}{s_A}+\frac{1}{s_{\bar{A}}}\right) \\ &=& \frac{j(s_A+s_{\bar{A}})}{s_A s_{\bar{A}}} = \frac{j s}{s_{A}s_{\bar{A}}} \\ &=& \frac{4 j s}{s^2-(s_{A}-s_{\bar{A}})^2} \end{eqnarray*} \noindent (ii) This follows from the definition of $Mcut_j(G)$ and (i). \end{proof} \begin{lemma} \label{genlem4} Let $G=(V,E)$ be a graph. If there exists a nonempty subset $A \subset V$ such that $$ \left \vert vol(A)-vol(V\setminus A) \right \vert \leq \frac{vol(V)}{\sqrt{cut(A,V\setminus A)+1}}, $$ then $$ \displaystyle Mcut(G)=\min\{ Mcut_j(G) \mid j=1,2,\ldots,cut(A,V\setminus A)\}. $$ \end{lemma} \begin{proof} Let $j=cut(A,V\setminus A)$, $a=|vol(A)-vol(V\setminus A)|$, $s=vol(V)$, $s_A=vol(A)$ and $s_{\bar{A}}=vol(V\setminus A)$. Since $a^2 \le \frac{s^2}{j+1}$ and $Ncut(A,V\setminus A)=\frac{4 j s}{s^2 - a^2}$ by Lemma~\ref{lemma:mcut4}, we have $s^2-(j+1) a^2 \ge 0$ and \begin{eqnarray*} && \frac{4(j+1)}{s} - Ncut(A,V\setminus A) \\ &=& \frac{4(j+1)}{s} - \frac{4 j s}{s^2-a^2} \\ &=& \frac{4(j+1)(s^2-a^2)-4 j s^2}{ s (s^2-a^2)} \\ &=& \frac{4(s^2-(j+1)a^2)}{s(s^2-a^2)} \ge 0. \end{eqnarray*} Let $B$ be a subset of $V$, $s_B=vol(B)$, $s_{\bar{B}}=vol(V\setminus B)$ and $j_B=cut(B,V\setminus B)$. If $j_B \ge j+1$ then we have the following using Lemma~\ref{lemma:mcut4}.
\begin{eqnarray*} Ncut(B,V\setminus B) &=& cut(B,V\setminus B)\left(\frac{1}{vol(B)}+\frac{1}{vol(V\setminus B)}\right) \\ &=& \frac{4 j_B s}{s^2-(s_B-s_{\bar{B}})^2} \\ &\ge& \frac{4 (j+1) s}{s^2-(s_B-s_{\bar{B}})^2} \\ &\ge& \frac{4 (j+1) s}{s^2} = \frac{4 (j+1)}{s} \\ &\ge& Ncut(A,V\setminus A) \ge Mcut_j(G). \end{eqnarray*} So we have $Mcut_{j'}(G) \ge Mcut_j(G)$ for any $j' > j$. \end{proof} \begin{lemma} \label{gencut1} Let $G=(V,E)$ be a graph with $vol(G)\geq 9$. If there exists a subset $A \subset V$ such that $cut(A,V\setminus A)=1$ and $|vol(A)-vol(G)/2 | \leq 3$, then $$ Mcut(G)=Mcut_1(G). $$ \end{lemma} \begin{proof} Let $s=vol(G)$, $s_A=vol(A)$ and $s_{\bar{A}}=vol(V\setminus A)$. Since $\left\vert s_A - \frac{s}{2} \right\vert \le 3$ and $s=s_A+s_{\bar{A}}$, we have $\left\vert s_A - s_{\bar{A}} \right\vert \le 6$. Since $\sqrt{1+1}\left\vert s_A - s_{\bar{A}} \right\vert $ $ \le 6\sqrt{2}$ $< 9$ $ \le s$, we have $Mcut(G)=Mcut_1(G)$ by the Lemma~\ref{genlem4}. \end{proof} \begin{lemma} \label{genmcut2} Let $G=(V,E)$ be a graph and $vol(G) \geq 11$. If there exists a set $A \subset V$ such that $cut(A,V\setminus A)=2$ and $|vol(A)-vol(G)/2| \leq 3$, then $$ Mcut(G)=\min(Mcut_1(G), Mcut_2(G)). $$ \end{lemma} \begin{proof} Since $\left\vert vol(A) - vol(G)/2 \right\vert \leq 3$ and $vol(A)+vol(V\setminus A)=vol(G)$, we have $\left\vert vol(A) - vol(V\setminus A) \right\vert \leq 6$ and $\sqrt{3}\left\vert vol(A) - vol(V\setminus A)\right\vert$ $\leq 6 \sqrt{3}$ $< 11$. So we have $Mcut(G)=\min(Mcut_1(G), Mcut_2(G))$ by the Lemma~\ref{genlem4}. \end{proof} \begin{lemma} Let $G=(V,E)$ be a graph with $vol(G)\geq 11$. Suppose there exists a subset $A\subset V$ such that $cut(A,V\setminus A)=2$ and $|vol(A)-vol(G)/2| \leq 3$. If there exists no subset $B\subset V$ such that $cut(B,V\setminus B)=1$ and $\displaystyle \left\vert vol(B)-vol(G)/2 \right\vert\leq \frac{\sqrt{36+(vol(G))^2}}{2 \sqrt{2}}$, then $$ Mcut(G)=Mcut_2(G). 
$$ \end{lemma} \begin{proof} Let $s=vol(G)$, $s_A=vol(A)$ and $s_{\bar{A}}=vol(V\setminus A)$. Since $\left\vert s_{A} - s/2 \right\vert \leq 3$, we have $\left\vert s_{A} -s_{\bar{A}} \right\vert \leq 6$ and \begin{eqnarray*} Mcut_2(G) & \leq & Ncut(A,V\setminus A) \\ &=& \frac{8s}{s^2 - (s_A-s_{\bar{A}})^2} \\ &\leq& \frac{8s}{s^2 -36}. \end{eqnarray*} Let $B \subset V$ with $cut(B,V\setminus B)=1$, $s_B=vol(B)$ and $s_{\bar{B}}=vol(V\setminus B)$. If $B$ exists, then $\left\vert s_B - s/2 \right\vert > \frac{\sqrt{s^2+36}}{2\sqrt{2}}$, by the assumption. So we have $\left\vert s_B - s_{\bar{B}} \right\vert > \frac{\sqrt{s^2+36}}{\sqrt{2}}$ and \begin{eqnarray*} Ncut(B,V\setminus B) &=& \frac{4s}{s^2 - (s_B-s_{\bar{B}})^2} \\ &>& \frac{4s}{s^2-\frac{s^2+36}{2}} \\ &=& \frac{8s}{s^2 -36} \geq Mcut_2(G). \end{eqnarray*} That is, $Mcut(G)=Mcut_2(G)$ by Lemma~\ref{genmcut2}. \end{proof} Next we derive formulae for the minimum normalized cut $Mcut(G)$ of some elementary graphs. \subsection{$Mcut(G)$ of basic classes of graphs} \begin{theorem} \label{propmcut} Let $G=(V,E)$ be a graph. \begin{enumerate} \item If $G$ is a $d$-regular graph with $|V|=n$ and $G \neq K_n$ $(n>3)$, then \[ Mcut(G) \geq \left \{ \begin{array}{ll} \frac{4}{n} & \mbox{if $n$ is even,}\\ \frac{4n}{(n^2-1)} & \mbox{if $n$ is odd.} \end{array} \right. \] \item For the cycle $C_{n}$ ($n \ge 3$), \[ Mcut(C_n)= \left \{ \begin{array}{ll} \frac{4}{n} & \mbox{if $n$ is even,}\\ \frac{4n}{(n^2-1)} & \mbox{if $n$ is odd.} \end{array} \right. \] This can be written as $\displaystyle Mcut(C_n) = \frac{n}{\lfloor \frac{n}{2} \rfloor \lceil \frac{n}{2} \rceil}.$ \item For the complete graph $K_n$, \begin{eqnarray*} Mcut(K_n)& = & \frac{n}{n-1} \\ &=& \lambda_2. \end{eqnarray*} \item For the path graph $P_n$ ($n \ge 2$), \[ Mcut(P_n) = \left \{ \begin{array}{ll} \frac{2}{n-1} & \mbox{if $n$ is even,}\\ \frac{2(n-1)}{n(n-2)} & \mbox{if $n$ is odd.} \end{array} \right.
\]\\ This can be written as $$ Mcut(P_n)= \frac{2n-2}{4\lfloor \frac{n}{2} \rfloor \lceil \frac{n}{2} \rceil -2n+1}. $$ \item For the cycle cross paths $G = C_m \Box P_n$, \[ Mcut(C_m \Box P_n) = \left \{ \begin{array}{cc} \frac{2(2n-1)}{16 \lfloor{\frac{n}{2}}\rfloor \lceil{\frac{n}{2}}\rceil-4n+1} & \mbox{$2n > m$,}\\ \frac{nm}{(2n-1)\lfloor{\frac{m}{2}}\rfloor \lceil{\frac{m}{2}}\rceil} & \mbox{$2n \leq m$.} \end{array} \right. \] \item For the double tree $DT_n$ with depth $n$, $\displaystyle Mcut(DT_n)=\frac{2}{2^{n+1} -3}$. \end{enumerate} \end{theorem} \begin{proof} \begin{enumerate} \item For a regular graph of degree $d$, $\displaystyle \kappa'(G)=\triangle(G) =\delta(G) =d$. For $A \subset V$, $\displaystyle Ncut(A,V\setminus A) \geq \kappa'(G)\left( \frac{1}{d|A|}+\frac{1}{d|V \setminus A|}\right) = \frac{|V|}{|A||V\setminus A|}$. If $cut(A,V\setminus A)=\kappa'(G)$ then we have $\displaystyle Ncut(A,V\setminus A)=\frac{|V|}{|A||V\setminus A|}$. $Ncut(A,V\setminus A)$ is minimum when $|A|=|V\setminus A|$ by Lemma~\ref{minvol}. If $|V|$ is even then $\displaystyle Mcut(G) \geq \frac{4}{|V|} =\frac{4}{n}$ by Lemma~\ref{minvol}. If $|V|$ is odd then we can write $|V|$ as $\displaystyle |V|=\frac{|V|-1}{2}+\frac{|V|+1}{2}$, with $\displaystyle -1 \leq |A|-|V \setminus A| \leq 1$. Then $\displaystyle Ncut(A,V\setminus A) \geq \kappa'(G)\left( \frac{2}{d(|V|-1)}+\frac{2}{d(|V|+1)}\right)= \frac{4|V|}{(|V|+1)(|V|-1)}$. Hence $\displaystyle Mcut(G) \geq \frac{4|V|}{(|V|+1)(|V|-1)}=\frac{4n}{n^2-1}$. \hfill\qed\\ \item Let $A_k=\{x_i \ |\ i \le k\}$ ($k=1,\ldots,n-1$). We note $vol(C_n)=2n$, $vol(A_k)=2k$, $vol(V\setminus A_k)=2n-2k$, $vol(A_k)-vol(V\setminus A_k)=4k-2n$ and $$ Ncut(A_k,V\setminus A_k)=\frac{4n}{n^2-(2k-n)^2}. $$ If $n$ is even then $Ncut(A_{\frac{n}{2}},V\setminus A_{\frac{n}{2}})$ $=\frac{4}{n}$ is the minimum of $Ncut(A_k,V\setminus A_k)$.
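For instance, for $C_6$ the set $A_3=\{x_1,x_2,x_3\}$ gives $cut(A_3,V\setminus A_3)=2$ and $vol(A_3)=vol(V\setminus A_3)=6$, and hence
$$
Ncut(A_3,V\setminus A_3)=2\left(\frac{1}{6}+\frac{1}{6}\right)=\frac{2}{3}=\frac{4}{6}.
$$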
If $n$ is odd then $Ncut(A_{\frac{n+1}{2}},V\setminus A_{\frac{n+1}{2}})$ $=Ncut(A_{\frac{n-1}{2}},V\setminus A_{\frac{n-1}{2}})$ $=\frac{4n}{n^2-1}$ is the minimum of $Ncut(A_k,V\setminus A_k)$. Since $vol(A_{\frac{n}{2}})-vol(V\setminus A_{\frac{n}{2}})=0$, $vol(A_{\frac{n-1}{2}})-vol(V\setminus A_{\frac{n-1}{2}})=-2$ and $\displaystyle \frac{vol(V)}{\sqrt{cut(A_k,V\setminus A_k)+1}}$ $= \frac{2n}{\sqrt{3}}$ $\ge \frac{6}{\sqrt{3}} > 2$, we have $Mcut(C_n)=Mcut_2(C_n)$ by Lemma~\ref{genlem4}. We note that for any nonempty subset $A \subset V$ with $cut(A,V\setminus A)=2$, there exists a $k$ such that $Ncut(A,V\setminus A)=Ncut(A_k,V\setminus A_k)$, and $\kappa'(C_n)=2$. For even $n$, $\displaystyle \frac{n}{2}=\lfloor \frac{n}{2} \rfloor=\lceil \frac{n}{2} \rceil$ and for odd $n$, $\displaystyle \frac{(n-1)}{2}=\lfloor \frac{n}{2} \rfloor$ and $\displaystyle \frac{(n+1)}{2}=\lceil\frac{n}{2} \rceil$. Combining the odd and even cases we can write $Mcut(C_n)$ as $\displaystyle Mcut(C_n) = \frac{4n}{4\lfloor \frac{n}{2} \rfloor \lceil \frac{n}{2} \rceil}=\frac{n}{\lfloor \frac{n}{2} \rfloor \lceil \frac{n}{2} \rceil}$. \hfill\qed \item For a complete graph $K_n$, $|V|=n$, $\kappa'(K_n)=n-1$ and $vol(K_n)=n(n-1)$. For any nonempty subset $A \subset V $, we have $vol(A)=|A|(n-1)$ and $\displaystyle cut(A,(V\setminus A))=|A|(n-|A|)$. Then $\displaystyle Ncut(A,V\setminus A)= |A|(n-|A|) \left( \frac{1}{|A|(n-1)}+\frac{1}{(n-|A|)(n-1)} \right) =\frac{n}{n-1}$ for every such $A$, and hence $\displaystyle Mcut(K_n)=\frac{n}{n-1}$.\hfill\qed \item Let $A_k=\{x_i \ |\ i \le k\}$ ($k=1,\ldots,n-1$). We note that $vol(P_n)=2n-2$, $vol(A_k)=2k-1$, $vol(V\setminus A_k)=2n-2k-1$, $vol(A_k)-vol(V\setminus A_k)=4k-2n$ and $$ Ncut(A_k,V\setminus A_k)=\frac{2(n-1)}{(n-1)^2-(2k-n)^2}. $$ If $n$ is even then $Ncut(A_{\frac{n}{2}},V\setminus A_{\frac{n}{2}})$ $=\frac{2}{n-1}$ is the minimum of $Ncut(A_k,V\setminus A_k)$. If $n$ is odd then $Ncut(A_{\frac{n+1}{2}},V\setminus A_{\frac{n+1}{2}})$ $=Ncut(A_{\frac{n-1}{2}},V\setminus A_{\frac{n-1}{2}})$ $=\frac{2(n-1)}{(n-1)^2-1}$ $=\frac{2(n-1)}{n(n-2)}$ is the minimum of $Ncut(A_k,V\setminus A_k)$.
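For instance, for $P_5$ (so $n=5$) the set $A_2=\{x_1,x_2\}$ gives $cut(A_2,V\setminus A_2)=1$, $vol(A_2)=3$ and $vol(V\setminus A_2)=5$, and hence
$$
Ncut(A_2,V\setminus A_2)=\frac{1}{3}+\frac{1}{5}=\frac{8}{15}=\frac{2(n-1)}{n(n-2)}.
$$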
Since $vol(A_{\frac{n}{2}})-vol(V\setminus A_{\frac{n}{2}})=0$, $vol(A_{\frac{n-1}{2}})-vol(V\setminus A_{\frac{n-1}{2}})=-2$ and $\displaystyle \frac{vol(V)}{\sqrt{cut(A_k,V\setminus A_k)+1}}$ $= \frac{2n-2}{\sqrt{2}}$ $\ge 2$ for $n \ge 3$, we have $Mcut(P_n)=Mcut_1(P_n)$ by Lemma~\ref{genlem4}. \hfill\qed \item The cycle cross path $G=C_m \Box P_n ( n \geq 2, m\geq 3)$ is a graph which has $n$ copies of the cycle $C_m$, each corresponding to one vertex of $P_n$. $\displaystyle \kappa'(C_m \Box P_n) =\min \{ \kappa'(C_m)|V(P_n)|, \kappa'(P_n)|V(C_m)|, \delta(C_m)+\delta(P_n) \}= \delta(C_m)+\delta(P_n) =3$.\\ \noindent{\bf{Case (i)}} Let $\displaystyle A_1=\{(c_i,p_j)| 1 \leq j \leq \lfloor\frac{n}{2} \rfloor, 1 \leq i \leq m \}$ and $\displaystyle V\setminus A_1=\{ (c_i,p_j) | \lfloor\frac{n}{2} \rfloor+1 \leq j \leq n, 1 \leq i \leq m \}$. We note that $\displaystyle vol(A_1)=\lfloor\frac{n}{2} \rfloor(vol(C_m)+2m) -m$, $\displaystyle vol(V \setminus A_1)=\lceil\frac{n}{2} \rceil(vol(C_m)+2m) -m$ and $cut(A_1,V\setminus A_1)=m$. Then $\displaystyle Ncut(A_1,V \setminus A_1) =\frac{m(4mn-2m)}{(\lfloor \frac{n}{2} \rfloor 4m-m) ( \lceil \frac{n}{2} \rceil 4m-m)} =\frac{2(2n-1)}{16 \lfloor\frac{n}{2} \rfloor \lceil \frac{n}{2} \rceil -4n+1}$. When $n$ is even, $\displaystyle Ncut(A_1,V \setminus A_1)=\frac{2}{2n-1}$. When $n$ is odd, $\displaystyle Ncut(A_1,V \setminus A_1)=\frac{2(2n-1)}{(2n-3)(2n+1)}$.\\ {\bf{Case (ii)}} Let $\displaystyle A_2=\{(c_i,p_j) \ | \ 1 \leq i \leq \lfloor\frac{m}{2} \rfloor, 1 \leq j \leq n \}$ and $ \displaystyle (V\setminus A_2)=\{(c_i,p_j) \ | \ \lfloor\frac{m}{2} \rfloor+1 \leq i \leq m, 1 \leq j \leq n \}$. We note that $\displaystyle vol(A_2)= n vol(C_{\lfloor\frac{m}{2}\rfloor}) + 2 \lfloor\frac{m}{2} \rfloor(n-1) =2(2n-1)\lfloor\frac{m}{2} \rfloor$ and $\displaystyle vol(V\setminus A_2)=2(2n-1)\lceil\frac{m}{2} \rceil$. In this case the cut runs horizontally through the cycles, and we have $cut (A_2,V\setminus A_2)=2n$.
Hence $\displaystyle Ncut(A_2,V \setminus A_2)= \frac{nm}{(2n-1)\lfloor\frac{m}{2} \rfloor \lceil\frac{m}{2} \rceil}$. When $m$ is odd this equals $\displaystyle \frac{4nm}{(2n-1)(m^2-1)}$, and when $m$ is even it equals $\displaystyle \frac{4n}{(2n-1)m}$.\\ \noindent{\bf{Case (iii)}} Let $\displaystyle B_k=\{(c_i,p_1)| 1 \leq i \leq k \}$ ($1 \le k < m$). We note that $cut(B_k,V\setminus B_k)=k+2$ and $vol(B_k)=3k$. Since $Ncut(A_1,V\setminus A_1)-Ncut(B_k,V\setminus B_k)=$ {\tiny$ \frac{2 (-1+2 n) \left(9 k^2+k m \left(3-16 n+4 n^2\right)+2 m \left(-3-4 n+4 n^2\right)\right)}{3 k (3 k+m (2-4 n)) \left(-3-4 n+4 n^2\right)} $}, we can verify $Ncut(A_1,V\setminus A_1)$ $\le Ncut(B_k,V\setminus B_k)$ for any $k$ and $m \le 2n$. \noindent{\bf{Case (iv)}} Let $\displaystyle C_k=\{(c_1,p_j)| 1 \leq j \leq k \}$ ($1 \le k < n$). We note that $cut(C_k,V\setminus C_k)=2k+1$ and $vol(C_k)=4k-1$. Since $Ncut(C_k,V\setminus C_k)=$ {\tiny $-\frac{2 (1+2 k) m (-1+2 n)}{(-1+4 k) (-1+4 k+m (2-4 n))}$ }, we can verify $Ncut(A_2,V\setminus A_2)$ $\le Ncut(C_k,V\setminus C_k)$ for any $k$ and $2n \le m$. Now we compare case (i) with case (ii). For the case of $2n \ge m+1$, we have $\frac{vol(G)}{\sqrt{cut(A_1, V\setminus A_1)+1}}$ $=\frac{2m (2n-1)}{\sqrt{m+1}}$ $\ge \frac{2m (2n-1)}{\sqrt{2n}}$ $= 2m \sqrt{\frac{4n^2-4n+1}{2n}}$ $\ge 2m \sqrt{2n -2}$ $\ge 4m $ $\ge \vert vol(A_1) - vol(V\setminus A_1) \vert$ and $Mcut_{2n}(G) > Mcut_m(G)$. So $Mcut(G)=Ncut(A_1,V\setminus A_1)$. If $2n \le m$, then we have $\frac{vol(G)}{\sqrt{cut(A_2, V\setminus A_2)+1}}=$ $\frac{2m (2n-1)}{\sqrt{2n+1}}$ $\ge\frac{4n (2n-1)}{\sqrt{2n+1}}$ $=2(2n-1)\sqrt{\frac{4n^2}{1+2n}}$ $=2(2n-1)\sqrt{2n-\frac{2n}{2n+1}}$ $\ge 2(2n-1)\sqrt{2n-1}$ $\ge 2(2n-1)$ $\ge \vert vol(A_2) - vol(V\setminus A_2) \vert$ and $Mcut_{m}(G) > Mcut_{2n}(G)$. So $Mcut(G)=Ncut(A_2,V\setminus A_2)$. \hfill\qed \item The size of a tree is $|T_n|=1+2+\cdots+2^{n-1}=2^n-1$ and the size of a double tree is $|DT_n|=2|T_n|=2^{n+1}-2$.
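As a quick check of these counts, for $DT_3$ this gives $|DT_3|=2^{4}-2=14$, matching the fourteen vertices listed for $DT_3$ in the earlier example.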
The volume of a tree is $vol(T_n)=2 vol(T_{n-1})+4$, which can be written as $vol(T_n)+4 = 2(vol(T_{n-1})+4)=2^2 (vol(T_{n-2})+4)=\cdots=2^{n-1} (vol(T_1)+4)=2^{n+1}$. Therefore the volume of a tree is $vol(T_n)=2^{n+1}-4$ and the volume of a double tree is $\displaystyle vol(DT_n)= 2vol(T_n) +2= 2^{n+2}-6$. Let $A_1=\{x(w) \ | \ w \in \Sigma^{< n} \}$ and $ V \setminus A_1 =\{ y(w) \ | \ w \in \Sigma^{< n} \}$. Then we have $vol(A_1) = vol(T_n)+1$ $= 2^{n+1} -3$, $vol(V\setminus A_1) = 2^{n+1} -3$ and $cut(A_1,V\setminus A_1)= 1$. Therefore $Ncut(A_1,V\setminus A_1) =$ $\frac{2}{(vol(T_n)+1)}$ $= \frac{2}{2^{n+1} -3}$ $=\frac{4}{vol(DT_n)}$. Here $\kappa'(DT_n)=1$ and $2vol(A_1)=vol(DT_n)$. Then from Proposition~\ref{prop4}, $\displaystyle Mcut(DT_n) = \frac{2}{2^{n+1} -3}$. \end{enumerate} \end{proof} \subsection{$Mcut$ of roach type graphs $R_{n,k}$} \label{subsec:roach} Next, we consider the graph $R_{n,k}$ and derive a formula for $Mcut(R_{n,k})$ based on $n,k$. \begin{theorem} \label{propmcutg} For $R_{n,k} (n\geq 1, k>1)$, $Mcut(R_{n,k})$ is given by \[ \left\{ \begin{array}{cc} \frac{2}{3} & \mbox{$(n=1,k=2)$,}\\ \\ \frac{4}{-2+3 k+2 n} & \mbox{$(*_1 \wedge (k \geq 4) \wedge$}\\ & \mbox{$ (n < K_1))$,}\\ \\ \frac{4 (-2+3 k+2 n)}{(-5+3 k+2 n) (1+3 k+2 n)}& \mbox{$(*_4 \wedge ( k \geq 4) \wedge$}\\ &\mbox{$ ( n < K_4))$,}\\ \\ \frac{4 (-2+3 k+2 n)}{(-4+3 k+2 n)(3 k+2 n)} &\mbox{$(*_3 \wedge (k \geq 4) \wedge$}\\ &\mbox{$ ( n < K_3))$,}\\ \\ \frac{4 (-2+3 k+2 n)}{(-3+3 k+2 n)(-1+3 k+2 n)}& \mbox{$(*_2\wedge (k \geq 4) \wedge $}\\ &\mbox{$ ( n < K_2)) \vee (n=1,k=3)$}\\ &\mbox{$ \vee (n=2,k=3)$,}\\ \\ \frac{6k+4n-4}{(2n-1)(6k+2n-3)} & \mbox{$((k \geq 4)$}\\ &\mbox{$ \wedge ((*_1 \wedge (K_1 \leq n))\vee$}\\ &\mbox{$(*_4 \wedge ( K_4 \leq n))\vee$}\\ &\mbox{$ (*_3 \wedge (K_3 \leq n))\vee $}\\ &\mbox{$(*_2 \wedge (K_2\leq n)) )) \vee$}\\ &\mbox{$ (k=2 \wedge (n\geq 2))\vee $}\\ &\mbox{$ (k=3 \wedge (n \geq 3))$}, \end{array} \right.
\] where \begin{eqnarray*} *_1 &=& ((3 \mid n) \wedge (2 \mid k)),\\ *_2 &=& ((3 \nmid n) \wedge (2 \nmid k)),\\ *_3 &=& ((3 \nmid n) \wedge ( 2 \mid k)),\\ *_4 &=&((3 \mid n) \wedge (2 \nmid k)), \\ K_1 &=& 1-\frac{1}{\sqrt{2}}-\frac{3 k}{2}+\frac{3 k}{\sqrt{2}},\\ K_2 & =& 1-\frac{3k}{2}+\frac{\sqrt{1-12 k+18 k^2}}{2},\\ K_3 & =& 1-\frac{3 k}{2}+\frac{\sqrt{-1-6 k+9 k^2}}{\sqrt{2}},\\ K_4 &=& 1-\frac{3 k}{2}+\frac{\sqrt{-7-12 k+18 k^2}}{2}. \end{eqnarray*} \end{theorem} \begin{proof} Let $\displaystyle V(R_{n,k})= \{ x_i \ | \ 1 \leq i \leq n+k \}\cup \{ y_i \ | \ 1 \leq i \leq n+k \}$. The volume of $R_{n,k}$ is $vol(R_{n,k})= 2(2n-1+3k-1)=6k+4n-4$.\\ We consider the following cases in order to find $Mcut(R_{n,k})$.\\ \noindent {\bf{Case(i)}} Let $A_1\subseteq V(R_{n,k})$, where $\displaystyle A_1 =\{ x_i \ | \ 1 \leq i \leq n+k \}$ and $\displaystyle V\setminus A_1=\{ y_i \ | \ 1 \leq i \leq n+k \}$. Then the volume $vol(A_1)$ is $\displaystyle \frac{vol(R_{n,k})}{2}=2n+3k-2$ and $cut(A_1,V\setminus A_1)=k$. So we have \begin{eqnarray*} Ncut(A_1,V \setminus A_1) &=& k \left(\frac{1}{2n+3k-2}+\frac{1}{3k+2n-2} \right)\\ &=& \frac{2k}{3k+2n-2}. \end{eqnarray*} Denote this value by $c_1$.\\ \noindent {\bf{Case(ii)}} Let $A_2\subseteq V(R_{n,k})$ such that $\displaystyle A_2 =\{ x_i \ | \ 1 \leq i \leq n \}$ and $\displaystyle V\setminus A_2=\{ x_i \ | \ n+1 \leq i \leq n+k \}\cup \{ y_i \ | \ 1 \leq i \leq n+k \}$. Then the volume $vol(A_2)=2n-1$, $vol(V\setminus A_2)=vol(R_{n,k})-vol(A_2)=2n+6k-3$ and $\displaystyle cut(A_2,(V\setminus A_2))=1$. So we have $$ Ncut(A_2,V \setminus A_2)=\frac{(6k+4n-4)}{(2n-1)(6k+2n-3)}. $$ Denote this value by $c_2$.\\ \noindent {\bf{Case(iii)}} Suppose there exists a subset $A_3$ with $|A_3| <n$ such that $cut(A_3,(V\setminus A_3))=1$. Let $vol(A_3)=2n-1-2x$, where $x=|A_2|-|A_3|$ and $|A_2|=n$. Then $vol(V\setminus A_3)= 6k+2n-3 +2x$.
$\displaystyle Ncut(A_3,V \setminus A_3)=\frac{1}{2n-1-2x} +\frac{1}{6k+2n-3+2x} = \frac{6k+4n-4}{(2n-1)(6k+2n-3)+4x(1-(3k+x))}$. Since $4x(1-(3k+x)) <0$, we have $Ncut(A_3,V\setminus A_3)> Ncut(A_2,V \setminus A_2)$, so this case can be ignored.\\ \noindent {\bf{Case(iv)}} Let $\displaystyle A_4(\alpha)=\{ x_i \ | \ 1 \leq i \leq n+\alpha \} \cup \{ y_i \ | \ 1 \leq i \leq n+\alpha \}$, where $1 \leq \alpha <k$ and $\displaystyle V \setminus A_4(\alpha)= \{ x_i \ | \ n+\alpha +1 \leq i \leq n+k \} \cup \{ y_i \ | \ n+\alpha +1 \leq i \leq n+k \}$. Then $\displaystyle vol(A_4(\alpha))=2(2n-1+3\alpha )=4n+6\alpha -2$, $vol(V \setminus A_4(\alpha))=6k-2-6\alpha $ and $cut(A_4(\alpha),V \setminus A_4(\alpha))=2$. Then we have $$ Ncut(A_4(\alpha),V\setminus A_4(\alpha)) =\frac{(3k+2n-2)}{(2n-1+3\alpha)(3k-3\alpha-1)}. $$ Denote this value by $c_4(\alpha)$.\\ The minimum of $c_4(\alpha)$ can be found by differentiating with respect to $\alpha$: $\displaystyle \frac{dc_4(\alpha)}{d\alpha} =0$ gives the minimum of $c_4(\alpha)$ at $\displaystyle \alpha_0=\frac{3k-2n}{6}$. However, $\alpha_0$ need not be an integer. If $\frac{3k-2n}{6} <1$, that is, $\displaystyle 1 \leq k < \frac{6+2n}{3}$, then the minimum value is $c_4(1)$. Then we have $$ c_4(1)= \frac{2-3k-2n}{8-6k+8n-6kn}. $$ If $\displaystyle 1 \leq \frac{3k-2n}{6} < k$, that is, $k \geq \frac{6+2n}{3}$, then the minimum value is $\displaystyle c_4(\frac{3k-2n}{6})$ whenever $\displaystyle \frac{3k-2n}{6} \in \mathbf{Z}$. $$ c_4(\frac{3k-2n}{6})= \frac{4}{-2+3k+2n}. $$ If $k \geq \frac{6+2n}{3}$, $2 \nmid k$ and $3 \mid n$, then the minimum value is $\displaystyle c_4(\frac{3k-2n}{6}+\frac{1}{2})=c_4(\frac{3k-2n}{6}-\frac{1}{2})$. $$ c_4(\frac{3k-2n}{6}+\frac{1}{2})=\frac{4 (-2+3 k+2 n)}{(-5+3 k+2 n) (1+3 k+2 n)}.
$$ If $k \geq \frac{6+2n}{3}$, $3 \nmid n$ and $2 \mid k$, then the minimum value is $\displaystyle c_4(\frac{3k-2n}{6}+\frac{1}{3})=c_4(\frac{3k-2n}{6}-\frac{1}{3})$. $$ c_4(\frac{3k-2n}{6}-\frac{1}{3})=\frac{4 (-2+3 k+2 n)}{(-4+3 k+2 n) (3 k+2 n)}. $$ If $k \geq \frac{6+2n}{3}$, $3 \nmid n$ and $2 \nmid k$, then the minimum value is $\displaystyle c_4(\frac{3k-2n}{6}+\frac{1}{6})=c_4(\frac{3k-2n}{6}-\frac{1}{6})$. $$ c_4(\frac{3k-2n}{6}-\frac{1}{6})=\frac{4 (-2+3 k+2 n)}{(-3+3 k+2 n)(-1+3 k+2 n)}. $$ \noindent {\bf{Case(v)}} Let $A_5 =\{ x_i \ | \ 1 \leq i \leq n+1 \} $ and $V\setminus A_5=\{x_i \ | \ n+2 \leq i \leq n+k \}\cup \{ y_i \ | \ 1 \leq i \leq n+k \}$. Then $vol(A_5)= 2n+2$ and $vol(V\setminus A_5) =2n+6k-6$. Then we have $\displaystyle Ncut(A_5,V\setminus A_5) = 2 \left( \frac{1}{2n+2} + \frac{1}{2n+6k-6} \right)=\frac{2n+3k-2}{(n+1)(n+3k-3)}$. Denote this value by $c_5$.\\ \noindent Now we can compare all the cases considered above.\\ If $k=2$ and $n=1$ then it is easy to show that $c_1$ is the minimum. If $k=2$ and $n \geq 2$ then it is easy to show that $c_2$ is the minimum. If $k=3$ and $n=1$ then $c_4(\frac{3k-2n}{6}-\frac{1}{6})$ is the minimum. If $k=3$ and $n=2$ then $c_4(\frac{3k-2n}{6}+\frac{1}{6})$ is the minimum. If $k=3$ and $n \geq 3$ then we can easily show that $c_2$ is the minimum. If $k \ge 4$ and $n=1$ then the appropriate $c_4$ value is the minimum. Next we assume that $k \geq 4$ and $n \ge 2$. It is easy to check that $c_2$ is smaller than $c_1$, the value in Case(iii) and $c_5$. So we compare $c_2$ with $c_4$ for $k \geq 4$. Then we have the following results. If $(*_1 $ and $(n < K_1))$ then $c_4(\frac{3k-2n}{6})$ is smaller than $c_2$. If $(*_2 $ and $( n < K_2))$ then $c_4(\frac{3k-2n}{6}-\frac{1}{6})$ is smaller than $c_2$. If $(*_3 $ and $( n < K_3))$ then $c_4(\frac{3k-2n}{6}-\frac{1}{3})$ is smaller than $c_2$. If $(*_4$ and $( n < K_4))$ then $c_4(\frac{3k-2n}{6}+\frac{1}{2})$ is smaller than $c_2$. We can summarize the results as follows.
\[\left\{ \begin{array}{cc} c_1 & \mbox{$n=1,k=2$,}\\ c_4(\frac{3k-2n}{6}) & \mbox{$(*_1 \wedge (k \geq 4) \wedge (n < K_1))$,}\\ c_4(\frac{3k-2n}{6}+1/2) & \mbox{$(*_4 \wedge ( k \geq 4) \wedge ( n < K_4))$,}\\ c_4(\frac{3k-2n}{6}-1/3) & \mbox{$(*_3 \wedge (k \geq 4) \wedge ( n < K_3))$,}\\ c_4(\frac{3k-2n}{6}-1/6) & \mbox{$(*_2\wedge (k \geq 4) \wedge ( n < K_2))\vee$}\\ &\mbox{$(n=1,k=3)\vee (n=2,k=3)$,}\\ c_2& \mbox{$((k \geq 4) \wedge ((*_1 \wedge (K_1 \leq n))\vee$}\\ &\mbox{$(*_2 \wedge (K_2\leq n))\vee$}\\ &\mbox{$(*_3 \wedge (K_3 \leq n))\vee(*_4 \wedge ( K_4 \leq n))))$}\\ & \mbox{$\vee (k=2 \wedge (n\geq 2))\vee (k=3 \wedge (n \geq 3))$.} \end{array} \right. \] Finally, we want to show that any subset $A$ attaining the minimum normalized cut satisfies $cut(A,V\setminus A)=1$ or $cut(A,V\setminus A)=2$. We notice that every subset $A$ with $cut(A,V\setminus A)=1$ is of the form $A_2$ or $A_3$, and every subset $A$ with $cut(A,V\setminus A)=2$ is of the form $A_1$, $A_4$ or $A_5$. We considered all cases with $cut(A,V\setminus A)=1$, and the minimum occurs at $A_2$. There may be several partitions with $cut(A,V\setminus A) \geq 2$. Let $k \geq 4$. Then we note that $vol(R_{n,k})\geq 24$ and there exists a subset $A_4$ in Case(iv) which minimizes $\displaystyle \left(\frac{1}{vol(A)}+\frac{1}{vol(R_{n,k})-vol(A)}\right)$ among subsets with $cut(A,V\setminus A)=2$. We note that $\displaystyle |vol(A_4)-\frac{vol(R_{n,k})}{2}| \leq 3$. From Lemma~\ref{genmcut2}, $\displaystyle 3\left(\frac{1}{vol(R_{n,k})/2}+\frac{1}{vol(R_{n,k})/2}\right)> 2\left(\frac{1}{vol(R_{n,k})/2+3}+\frac{1}{vol(R_{n,k})/2-3}\right)$ for $vol(R_{n,k}) \geq 11$. Then we can show that there is no subset $A$ with $cut(A,V\setminus A) \geq 3$ and $Ncut(A,V\setminus A) \leq Ncut(A_4,V\setminus A_4)$. This shows that, among all partitions with cut value greater than 1, the minimum $Ncut$ is always attained with cut value 2. \end{proof} Figure~\ref{fig:rnk} shows the above regions for $n,k$. For a given $R_{n,k}$, we can find $Mcut(R_{n,k})$.
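The case analysis above can be cross-checked by exhaustive search on small instances. The following stdlib-only Python sketch is our own illustration, not part of the proof: it builds $R_{n,k}$ with the vertex labels used in the proof (the $k$ rungs join the pairs $x_i,y_i$ for $n+1 \le i \le n+k$) and minimizes $Ncut$ over all subsets with exact rational arithmetic.

```python
# Brute-force check of Mcut(R_{n,k}) on small roach graphs (illustrative only).
from fractions import Fraction
from itertools import combinations

def roach_edges(n, k):
    """Two paths x_1..x_{n+k}, y_1..y_{n+k}, with k rungs x_i--y_i, i=n+1..n+k."""
    m = n + k
    xs = [("x", i) for i in range(1, m + 1)]
    ys = [("y", i) for i in range(1, m + 1)]
    edges = [(xs[i], xs[i + 1]) for i in range(m - 1)]
    edges += [(ys[i], ys[i + 1]) for i in range(m - 1)]
    edges += [(xs[i], ys[i]) for i in range(n, m)]  # the k rungs
    return xs + ys, edges

def mcut(n, k):
    """Exact minimum of Ncut(A, V\\A) over all nonempty proper subsets A."""
    verts, edges = roach_edges(n, k)
    deg = {v: 0 for v in verts}
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    volV = sum(deg.values())
    best = None
    for r in range(1, len(verts)):
        for A in combinations(verts, r):
            A = set(A)
            volA = sum(deg[v] for v in A)
            cut = sum(1 for u, v in edges if (u in A) != (v in A))
            ncut = Fraction(cut, volA) + Fraction(cut, volV - volA)
            best = ncut if best is None else min(best, ncut)
    return best

# For k=2, n>=2 the theorem gives Mcut = (6k+4n-4)/((2n-1)(6k+2n-3)).
n, k = 2, 2
assert mcut(n, k) == Fraction(6*k + 4*n - 4, (2*n - 1)*(6*k + 2*n - 3))
```

For $(n,k)=(2,2)$ the search returns $16/39$, agreeing with the $c_2$ branch; for $(n,k)=(1,2)$ it returns $2/3$, agreeing with the $c_1$ branch.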
\begin{figure}[htb] \begin{center} \includegraphics[scale=0.75]{rnkgraph.pdf} \caption{$Mcut(R_{n,k})$.} \label{fig:rnk} \end{center} \end{figure} \subsection{$Mcut$ of weighted paths $P_{n,k}$} In this section, we consider a weighted path graph $P_{n,k}$ and find a formula for $Mcut(P_{n,k})$ based on $n,k$. We consider subsets of $V(P_{n,k})$ defined by $\displaystyle A(\alpha) =\{ x_i \ | \ 1 \leq i \leq \alpha \}$ for $1 \le \alpha \le n+k-1$. We note that every subset $A \subset V(P_{n,k})$ with $cut(A,V\setminus A)=1$ is $A=A(\alpha)$ for some $\alpha$. \begin{lemma} Let $G=P_{n,k}$. There exists a subset $A \subset V(P_{n,k})$ such that $cut(A,V\setminus A)=1$ and $Mcut(G)=Ncut(A,V\setminus A)$. \end{lemma} \begin{proof} Since $vol(P_{n,k})=2n+3k-2$, if $k \ge \frac{1}{3}(11-2n)$ then $vol(P_{n,k})\ge 9$. By Lemma~\ref{gencut1}, we have $Mcut(G)=Mcut_1(G)$. If $k < \frac{1}{3}(11-2n)$, we have only five cases $(n,k)=(1,1)$, $(2,1)$, $(3,1)$, $(1,2)$ and $(2,2)$. For each case, $Mcut(P_{1,1})=Ncut(A(1),V\setminus A(1))$, $Mcut(P_{2,1})=Ncut(A(2),V\setminus A(2))$, $Mcut(P_{3,1})=Ncut(A(2),V\setminus A(2))$, $Mcut(P_{1,2})=Ncut(A(2),V\setminus A(2))$, and $Mcut(P_{2,2})=Ncut(A(2),V\setminus A(2))$. \end{proof} Let $P_{n,k}$ ($k \ge \frac{1}{3}(11-2n)$) be a weighted path graph.
We first note that \begin{eqnarray*} vol(P_{n,k})&=&2n+3k-2,\\ vol(A(\alpha))&=& \begin{cases} 2 \alpha -1 & \mbox{($\alpha \le n$)} \\ 3 \alpha - n -1 & \mbox{($n+1 \le \alpha$)} \end{cases}, \ \mbox{and} \\ {\small Ncut(A(\alpha),V\setminus A(\alpha))} &=& c(\alpha), \end{eqnarray*} where a function $c(t)$ ($1 \le t \le n+k$) is defined by $$ c(t)= \begin{cases} \frac{2n+3k-2}{(2 t-1)(2n+3k-2 t-1)} & \mbox{($1 \le t \le n+\frac{1}{2}$)} \\ \frac{2n+3k-2}{(-n+3 t-1)(3n+3k-3 t-1)} & \mbox{($n+\frac{1}{2} < t \le n+k-1$).} \end{cases} $$ We note that $c(\alpha-x)=c(\alpha+x)$ for an integer $\alpha$ ($1 \le \alpha \le n$, $n+1< \alpha \le n+k-1$) and a real number $x$ ($0 \le x \le \frac{1}{2}$). We also note $vol(A(i))<vol(A(i+1))$ ($1 \le i \le n+k-2$), $vol(A(n))=2n-1$, $vol(A(n+1))=2n+2$ and $vol(A(n+k-1))=2n+3k-4$. Since {\small \begin{eqnarray*} &&Ncut(A(\alpha),V\setminus A(\alpha)) \\ &=& \frac{4 vol(P_{n,k})} {(vol(P_{n,k}))^2-(vol(A(\alpha))-vol(V\setminus A(\alpha)))^2} \\ &=& \frac{4 vol(P_{n,k})} {(vol(P_{n,k}))^2-4(vol(A(\alpha))-\frac{1}{2}vol(P_{n,k}))^2}, \end{eqnarray*} } if $Mcut(P_{n,k})=Ncut(A(\alpha_0),V\setminus A(\alpha_0))$ then {\small \begin{eqnarray*} && \vert vol(A(\alpha_0))-\frac{1}{2}vol(P_{n,k}) \vert \\ &=& \min \{ \vert vol(A(\alpha))-\frac{1}{2}vol(P_{n,k}) \vert \ | \ 1 \le \alpha \le n+k-1 \}. \end{eqnarray*} } We consider four cases: Case (i) $\frac{1}{2}vol(P_{n,k}) \le vol(A(n))$, Case (ii) $ vol(A(n)) < \frac{1}{2}vol(P_{n,k}) \le \frac{1}{2}(vol(A(n))+vol(A(n+1)))$, Case (iii) $\frac{1}{2}(vol(A(n))+vol(A(n+1))) < \frac{1}{2}vol(P_{n,k}) \le vol(A(n+1))$, and Case (iv) $ vol(A(n+1)) < \frac{1}{2}vol(P_{n,k})$. \\ \noindent {\bf Case (i)} Assume $\frac{1}{2}vol(P_{n,k}) \le vol(A(n))$. That is $k \le \frac{2}{3} n$. We find $\alpha$ minimizing $\vert vol(A(\alpha))-\frac{1}{2}vol(P_{n,k}) \vert$ $=\vert 2\alpha -1 -(n+\frac{3}{2}k -1) \vert$. For such $\alpha$ we have $$ (2\alpha-1)-1 < n+\frac{3}{2}k-1 \le (2\alpha-1)+1.
$$ That is $$ \alpha -\frac{1}{2} < \frac{2n+3k}{4} \le \alpha + \frac{1}{2} $$ which means $\alpha$ is the nearest integer to $\frac{2n+3k}{4}$. We consider three cases ($K \in \mathbf{Z}$), ($K \not\in \mathbf{Z}$ and $2 \mid k$), and ($2 \nmid k$), where $K=\frac{2n+3k}{4}$. If $K \in \mathbf{Z}$ then $\alpha=K$. If $2 \nmid k$ then $\alpha=K+\frac{1}{4}$ or $\alpha=K-\frac{1}{4}$. If $K \not\in \mathbf{Z}$ and $2 \mid k$ then $\alpha=K+\frac{1}{2}$ or $\alpha=K-\frac{1}{2}$. Since $c(\alpha-x)=c(\alpha+x)$ for an integer $\alpha$ ($1 \le \alpha \le n $) and a real number $x$ ($0 \le x \le \frac{1}{2}$), $Mcut(P_{n,k})$ will be \begin{eqnarray*} c(K)&=&\frac{4}{-2+3 k+2 n}, \\ c(K+\frac{1}{2})&=&\frac{4 (-2+3 k+2 n)}{(-4+3 k+2 n) (3 k+2 n)}, \ \mbox{or}\\ c(K+\frac{1}{4})&=&\frac{4 (-2+3 k+2 n)}{(-3+3 k+2 n) (-1+3 k+2 n)}, \end{eqnarray*} depending on the conditions on $n$ and $k$. \noindent {\bf Case (ii)} Assume $ vol(A(n)) < \frac{1}{2}vol(P_{n,k}) \le \frac{1}{2}(vol(A(n))+vol(A(n+1)))$. That is $\frac{2}{3}n < k \le \frac{2}{3} n + 1$. In this case $$ Mcut(P_{n,k})=c(n) =\frac{3k+2n-2}{(3k-1)(2n-1)}. $$ \noindent {\bf Case (iii)} Assume $\frac{1}{2}(vol(A(n))+vol(A(n+1))) < \frac{1}{2}vol(P_{n,k}) \le vol(A(n+1))$. That is $\frac{2}{3}n +1 < k \le \frac{2}{3} n+2$. In this case $$ Mcut(P_{n,k})=c(n+1) =\frac{2-3k-2n}{8-6k+8n-6kn}. $$ \noindent {\bf Case (iv)} Assume $ vol(A(n+1)) < \frac{1}{2}vol(P_{n,k})$. That is $\frac{2}{3} n+2 < k$. We find $\alpha$ minimizing $\vert vol(A(\alpha))-\frac{1}{2}vol(P_{n,k}) \vert$ $=\vert 3\alpha -n -1 -(n+\frac{3}{2}k -1) \vert$. For such $\alpha$ we have $$ (3\alpha-n-1)-\frac{3}{2} < n+\frac{3}{2}k-1 \le (3\alpha-n-1)+\frac{3}{2}. $$ That is $$ \alpha -\frac{1}{2} < \frac{4n+3k}{6} \le \alpha + \frac{1}{2} $$ which means $\alpha$ is the nearest integer to $\frac{4n+3k}{6}$.
We consider four cases ($K' \in \mathbf{Z}$), ($3 \nmid n$ and $2 \mid k$), ($3 \mid n$ and $2 \nmid k$), and ($3 \nmid n$ and $2 \nmid k$), where $K'=\frac{4n+3k}{6}$. If $K' \in \mathbf{Z}$ then $\alpha=K'$. If $3 \nmid n$ and $2 \mid k$ then $\alpha=K'+\frac{1}{3}$ or $\alpha=K'-\frac{1}{3}$. If $3 \mid n$ and $2 \nmid k$ then $\alpha=K'+\frac{1}{2}$ or $\alpha=K'-\frac{1}{2}$. If $3 \nmid n$ and $2 \nmid k$ then $\alpha=K'+\frac{1}{6}$ or $\alpha=K'-\frac{1}{6}$. Since $c(K'-x)$ $=c(K'+x)$ ($x=\frac{1}{2},\frac{1}{3},\frac{1}{6}$), we have $Mcut(P_{n,k})$ as one of \begin{eqnarray*} c(K')&=&\frac{4}{-2+3 k+2 n},\\ c(K'+\frac{1}{2})&=&\frac{4 (-2+3 k+2 n)}{(-5+3 k+2 n) (1+3 k+2 n)},\\ c(K'+\frac{1}{3})&=&\frac{4 (-2+3 k+2 n)}{(-4+3 k+2 n) (3 k+2 n)}, \ \mbox{or}\\ c(K'+\frac{1}{6})&=&\frac{4 (-2+3 k+2 n)}{(-3+3 k+2 n) (-1+3 k+2 n)}, \end{eqnarray*} depending on the conditions on $n$ and $k$. We note $c(K)=c(K')$, $c(K+\frac{1}{2})=c(K'+\frac{1}{3})$ and $c(K+\frac{1}{4})=c(K'+\frac{1}{6})$ before summarizing them as a theorem. \begin{theorem} \label{pathnk} For $P_{n,k}$ $(n,k \geq 1$, $k \ge \frac{1}{3}(11-2n))$, $Mcut(P_{n,k})$ is given by \[ \left \{ \begin{array}{cc} \frac{4}{-2+3 k+2 n} & \mbox{$(((3 \mid n) \wedge (2 \mid k) \wedge (R_3 < k))\vee$}\\ &\mbox{$(o_1\wedge (k \le R_1))$,}\\ \\ \frac{4 (-2+3 k+2 n)}{(-5+3 k+2 n) (1+3 k+2 n)} & \mbox{$((3 \mid n) \wedge (2 \nmid k)\wedge (R_3 < k))$,}\\ \\ \frac{4 (-2+3 k+2 n)}{(-4+3 k+2 n) (3 k+2 n)}& \mbox{$((3 \nmid n)\wedge (2 \mid k)\wedge (R_3 < k))\vee$}\\ &\mbox{$(o_2\wedge (2 \mid k)\wedge (k \le R_1))$,}\\ \\ \frac{4 (-2+3 k+2 n)}{(-3+3 k+2 n) (-1+3 k+2 n)} & \mbox{$((3 \nmid n)\wedge (2 \nmid k)\wedge (R_3 < k))\vee$}\\ &\mbox{$((2 \nmid k) \wedge (k \le R_1))$,}\\ \\ \frac{3k+2n-2}{(3k-1)(2n-1)} & \mbox{$(R_1 < k \le R_2)$,}\\ \\ \frac{2-3k-2n}{8-6k+8n-6kn}& \mbox{$(R_2 < k \le R_3)$,}\\ \end{array}\right.
\] where \begin{eqnarray*} o_1 &=& (\frac{3k+2n}{4} \in \mathbf{Z}),\\ o_2 &=& (\frac{3k+2n}{4} \not\in \mathbf{Z}), \\ R_1&=& \frac{2n}{3},\\ R_2 &=& \frac{2n}{3}+1,\\ R_3 &=&\frac{2n}{3}+2. \end{eqnarray*} \end{theorem} \hfill\qed Figure~\ref{fig:patha} shows $Mcut(P_{n,k})$ for each $n,k$. \begin{figure}[htb] \begin{center} \includegraphics[scale=0.7]{p_nk_fig.pdf} \caption{$Mcut(P_{n,k})$.} \label{fig:patha} \end{center} \end{figure} \begin{corollary} For $P_{2k,k}$, \[ Mcut(P_{2k,k})= \left \{ \begin{array}{cc} \frac{4}{-2+7k} & \mbox{$(4 \mid k) $,}\\ \\ \frac{4 (-2+7 k)}{(-4+7k) (7k)} & \mbox{$(4 \nmid k)\wedge (2 \mid k)$,}\\ \\ \frac{4 (-2+7 k)}{(-3+7k) (-1+7k)} & \mbox{$(2 \nmid k)$.}\\ \end{array} \right. \] \end{corollary} \begin{proof} By substituting $n=2k$ into the formula given for $Mcut(P_{n,k})$, we can directly obtain the result. According to Theorem~\ref{pathnk}, for $n=2k$ the condition $k>R_3$, that is, $\displaystyle k > \frac{2n}{3}+2$, implies $k < -6$. Since $k \geq 1$, this does not hold. The condition $R_1 <k \le R_2$, that is, $\displaystyle \frac{2n}{3} < k \le \frac{2n}{3}+1$, implies $ -3 \le k < 0$. Since $k \geq 1$, this does not hold. The condition $R_2 <k \le R_3$, that is, $\displaystyle \frac{2n}{3}+1 < k \le \frac{2n}{3}+2$, implies $ -6 \le k < -3$. Since $k \geq 1$, this does not hold. Therefore the only case that holds for $n=2k$ is $k \le R_1$, that is, $\displaystyle k \le \frac{4k}{3}$, which holds for all $k \ge 1$. Substituting $n=2k$ in Theorem~\ref{pathnk}, we have \[ Mcut(P_{2k,k})= \left\{ \begin{array}{cc} \frac{4}{-2+7k} & \mbox{ $(4 \mid k)$,}\\ \frac{4 (-2+7 k)}{(-4+7k) (7k)} & \mbox{$(4 \nmid k)\wedge (2 \mid k)$,}\\ \frac{4 (-2+7 k)}{(-3+7k) (-1+7k)} & \mbox{$(2 \nmid k)$.} \end{array} \right. \] \end{proof} \subsection{$Mcut$ of graph $LP_{n,m}$} Here we consider the lollipop graph $LP_{n,m}$ and derive a formula for $Mcut(LP_{n,m})$.
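The closed form derived below can be sanity-checked by exhaustive search on small lollipop graphs. The following stdlib-only Python sketch is our own illustration (not part of the proofs), assuming the construction of Definition~\ref{def:lp}: a complete graph $K_n$ with a path of $m$ vertices attached at one clique vertex.

```python
# Brute-force check of Mcut(LP_{n,m}) on small lollipop graphs (illustrative only).
from fractions import Fraction
from itertools import combinations

def lollipop_edges(n, m):
    """Path x_1..x_m attached to clique y_1..y_n via the edge x_m--y_1."""
    path = [("x", i) for i in range(1, m + 1)]
    clique = [("y", i) for i in range(1, n + 1)]
    edges = [(path[i], path[i + 1]) for i in range(m - 1)]
    edges.append((path[-1], clique[0]))       # join path end to the clique
    edges += list(combinations(clique, 2))    # all edges of K_n
    return path + clique, edges

def mcut(n, m):
    """Exact minimum of Ncut(A, V\\A) over all nonempty proper subsets A."""
    verts, edges = lollipop_edges(n, m)
    deg = {v: 0 for v in verts}
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    volV = sum(deg.values())
    best = None
    for r in range(1, len(verts)):
        for A in combinations(verts, r):
            A = set(A)
            volA = sum(deg[v] for v in A)
            cut = sum(1 for u, v in edges if (u in A) != (v in A))
            val = Fraction(cut, volA) + Fraction(cut, volV - volA)
            best = val if best is None else min(best, val)
    return best

# For n=3, m=2 (so m <= (n^2-n+4)/2) the closed form below gives
# Mcut = (n^2-n+2m)/((2m-1)(n^2-n+1)) = 10/21.
assert mcut(3, 2) == Fraction(10, 21)
```

The search is exponential in $n+m$, so it is only feasible for very small instances, which is exactly what a sanity check needs.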
A lollipop graph $LP_{n,m}$ defined in Definition~\ref{def:lp} is constructed by joining an end vertex of a path graph $P_m$ to a vertex of a complete graph $K_n$. We consider three kinds of subsets of $V(LP_{n,m})$ defined by $A_1(\alpha)=\{x_i\ |\ 1 \le i \le \alpha\}$ for $1\le \alpha \le m$, $A_2(\beta)=\{x_i\ |\ 1 \le i \le m\}$ $\cup \{y_i\ |\ 1 \le i \le \beta\}$ for $1 \le \beta < n$, and $B(\alpha,\beta)=\{x_i\ |\ 1 \le i \le \alpha\}$ $\cup \{x_m\}$ $\cup \{y_i \ |\ 1 \le i \le \beta\}$ for $1 \le \alpha < m-1$, $1 \le \beta < n$. \begin{lemma} Let $A$ be a subset of $V(LP_{n,m})$. \begin{enumerate} \item If $y_i \in A$ and $y_{i+1} \not\in A$ for some $i$ ($2 \le i \le n-1$) then $Ncut(A',V\setminus A')$ $=Ncut(A,V\setminus A)$, where $A'=(A\setminus\{y_i\})\cup\{y_{i+1}\}$. \item If $x_i \in A$, $x_{i+1},\ldots,x_{j} \not\in A$, and $x_{j+1} \in A$ for some $i,j$ ($1 \le i < j \le m-1$) then $Ncut(A',V\setminus A')$ $\le Ncut(A,V\setminus A)$, where $A'=(A\setminus\{x_{j+1}\})\cup\{x_{i+1}\}$. \item There exists a subset $A_1(\alpha)$, $A_2(\beta)$ or $B(\alpha,\beta)$ such that $Mcut(LP_{n,m})=Ncut(A_1(\alpha),V\setminus A_1(\alpha))$, $Mcut(LP_{n,m})=Ncut(A_2(\beta),V\setminus A_2(\beta))$, or $Mcut(LP_{n,m})=Ncut(B(\alpha,\beta),V\setminus B(\alpha,\beta))$. \end{enumerate} \end{lemma} \begin{proof} \begin{enumerate} \item It is easy to check $vol(A)=vol(A')$ and $cut(A,V\setminus A)=cut(A',V\setminus A')$. \item It is easy to check $vol(A)=vol(A')$ and $cut(A',V\setminus A') \le cut(A,V\setminus A)$. \item Let $A$ be a subset of $V(LP_{n,m})$ such that $Mcut(LP_{n,m})=Ncut(A,V\setminus A)$. Using results 1 and 2 above, we obtain a subset $A'$ which is one of $A_1(\alpha)$, $A_2(\beta)$ or $B(\alpha,\beta)$ such that $Ncut(A',V\setminus A')=Mcut(LP_{n,m})$. \end{enumerate} \end{proof} Let $G=LP_{n,m}$ ($n \ge 3$, $m \ge 1$) be a lollipop graph.
We first note that {\small \begin{eqnarray*} vol(LP_{n,m})&=&2m+n(n-1), \\ vol(A_1(\alpha))&=&2\alpha -1, \\ cut(A_1(\alpha),V\setminus A_1(\alpha))&=& 1, \\ vol(A_2(\beta))&=&2m+\beta(n-1), \\ cut(A_2(\beta),V\setminus A_2(\beta))&=& \beta(n-\beta), \\ vol(B(\alpha,\beta))&=& 2\alpha + 2+\beta(n-1), \\ cut(B(\alpha,\beta),V\setminus B(\alpha,\beta)) &=& \beta(n-\beta) + 2, \ \mbox{and} \\ Ncut(A_1(\alpha),V\setminus A_1(\alpha)) &=& c(\alpha), \end{eqnarray*}} where a function $c(t)$ ($1 \le t \le m$) is defined by $$ c(t)=\frac{2 m-n+n^2}{\left(1+2 m-n+n^2-2 t\right) (-1+2 t)}. $$ It is also shown that {\small \begin{eqnarray*} &&Ncut(A_2(\beta),V\setminus A_2(\beta))\\ &=& \frac{\beta \left(2 m-n+n^2\right)}{(-1+n) (-\beta+2 m+n \beta)}, \ \mbox{and} \\ &&Ncut (B(\alpha,\beta),V\setminus B(\alpha,\beta))\\ &=& \frac{\left(2 m-n+n^2\right) \left(2+n \beta -\beta ^2\right)}{(2+2 \alpha -\beta +n \beta ) \left(2 m-n+n^2-2 \alpha +\beta -n \beta - 2 \right)}. \end{eqnarray*} } \begin{lemma} \label{lemma:lp} Let $G=LP_{n,m}$ $(n \ge 3$, $m \ge 2)$. \begin{enumerate} \item $c(\alpha-1) < c(\alpha)$ iff $m > \frac{1}{2}(n^2-n+4)$ \\ $(2 \le \alpha \le m)$. \item $vol(A_1(m)) \le \frac{1}{2}vol(LP_{n,m})+1$ iff $m \le \frac{1}{2}(n^2-n+4)$. \item $c(m) \le Ncut(A_2(\beta),V\setminus A_2(\beta))$ $(1 \le \beta < n)$. \item If $m \le \frac{1}{2}(n^2-n+2)$ then $$ c(m) \le Ncut(B(\alpha,\beta),V\setminus B(\alpha,\beta)), $$ $(1 \le \alpha \le m-2$, $1 \le \beta < n)$. \end{enumerate} \end{lemma} \begin{proof} Each item follows by a straightforward computation. \end{proof} Since $cut(A_1(\alpha),V\setminus A_1(\alpha))=1$, if $vol(A_1(m)) \ge \frac{1}{2}vol(LP_{n,m})$ then there exists some $\alpha$ such that $Mcut(LP_{n,m})=Ncut(A_1(\alpha),V\setminus A_1(\alpha))$. To find the $\alpha$, we solve $$ vol(A_1(\alpha))-1 < \frac{1}{2}vol(LP_{n,m}) \le vol(A_1(\alpha))+1.
$$ That is $$ \alpha - \frac{1}{2} < \frac{n^2-n+2m+2}{4} \le \alpha + \frac{1}{2} $$ which means $\alpha$ is the nearest integer to $\frac{n^2-n+2m+2}{4}$. We consider two cases $(K \in \mathbf{Z})$ and $(K \not\in \mathbf{Z})$, where $K=\frac{n^2-n+2m+2}{4}$. If $K \in \mathbf{Z}$ then $\alpha=K$. If $K \not\in \mathbf{Z}$ then $K+\frac{1}{2}$ is an integer and $\alpha=K+\frac{1}{2}$ or $\alpha=K-\frac{1}{2}$. Since $c(K+\frac{1}{2})=c(K-\frac{1}{2})$, $Mcut(LP_{n,m})$ will be {\small \begin{eqnarray*} c(K)&=&\frac{4}{n^2-n+2m}, \ \mbox{or} \\ c(K+\frac{1}{2})&=& \frac{4(n^2-n+2m)}{(n(n-1)+2(m-1))(n(n-1)+2(m+1))}. \end{eqnarray*} } By Lemma~\ref{lemma:lp}, if $m \le \frac{1}{2}(n^2-n+4)$ then $Mcut(LP_{n,m})=Ncut(A_1(m),V\setminus A_1(m))$. That is $$ Ncut(A_1(m),V\setminus A_1(m))=\frac{n^2-n+2m}{(2m-1)(n^2-n+1)}. $$ If $m=1$ then it is easy to verify $Mcut(LP_{n,1})=Ncut(A_2(1),V\setminus A_2(1))$ $=\frac{n^2-n+2}{(n+1)(n-1)}$. \begin{theorem} \label{prop316} For the graph $LP_{n,m}$ $(n \geq 3$, $m \geq 1)$, $Mcut(LP_{n,m})$ is given by \[\left \{ \begin{array}{cc} \frac{n^2-n+2m}{(2m-1)(n^2-n+1)} & \mbox{$(2\leq m \leq \frac{n^2-n+4}{2})$,} \\ \frac{4}{(n^2-n+2m)}& \mbox{$(o_1 \wedge m > \frac{n^2-n+4}{2})$,}\\ \frac{4(n^2-n+2m)}{(n(n-1)+2(m-1))(n(n-1)+2(m+1))} & \mbox{$(o_2 \wedge m > \frac{n^2-n+4}{2})$,}\\ \frac{n^2-n+2}{(n+1)(n-1)} & \mbox{$(m=1)$,} \end{array} \right. \] where \begin{eqnarray*} o_1 &=& (\frac{n^2-n+2m+2}{4} \in \mathbf{Z}), \\ o_2 &=& (\frac{n^2-n+2m+2}{4} \not\in \mathbf{Z}). \end{eqnarray*} \end{theorem} \hfill\qed \section{Eigenvalues and eigenvectors of paths and cycles} In this section, we derive formulae for the eigenvalues and eigenvectors of cycles and paths using circulant matrices and give an alternate proof for the eigenvalues of the adjacency matrices of cycles and paths using Chebyshev polynomials.
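The closed-form eigenpairs of the adjacency matrices of $C_n$ and $P_n$ obtained in this section can also be verified numerically. The following stdlib-only Python sketch is our own illustration (function names are ours); it checks the eigenvalue equation row by row using the trigonometric eigenvectors.

```python
# Numerical check of the closed-form adjacency eigenpairs (illustrative only):
# A(C_n) has eigenvalues 2cos(2k*pi/n) with eigenvector u_k(i) = cos(2k*pi*i/n)
# (the real part of the circulant eigenvector), and A(P_n) has eigenvalues
# 2cos((k+1)*pi/(n+1)) with eigenvector u_k(i) = sin((i+1)(k+1)*pi/(n+1)).
import math

def check_cycle(n):
    for k in range(n):
        lam = 2 * math.cos(2 * k * math.pi / n)
        u = [math.cos(2 * k * math.pi * i / n) for i in range(n)]
        for i in range(n):
            # Row i of the adjacency matrix of C_n: neighbours i-1 and i+1 mod n.
            Au_i = u[(i - 1) % n] + u[(i + 1) % n]
            assert abs(Au_i - lam * u[i]) < 1e-9
    return True

def check_path(n):
    for k in range(n):
        lam = 2 * math.cos((k + 1) * math.pi / (n + 1))
        u = [math.sin((i + 1) * (k + 1) * math.pi / (n + 1)) for i in range(n)]
        for i in range(n):
            # Row i of the adjacency matrix of P_n: neighbours i-1 and i+1, if present.
            Au_i = (u[i - 1] if i > 0 else 0.0) + (u[i + 1] if i < n - 1 else 0.0)
            assert abs(Au_i - lam * u[i]) < 1e-9
    return True

assert check_cycle(7) and check_path(7)
```

The same row-by-row pattern extends to the Laplacian, normalized Laplacian and signless Laplacian eigenpairs stated below, with the corresponding matrix rows.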
\subsection{Circulant matrices and eigenvalues of cycles and paths} Let $\omega_n=\mathbf{e}^{\frac{2\pi}{n}\mathbf{i}}$ $=\cos\frac{2\pi}{n}+\mathbf{i}\sin\frac{2\pi}{n}$ be a primitive $n$-th root of unity. \begin{definition} A circulant matrix $C=(c_{ij})$ is a matrix of the form $c_{ij}=c_{(j-i)\mod n}$. $$ C=\left( \begin{array}{cccccc} c_0 & c_1 & c_2 & \cdots & \cdots & c_{n-1} \\ c_{n-1} & c_0 & c_1 & c_2 & & c_{n-2} \\ \vdots & c_{n-1} & c_0 & c_1 & & \vdots \\ \vdots & & \ddots & \ddots & \ddots &\vdots \\ \vdots & & & \ddots & \ddots & c_1 \\ c_1 & c_2 & \cdots & \cdots & c_{n-1}& c_0 \end{array} \right). $$ \end{definition} \begin{proposition} Let $C=(c_{ij})$ be a circulant matrix and $c_{ij}=c_{(j-i)\mod n}$. For $k=0,\ldots,n-1$, we have $$ C\mathbf{u}_k = \lambda_k \mathbf{u}_k, $$ where $\displaystyle \lambda_k=\sum_{j=0}^{n-1} c_j (\omega_n^k)^j$, $\mathbf{u}_k=(u_{ki})$ and $u_{ki}=(\omega_n^k)^i= \cos\frac{2k\pi i}{n}+\mathbf{i}\sin\frac{2k\pi i}{n}$. \end{proposition} \begin{proof} \begin{eqnarray*} (C \mathbf{u}_k)_i &= & \sum_{j=0}^{n-1} c_{ij} u_{kj} \\ &=& \sum_{j=0}^{n-1} c_{(j-i)\mod n} (\omega_n^k)^j \\ &=& (\omega_n^k)^i \sum_{j=0}^{n-1} c_{(j-i)\mod n} (\omega_n^k)^{j-i} \\ &=& (\omega_n^k)^i \sum_{j=0}^{n-1} c_j (\omega_n^k)^j \\ &=& \lambda_k u_{ki} \\ &=& (\lambda_k \mathbf{u}_k)_i. \end{eqnarray*}\end{proof} \begin{proposition} \begin{enumerate} \item The eigenvalues of the adjacency matrix of $C_n$ are given by $\displaystyle \lambda_k = 2 \cos (\frac{2k\pi}{n})$, \item The eigenvalues of the difference Laplacian matrix of $C_n$ are given by $\displaystyle \lambda_k = 2-2 \cos (\frac{2k\pi}{n})$, \item The eigenvalues of the normalized Laplacian matrix of $C_n$ are given by $\displaystyle \lambda_k = 1- \cos (\frac{2k\pi}{n})$, and \item The eigenvalues of the signless Laplacian matrix of $C_n$ are given by $\displaystyle \lambda_k = 2+2 \cos (\frac{2k\pi}{n})$,\\ where $k=0,\ldots,n-1$.
\end{enumerate} \end{proposition} \begin{proof} \begin{enumerate} \item Let $A$ be the adjacency matrix of a cycle graph with $n$ vertices. That is $A=(c_{ij})$ with $c_{ij}=c_{(j-i)\mod n}$, $c_0=0$, $c_1=c_{n-1}=1$ and $c_i=0$ for $i=2,\ldots,n-2$. \begin{eqnarray*} \lambda_k &=& (\omega_n^k)^1 + (\omega_n^k)^{n-1} \\ &=& (\omega_n^k)^1 + (\omega_n^k)^{-1} \\ &=& 2 \cos (\frac{2k\pi }{n}). \end{eqnarray*} \item Let $L(C_n)$ be the Laplacian matrix of a cycle graph with $n$ vertices. That is $L(C_n)=(c_{ij})$ with $c_{ij}=c_{(j-i)\mod n}$, $c_0=2$, $c_1=c_{n-1}=-1$ and $c_i=0$ for $i=2,\ldots,n-2$. \begin{eqnarray*} \lambda_k &=& 2 - (\omega_n^k)^1 - (\omega_n^k)^{n-1} \\ &=& 2 - ((\omega_n^k)^1 + (\omega_n^k)^{-1}) \\ &=& 2 - 2 \cos (\frac{2k\pi}{n}). \end{eqnarray*} \item Let ${\cal L}(C_n)$ be the normalized Laplacian matrix of a cycle graph with $n$ vertices. That is ${\cal L}(C_n)=(c_{ij})$ with $c_{ij}=c_{(j-i)\mod n}$, $c_0=1$, $c_1=c_{n-1}=-\frac{1}{2}$ and $c_i=0$ for $i=2,\ldots,n-2$. \begin{eqnarray*} \lambda_k &=& 1 - \frac{1}{2} (\omega_n^k)^1 - \frac{1}{2} (\omega_n^k)^{n-1} \\ &=& 1 - \frac{1}{2}( (\omega_n^k)^1 + (\omega_n^k)^{-1}) \\ &=& 1 - \cos (\frac{2k\pi}{n}). \end{eqnarray*} \item Let $SL(C_n)$ be the signless Laplacian matrix of a cycle graph with $n$ vertices. That is $SL(C_n)=(c_{ij})$ with $c_{ij}=c_{(j-i)\mod n}$, $c_0=2$, $c_1=c_{n-1}=1$ and $c_i=0$ for $i=2,\ldots,n-2$. \begin{eqnarray*} \lambda_k &=& 2 + (\omega_n^k)^1 + (\omega_n^k)^{n-1} \\ &=& 2 + ((\omega_n^k)^1 + (\omega_n^k)^{-1}) \\ &=& 2 + 2 \cos (\frac{2k\pi}{n}). \end{eqnarray*} \end{enumerate} \end{proof} \begin{proposition} \label{propcycle} Let $\lambda_k$ $(0\leq k \leq n-1)$ be the $k^{th}$ eigenvalue of the adjacency matrix of $C_n$. Then $\lambda_k =\lambda_{n-k}$, for $k=1,\ldots,n-1$. \end{proposition} \begin{proof} The eigenvalues of the adjacency matrix of a cycle are given by $\displaystyle \lambda_k= 2\cos(\frac{2k\pi}{n})$, where $k=0,\ldots,n-1$.
\begin{eqnarray*} \lambda_0 &=& 2,\\ \lambda_1 &=& 2\cos (\frac{2\pi}{n}),\\ \lambda_2 &=& 2\cos (\frac{4\pi}{n}),\\ & \vdots &\\ \lambda_{n-2} &=& 2\cos (\frac{4\pi}{n}), \\ \lambda_{n-1} &=& 2\cos (\frac{2\pi}{n}). \end{eqnarray*} This shows that $\lambda_k =\lambda_{n-k}$ for $k=1,\ldots,n-1$. \end{proof} \begin{proposition} \label{prop:path} \begin{enumerate} \item The eigenvalues of the adjacency matrix of a path graph $P_n$ are given by $\displaystyle \lambda_k(A(P_n))= 2\cos(\frac{(k+1) \pi}{n+1}),(k=0, \ldots, n-1)$ and an eigenvector $\mathbf{u_k}$ is given by $\displaystyle (u_{ki})=\sin \frac{(i+1)(k+1) \pi}{n+1},(i=0,\ldots,n-1)$ and $(k=0,\ldots,n-1)$. \item The eigenvalues of the difference Laplacian matrix of $P_n$ are given by $\displaystyle \lambda_k(L(P_n))=2-2\cos(\frac{k \pi}{n})$, $(k=0, \ldots, n-1)$ and its eigenvector $\mathbf{u_k}$ is given by $\displaystyle (u_{ki})=\cos\left(\frac{(2i+1)k \pi}{2n}\right)(i=0,\ldots,n-1)$. \item The eigenvalues of the normalized Laplacian matrix of a path $P_n$ are given by $\displaystyle \lambda_k(\mathcal{L}(P_n))=1-\cos(\frac{k \pi}{n-1})$ $(k=0, \ldots, n-1)$ and its eigenvector $\mathbf{u_k}$ is given by \[ u_{ki}=\left\{\begin{array}{cc} \sqrt{2} \cos\left(\frac{2 i \pi k}{2n-2}\right) & \mbox{$i=1,\ldots,n-2$,}\\ \cos(\frac{2 i \pi k}{2n-2}) & \mbox{$i=0$ and $i=n-1$.} \end{array} \right. \] \item The eigenvalues of the signless Laplacian matrix of $P_n$ are given by $\displaystyle \lambda_k(SL(P_n))=2+2\cos(\frac{(k+1) \pi}{n})$, $(k=0, \ldots, n-1)$ and its eigenvector $\mathbf{u_k}$ is given by $\displaystyle (u_{ki})= \sin(\frac{(2i+1)(k+1) \pi}{2n}),(i=0,\ldots,n-1)$. \end{enumerate} \end{proposition} \begin{proof} \begin{enumerate} \item Let $\mathbf{u}=(u_i), (i=0,\ldots,n-1)$ be an eigenvector for an eigenvalue $\lambda$ of path $P_n$.
Then, we can write $$ P_n \mathbf{u} = \left( \begin{array}{cccccc} 0 & 1 &0 & \cdots & \cdots & 0 \\ 1 & 0 & 1 & 0 & & \vdots \\ 0 & 1 & 0 & 1 & & \vdots \\ \vdots & & \ddots & \ddots & \ddots &\vdots \\ \vdots & & & \ddots & \ddots & 1 \\ 0 & 0 & \cdots & \cdots & 1& 0 \end{array} \right) \left( \begin{array}{c} u_0 \\ u_1\\ u_2\\ \vdots\\\vdots\\u_{n-1} \end{array} \right) = \lambda \mathbf{u}. $$ Then we have the following equations: \begin{eqnarray} \label{eqcy1} u_1 &=& \lambda u_0, \nonumber\\ u_0+u_2&=&\lambda u_1, \nonumber\\ u_1+u_3 &=& \lambda u_2,\\ & \vdots & \nonumber\\ u_{n-2}&=&\lambda u_{n-1}.\nonumber \end{eqnarray} Let $\mathbf{u}'=(u'_i), (i=0,\ldots,2n+1)$ be an eigenvector of $C_{2n+2}$, where $\displaystyle (u'_i)=\sin \left(\frac{2(i+1)(k+1)\pi}{2n+2}\right), (i=0,\ldots,2n+1)$ and $(k=0,\ldots,2n+1)$. The eigenvalues of an adjacency matrix of a cycle $C_{2n+2}$ are $\displaystyle \lambda_k=2\cos\left(\frac{(k+1) \pi}{n+1}\right),(k=0,\ldots,2n+1)$. We note that $u'_n=u'_{2n+1}=0$. Hence we can write the equation $C_{2n+2} \mathbf{u}' =\lambda_k \mathbf{u}'$ as $$ \left( \begin{array}{cccccccc} 0 & 1 & 0 & \cdots & \cdots &\cdots& \cdots & 1 \\ 1 & 0 & 1 & \cdots & \cdots &\cdots&\cdots &0 \\ 0 & \ddots & \ddots& \ddots & & & &\vdots \\ \vdots & & 1 & 0 & 1 & & &\vdots\\ \vdots & & & 1 & 0 & 1 & &\vdots\\ \vdots && & & \ddots & \ddots & \ddots & \vdots \\ \vdots & & & & &1 & 0 & 1 \\ 1 & 0 & \cdots & \cdots & \cdots & \cdots & 1 & 0 \end{array} \right) \left( \begin{array}{c} u'_0 \\ \vdots \\ u'_{n-1} \\ 0 \\ u'_{n+1} \\ \vdots \\u'_{2n} \\ 0 \end{array} \right)$$\\ $= \lambda_k \mathbf{u}'$. Then we have the following equations: \begin{eqnarray} \label{eqcy2} u'_1 &=& \lambda_k u'_0, \nonumber \\ u'_0+u'_2 &=& \lambda_k u'_1, \nonumber \\ & \vdots & \\ u'_{n-2} &=& \lambda_k u'_{n-1} \nonumber. 
\end{eqnarray} Comparing Equation~\ref{eqcy1} with Equation~\ref{eqcy2}, we have $P_n \mathbf{u}=\lambda_k \mathbf{u}$, where $\mathbf{u}=(u'_i) (i=0,\ldots,n-1)$. That is $\lambda_k, (k=0,\ldots,n-1)$ are eigenvalues of $P_n$ and $\mathbf{u}$ is an eigenvector of $\lambda_k$. Since $\lambda_i \neq \lambda_j$ for ($ i \neq j$ and $0 \leq i,j \leq n-1)$, we have $n$ different eigenvalues of $P_n$ and that is the complete set of eigenvalues of $P_n$. \item Let $\mathbf{u}=(u_i), (i=0,\ldots,n-1)$ be an eigenvector for an eigenvalue $\lambda$ of difference Laplacian matrix $L(P_n)$. Then we can write the equation $ L(P_n) \mathbf{u} = \lambda \mathbf{u}$ as \[\left( \begin{array}{cccccc} 1 & -1 & 0 & \cdots & \cdots & 0 \\ -1 & 2 & -1 & 0 & & 0 \\ 0 & -1 & 2 & -1 & & \vdots \\ \vdots & & \ddots & \ddots & \ddots &\vdots \\ \vdots & & & -1 & 2 & -1 \\ 0 & 0 & \cdots & \cdots & -1& 1 \end{array} \right) \left( \begin{array}{c} u_0 \\ u_1 \\ \vdots\\ \vdots\\ u_{n-2}\\ u_{n-1} \end{array} \right) = \lambda \mathbf{u}. \] Then we have the following equations. \begin{eqnarray} \label{eqcy3} u_0 - u_1 &=& \lambda u_0, \nonumber\\ -u_0 + 2u_1 - u_2 &=& \lambda u_1, \nonumber\\ & \vdots & \\ -u_{n-2} + u_{n-1} &=& \lambda u_{n-1}.\nonumber \end{eqnarray} Let $\mathbf{u}'=(u'_i), (i=0,\ldots,2n-1)$ be an eigenvector of difference Laplacian matrix of $C_{2n}$, where $\displaystyle ({u}'_i)=\cos \left(\frac{(2i+1)k\pi}{2n}\right), (i=0,\ldots,2n-1)$ and $(k=0,\ldots,2n-1)$. The eigenvalues of $L(C_{2n})$ are $\displaystyle \lambda_k=2-2\cos(\frac{k \pi}{n}),(k=0,\ldots,2n-1)$. We note that $u'_0=u'_{2n-1}, u'_1=u'_{2n-2},\ldots, u'_{n-1}=u'_n$. 
Then we can write the equation $L(C_{2n}) \mathbf{u}'=\lambda_k \mathbf{u}'$ as $$ \left( \begin{array}{cccccc} 2 & -1 & 0 & \cdots & \cdots & -1 \\ -1 & 2 & -1 & 0 & & 0 \\ 0 & -1 & 2 & -1 & & \vdots \\ \vdots & & \ddots & \ddots & \ddots &\vdots \\ \vdots & & & -1 & 2 & -1 \\ -1 & 0 & \cdots & \cdots & -1& 2 \end{array} \right) \left( \begin{array}{c} u'_0 \\ u'_1 \\ \vdots \\\vdots\\ u'_{2n-2} \\ u'_{2n-1} \end{array} \right) = \lambda_k \mathbf{u}'. $$ \begin{eqnarray} \label{eqcy4} 2u'_0 -u'_1 -u'_{2n-1} &=& u'_0 - u'_1 = \lambda_k u'_0, \nonumber\\ -u'_0 + 2u'_1 - u'_2 &=& \lambda_k u'_1, \nonumber\\ & \vdots & \\ -u'_{n-2}+ 2u'_{n-1} - u'_{n} &=& -u'_{n-2} + u'_{n-1} = \lambda_k u'_{n-1}. \nonumber \end{eqnarray} Comparing Equation~\ref{eqcy3} and Equation~\ref{eqcy4}, we have $L(P_n) \mathbf{u}=\lambda_k \mathbf{u}$, where $\mathbf{u}=(u'_i), (i=0,\ldots,n-1)$. That is $\lambda_k, (k=0,\ldots,n-1)$ are eigenvalues of $L(P_n)$ and $\mathbf{u}$ is an eigenvector of $\lambda_k$. Since $\lambda_i \neq \lambda_j$ for ($ i \neq j$ and $0 \leq i,j \leq n-1$), we have $n$ different eigenvalues of $L(P_n)$ and that is the complete set of eigenvalues of $L(P_n)$. \item Let $\mathbf{u}=(u_i), (i=0,\ldots,n-1)$ be an eigenvector for an eigenvalue $\lambda$ of the normalized Laplacian matrix of path $P_n$. Then we can write the equation $ \mathcal{L}(P_n) \mathbf{u} = \lambda \mathbf{u}$ as $$ \left( \begin{array}{cccccc} 1 & -\frac{1}{\sqrt{2}} & 0 &\cdots & \cdots & 0 \\ -\frac{1}{\sqrt{2}}& 1 & -\frac{1}{2} & 0 & &\vdots \\ 0& \ddots &\ddots& \ddots & & \vdots\\ \vdots& &\ddots& \ddots & \ddots & \vdots\\ \vdots & & & -\frac{1}{2} & 1 & -\frac{1}{\sqrt{2}} \\ \vdots & \cdots&\cdots & \cdots & -\frac{1}{\sqrt{2}} & 1 \end{array} \right) \left( \begin{array}{c} u_0 \\ u_1 \\ \vdots\\ \vdots \\ u_{n-2} \\ u_{n-1} \end{array} \right)$$\\ $= \lambda \mathbf{u}.$ By expanding this we have the following equations.
\begin{eqnarray} \label{eqcy5} u_0 - \frac{1}{\sqrt{2}}u_1 &=& \lambda u_0, \nonumber\\ - \frac{1}{\sqrt{2}} u_0 + u_1 -\frac{1}{2} u_2 &=& \lambda u_1, \nonumber\\ & \vdots &\\ -\frac{1}{2} u_{n-3} + u_{n-2} - \frac{1}{\sqrt{2}} u_{n-1} &=& \lambda u_{n-2}, \nonumber\\ -\frac{1}{\sqrt{2}} u_{n-2} + u_{n-1} &=& \lambda u_{n-1}.\nonumber \end{eqnarray} Let $\mathbf{u}'=(u'_i), (i=0,\ldots,2n-3)$ be an eigenvector of normalized Laplacian matrix of $C_{2n-2}$, where $(u'_i)=\cos (\frac{2 i k\pi}{2n-2}), (i=0,\ldots,2n-3)$ and $\lambda_k=1-\cos(\frac{2k \pi}{2n-2}),(k=0,\ldots,2n-3)$ be its eigenvalue. We note that $u'_1=u'_{2n-3}, u'_2=u'_{2n-4},\ldots,u'_{n-2}=u'_n$. Then we multiply each of these values by $\frac{1}{\sqrt{2}}$ and obtain the vector, $u'_0, \frac{1}{\sqrt{2}} u'_1,\frac{1}{\sqrt{2}} u'_2,\ldots,\frac{1}{\sqrt{2}}u'_{n-2}, u'_{n-1},\frac{1}{\sqrt{2}}u'_{n},\ldots,\frac{1}{\sqrt{2}} u'_{2n-3}$. We can write ${\cal L}(C_{2n-2}) \mathbf{u}' =\lambda_k \mathbf{u}'$ as $$ \left( \begin{array}{cccccc} 1 & -\frac{1}{2} & \cdots & \cdots&\cdots& -\frac{1}{2} \\ -\frac{1}{2}& 1 & -\frac{1}{2}& & &\vdots\\ \vdots & \ddots & \ddots & \ddots & &\vdots\\ \vdots & & \ddots & \ddots & \ddots & \vdots \\ \vdots & & &-\frac{1}{2} & 1 & -\frac{1}{2} \\ -\frac{1}{2} & \cdots & \cdots & \cdots & -\frac{1}{2} & 1 \end{array} \right) \left( \begin{array}{c} u'_0 \\ \frac{1}{\sqrt{2}}u'_1 \\ \vdots\\ \\ u'_{n-1} \\\vdots\\ \frac{1}{\sqrt{2}} u'_{2n-3} \end{array} \right)$$\\ $= \lambda_k \mathbf{u}'$. 
By expanding we have {\small {\begin{eqnarray} \label{eqcy6} u'_0 - \frac{1}{2}\frac{1}{\sqrt{2}}u'_1 - \frac{1}{2}\frac{1}{\sqrt{2}}u'_1 &=& u'_0 - \frac{1}{\sqrt{2}}u'_1 \nonumber\\ &=& \lambda_k u'_0, \nonumber\\ -\frac{1}{2} u'_0 + \frac{1}{\sqrt{2}} u'_1 -\frac{1}{2}\frac{1}{\sqrt{2}}u'_2 &=& \frac{1}{\sqrt{2}}(-\frac{1}{\sqrt{2}}u'_0 + u'_1-\frac{1}{2}u'_2) \nonumber\\ &=& \lambda_k(\frac{1}{\sqrt{2}}u'_1), \\ & \vdots & \nonumber\\ -\frac{1}{2} \frac{1}{\sqrt{2}}u'_{n-3} + \frac{1}{\sqrt{2}} u'_{n-2} -\frac{1}{2}u'_{n-1} & =& \lambda_k(\frac{1}{\sqrt{2}}u'_{n-2}), \nonumber\\ -\frac{1}{2} \frac{1}{\sqrt{2}}u'_{n-2} + u'_{n-1} -\frac{1}{2}\frac{1}{\sqrt{2}}u'_{n-2} &=& -\frac{1}{\sqrt{2}}u'_{n-2} + u'_{n-1} \nonumber\\ &=& \lambda_k u'_{n-1}. \nonumber \end{eqnarray}}} Comparing Equation~\ref{eqcy5} and Equation~\ref{eqcy6}, we have $\mathcal{L}(P_n) \mathbf{u}=\lambda_k \mathbf{u}$, where $\mathbf{u}=(u'_0,\sqrt{2} u'_1,\ldots,\sqrt{2}u'_{n-2},u'_{n-1})$. That is, $\lambda_k, (k=0,\ldots,n-1)$ are eigenvalues of $\mathcal{L}(P_n)$ and $\mathbf{u}$ is an eigenvector for $\lambda_k$. Since $\lambda_i \neq \lambda_j$ for ($ i \neq j$ and $0 \leq i,j \leq n-1$), we have $n$ different eigenvalues of $\mathcal{L}(P_n)$, and this is the complete set of normalized Laplacian eigenvalues of $P_n$. \item Let $\mathbf{u}=(u_i), (i=0,\ldots,n-1)$ be an eigenvector for an eigenvalue $\lambda$ of the signless Laplacian matrix of the path $P_n$. Then we can write the equation $SL(P_n)\mathbf{u}=\lambda \mathbf{u}$ as \[ \left( \begin{array}{cccccc} 1 & 1 &0 & \cdots & \cdots & 0 \\ 1 & 2 & 1 & 0 & & \vdots \\ 0 & 1 & 2 & 1 & & \vdots \\ \vdots & & \ddots & \ddots & \ddots &\vdots \\ \vdots & & & \ddots & \ddots & 1 \\ 0 & 0 & \cdots & \cdots & 1& 1 \end{array} \right) \left( \begin{array}{c} u_0 \\ u_1 \\ \vdots \\ \vdots \\ u_{n-2}\\ u_{n-1} \end{array} \right)\\ = \lambda \mathbf{u}\].
\begin{eqnarray} \label{eqcy7} u_0 + u_1 &=& \lambda u_0, \nonumber\\ u_0 + 2u_1 + u_2 &=& \lambda u_1, \nonumber\\ & \vdots &\\ u_{n-2} + u_{n-1} &=& \lambda u_{n-1}.\nonumber \end{eqnarray} Let $\mathbf{u}'=(u'_i), (i=0,\ldots,2n-1)$ be an eigenvector of the signless Laplacian matrix of $C_{2n}$, where $\displaystyle (u'_i)=\sin \frac{(2i+1)k\pi}{2n}, (i=0,\ldots,2n-1)$ and $\displaystyle \lambda_k=2+2\cos(\frac{2k \pi}{2n}),(k=0,\ldots,2n-1)$ be its eigenvalue. We note that $u'_0=-u'_{2n-1}, u'_1=-u'_{2n-2},\ldots,u'_{n-1}=-u'_n$. Then we can write the equation $ SL(C_{2n}) \mathbf{u}'=\lambda_k \mathbf{u}'$ as $$ \left( \begin{array}{cccccc} 2 & 1 & 0 & \cdots & \cdots & 1 \\ 1 & 2 & 1 & 0 & & 0 \\ 0 & 1 & 2 & 1 & & \vdots \\ \vdots & & \ddots & \ddots & \ddots &\vdots \\ \vdots & & & 1 & 2 & 1 \\ 1 & 0 & \cdots & \cdots & 1& 2 \end{array} \right) \left( \begin{array}{c} u'_0 \\ u'_1 \\ \vdots \\\vdots\\ u'_{2n-2} \\ u'_{2n-1} \end{array} \right)\\ = \lambda_k \mathbf{u}'. $$ \begin{eqnarray} \label{eqcy8} 2u'_0 +u'_1-u'_0 &=& u'_0+ u'_1 = \lambda_k u'_0, \nonumber\\ u'_0 + 2u'_1 + u'_2 &=& \lambda_k u'_1, \nonumber\\ & \vdots & \\ u'_{n-2}+ 2u'_{n-1} - u'_{n-1} &=& u'_{n-2} + u'_{n-1} = \lambda_k u'_{n-1}. \nonumber \end{eqnarray} Comparing Equation~\ref{eqcy7} and Equation~\ref{eqcy8}, we have $SL(P_n) \mathbf{u}=\lambda_k \mathbf{u}$, where $\mathbf{u}=(u'_i), (i=0,\ldots,n-1)$. That is, $\lambda_k, (k=0,\ldots,n-1)$ are eigenvalues of $SL(P_n)$ and $\mathbf{u}$ is an eigenvector for $\lambda_k$. Since $\lambda_i \neq \lambda_j$ for $( i \neq j$ and $0 \leq i,j \leq n-1)$, we have $n$ different eigenvalues of $SL(P_n)$, and this is the complete set of signless Laplacian eigenvalues of $P_n$. \end{enumerate} \end{proof} \subsection{Tridiagonal Matrices} In this section, we derive the eigenvalues of the adjacency matrices of paths and cycles using Chebyshev polynomials. \begin{definition} Let $T_0(x)=1$ and $U_0(x)=0$.
For $n \in {\mathbf{N}}$, $T_n(x)$ and $U_n(x)$ are defined by $$ \left(\begin{array}{c} T_{n+1}(x)\\U_{n+1}(x) \end{array} \right) = \left(\begin{array}{cc} x & x^2-1 \\ 1 & x \end{array} \right) \left(\begin{array}{c} T_{n}(x)\\U_{n}(x) \end{array} \right). $$ We call $T_n(x)$ the Chebyshev polynomial of the first kind and $U_n(x)$ the Chebyshev polynomial of the second kind. \end{definition} \begin{example} Using the above definition, we have \begin{eqnarray*} \left(\begin{array}{c} T_{1}(x)\\U_{1}(x) \end{array} \right) &=& \left(\begin{array}{cc} x & x^2-1 \\ 1 & x \end{array} \right) \left(\begin{array}{c} 1 \\ 0 \end{array} \right) = \left(\begin{array}{c} x \\ 1 \end{array} \right), \\ \left(\begin{array}{c} T_{2}(x)\\U_{2}(x) \end{array} \right) &=& \left(\begin{array}{cc} x & x^2-1 \\ 1 & x \end{array} \right) \left(\begin{array}{c} x \\ 1 \end{array} \right) = \left(\begin{array}{c} 2x^2-1 \\ 2x \end{array} \right), \\ \left(\begin{array}{c} T_{3}(x)\\U_{3}(x) \end{array} \right) &=& \left(\begin{array}{cc} x & x^2-1 \\ 1 & x \end{array} \right) \left(\begin{array}{c} 2x^2-1 \\ 2x \end{array} \right)\\ &~&= \left(\begin{array}{c} 4x^3-3x \\ 4x^2-1 \end{array} \right). \\ \end{eqnarray*} \end{example} \begin{proposition} \label{propcheb} $T_0(x)=1$, $T_1(x)=x$, $U_0(x)=0$, $U_1(x)=1$, \begin{eqnarray*} T_{n+1}(x)&=& 2xT_n(x)-T_{n-1}(x), \mbox{\ and\ } \\ U_{n+1}(x)&=& 2xU_n(x)-U_{n-1}(x). \end{eqnarray*} \end{proposition}\hfill\qed \begin{proposition} \begin{eqnarray*} \cos n\theta &=& T_n(\cos \theta), \\ \sin n\theta &=& U_n(\cos \theta) \sin \theta. \end{eqnarray*} \end{proposition}\hfill\qed We note that the degree of the polynomial $T_n(x)$ is $n$ and the degree of the polynomial $U_n(x)$ is $n-1$ for $n \ge 2$. \begin{proposition} Let $x=\cos\theta$ and $n \ge 2$. Then \begin{eqnarray*} T_{n}(x)=0 &\Leftrightarrow& x=\cos(\frac{(2k+1)\pi}{2 n}) \ \ (k=0,\ldots,n-1).
\\ U_{n}(x)=0 &\Leftrightarrow& x=\cos(\frac{k\pi}{n}) \ \ (k=1,\ldots,n-1). \\ \end{eqnarray*} \end{proposition}\hfill\qed The determinant of a tridiagonal matrix can be expressed by a recurrence relation. We consider tridiagonal matrices with equal diagonal elements and derive a formula for their eigenvalues. \begin{definition} An $n\times n$ tridiagonal matrix $A_n=(a_{ij})$ is a matrix of the form $$ A_n = \left( \begin{array}{ccccc} \alpha_1 & \beta_1 & 0 & \cdots & 0\\ \gamma_1 & \alpha_2 & \beta_2 & \ddots & \vdots \\ 0 & \gamma_2 & \ddots & \ddots & 0 \\ \vdots & \ddots & \ddots & \alpha_{n-1} & \beta_{n-1} \\ 0 & \cdots & 0 & \gamma_{n-1} & \alpha_{n} \end{array} \right). $$ \end{definition} \begin{proposition}\label{prop:tridiagonal} Let $n \ge 2$, $|A_0|=1$, and $|A_1|=\alpha_1$. Then we have $$ |A_n| = \alpha_{n} |A_{n-1}| -\beta_{n-1}\gamma_{n-1}|A_{n-2}|. $$ \end{proposition} \begin{proposition} The eigenvalues of the adjacency matrix of a path graph are given by $\displaystyle \lambda_k(A(P_n))=2 \cos(\frac{k \pi}{n+1})$ $(k=1,\ldots,n)$. \end{proposition} \begin{proof} The matrix $\lambda I - A(P_n)$ is a tridiagonal matrix with $\alpha_i=\lambda$, $\beta_i=\gamma_i=-1$. Let $f_n(\lambda)=|\lambda I - A(P_n)|$. By Proposition~\ref{prop:tridiagonal}, $f_n(\lambda)$ satisfies $f_n(\lambda)=\lambda f_{n-1}(\lambda)-f_{n-2}(\lambda)$, where $f_0(\lambda)=1$ and $f_1(\lambda)=\lambda$. Let $\displaystyle g_n(\lambda)=U_{n+1}(\frac{\lambda}{2})$. Then \begin{eqnarray*} g_0(\lambda) &=& U_{1}(\frac{\lambda}{2}) = 1, \\ g_1(\lambda) &=& U_{2}(\frac{\lambda}{2}) = 2(\frac{\lambda}{2}) = \lambda, \\ g_n(\lambda) &=& U_{n+1}(\frac{\lambda}{2}) \\ &=& 2\frac{\lambda}{2}U_n(\frac{\lambda}{2})-U_{n-1}(\frac{\lambda}{2}) \ (\mbox{by Proposition~\ref{propcheb}})\\ &=& \lambda g_{n-1}(\lambda) -g_{n-2}(\lambda). \end{eqnarray*} Then we have $\displaystyle f_n(\lambda)=g_n(\lambda)=U_{n+1}(\frac{\lambda}{2})$.
That is, \begin{eqnarray*} f_n(\lambda)=0 & \Leftrightarrow& U_{n+1}(\frac{\lambda}{2})=0. \\ &\Leftrightarrow& \frac{\lambda}{2}=\cos(\frac{k\pi}{n+1}) \ (k=1,\ldots,n).\\ &\Leftrightarrow& \lambda = 2\cos(\frac{k\pi}{n+1}) \ (k=1,\ldots,n).\\ \end{eqnarray*} Thus we obtain the result.\end{proof} \begin{proposition} The eigenvalues of the adjacency matrix of a cycle are given by $\displaystyle \lambda_k(A(C_n)) =2\cos(\frac{2k\pi}{n})$ $(k=1,\ldots,n)$. \end{proposition} \begin{proof} The matrix $\lambda I - A(C_n)$ is not a tridiagonal matrix, but we have $\displaystyle |\lambda I - A(C_n)|=2(T_n(\frac{\lambda}{2})-1)$. Since $T_n(x)=1$ $\Leftrightarrow$ $\cos n\theta=1$ $\Leftrightarrow$ $\displaystyle \theta = \frac{2k\pi}{n}$, we obtain $$ |\lambda I - A(C_n)|=\prod_{k=1}^n(\lambda - 2\cos(\frac{2k\pi}{n})). $$ \end{proof} \begin{proposition} Let $P_n=(V_n,E_n)$ be a path graph. If $G=(V_n,E_n \cup \{(v_1,v_1),(v_n,v_n)\})$ then the eigenvalues of the Laplacian matrix of $G$ are given by $\displaystyle \lambda_k = a+2\cos(\frac{k\pi}{n+1}),(k=1,\ldots,n)$, where $a$ denotes the common diagonal entry of the Laplacian matrix of $G$. \end{proposition} \begin{figure}[htb] \begin{center} \includegraphics[scale=0.5]{pathex1.pdf} \caption{Path graph with equal vertex degrees.} \label{fig:fig1} \end{center} \end{figure} \begin{proof} Let $L(P_n)$ be the Laplacian matrix of a path graph with vertex weight $a$ on $n$ vertices. The matrix $\lambda I - L(P_n)$ is a tridiagonal matrix with $\alpha_i=\lambda-a$, $\beta_i=\gamma_i=-1$. Let $f_n(\lambda)=|\lambda I - L(P_n)|$. Then $f_n(\lambda)$ satisfies $f_n(\lambda)=(\lambda-a) f_{n-1}(\lambda)-f_{n-2}(\lambda)$, where $f_0(\lambda)=1$ and $f_1(\lambda)=\lambda-a$. Let $\displaystyle g_n(\lambda)=U_{n+1}(\frac{\lambda-a}{2})$.
Since \begin{eqnarray*} g_0(\lambda) &=& U_{1}(\frac{\lambda-a}{2}) = 1, \\ g_1(\lambda) &=& U_{2}(\frac{\lambda-a}{2}) = 2(\frac{\lambda-a}{2}) =\lambda-a, \mbox{\ and\ } \\ g_n(\lambda) &=& U_{n+1}(\frac{\lambda-a}{2}) \\ &=& 2(\frac{\lambda-a}{2})U_n(\frac{\lambda-a}{2})-U_{n-1}(\frac{\lambda-a}{2}) \\ &=& (\lambda-a) g_{n-1}(\lambda) -g_{n-2}(\lambda), \end{eqnarray*} we have $\displaystyle f_n(\lambda)=g_n(\lambda)=U_{n+1}(\frac{\lambda-a}{2})$. That is, \begin{eqnarray*} f_n(\lambda)=0 &\Leftrightarrow& U_{n+1}(\frac{\lambda-a}{2})=0 \\ &\Leftrightarrow& \frac{\lambda-a}{2}=\cos(\frac{k\pi}{n+1}) \ (k=1,\ldots,n)\\ &\Leftrightarrow& \lambda = a+2\cos(\frac{k\pi}{n+1}) \ (k=1,\ldots, n).\\ \end{eqnarray*}\end{proof} \subsection{Determinant of tridiagonal matrices} Let $n \ge 2$. We define an $n \times n$ matrix $A_n(a,b)$ as follows: $$ A_n(a,b)=\left( \begin{array}{ccccccc} a & b & 0 & \cdots & \cdots & \cdots& 0 \\ b & a & b & 0 & & & \vdots \\ 0 & b & a & b & 0 & & \vdots \\ \vdots & \ddots & \ddots & \ddots & \ddots & \ddots & \vdots \\ \vdots & & 0 & b & a & b & 0 \\ \vdots & & & 0 & b & a & b \\ 0 & \cdots & \cdots & \cdots & 0 & b & a \end{array} \right). $$ \begin{lemma} Let $n \ge 2$ and $a \not= 2b$. $$ |A_n(a,b)|=b^n \cdot \frac{\sin(n+1)\theta}{\sin \theta}, $$ where $\displaystyle \frac{a}{2b}=\cos \theta$ ($0 < \theta < 2\pi$). \end{lemma} \noindent \begin{proof} By Proposition~\ref{prop:tridiagonal}, we have $|A_n(a,1)| = a |A_{n-1}(a,1)| - |A_{n-2}(a,1)|$. Let $|A_0(a,1)|=1$ and $|A_1(a,1)|=a$. Since $U_1(x)=1$, $U_2(x)=2x$ and $U_{n+1}(x)=2xU_n(x)-U_{n-1}(x)$, we have $|A_n(a,1)|$ $\displaystyle =U_{n+1}\left(\frac{a}{2}\right)$ and $\displaystyle |A_n(a,1)|=\frac{\sin(n+1)\theta}{\sin \theta}$, where $\displaystyle \frac{a}{2}$ $=\cos \theta$.
Since $\displaystyle A_n(a,b)=b^n \cdot A_n\left(\frac{a}{b},1\right)$, we have $$ |A_n(a,b)| = b^n \cdot \left\vert A_n\left(\frac{a}{b},1\right)\right\vert \\ = b^n \cdot U_{n+1}\left(\frac{a}{2b}\right) \\ = b^n \frac{\sin(n+1)\alpha}{\sin \alpha} $$ where $\displaystyle \frac{a}{2b}=\cos \alpha$. \end{proof} \begin{example} \begin{description} \item[(i)] $\displaystyle \left\vert A_n\left(\lambda-1,\frac{1}{2}\right) \right\vert = \left(\frac{1}{2}\right)^n \frac{\sin (n+1) \alpha}{\sin \alpha}$, where $\displaystyle \lambda=1+\cos \alpha$. \item[(ii)] $\displaystyle \left\vert A_n\left(\eta-\frac{2}{3},\frac{1}{3}\right) \right\vert = \left(\frac{1}{3}\right)^n \frac{\sin (n+1) \beta}{\sin \beta}$, where $\displaystyle \eta= \frac{2}{3}(1 + \cos \beta)$. \end{description} \end{example} Let $n \ge 3$. We define a $n \times n$ matrix $B_n(a_0,b_0,a,b)$ and $C_n(a,b,a_0,b_0)$ as follows: \begin{eqnarray*} B_n(a_0,b_0,a,b) &=& \left( \begin{array}{c|cccc} a_0 & b_0 & 0 & \cdots & 0 \\ \hline b_0 & & & & \\ 0 & & \multicolumn{2}{c}{A_{n-1}(a,b)} & \\ \vdots & & && \\ 0 & & & & \end{array} \right) \\ C_n(a,b,a_0,b_0) &=& \left( \begin{array}{cccc|c} & & & & 0 \\ & \multicolumn{2}{c}{A_{n-1}(a,b)} & & \vdots \\ & & & & 0 \\ & & & & b_0 \\ \hline 0 & \cdots & 0 & b_0 & a_0 \end{array} \right) \\ \end{eqnarray*} We note that \begin{eqnarray*} \vert B_n(a_0,b_0,a,b) \vert &=& a_0 \vert A_{n-1}(a,b) \vert - b_0^2 \vert A_{n-2}(a,b)\vert, \mbox{\ and } \\ \vert C_n(a,b,a_0,b_0) \vert &=& a_0 \vert A_{n-1}(a,b) \vert - b_0^2 \vert A_{n-2}(a,b)\vert . \\ \end{eqnarray*} We define functions \begin{eqnarray*} g_{n}(\beta)&=&2 \sin ((n+1)\beta) + \sin n\beta -\sin ((n-1)\beta), \mbox{\ and}\\ h_{n}(\gamma)&=&2 \sin ((n+1)\gamma) - \sin n\gamma -\sin ((n-1)\gamma) \end{eqnarray*} before introducing the next Lemma. \begin{lemma} Let $n \ge 3$. 
\begin{enumerate} \item $\displaystyle \left\vert B_n(\lambda-1,\frac{1}{\sqrt{2}},\lambda-1,\frac{1}{2}) \right\vert = \frac{1}{2^{n-1}} \cos n \alpha$, where $\lambda=1+\cos \alpha$. \item $\displaystyle \left\vert C_n(\eta-\frac{2}{3}, \frac{1}{3},\eta-\frac{1}{2}, \frac{1}{\sqrt{6}}) \right\vert = \frac{1}{2\cdot 3^{n} \cdot \sin \beta} g_{n}(\beta)$, where $\displaystyle \eta = \frac{2}{3}(1+\cos \beta)$. \item $\displaystyle \left\vert C_n(\mu-\frac{4}{3}, \frac{1}{3},\mu-\frac{3}{2}, \frac{1}{\sqrt{6}}) \right\vert = \frac{1}{2\cdot 3^{n} \cdot \sin \gamma} h_{n}(\gamma)$, where $\displaystyle \mu = \frac{2}{3}(2+\cos \gamma)$. \end{enumerate} \end{lemma} \begin{proof} \begin{enumerate} \item \begin{eqnarray*} \left\vert B_n(\lambda-1,\frac{1}{\sqrt{2}},\lambda-1,\frac{1}{2}) \right\vert &=& (\lambda-1)\left\vert A_{n-1}(\lambda-1,\frac{1}{2}) \right\vert\\ &~&- \frac{1}{2} \left\vert A_{n-2}(\lambda-1,\frac{1}{2}) \right\vert \end{eqnarray*} \begin{align*} &= (\lambda-1)\left(\frac{1}{2}\right)^{n-1} \frac{\sin n \alpha}{\sin \alpha} - \frac{1}{2}\left(\frac{1}{2}\right)^{n-2}\frac{\sin(n-1)\alpha}{\sin \alpha}\\ &= \left(\frac{1}{2}\right)^{n-1} \cdot \frac{1}{\sin \alpha}( (\lambda-1 ) \sin n \alpha - \sin (n-1) \alpha) \\ &= \left(\frac{1}{2}\right)^{n-1} \cdot \frac{1}{\sin \alpha}( \cos \alpha \sin n \alpha -\sin (n\alpha - \alpha )) \\ &= \left(\frac{1}{2}\right)^{n-1} \cdot \frac{1}{\sin \alpha} (\cos n\alpha \sin \alpha) \\ &=\left(\frac{1}{2}\right)^{n-1} \cos n \alpha \end{align*} \item \begin{align*} \left\vert C_n(\eta-\frac{2}{3}, \frac{1}{3},\eta-\frac{1}{2}, \frac{1}{\sqrt{6}}) \right\vert &= \left(\eta - \frac{1}{2}\right) \left\vert A_{n-1}\left(\eta-\frac{2}{3},\frac{1}{3}\right) \right\vert\\ &-\frac{1}{6} \left\vert A_{n-2}\left(\eta-\frac{2}{3},\frac{1}{3}\right) \right\vert \end{align*} \begin{align*} &=\left(\eta-\frac{1}{2}\right) \left(\frac{1}{3}\right)^{n-1}\frac{\sin n \beta}{\sin \beta}- \frac{1}{6} 
\left(\frac{1}{3}\right)^{n-2} \frac{\sin (n-1) \beta}{\sin\beta}\\ &= \left(\frac{1}{3}\right)^{n-1}\left(\left(\eta-\frac{1}{2}\right)\frac{\sin n \beta}{\sin \beta}-\frac{1}{2}\frac{\sin (n-1) \beta}{\sin \beta}\right) \\ &= \left(\frac{1}{3}\right)^{n-1}\left(\left(\frac{1}{6} + \frac{2}{3}\cos\beta\right) \frac{\sin n \beta}{\sin \beta} -\frac{1}{2}\frac{\sin (n-1) \beta}{\sin \beta} \right) \\ &= \left(\frac{1}{3}\right)^{n-1} \frac{1}{6 \sin \beta} ( \sin n \beta + 4 \cos \beta \sin n \beta - 3 \sin (n\beta -\beta )) \\ &= \left(\frac{1}{3}\right)^{n-1} \frac{1}{6 \sin \beta} ( \sin n \beta + 4 \cos \beta \sin n \beta - \sin(n\beta-\beta)\\ &-2 \sin n\beta \cos \beta + 2 \cos n\beta \sin \beta ) \end{align*} \begin{align*} &= \left(\frac{1}{3}\right)^{n-1} \frac{1}{6 \sin \beta} ( \sin n \beta + 2 \cos n\beta \sin \beta - \sin(n\beta-\beta)\\ & + 2 \sin n\beta \cos \beta ) \\ &=\left(\frac{1}{3}\right)^{n-1}\frac{1}{6 \sin \beta} ( \sin n \beta + 2 \sin (n \beta + \beta ) - \sin (n\beta-\beta ) ) \\ &= \left(\frac{1}{3}\right)^{n-1} \frac{1}{6 \sin \beta} (2 \sin (n+1) \beta + \sin n \beta - \sin (n-1) \beta ) \\ &= \left(\frac{1}{3}\right)^{n} \frac{1}{2 \sin \beta} g_n(\beta) \end{align*} \item \begin{align*} \left\vert C_n(\mu-\frac{4}{3}, \frac{1}{3},\mu-\frac{3}{2}, \frac{1}{\sqrt{6}}) \right\vert &= \left(\mu - \frac{3}{2}\right) \left\vert A_{n-1}\left(\mu-\frac{4}{3},\frac{1}{3}\right) \right\vert\\ &-\frac{1}{6} \left\vert A_{n-2}\left(\mu-\frac{4}{3},\frac{1}{3}\right) \right\vert \end{align*} \begin{align*} &=\left(\mu-\frac{3}{2}\right) \left(\frac{1}{3}\right)^{n-1}\frac{\sin n \gamma}{\sin \gamma}- \frac{1}{6} \left(\frac{1}{3}\right)^{n-2} \frac{\sin (n-1) \gamma}{\sin\gamma}\\ &=\left(\frac{1}{3}\right)^{n-1}\left(\left(\mu-\frac{3}{2}\right)\frac{\sin n \gamma}{\sin \gamma}-\frac{1}{2}\frac{\sin (n-1) \gamma}{\sin \gamma}\right) \\ &= \left(\frac{1}{3}\right)^{n-1}\left(\left(-\frac{1}{6} + \frac{2}{3}\cos\gamma\right) \frac{\sin n 
\gamma}{\sin \gamma} -\frac{1}{2}\frac{\sin (n-1) \gamma}{\sin \gamma} \right)\\ &= \left(\frac{1}{3}\right)^{n-1} \frac{1}{6 \sin \gamma} ( -\sin n \gamma + 4 \cos \gamma \sin n \gamma - 3 \sin (n\gamma -\gamma )) \\ &= \left(\frac{1}{3}\right)^{n-1} \frac{1}{6 \sin \gamma} (2 \sin (n+1) \gamma - \sin n \gamma - \sin (n-1) \gamma ) \\ &= \left(\frac{1}{3}\right)^{n} \frac{1}{2 \sin \gamma} h_n(\gamma) \end{align*} \end{enumerate} \end{proof} \subsection{Eigenvalues of $\mathcal{L}(P_n)$} \begin{example} The adjacency matrix and the normalized Laplacian matrix of a path graph $P_{5}$. \begin{eqnarray*} A(P_5)&=& \left( \begin{array}{ccccc} 0 & 1 & 0 & 0 & 0 \\ 1 & 0 & 1 & 0 & 0 \\ 0 & 1 & 0 & 1 & 0 \\ 0 & 0 & 1 & 0 & 1 \\ 0 & 0 & 0 & 1 & 0 \end{array} \right) \\ \mathcal{L}(P_5) &=& \left( \begin{array}{ccccc} 1 & -\frac{1}{\sqrt{2}} & 0 & 0 & 0 \\ -\frac{1}{\sqrt{2}} & 1 & -\frac{1}{2} & 0 & 0 \\ 0 & -\frac{1}{2} & 1 & -\frac{1}{2} & 0 \\ 0 & 0 & -\frac{1}{2} & 1 & -\frac{1}{\sqrt{2}} \\ 0 & 0 & 0 & -\frac{1}{\sqrt{2}} & 1 \\ \end{array} \right) \end{eqnarray*} \end{example} Let $n \ge 4$. We define $n \times n$ matrix $Q_n(a_0,b_0,a,b)$ as the following. \begin{eqnarray*} Q_n(a_0,b_0,a,b) &=& \left( \begin{array}{c|ccc|c} a_0 & b_0 & 0 & \cdots & 0 \\ \hline b_0 & & & & \vdots \\ 0 & \multicolumn{3}{c|}{A_{n-2}(a,b)} & 0 \\ \vdots & & & & b_0 \\ \hline 0 & \cdots & 0 & b_0 & a_0 \end{array} \right) \end{eqnarray*} We note that $$ \vert Q_n(a_0,b_0,a,b) \vert = a_0 \vert C_{n-1}(a,b,a_0,b_0) \vert - b_0^2 \vert C_{n-2}(a,b,a_0,b_0) \vert. $$ \begin{proposition} Let $n \ge 4$. The characteristic polynomial of ${\mathcal L}(P_n)$ is $$ \vert \lambda I_n - \mathcal{L}(P_n) \vert = -\left(\frac{1}{2}\right)^{n-2} (\sin \alpha \sin ((n-1)\alpha)), $$ where $\lambda=1 + \cos \alpha$. That is $\lambda = 1 - \cos(\frac{k \pi}{n-1})$ \, ($k=0, \ldots, n-1$). 
\end{proposition} \begin{proof} First, we note $\displaystyle \mathcal{L}(P_n) = Q_n\left(1, -\frac{1}{\sqrt{2}}, 1, -\frac{1}{2}\right)$ and\\ $\displaystyle \vert \lambda I_n - \mathcal{L}(P_n) \vert = \left\vert Q_n\left(\lambda-1,\frac{1}{\sqrt{2}},\lambda-1,\frac{1}{2}\right) \right\vert$. {\small{ \begin{align*} & \left\vert Q_n\left(\lambda-1,\frac{1}{\sqrt{2}},\lambda-1,\frac{1}{2}\right) \right\vert \\ &= (\lambda-1) \left\vert C_{n-1}\left(\lambda-1,\frac{1}{2},\lambda-1,\frac{1}{\sqrt{2}}\right) \right\vert \\ &-\frac{1}{2} \left\vert C_{n-2}\left(\lambda-1,\frac{1}{2},\lambda-1,\frac{1}{\sqrt{2}}\right) \right\vert \\ &= (\lambda-1) \left\vert B_{n-1}\left(\lambda-1,\frac{1}{\sqrt{2}},\lambda-1,\frac{1}{2}\right) \right\vert\\ &-\frac{1}{2} \left\vert B_{n-2}\left(\lambda-1,\frac{1}{\sqrt{2}},\lambda-1,\frac{1}{2}\right) \right\vert \\ &= (\lambda-1)\left(\frac{1}{2}\right)^{n-2} \cos((n-1)\alpha) -\frac{1}{2}\left(\frac{1}{2}\right)^{n-3} \cos((n-2)\alpha) \\ &= \left(\frac{1}{2}\right)^{n-2}(\cos \alpha \cdot \cos ((n-1)\alpha) - \cos ((n-2)\alpha))\\ &= \left(\frac{1}{2}\right)^{n-2}(\cos \alpha \cdot \cos((n-1)\alpha) - (\cos((n-1)\alpha) \cdot \cos \alpha +\\ & \sin((n-1)\alpha) \cdot \sin \alpha))\\ &= - \left(\frac{1}{2}\right)^{n-2} \sin((n-1)\alpha) \cdot \sin \alpha. \end{align*}}} The zeros are given by $\displaystyle \alpha=\frac{k\pi}{n-1}$ ($k=0,\ldots, n-1$), so $\displaystyle \lambda=1+\cos\left(\frac{k\pi}{n-1}\right)$ ($k=0,\ldots, n-1$). Since $\displaystyle \cos\left(\frac{(n-1-k)\pi}{n-1}\right) =-\cos\left(\frac{k\pi}{n-1}\right)$, this set of eigenvalues is equal to the set $\displaystyle \lambda=1-\cos\left(\frac{k\pi}{n-1}\right)$ ($k=0,\ldots, n-1$). \end{proof} \subsection{Eigenvalues of weighted paths $\mathcal{L}(P_{n,k})$} \begin{example} The adjacency matrix and the normalized Laplacian matrix of a weighted path graph $P_{4,3}$.
\begin{eqnarray*} A(P_{4,3})&=& \left( \begin{array}{ccccccc} 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 1 & 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 1 & 1 & 0 \\ 0 & 0 & 0 & 0 & 1 & 1 & 1 \\ 0 & 0 & 0 & 0 & 0 & 1 & 1 \end{array} \right) \\ \mathcal{L}(P_{4,3})&=& \left( \begin{array}{ccccccc} 1 & -\frac{1}{\sqrt{2}} & 0 & 0 & 0 & 0 & 0 \\ -\frac{1}{\sqrt{2}} & 1 & -\frac{1}{2} & 0 & 0 & 0 & 0 \\ 0 & -\frac{1}{2} & 1 & -\frac{1}{2} & 0 & 0 & 0 \\ 0 & 0 & -\frac{1}{2} & 1 & -\frac{1}{\sqrt{6}} & 0 & 0 \\ 0 & 0 & 0 & -\frac{1}{\sqrt{6}} & \frac{2}{3} & -\frac{1}{3} & 0 \\ 0 & 0 & 0 & 0 & -\frac{1}{3} & \frac{2}{3} & -\frac{1}{\sqrt{6}} \\ 0 & 0 & 0 & 0 & 0 & -\frac{1}{\sqrt{6}} & \frac{1}{2} \\ \end{array} \right) \end{eqnarray*} \end{example} Let $n \ge 3$ and $k \ge 3$. Then $$ \mathcal{L}(P_{n,k}) = \left( \begin{array}{c|c} B_n(1,-\frac{1}{\sqrt{2}},1,-\frac{1}{2}) & X_{n,k} \\ \hline X_{n,k}^t & C_k(\frac{2}{3},-\frac{1}{3},\frac{1}{2},-\frac{1}{\sqrt{6}}) \end{array} \right), $$ where $X_{n,k}$ is the $n \times k$ matrix defined by \[ X_{n,k}= \left( \begin{array}{cccc} 0 & \cdots & \cdots & 0 \\ \vdots & & & \vdots \\ 0 & 0 & & \vdots \\ -\frac{1}{\sqrt{6}} & 0 & \cdots & 0 \end{array} \right).\] \begin{theorem}\label{prop:pnk} Let $n\ge 3$ and $k \ge 3$. The characteristic polynomial of ${\mathcal L}(P_{n,k})$ is $$ \left\vert \lambda I_{n+k} - \mathcal{L}(P_{n,k}) \right\vert = p_{n,k}(\lambda), $$ where \begin{eqnarray*} p_{n,k}(\lambda) &=& \frac{1}{2^n3^k\sin \beta} (g_k(\beta)\cos(n\alpha) - g_{k-1}(\beta)\cos((n-1)\alpha)), \end{eqnarray*} $\lambda = 1 + \cos \alpha$ and $\displaystyle\lambda = \frac{2}{3}(1 + \cos \beta)$.
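The identity can be spot-checked numerically (a sketch, not part of the paper; NumPy assumed): hard-code $\mathcal{L}(P_{4,3})$ from the example above and compare $\det(\lambda I - \mathcal{L}(P_{4,3}))$ with $p_{4,3}(\lambda)$ at sample values $\lambda \in (0, \frac{4}{3})$, for which both $\alpha$ and $\beta$ are real:

```python
import numpy as np

s2, s6 = np.sqrt(2), np.sqrt(6)
# L(P_{4,3}) exactly as printed in the example (n = 4, k = 3)
L = np.array([
    [1,     -1/s2,  0,     0,     0,     0,     0],
    [-1/s2,  1,    -1/2,   0,     0,     0,     0],
    [0,     -1/2,   1,    -1/2,   0,     0,     0],
    [0,      0,    -1/2,   1,    -1/s6,  0,     0],
    [0,      0,     0,    -1/s6,  2/3,  -1/3,   0],
    [0,      0,     0,     0,    -1/3,   2/3,  -1/s6],
    [0,      0,     0,     0,     0,    -1/s6,  1/2],
])

def g(k, b):
    # g_k(beta) = 2 sin((k+1)beta) + sin(k beta) - sin((k-1)beta)
    return 2*np.sin((k+1)*b) + np.sin(k*b) - np.sin((k-1)*b)

def p(lam, n=4, k=3):
    a = np.arccos(lam - 1)        # lambda = 1 + cos(alpha)
    b = np.arccos(1.5*lam - 1)    # lambda = (2/3)(1 + cos(beta))
    return (g(k, b)*np.cos(n*a) - g(k-1, b)*np.cos((n-1)*a)) / (2**n * 3**k * np.sin(b))

for lam in (0.1, 0.5, 0.9, 1.2):
    assert np.isclose(np.linalg.det(lam*np.eye(7) - L), p(lam))
```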
\end{theorem} \begin{proof} Since $\displaystyle |B_{n}(\lambda-1,\frac{1}{\sqrt{2}},\lambda-1,\frac{1}{2})|= \frac{\cos (n\alpha)}{2^{n-1}}$ and\\ $\displaystyle |C_k(\lambda-\frac{2}{3},\frac{1}{3},\lambda-\frac{1}{2},\frac{1}{\sqrt{6}})|= \frac{g_k(\beta)}{2\cdot 3^k \cdot \sin \beta}$, we have {\small{ $$ \left\vert \lambda I_{n+k} - \mathcal{L}(P_{n,k}) \right\vert =\left\vert \begin{array}{c|c} B_n(\lambda-1,\frac{1}{\sqrt{2}},\lambda-1,\frac{1}{2}) & X_{n,k} \\ \hline X_{n,k}^t & C_k(\lambda-\frac{2}{3},\frac{1}{3},\lambda-\frac{1}{2},\frac{1}{\sqrt{6}}) \end{array} \right\vert $$ \begin{eqnarray*} &=& -\frac{1}{4} |B_{n-2}(\lambda-1,\frac{1}{\sqrt{2}},\lambda-1,\frac{1}{2})| \cdot |C_{k}(\lambda-\frac{2}{3},\frac{1}{3},\lambda-\frac{1}{2},\frac{1}{\sqrt{6}})| \\ && + (\lambda -1) |B_{n-1}(\lambda-1,\frac{1}{\sqrt{2}},\lambda-1,\frac{1}{2})|\cdot|C_{k}(\lambda-\frac{2}{3},\frac{1}{3},\lambda-\frac{1}{2},\frac{1}{\sqrt{6}})| \\ &&-\frac{1}{6} |B_{n-1}(\lambda-1,\frac{1}{\sqrt{2}},\lambda-1,\frac{1}{2})| \cdot |C_{k-1}(\lambda-\frac{2}{3},\frac{1}{3},\lambda-\frac{1}{2},\frac{1}{\sqrt{6}})| \\ &=& -\frac{1}{4}\cdot \frac{\cos ((n-2)\alpha)}{2^{n-3}} \cdot \frac{g_{k}(\beta)}{2 \cdot 3^k \cdot \sin \beta} + \cos \alpha \cdot \frac{\cos ((n-1)\alpha)}{2^{n-2}} \cdot\\ &&\frac{g_{k}(\beta)}{2 \cdot 3^k \cdot \sin \beta} -\frac{1}{6} \frac{\cos ((n-1)\alpha)}{2^{n-2}} \cdot \frac{g_{k-1}(\beta)}{2 \cdot 3^{k-1} \cdot \sin \beta} \\ &=& \frac{1}{2^n\cdot 3^k \cdot \sin \beta} (-\cos((n-2)\alpha)g_k(\beta)+2 \cos \alpha \cos((n-1)\alpha) g_k(\beta) \\ && - \cos((n-1)\alpha)g_{k-1}(\beta))\\ &=& \frac{1}{2^n\cdot 3^k \cdot \sin \beta} (\cos(n\alpha)g_k(\beta) - \cos((n-1)\alpha)g_{k-1}(\beta)) \\ &=& p_{n,k}(\lambda) \end{eqnarray*}}} We note that \begin{eqnarray*} &&-\cos((n-2)\alpha)+2\cos \alpha \cos((n-1)\alpha) \\ &=& -\cos \alpha \cos ((n-1)\alpha) -\sin \alpha\sin((n-1)\alpha)\\ &&+2\cos \alpha \cos((n-1)\alpha)\\ &=&\cos \alpha \cos ((n-1)\alpha) - \sin 
\alpha \sin((n-1)\alpha)\\ &=&\cos (n\alpha). \end{eqnarray*} \end{proof} \begin{lemma}\label{lemma:gk} Let $k \ge 3$. \begin{enumerate} \item If $\displaystyle \frac{(4k-2)\pi}{4k-1} < \alpha < \pi$, $0 < \beta < \pi$ and $\displaystyle 1+\cos \alpha = \frac{2}{3}(1+\cos \beta)$ then $\displaystyle \frac{(2k+1)\pi}{2(k+1)} < \beta$. \item If $\displaystyle \frac{(2k+1)\pi}{2(k+1)} < \beta < \pi$ then $g_k(\beta) \not= 0$ and $\displaystyle \frac{g_{k-1}(\beta)}{g_k(\beta)}<-1$. \item If $n=2k$ ($k \ge 0$), $\displaystyle \frac{(2n-2)\pi}{2n-1} < \alpha < \pi$, $0 < \beta < \pi$ and $\displaystyle 1+\cos \alpha = \frac{2}{3}(1+\cos \beta)$ then\\ $\displaystyle g_{k}(\beta)\cos(n \alpha)-g_{k-1}(\beta)\cos((n-1)\alpha) \not=0. $ \end{enumerate} \end{lemma} \begin{proof} {1.\ } Since $k \ge 3$, we have $\displaystyle \frac{4k-1}{k+1}$ $\displaystyle = 4-\frac{5}{k+1}$ $\displaystyle \ge \frac{11}{4}$ and $\displaystyle \frac{1}{4k-1} \le \frac{4}{11(k+1)}$. Since $\displaystyle \frac{33}{8}\sqrt{\frac{2}{3}}$ $>3.36>\pi$, we have $\displaystyle \sqrt{\frac{3}{2}}\cdot \frac{1}{2} \cdot \frac{4\pi}{11}$ $\displaystyle <\sqrt{\frac{3}{2}}\cdot \frac{1}{2} \cdot \frac{4}{11} \cdot \frac{33}{8} \sqrt{\frac{2}{3}}$ $\displaystyle = \frac{3}{4}$. Since $\displaystyle \frac{3x}{2} \le \sin(\frac{\pi x}{2})$ ($\displaystyle 0 \le x \le \frac{1}{3}$), we have $\displaystyle \frac{3}{4(k+1)} \le \sin \frac{\pi}{4(k+1)}$. Since $\displaystyle 1+\cos \alpha=\frac{2}{3}(1+\cos \beta)$, we have $\displaystyle \cos^2 \frac{\alpha}{2}=\frac{2}{3}\cos^2 \frac{\beta}{2}$ and $\displaystyle \cos \frac{\beta}{2}=\sqrt{\frac{3}{2}}\cos \frac{\alpha}{2}$. 
\begin{eqnarray*} \sin (\frac{\pi-\beta}{2})=\cos \frac{\beta}{2} &=& \sqrt{\frac{3}{2}} \cos \frac{\alpha}{2} \\ &=& \sqrt{\frac{3}{2}} \sin (\frac{\pi-\alpha}{2}) \\ &\le& \sqrt{\frac{3}{2}}(\frac{\pi-\alpha}{2}) \\ &<& \sqrt{\frac{3}{2}} \cdot \frac{1}{2} \cdot (\frac{\pi}{4k-1}) \\ &<& \sqrt{\frac{3}{2}} \cdot \frac{1}{2} \cdot (\frac{4\pi}{11(k+1)}) \\ &<& \frac{3}{4(k+1)} \\ &=& \frac{1}{2}\cdot \frac{3}{2(k+1)} \\ &\le& \sin \frac{\pi}{4(k+1)} \end{eqnarray*} Then $\displaystyle \frac{\pi-\beta}{2} < \frac{\pi}{4(k+1)}$ and $\displaystyle \frac{(2k+1)\pi}{2(k+1)} < \beta$. \noindent {2.\ }Let $\beta'=\pi-\beta$. Then $\displaystyle 0 < \beta' < \frac{\pi}{2(k+1)}$. We note that if $m$ is even then $\sin(m\beta)=-\sin(m\beta')$ and if $m$ is odd then $\sin(m\beta)=\sin(m\beta')$. Since $y=\sin x$ is concave on $\displaystyle 0<x <\frac{\pi}{2}$, $\sin(t x_1+(1-t) x_2)> t\sin x_1+(1-t)\sin x_2$ for $\displaystyle 0<x_1<x_2<\frac{\pi}{2}$ and $0<t<1$. Since $\displaystyle 0<(k-2)\beta'<k\beta'<(k+1)\beta'<\frac{\pi}{2}$ and $\displaystyle \frac{1}{3}(k-2)+(1-\frac{1}{3})(k+1)=k$, we have $\displaystyle \sin(k\beta')>\frac{1}{3}\sin((k-2)\beta') +\frac{2}{3}\sin((k+1)\beta')$. \begin{eqnarray*} g_{k-1}(\beta)+g_{k}(\beta) &=& 2\sin(k\beta)+\sin((k-1)\beta)-\sin((k-2)\beta)\\ &~&+2\sin((k+1)\beta)+\sin(k\beta)-\sin((k-1)\beta)\\ &=& -\sin((k-2)\beta) +3\sin(k\beta) + 2 \sin((k+1)\beta) \end{eqnarray*} If $k$ is even then $g_k(\beta)$ $= 2 \sin((k+1)\beta)+\sin(k\beta)-\sin((k-1)\beta)$ $= 2 \sin((k+1)\beta')-\sin(k\beta')-\sin((k-1)\beta')>0$. \begin{eqnarray*} g_{k-1}(\beta)+g_{k}(\beta) &=& -\sin((k-2)\beta) +3\sin(k\beta) + 2 \sin((k+1)\beta)\\ &=& \sin((k-2)\beta') -3\sin(k\beta') + 2 \sin((k+1)\beta')\\ &=& 3(\frac{1}{3}\sin((k-2)\beta')+\frac{2}{3}\sin((k+1)\beta')\\ &~&-\sin(k\beta')) <0. \end{eqnarray*} Since $g_k(\beta)>0$, $\displaystyle \frac{g_{k-1}(\beta)}{g_k(\beta)}+1<0$.
If $k$ is odd then $g_k(\beta)$ $= 2 \sin((k+1)\beta)+\sin(k\beta)-\sin((k-1)\beta)$ $= -2 \sin((k+1)\beta')+\sin(k\beta')+\sin((k-1)\beta')<0$. \begin{eqnarray*} g_{k-1}(\beta)+g_{k}(\beta) &=& -\sin((k-2)\beta) +3\sin(k\beta) + 2 \sin((k+1)\beta)\\ &=& -\sin((k-2)\beta') +3\sin(k\beta') - 2 \sin((k+1)\beta')\\ &=& 3(\sin(k\beta')-\frac{1}{3}\sin((k-2)\beta')\\ &~&-\frac{2}{3}\sin((k+1)\beta')) >0. \end{eqnarray*} Since $g_k(\beta)<0$, $\displaystyle \frac{g_{k-1}(\beta)}{g_k(\beta)}+1<0$. \noindent {3.\ } Let $\alpha'=\pi-\alpha$. Then $\displaystyle 0 < \alpha' < \frac{\pi}{4k-1}$. We note that if $n$ is even then $\cos(n\alpha)=\cos(n\alpha')$ and if $n$ is odd then $\cos(n\alpha)=-\cos(n\alpha')$. {\small{ \begin{eqnarray*} g_k(\beta)\cos(n\alpha)-g_{k-1}(\beta)\cos((n-1)\alpha) &=&g_k(\beta)\cos((n-1)\alpha)\\ &~&(\frac{\cos(n\alpha)}{\cos((n-1)\alpha)}- \frac{g_{k-1}(\beta)}{g_k(\beta)}). \end{eqnarray*}}} Since $n=2k$, we have $\cos(n\alpha)=\cos(n\alpha')$ and $\cos((n-1)\alpha)=-\cos((n-1)\alpha')<0$. Since $0<(n-1)\alpha'<n\alpha'<\pi$ and $\displaystyle (n-1)\alpha' < \frac{(n-1)\pi}{2n-1}< \frac{\pi}{2}$, we have $\cos((n-1)\alpha')>\cos(n\alpha')$, $-\cos((n-1)\alpha)>\cos(n\alpha)$ and $\displaystyle \frac{\cos(n\alpha)}{\cos((n-1)\alpha)} >-1$. Since $\displaystyle \frac{g_{k-1}(\beta)}{g_k(\beta)}<-1$, we have $\displaystyle \frac{\cos(n\alpha)}{\cos((n-1)\alpha)} - \frac{g_{k-1}(\beta)}{g_k(\beta)} > 0$. If $k$ is even then $g_k(\beta)>0$ and $\cos((n-1)\alpha)<0$, so that $$ g_k(\beta)\cos(n\alpha)-g_{k-1}(\beta)\cos((n-1)\alpha)<0. $$ If $k$ is odd then $g_k(\beta)<0$ and $\cos((n-1)\alpha)<0$, so that $$ g_k(\beta)\cos(n\alpha)-g_{k-1}(\beta)\cos((n-1)\alpha)>0. $$ \end{proof} \begin{proposition} \label{propO3} Let $k \ge 3$ and let $\lambda_2({\mathcal L}(P_{2k,k}))$ be the second smallest eigenvalue of ${\mathcal L}(P_{2k,k})$. Then $$ 1 - \cos \frac{\pi}{4k-1} \le \lambda_2(\mathcal{L}(P_{2k,k})). $$ \end{proposition} \begin{proof} Let $0<\alpha<\pi$.
If $\displaystyle 1+\cos\alpha < 1-\cos\frac{\pi}{4k-1}$ then $\displaystyle \cos\alpha<-\cos\frac{\pi}{4k-1}$ and $\displaystyle \frac{(4k-2)\pi}{4k-1}<\alpha$. By Theorem~\ref{prop:pnk} and Lemma~\ref{lemma:gk}, we have $p_{n,k}(\lambda) \not=0$ if $\displaystyle \frac{(4k-2)\pi}{4k-1}<\alpha<\pi$ and $\lambda=1+\cos\alpha$. This shows that $$ 1 - \cos \frac{\pi}{4k-1} \le \lambda_2(\mathcal{L}(P_{2k,k})). $$ \end{proof} \begin{example} If $k=3$ then $n=6$ and $\displaystyle 1-\cos\frac{\pi}{4k-1}=0.0405\cdots$. If $k=4$ then $n=8$ and $\displaystyle 1-\cos\frac{\pi}{4k-1}=0.02185\cdots$. The blue curve in Figure~\ref{fig:fig22} is $y=p_{6,3}(x)$ and the red curve is $y=p_{8,4}(x)$. \begin{figure} \begin{center} \includegraphics[scale=0.8]{p63p84.pdf} \end{center} \caption{Eigenvalues of $P_{6,3}$ and $P_{8,4}$} \label{fig:fig22} \end{figure} \end{example} \subsection{Eigenvalues of $\mathcal{L}(R_{n,k})$} \begin{example} The adjacency matrix and the normalized Laplacian matrix of the graph $R_{5,5}$.\\ $ A(R_{5,5})= \left( \begin{smallmatrix} 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 1 & 0 &
0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \end{smallmatrix} \right)$\\ $\mathcal{L}(R_{5,5})$ can be written as \\ {\tiny $\left( \begin{smallmatrix} 1 & -\frac{1}{\sqrt{2}} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ -\frac{1}{\sqrt{2}} & 1 & -\frac{1}{2} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & -\frac{1}{2} & 1 & -\frac{1}{2} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & -\frac{1}{2} & 1 & -\frac{1}{2} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & -\frac{1}{2} & 1 & -\frac{1}{\sqrt{6}} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & -\frac{1}{\sqrt{6}} & 1 & -\frac{1}{3} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -\frac{1}{3} & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & -\frac{1}{3} & 1 & -\frac{1}{3} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -\frac{1}{3} & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & -\frac{1}{3} & 1 & -\frac{1}{3} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -\frac{1}{3} & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & -\frac{1}{3} & 1 & -\frac{1}{\sqrt{6}} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -\frac{1}{3} & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -\frac{1}{\sqrt{6}} & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -\frac{1}{2} \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & -\frac{1}{\sqrt{2}} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 
& 0 & 0 & 0 & 0 & 0 & -\frac{1}{\sqrt{2}} & 1 & -\frac{1}{2} & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -\frac{1}{2} & 1 & -\frac{1}{2} & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -\frac{1}{2} & 1 & -\frac{1}{2} & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -\frac{1}{2} & 1 & -\frac{1}{\sqrt{6}} & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & -\frac{1}{3} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -\frac{1}{\sqrt{6}} & 1 & -\frac{1}{3} & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & -\frac{1}{3} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -\frac{1}{3} & 1 & -\frac{1}{3} & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & -\frac{1}{3} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -\frac{1}{3} & 1 & -\frac{1}{3} & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -\frac{1}{3} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -\frac{1}{3} & 1 & -\frac{1}{\sqrt{6}} \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -\frac{1}{2} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -\frac{1}{\sqrt{6}} & 1 \\ \end{smallmatrix} \right)$ } \end{example} \begin{theorem} Let $n \ge 3$, $k \ge 3$. The characteristic polynomial of ${\mathcal L}(R_{n,k})$ is $$ \vert \lambda I_{2(n+k)} - \mathcal{L}(R_{n,k}) \vert = p_{n,k}(\lambda) \cdot q_{n,k}(\lambda), $$ where \begin{eqnarray*} p_{n,k}(\lambda) &=& \frac{1}{2^n3^k\sin \beta} (g_k(\beta)\cos(n\alpha) - g_{k-1}(\beta)\cos((n-1)\alpha)), \\ q_{n,k}(\lambda) &=& \frac{1}{2^n3^k\sin\gamma} (h_{k}(\gamma)\cos(n\alpha) - h_{k-1}(\gamma)\cos((n-1)\alpha)), \end{eqnarray*} and $\displaystyle \lambda = 1 + \cos \alpha= \frac{2}{3}(1 + \cos \beta)=\frac{2}{3}(2 + \cos \gamma)$.
\end{theorem} \begin{proof} Since $\displaystyle |B_{n}(\lambda-1,\frac{1}{\sqrt{2}},\lambda-1,\frac{1}{2})|= \frac{\cos (n\alpha)}{2^{n-1}}$ and \\ $\displaystyle |C_k(\lambda-\frac{4}{3},\frac{1}{3},\lambda-\frac{3}{2},\frac{1}{\sqrt{6}})|= \frac{h_k(\gamma)}{2\cdot 3^k \cdot \sin \gamma}$, we have \noindent {\small{ \begin{eqnarray*} &&\left\vert \begin{array}{c|c} B_n(\lambda-1,\frac{1}{\sqrt{2}},\lambda-1,\frac{1}{2}) & X_{n,k} \\ \hline X_{n,k}^t & C_k(\lambda-\frac{4}{3},\frac{1}{3},\lambda-\frac{3}{2},\frac{1}{\sqrt{6}}) \end{array} \right\vert \\&=& -\frac{1}{4} |B_{n-2}(\lambda-1,\frac{1}{\sqrt{2}},\lambda-1,\frac{1}{2})| \cdot |C_{k}(\lambda-\frac{4}{3},\frac{1}{3},\lambda-\frac{3}{2},\frac{1}{\sqrt{6}})| \\ && + (\lambda -1) |B_{n-1}(\lambda-1,\frac{1}{\sqrt{2}},\lambda-1,\frac{1}{2})| \cdot |C_{k}(\lambda-\frac{4}{3},\frac{1}{3},\lambda-\frac{3}{2},\frac{1}{\sqrt{6}})| \\ && -\frac{1}{6} |B_{n-1}(\lambda-1,\frac{1}{\sqrt{2}},\lambda-1,\frac{1}{2})| \cdot |C_{k-1}(\lambda-\frac{4}{3},\frac{1}{3},\lambda-\frac{3}{2},\frac{1}{\sqrt{6}})| \\ &=& -\frac{1}{4}\cdot \frac{\cos ((n-2)\alpha)}{2^{n-3}} \cdot \frac{h_{k}(\gamma)}{2 \cdot 3^k \cdot \sin \gamma}\\ &~& + \cos \alpha \cdot \frac{\cos ((n-1)\alpha)}{2^{n-2}} \cdot \frac{h_{k}(\gamma)}{2 \cdot 3^k \cdot \sin \gamma}\\ &~& -\frac{1}{6} \frac{\cos ((n-1)\alpha)}{2^{n-2}} \cdot \frac{h_{k-1}(\gamma)}{2 \cdot 3^{k-1} \cdot \sin \gamma}\\ &=& \frac{1}{2^n\cdot 3^k \cdot \sin \gamma} (-\cos((n-2)\alpha)h_k(\gamma)+2 \cos \alpha \cos((n-1)\alpha)\\ &~&h_k(\gamma) - \cos((n-1)\alpha)h_{k-1}(\gamma))\\ &=& \frac{1}{2^n\cdot 3^k \cdot \sin \gamma} (\cos(n\alpha)h_k(\gamma) - \cos((n-1)\alpha)h_{k-1}(\gamma)) \\ &=& q_{n,k}(\lambda). 
\end{eqnarray*}}} We note that \begin{eqnarray*} && -\cos((n-2)\alpha)+2\cos \alpha \cos((n-1)\alpha)\\ &=& -\cos \alpha \cos ((n-1)\alpha) - \sin \alpha \sin((n-1)\alpha)\\ &&+2\cos \alpha \cos((n-1)\alpha)\\ &=& \cos \alpha \cos ((n-1)\alpha) - \sin \alpha \sin((n-1)\alpha) \\ &=& \cos (n\alpha). \end{eqnarray*} So we have {\small \begin{eqnarray*} &&\left\vert \lambda I_{2(n+k)} - \mathcal{L}(R_{n,k}) \right\vert\\ &=& \left\vert \begin{array}{c|c} B_n(\lambda-1,\frac{1}{\sqrt{2}},\lambda-1,\frac{1}{2}) & X_{n,k} \\ \hline X_{n,k}^t & C_k(\lambda-\frac{2}{3},\frac{1}{3},\lambda-\frac{1}{2},\frac{1}{\sqrt{6}}) \end{array} \right\vert \\ && \times \left\vert \begin{array}{c|c} B_n(\lambda-1,\frac{1}{\sqrt{2}},\lambda-1,\frac{1}{2}) & X_{n,k} \\ \hline X_{n,k}^t & C_k(\lambda-\frac{4}{3},\frac{1}{3},\lambda-\frac{3}{2},\frac{1}{\sqrt{6}}) \end{array} \right\vert \\ &=& p_{n,k}(\lambda) \times q_{n,k}(\lambda), \end{eqnarray*} } where $\displaystyle \lambda = 1 + \cos \alpha = \frac{2}{3}(1 + \cos \beta) = \frac{2}{3}(2 + \cos \gamma)$. \end{proof} \begin{definition} Let $n \ge 1$. We define two matrices $T((a_i)_{1\le i \le n}, (b_i)_{1\le i \le n-1}, (c_i)_{2\le i \le n})$ and $F$ as follows: \begin{eqnarray*} && T((a_i)_{1\le i \le n}, (b_i)_{1\le i \le n-1}, (c_i)_{2\le i \le n}) \\ &=& \left( \begin{array}{ccccccc} a_1 & b_1 & 0 & 0 & \text{...} & 0 & 0 \\ c_2 & a_2 & b_2 & 0 & \text{...} & 0 & 0 \\ 0 & c_3 & a_3 & b_3 & \text{...} & 0 & 0 \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\ 0 & \text{...} & 0 & c_{n-2} & a_{n-2} & b_{n-2} & 0 \\ 0 & \text{...} & 0 & 0 & c_{n-1} & a_{n-1} & b_{n-1} \\ 0 & \text{...} & 0 & 0 & 0 & c_n & a_n \end{array} \right), \mbox{\ and} \end{eqnarray*} $$ F=(f_{ij})_{1\le i, j \le n}, \mbox{\ \ where \ \ } f_{ij} = \begin{cases} (-1)^i & (i=j), \\ 0 & (\mbox{otherwise}). \end{cases} $$ \end{definition} \begin{lemma}\label{lemma:ftf} $$ F^{-1}\cdot T((a_i),(b_i),(c_i))\cdot F = T((a_i),(-b_i),(-c_i)).
$$ \end{lemma} \begin{proof} First, we note that $F^{-1}=F$. Each entry $b_i$ or $c_i$ lies in an odd row and an even column, or in an even row and an odd column. Right multiplication by $F$ changes the sign of each odd column, and left multiplication by $F$ changes the sign of each odd row. Hence the sign of each $a_i$ is changed either twice or not at all, while the sign of each $b_i$ or $c_i$ is changed exactly once. So we have $F^{-1}\cdot T((a_i),(b_i),(c_i))\cdot F$ $= T((a_i),(-b_i),(-c_i))$. \end{proof} \begin{proposition}\label{prop:evenpnk} Let $n \ge 1$, $k \ge 2$, $$ P = \left( \begin{array}{c|c} B_n(1,-\frac{1}{\sqrt{2}},1,-\frac{1}{2}) & X_{n,k} \\ \hline X_{n,k}^t & C_k(\frac{2}{3},-\frac{1}{3},\frac{1}{2},-\frac{1}{\sqrt{6}}) \end{array} \right) \mbox{\ \ and \ \ } $$ $$ Q = \left( \begin{array}{c|c} B_n(1,-\frac{1}{\sqrt{2}},1,-\frac{1}{2}) & X_{n,k} \\ \hline X_{n,k}^t & C_k(\frac{4}{3},-\frac{1}{3},\frac{3}{2},-\frac{1}{\sqrt{6}}) \end{array} \right). $$ \begin{enumerate} \item Let $\lambda\in \Re$ and $u\in \Re^{n+k}$. Then $Pu=\lambda u$ if and only if $Q(Fu)=(2-\lambda)(Fu)$. \item An eigenvalue $\lambda\not=0$ of $P$ is simple. \item An eigenvalue $\lambda\not=0$ of $Q$ is simple. \item Let $\lambda \in \Re$, $u=(u_i)_{1\le i\le 2(n+k)} \in \Re^{2(n+k)}$ and $u_i=u_{n+k+i}$ ($1 \le i \le n+k$). Then ${\mathcal L}(R_{n,k})u=\lambda u$ if and only if $P\bar{u}=\lambda \bar{u}$, where $\bar{u}=(u_i)_{1\le i \le n+k}$. \item Let $\lambda \in \Re$, $u=(u_i)_{1\le i\le 2(n+k)} \in \Re^{2(n+k)}$ and $u_i=-u_{n+k+i}$ ($1 \le i \le n+k$). Then ${\mathcal L}(R_{n,k})u=\lambda u$ if and only if $Q\bar{u}=\lambda \bar{u}$, where $\bar{u}=(u_i)_{1\le i \le n+k}$. \end{enumerate} \end{proposition} \begin{proof} \begin{enumerate} \item First, we note that $Q=F^{-1}(2I-P)F$ by Lemma~\ref{lemma:ftf}. So $Q$ and $2I-P$ have the same eigenvalues, and $Fu$ is an eigenvector of $Q$ if and only if $u$ is an eigenvector of $P$. \item If $\lambda$ is not simple, we can choose an eigenvector $u=(u_i)_{1\le i \le n+k}$ with $u_1=0$.
By $Pu = \lambda u$ and $\lambda\not=0$, we have $u=0$, which contradicts that $u$ is an eigenvector of $P$. So every eigenvalue $\lambda\not=0$ of $P$ is simple. \item The proof is similar to that of 2. \item Assume ${\mathcal L}(R_{n,k})u=\lambda u$; then we have $Pu=\lambda u$ by direct computation. The converse also holds. \item The proof is similar to that of 4. \end{enumerate} \end{proof} \begin{example} Let $n=k=2$. Then {\small $$ {\mathcal L}(R_{2,2})= \left( \begin{array}{cccccccc} 1 & -\frac{1}{\sqrt{2}} & 0 & 0 & 0 & 0 & 0 & 0 \\ -\frac{1}{\sqrt{2}} & 1 & -\frac{1}{\sqrt{6}} & 0 & 0 & 0 & 0 & 0 \\ 0 & -\frac{1}{\sqrt{6}} & 1 & -\frac{1}{\sqrt{6}} & 0 & 0 & -\frac{1}{3} & 0 \\ 0 & 0 & -\frac{1}{\sqrt{6}} & 1 & 0 & 0 & 0 & -\frac{1}{2} \\ 0 & 0 & 0 & 0 & 1 & -\frac{1}{\sqrt{2}} & 0 & 0 \\ 0 & 0 & 0 & 0 & -\frac{1}{\sqrt{2}} & 1 & -\frac{1}{\sqrt{6}} & 0 \\ 0 & 0 & -\frac{1}{3} & 0 & 0 & -\frac{1}{\sqrt{6}} & 1 & -\frac{1}{\sqrt{6}} \\ 0 & 0 & 0 & -\frac{1}{2} & 0 & 0 & -\frac{1}{\sqrt{6}} & 1 \end{array} \right), $$ } $$ P= \left( \begin{array}{cccc} 1 & -\frac{1}{\sqrt{2}} & 0 & 0 \\ -\frac{1}{\sqrt{2}} & 1 & -\frac{1}{\sqrt{6}} & 0 \\ 0 & -\frac{1}{\sqrt{6}} & \frac{2}{3} & -\frac{1}{\sqrt{6}} \\ 0 & 0 & -\frac{1}{\sqrt{6}} & \frac{1}{2} \end{array} \right), \mbox{\ \ and \ \ } $$ $$ Q= \left( \begin{array}{cccc} 1 & -\frac{1}{\sqrt{2}} & 0 & 0 \\ -\frac{1}{\sqrt{2}} & 1 & -\frac{1}{\sqrt{6}} & 0 \\ 0 & -\frac{1}{\sqrt{6}} & \frac{4}{3} & -\frac{1}{\sqrt{6}} \\ 0 & 0 & -\frac{1}{\sqrt{6}} & \frac{3}{2} \end{array} \right). $$ Eigenvalues of $R_{2,2}$ are $2$, $1.79533$, $1.62867$, $1$, $1$, $0.371333$, $0.204666$, and $0$. Corresponding eigenvectors are {\tiny $$ \left( \begin{array}{cccccccc} 0.707107 & -1. & 1.22474 & -1. & -0.707107 & 1. & -1.22474 & 1. \\ -6.90985 & 7.772 & -3.17291 & 1. & -6.90985 & 7.772 & -3.17291 & 1. \\ -0.868326 & 0.772002 & 0.315168 & -1. & 0.868326 & -0.772002 & -0.315168 & 1.
\\ 0.707107 & 0 & -1.22474 & 1 & 0.707107 & 0 & -1.22474 & 1\\ -0.707107 & 0 & 1.22474 & 1 & 0.707107 & 0 & -1.22474 & -1\\ -0.868326 & -0.772002 & 0.315168 & 1. & -0.868326 & -0.772002 & 0.315168 & 1. \\ -6.90985 & -7.772 & -3.17291 & -1. & 6.90985 & 7.772 & 3.17291 & 1. \\ 0.707107 & 1. & 1.22474 & 1. & 0.707107 & 1. & 1.22474 & 1. \end{array} \right). $$ } Eigenvalues of $P$ are $1.79533$, $1$, $0.371333$, and $0$. Corresponding eigenvectors are $$ \left( \begin{array}{cccc} -6.90985 & 7.772 & -3.17291 & 1. \\ 0.707107 & 0. & -1.22474 & 1. \\ -0.868326 & -0.772002 & 0.315168 & 1. \\ 0.707107 & 1. & 1.22474 & 1. \end{array} \right). $$ Eigenvalues of $Q$ are $2$, $1.62867$, $1$, and $0.204666$. Corresponding eigenvectors are $$ \left( \begin{array}{cccc} -0.707107 & 1. & -1.22474 & 1. \\ 0.868326 & -0.772002 & -0.315168 & 1. \\ -0.707107 & 0. & 1.22474 & 1. \\ 6.90985 & 7.772 & 3.17291 & 1. \end{array} \right). $$ Each eigenvector of $P$ corresponds to an even eigenvector of ${\mathcal L}(R_{2,2})$, and each eigenvector of $Q$ to an odd eigenvector of ${\mathcal L}(R_{2,2})$. Even though the eigenvalues of $P$ and $Q$ are simple, the eigenvalue $1$ of ${\mathcal L}(R_{2,2})$ is not simple. \end{example} \section{Counterexamples for $Mcut(G) \neq Lcut(G)$} This section presents counterexample graphs on which the spectral method and the minimum normalized cut produce different clusters. \subsection{$Mcut(G)$ and $Lcut(G)$} \begin{definition}[$Lcut(G)$] Let $G=(V,E)$ be a connected graph, $\lambda_2$ the second smallest eigenvalue of $\mathcal{L}(G)$, $U_2=((U_2)_i)$ $(1 \le i \le |V|)$ a second eigenvector of $\mathcal{L}(G)$ with $\lambda_2$. We assume that $\lambda_2$ is simple. Then $Lcut(G)$ is defined as $\displaystyle Lcut(G)= Ncut(V^+(U_2)\cup V^0(U_2), V^-(U_2))$. \end{definition} \begin{example} Figure~\ref{fig:lmcut} shows some examples, where $Mcut(G)=Lcut(G)$ and $Mcut(G) \neq Lcut(G)$.
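As a quick sanity check on these notions (our own illustration, not part of the original example), the following brute-force sketch computes $Mcut(G)$, understood as the minimum of $Ncut(A,V\setminus A)$ over all nonempty proper vertex subsets $A$, for the small unweighted path $P_4$; all function names and the choice of graph are ours.

```python
from itertools import combinations

def ncut(vertices, edges, part_a):
    """Ncut(A, V\\A) = cut(A, V\\A) * (1/vol(A) + 1/vol(V\\A))."""
    part_a = set(part_a)
    part_b = set(vertices) - part_a
    deg = {v: 0 for v in vertices}
    for u, v in edges:          # unweighted graph: each edge adds 1 to both degrees
        deg[u] += 1
        deg[v] += 1
    cut = sum(1 for u, v in edges if (u in part_a) != (v in part_a))
    vol_a = sum(deg[v] for v in part_a)
    vol_b = sum(deg[v] for v in part_b)
    return cut * (1.0 / vol_a + 1.0 / vol_b)

def mcut(vertices, edges):
    """Minimum Ncut over all nonempty proper bipartitions (brute force)."""
    best = float("inf")
    for r in range(1, len(vertices)):
        for part_a in combinations(vertices, r):
            best = min(best, ncut(vertices, edges, part_a))
    return best

# Path P4: 1-2-3-4.  Cutting the middle edge gives cut = 1 and vol = 3 on
# each side, so Mcut(P4) = 1/3 + 1/3 = 2/3.
v4 = [1, 2, 3, 4]
e4 = [(1, 2), (2, 3), (3, 4)]
print(mcut(v4, e4))  # Mcut(P4) = 2/3
```

The exhaustive enumeration is only feasible for very small graphs, which is precisely why spectral relaxations such as $Lcut(G)$ are of interest.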
\begin{figure}[htb] \begin{center} \subfigure[$Lcut(G)=Mcut(G)$]{\label{fig:lmcut-a}\includegraphics[scale=0.4]{lcutmcut.pdf}}\subfigure[$Lcut(R_{4,7})=Mcut(R_{4,7})$]{\label{fig:lmcut-b}\includegraphics[scale=0.4]{R47.pdf}} \subfigure[$Mcut(R_{6,4})$]{\label{fig:lmcut-c}\includegraphics[scale=0.4]{R64.pdf}}\subfigure[$Lcut(R_{6,4})$]{\label{fig:lmcut-d}\includegraphics[scale=0.4]{LR64.pdf}} \caption{$Mcut(G)$ and $Lcut(G)$.} \label{fig:lmcut} \end{center} \end{figure} \end{example} \begin{proposition}[\cite{luxburg:2007}] \label{prop25} Let $G=(V,E,w)$ be a weighted graph, $W$ the weighted adjacency matrix of $G$, $L$ the weighted difference Laplacian $L(G)$ of $G$, and $A$ a subset of $V$. If vector $\displaystyle y=(y_1,\ldots,y_n)^T \in \Re^n$ is defined as \[ y= \left \{ \begin{array}{ll} \sqrt{\frac{vol(V \setminus A)}{vol(A)}} & \mbox{if $v_i \in A$,}\\ -\sqrt{\frac{vol(A)}{vol(V \setminus A)}} & \mbox{if $v_i \in V \setminus A$,} \end{array} \right. \] then \begin{enumerate} \item $y^TLy= vol(V)\cdot Ncut(A, V \setminus A)$, \item $y^TDy=vol(V)$ and \item $(Dy)^T\vec{1}=0$. 
\end{enumerate} \end{proposition} \begin{proof} \begin{enumerate} \item {\small {\begin{eqnarray*} y^TLy&=&y^TDy-y^TWy\\ &=&\sum_{i=1}^n d_iy_i^2 -\sum_{i,j} y_iw_{ij}y_j\\ &=&\frac{1}{2} \left( \sum_{i=1}^n d_iy_i^2 -2 \sum_{i,j} y_iy_jw_{ij} +\sum_{j=1}^n d_jy_j^2 \right )\\ &=& \frac{1}{2} \sum_{i,j=1}^n w_{ij}(y_i-y_j)^2 \end{eqnarray*}}} This can be further reduced to\\ {\small{ \begin{eqnarray*} & =& \frac{1}{2}\sum_{i\in A, j\in (V \setminus A)} w_{ij}\left( \sqrt{\frac{vol(V \setminus A)}{vol(A)}}+ \sqrt{\frac{vol(A)}{vol(V \setminus A)}}\right)^2+\\ &&\frac{1}{2}\sum_{i\in (V \setminus A), j\in A} w_{ij}\left( -\sqrt{\frac{vol(A)}{vol(V \setminus A)}}- \sqrt{\frac{vol(V \setminus A)}{vol(A)}} \right)^2\\ &=& cut(A,V \setminus A) \left(\frac{vol(A)}{vol(V \setminus A)}+\frac{vol(V \setminus A)}{vol(A)}+2 \right)\\ &=&cut(A,V \setminus A)\left( \frac{vol(A)+vol(V \setminus A)}{vol(V \setminus A)}+ \frac{vol(A)+vol(V \setminus A)}{vol(A)}\right)\\ &=& vol(V)\cdot Ncut(A, V \setminus A). \end{eqnarray*}}} \item {\small{\begin{eqnarray*} y^TDy &=& \sum_{i=1}^n d_{i}y_i^2 = \sum_{i\in A} d_iy_i^2+ \sum_{i\in V \setminus A} d_iy_i^2\\ &=& \sum_{i\in A}d_i \left( \frac{vol(V \setminus A)}{vol(A)}\right)+ \sum_{i \in (V \setminus A)}d_i \left(\frac{vol(A)}{vol(V \setminus A)} \right)\\ &=& vol(A)\frac{vol(V \setminus A)}{vol(A)}+vol(V \setminus A)\frac{vol(A)}{vol(V \setminus A)}\\ &=& vol(V). \end{eqnarray*}}} \item {\small{ \begin{eqnarray*} (Dy)^T\vec{1}&=& \sum_{i=1}^n d_i y_i \\ &=& \sum_{i \in A} d_i \sqrt{\frac{vol(V \setminus A)}{vol(A)}}- \sum_{i \in V \setminus A} d_i \sqrt{\frac{vol(A)}{vol(V \setminus A)}}\\ &=& vol(A) \sqrt{\frac{vol(V \setminus A)}{vol(A)}}- vol(V \setminus A)\sqrt{\frac{vol(A)}{vol(V \setminus A)}}\\ &=& 0.
\end{eqnarray*} }} \end{enumerate} \end{proof} \noindent By Proposition~\ref{prop25}, we have {\small \begin{eqnarray*} Ncut(A,V\setminus A)&=& \frac{y^T L y}{y^T D y}\\ &=& \frac{y^T (D-W) y}{y^T D y} \\ &=& \frac{(D^{1/2}y)^T(I-D^{-1/2}WD^{-1/2})(D^{1/2}y)} {(D^{1/2}y)^T (D^{1/2}y)}\\ &=& \frac{z^T \mathcal{L}(G) z}{z^T z}, \end{eqnarray*} } \noindent where $z=D^{1/2}y$ and $\mathcal{L}(G)=I-D^{-1/2}WD^{-1/2}$. The least eigenvalue of $\mathcal{L}(G)$ is $0$ and an eigenvector is $D^{1/2}\vec{1}$. Let $\lambda_2$ be the second eigenvalue of $\mathcal{L}(G)$. It is well known that $$ \lambda_2 = \min\{ \frac{z^T \mathcal{L}(G) z}{z^T z}\ |\ z \in \Re^n, \ z \perp D^{1/2}\vec{1} \}. $$ If $z$ is a second eigenvector, then $\displaystyle \lambda_2=\frac{z^T \mathcal{L}(G) z}{z^T z}$ and $z \perp D^{1/2}\vec{1}$. These results suggest examining the relation between a set $A$ attaining $Mcut(G)=Ncut(A,V\setminus A)$ and the set $V^+(U)$, where $U$ is a second eigenvector of $\mathcal{L}(G)$: the set $V^+(U)$ is a good approximation of $A$. \subsection{The graph $R_{n,k}$} In this section, we review the formulae for $Mcut(R_{n,k})$ and the conditions in Theorem~\ref{propmcutg}, consider some properties of the subsets $A$ of $V(R_{n,k})$ attaining $Lcut(R_{n,k})=Ncut(A,V\setminus A)$, and give a condition on $n$ and $k$ under which $Mcut(R_{n,k})\not=Lcut(R_{n,k})$. Let $R_{n,k}=(V,E)$, $V=\{v_i\ |\ 1\le i \le 2(n+k)\}$, where $$ v_i= \begin{cases} x_i & (1 \le i \le n+k), \\ y_{i-(n+k)} & (n+k+1 \le i \le 2(n+k)). \end{cases} $$ We review the subsets $A_1$, $A_2$ and $A_4(\alpha)$ defined in the proof of Theorem~\ref{propmcutg}. That is, \begin{eqnarray*} A_1 &=& \{v_i \ |\ 1 \le i \le n+k\}, \\ A_2 &=& \{v_i \ |\ 1 \le i \le n \}, \mbox{\ and} \\ A_4(\alpha) &=&\{v_i, v_{i+n+k} \ |\ 1 \le i \le n+ \alpha \} \\ && (1 \le \alpha <k). \\ \end{eqnarray*} For a vector $U=(u_1,u_2,\ldots,u_{2(n+k)}) \in \Re^{2(n+k)}$, we write $\bar{U}=(u_1,u_2,\ldots,u_{n+k}) \in \Re^{n+k}$.
For a vector $\bar{U}=(u_1,u_2,\ldots,u_{(n+k)}) \in \Re^{n+k}$, we write $(\bar{U},\bar{U}) \in \Re^{2(n+k)}$ for the vector $U=(u_1,u_2,\ldots,u_{2(n+k)}) \in \Re^{2(n+k)}$ such that $u_{i+(n+k)}=u_i \ (1\le i \le n+k)$. In this section, we consider the automorphism $\phi$ defined by $\phi(v_i)=v_{i+n+k}$ in order to treat even and odd vectors. \begin{proposition} \label{prop34} If $U=(u_1,u_2,\ldots,u_{2(n+k)})$ is an even eigenvector of $\mathcal{L}(R_{n,k})$ with an eigenvalue $\lambda$, then $\bar{U}$ is an eigenvector of $\mathcal{L}(P_{n,k})$ with the eigenvalue $\lambda$. Conversely, if $\bar{U}=(u_1,u_2,\ldots,u_{n+k})$ is an eigenvector of $\mathcal{L}(P_{n,k})$ with an eigenvalue $\lambda$, then $U=(\bar{U},\bar{U})$ is an eigenvector of $\mathcal{L}(R_{n,k})$ with the eigenvalue $\lambda$. \end{proposition} \begin{proof} If $U$ is an even vector then we can write $U=(\bar{U},\bar{U})$. The matrix $\mathcal{L}(R_{n,k})$ can be written as \[ \mathcal{L}(R_{n,k})=\left( \begin{array}{cc} \mathcal{L}_1 & C \\ C^T & \mathcal{L}_1 \end{array} \right), \] where $\mathcal{L}_1$ is the $(n+k) \times (n+k)$ principal submatrix of $\displaystyle \mathcal{L}(R_{n,k})$ and $C=(c_{ij})$ is the $(n+k) \times (n+k)$ matrix such that \[ c_{ij}= \left\{\begin{array}{cc} -\frac{1}{d_i} &\mbox{ if $n+1 \le i \le n+k $ and $i=j$,}\\ 0 &\mbox{otherwise.} \end{array} \right. \] We notice that $\displaystyle \mathcal{L}_1+C=\mathcal{L}(P_{n,k})$. If $\lambda$ is an eigenvalue of $\mathcal{L}(R_{n,k})$ then $\mathcal{L}(R_{n,k}) U=\lambda U $ can be written as \[\left( \begin{array}{cc} \mathcal{L}_1 & C \\ C^T & \mathcal{L}_1 \end{array} \right) \left( \begin{array}{c} \bar{U} \\ \bar{U} \end{array} \right) = \lambda \left( \begin{array}{c} \bar{U} \\ \bar{U} \end{array}\right). \] This gives \begin{eqnarray*} \mathcal{L}_1\bar{U} + C \bar{U} &=& \lambda \bar{U}. \end{eqnarray*} This can be written as $\displaystyle (\mathcal{L}_1+C )\bar{U} =\mathcal{L}(P_{n,k})\bar{U}= \lambda \bar{U}$.
Therefore $\lambda$ is an eigenvalue of $\mathcal{L}(P_{n,k})$ and $\bar{U}$ is an eigenvector. Thus if $U$ is an even eigenvector of $\displaystyle \mathcal{L}(R_{n,k})$ with eigenvalue $\lambda$, then $\bar{U}$ is an eigenvector of $\displaystyle \mathcal{L}(P_{n,k})$ with the same eigenvalue. The converse also holds.\end{proof} \begin{proposition} \label{lemma16} Let $U=(u_1,u_2,\ldots,u_{(n+k)})$ be an eigenvector of $\mathcal{L}(P_{n,k})$ with the second smallest eigenvalue $\lambda_2$. Then there exists some $\alpha \in \mathbf{Z^+}$ such that $\displaystyle u_i \geq 0 \ (1 \leq i \leq \alpha)$ and $\displaystyle u_i <0 \ (\alpha +1 \leq i \leq n+k)$, or $\displaystyle u_i <0 \ (1 \leq i \leq \alpha)$ and $\displaystyle u_i \ge 0 \ (\alpha +1 \leq i \leq n+k)$. \end{proposition} \begin{proof} If $U=(u_1,u_2,\ldots,u_{n+k})$ is a second eigenvector of $\mathcal{L}(P_{n,k})$, then $\displaystyle U \perp D^{1/2}\vec{1}$. Then by Lemma~\ref{lemma4}, $V^+(U) \neq \emptyset $ and $V^-(U) \neq \emptyset $. Since $\lambda_2$ is simple, the subgraphs induced by $V^+(U)$, $V^-(U)$, $V^+(U)\cup V^0(U)$ and $V^-(U)\cup V^0(U)$ are connected by the nodal domain theorem \cite{Davis:2001}. Thus there exists some $\alpha \in \mathbf{Z^+}$ as given in the proposition. \end{proof} \begin{corollary} If $\bar{U}=(u_1,u_2,\ldots,u_{n+k})$ is a first eigenvector of $\mathcal{L}(P_{n,k})$, then $U=(\bar{U},\bar{U})$ is a first eigenvector of $\mathcal{L}(R_{n,k})$. \end{corollary} \hfill\qed \begin{proposition} \label{propeven} Let $\lambda_2$ be the second smallest eigenvalue of $\mathcal{L}(R_{n,k})$, $\lambda'_2$ the second smallest eigenvalue of $\mathcal{L}(P_{n,k})$, and $U=(u_1,u_2,\ldots,u_{2(n+k)})$ an eigenvector of $\mathcal{L}(R_{n,k})$ with $\lambda_2$. If $U$ is an even vector then $\lambda_2=\lambda'_2$. That is, $\bar{U}=(u_1,u_2,\ldots,u_{n+k})$ is a second eigenvector of $\mathcal{L}(P_{n,k})$ with $\lambda'_2$.
\end{proposition} \begin{proof} Since $U$ is an even vector, $\bar{U}$ is an eigenvector of $\mathcal{L}(P_{n,k})$ with $\lambda_2$. So we have $\lambda'_2 \le \lambda_2$. We note $\displaystyle U \perp D^{1/2}(R_{n,k})\vec{1}$ and $\bar{U} \perp D^{1/2}(P_{n,k})\vec{1}$. Let $U'=(u'_1,u'_2,\ldots,u'_{n+k})$ be a second eigenvector of $\mathcal{L}(P_{n,k})$ with $\lambda'_2$. Since $(U',U')$ is an eigenvector of $\mathcal{L}(R_{n,k})$ with $\lambda'_2$, we have $\lambda_2 \le \lambda'_2$, and hence $\lambda_2=\lambda'_2$. \end{proof} Let $\lambda_2$ be the second eigenvalue of $R_{n,k}$, $U$ an eigenvector of $R_{n,k}$ with $\lambda_2$. Since $\lambda_2$ is simple, the subgraphs induced by $V^-(U)$ and $V^+(U)\cup V^0(U)$ are connected by the nodal domain theorem \cite{Davis:2001}. Since $U$ is an odd vector or an even vector, it is easy to show Lemma~\ref{lemma15} and Lemma~\ref{lemma17}. \begin{lemma} \label{lemma15} Let $U=(u_1,\ldots,u_{2(n+k)})$ be a second eigenvector of $\mathcal{L}(R_{n,k})$. If $U$ is an odd vector then $$ Lcut(R_{n,k})=Ncut(A_1,V\setminus A_1). $$ \end{lemma} \hfill\qed \begin{lemma} \label{lemma17} Let $U=(u_1,\ldots,u_{2(n+k)})$ be a second eigenvector of $\mathcal{L}(R_{n,k})$. If $U$ is an even vector then there exists $\alpha$ $(1 \le \alpha < k)$ such that $$ Lcut(R_{n,k})=Ncut(A_4(\alpha),V\setminus A_4(\alpha)). $$ \end{lemma} \hfill\qed \begin{proposition} Let $G=R_{n,k}$ $(n \geq 1,\ k \ge 2)$. If $n$ and $k$ belong to the following region $R$ then $Mcut(G) < Lcut(G)$: \begin{eqnarray*} R &=& \{(n,k) \ | \ ((k \geq 4)\wedge (2 \mid k) \wedge (3 \mid n) \wedge \\ &~&(1-\frac{1}{\sqrt{2}}-\frac{3 k}{2}+\frac{3 k}{\sqrt{2}} \leq n) )\vee\\ &~& (k=2 \wedge (n\geq 2))\vee (k=3 \wedge (n \geq 3)) \}. \end{eqnarray*} \end{proposition} \begin{proof} Let $G=(V,E)$, and let $K_1$, $K_2$, $K_3$ and $K_4$ be the formulae defined in Theorem~\ref{propmcutg}. If $k \ge 2$ then $K_2<K_3<K_4<K_1$.
So if $(n,k) \in R$ then $Mcut(G)=Ncut(A_2,V\setminus A_2)$, which is denoted by $c_2$ in Theorem~\ref{propmcutg}. Let $U=(u_1,u_2,\ldots,u_{2(n+k)})$ be an eigenvector corresponding to the second smallest eigenvalue of $\mathcal{L}(R_{n,k})$. If $U$ is an odd vector, then $Lcut(G)=Ncut(A_1,V\setminus A_1)$ by Lemma~\ref{lemma15}. So we have $Mcut(G) < Lcut(G)$ by Theorem~\ref{propmcutg}. If $U$ is an even vector, then $Lcut(G)=Ncut(A_4(\alpha),V\setminus A_4(\alpha))$ for some $\alpha$ by Lemma~\ref{lemma17}. So we have $Mcut(G) < Lcut(G)$ by Theorem~\ref{propmcutg}. \end{proof} \begin{theorem} Let $k \ge 3$, and let $\lambda_2({\mathcal L}(P_{2k,k}))$, $\lambda_2({\mathcal L}(P_{4k}))$, and $\lambda_2({\mathcal L}(R_{2k,k}))$ be the second smallest eigenvalues of ${\mathcal L}(P_{2k,k})$, ${\mathcal L}(P_{4k})$, and ${\mathcal L}(R_{2k,k})$, respectively. \begin{enumerate} \item $\lambda_2({\mathcal L}(P_{4k})) < \lambda_2({\mathcal L}(P_{2k,k}))$. \item $\lambda_2({\mathcal L}(R_{2k,k})) < \lambda_2({\mathcal L}(P_{4k}))$. \item A second eigenvector $U$ of ${\mathcal L}(R_{2k,k})$ is an odd vector. \item The second eigenvalue of ${\mathcal L}(R_{2k,k})$ is simple. \item $Mcut(R_{2k,k})<Lcut(R_{2k,k})$. \end{enumerate} \end{theorem} \begin{proof} \begin{enumerate} \item Since $\lambda_2({\mathcal L}(P_{4k}))$ $\displaystyle =1-\cos\left(\frac{\pi}{4k-1}\right)$ by Proposition~\ref{prop:path}, we have $\lambda_2({\mathcal L}(P_{4k}))$ $<\lambda_2({\mathcal L}(P_{2k,k}))$ by Proposition~\ref{propO3}. \item Let $A=(a_{ij})_{1\le i,j\le 4k}$ be the adjacency matrix of $P_{4k}$, $B=(b_{ij})_{1\le i,j\le 6k}$ the adjacency matrix of $R_{2k,k}$, $d=(d_i)_{1\le i \le 4k}$, where $\displaystyle d_i=\sum_{j=1}^{4k}a_{ij}$, $e=(e_i)$, where $\displaystyle e_i=\sum_{j=1}^{6k}b_{ij}$, and $x=(x_i)_{1\le i \le 4k}$ an eigenvector of ${\mathcal L}(P_{4k})$ corresponding to $\lambda_2({\mathcal L}(P_{4k}))$ with $x^T x=1$.
We note that $\displaystyle d^{\frac{1}{2}}\vec{1} \perp x$ and $\lambda_2({\mathcal L}(P_{4k})) = x^T{\mathcal L}(P_{4k})x$ $\displaystyle =\frac{1}{2}\sum_{i=1}^{4k} \sum_{j=1}^{4k} \left(\frac{x_i}{\sqrt{d_i}}-\frac{x_j}{\sqrt{d_j}}\right)^2a_{ij}$. Let $$ y_i = \begin{cases} x_i & (1 \le i \le 2k) \\ 0 & (2k+1 \le i \le 3k, \ 5k+1 \le i \le 6k) \\ x_{7k-i+1} & (3k+1 \le i \le 5k) \end{cases} $$ and consider a vector $y=(y_i)_{1\le i \le 6k}$. Since $x$ is a second eigenvector of ${\mathcal L}(P_{4k})$, we have $\displaystyle \sum_{i=1}^{6k}y_i^2=\sum_{i=1}^{4k}x_i^2=1$, $\displaystyle \sum_{i=1}^{6k}\sqrt{e_i}y_i=\sum_{i=1}^{4k}\sqrt{d_i}x_i=0$, and $x_{2k}=-x_{2k+1}\not=0$. So we have $y^T y=1$, $\displaystyle e^{\frac{1}{2}}\vec{1}\perp y$, and {\small \begin{eqnarray*} \displaystyle \lambda_2(R_{2k,k}) & = & \inf_{e^{\frac{1}{2}}\vec{1}\perp u} \frac{u^T{\mathcal L}(R_{2k,k})u}{u^Tu} \\ & \le & y^T {\mathcal L}(R_{2k,k})y \\ & = & \frac{1}{2}\sum_{i=1}^{6k} \sum_{j=1}^{6k} \left(\frac{y_i}{\sqrt{e_i}}-\frac{y_j}{\sqrt{e_j}}\right)^2b_{ij} \\ & = & \frac{1}{2}\sum_{i=1}^{4k} \sum_{j=1}^{4k} \left(\frac{x_i}{\sqrt{d_i}}-\frac{x_j}{\sqrt{d_j}}\right)^2a_{ij} -\left(\frac{x_{2k}}{\sqrt{d_{2k}}}-\frac{x_{2k+1}}{\sqrt{d_{2k+1}}}\right)^2 \\ && +\left(\frac{y_{2k}}{\sqrt{e_{2k}}}-\frac{y_{2k+1}}{\sqrt{e_{2k+1}}}\right)^2 +\left(\frac{y_{5k}}{\sqrt{e_{5k}}}-\frac{y_{5k+1}}{\sqrt{e_{5k+1}}}\right)^2 \\ &= & \lambda_2\left({\mathcal L}\left(P_{4k}\right)\right) -\left(\frac{2x_{2k}}{\sqrt{d_{2k}}}\right)^2 +\left(\frac{y_{2k}}{\sqrt{e_{2k}}}\right)^2 +\left(\frac{y_{5k}}{\sqrt{e_{5k}}}\right)^2 \\ & = &\lambda_2\left({\mathcal L}\left(P_{4k}\right)\right) -2\left(\frac{x_{2k}}{\sqrt{d_{2k}}}\right)^2 \\ & < & \lambda _2\left({\mathcal L}\left(P_{4k}\right)\right) \end{eqnarray*} } \item If a second eigenvector $U$ of ${\mathcal L}(R_{2k,k})$ corresponding to $\lambda_2({\mathcal L}(R_{2k,k}))$ is an even vector, then $\lambda_2({\mathcal 
L}(R_{2k,k}))=\lambda_2({\mathcal L}(P_{2k,k}))$ by Proposition~\ref{prop:evenpnk}. But this contradicts $\lambda_2({\mathcal L}(R_{2k,k}))<\lambda_2({\mathcal L}(P_{2k,k}))$, which follows from 1. and 2. Hence a second eigenvector $U$ of ${\mathcal L}(R_{2k,k})$ is an odd vector. \item By 3. and parts 3. and 5. of Proposition~\ref{prop:evenpnk}, $\lambda_2({\mathcal L}(R_{2k,k}))$ is simple. \item Since the second eigenvector of $R_{2k,k}$ is an odd vector, $Lcut(R_{2k,k})=Ncut(A_1,V\setminus A_1)$ by Lemma~\ref{lemma15}. Thus we have $Mcut(R_{2k,k}) < Lcut(R_{2k,k})$ by Theorem~\ref{propmcutg}. \end{enumerate} \end{proof} \section{Conclusion} We presented a survey of the known results associated with difference, normalized, and signless Laplacian matrices. We also stated upper and lower bounds for the difference and normalized Laplacian matrices using isoperimetric numbers and the Cheeger constant. We gave a uniform proof for the eigenvalues and eigenvectors of paths and cycles on the basis of all three Laplacian matrices using circulant matrices, and presented an alternate proof for finding the eigenvalues of the adjacency matrix of cycles and paths using Chebyshev polynomials. We also introduced concrete formulae for $Mcut(G)$ for some classes of graphs. Then, we established characteristic polynomials for the normalized Laplacian matrices ${\mathcal L}(P_{n,k})$ and ${\mathcal L}(R_{n,k})$. Finally, we presented counterexample graphs based on $R_{n,k}$, where $Mcut(G)$ and $Lcut(G)$ produce different clusters. In particular, we established criteria for $Mcut(G)$ and $Lcut(G)$ to have different values. \section*{Acknowledgments} We would like to specially thank Professor Hiroyuki Ochiai for his ideas pertaining to computations and comparisons of the second eigenvalues of ${\mathcal L}(P_{n,k})$, which gave us useful hints to finish this study. We are also grateful to Dr. Tetsuji Taniguchi for his helpful comments and encouragement during the course of this study.
This research was partially supported by the Global COE Program ``Educational-and-Research Hub for Mathematics-for-Industry'' at Kyushu University. \bibliographystyle{plain}
\section{Introduction} Steering the collective behaviour of a network of dynamical agents towards a desired common target state is a fundamental problem in network control \cite{CHEN2013,Cornelius2013,LiuS2011}. A paradigmatic example is the problem of achieving consensus, where the goal is for all agent states in the network to asymptotically converge towards each other \cite{OlfatFM2007}. The existing literature on consensus is vast and many extensions and different approaches have been proposed, e.g. \cite{Ren2007c,ReCa:11}. Often, it is assumed that the agent dynamics are either trivial (simple or higher order integrators \cite{Ren2007HO}) or identical across the network \cite{ScardS2009,Li2010}. Also, the presence of disturbances and noise is often neglected. In contrast, many real world applications are modelled as networks of heterogeneous dynamical systems, and are affected by disturbances and noise. Take for instance a network of power generators, such as those considered in \cite{Hill2006,Motter2013power}. Different power sources and transmission lines, multiple load variations, and even communication failures between generators make the network highly heterogeneous. \begin{figure}[tbp] \centering { \subfigure[] {\label{fig:introDMPIa} {\includegraphics[scale=0.24]{Intro_b}}} \subfigure[] {\label{fig:introDMPIb} {\includegraphics[scale=0.19]{Intro_a}}} } \caption{(a): The network to be controlled is represented by black links and the blue and yellow connections represent the additional proportional and integral links that are used for control. (b): Multiplex representation of a network controlled by proportional and integral distributed controllers.} \label{fig:introDMPI} \end{figure} The use of dynamic couplings implemented via the deployment of a distributed integral action has been proposed in the literature as a viable alternative to diffusive coupling when disturbances are present and/or the nodes are heterogeneous.
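As a minimal numerical illustration of this point (our own sketch; the gains, graph, and disturbances are arbitrary choices, not taken from the cited works), consider three single integrators coupled over a path graph with heterogeneous constant disturbances: purely proportional (diffusive) coupling leaves a persistent disagreement, while adding a distributed integral action removes it.

```python
# Three single integrators:  x_i' = d_i + u_i, with constant disturbances d_i.
# Proportional coupling:     u = -L x
# Proportional + integral:   u = -L x - L z,  z' = L x
L = [[1.0, -1.0, 0.0],           # graph Laplacian of the path 1-2-3
     [-1.0, 2.0, -1.0],
     [0.0, -1.0, 1.0]]
d = [1.0, 0.0, -0.5]             # heterogeneous constant disturbances

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(v))]

def spread(v):                   # disagreement: max_i x_i - min_i x_i
    return max(v) - min(v)

def simulate(use_integral, dt=0.005, steps=12000):
    x = [0.0, 0.0, 0.0]          # agent states
    z = [0.0, 0.0, 0.0]          # integral states
    for _ in range(steps):       # forward Euler over 60 time units
        Lx = matvec(L, x)
        Lz = matvec(L, z) if use_integral else [0.0] * 3
        x = [x[i] + dt * (d[i] - Lx[i] - Lz[i]) for i in range(3)]
        z = [z[i] + dt * Lx[i] for i in range(3)]
    return x

print(spread(simulate(False)))   # proportional only: disagreement persists
print(spread(simulate(True)))    # with integral action: disagreement vanishes
```

Here the average of the states still drifts (the mean disturbance is nonzero), but with the integral action the states drift together, i.e. consensus is recovered.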
A distributed integral action is used, for example, in \cite{Freeman2006} to prove convergence in a network of homogeneous first order linear systems affected by constant disturbances, while in \cite{Andreasson2014} a similar integral action is exploited to achieve consensus in homogeneous networks of simple and double integrators affected by constant disturbances. Further extensions of such distributed PI control to the case where the nodes have a more general homogeneous dynamics have been reported in \cite{Seyboth2015}. Applications have been discussed to achieve clock synchronization in networks of discrete-time integrators in \cite{CarliCSZ2008}, and for solving network congestion control problems in \cite{Zhang2014}. Distributed integral actions are also often used to achieve synchronization in power systems; see for instance \cite{Sarlette2012,Simpson-Porco2013,Andreasson2014,Bidram2014} and references therein. More recently, extensions have been proposed to the case where agents do not share the same dynamics. In this case the network is heterogeneous and fewer results are available, particularly when the presence of disturbances, e.g. constant biases, is taken explicitly into account (see Sec. \ref{Liter_review} for a more detailed discussion of the relevant previous work in the literature). In most of the available results, convergence is proved under the assumption that the integral action is deployed across all links in the network. Take for instance the recent work presented in \cite{Andreasson2014} or the distributed PID approach in \cite{Burbano2014ISCAS,BurbanoLombana2015} (and references therein). In this paper we propose instead a multiplex strategy where the proportional and integral layers each possess a different structure (see Fig.~\ref{fig:introDMPIb}).
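The multiplex idea just described can be previewed with a small simulation sketch (ours; the gains and topologies are illustrative only, and the general convergence conditions are derived later in the paper): the proportional layer is a path graph while the integral layer is the complete graph on the same three nodes, yet the disagreement caused by the constant disturbances still vanishes.

```python
# Multiplex PI sketch:  x' = d - LP x - LI z,  z' = LI x,
# with DIFFERENT graphs for the proportional and integral layers.
LP = [[1.0, -1.0, 0.0],
      [-1.0, 2.0, -1.0],
      [0.0, -1.0, 1.0]]          # proportional layer: path 1-2-3
LI = [[2.0, -1.0, -1.0],
      [-1.0, 2.0, -1.0],
      [-1.0, -1.0, 2.0]]         # integral layer: complete graph on 3 nodes
d = [1.0, 0.0, -0.5]             # constant disturbances

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(v))]

x, z = [0.0] * 3, [0.0] * 3
dt = 0.002
for _ in range(30000):           # forward Euler over 60 time units
    px, iz, ix = matvec(LP, x), matvec(LI, z), matvec(LI, x)
    x = [x[i] + dt * (d[i] - px[i] - iz[i]) for i in range(3)]
    z = [z[i] + dt * ix[i] for i in range(3)]
print(max(x) - min(x))           # disagreement vanishes despite LP != LI
```

This particular pair of Laplacians happens to be easy to analyse (they share eigenvectors), which is why consensus is reached here; the point of the paper is to give conditions covering the general non-commuting case.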
The resulting closed-loop network is described by a multigraph (hypergraph) \cite{Bondy2008}, representing a class of networks recently defined as {\em multiplex networks}, which are the focus of much research attention in Physics and Applied Science (see the recent paper in Science \cite{Mucha2010}). Namely, according to the {\em multiplex PI strategy} described in this paper, two control layers are used to steer the dynamics of the open-loop network, offering a new degree of freedom during the design: the possibility of selecting the structure of the integral layer independently from that of the diffusive proportional one. We show that the key analytical hurdle represented by the presence of multiple Laplacians, one describing each of the layers in the multigraph, can be overcome so as to obtain a rigorous proof of convergence. The conditions we find are global and can be used to tune both the gains {\em and} the structure of the two control layers to achieve consensus, despite the presence of heterogeneities and constant disturbances. All the theoretical results are illustrated via representative examples that are also used to investigate the beneficial effects (in terms of stability and performance) of varying the structure of the integral layer (while keeping that of the proportional layer unchanged). \subsection{Relevant previous work} \label{Liter_review} The idea of using a distributed integral action to achieve consensus in a multi-agent system has been discussed in a number of previous papers in the literature, often, but not always, under the assumption of homogeneous node dynamics. Here we give a brief overview of some key previous work to better expound our results in the context of the existing literature. We wish to emphasize that the use of distributed integral actions is also common practice to achieve synchronization and frequency control in power grids; see for example \cite{Sarlette2012,Simpson-Porco2013} and references therein.
In \cite{Freeman2006}, a distributed PI protocol is presented to achieve consensus in a multi-agent system. The proof of convergence is obtained for a network of scalar homogeneous agents with possibly different gains for the P and I actions, but such that either both are present on an edge or neither is. Basically, while the strengths of the P and I couplings can be modulated independently, the structure of the P and I interconnections is assumed to be the same. Note that this assumption is crucial for the proof of convergence presented therein, as is the hypothesis that all nodes share the same dynamics. This is also the case for the work presented in \cite{Andreasson2014}, where a distributed integral action is deployed to achieve consensus in a network of scalar, homogeneous agents in the presence of constant disturbances. The idea of using integrators on the Laplacian dynamics for arbitrary homogeneous linear systems is also discussed in \cite{ScardS2009}. A more general approach is presented in the seminal work \cite{Wieland2009,Wieland2011}, which considers the problem of achieving output consensus in a network of heterogeneous linear systems subject to arbitrary (non-constant) disturbances. Therein, the internal model principle is used to prove that exact (non-trivial) output synchronization is only possible if the intersection of the agents' spectra is non-empty. In practice, agents can only synchronize to ``a trajectory generated by a dynamical system contained in the dynamics of each agent or exosystem'' (as explained in \cite{Seyboth2012}). As pointed out in \cite{Seyboth2012}, this condition is not always satisfied, as for example in a network of heterogeneous harmonic oscillators. Also, the structure of the proportional and integral layers is assumed to be the same. The use of the internal model principle is also adopted in \cite{Lunze2012} to study synchronization of heterogeneous agents.
The internal model principle is further exploited in \cite{Bai2010} to extend the previous work in \cite{Freeman2006} and prove convergence in the presence of time-varying inputs, including polynomial inputs of known order and sinusoidal inputs with known frequencies. It is also used in \cite{Bai2014}, together with incremental passivity, to prove convergence in a network of nonlinear systems under a certain class of disturbances. In particular, it is shown that consensus is achieved if the Laplacian describing the integral layer is symmetric. Also, the integral action is based on the output of an internal model system and the disturbance is assumed to be generated by a known dynamical model. Finally, synchronization of heterogeneous nonlinear systems is studied in a number of papers in the literature, as for example in \cite{ZhongOct,DeLellis2015}, and extensions of the internal model principle to this class of systems have recently been presented in \cite{Burger2014,Wieland2013}. When compared to the existing literature, in this paper we present a different approach based on the deployment of a distributed PI action in networks of heterogeneous linear agents in the presence of constant disturbances (or affine terms) and, unlike previous work, when the control layers have different structures. We wish to emphasize that arguments based on the internal model principle (such as those reported in \cite{Wieland2009,Wieland2011}) to prove existence of a consensus equilibrium cannot be applied in our case (see Remark \ref{Remark:9} in Sec.~\ref{Sec:conv:equ} for further details). \section{Preliminaries} \label{sec:PR} We denote by $\mathbf{I}_N$ the identity matrix of dimension $N\times N$; by $\mathbb{0}_{M\times N}$ a matrix of zeros of dimension $M\times N$; and by $\mathbb{1}_N$ an $N\times 1$ vector with all entries equal to one. The Frobenius norm is denoted by $\left\| \cdot \right\|$ while the spectral norm by $\specnorm{\cdot}$.
A diagonal matrix, say $\mathbf{D}$, with diagonal elements $d_1, \ldots,d_N$ is indicated by $\mathbf{D}=\mbox{diag}\{d_1,\ldots,d_N\}$. The determinant of a matrix is denoted by $\det(\cdot)$, $\lambda _k(\mathbf{A})$ denotes the $k$-th eigenvalue of a square matrix $\mathbf{A}$, and $\mathbf{A}'=\mathbf{A}+\mathbf{A}^T$ denotes the symmetric part of a matrix. \begin{prop} Given two vectors $\mathbf{v}_1\in \mathbb{R}^{n\times 1}$, $\mathbf{v}_2\in \mathbb{R}^{m\times 1}$ and two matrices $\mathbf{Q}_1\in \mathbb{R}^{m\times n}$, $\mathbf{Q}_2\in \mathbb{R}^{m\times m}$, some algebraic manipulations yield \begin{equation} \label{pos:eq:1} 2\mathbf{v}_1^T\mathbf{Q}_1^T\mathbf{Q}_2\mathbf{v}_2 \le \varepsilon\mathbf{v}_1^T\mathbf{Q}_1^T\mathbf{Q}_1\mathbf{v}_1+\frac{1}{\varepsilon}\mathbf{v}_2^T\mathbf{Q}_2^T\mathbf{Q}_2\mathbf{v}_2,\quad\forall \varepsilon>0 \end{equation} \end{prop} \begin{pf*}{Proof.} Consider the $m \times 1$ vector $ {a{\mathbf{Q}_1}{\mathbf{v}_1} \pm b{\mathbf{Q}_2}{\mathbf{v}_2}} $ with $a,b \in \mathbb{R}^{+}$. From its quadratic form one has $\left( {a{\mathbf{Q}_1}{\mathbf{v}_1} \pm b{\mathbf{Q}_2}{\mathbf{v}_2}} \right)^T\left( {a{\mathbf{Q}_1}{\mathbf{v}_1} \pm b{\mathbf{Q}_2}{\mathbf{v}_2}} \right)\ge 0$ and \[{a^2}\mathbf{v}_1^T\mathbf{Q}_1^T{\mathbf{Q}_1}{\mathbf{v}_1} \pm 2ab\mathbf{v}_1^T\mathbf{Q}_1^T{\mathbf{Q}_2}{\mathbf{v}_2} + {b^2}\mathbf{v} _2^T\mathbf{Q}_2^T{\mathbf{Q}_2}{\mathbf{v}_2} \ge 0.\] Then, dividing both sides of the inequality by $ab$, we have that $2\mathbf{v}_1^T\mathbf{Q}_1^T{\mathbf{Q}_2}{\mathbf{v}_2} \le ({a}/{b})\mathbf{v}_1^T\mathbf{Q}_1^T{\mathbf{Q}_1}{\mathbf{v}_1} + ({b}/{a})\mathbf{v}_2^T\mathbf{Q}_2^T{\mathbf{Q}_2}{\mathbf{v}_2}$. Finally, setting $\varepsilon=a/b$ we obtain \eqref{pos:eq:1}.
\end{pf*} \begin{lem} \label{lemm:quadratic_form} Given a symmetric matrix $\mathbf{A}\in \mathbb{R}^{n\times n}$, denoting by ${\lambda _{\min }}(\mathbf{A})$ and ${\lambda _{\max }}(\mathbf{A})$ the smallest and largest eigenvalues of $\mathbf{A}$, the following statements are true \cite{RAH_CRJ_1987} \begin{eqnarray} \label{symm:bound} {\lambda _{\min }}(\mathbf{A}) {{ {{\mathbf{v}}} }^T} {{\mathbf{v}}} \le {{ {{\mathbf{v}}} }^T}\mathbf{A} {{\mathbf{v}}} \le {\lambda _{\max }}(\mathbf{A}){{ {{\mathbf{v}}} }^T} {{\mathbf{v}}},\quad\forall \mathbf{v} \in \mathbb{R}^{n\times 1}\\ \label{specnorm:bou} \specnorm{ \mathbf{A} } = \mathop {\max }\limits_k \left\{ {\left| {{\lambda _k}(\mathbf{A})} \right|} \right\}\leq{\left\| \mathbf{A} \right\|}\\ \label{eig_bloksim} {\lambda _{\min }}(\mathbf{A}) \le {\lambda _{\min }}({\mathbf{A}_o}) \le {\lambda _{\max }}({\mathbf{A}_o}) \le {\lambda _{\max }}(\mathbf{A}) \end{eqnarray} where ${\mathbf{A}_o}\in \mathbb{R}^{k\times k}$ is a principal sub-matrix of $\mathbf{A}$ {(see Corollary 8.4.6 in \cite{Bernstein2009})}.
\end{lem} \begin{lem}\cite{Bernstein2009} \label{kron:rel} Given the matrices $\mathbf{A}$, $\mathbf{B}$, $\mathbf{C}$ and $\mathbf{D}$ of appropriate dimensions, the Kronecker product satisfies the following properties \begin{eqnarray} \label{kron:rel:a} (\mathbf{A} \otimes \mathbf{B}) + (\mathbf{A} \otimes \mathbf{C}) = \mathbf{A} \otimes (\mathbf{B} + \mathbf{C}) \\ \label{kron:rel:b} (\mathbf{A} \otimes \mathbf{B})(\mathbf{C} \otimes \mathbf{D}) = \mathbf{AC} \otimes \mathbf{BD} \\ \label{kron:rel:c} \specnorm{(\mathbf{A} \otimes \mathbf{B})} = \specnorm{\mathbf{A}}\specnorm{\mathbf{B}} \end{eqnarray} \end{lem} \subsection{Algebraic graph theory} An {\em undirected graph} $\mathscr{G}$ is a pair $\mathscr{G} = \left( {\mathcal{N},\mathcal{E}} \right)$ where $\mathcal{N} = \left\{ {{1},{2}, \cdots ,{N}} \right\}$ is the finite set of $N$ node indices and $\mathcal{E} \subset \mathcal{N} \times \mathcal{N}$ is the set containing the $P$ edges between the nodes. We assume each edge has an associated weight denoted by $w_{ij} \in \mathbb{R}^{+}$ for all $i,j \in \mathcal{N}$. The weighted \textit{adjacency matrix} $\boldsymbol{\mathcal{A}}(\mathscr{G})\in {\mathbb{R}^{N \times N}}$, with entries $\mathcal{A}_{ij}$, is defined as $\mathcal{A}_{ij}(\mathscr{G})=w_{ij}$ if there is an edge from node $i$ to node $j$ and zero otherwise. Similarly, the \textit{Laplacian matrix} $\boldsymbol{\mathcal{L}}(\mathscr{G})\in {\mathbb{R}^{N \times N}}$ is defined as the matrix whose elements are ${\mathcal{L}_{ij}}(\mathscr{G}) = \sum\nolimits_{k = 1,k \ne i}^N {{w_{ik}}} $ if $i=j$ and $-{{w_{ij}}}$ otherwise.
Thus, the Laplacian matrix can be recast in compact form as $\boldsymbol{\mathcal{L}}(\mathscr{G}) = \mbox{diag}\{\boldsymbol{\mathcal{A}}(\mathscr{G})\mathbb{1}_N\} - \boldsymbol{\mathcal{A}}(\mathscr{G})$, where the matrix $\mbox{diag}\{\boldsymbol{\mathcal{A}}(\mathscr{G})\mathbb{1}_N\}$ is often called the degree matrix of the graph $\mathscr{G}$. Given two graphs sharing the same set of nodes, $\mathscr{G}_1=(\mathcal{N},\mathcal{E}_{1})$ and $\mathscr{G}_2=(\mathcal{N},\mathcal{E}_{2})$, we define the {\em projection graph} as the graph $\mbox{proj}(\mathscr{G}_1,\mathscr{G}_2):=(\mathcal{N},\mathcal{E}_{p})$ with associated adjacency matrix $\boldsymbol{\mathcal{A}}_p:=\boldsymbol{\mathcal{A}}(\mathscr{G}_1) + \boldsymbol{\mathcal{A}}(\mathscr{G}_2)$. \begin{defn}\cite{Lu2006214} We say that an $N\times N$ matrix $\boldsymbol{\mathcal{S}} = [{\mathcal{S}_{ij}}], \forall i,j\in \mathcal{N}$ belongs to the set $\mathbb{W}$ if it verifies the following properties: \begin{enumerate} \item ${{\mathcal{S}}_{ij}} \le 0,\,i \ne j,$ and ${{\mathcal{S}}_{ii}} = - \sum\limits_{j = 1,j \ne i}^N {{{\mathcal{S}}_{ij}}}$, \item its eigenvalues in ascending order are such that $\lambda _1(\boldsymbol{\mathcal{S}})=0$ while all the others, $\lambda _k(\boldsymbol{\mathcal{S}})$, $k\in \{2,\cdots,N\}$, are real and positive. \end{enumerate} \label{def_laplacian} \end{defn} The set $\mathbb{W}$ defined above is in fact a special class of $M$-matrices as defined in \cite{Poole1974}. Note that the Laplacian matrix $\boldsymbol{\mathcal{L}}$ belongs to the set $\mathbb{W}$ if its associated graph $\mathscr{G}$ is connected \cite{OlfatFM2007}. Next, we present a decomposition of the Laplacian matrix that will be crucial for the derivations reported in the rest of the paper.
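Before introducing that decomposition, the constructions above can be checked numerically. The sketch below (illustrative only, not part of the development; plain Python with NumPy and an arbitrary example graph) builds the Laplacian as $\boldsymbol{\mathcal{L}} = \mbox{diag}\{\boldsymbol{\mathcal{A}}\mathbb{1}_N\} - \boldsymbol{\mathcal{A}}$ and verifies the defining properties of the set $\mathbb{W}$ for a connected graph: zero row sums, $\lambda_1 = 0$, and the remaining eigenvalues real and positive.

```python
import numpy as np

# Weighted adjacency matrix of a small connected undirected graph
# (the weights are arbitrary illustrative choices).
A = np.array([[0.0, 1.0, 0.5],
              [1.0, 0.0, 2.0],
              [0.5, 2.0, 0.0]])

# Laplacian: degree matrix minus adjacency, L = diag(A @ 1) - A
L = np.diag(A.sum(axis=1)) - A

# The Laplacian of an undirected graph is symmetric, so its spectrum is real
eigvals = np.sort(np.linalg.eigvalsh(L))
```

For a connected graph, `eigvals[0]` is (numerically) zero with all remaining eigenvalues positive, in agreement with Definition \ref{def_laplacian}.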
As suggested in \cite{BurbanoLombana2015}, such a decomposition is particularly useful to prove convergence in the presence of heterogeneous nodes. \begin{lem}\cite{BurbanoLombana2015} \label{lemm:simmetric_L} Let $\boldsymbol{\mathcal{L}}\in\mathbb{W}$ be the Laplacian matrix of an undirected and connected graph $\mathscr{G}$; then $\boldsymbol{\mathcal{L}}$ can be written in block form as $\boldsymbol{\mathcal{L}}=\mathbf{R} \mathbf{\Lambda} \mathbf{R}^{-1}$, where $\mathbf{R}$ is an invertible matrix (orthogonal up to the scaling in \eqref{transpose_R}) defined, together with its inverse, as \begin{equation} \label{block:decompo} \begin{array}{l} \mathbf{R} = \left[ {\begin{array}{*{20}{c}} 1 & {N{\mathbf{R}_{21}^T}} \\ {{\mathbb{1}_{N - 1}}} & {N{\mathbf{R}_{22}^T}} \\ \end{array}} \right],\,{\mathbf{R}^{-1}} = \left[ {\begin{array}{*{20}{c}} {{r_{11}}} & {{\mathbf{R}_{12}}} \\ {{\mathbf{R}_{21}}} & {{\mathbf{R}_{22}}} \\ \end{array}} \right] \end{array} \end{equation} with \begin{eqnarray} r_{11} = \frac{1}{N}, \qquad \mathbf{R}_{12}= \frac{1}{N}\mathbb{1}_{N - 1}^T,\label{eq:blockdef} \end{eqnarray} $\mathbf{R}_{21}\in {\mathbb{R}^{{(N - 1) \times 1} }}$ and $\mathbf{R}_{22}\in {\mathbb{R}^{{(N - 1) \times (N - 1)}}}$ being blocks of appropriate dimensions, and $\mathbf{\Lambda} = \mbox{diag}\left\{ {0,{\lambda _2(\boldsymbol{\mathcal{L}})}, \cdots ,{\lambda _N(\boldsymbol{\mathcal{L}})}} \right\}$ with $0 = {\lambda _1(\boldsymbol{\mathcal{L}})} < {\lambda _2(\boldsymbol{\mathcal{L}})} \le \cdots \le {\lambda _N(\boldsymbol{\mathcal{L}})}$ being the eigenvalues of $\boldsymbol{\mathcal{L}}$ in ascending order.
Also, the blocks in $\mathbf{R}$ and $\mathbf{R}^{-1}$ must satisfy the following conditions \begin{eqnarray} \label{prop:U:1} {r_{11}}{\mathbf{I}_n} + ({\mathbf{R}_{12}}{\mathbb{1}_{N - 1}} \otimes {\mathbf{I}_n}) = {\mathbf{I}_n}\\ \label{prop:U:2} ({\mathbf{R}_{21}} \otimes {\mathbf{I}_n}) + ({\mathbf{R}_{22}}{\mathbb{1}_{N - 1}} \otimes {\mathbf{I}_n}) = {\mathbb{0}_{(n(N - 1) \times 1)}}\\ \label{prop:U:3} ({\mathbf{R}_{21}}\mathbf{R}_{21}^T \otimes {\mathbf{I}_n}) + ({\mathbf{R}_{22}}\mathbf{R}_{22}^T \otimes {\mathbf{I}_n}) = \frac{1}{N}({\mathbf{I}_{N - 1}} \otimes {\mathbf{I}_n})\\ \label{prop:U:4} {r_{11}}(\mathbf{R}_{21}^T \otimes {\mathbf{I}_n}) + ({\mathbf{R}_{12}}\mathbf{R}_{22}^T \otimes {\mathbf{I}_n}) = {\mathbb{0}_{(1 \times n(N - 1))}}\\ \label{prop:U:5} ({\mathbf{R}_{21}}\mathbf{R}_{21}^T \otimes {\mathbf{I}_n}) = ({\mathbf{R}_{22}}{\mathbb{1}_{N-1}\mathbb{1}_{N-1}^T}\mathbf{R}_{22}^T \otimes {\mathbf{I}_n})\\ \label{prop:U:5norm} \specnorm{ {({\mathbf{R}_{22}} \otimes {\mathbf{I}_n})} } \le \frac{1}{{\sqrt N }}\\ \label{bound_R21_termsofN} \left\| \mathbf{R}_{21}\right\|\leq\sqrt{N-1}\specnorm{\mathbf{R}_{22}} \leq \sqrt{(N-1)/N}\\ \label{transpose_R} {\mathbf{R}^T=N\mathbf{R}^{-1}}\\ \label{prop:U:7inv} N\mathbf{R}_{22}^T = (\mathbf{I}_{N-1}+\mathbb{1}_{N-1}\mathbb{1}_{N-1}^T)^{-1}\mathbf{R}_{22}^{-1} \end{eqnarray} \end{lem} \begin{pf*}{Proof.} See Appendix \ref{Appendix_II}. \end{pf*} \begin{defn} A {\em multigraph} is a set of $M$ graphs $\mathscr{M}:= \{\mathscr{G}_1, \cdots, \mathscr{G}_M \}$, called the layers of $\mathscr{M}$, all sharing the same set of nodes, that is $\mathscr{G}_k=\left( {\mathcal{N},\mathcal{E}_k} \right)$, for $k \in \{1,\cdots,M \}$.
\end{defn} \section{Problem statement and multiplex PI control} We consider the problem of achieving consensus in a network of $N$ agents governed by open-loop heterogeneous dynamics of the form \begin{equation} \label{eq:sys:1} {{\dot {\mathbf{x}}}_i}(t) = \mathbf{A}_i{\mathbf{x}_i}(t) + {\mathbf{b}_i} - \sigma \sum\nolimits_{j = 1}^N {{\mathcal{L}_{C,ij}}{\mathbf{x}_j(t)}} +{\mathbf{u}_i}(t) \end{equation} for all $i \in \mathcal{N}$, where $\mathbf{x}_i(t)\in \mathbb{R}^{n\times 1}$ represents the state of the $i$-th agent, $\mathbf{A}_i\in \mathbb{R}^{n\times n}$ is the intrinsic node dynamic matrix, ${\mathbf{b}_i}\in \mathbb{R}^{n\times 1}$ is some constant bias (or constant disturbance) acting on each node, $\sigma$ is a non-negative constant modelling the global coupling strength among any pair of nodes, ${\mathcal{L}_{C,ij}}$ are the elements of the Laplacian matrix $\boldsymbol{\mathcal{L}}_C$ of the weighted graph $\mathscr{G}_{C}:=(\mathcal{N},\mathcal{E}_{C})$ representing the open-loop network to be controlled (see Fig. \ref{fig:introDMPIb}), and $\mathbf{u}_i(t)\in \mathbb{R}^{n\times 1}$ is the control input. In this paper we assume that at least one bias is nonzero, i.e. $\mathbf{b}_i \ne \mathbb{0}_{(n\times 1)}$ for some $i \in \mathcal{N}$. This excludes the trivial case where all the agent dynamics $\mathbf{A}_i$ are exponentially stable and all biases are null; indeed, in that case all nodes would achieve consensus onto zero and no distributed control action would be required.
\noindent \begin{defn} \label{def:1} Network \eqref{eq:sys:1} is said to achieve admissible consensus if, for any set of initial conditions $\mathbf{x}_i(0)=\mathbf{x}_{i0}$, there exists some non-negative constant $W$ such that $\mathop {\lim }\nolimits_{t \to \infty } {\left\| {\mathbf{x}_j(t)-\mathbf{x}_i(t) } \right\|=0}$ for all $i,j\in \mathcal{N}$ and $\left\| {\mathbf{u}_i(t)} \right\| < W <+ \infty$, for all $t\geq 0$. \end{defn} The problem we shall solve is to find bounded and distributed control inputs $\mathbf{u}_i(t)$ such that all states $\mathbf{x}_i(t)$ converge asymptotically towards each other, i.e. admissible consensus is achieved. We then propose the use of a distributed multiplex PI control strategy, obtained by setting: \begin{equation} \label{eq:cont:2b} \begin{split} {\mathbf{u}_i}(t) &= \sigma_{P} \sum\limits_{j = 1}^N {{\alpha _{ij}}{(\mathbf{x}_j(t)-\mathbf{x}_i(t))}}\\ & \quad + \sigma_{I} \sum\limits_{j = 1}^N {{\beta _{ij}}\int\limits_0^t {{(\mathbf{x}_j(\tau)-\mathbf{x}_i(\tau))}d\tau } } \end{split} \end{equation} where the non-negative constants $\alpha_{ij}\ge 0 $ and $\beta_{ij}\ge 0$ represent the control strengths of the proportional and integral control actions, respectively (we do not consider self-loops, that is $\alpha_{ii}=\beta_{ii}=0$). It is important to highlight that this controller allows the proportional and integral actions to be deployed independently of each other ($\alpha_{ij}=0$ or $\beta_{ij}=0$ for some $i$,$j\in \mathcal{N}$, $i \ne j$). The constants $\sigma_P$, $\sigma_I \in \mathbb{R}^+$ are additional parameters modulating globally the contribution of each control layer with respect to the other.
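For illustration only, the control law \eqref{eq:cont:2b} can be sketched numerically as follows (plain Python with NumPy; the gains, edge sets and states are hypothetical choices for three scalar agents). The key point is that the proportional weights $\alpha_{ij}$ and the integral weights $\beta_{ij}$ are stored in two independent adjacency matrices, so an edge may carry a P action, an I action, or both.

```python
import numpy as np

N = 3                            # three scalar agents (illustrative)
sigma_P, sigma_I = 1.0, 0.5      # layer gains (arbitrary choices)

# Independent structures for the two control layers (no self-loops):
# alpha holds the proportional weights, beta the integral weights.
alpha = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)  # a path graph
beta  = np.array([[0, 0, 1], [0, 0, 1], [1, 1, 0]], float)  # different edges

def multiplex_pi(x, xi):
    """u_i = sigma_P * sum_j alpha_ij (x_j - x_i)
             + sigma_I * sum_j beta_ij * xi_ij,
    where xi_ij stores the running integral of (x_j - x_i)."""
    u = np.zeros(N)
    for i in range(N):
        u[i] = (sigma_P * np.sum(alpha[i] * (x - x[i]))
                + sigma_I * np.sum(beta[i] * xi[i]))
    return u

x = np.array([1.0, -0.5, 2.0])   # arbitrary initial states
xi = np.zeros((N, N))            # integral states are zero at t = 0
u0 = multiplex_pi(x, xi)
```

At $t=0$ the integral states vanish, so the controller reduces to the proportional term $-\sigma_P\boldsymbol{\mathcal{L}}_P\mathbf{x}(0)$, with $\boldsymbol{\mathcal{L}}_P$ the Laplacian built from `alpha`.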
Equation \eqref{eq:cont:2b} effectively defines two control layers, each represented by a different weighted graph: $\mathscr{G}_{P}:=(\mathcal{N},\mathcal{E}_{P})$ for the proportional layer and $\mathscr{G}_{I}:=(\mathcal{N},\mathcal{E}_{I})$ for the integral layer, where $\mathcal{E}_{P}$ is the set of edges with associated weights $\alpha_{ij}$ and $\mathcal{E}_{I}$ the set with associated weights $\beta_{ij}$. We denote the Laplacian matrices corresponding to each of these layers by $\boldsymbol{\mathcal{L}}_P:=[\mathcal{L}_{P,ij}]$ and $\boldsymbol{\mathcal{L}}_I:=[\mathcal{L}_{I,ij}]$, respectively, with their elements being defined as $\mathcal{L}_{P,ij} = \sum\nolimits_{k = 1,k \ne i}^N {{\alpha_{ik}}} $ and $\mathcal{L}_{I,ij} = \sum\nolimits_{k = 1,k \ne i}^N {{\beta_{ik}}} $ if $i=j$, and $\mathcal{L}_{P,ij} = -\alpha_{ij}$, $\mathcal{L}_{I,ij} = -\beta_{ij}$ otherwise. As depicted in Fig. \ref{fig:introDMPI}, the resulting control strategy is therefore a {\em multiplex} distributed control strategy, and the closed-loop network is a multiplex network associated to the multigraph $\mathscr{M}= \{\mathscr{G}_{C},\mathscr{G}_{P},\mathscr{G}_{I} \}$. Next, we define $\widehat{\boldsymbol{\mathcal{L}}}_C:=({\boldsymbol{\mathcal{L}}_C \otimes {\mathbf{I}_n}})$, $\widehat{\boldsymbol{\mathcal{L}}}_P:=({\boldsymbol{\mathcal{L}}_P \otimes {\mathbf{I}_n}})$, $\widehat{\boldsymbol{\mathcal{L}}}_I:=({\boldsymbol{\mathcal{L}}_I \otimes {\mathbf{I}_n}})$.
Letting $\mathbf{x}(t)=[\mathbf{x}_1^T(t),\cdots, \mathbf{x}_N^T(t)]^T$ be the stack vector of all agent states and \begin{equation} \label{integral:term} {\mathbf{z}}(t) =\left[ {\mathbf{z}_1^T}(t), \ldots ,{\mathbf{z}_N^T}(t) \right]^T:= - \sigma_I \widehat{\boldsymbol{\mathcal{L}}}_I\int_0^t {\mathbf{x}(\tau )d\tau } \end{equation} the stack vector of all integral states, the overall dynamics of the closed-loop network can then be written as \begin{equation} \label{eq:DMPI} \left[ {\begin{array}{*{20}{c}} {\dot {\mathbf{x}}(t)}\\ {\dot {\mathbf{z}}(t)} \end{array}} \right] = \left[ {\begin{array}{*{20}{c}} {\widehat {\mathbf{A}} - \boldsymbol{\mathcal{H}}}&{{\mathbf{I}_{nN}}}\\ { - \sigma_I \widehat{\boldsymbol{\mathcal{L}}}_I}&{\mathbb{0}_{(nN\times nN)}} \end{array}} \right]\left[ {\begin{array}{*{20}{c}} {\mathbf{x}(t)}\\ {\mathbf{z}(t)} \end{array}} \right] + \left[ {\begin{array}{*{20}{c}} \mathbf{B} \\ \mathbb{0}_{(nN\times 1)} \end{array}} \right] \end{equation} where $\widehat {\mathbf{A}}\in \mathbb{R}^{nN\times nN}$ is a block diagonal matrix encoding the node dynamics, $\widehat {\mathbf{A}}:=\mbox{diag}\left\{\mathbf{A}_1,\cdots,\mathbf{A}_N\right\}$, $\boldsymbol{\mathcal{H}}:=\sigma \widehat{\boldsymbol{\mathcal{L}}}_C+\sigma_P \widehat{\boldsymbol{\mathcal{L}}}_P$, and $\mathbf{B}\in \mathbb{R}^{nN\times 1}$ is the stack vector of the constant biases, $\mathbf{B}:=[\mathbf{b}_1^T,\cdots,\mathbf{b}_N^T]^T$. Thus, the problem becomes that of finding conditions on the node dynamics, the gains $\sigma$, $\sigma_P$, and $\sigma_I$, and most importantly the structural properties of the open-loop network layer $\mathscr{G}_{C}$ and control layers $\mathscr{G}_{P}$ and $\mathscr{G}_{I}$, so as to guarantee emergence of admissible consensus in the closed-loop multiplex network \eqref{eq:DMPI}.
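To illustrate the closed-loop dynamics \eqref{eq:DMPI}, the following sketch (plain Python with NumPy; the agents, biases, graphs and gains are arbitrary illustrative choices, with the same connected graph used for all three layers) integrates a network of three heterogeneous scalar agents with constant biases and checks that all states reach a common constant value, which in the scalar case reduces to $-\sum_k b_k / \sum_k a_k$.

```python
import numpy as np

# Three heterogeneous scalar agents (n = 1): dx_i = a_i x_i + b_i + coupling
a = np.array([-1.0, -2.0, -0.5])       # stable, heterogeneous (illustrative)
b = np.array([1.0, 0.5, -0.2])         # constant biases

# Complete-graph Laplacian used here for all three layers (any connected
# graphs would do; equal layers are just a simple illustrative choice).
L = np.array([[ 2.0, -1.0, -1.0],
              [-1.0,  2.0, -1.0],
              [-1.0, -1.0,  2.0]])
sigma, sigma_P, sigma_I = 1.0, 1.0, 1.0

x = np.array([3.0, -1.0, 0.5])         # arbitrary initial states
z = np.zeros(3)                        # integral states start at zero

dt = 0.005
for _ in range(20000):                 # forward-Euler integration, T = 100
    dx = a * x + b - (sigma + sigma_P) * (L @ x) + z
    dz = -sigma_I * (L @ x)
    x, z = x + dt * dx, z + dt * dz

x_inf = -b.sum() / a.sum()             # predicted consensus value (scalars)
```

Despite the heterogeneous dynamics and nonzero biases, the integral layer removes the residual disagreement and all states settle on `x_inf`.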
\section{Convergence Analysis} \label{Sec:conv} In this section we first show that the collective dynamics of the multiplex closed-loop network \eqref{eq:DMPI} has a unique equilibrium which is the solution of the admissible consensus problem. Then we derive sufficient conditions guaranteeing asymptotic stability of such an equilibrium. \subsection{Consensus equilibrium} \label{Sec:conv:equ} \begin{prop} \label{equil_point} If the matrix $\mathbf{\Psi}_{11} := (1/N)\sum\nolimits_{k = 1}^N {{\mathbf{A}_k}}$ is non-singular, then the closed-loop network \eqref{eq:DMPI} has a unique equilibrium $\mathbf{x}^*:=\left( {{\mathbb{1}_N} \otimes {\mathbf{x}_\infty }} \right)$ and ${\mathbf{z}^*}: = - ( {\widehat {\mathbf{A}}\mathbf{x}^* + \mathbf{B} } )$ where \begin{equation} \label{eq:equili} \begin{array}{l} {\mathbf{x}_\infty }: = - (1/N){\mathbf{\Psi}_{11}^{ - 1}}\sum\nolimits_{k = 1}^N {{\mathbf{b}_k}} \end{array} \end{equation} \end{prop} \begin{pf*}{Proof.} Setting the left-hand side of \eqref{eq:DMPI} to zero, one has that $\mathbf{x}^* = (\mathbb{1}_N \otimes \mathbf{v})$ for some $\mathbf{v} \in \mathbb{R}^{n\times 1}$ and ${\mathbf{z}^* } = -\left(\widehat {\mathbf{A}}(\mathbb{1}_N \otimes \mathbf{v}) + \mathbf{B} \right)$. From \eqref{integral:term}, we also have that $(\mathbb{1}_N^T\otimes \mathbf{I}_n)\mathbf{z}(t) = \mathbb{0}_{(n\times1)}$ for all $t$; then $(\mathbb{1}_N^T\otimes \mathbf{I}_n)\mathbf{z}^* = \mathbb{0}_{(n\times 1)}$ and we obtain \[ \begin{split} (\mathbb{1}_N^T\otimes \mathbf{I}_n)\widehat {\mathbf{A}}(\mathbb{1}_N \otimes \mathbf{v}) &= -(\mathbb{1}_N^T\otimes \mathbf{I}_n)\mathbf{B}\\ (1/N)\sum\nolimits_{k = 1}^N {\mathbf{A}_k}\mathbf{v} &= -(1/N)\sum\nolimits_{k = 1}^N {{\mathbf{b}_k}} \end{split} \] Hence $\mathbf{v}=- (1/N){\mathbf{\Psi}_{11}^{ - 1}}\sum\nolimits_{k = 1}^N {{\mathbf{b}_k}} = \mathbf{x}_\infty$, which completes the proof.
\end{pf*} \begin{rem}\hspace{0.2cm} \begin{itemize} \item Note that if controller \eqref{eq:cont:2b} is able to render this equilibrium stable, it also guarantees consensus of all node states $\mathbf{x}(t)$ to the constant vector $\mathbf{x}_{\infty}$ using bounded control energy. Also, the consensus trajectory can be interpreted as the solution of the ``exo-system'' given by $\dot {\mathbf{s}}(t) =\mathbf{\Psi}_{11}\mathbf{s}(t) + (1/N)\sum\nolimits_{k = 1}^N {{\mathbf{b} _k}}$. Unlike the work in \cite{Wieland2011}, where the existence of the consensus equilibrium requires all the agents in the network to have eigenvalues in common, here we just need to show that $\mathbf{\Psi}_{11}$ is a full rank matrix. \item Note that, in the notation of \cite{Wieland2009}, our strategy corresponds to setting the matrices $\mathbf{B}_i=\mathbf{E}_i=\mathbf{C}_i=\mathbf{G}_i=\mathbf{H}_i=\mathbf{K}_i=\mathbf{I}_n$ and, more importantly, the matrix defining the own dynamics of the local controllers $\mathbf{F}_i=\mathbb{0}$. Therefore, existence of the consensus equilibrium cannot be proved in our case following the arguments therein. Specifically, the assumptions of detectability made in \cite{Wieland2009} do not apply. \end{itemize} \label{Remark:9} \end{rem} Now, to prove convergence, it suffices to guarantee that ($\mathbf{x}^*,\mathbf{z}^*$) is globally asymptotically stable.
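Proposition \ref{equil_point} is also easy to verify numerically. The sketch below (plain Python with NumPy; scalar agents and all numerical values are arbitrary illustrative choices) builds $\mathbf{x}^*$ and $\mathbf{z}^*$ from \eqref{eq:equili} and checks that they make the right-hand side of \eqref{eq:DMPI} vanish while satisfying the invariant $(\mathbb{1}_N^T \otimes \mathbf{I}_n)\mathbf{z}^* = \mathbb{0}$.

```python
import numpy as np

a = np.array([-1.0, -2.0, -0.5])       # heterogeneous scalar agents (n = 1)
b = np.array([1.0, 0.5, -0.2])         # constant biases
L = np.array([[ 2.0, -1.0, -1.0],      # one connected Laplacian, all layers
              [-1.0,  2.0, -1.0],
              [-1.0, -1.0,  2.0]])
sigma, sigma_P, sigma_I = 1.0, 1.0, 1.0

Psi11 = a.mean()                       # (1/N) sum_k A_k, a scalar here
x_inf = -b.mean() / Psi11              # equilibrium value from eq. (9)
x_star = np.full(3, x_inf)             # x* = 1_N kron x_inf
z_star = -(a * x_star + b)             # z* = -(A_hat x* + B)

# Right-hand side of the closed-loop network at (x*, z*)
rhs_x = a * x_star + b - (sigma + sigma_P) * (L @ x_star) + z_star
rhs_z = -sigma_I * (L @ x_star)
```

Both residuals are zero: the coupling terms vanish on the consensus subspace and the integral states exactly cancel the heterogeneous drifts $a_i x_\infty + b_i$.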
We start by shifting the origin via the state transformation $\mathbf{y}(t):=\mathbf{z}(t) + \mathbf{B}$ so that \eqref{eq:DMPI} becomes \begin{equation} \label{eq:PID:2} \left[ {\begin{array}{*{20}{c}} {\dot {\mathbf{x}}(t)}\\ {\dot {\mathbf{y}}(t)} \end{array}} \right] = \left[ {\begin{array}{*{20}{c}} {\widehat {\mathbf{A}} - \boldsymbol{\mathcal{H}}}&{{\mathbf{I}_{nN}}}\\ { - \sigma_I \widehat{\boldsymbol{\mathcal{L}}}_I}&{\mathbb{0}_{(nN\times nN)}} \end{array}} \right]\left[ {\begin{array}{*{20}{c}} {\mathbf{x}(t)}\\ {\mathbf{y}(t)} \end{array}} \right] \end{equation} \begin{lem} Let $\boldsymbol{\mathcal{L}}_1=\mathbf{R}\mathbf{\Lambda}_1\mathbf{R}^{-1}$ and $\boldsymbol{\mathcal{L}}_2=\mathbf{U}\mathbf{\Lambda}_2\mathbf{U}^{-1}$ be two generic Laplacian matrices belonging to the set $\mathbb{W}$, where $\mathbf{R}$ and $\mathbf{U}$ are block matrices with the same structure as in \eqref{block:decompo} and $\mathbf{\Lambda}_k$, $k\in\left\{1,2\right\}$ are diagonal matrices containing the eigenvalues of $\boldsymbol{\mathcal{L}}_1$ and $\boldsymbol{\mathcal{L}}_2$ respectively. Then, \begin{equation} \label{lem:dif:lap} (\mathbf{R}^{-1}\boldsymbol{\mathcal{L}}_2\mathbf{R}\otimes\mathbf{I}_n)= \left[ {\begin{array}{*{20}{c}} {{\mathbb{0}_{(n \times n)}}}&{{\mathbb{0}_{(n \times (nN - 1))}}}\\ {{\mathbb{0}_{((nN - 1) \times n)}}}&{\left( {\mathbf{T} {{\bar {\mathbf{\Lambda}} }_2}{\mathbf{T} ^T} \otimes {\mathbf{I}_n}} \right)} \end{array}} \right] \end{equation} where $\mathbf{T}=N\mathbf{R}_{22}(\mathbb{1}_{N-1}\mathbb{1}_{N-1}^T+\mathbf{I}_{N-1})\mathbf{U}_{22}^T$ and $\mathbf{\bar{\Lambda}}_2=\mbox{diag}\left\{\lambda_2(\boldsymbol{\mathcal{L}}_2),\cdots,\lambda_N(\boldsymbol{\mathcal{L}}_2)\right\}$. Moreover, ${ {\mathbf{T} {{\bar {\mathbf{\Lambda}} }_2}{\mathbf{T} ^T}} }$ is a symmetric matrix. \label{prop:dif_lap} \end{lem} \begin{pf*}{Proof.} See Appendix \ref{Appendix_I}.
\end{pf*} \subsection{Error dynamics} Assuming that the graphs in all layers of $\mathscr{M}$ are connected, using Lemma \ref{lemm:simmetric_L} we can write $\boldsymbol{\mathcal{L}}_C=\mathbf{R}\mathbf{\Lambda}_C\mathbf{R}^{-1}$, $\boldsymbol{\mathcal{L}}_P=\mathbf{U}\mathbf{\Lambda}_P\mathbf{U}^{-1}$ and $\boldsymbol{\mathcal{L}}_I=\mathbf{Q}\mathbf{\Lambda}_I\mathbf{Q}^{-1}$. (In Corollary 13 we relax the assumption of connectivity of the open-loop network.) Next we define the error dynamics given by the state transformation $\mathbf{e}(t) = ({\mathbf{R}^{-1}}\otimes \mathbf{I}_n)\mathbf{x}(t)$; therefore, using the block representation of $\mathbf{R}^{-1}$ and letting $\bar{\mathbf{e}}(t):=[\mathbf{e}_2^T(t),\cdots,\mathbf{e}_N^T(t)]^T$ and $\bar{\mathbf{x}}(t):=[\mathbf{x}_2^T(t),\cdots,\mathbf{x}_N^T(t)]^T$, we obtain \begin{eqnarray} \label{eq:transformation_Ua} \mathbf{e}_1 (t) &=& {r_{11}}{\mathbf{x}_1}(t) + ({\mathbf{R}_{12}} \otimes {\mathbf{I}_n})\bar {\mathbf{x}}(t)\\ \label{eq:transformation_Ub} {{\bar {\mathbf{e}}} }(t) &=& ({\mathbf{R}_{21}} \otimes {\mathbf{I}_n}){\mathbf{x}_1}(t) + ({\mathbf{R}_{22}} \otimes {\mathbf{I}_n})\bar {\mathbf{x}}(t) \end{eqnarray} Thus, expressing $({\mathbf{R}_{21}} \otimes {\mathbf{I}_n})$ from \eqref{prop:U:2} and substituting in \eqref{eq:transformation_Ub} yields $${{\bar {\mathbf{e}}}} (t) = ({\mathbf{R}_{22}}\otimes \mathbf{I}_n)\left( {\bar{\mathbf{x}}(t) - (\mathbb{1}_{N-1}\otimes \mathbf{I}_n){\mathbf{x}_1}(t) } \right)$$ Note that ${{\bar {\mathbf{e}}}} (t)=\mathbb{0}$ if and only if ${\bar{\mathbf{x}}(t) - (\mathbb{1}_{N-1}\otimes \mathbf{I}_n){\mathbf{x}_1}(t) } =\mathbb{0}$, since ${\mathbf{R}_{22}}$ is a full rank matrix \cite{BurbanoLombana2015}. Then, admissible consensus is achieved if $\lim_{t \to \infty }{{\bar {\mathbf{e}}}} (t) = \mathbb{0}$ and $\left\| {\mathbf{y}(t)} \right\|\le W < +\infty, \forall t>0$.
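The error transformation above relies on the block structure of $\mathbf{R}$ from Lemma \ref{lemm:simmetric_L}. A quick numerical sanity check (illustrative only: here $\mathbf{R}$ is built from an orthonormal eigenbasis of a small ring-graph Laplacian, rescaled so that its first column equals $\mathbb{1}_N$) confirms the key properties used in the sequel, namely \eqref{transpose_R}, the diagonalization $\mathbf{R}^{-1}\boldsymbol{\mathcal{L}}\mathbf{R}=\mathbf{\Lambda}$, and the bound \eqref{prop:U:5norm}.

```python
import numpy as np

N = 4
# Symmetric Laplacian of a connected graph (illustrative: a ring of 4 nodes)
L = 2 * np.eye(N) - np.roll(np.eye(N), 1, axis=1) - np.roll(np.eye(N), -1, axis=1)

w, V = np.linalg.eigh(L)               # orthonormal eigenvectors, ascending
V[:, 0] = 1.0 / np.sqrt(N)             # fix the sign of the null eigenvector
R = np.sqrt(N) * V                     # first column of R is 1_N
R_inv = R.T / N                        # Lemma 3: R^T = N R^{-1}

R22 = R_inv[1:, 1:]                    # lower-right block of R^{-1}
spec_norm_R22 = np.linalg.norm(R22, 2) # spectral norm, bounded by 1/sqrt(N)
```

Here `R_inv[0]` equals $(1/N)\mathbb{1}_N^T$, matching $r_{11}$ and $\mathbf{R}_{12}$ in \eqref{eq:blockdef}.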
Now, recasting \eqref{eq:PID:2} in the new coordinates $\mathbf{e}(t)$ and $\mathbf{w}(t):= ({\mathbf{R}^{-1}}\otimes \mathbf{I}_n)\mathbf{y}(t)$, and letting ${\bar{\mathbf{\Lambda}}}_C:=\mbox{diag}\{ \lambda_2(\boldsymbol{\mathcal{L}}_C),\cdots,$ $\lambda_N(\boldsymbol{\mathcal{L}}_C) \}$, ${\bar{\mathbf{\Lambda}}}_P:=\mbox{diag}\{ \lambda_2(\boldsymbol{\mathcal{L}}_P),\cdots,\lambda_N(\boldsymbol{\mathcal{L}}_P) \}$, ${\bar{\mathbf{\Lambda}}}_I:=\mbox{diag}\{ \lambda_2(\boldsymbol{\mathcal{L}}_I),\cdots,\lambda_N(\boldsymbol{\mathcal{L}}_I) \}$ we get \begin{equation} \label{eq:final:PID} \begin{array}{l} {{\dot {\mathbf{e}}}}(t) = \left( {\mathbf{\Psi} - \widehat{\boldsymbol{\mathcal{H}}} } \right){\mathbf{e} }(t) + \left[ {\begin{array}{*{20}{c}} {\mathbb{0}_{n\times1}}\\ {\bar {\mathbf{w}}(t)} \end{array}} \right]\\ {{\dot {\bar {\mathbf{w}}}}}(t) = - \sigma_I (\mathbf{T}_I {{\bar {\mathbf{\Lambda}} }_I}{\mathbf{T}_I ^T}\otimes \mathbf{I}_n){{\bar {\mathbf{e}}} }(t) \end{array} \end{equation} where $\bar {\mathbf{w}}(t) := \left[ \mathbf{w}_2^T(t), \ldots , \mathbf{w}_N^T(t) \right]^T$. Note that the dynamics of $\mathbf{w}_1(t)$ can be neglected, as it is trivial with null initial conditions and represents an uncontrollable and unobservable state.
The quantities in \eqref{eq:final:PID} are defined as follows \begin{itemize} \item $\mathbf{\Psi}$ is a block matrix defined as \[\begin{array}{l} \mathbf{\Psi} := \left[ {\begin{array}{*{20}{c}} {{\mathbf{\Psi} _{11}}}&{{\mathbf{\Psi} _{12}}}\\ {{\mathbf{\Psi} _{21}}}&{{\mathbf{\Psi} _{22}}} \end{array}} \right]=({\mathbf{R}^{ - 1}}\otimes\mathbf{I}_n)\widehat{\mathbf{A}}({\mathbf{R}}\otimes\mathbf{I}_n) = \\({\mathbf{R}^{ - 1}}\otimes\mathbf{I}_n)\left[ {\begin{array}{*{20}{c}} {\mathbf{A}_1}&{{\mathbb{0}_{(n \times n(N - 1))}}}\\ {{\mathbb{0}_{(n(N - 1) \times n)}}}&{\bar {\mathbf{A}}} \end{array}} \right]({\mathbf{R}}\otimes\mathbf{I}_n) \end{array}\] where $\bar{\mathbf{A}} := \mbox{diag}\left\{ {\mathbf{A}_2, \cdots ,\mathbf{A}_N} \right\}$ is a block diagonal matrix. Using properties (\ref{prop:U:1})-(\ref{prop:U:4}), we can write (see Appendix \ref{Appendix_III} for the derivation) \begin{eqnarray} \label{eq:PSI:11} {\mathbf{\Psi} _{11}} &=& (1/N)\sum\nolimits_{k = 1}^N {{\mathbf{A}_k}}\\ \label{eq:PSI:12} {\mathbf{\Psi}_{12}} &=& \mathbf{P}_1(\mathbf{R}_{22}^T\otimes\mathbf{I}_n)\\ \label{eq:PSI:21} {\mathbf{\Psi}_{21}} &=& (\mathbf{R}_{22}\otimes\mathbf{I}_n)\mathbf{P}_2\\ \label{eq:PSI:22} {\mathbf{\Psi} _{22}} &=& N({\mathbf{R}_{22}}\otimes \mathbf{I}_n)\mathbf{H}({\mathbf{R}_{22}^T}\otimes \mathbf{I}_n) \end{eqnarray} with \begin{eqnarray} \label{eq:Hmatr} {\mathbf{H}} &:=& (\mathbb{1}_{N-1}\mathbb{1}_{N-1}^T\otimes\mathbf{A}_1)+\bar{\mathbf{A}}\\ \label{eq:P1Matr} {\mathbf{P}_1} &:=& [{\mathbf{A}_2} - {\mathbf{A}_1}, \cdots, {\mathbf{A}_{N}} - {\mathbf{A}_1}]\\ \label{eq:P2Matr} {\mathbf{P}_2} &:=& [{\mathbf{A}_2^T} - {\mathbf{A}_1^T}, \cdots, {\mathbf{A}_{N}^T} - {\mathbf{A}_1^T}]^T \end{eqnarray} \item the matrix $\mathbf{T}_I=N\mathbf{R}_{22}(\mathbb{1}_{N-1}\mathbb{1}_{N-1}^T+\mathbf{I}_{N-1})\mathbf{Q}_{22}^T$ was obtained using Lemma \ref{prop:dif_lap} for $(\mathbf{R}^{-1}\otimes 
\mathbf{I}_n)\widehat{\boldsymbol{\mathcal{L}}}_I(\mathbf{R}\otimes \mathbf{I}_n)$. \item $\widehat{\boldsymbol{\mathcal{H}}}:=({\mathbf{R}^{ - 1}}\otimes\mathbf{I}_n) \boldsymbol{\mathcal{H}} ({\mathbf{R}}\otimes\mathbf{I}_n)$ and using again Lemma \ref{prop:dif_lap} yields $$ \widehat{\boldsymbol{\mathcal{H}}} = \left[ {\begin{array}{*{20}{c}} {0}&{\mathbb{0}_{1\times (N-1)}}\\ {\mathbb{0}_{(N-1)\times 1}}&{\sigma \bar{\mathbf{\Lambda}}_C + \sigma_P \mathbf{T}_P\bar{\mathbf{\Lambda}}_P\mathbf{T}_P^T} \end{array}} \right]\otimes \mathbf{I}_{n} $$ with $\mathbf{T}_P=N\mathbf{R}_{22}(\mathbb{1}_{N-1}\mathbb{1}_{N-1}^T+\mathbf{I}_{N-1})\mathbf{U}_{22}^T$. \end{itemize} \subsection{Main Result} \begin{thm} \label{Th:I:PI} Consider the multiplex network \eqref{eq:DMPI} associated to the multigraph $\mathscr{M}= \{\mathscr{G}_{C},\mathscr{G}_{P},\mathscr{G}_{I} \}$. Assuming the open-loop network structure $\mathscr{G}_{C}$ is connected, admissible consensus is achieved if the following conditions hold \begin{enumerate} { \item[i)] The matrix ${\mathbf{\Psi} _{11}} = (1/N)\sum\nolimits_{k = 1}^N {{\mathbf{A}_k}}$ is non-singular, and its symmetric part ${\mathbf{\Psi} _{11}^{\prime}}$ is Hurwitz, \item[ii)] $\sigma_P\lambda_2({\boldsymbol{\mathcal{L}}_P}) >\frac{1}{2}\left( {\frac{\mu }{{N\left| \eta \right|}} + \rho } \right) -\sigma\lambda_2({\boldsymbol{\mathcal{L}}_C})$ \item[iii)] $\lambda_2({\boldsymbol{\mathcal{L}}_I}) > 0$ and $\sigma_I>0$ } \end{enumerate} where \begin{subequations} \label{eq:cond} \begin{alignat}{3} \label{eq:conda} \mu &:= \lambda_{\max}\left( { {{{\sum\nolimits_{k = 2}^N {\left( {{\mathbf{A}_k^{\prime}} - {{\mathbf{A}_1^{\prime}} } } \right)} }^2}} } \right)\\ \label{eq:condb} \eta &:= \lambda_{\max}\left(\mathbf{\Psi} _{11}^{\prime}\right)\\ \label{eq:condc} \rho &:= \mathop {\max }\limits_{k \in \mathcal{N}} \left\{ {{\lambda _{\max }}\left( {{\mathbf{A}_k^{\prime}}} \right)} \right\} \end{alignat} \end{subequations} Moreover, all node 
states asymptotically converge to $\mathbf{x}_\infty= - (1/N){\mathbf{\Psi}_{11}^{ - 1}}\sum\nolimits_{k = 1}^N {{\mathbf{b}_k}}$. \end{thm} \begin{pf*}{Proof.} From the assumptions, ${\mathbf{\Psi} _{11}}$ is a non-singular matrix; therefore, we have that the consensus equilibrium \eqref{eq:equili} exists. Then, consider the candidate Lyapunov function (in what follows we remove the time dependence of the state variables to simplify the notation) \begin{equation} \label{lyap:PI} V = \frac{1}{2}( {{\mathbf{e}}}_1^T{{ {{\mathbf{e}}}}_1} + {{ {\bar{\mathbf{e}}}}^T} {\bar{\mathbf{e}}}) + \frac{1}{{2\sigma_I }}{{ {\bar{\mathbf{w}}}}^T}{(\mathbf{T}_I {{\bar {\mathbf{\Lambda}} }_I}{\mathbf{T}_I^T}\otimes \mathbf{I}_n)}^{-1} {\bar{\mathbf{w}}} \end{equation} From Lemma \ref{prop:dif_lap} we know that $\mathbf{T}_I\bar{\mathbf{\Lambda}}_I\mathbf{T}_I^T$ is an eigendecomposition of a symmetric matrix with positive eigenvalues, which are the diagonal entries of $\bar {\mathbf{\Lambda}}_I$; therefore, its inverse exists and is also a positive definite matrix. Consequently, \eqref{lyap:PI} is a positive definite and radially unbounded function.
Then, differentiating $V$ along the trajectories of \eqref{eq:final:PID} and using expressions \eqref{eq:PSI:12} and \eqref{eq:PSI:21}, one has \begin{equation} \begin{split} \label{diff:lyap:PI} \dot V &= V_1({\mathbf{e}}_1) + V_2(\bar{\mathbf{e}}) + V_3(\bar{\mathbf{e}}) + V_4({\mathbf{e}}_1,\bar{\mathbf{e}}) \end{split} \end{equation} where, $V_1({\mathbf{e}}_1) = {\mathbf{e}}_1^T{\mathbf{\Psi} _{11}}{{ {\mathbf{e}}}_1}$, $V_2(\bar{\mathbf{e}}) = \bar {\mathbf{e}}^T \mathbf{\Psi} _{22} \bar{\mathbf{e}}$, $ V_3(\bar{\mathbf{e}}) = - \bar {\mathbf{e}}^T( \sigma (\bar {\mathbf{\Lambda}}_C \otimes \mathbf{I}_n) + \sigma_P (\mathbf{T}_P\bar {\mathbf{\Lambda}}_P\mathbf{T}_P^T\otimes \mathbf{I}_n)) \bar{\mathbf{e}} $, and $V_4({\mathbf{e}}_1,\bar{\mathbf{e}}) = { {\mathbf{e}}_1^T(\mathbf{P}_1+\mathbf{P}_2^T)(\mathbf{R}_{22}^T\otimes \mathbf{I}_n)\bar {\mathbf{e}} }$. Now, we proceed to find an upper-bound for each of the terms in \eqref{diff:lyap:PI}. From the assumptions we know that $\mathbf{\Psi} _{11}+\mathbf{\Psi} _{11}^T$ is Hurwitz; therefore, using \eqref{eq:condb} and property \eqref{symm:bound}, one has that $V_1({\mathbf{e}}_1)\leq -(1/2)\left| \eta \right|{\mathbf{e}}_1^T{\mathbf{e}}_1$. {Next, consider the symmetric matrix $\mathbf{\Psi}^{\prime}:=\mathbf{\Psi}+\mathbf{\Psi}^T$; therefore, using \eqref{transpose_R} $\mathbf{\Psi}^{\prime}=({\mathbf{R}^{ - 1}}\otimes\mathbf{I}_n)(\widehat{\mathbf{A}}+\widehat{\mathbf{A}}^T)({\mathbf{R}}\otimes\mathbf{I}_n)$. Then, it immediately follows that ${\lambda _{\max }}\left( {\mathbf{\Psi}+\mathbf{\Psi}^T} \right)=\rho$, where $\rho$ is given in \eqref{eq:condc}. 
Now, we can write $V_2(\bar{\mathbf{e}}) = (1/2)\bar {\mathbf{e}}^T \mathbf{\Psi} _{22}^{\prime} \bar{\mathbf{e}}$, and from the fact that $\mathbf{\Psi} _{22}^{\prime}$ is a principal sub-matrix of $\mathbf{\Psi}^{\prime}$, by using property \eqref{eig_bloksim} one has $V_2(\bar{\mathbf{e}})\le \rho/2 {{\bar {\mathbf{e}}}^T}{{\bar {\mathbf{e}}}}$.} From Lemma \ref{prop:dif_lap} we know that $\mathbf{T}_P\bar {\mathbf{\Lambda}}_P\mathbf{T}_P^T$ is a symmetric positive definite matrix. Hence, using \eqref{symm:bound} we have that $V_3(\bar{\mathbf{e}}) \leq -(\sigma \lambda_2({\boldsymbol{\mathcal{L}}_C}) + \sigma_P\lambda_2({\boldsymbol{\mathcal{L}}_P}) ) {{ \bar{\mathbf{e}}}^T} {{ \bar{\mathbf{e}}}}$. Finally, setting $\mathbf{v}_1=\mathbf{e}_1$, $\mathbf{v}_2=\bar{\mathbf{e}}$, $\mathbf{Q}_1^T=\mathbf{P}_1+\mathbf{P}_2^T$ and $\mathbf{Q}_2=\mathbf{R}_{22}^T\otimes \mathbf{I}_n$ and using \eqref{pos:eq:1} yields \[ \begin{split} V_4({\mathbf{e}}_1,\bar{\mathbf{e}}) &< \frac{\varepsilon}{2}{\mathbf{e}}_1^T\mathbf{Q}_1^T\mathbf{Q}_1{\mathbf{e}}_1 + \frac{1}{2\varepsilon}\bar {\mathbf{e}}^T\mathbf{Q}_2^T\mathbf{Q}_2\bar {\mathbf{e}}\\ & < {\frac{\varepsilon}{2}{\mathbf{e}}_1^T{ {{{\sum\limits_{k = 2}^N {\left( {{\mathbf{A}_k^{\prime}} - {{\mathbf{A}_1^{\prime}} } } \right)} }^2}} }{\mathbf{e}}_1+ \frac{1}{2\varepsilon}\bar {\mathbf{e}}^T\mathbf{Q}_2^T\mathbf{Q}_2\bar {\mathbf{e}} } \end{split} \] We can further simplify this expression by noticing that $\mathbf{Q}_2^T\mathbf{Q}_2$ is a symmetric matrix and using \eqref{symm:bound}, \eqref{specnorm:bou}, and \eqref{prop:U:5norm}, we can write $\bar{\mathbf{e}}^T\mathbf{Q}_2^T\mathbf{Q}_2\bar {\mathbf{e}} \le \specnorm{\mathbf{Q}_2}^2\bar{\mathbf{e}}^T\bar {\mathbf{e}} \le (1/N)\bar{\mathbf{e}}^T\bar {\mathbf{e}}$. Then, using \eqref{eq:conda} yields ${V_4({\mathbf{e}}_1,\bar{\mathbf{e}})} \le ({\varepsilon\mu})/{2}{\mathbf{e}}_1^T{\mathbf{e}}_1 + {1}/{(2N\varepsilon)}\bar {\mathbf{e}}^T\bar {\mathbf{e}}$. 
Exploiting all the bounds we found for each term in \eqref{diff:lyap:PI} yields \begin{equation} \label{diff:lyap:PIb} \begin{split} \dot V &\le (1/2)\left(\varepsilon\mu-\left| \eta \right|\right){\mathbf{e}}_1^T{{ {\mathbf{e}}}_1} - (\sigma \lambda_2(\boldsymbol{\mathcal{L}}_C)+\sigma_P \lambda_2(\boldsymbol{\mathcal{L}}_P) ){{\bar {\mathbf{e}}}^T}{{\bar {\mathbf{e}}}} \\ & \quad + \left( {\frac{1}{2N\varepsilon} + \frac{\rho}{2}} \right) \bar {\mathbf{e}}^T\bar {\mathbf{e}}\\ &\le \xi_1{\mathbf{e}}_1^T{{ {\mathbf{e}}}_1} + \xi_2 \bar {\mathbf{e}}^T\bar {\mathbf{e}} \end{split} \end{equation} where $ \xi_1:=(1/2)\left(\varepsilon\mu-\left| \eta \right|\right)$ and $\xi_2:=1/(2N\varepsilon)+\rho/2 - \sigma \lambda_2(\boldsymbol{\mathcal{L}}_C)- \sigma_P \lambda_2(\boldsymbol{\mathcal{L}}_P)$. Now, $\xi_1<0$ is ensured if $\varepsilon<\left| \eta \right|/\mu$. Also, $\xi_2<0$ if condition ii) is fulfilled. Therefore, under the hypotheses, all agents in \eqref{eq:sys:1} achieve admissible consensus to $\mathbf{x}_{\infty}$ as defined in \eqref{eq:equili}. \end{pf*} \begin{rem}\hspace{0.2cm} \begin{itemize} \item Note that the conditions of Theorem \ref{Th:I:PI} can be used as an effective tool to tune the control gains and/or rewire the control layers. \item The stability analysis problem for the whole network has been simplified. In particular, rather than studying the stability of the $2nN\times 2nN$ matrix in \eqref{eq:DMPI}, only conditions i) and ii) need to be verified, which only depend upon $n\times n$ matrices. \item Note that condition (ii) can always be ensured by choosing $\sigma_P$ sufficiently large. Crucially, our bound, depending on the network structure and the node dynamics, makes it possible to estimate the threshold value of $\sigma_P$ required to guarantee global convergence. This can be extremely useful when tuning the gains in practice and also for network design.
\item It is important to highlight that optimal values for the proportional layer ($\sigma_P$, $\lambda_2({\boldsymbol{\mathcal{L}}_P})$) can be obtained by properly choosing which node is labeled as node 1, so that the quantity ${\mu }/({{N\left| \eta \right|}})$ in condition ii) is minimized. \item The topology of the integral control layer can be chosen arbitrarily. Hence, the independence of its structure from that of the other layers makes it possible to minimize the number of control interventions across the network. \end{itemize} \end{rem} In the case where the graph $\mathscr{G}_{C}$ associated with the open-loop network is connected, it is possible to use the following result, which follows immediately from Theorem \ref{Th:I:PI}. \begin{cor} Let $\mathscr{G}_{cp}=\mbox{proj}(\mathscr{G}_C,\mathscr{G}_P)$ denote the projection graph of $\mathscr{G}_{C}$ and $\mathscr{G}_{P}$ and $\boldsymbol{\mathcal{L}}_{cp}$ be its associated Laplacian matrix; then, assuming $\mathscr{G}_{cp}$ is connected, the multiplex closed-loop network \eqref{eq:DMPI} reaches admissible consensus if conditions i) and iii) of Theorem \ref{Th:I:PI} are fulfilled, while condition ii) is replaced by $\lambda_2(\boldsymbol{\mathcal{L}}_{cp}) >({1}/{2})\left( {{\mu }/{({N\left| \eta \right|})} + \rho } \right)$. \end{cor} \begin{pf*}{Proof.} Since the graph $\mathscr{G}_{cp}=\mbox{proj}(\mathscr{G}_{C},\mathscr{G}_{P})$ is connected, we have that $\boldsymbol{\mathcal{L}}_{cp}=\mathbf{U}\mathbf{\Lambda}_{cp}\mathbf{U}^T$ where $\mathbf{U}$ is the matrix composed by the eigenvectors of $\boldsymbol{\mathcal{L}}_{cp}$ and $\mathbf{\Lambda}_{cp}=\mbox{diag}\{0,\lambda_2(\boldsymbol{\mathcal{L}}_{cp}),\cdots,\lambda_N(\boldsymbol{\mathcal{L}}_{cp}) \}$. Hence, we have that $\boldsymbol{\mathcal{H}}=({\boldsymbol{\mathcal{L}}_{cp} \otimes {\mathbf{I}_n}})$ in \eqref{eq:DMPI} and following a similar procedure as in Section \ref{Sec:conv} completes the proof.
\end{pf*} \begin{cor} Consider a connected open-loop network with homogeneous node dynamics, i.e., $\mathbf{A}_i=\mathbf{A}$, $i\in \mathcal{N}$, where $\mathbf{A}$ and $\mathbf{A}^{\prime}$ are Hurwitz stable. Then, the closed-loop network \eqref{eq:DMPI} reaches admissible consensus for any connected proportional and integral graph topologies with $\sigma_P,\sigma_I>0$. \end{cor} \begin{pf*}{Proof.} Firstly, note that when all nodes share the same intrinsic dynamics we have that $\mu=0$ in \eqref{eq:conda}, and ${\mathbf{\Psi} _{11}}=\mathbf{A}$. Hence, from the assumptions, conditions i) and iii) of Theorem \ref{Th:I:PI} are automatically satisfied and from the fact that matrix $\mathbf{A}+\mathbf{A}^T$ is Hurwitz, one has that $\rho<0$ in \eqref{eq:condc}; therefore, condition ii) of Theorem \ref{Th:I:PI} is also automatically fulfilled. \end{pf*} Now consider the case where ${\mathbf{\Psi} _{11}}$ is not Hurwitz stable; then, it is possible to apply a local feedback control action to a subset of the nodes so as to render ${\mathbf{\Psi} _{11}}$ Hurwitz stable and guarantee the existence of the consensus equilibrium $(\mathbf{x}^*,\mathbf{z}^*)$ in the closed-loop network. Or, equivalently, make the network \textit{consensusable} according to the definition given in \cite{Wang2014}. Specifically, consensusability can be achieved by adding an extra control input, say $\mathbf{v}_i$, to a fraction $K<N$ of the nodes so that $\mathbf{\Psi}_{11}$ is stable. For example, one can choose the controller \begin{equation} \label{control:II} {\mathbf{v}_i}(t) = \mathbf{H}_i{\mathbf{x}_i}(t) \end{equation} where $\mathbf{H}_i\in \mathbb{R}^{n\times n}$ is a gain matrix to be designed appropriately. Note that typically one could simply choose $K=1$ so that the dynamics of just one node is altered by this feedback controller.
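As a concrete illustration of this consensusability mechanism, the following minimal sketch (assuming NumPy is available; the three node matrices and the gain $c$ below are hypothetical and not taken from the examples of this paper) checks condition i) of Theorem \ref{Th:I:PI} before and after a single local feedback \eqref{control:II} with $K=1$ and $\mathbf{H}_1=-c\mathbf{I}_n$, which shifts the average dynamics by $-(c/N)\mathbf{I}_n$:

```python
import numpy as np

# Hypothetical open-loop node matrices (n = 2, N = 3); their average is unstable.
A = [np.array([[0.5, 1.0], [0.0, 0.2]]),
     np.array([[-0.3, 0.4], [-0.4, -0.3]]),
     np.array([[0.1, 0.0], [0.0, -0.1]])]
N, n = len(A), 2

Psi11 = sum(A) / N
# Symmetric part is not Hurwitz: condition i) fails for the open-loop network.
assert np.linalg.eigvalsh(Psi11 + Psi11.T).max() > 0

# Local feedback v_1 = H_1 x_1 on node 1 only (K = 1): the average dynamics
# becomes Psi11 + H_1/N, so H_1 = -c*I with c large enough stabilises it.
c = 3.0
Psi11_cl = Psi11 - (c / N) * np.eye(n)
assert np.linalg.eigvalsh(Psi11_cl + Psi11_cl.T).max() < 0  # symmetric part Hurwitz
assert abs(np.linalg.det(Psi11_cl)) > 1e-9                  # non-singular
```

Any other stabilising choice of $\mathbf{H}_1$ works equally well; the diagonal shift is used only because its effect on the average dynamics is immediate to read off.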
\begin{cor} The heterogeneous network \eqref{eq:sys:1} is said to be consensusable under the distributed control action \eqref{control:II} if there exist matrices $\mathbf{H}_i$ such that conditions i), ii) and iii) in Theorem \ref{Th:I:PI} are fulfilled. \end{cor} \begin{rem} Note that the presence of local controllers acting on some nodes can be used not only to improve the closed-loop network stability, but also to change the value of the consensus vector $\mathbf{x}_{\infty}$. \end{rem} \subsection{Control Algorithm} \label{cont:alg} The results presented so far can be distilled into the following algorithmic steps to design the multilayer PI network control strategy proposed in this paper. Specifically, \begin{enumerate} \item[S1] Compute matrix ${\mathbf{\Psi} _{11}}=(1/N)\sum\nolimits_{k = 1}^N {{\mathbf{A}_k}}$ from the open-loop network \eqref{eq:sys:1}. \item[S2] If the matrix ${\mathbf{\Psi} _{11}}$ is non-singular and its symmetric part ${\mathbf{\Psi} _{11}^{\prime}}$ is Hurwitz stable, then go to step S4; otherwise go to S3. \item[S3] Design local controllers \eqref{control:II} such that ${\mathbf{\Psi} _{11}}$ together with its symmetric part ${\mathbf{\Psi} _{11}^{\prime}}$ are Hurwitz. Note that matrices $\mathbf{H}_i$ can also be properly chosen for selecting different values of the consensus vector $\mathbf{x}_\infty$ in \eqref{eq:equili}. \item[S4] Select any connected and weighted undirected graph $\mathscr{G}_{I}$ for the integral layer, e.g., a minimal spanning tree.
Then compute the quantities $\mu$, $\eta$, and $\rho$ defined in \eqref{eq:cond}. \item[S5] Find a connected and weighted undirected graph $\mathscr{G}_{P}$ for the proportional layer and a value of the global coupling gain $\sigma_P$ such that $\sigma_P\lambda_2({\boldsymbol{\mathcal{L}}_P}) >(1/2)\left( {{\mu }/ ({{N\left| \eta \right|}}) + \rho } \right) - \sigma\lambda_2({\boldsymbol{\mathcal{L}}_C})$. \end{enumerate} \subsection{Example} \label{ExampleI} For the sake of simplicity and without loss of generality we consider three types of node dynamics: oscillatory ($\mathbf{E}_1$), stable ($\mathbf{E}_2$), and unstable ($\mathbf{E}_3$) \[{\mathbf{E}_1}: = \left[ {\begin{array}{*{20}{c}} {0}&1\\ -1&{0} \end{array}} \right],{\mathbf{E}_2}: = \left[ {\begin{array}{*{20}{c}} -1.5&0\\ {-1}&-1 \end{array}} \right],{\mathbf{E}_3}: = \left[ {\begin{array}{*{20}{c}} 1&1\\ 0&{0.5} \end{array}} \right]\] Then, we consider eight decoupled agents governed by \eqref{eq:sys:1}, with $\sigma = 0$, $\mathbf{A}_k = \mathbf{E}_1, k \in \{1,3\}$, $\mathbf{A}_k = \mathbf{E}_2,k \in \{2,5,7\}$, and $\mathbf{A}_k = \mathbf{E}_3, k \in \{4,6,8\}$ and disturbances $\mathbf{b}_i\in{\mathbb{R}^{2\times 1}}$ given by $\mathbf{B} = \left[\mathbf{b}_1^T,\cdots,\mathbf{b}_8^T\right]^T= \left[0,10,0,30,0,1,20,0,30,30,60,10,-10,40,0,0 \right]^T$. Note that no disturbance is acting on the 8-th node and that some of the agents are marginally stable or unstable. Nevertheless, their average dynamics is characterised by a full rank matrix ${\mathbf{\Psi} _{11}}$ so that Proposition \ref{equil_point} ensures the existence of a consensus equilibrium while Theorem \ref{Th:I:PI} can be used to prove convergence under the action of our multiplex PI strategy. To show the effectiveness of such an approach, for the sake of comparison we start by using a distributed proportional controller, setting $\sigma_I=0$ in \eqref{eq:cont:2b}. As can be seen in Fig. \ref{fig:oprn}, this can only guarantee bounded convergence.
\begin{figure}[tbp] \centering { \subfigure[] {\label{fig:oprna} {\includegraphics[scale=0.27]{open_a_5}}} \subfigure[] {\label{fig:oprnb} {\includegraphics[scale=0.27]{open_a_10}}} } \caption{State space evolution of the heterogeneous network controlled by distributed proportional control for: (a) $\sigma_P=5$ and (b) $\sigma_P=10$.} \label{fig:oprn} \end{figure} To achieve admissible consensus, we next deploy the multiplex PI control strategy presented in this paper. Following the control design steps in Section \ref{cont:alg}, we have from S1 that \[{\mathbf{\Psi} _{11}} = \left[ {\begin{array}{*{20}{c}} {-0.1875}&{0.625}\\ {-0.625}&{-0.1875} \end{array}} \right],{\mathbf{\Psi} _{11}^{\prime}} = \left[ {\begin{array}{*{20}{c}} { -0.375}&{0}\\ {0}&{-0.375} \end{array}} \right]\] where ${\mathbf{\Psi} _{11}}$ is a full rank matrix and ${\mathbf{\Psi} _{11}^{\prime}}$ is a Hurwitz stable matrix. Then, following S4, we select a ring network of 8 nodes with unitary weights ($\beta_{ij}=1 \quad \forall i,j \in \mathcal{N}$) as the connected integral network, and from \eqref{eq:cond} we have that $\mu=59.8328$, $\left|\eta\right|=0.3750$, and $\rho=2.618$. From S5 we have that $\sigma_P\lambda_2({\boldsymbol{\mathcal{L}}_P})>11.2812$. Then, choosing, again without loss of generality, a ring network with $\alpha_{ij}=1 \quad \forall i,j \in \mathcal{N}$, so that $\lambda_2({\boldsymbol{\mathcal{L}}_P}) = 0.5858$, the closed-loop network of 8 agents achieves admissible consensus for $\sigma_P>19.25$. We choose $\sigma_P=19.3$ and $\sigma_I=15$. The resulting evolution of the node states and integral actions is shown in Fig. \ref{fig:Tim:res}, where admissible consensus is reached, as expected, at the predicted value ${\mathbf{x}_\infty }:= - (1/N)\mathbf{\Psi} _{11}^{-1}\sum\nolimits_{k = 1}^N {{\mathbf{b} _k}}= [27.7064, -11.6881]^T$, and the integral terms remain bounded.
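The numerical values reported in this example can be reproduced with a few lines of code (a sketch assuming NumPy is available; it uses only the data stated above):

```python
import numpy as np

E1 = np.array([[0., 1.], [-1., 0.]])
E2 = np.array([[-1.5, 0.], [-1., -1.]])
E3 = np.array([[1., 1.], [0., 0.5]])
kind = {1: E1, 3: E1, 2: E2, 5: E2, 7: E2, 4: E3, 6: E3, 8: E3}
A = [kind[k] for k in range(1, 9)]
N = len(A)
sym = lambda M: M + M.T

Psi11 = sum(A) / N                                   # step S1
mu = np.linalg.eigvalsh(
    sum(np.linalg.matrix_power(sym(Ak) - sym(A[0]), 2) for Ak in A[1:])).max()
eta = np.linalg.eigvalsh(sym(Psi11)).max()           # here eta = -0.375
rho = max(np.linalg.eigvalsh(sym(Ak)).max() for Ak in A)

bound = 0.5 * (mu / (N * abs(eta)) + rho)            # condition ii) with sigma = 0
lam2_ring = 2.0 - np.sqrt(2.0)                       # lambda_2 of the unit 8-ring
sigma_P_min = bound / lam2_ring

b = np.array([[0, 10], [0, 30], [0, 1], [20, 0],
              [30, 30], [60, 10], [-10, 40], [0, 0]], dtype=float)
x_inf = -np.linalg.solve(Psi11, b.sum(axis=0)) / N
print(mu, rho, bound, sigma_P_min, x_inf)
```

Running this reproduces $\mu\approx 59.8328$, $\rho\approx 2.618$, the threshold $\approx 11.28$, $\sigma_P\gtrsim 19.26$, and $\mathbf{x}_\infty\approx[27.7064,\,-11.6881]^T$.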
\begin{figure}[tbp] \centering { \subfigure[] {\label{fig:Tim:resa} {\includegraphics[scale=0.27]{closed_x}}} \subfigure[] {\label{fig:Tim:resb} {\includegraphics[scale=0.27]{closed_z}}} } \caption{State space evolution of the closed-loop multiplex network for $\sigma_P=19.3$ and $\sigma_I=15$ where the proportional and integral networks have both ring structures with all weights equal to 3 and 1 respectively.} \label{fig:Tim:res} \end{figure} \subsection{Discussion} The admissible consensus conditions presented in Theorem \ref{Th:I:PI} only require the graph structure of the integral layer $\mathscr{G}_{I}$ to be connected. However, in general, we found that the stability of the consensus equilibrium and the rate of convergence are affected by the specific choice of $\mathscr{G}_{I}$. \begin{figure}[tbp] \centering { \subfigure[] {\label{fig:varNeta} {\includegraphics[scale=0.15]{all2all}}} \subfigure[] {\label{fig:varNetb} {\includegraphics[scale=0.15]{star}}} \subfigure[] {\label{fig:varNetc} {\includegraphics[scale=0.15]{ring}}} \subfigure[] {\label{fig:varNetd} {\includegraphics[scale=0.15]{tre}}} \subfigure[] {\label{fig:Ring_All2All} {\includegraphics[scale=0.27]{Ring_All2All}}} \subfigure[] {\label{fig:Ring_star} {\includegraphics[scale=0.27]{Ring_Star}}} \subfigure[] {\label{fig:Ring_Ring} {\includegraphics[scale=0.27]{Ring_Ring}}} \subfigure[] {\label{fig:Ring_Tree} {\includegraphics[scale=0.27]{Ring_Tree}}} } \caption{Different network structures with unitary weights considered for the integral control layer: (a) all-to-all, (b) star, (c) ring, and (d) Tree. Two-dimensional stability diagrams varying the topology of $\mathscr{G}_{I}$ [(e): all-to-all, (f) star, (g) ring, (h) tree]. 
Red regions denote parameter values where consensus is not achieved, blue regions those where consensus is attained.} \label{fig:StabilRegio} \end{figure} \begin{figure}[tbp] \centering { {\includegraphics[scale=0.42]{err_varTop}} } \caption{Time response of the consensus index $d_x$ when the topology of the integral network is varied.} \label{fig:varNet} \end{figure} To illustrate this point, we considered different structures for the graph $\mathscr{G}_{I}$ while leaving $\mathscr{G}_P$ unchanged, and computed two-dimensional stability diagrams in the control parameter space $(\sigma_P, \sigma_I)$, see Fig. \ref{fig:StabilRegio}. Namely, at each point in the $(\sigma_P, \sigma_I)$ space, we computed the largest real part of the eigenvalues of the error system dynamics \eqref{eq:final:PID}, depicting in blue those points where it is negative (consensus is achieved) and in red those where it is positive (convergence is not attained). As shown in Fig. \ref{fig:StabilRegio}(e)-(h), varying the structure of the integral layer has a notable effect on the shape of the stability region. We also found that changing the structure of $\mathscr{G}_I$ influences the speed of convergence of the closed-loop multiplex network towards consensus. Specifically, in Fig. \ref{fig:varNet}, we plot the time evolution of the consensus index ${d_x} := \left\| {\mathbf{x}(t) - (1/N)\left( {{\mathbb{1}_N}\mathbb{1}_N^T \otimes {\mathbf{I}_n}} \right)\mathbf{x}(t)} \right\|$, where $d_x=0$ indicates that the closed-loop network has reached admissible consensus. We observe that the structure of $\mathscr{G}_{I}$ changes the speed of convergence. Obtaining an analytical estimate of such a rate is a highly cumbersome task, as discussed in \cite{Kenji2011}, but some estimations can be found in the case where the agents are one-dimensional and homogeneous \cite{BurbanoLombana2015}.
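The diagrams above were generated by checking the sign of the rightmost nonzero eigenvalue of the closed-loop dynamics. A sketch of that computation (assuming NumPy; it uses the 8-agent example of Section \ref{ExampleI} with $\sigma=0$ and unit-weight rings for both control layers, and assembles the full closed-loop matrix directly from the structure of \eqref{eq:DMPI} rather than from the reduced error system):

```python
import numpy as np

E1 = np.array([[0., 1.], [-1., 0.]])
E2 = np.array([[-1.5, 0.], [-1., -1.]])
E3 = np.array([[1., 1.], [0., 0.5]])
A = [E1, E2, E1, E3, E2, E3, E2, E3]        # nodes 1..8 as in the example
N, n = 8, 2

def ring(N):
    """Laplacian of an N-node ring with unit weights."""
    L = 2.0 * np.eye(N)
    for i in range(N):
        L[i, (i + 1) % N] -= 1.0
        L[i, (i - 1) % N] -= 1.0
    return L

def closed_loop(sigma_P, sigma_I, L_P, L_I):
    """Closed-loop matrix of the multiplex PI network (sigma = 0)."""
    Ahat = np.zeros((N * n, N * n))
    for k, Ak in enumerate(A):
        Ahat[k * n:(k + 1) * n, k * n:(k + 1) * n] = Ak
    In = np.eye(n)
    top = np.hstack([Ahat - sigma_P * np.kron(L_P, In), np.eye(N * n)])
    bot = np.hstack([-sigma_I * np.kron(L_I, In), np.zeros((N * n, N * n))])
    return np.vstack([top, bot])

M = closed_loop(19.3, 15.0, ring(N), ring(N))
re = np.sort(np.linalg.eigvals(M).real)
# n = 2 structural zero eigenvalues (the conserved integral modes); every
# other eigenvalue must have strictly negative real part for consensus.
assert abs(re[-1]) < 1e-6 and abs(re[-2]) < 1e-6 and re[-3] < 0
```

Sweeping $\sigma_P$ and $\sigma_I$ over a grid and recording the sign of the third-largest real part reproduces diagrams of the kind shown in the figure.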
Finally, it is worth pointing out that in a practical implementation of the multiplex strategy \eqref{eq:cont:2b}, the relative difference $(\mathbf{x}_j(t)-\mathbf{x}_i(t))$ between agents may be affected by measurement errors \cite{Garulli2011,Liu2011,Meng2014}. This might render the integral terms unable to converge. In practice, anti-windup strategies (saturations) can be added to the integral terms, or higher-order actions (e.g., PI$^m$) can be used. Also, the multiplex nature of the proposed PI strategy can be further exploited if an estimate of the measurement errors is available. In this case, given that the integral and control layers can have different structures, integral actions can be deployed only on those edges that are less noisy than the others. Preliminary simulations (not reported here for the sake of brevity) confirm this observation, which will be the subject of future work. \section{Application to Power Systems} In this section, we show that the convergence analysis used to prove stability of the multiplex PI strategy developed in this paper can be effectively used to prove the emergence of synchronisation in heterogeneous networks of power generators. Specifically, we consider $N$ power generators governed by the swing equation \cite{Motter2013power} \begin{equation} \label{swing:eq} \frac{{2{H_i}}}{{{\omega _R}}}{\ddot \delta _i} = P_i^m(t) - P_i^{net}(t), i\in \mathcal{N} \end{equation} where $H_i$ and $\omega _R$ are constants representing the inertia constant and the reference frequency for the $i$-th generator. The quantity $P_i^m(t):=P_i^*-d_i{\dot \delta _i}(t)$ is the mechanical power provided by the $i$-th generator; it consists of a constant power injection $P_i^*$ and a damping term $d_i{\dot \delta _i}(t)$, $d_i>0$, which models power losses and primary control loops. Moreover, $P_i^{net}(t)$ is the power demanded by the network.
Note that when \eqref{swing:eq} is at rest at an equilibrium, $P_i^m = P_i^{net}$ and the frequency of each generator ${\omega _i}(t): = {\dot \delta _i}(t)$ remains equal to a common constant for all generators in the grid. For the sake of simplicity, we linearize the swing equation \eqref{swing:eq} around the synchronous state ${\omega _1}(t) = \cdots = {\omega _N}(t)$ and, letting $m_i= 2H_i/\omega _R$, we obtain \cite{Andreasson2014} \begin{subequations} \label{eq:cont:PS} \begin{alignat}{3} \label{eq:cont:PSa} {m_i\dot \omega _i}(t) &= - d_i\omega _i(t) + P_i^* - {P_i^{net}}(t)+ v_i(t)\\ \label{eq:cont:PSb} {\dot P_i^{net}}(t) &= \sum\limits_{j = 1,j \ne i}^N {{E_i}{E_j}\left| {{Y_{ij}}} \right|\left( {{\omega _i} - {\omega _j}} \right)}, i \in \mathcal{N} \end{alignat} \end{subequations} where ${E_i}>0$ is the nodal voltage, and $Y_{ij}$ is the admittance between buses $i$ and $j$. To achieve synchronization, we consider the distributed control protocol \begin{equation} \label{PIDcontr:PWS} {v_i}(t) = m_i\left(k_i\omega_i(t) - \sigma_P \sum\nolimits_{j = 1}^N {{\mathcal{L}_{P,ij}}{\omega _j}(t)}\right) \end{equation} with $k_i\in\mathbb{R}$ being a constant representing a local feedback gain for the $i$-th node, $\sigma_P>0$, and $\boldsymbol{\mathcal{L}}_P\in\mathbb{W}$ representing the Laplacian matrix of the proportional layer $\mathscr{G}_{P}$ with link weights $\alpha_{ij}$. Now, let $\beta_{ij}:={E_i}{E_j}\left| {{Y_{ij}}} \right|$ be the weights on each edge of the power network in \eqref{eq:cont:PSb} and $\boldsymbol{\mathcal{L}}_I\in\mathbb{W}$ the associated Laplacian matrix, which describes the equivalent distributed integral action in \eqref{eq:cont:PSb}.
Setting $z_i(t)=-(1/m_i) P_i^{net}(t)$, the problem becomes that of proving convergence in the heterogeneous network given by \begin{subequations} \label{eq:exampl:1} \begin{alignat}{3} \label{eq:exampl:1a} \dot {{\boldsymbol\omega }}(t) &= \left( {\mathbf{H} -\sigma_P \boldsymbol{\mathcal{L}}_P} \right){\boldsymbol\omega}(t) + {\mathbf{z}}(t) + \mathbf{B}\\ \label{eq:exampl:1b} \dot {\mathbf{z}}(t) &= - \mathbf{M}\boldsymbol{\mathcal{L}}_I\boldsymbol\omega(t) \end{alignat} \end{subequations} where ${\boldsymbol\omega}(t):=[{\omega _1}(t), \cdots ,{\omega _N}(t)]^T$ and ${\mathbf{z}}(t):=[{z _1}(t), \cdots ,{z _N}(t)]^T$ are the stack vectors of frequencies and rescaled electrical powers, respectively, $\mathbf{H}:=\mbox{diag}\{k_1-d_1/m_1,\cdots,k_N-d_N/m_N\}$, $\mathbf{M}:=\mbox{diag}\{1/m_1,\cdots,1/m_N\}$, and $\mathbf{B} := [P_1^*/m_1,\cdots,P_N^*/m_N]^T$ is the vector of rescaled power injections. The closed-loop power system \eqref{eq:exampl:1} has the same structure as the multiplex network \eqref{eq:DMPI}, where the input biases $\mathbf{b}_i$ represent the rescaled constant power injections $P_i^*/m_i$ of each node. \begin{prop} \label{equil_point:powergrid} The closed-loop power network \eqref{eq:exampl:1} has a unique equilibrium given by ${\boldsymbol\omega^*}:=\omega_{\infty}\mathbb{1}_{N}$, with $\omega_{\infty}:=-{{\sum\nolimits_{i = 1}^N {{P_i^*}} }} / {{\sum\nolimits_{i = 1}^N {( {m_ik_i-d_i} )} }}$ and ${\mathbf{z}^* }:=- ( {\omega_{\infty}\mathbf{H}\mathbb{1}_{N} + \mathbf{B} } )$. \end{prop} \begin{pf*}{Proof.} As done in the proof of Proposition \ref{equil_point}, by setting the left-hand side of \eqref{eq:exampl:1} to zero, one has that ${\boldsymbol\omega^*} = a\mathbb{1}_N$ for some $a \in \mathbb{R}$ and ${\mathbf{z}^* } = -\left( {a\mathbf{H}\mathbb{1}_N + \mathbf{B} } \right)$. Now letting $\mathbf{v}:=[m_1,\cdots,m_N]^T$, by the definition of $\mathbf{z}(t)$ one has that $\mathbf{v}^T\mathbf{z}(t)=0$.
Therefore $\mathbf{v}^T\mathbf{z}^* = 0$ and we obtain $a= - {\mathbf{v}^T\mathbf{B}}/{\mathbf{v}^T\mathbf{H}{\mathbb{1}_N}}=:\omega_{\infty}$. \end{pf*} \begin{cor} Under the control dynamics \eqref{PIDcontr:PWS}, the power network \eqref{eq:cont:PS} with $m_i=m$, $m>0$, $\forall i\in \mathcal{N}$, asymptotically converges to $\omega_{\infty}$ if the following conditions are satisfied: \begin{subequations} \begin{equation} \label{eq:c1} {{\psi _{11}}}=\sum\limits_{i = 1}^N {\left( {{k_i} - \frac{{{d_i}}}{{{m}}}} \right)} < 0 \end{equation} \begin{equation} \label{eq:c2} \sigma_P {\lambda _2}\left( \boldsymbol{\mathcal{L}}_P \right) > \frac{{\sum\nolimits_{i = 1}^N {{{\left( {{k_i} - \frac{{{d_i}}}{{{m}}}} \right)}^2}} }}{{N\left| {{\psi _{11}}} \right|}} + \mathop {\max }\limits_i \left\{ {{k_i} - \frac{{{d_i}}}{{{m}}}} \right\} \end{equation} \end{subequations} \label{Coll:PI} \end{cor} \begin{pf*}{Proof.} Note that \eqref{eq:exampl:1} can be seen as a group of $N$ first-order heterogeneous agents controlled by a multiplex PI strategy. Specifically, letting $A_i={k_i} - {d_i}/{m}$, the dynamics of each node can be written as \[\begin{array}{l} {{\dot \omega }_i}(t) = A_i{\omega _i}(t) + {b_i} - \sigma_P \sum\nolimits_{j = 1}^N {{\mathcal{L}_{P,ij}}{\omega _j}(t)} + {z_i}(t)\\ {{\dot z}_i}(t) = - (1/m)\sum\nolimits_{j = 1}^N {{\mathcal{L}_{I,ij}}{\omega _j}(t)} \end{array}\] Therefore, using Theorem \ref{Th:I:PI} with $\sigma=0$ and $\sigma_I = 1/m$ completes the proof. \end{pf*} \subsection{Illustrative example} As an illustration, consider the power network shown in Fig. \ref{fig:PGa}. For the sake of simplicity, and without loss of generality, we consider all line admittances and nodal voltages to be $Y_{ij}=0.0001$ and $E_i=2\,$kV, $\forall i,j\in \mathcal{N}$, respectively. Moreover, we assume $m=0.2$ and four different values of damping, that is $d_i=0.5$, for $i\in\{1,4,7,8,11,14\}$, $d_i=0.45$, $i\in\{2,6,9,13,15\}$, $d_i=0.40$, $i\in\{3,10,12\}$, while $d_i=0.6$, $i\in\{5,16\}$.
Furthermore, the vector containing the nominal power injections (expressed in MW) for each node is given by $\mathbf{P}^{*} = [40, 30, 30, 22, 10, 20, 50, 35, 50, 20, 30, 25, 30, 20, 17, 30]^T$. Following the approach in \cite{Andreasson2014}, we assume that the network has been operating in these nominal conditions for $t<0$ [see Fig. \ref{fig:PGc}]. As the power network \eqref{eq:exampl:1} has a natural integral action encoded by the phase angles $\delta_i(t)$, consensus is expected on a value dependent on the network parameters and the nominal power injections. Such a value can be easily computed from Proposition \ref{equil_point:powergrid} by setting all $k_i =0$, yielding $\omega_{\infty}=60\,$Hz. Now, consider the scenario where, at $t=0\,$s, the power injections at buses $4$, $8$, and $10$ are decreased by $600\,$kW from their nominal values; consequently, the frequencies of all generators decrease as well. To compensate for those disturbances, we use local feedback controllers on a fraction of the nodes together with a distributed proportional action to manipulate and stabilize the desired convergence value. Specifically, we introduce feedback controllers with appropriate gains at nodes $1$, $3$, $5$, $8$, $10$, and $14$ [denoted by self-feedback loops indicated in black in Fig. \ref{fig:PGa}] in order to shift $\omega_{\infty}$ back to the desired value of $60\,$Hz. To address the stability of such a consensus equilibrium, we use Corollary \ref{Coll:PI}. Firstly, we find that ${{\psi _{11}}}=-2.3875$, so condition \eqref{eq:c1} is fulfilled. Secondly, we have that $\mathop {\max }\nolimits_i \left\{ {{k_i} - {{{d_i}}}/{{{m}}}} \right\}=2$ and therefore the power network reaches admissible consensus if $\sigma_P {\lambda _2}\left( \boldsymbol{\mathcal{L}}_P \right) > 6.3991$. Choosing a simple path graph for the proportional control layer, as shown in Fig. \ref{fig:PGb}, yields $\sigma_P>0.8326$ to guarantee convergence.
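Proposition \ref{equil_point:powergrid} can also be checked numerically. The sketch below (assuming NumPy) uses a small hypothetical three-bus system with a path graph for both layers and $k_i=0$, not the sixteen-bus network of the figure, and verifies both the equilibrium residuals and the convergence of a forward-Euler simulation of \eqref{eq:exampl:1} to $\omega_\infty$:

```python
import numpy as np

# Hypothetical 3-bus sketch (path graph for both layers, k_i = 0):
m = np.array([1.0, 1.0, 1.0])
d = np.array([1.0, 1.0, 2.0])
k = np.zeros(3)
P = np.array([1.0, 2.0, -0.5])                     # nominal injections
L = np.array([[1., -1., 0.], [-1., 2., -1.], [0., -1., 1.]])
sigma_P = 2.0

H = np.diag(k - d / m)
M_ = np.diag(1.0 / m)
B = P / m

omega_inf = -P.sum() / (m * k - d).sum()           # Proposition: 2.5 / 4 = 0.625

# Equilibrium residuals vanish at (omega*, z*).
w_star = omega_inf * np.ones(3)
z_star = -(omega_inf * H @ np.ones(3) + B)
assert np.allclose((H - sigma_P * L) @ w_star + z_star + B, 0.0)
assert np.allclose(-M_ @ L @ w_star, 0.0)

# Forward-Euler simulation from rest: frequencies converge to omega_inf.
w, z = np.zeros(3), np.zeros(3)
dt = 0.005
for _ in range(40_000):                            # 200 s of simulated time
    w, z = (w + dt * ((H - sigma_P * L) @ w + z + B),
            z + dt * (-M_ @ L @ w))
assert np.abs(w - omega_inf).max() < 1e-3
```

Note that starting the integral states from zero enforces the conservation $\mathbf{v}^T\mathbf{z}(t)=0$ used in the proof, which is what selects the unique equilibrium of the Proposition.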
Heuristically, we found that setting $\sigma_P=55$ also ensures that the maximum frequency overshoot during the transient is less than $100\,$mHz (necessary to avoid unwanted damage to the grid). The behaviour of the closed-loop power network is shown in Fig. \ref{fig:PGc}, where the distributed controller is switched on at $t=0.1\,$s. As expected, we observe that the power network quickly regains stability at the desired target frequency. \begin{figure}[tbp] \centering { \subfigure[] {\label{fig:PGa} {\includegraphics[scale=0.134]{c}}} \subfigure[] {\label{fig:PGb} {\includegraphics[scale=0.134]{d}}} \subfigure[] {\label{fig:PGc} {\includegraphics[scale=0.5]{power_gridOCL}}} } \caption{(a), (b) Network architectures representing the integral and proportional control layers, respectively. The gains of the proportional layer are set as $\alpha_{ij}=200$. (c) Evolution of the power network. The blue dash-dot line represents the convergence value $\omega_{\infty}$.} \label{fig:F_E2} \end{figure} \section{Conclusions} We have proposed a novel approach for controlling networks of heterogeneous nodes with generic $n$-dimensional linear dynamics in the presence of constant biases (disturbances). In particular, we discussed the use of different control layers, each with its own topology, deploying proportional and integral actions across the network. We proved convergence of the strategy and derived conditions to select the control gains as a function of the open loop and control network structures and the node dynamics. We showed the effectiveness of the proposed strategy via numerical simulations on two representative examples. Several open problems are left for further study. First and foremost, the effect of varying the structure of the network control layers should be studied in more detail, as preliminary results show that the performance of the network evolution towards consensus can be affected by such variations.
We wish to emphasize that more sophisticated approaches can be developed by considering other linear or nonlinear control actions rather than the simpler proportional and integral actions considered in this paper. For example, a robustifying distributed action could be designed by considering an extra network control layer of variable structure controllers. This is currently under investigation and will be presented elsewhere. \bibliographystyle{plain}
\subsection{Algorithm to Generate Noise with Staircase Distribution} \begin{algorithm} \caption{Generation of Random Variable with Staircase Distribution} \label{algo:staircase_mech} \begin{algorithmic} \State \textbf{Input: } $\e$, $\D$, and $\gamma \in [0,1]$. \State \textbf{Output: } $\noise$, a random variable (r.v.) with staircase distribution specified by $\e, \D$ and $\gamma$. \\ \State Generate a r.v. $S$ with $\text{Pr}[S = 1] = \text{Pr}[S = -1] = \frac{1}{2}$. \State Generate a geometric r.v. $G$ with $\text{Pr}[G = i] = (1-b)b^i $ for integer $i \ge 0$, where $b = e^{-\e}$. \State Generate a r.v. $U$ uniformly distributed in $[0,1]$. \State Generate a binary r.v. $B$ with $\text{Pr}[B = 0] = \frac{\gamma}{\gamma + (1-\gamma)b}$ and $\text{Pr}[B = 1] = \frac{(1-\gamma)b}{\gamma + (1-\gamma)b}$. \State $\noise \gets S \left( (1-B)\left((G+\gamma U)\D\right) + B\left((G+\gamma + (1-\gamma)U)\D\right)\right)$. \State Output $\noise$. \end{algorithmic} \end{algorithm} In the formula, \begin{align} \noise \gets S \left( (1-B)\left((G+\gamma U)\D\right) + B\left((G+\gamma + (1-\gamma)U)\D\right) \right), \end{align} \begin{itemize} \item $S$ determines the sign of the noise, \item $G$ determines which interval $[G\D, (G+1)\D)$ the noise lies in, \item $B$ determines which subinterval of $[G\D, (G+\gamma)\D)$ and $[(G+\gamma)\D, (G+1)\D)$ the noise lies in, \item $U$ helps to uniformly sample the subinterval. \end{itemize} \subsection{Optimal Noise Probability Distribution with Minimum Expectation of Noise Amplitude} To minimize the expectation of amplitude, we have cost function $\loss(x) = |x|$, and it is easy to see that it satisfies Property \ref{property1} and Property \ref{property2}. To simplify notation, define $b \triangleq e^{-\e}$, and define \begin{align} V(\p) \triangleq \int_{x \in \R} \loss(x) \p(dx). \end{align} for a given probability distribution $\p$. 
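Algorithm \ref{algo:staircase_mech} translates directly into code. The sketch below (assuming NumPy is available) also estimates $V(\p_\gamma)$ for $\loss(x)=|x|$ by Monte Carlo at the optimal $\gamma$ derived in the theorem that follows, comparing it against the closed-form minimum $\D e^{\e/2}/(e^{\e}-1)$:

```python
import numpy as np

def staircase_noise(eps, Delta, gamma, size, rng):
    """Sample the staircase distribution via Algorithm 1."""
    b = np.exp(-eps)
    S = rng.choice([-1.0, 1.0], size=size)            # sign
    G = rng.geometric(1.0 - b, size=size) - 1         # Pr[G = i] = (1 - b) b^i
    U = rng.random(size)                              # uniform in [0, 1]
    p_B1 = (1.0 - gamma) * b / (gamma + (1.0 - gamma) * b)
    B = rng.random(size) < p_B1                       # subinterval selector
    mag = np.where(B, G + gamma + (1.0 - gamma) * U, G + gamma * U)
    return S * mag * Delta

rng = np.random.default_rng(0)
eps, Delta = 1.0, 1.0
gamma_star = 1.0 / (1.0 + np.exp(eps / 2.0))
X = staircase_noise(eps, Delta, gamma_star, 400_000, rng)

empirical = np.abs(X).mean()
closed_form = Delta * np.exp(eps / 2.0) / (np.exp(eps) - 1.0)
assert abs(empirical - closed_form) < 0.01
```

With 400{,}000 samples the Monte Carlo error is well below the $0.01$ tolerance used here, so the empirical mean amplitude matches the closed-form value to two decimal places.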
\begin{theorem}\label{thm:1} To minimize the expectation of the amplitude of noise, the optimal noise probability distribution has probability density function $f_{\gamma^*}(\cdot)$ with \begin{align} \gamma^* = \frac{1}{1 + e^{\frac{\e}{2}}}, \end{align} and the minimum expectation of noise amplitude is \begin{align} V(\p_{\gamma^*}) = \D \frac{e^{\frac{\e}{2}}}{e^{\e}-1}. \end{align} \end{theorem} \begin{proof} See Appendix \ref{app:1}. \end{proof} \begin{figure}[t] \centering \includegraphics[scale=0.2]{optimalr.pdf} \caption{Optimal $\gamma^*$ for cost function $L(x) = x^2$} \label{fig:optimalr} \end{figure} Next, we compare the performances of the optimal noise probability distribution and the Laplacian mechanism. The Laplace distribution has probability density function \begin{align} f(x) = \frac{1}{2\lambda} e^{-\frac{|x|}{\lambda}}, \end{align} where $\lambda = \frac{\D}{\e}$. So the expectation of the amplitude of noise with the Laplace distribution is \begin{align} V_{Lap} \triangleq \int_{-\infty}^{+\infty} |x|f(x) dx = \frac{\D}{\e}. \end{align} By comparing $V(\p_{\gamma^*})$ and $V_{Lap}$, it is easy to see that in the high privacy regime ($\e$ is small) the Laplacian mechanism is asymptotically optimal, and the additive gap from the optimal value goes to 0 as $\e \to 0$; in the low privacy regime ($\e$ is large), $V_{Lap} = \frac{\D}{\e}$, while $V(\p_{\gamma^*}) = \Theta(\D e^{-\frac{\e}{2}})$. Indeed, \begin{corollary}\label{thm:2} Consider the cost function $\loss(x) = |x|$. In the high privacy regime ($\e$ is small), \begin{align} V_{Lap} - V(\p_{\gamma^*}) &= \D\left( \frac{\e}{24} - \frac{7\e^3}{5760} + O(\e^5) \right), \end{align} as $\e \to 0$. And in the low privacy regime ($\e$ is large), \begin{align} V_{Lap} &= \frac{\D}{\e}, \\ V(\p_{\gamma^*}) &= \Theta(\D e^{-\frac{\e}{2}}), \end{align} as $\e \to +\infty$. 
\end{corollary} \subsection{Optimal Noise Probability Distribution with Minimum Power} Given the probability distribution $\p$ of the noise, the power of the noise is defined as $\int_{x \in \R} x^2 \p (dx)$. Accordingly, the cost function is $\loss(x) = x^2$, and it is easy to see that it satisfies Property \ref{property1} and Property \ref{property2}. Recall $b \triangleq e^{-\e}$. \begin{theorem}\label{thm:3} To minimize the power of noise (accordingly, $\loss(x) = x^2$), the optimal noise probability distribution has probability density function $f_{\gamma^*}(\cdot)$ with \begin{align} \gamma^* = -\frac{b}{1-b} + \frac{(b-2b^2+2b^4-b^5)^{1/3}}{2^{1/3} (1-b)^2}, \end{align} and the minimum power of noise is \begin{align} V(\p_{\gamma^*}) = \D^2\frac{ 2^{-2/3} b^{2/3} (1+b)^{2/3} + b }{ (1-b)^2 }. \end{align} \end{theorem} \begin{proof} See Appendix \ref{app:3}. \end{proof} Next, we compare the performances of the optimal noise probability distribution and the Laplacian mechanism. The power of noise with the Laplace distribution with $\lambda = \frac{\D}{\e}$ is \begin{align} V_{Lap} \triangleq \int_{-\infty}^{+\infty} x^2 \frac{1}{2\lambda} e^{-\frac{|x|}{\lambda}} dx = 2 \frac{\D^2}{\e^2}. \end{align} By comparing $V(\p_{\gamma^*})$ and $V_{Lap}$, it is easy to see that in the high privacy regime ($\e$ is small) the Laplacian mechanism is asymptotically optimal, and the additive gap from the optimal value is upper bounded by a constant as $\e \to 0$; in the low privacy regime ($\e$ is large), $V_{Lap} = \frac{2\D^2}{\e^2}$, while $V(\p_{\gamma^*}) = \Theta(\D^2 e^{-\frac{2\e}{3}})$. Indeed, \begin{corollary}\label{thm:4} Consider the cost function $\loss(x) = x^2$. In the high privacy regime ($\e$ is small), \begin{align} V_{Lap}- V(\p_{\gamma^*}) &= \D^2 \left( \frac{1}{12} - \frac{\e^2}{720} + O(\e^4) \right), \end{align} as $\e \to 0$. 
And in the low privacy regime ($\e$ is large), \begin{align} V_{Lap} &= \frac{2\D^2}{\e^2}, \\ V(\p_{\gamma^*}) &= \Theta(\D^2 e^{-\frac{2\e}{3}}), \end{align} as $\e \to +\infty$. \end{corollary} \subsection{Problem Formulation} We first give the problem formulation in the discrete setting. Consider an integer-valued query function \footnote{Without loss of generality, we assume that in the discrete setting the query output is integer-valued. Indeed, any uniformly-spaced discrete setting can be reduced to the integer-valued setting by scaling the query output. } \begin{align} q: \database \rightarrow \Z, \end{align} where $\database$ is the domain of the databases. Let $\D$ denote the sensitivity of the query function $q$ as defined in \eqref{def:sensitivity}. Clearly, $\D$ is an integer in this discrete setting. In the discrete setting, a generic randomized mechanism $\KM$ is a family of noise probability distributions over the domain $\Z$ indexed by the query output (denoted by $i$), i.e., \begin{align} \KM = \{ \nm_i : i \in \Z \}, \end{align} and given dataset $D$, the mechanism $\KM$ will release the query output $i = q(D)$ corrupted by additive random noise with probability distribution $\nm_i$: \begin{align} \KM(D) = i + X_{i}, \end{align} where $X_{i}$ is a discrete random variable with probability distribution $\nm_{i}$. Then, the $\e$-differential privacy constraint \eqref{eqn:dpgeneral} on $\KM$ requires that for any $i_1, i_2 \in \Z$ such that $ |i_1 - i_2| \le \D$ (corresponding to the query outputs for two neighboring datasets), \begin{align} \nm_{i_1} (j) \le e^{\e} \nm_{i_2}(j + i_1 - i_2), \forall j \in \Z, |i_1 - i_2| \le \D, \label{eqn:disgeneralnoise} \end{align} and the goal is to minimize the worst-case cost \begin{align} \sup_{i \in \Z} \sum_{j=-\infty}^{+\infty} \loss(j) \nm_i(j) \end{align} subject to the differential privacy constraint \eqref{eqn:disgeneralnoise}. 
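To make the discrete formulation concrete, the constraint \eqref{eqn:disgeneralnoise} and the cost can be checked numerically for a candidate query-output-independent noise probability mass function. The following Python sketch (helper names are ours; the infinite index sets are truncated to a finite window, which is an approximation) illustrates this:

```python
import math

def satisfies_dp(pmf, eps, delta, window=50):
    """Check the discrete eps-DP constraint nu(i) <= e^eps * nu(i + d)
    for all |d| <= delta, with the noise pmf given as a function pmf(i).
    The (infinite) index set is truncated to [-window, window]."""
    e_eps = math.exp(eps)
    for i in range(-window, window + 1):
        for d in range(-delta, delta + 1):
            if pmf(i) > e_eps * pmf(i + d) + 1e-12:
                return False
    return True

def expected_cost(pmf, loss, window=200):
    """sum_j loss(j) * pmf(j), truncated; for query-output-independent
    noise this also equals the worst-case cost over query outputs."""
    return sum(loss(j) * pmf(j) for j in range(-window, window + 1))
```

For example, the geometric probability mass function $\frac{1-b}{1+b}b^{|i|}$ with $b = e^{-\e}$ passes the check for $\D = 1$, while any probability mass function with bounded support fails, since some positive mass would sit next to zero mass.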
\subsection{Optimality of Query-Output Independent Perturbation} In this section, we show that query-output independent perturbation is optimal in the discrete setting. For any integer $n \ge 1$, define \begin{align} \KM_{n} \triangleq \{ \; \{\nm_i\}_{i \in \Z} | \; \{\nm_i\}_{i \in \Z} \; \text{satisfies}\; \eqref{eqn:disgeneralnoise},\; \text{and} \; \nm_{i+n} = \nm_{i}, \forall i \in \Z \}. \end{align} \begin{theorem}\label{thm:dismaingeneral} Given any family of probability distributions $\{\nm_i\}_{i \in \Z} \in \cup_{n \ge 1} \KM_{n}$, there exists a probability distribution $\nm^*$ such that the family of probability distributions $\{\nm^*_i\}_{i \in \Z}$ with $\nm^*_i \equiv \nm^*$ satisfies the differential privacy constraint \eqref{eqn:disgeneralnoise} and \begin{align} \sup_{i \in \Z} \sum_{j=-\infty}^{+\infty} \loss(j) \nm^*_i(j) \le \sup_{i \in \Z} \sum_{j=-\infty}^{+\infty} \loss(j) \nm_i(j) . \end{align} \end{theorem} \begin{IEEEproof} The proof is essentially the same as the proof of Theorem \ref{thm:maingeneral}, and thus is omitted. \end{IEEEproof} Theorem \ref{thm:dismaingeneral} states that if we assume the family of noise probability distributions is periodic in $i$ (the period can be arbitrary), then in the optimal mechanism we can assume $\nm_i$ does not depend on $i$. We conjecture that this technical periodicity condition can be removed. \subsection{Optimal Noise Probability Distribution} Due to Theorem \ref{thm:dismaingeneral}, we restrict attention to query-output independent perturbation mechanisms. Let $q(D)$ be the value of the query function evaluated at dataset $D$. The noise-adding mechanism $\KM$ will output \begin{align} \KM(D) = q(D) + \noise, \end{align} where $\noise$ is the integer-valued noise added by the mechanism to the output of the query function. Let $\p$ be the probability distribution of the noise $\noise$. 
Then the optimization problem we study is \begin{align} \mathop{\text{minimize}}\limits_{ \p} & \ \sum_{i = -\infty}^{+\infty} \loss(i) \p(i) \label{eqn:optdis}\\ \text{subject to} & \ \p(i) \le e^{\e} \p(i+d), \forall i \in \Z, d \in \Z, |d| \le \D. \label{eqn:dpdiscrete} \end{align} It turns out that when the cost function $\loss(\cdot)$ is symmetric and monotonically increasing for $i \ge 0$, the solution to the above optimization problem is a discrete variant of the staircase mechanism in the continuous setting. As in the continuous setting, we assume that the cost function $\loss(\cdot)$ is symmetric and monotonically increasing for $i \ge 0$, i.e., \begin{property}\label{propertydis} \begin{align} \loss(i) &= \loss(-i), \forall i \in \Z \\ \loss(i) &\le \loss(j), \forall i,j \in \Z, 0 \le i \le j . \end{align} \end{property} The easiest case is $\D = 1$, for which the solution is the geometric mechanism, proposed in \cite{Ghosh09}. Recall $b \triangleq e^{-\e}$. \begin{theorem}\label{thm:discrete1} If the cost function $\loss(\cdot)$ satisfies Property \ref{propertydis} and $\D = 1$, then the geometric mechanism, which has a probability mass function $\p$ with $\p(i) = \frac{1-b}{1+b}b^{|i|}, \forall i \in \Z$, is the optimal solution to \eqref{eqn:optdis}. \end{theorem} \begin{proof} See Appendix \ref{sec:discrete_proof}. \end{proof} \begin{figure}[t] \centering \includegraphics[scale=0.4]{fig_dis_stair.pdf} \caption{The Staircase-Shaped Probability Mass Function $\p_{\mt}(i)$} \label{fig:disfgamma} \end{figure} For fixed general $\D \ge 2$, consider a class of symmetric and staircase-shaped probability mass functions defined as follows. 
Given an integer $ 1 \le \mt \le \D$, denote $\p_{\mt}$ as the probability mass function defined by \begin{align} \p_{\mt}(i) = \begin{cases} a(\mt) & 0 \le i < \mt \\ e^{-\e} a(\mt) & \mt \le i < \D \\ e^{-k\e} \p_{\mt}(i - k\D) & k \D \le i < (k+1)\D \; \text{for} \; k \in \N \\ \p_{\mt}(-i) & i<0 \end{cases} \label{eqn:defprdis} \end{align} for $i \in \Z$, where \begin{align} a(\mt) \triangleq \frac{ 1 - b}{2 \mt + 2 b (\D - \mt) - (1 - b) }. \end{align} It is easy to verify that for any $1 \le \mt \le \D$, $\p_{\mt}$ is a valid probability mass function and it satisfies the $\e$-differential privacy constraint \eqref{eqn:dpdiscrete}. We plot the staircase-shaped probability mass function $\p_{\mt}(i)$ in Figure \ref{fig:disfgamma}. Let $\sP$ be the set of all probability mass functions which satisfy the $\e$-differential privacy constraint \eqref{eqn:dpdiscrete}. \begin{theorem}\label{thm:discrete2} For $\D \ge 2$, if the cost function $\loss(x)$ satisfies Property \ref{propertydis}, then \begin{align} \inf_{\p \in \sP} \sum_{i = -\infty}^{+\infty} \loss(i) \p(i) = \min_{\{\mt \in \N | 1 \le \mt \le \D \}} \sum_{i = -\infty}^{+\infty} \loss(i) \p_{\mt}(i). \end{align} \end{theorem} \begin{proof} See Appendix \ref{sec:discrete_proof}. \end{proof} Therefore, the optimal noise probability distribution to preserve $\e$-differential privacy for integer-valued query function has a staircase-shaped probability mass function, which is specified by three parameters $\e$, $\D$ and $\mt^* = \mathop{\arg \min} \limits_{\{\mt \in \N | 1 \le \mt \le \D \} } \sum_{i = -\infty}^{+\infty} \loss(i) \p_{\mt}(i)$. In the case $\D = 1$, the staircase-shaped probability mass function is reduced to the geometric mechanism. \subsection{Outline of Proof} The proof technique is very similar to the proof in the continuous settings in Appendix \ref{sec:proof}. 
The proof consists of 5 steps in total, and in each step we narrow down the set of probability distributions where the optimal probability distribution should lie in: \begin{itemize} \item Step 1 proves that we only need to consider probability mass functions which are monotonically increasing for $i \le 0$ and monotonically decreasing for $i \ge 0$. \item Step 2 proves that we only need to consider symmetric probability mass functions. \item Step 3 proves that we only need to consider symmetric probability mass functions which have periodic and geometric decay for $i \ge 0$, and this proves Theorem \ref{thm:discrete1}. \item Step 4 and Step 5 prove that the optimal probability mass function over the interval $[0, \D)$ is a discrete step function, and they conclude the proof of Theorem \ref{thm:discrete2}. \end{itemize} \subsection{Step 1} Recall $\sP$ denotes the set of all probability mass functions which satisfy the $\e$-differential privacy constraint \eqref{eqn:dpdiscrete}. Define \begin{align} V^* \triangleq \inf_{\p \in \sP} \sum_{i = -\infty}^{+\infty} \loss(i) \p(i). \end{align} First we prove that we only need to consider probability mass functions which are monotonically increasing for $i \le 0$ and monotonically decreasing for $i \ge 0$. Define \begin{align} \pe \triangleq \{ \p \in \sP | \p(i) \le \p(j), \p(m) \ge \p(n), \forall i \le j \le 0, 0 \le m \le n \}. \end{align} \begin{lemma}\label{lem:doublesizemono} \begin{align} V^* = \inf_{\p \in \pe} \sum_{i = -\infty}^{+\infty} \loss(i) \p(i). \end{align} \end{lemma} \begin{IEEEproof} We will prove that given a probability mass function $\p_a \in \sP$, we can construct a new probability mass function $\p_b \in \pe$ such that \begin{align} \sum_{i = -\infty}^{+\infty} \loss(i) \p_a(i) \ge \sum_{i = -\infty}^{+\infty} \loss(i) \p_b(i). \end{align} Given $\p_a \in \sP$, consider the sequence $sa = \{\p_a(0), \p_a(1), \p_a(-1), \p_a(2), \p_a(-2), \dots \}$. 
Using the same argument as in Lemma \ref{lem:no_point_mass}, we can show that $\p_a(i) > 0, \forall \, i \in \Z$. Let the sequence $sb = \{b_0,b_1,b_{-1},b_2, b_{-2}, \dots \}$ be a permutation of the sequence $sa$ in descending order. Since $\sum_{i = -\infty}^{+\infty} \p_a(i) = 1$, $\lim_{i \to -\infty} \p_a(i) = \lim_{i \to +\infty} \p_a(i) = 0$, and thus $sb$ is well defined. Let $\pi$ be the corresponding permutation mapping, i.e., $\pi: \Z \to \Z$, and \begin{align} b_i = \p_a(\pi(i)). \end{align} Since $\loss(\cdot)$ is a symmetric function and monotonically increasing for $i \ge 0$, we have \begin{align} \loss(0) \le \loss(1) \le \loss(-1) \le \loss(2) \le \loss(-2) \le \cdots \le \loss(i) \le \loss(-i) \le \loss(i+1) \le \loss(-(i+1)) \le \cdots. \end{align} Therefore, if we define a probability mass function $\p_b$ with \begin{align} \p_b(i) = b_i, \forall i \in \Z, \end{align} then \begin{align} \sum_{i = -\infty}^{+\infty} \loss(i) \p_a(i) \ge \sum_{i = -\infty}^{+\infty} \loss(i) \p_b(i). \end{align} Next, we only need to prove $\p_b \in \pe$, i.e., we need to show that $\p_b$ satisfies the differential privacy constraint \eqref{eqn:dpdiscrete}. By the construction of the sequence $sb$, we have \begin{align} b_0 \ge b_1 \ge b_2 \ge b_3 \ge \cdots, \\ b_0 \ge b_{-1} \ge b_{-2} \ge b_{-3} \ge \cdots. \end{align} Therefore, it is necessary and sufficient to prove that \begin{align} \frac{b_i}{ b_{i+\D}} &\le e^{\e}, \forall i \ge 0, \\ \frac{b_i}{ b_{i-\D}} &\le e^{\e}, \forall i \le 0. \end{align} Since $\p_a \in \sP$, $\forall \, i \in \{\pi(0)-\D, \pi(0)-\D+1, \pi(0)-\D+2, \dots, \pi(0)+\D \}$, \begin{align} \frac{\p_a(\pi(0))}{\p_a(i)} \le e^{\e}. \end{align} Therefore, in the sequence $sb$ there exist at least $2\D$ elements which are no smaller than $b_0 e^{-\e}$. 
Since $b_{-\D}$ and $b_{\D}$ are the $2\D$th and $(2\D-1)$th largest elements in the sequence $sb$ other than $b_0$, we have $\frac{b_0}{b_{-\D}} \le e^{\e}$ and $\frac{b_0}{b_{\D}} \le e^{\e}$. In general, given $i \in \Z$, we can use Algorithm \ref{alg:discrete} to find at least $2\D$ elements in the sequence $sb$ which are no bigger than $b_i$ and no smaller than $b_i e^{-\e}$. More precisely, given $i \in \Z$, let $j^*_R$ and $j^*_L$ be the output of Algorithm \ref{alg:discrete}. Note that since the while loops in Algorithm \ref{alg:discrete} can take only at most $2(|i|+1)$ steps, the algorithm will always terminate. For all integers $j \in [\pi(j^*_L)-\D, \pi(j^*_L)-1]$, $\p_a(j)$ is no bigger than $b_i$ and is no smaller than $\p_a(j^*_L) e^{-\e}$; and for all integers $j \in [\pi(j^*_R)+1, \pi(j^*_R)+\D]$, $\p_a(j)$ is no bigger than $b_i$ and is no smaller than $\p_a(j^*_R) e^{-\e}$. Since $\p_a(j^*_R), \p_a(j^*_L) \ge b_i$, for all $j \in [\pi(j^*_L)-\D, \pi(j^*_L)-1] \cup [\pi(j^*_R)+1, \pi(j^*_R)+\D]$, $\p_a(j)$ is no bigger than $b_i$ and is no smaller than $b_i e^{-\e}$. Therefore, there exist at least $2\D$ elements in the sequence $sb$ which are no bigger than $b_i$ and no smaller than $b_i e^{-\e}$. If $i \le 0$, then $b_{i-\D}$ is the $2\D$th largest element in the sequence $sb$ which is no bigger than $b_i$ and no smaller than $b_i e^{-\e}$; and if $i \ge 0$, then $b_{i+\D}$ is the $(2\D-1)$th largest element in the sequence $sb$ which is no bigger than $b_i$ and no smaller than $b_i e^{-\e}$. Therefore, we have \begin{align} \frac{b_i}{ b_{i+\D}} &\le e^{\e}, \forall i \ge 0, \\ \frac{b_i}{ b_{i-\D}} &\le e^{\e}, \forall i \le 0. \end{align} This completes the proof of Lemma \ref{lem:doublesizemono}. 
\begin{algorithm} \caption{} \label{alg:discrete} \begin{algorithmic} \State $j^*_R \gets i$ \While{there exists some $j$ which appears before $i$ in the sequence $\{0,1,-1,2,-2,\dots\}$ and $\pi(j) \in [\pi(j^*_R)+1, \pi(j^*_R)+\D]$} \State $j^*_R \gets j$ \EndWhile \\ \State $j^*_L \gets i$ \While{there exists some $j$ which appears before $i$ in the sequence $\{0,1,-1,2,-2,\dots\}$ and $\pi(j) \in [\pi(j^*_L)-\D, \pi(j^*_L)-1]$} \State $j^*_L \gets j$ \EndWhile \\ \State Output $j^*_R$ and $j^*_L$. \end{algorithmic} \end{algorithm} \end{IEEEproof} \subsection{Step 2} Next we prove that we only need to consider symmetric probability mass functions which are monotonically decreasing when $i \ge 0$. Define \begin{align} \pdissym \triangleq \{ \p \in \pe |\; \p(i) = \p(-i), \forall \, i \in \Z \}. \end{align} \begin{lemma}\label{lem:monosym} \begin{align} V^* = \inf_{\p \in \pdissym} \sum_{i = -\infty}^{+\infty} \loss(i) \p(i). \end{align} \end{lemma} \begin{IEEEproof} The proof is essentially the same as the proof of Lemma \ref{lem:symmetric}. Given $\p_a \in \pe$, define a new probability mass function $\p_b$ with \begin{align} \p_b (i) \triangleq \frac{\p_a(i) + \p_a(-i) }{2}, \forall i \in \Z. \end{align} It is easy to see $\p_b$ is a valid probability mass function and symmetric. Since the cost function $\loss(\cdot)$ is symmetric, \begin{align} \sum_{i = -\infty}^{+\infty} \loss(i) \p_a(i) = \sum_{i = -\infty}^{+\infty} \loss(i) \p_b(i). \end{align} Next we show that $\p_b$ also satisfies the differential privacy constraint \eqref{eqn:dpdiscrete}. For any $i \in \Z$ and $ |d| \le \D$, since $\p_a(i) \le e^{\e} \p_a(i + d)$ and $\p_a(-i) \le e^{\e} \p_a(-i-d)$, we have \begin{align} \p_b(i) &= \frac{\p_a(i) + \p_a(-i) }{2} \\ &\le \frac{e^{\e} \p_a(i+d) + e^{\e} \p_a(-i-d) }{2} \\ &= e^{\e} \p_b(i+d). \end{align} Therefore, $\p_b$ satisfies \eqref{eqn:dpdiscrete}. 
Finally, for any $0 \le i \le j$, \begin{align} \p_b(i) &= \frac{\p_a(i) + \p_a(-i) }{2} \\ &\ge \frac{\p_a(j) + \p_a(-j) }{2} \\ &= \p_b(j). \end{align} So $\p_b \in \pe$, and thus $\p_b \in \pdissym$. We conclude \begin{align} V^* = \inf_{\p \in \pdissym} \sum_{i = -\infty}^{+\infty} \loss(i) \p(i). \end{align} \end{IEEEproof} \subsection{Step 3} Next we show that among all symmetric and monotonically decreasing (for $i \ge 0$) probability mass functions, we only need to consider those which are periodically and geometrically decaying. More precisely, define \begin{align} \pf \triangleq \{ \p \in \pdissym | \frac{\p(i)}{\p(i+\D)} = e^{\e}, \forall\, i \in \N \}. \end{align} Then \begin{lemma}\label{lem:discretepd} \begin{align} V^* = \inf_{\p \in \pf} V(\p). \end{align} \end{lemma} \begin{IEEEproof} Due to Lemma \ref{lem:monosym}, we only need to consider probability mass functions which are symmetric and monotonically decreasing for $i \ge 0$. We first show that given $\p_a \in \pdissym$, if $\frac{\p_a(0)}{\p_a(\D)} < e^{\e}$, then we can construct a probability mass function $\p_b \in \pdissym$ such that $\frac{\p_b(0)}{\p_b(\D)} = e^{\e}$ and \begin{align} V(\p_a) \ge V(\p_b). \end{align} Since $\p_a$ is symmetric, \begin{align} V(\p_a) = \loss(0)\p_a(0) + 2\sum_{i=1}^{+\infty} \loss(i)\p_a(i). \end{align} Suppose $\frac{\p_a(0)}{\p_a(\D)} < e^{\e}$; then define a new symmetric probability mass function $\p_b$ with \begin{align} \p_b(0) &\triangleq (1+\delta) \p_a(0), \\ \p_b(i) &\triangleq (1-\delta') \p_a(i), \forall i \in \Z \backslash \{0\}, \end{align} where \begin{align} \delta &= \frac{e^{\e}\frac{\p_a(\D)}{\p_a(0)} -1 }{1 + e^{\e}\frac{\p_a(\D)}{1 - \p_a(0)}} > 0 , \\ \delta' &= \frac{e^{\e}\frac{\p_a(\D)}{\p_a(0)} -1 }{\frac{1}{\p_a(0)} + e^{\e}\frac{\p_a(\D)}{\p_a(0)} - 1 } >0 , \end{align} so that $\frac{\p_b(0)}{\p_b(\D)} = e^{\e}$. 
It is easy to see $\p_b \in \pdissym$, and \begin{align} & V(\p_b) - V(\p_a) \\ = & \delta \loss(0) \p_a(0) - 2 \delta' \sum_{i=1}^{+\infty} \loss(i) \p_a(i) \\ \le & \delta \loss(0) \p_a(0) - 2 \delta' \sum_{i=1}^{+\infty} \loss(0) \p_a(i) \\ \le & \delta \loss(0) \p_a(0) - \delta' \loss(0) (1 - \p_a(0)) \\ =& 0 . \end{align} Therefore, we only need to consider $\p \in \pdissym$ satisfying $\frac{\p(0)}{\p(\D)} = e^{\e}$. By using the same argument as in the proof of Lemma \ref{lem:pd}, one can conclude that we only need to consider $\p \in \pdissym$ satisfying \begin{align} \frac{\p(i)}{\p(i+\D)} = e^{\e}, \forall i \in \N. \label{eqn:period} \end{align} Therefore, $V^* = \inf_{\p \in \pf} V(\p)$. \end{IEEEproof} \begin{IEEEproof}[Proof of Theorem \ref{thm:discrete1}] In the case that $\D = 1$, due to Lemma \ref{lem:discretepd}, the symmetry property and \eqref{eqn:period} completely characterize the optimal noise probability mass function, which is the geometric mechanism. \end{IEEEproof} \subsection{Step 4} Due to Lemma \ref{lem:discretepd}, the optimal probability mass function $\p$ is completely characterized by $\p(0),\p(1),\dots,\p(\D-1)$. Next we derive the properties of the optimal probability mass function in the domain $\{0,1,2,\dots, \D-1\}$. Since Lemma \ref{lem:discretepd} solves the case $\D = 1$, in the remainder of this section we assume $\D \ge 2$. Define \begin{align} \pj_{\lambda} \triangleq \{ \p \in \pf | \exists k \in \{0,1,\dots,\D-2\}, \p(i) = \p(0), \forall i \in \{0,1,\dots,k\}, \p(j) = \lambda \p(0), \forall j \in \{k+1,k+2,\dots,\D-1\} \}. \end{align} \begin{lemma}\label{lem:ratiodis} \begin{align} V^* = \inf_{\p \in \cup_{\lambda \in [e^{-\e}, 1]} \pj_{\lambda} } V(\p). \end{align} \end{lemma} \begin{IEEEproof} If $\D = 2$, then for any $\p \in \pf$, we can set $k=0$, and $\p \in \pj_{\frac{\p(\D-1)}{\p(0)}}$. Therefore, Lemma \ref{lem:ratiodis} holds for $\D = 2$. Assume $\D \ge 3$. 
First, we prove that we only need to consider probability mass functions $\p \in \pf$ such that there exists $k \in \{1,2,\dots, \D-2\}$ with \begin{align} \p(i) &= \p(0), \forall i \in \{0,1,\dots,k-1 \} \label{eqn:ration11}\\ \p(j) &= \p(\D-1), \forall j \in \{ k+1,k+2,\dots,\D-1 \}. \label{eqn:ration12} \end{align} More precisely, given $\p_a \in \pf$, we can construct a probability mass function $\p_b \in \pf$ such that there exists $k$ satisfying \eqref{eqn:ration11} and \eqref{eqn:ration12}, and $V(\p_b) \le V(\p_a)$. The proof technique is very similar to the proof of Lemma \ref{lem:binary}. Suppose there does not exist such a $k$ for $\p_a$. Then let $k_1$ be the smallest integer in $\{1,2,\dots,\D-1\}$ such that \begin{align} \p_a(k_1) \neq \p_a(0), \end{align} and let $k_2$ be the largest integer in $\{0,1,\dots,\D-2\}$ such that \begin{align} \p_a(k_2) \neq \p_a(\D-1). \end{align} It is easy to see that $k_1 < k_2$, and $k_1 \neq 0$. Then we can increase $\p_a(k_1)$ and decrease $\p_a(k_2)$ simultaneously by the same amount to derive a new probability mass function $\p_b \in \pf$ with smaller cost. Indeed, if \begin{align} \p_a(0) - \p_a(k_1) \leq \p_a(k_2) - \p_a(\D-1), \end{align} then consider a probability mass function $\p_b \in \pf$ with \begin{align} \p_b(i) &= \p_a(0), \forall 0 \le i \le k_1, \\ \p_b(i) &= \p_a(i), \forall k_1 < i < k_2, \\ \p_b(k_2) &= \p_a(k_2) - (\p_a(0) - \p_a(k_1)), \\ \p_b(i) &= \p_a(i), \forall k_2 < i \le \D-1. \end{align} Define \begin{align} w_0 &\triangleq \loss(0) + 2\sum_{k=1}^{\infty} \loss(k\D)e^{-k\e},\\ w_i &\triangleq 2\sum_{k=0}^{\infty} \loss(i+k\D)e^{-k\e}, \forall i \in \{1,2,\dots,\D-1\}. \end{align} Note that since $\loss(\cdot)$ is monotonically increasing for $i\ge 0$, we have $w_0 \le w_1 \le \cdots \le w_{\D-1}$. 
Then we can verify that $V (\p_b) \le V (\p_a)$ via \begin{align} & \; V (\p_b) - V (\p_a) \\ =& \; \sum_{i=0}^{\D-1} \p_b(i)w_i - \sum_{i=0}^{\D-1} \p_a(i) w_i \\ = & \; (\p_a(0) - \p_a(k_1) ) (w_{k_1} - w_{k_2}) \\ \le & \; 0 . \end{align} If \begin{align} \p_a(0) - \p_a(k_1) \ge \p_a(k_2) - \p_a(\D-1), \end{align} then we can define $\p_b \in \pf$ by setting \begin{align} \p_b(i) &= \p_a(0), \forall 0 \le i < k_1, \\ \p_b(k_1) &= \p_a(k_1) + (\p_a(k_2) - \p_a(\D-1)) , \\ \p_b(i) &= \p_a(i), \forall k_1 < i < k_2, \\ \p_b(i) &= \p_a(\D-1), \forall k_2 \le i \le \D-1. \end{align} And similarly, we have \begin{align} V (\p_b) - V (\p_a) &= (\p_a(k_2) - \p_a(\D-1) ) (w_{k_1} - w_{k_2}) \le 0 . \end{align} Continuing in this way, we eventually obtain a probability mass function $\p_b \in \pf$ such that there exists $k$ satisfying \eqref{eqn:ration11} and \eqref{eqn:ration12} and $V (\p_b) \le V (\p_a)$. From the above argument, we can see that in the optimal solution $\p^* \in \pf$, the probability mass function can take at most three distinct values for $i \in \{0,1,\dots,\D-1\}$, which are $\p^*(0),\p^*(k)$ and $\p^*(\D-1)$. Next we show that indeed either $\p^*(k) = \p^*(0)$ or $\p^*(k) = \p^*(\D-1)$, and this will complete the proof of Lemma \ref{lem:ratiodis}. The optimal probability mass function $\p \in \pf$ can be specified by four parameters: $\p(0)$, $\lambda \in [e^{-\e},1]$, $k \in \{1,2,\dots,\D-2\}$ and $\p(k)$. We will show that when $k$ and $\lambda$ are fixed, to minimize the cost, we have either $\p(k) = \p(0)$ or $\p(k) = \p(\D-1) = \lambda \p(0)$. Since $\sum_{i=-\infty}^{+\infty} \p(i) = 1$, \begin{align} 2\frac{ k\p(0) + \p(k) + (\D-k-1)\lambda \p(0) }{ 1 - b} - \p(0) = 1, \end{align} and thus $\p(k) = \frac{(1+\p(0))(1-b) - 2\p(0)k - 2\lambda \p(0)(\D-k-1)}{2}$. 
The cost for $\p$ is \begin{align} V(\p) &= \p(0) \sum_{i=0}^{k-1}w_i + \p(\D-1) \sum_{i=k+1}^{\D-1}w_i + \p(k)w_k \\ &= \p(0) \sum_{i=0}^{k-1}w_i + \lambda \p(0) \sum_{i=k+1}^{\D-1}w_i + (\frac{(1+\p(0))(1-b) - 2\p(0)k - 2\lambda \p(0)(\D-k-1)}{2})w_k, \end{align} which is a linear function of the parameter $\p(0)$. Since $\p(k) \ge \lambda \p(0)$ and $\p(k) \le \p(0)$, we have \begin{align} 2\frac{ k\p(0) + \p(k) + (\D-k-1)\lambda \p(0) }{ 1 - b} - \p(0) = 1 \le 2\frac{ k\p(0) + \p(0) + (\D-k-1)\lambda \p(0) }{ 1 - b} - \p(0),\\ 2\frac{ k\p(0) + \p(k) + (\D-k-1)\lambda \p(0) }{ 1 - b} - \p(0) = 1 \ge 2\frac{ k\p(0) + \lambda \p(0) + (\D-k-1)\lambda \p(0) }{ 1 - b} - \p(0), \end{align} and thus the constraints on $\p(0)$ are \begin{align} \frac{1-b}{2k+2+2\lambda(\D-k-1) - 1 + b} \le \p(0) \le \frac{1-b}{2k+2\lambda(\D-k) - 1 + b}. \label{eqn:inequaona} \end{align} Since $V(\p)$ is a linear function of $\p(0)$, to minimize the cost $V(\p)$, either $\p(0) = \frac{1-b}{2k+2+2\lambda(\D-k-1) - 1 + b} $ or $\p(0) = \frac{1-b}{2k+2\lambda(\D-k) - 1 + b}$, i.e., $\p(0)$ should take one of the two extreme points of \eqref{eqn:inequaona}. To get these two extreme points, we have either $\p(k) = \p(0)$ or $\p(k) = \lambda \p(0) = \p(\D-1)$. Therefore, in the optimal probability mass function $\p \in \pf$, there exists $k \in \{0,1,\dots,\D-2\}$ such that \begin{align} \p(i) &= \p(0), \forall i \in \{0,1,\dots,k\}\\ \p(i) &= \p(\D-1), \forall i \in \{k+1,k+2,\dots,\D-1\}. \end{align} This completes the proof of Lemma \ref{lem:ratiodis}. \end{IEEEproof} \subsection{Step 5} In the last step, we prove that although $\lambda \in [e^{-\e},1]$, in the optimal probability mass function, $\lambda$ is either $e^{-\e}$ or $1$, and this will complete the proof of Theorem \ref{thm:discrete2}. 
\begin{IEEEproof} For fixed $k \in \{0,1,\dots,\D-2 \}$, consider $\p \in \pf$ with \begin{align} \p(i) &= \p(0), \forall i \in \{0,1,\dots,k \}, \\ \p(i) &= \lambda \p(0), \forall i \in \{k+1,k+2,\dots,\D-1\}. \end{align} Since $\sum_{i=-\infty}^{+\infty} \p(i) = 1$, \begin{align} 2\frac{(k+1)\p(0) + (\D-k-1)\lambda \p(0)}{1-b} - \p(0) = 1, \end{align} and thus \begin{align} \p(0) = \frac{1-b}{2(k+1) + 2(\D-k-1)\lambda -1 + b}. \end{align} Hence, $\p$ is specified by only one parameter $\lambda$. The cost of $\p$ is \begin{align} V(\p) &= \sum_{i=0}^{\D-1} \p(i)w_i \\ &= \p(0)\sum_{i=0}^k w_i + \lambda \p(0) \sum_{i=k+1}^{\D-1} w_i \\ &= \frac{(1-b) (\sum_{i=0}^k w_i + \lambda \sum_{i=k+1}^{\D-1}w_i )}{2(k+1) + 2(\D-k-1)\lambda -1 + b} \\ &= (1-b) (C_1 + \frac{C_2}{ 2(k+1) + 2(\D-k-1)\lambda -1 + b }), \end{align} where $C_1$ and $C_2$ are constant terms independent of $\lambda$. Therefore, to minimize $V(\p)$ over $\lambda \in [e^{-\e},1]$, $\lambda$ should take one of the two extreme points, either $e^{-\e}$ or $1$, depending on whether $C_2$ is negative or positive. When $\lambda = 1$, the probability mass function is uniquely determined, namely $\p \in \pf$ with \begin{align} \p(i) = \frac{1-b}{2\D-1+b}, \forall i \in \{0,1,\dots,\D-1\}, \end{align} which is exactly $\p_{\mt}$ defined in \eqref{eqn:defprdis} with $\mt = \D$. When $\lambda = e^{-\e}$, the probability mass function is exactly $\p_{\mt}$ with $\mt = k+1$. \end{IEEEproof} \subsection{Laplacian Mechanism vs Staircase Mechanism} The Laplacian mechanism is specified by two parameters, $\e$ and the query function sensitivity $\D$, which completely characterize the differential privacy constraint \eqref{eqn:dpconstraintfinal}. 
On the other hand, our staircase mechanism is specified by three parameters, $\e$, $\D$, and $\gamma^*$, which is determined by $\e$ and the cost function $\loss(\cdot)$. For certain classes of cost functions $\loss(\cdot)$, there are closed-form expressions for the optimal $\gamma^*$. From the two examples given in Section \ref{sec:application}, we can see that although the Laplacian mechanism is not strictly optimal, in the high privacy regime ($\e \to 0$) it is asymptotically optimal: \begin{itemize} \item for $L(x) = |x|$, the additive gap from the optimal value goes to 0 as $\e \to 0$, \item for $L(x) = x^2$, the additive gap from the optimal value is upper bounded by a constant as $\e \to 0$. \end{itemize} However, in the low privacy regime ($\e \to +\infty$), the multiplicative gap from the optimal values can be arbitrarily large. We conclude that in the high privacy regime the Laplacian mechanism is nearly optimal, while in the low privacy regime significant improvement can be achieved by using the staircase mechanism. We plot the multiplicative gain of the staircase mechanism over the Laplacian mechanism for cost functions $\loss(x) = |x|$ and $\loss(x) = x^2$ in Figure \ref{fig:comparison}. We can see that even for a modest $\e \approx 10$, the staircase mechanism yields about 15-fold and 23-fold improvements, respectively. Since our staircase mechanism is derived under the same problem setting as the Laplacian mechanism, it can be applied wherever the Laplacian mechanism is used, and it performs strictly better. 
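The roughly 15-fold and 23-fold gains quoted above can be reproduced directly from the closed-form costs (a small Python sanity check; function names are ours, and $\D$ cancels in each ratio):

```python
import math

def gain_abs(eps):
    """Multiplicative gain V_Lap / V(p_gamma*) for loss |x|:
    (Delta/eps) / (Delta * e^{eps/2} / (e^eps - 1)); Delta cancels."""
    return (math.exp(eps) - 1) / (eps * math.exp(eps / 2))

def gain_square(eps):
    """Multiplicative gain for loss x^2, using the closed-form minimum
    power of the staircase mechanism; Delta^2 cancels."""
    b = math.exp(-eps)
    v_star = (2 ** (-2 / 3) * (b * (1 + b)) ** (2 / 3) + b) / (1 - b) ** 2
    return (2 / eps ** 2) / v_star
```

At $\e = 10$ these evaluate to about $14.8$ and $23.6$, while for small $\e$ both ratios are close to $1$, consistent with the near-optimality of the Laplacian mechanism in the high privacy regime.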
\begin{figure}[h] \begin{subfigure}[b]{0.5\linewidth} \centering \includegraphics[width=1.1\textwidth]{fig1.pdf} \caption{ $0 < \e \le 10 $} \label{fig:compare1} \end{subfigure} \begin{subfigure}[b]{0.5\linewidth} \centering \includegraphics[width=1.1\textwidth]{fig2.pdf} \caption{ $10 \le \e \le 20$} \label{fig:compare2} \end{subfigure} \caption{Multiplicative Gain of Staircase Mechanism over Laplacian Mechanism} \label{fig:comparison} \end{figure} \subsection{Worst-case Result} We emphasize that our staircase mechanism is a worst-case optimal result. We impose no further assumptions on the properties of the query function $q$ beyond its sensitivity. If we know more properties that $q$ satisfies, it is entirely possible that the staircase mechanism is not the best. For example, if we know that the range of $q$ is $\Z$, then we do not even need to add non-integer noise, in which case discrete probability distributions are the best. \subsection{Relation to Ghosh et al. \cite{Ghosh09}} There are some excellent works studying noise distributions in differential privacy, e.g., \cite{geometry} and \cite{Ghosh09}. One of the most well-known results is \cite{Ghosh09}, which shows that the geometric mechanism, a discrete variant of the Laplacian mechanism, is optimal for a single count query function. The differences between our work and \cite{Ghosh09} are: \begin{itemize} \item \cite{Ghosh09} studies the count query function, which is integer-valued, while we study general real-valued query functions. Therefore, the geometric mechanism in \cite{Ghosh09} adds integer-valued noise, while our staircase mechanism adds real-valued noise and has a continuous probability distribution. \item In \cite{Ghosh09} the sensitivity of the count query function is one, i.e., $\D = 1$, while in our work there is no constraint on $\D$. 
From the proof of Theorem \ref{thm:main}, it is easy to see that for integer-valued query functions with $\D \ge 2$, the optimal noise probability mass function is also staircase-shaped, which can be viewed as a discrete variant of the staircase mechanism. In the case $\D = 1$, the staircase-shaped probability mass function reduces to a strictly decreasing geometric distribution. \end{itemize} \subsection{Asymptotic Properties of $\gamma^*$} In Section \ref{sec:application}, we have seen that for the cost functions $\loss(x) = |x| $ and $\loss(x) = x^2$, the optimal $\gamma^*$ lies in the interval $[0,\frac{1}{2}]$ for all $\e$ and is a monotonically decreasing function of $\e$; furthermore, $\gamma^* \to \frac{1}{2}$ as $\e$ goes to $0$, and $\gamma^* \to 0$ as $\e$ goes to $+\infty$. We generalize these asymptotic properties of $\gamma^*$ as a function of $\e$ to all moment cost functions. More precisely, given $m \in \N$ with $m \ge 1$, we have the following. \begin{theorem} \label{thm:gammaprop} Consider the cost function $\loss(x) = |x|^m$. Let $\gamma^*$ be the optimal $\gamma$ in the staircase mechanism for $\loss(x)$, i.e., \begin{align} \gamma^* = \mathop{\arg \min} \limits_{\gamma \in [0,1] } \int_{x \in \R} |x|^m f_{\gamma}(x) dx. \end{align} We have \begin{align} \gamma^* &\to \frac{1}{2}, \; \text{as} \; \e \to 0, \\ \gamma^* &\to 0, \; \text{as}\; \e \to +\infty. \end{align} \end{theorem} \begin{proof} See Appendix \ref{app:gammaprop}. \end{proof} \begin{corollary} For all cost functions $\loss(\cdot)$ that can be written as \begin{align} \loss(x) = \sum_{i=1}^n \alpha_i |x|^{d_i}, \end{align} where $n \ge 1$, $\alpha_i \in \R, d_i \in \N$ and $\alpha_i,d_i \ge 0$ for all $i$, the optimal $\gamma^*$ in the staircase mechanism has the following asymptotic properties: \begin{align} \gamma^* &\to \frac{1}{2}, \; \text{as} \; \e \to 0, \\ \gamma^* &\to 0, \; \text{as}\; \e \to +\infty.
\end{align} \end{corollary} \subsection{A Heuristic Choice of $\gamma$} We have shown that in general the optimal $\gamma^*$ in the staircase mechanism depends on both $\e$ and the cost function $\loss(\cdot)$. Here we give a heuristic choice of $\gamma$ which depends only on $\e$, and show that its performance is reasonably good in the low privacy regime. Let $b := e^{-\e}$, and consider the particular choice \begin{align} \tilde{\gamma} := \frac{b}{2} = \frac{e^{-\e}}{2}. \end{align} It is easy to see that $\tilde{\gamma}$ has the same asymptotic properties as the optimal $\gamma^*$ for moment cost functions, i.e., \begin{align} \tilde{\gamma} &\to 0, \; \text{as} \; b \to 0 \; (\e \to +\infty), \\ \tilde{\gamma} &\to \frac{1}{2}, \; \text{as} \; b \to 1 \; (\e \to 0). \end{align} Furthermore, the probability that the noise magnitude is less than $\frac{e^{-\e}}{2}\D$ is approximately $\frac{1}{3}$ in the low privacy regime ($\e \to +\infty$). Indeed, \begin{align} \text{Pr}[ |\noise| \le \frac{e^{-\e}}{2} \D] = \text{Pr}[ |\noise| \le \tilde{\gamma} \D] = 2 a(\tilde{\gamma}) \tilde{\gamma} \D = \frac{(1-b)\tilde{\gamma}}{\tilde{\gamma} + b(1-\tilde{\gamma})} = \frac{b-b^2}{3b-b^2}, \end{align} which goes to $\frac{1}{3}$ as $\e \to +\infty$ (accordingly, $b \to 0$). On the other hand, for the Laplacian mechanism with $\lambda = \frac{\D}{\e}$, \begin{align} \text{Pr}[ |\noise| \le \frac{e^{-\e}}{2} \D] &= \int_{-\frac{e^{-\e}}{2} \D}^{\frac{e^{-\e}}{2} \D} \frac{1}{2\lambda} e^{-\frac{|x|}{\lambda}}dx = 1 - e^{-\frac{\e e^{-\e}}{2}}, \end{align} which goes to zero as $\e \to +\infty$.
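The two closed-form probabilities above are easy to check numerically; the following illustrative sketch (the function names are ours) evaluates them side by side:

```python
import math

def staircase_small_noise_prob(eps):
    # Pr[|X| <= (e^{-eps}/2) * Delta] under the staircase mechanism with the
    # heuristic gamma = e^{-eps}/2; equals (b - b^2)/(3b - b^2) with b = e^{-eps}
    b = math.exp(-eps)
    return (b - b * b) / (3 * b - b * b)

def laplace_small_noise_prob(eps):
    # Same probability under Lap(Delta/eps): 1 - exp(-eps * e^{-eps} / 2)
    return 1.0 - math.exp(-eps * math.exp(-eps) / 2.0)

for eps in (2.0, 5.0, 10.0, 20.0):
    print(eps, staircase_small_noise_prob(eps), laplace_small_noise_prob(eps))
```

As $\e$ grows, the staircase probability approaches $1/3$ while the Laplacian probability vanishes, in line with the derivation above.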
We conclude that in the low privacy regime as $\e \to +\infty$, the staircase mechanism with the heuristic parameter $\tilde{\gamma} = \frac{e^{-\e}}{2}$ guarantees that with probability about $\frac{1}{3}$ the additive noise is very close to zero, while the corresponding probability under the Laplacian mechanism is approximately zero. \subsection{Background on Differential Privacy} The basic problem setting in differential privacy for statistical databases is as follows: a dataset curator is in charge of a statistical database which consists of records of many individuals, and an analyst sends a query request to the curator to obtain some aggregate information about the whole database. Without any privacy concerns, the curator can simply apply the query function to the dataset, compute the query output, and send the result to the analyst. However, to protect the privacy of individual records in the dataset, the curator should use a randomized query-answering mechanism such that the probability distribution of the released output does not differ too much whether any individual record is in the database or not. Formally, consider a real-valued query function \begin{align} q: \database \rightarrow \R, \end{align} where $\database$ is the set of all possible datasets. The real-valued query function $q$ is applied to a dataset, and the query output is a real number. Two datasets $D_1, D_2 \in \database$ are called neighboring datasets if they differ in at most one element, i.e., one is a proper subset of the other and the larger dataset contains just one additional element \cite{DPsurvey}. A randomized query-answering mechanism $\KM$ for the query function $q$ randomly outputs a number with a probability distribution that depends on the query output $q(D)$, where $D$ is the dataset.
\begin{definition}[$\e$-differential privacy \cite{DPsurvey}] A randomized mechanism $\KM$ gives $\e$-differential privacy if for all data sets $D_1$ and $D_2$ differing on at most one element, and all $S \subset \text{Range}(\KM)$, \begin{align} \text{Pr}[\KM(D_1) \in S] \le \exp(\e) \; \text{Pr}[\KM(D_2) \in S], \label{eqn:dpgeneral} \end{align} where $\KM(D)$ is the random output of the mechanism $\KM$ when the query function $q$ is applied to the dataset $D$. \end{definition} The differential privacy constraint \eqref{eqn:dpgeneral} essentially requires that for all neighboring datasets, the probability distributions of the output of the randomized mechanism should be approximately the same. Therefore, for any individual record, its presence or absence in the dataset will not significantly affect the output of the mechanism, which makes it hard for adversaries with arbitrary background knowledge to make inferences about any individual from the released query output. The parameter $\e \in (0, +\infty)$ quantifies how private the mechanism is: the smaller $\e$ is, the more private the randomized mechanism is. \subsubsection{Operational Meaning of $\e$-Differential Privacy in the Context of Hypothesis Testing} As shown by \cite{Zhou08}, one can interpret the differential privacy constraint \eqref{eqn:dpgeneral} in the context of hypothesis testing in terms of the false alarm probability and the missed detection probability. Indeed, consider a binary hypothesis testing problem over two neighboring datasets, $H_0: D_1 $ versus $H_1: D_2$, where an individual's record is in $D_2$ only. Given a decision rule, let $S$ be the decision region such that when the released output lies in $S$, $H_1$ will be rejected, and when the released output lies in $S^C$ (the complement of $S$), $H_0$ will be rejected.
The false alarm probability $P_{FA}$ and the missed detection probability $P_{MD}$ can be written as \begin{align} P_{FA} &= P(K(D_1) \in S^C), \\ P_{MD} &= P(K(D_2) \in S). \end{align} Therefore, from \eqref{eqn:dpgeneral} we get \begin{align} 1 - P_{FA} \le e^{\e} P_{MD} , \end{align} and thus \begin{align} e^{\e} P_{MD} + P_{FA} \ge 1 . \end{align} Switching $D_1$ and $D_2$ in \eqref{eqn:dpgeneral}, we get \begin{align} \text{Pr}[\KM(D_2) \in S] \le \exp(\e) \; \text{Pr}[\KM(D_1) \in S] . \end{align} Applying this swapped inequality to the complement $S^C$ gives \begin{align} 1 - P_{MD} \le e^{\e} P_{FA} , \end{align} and thus \begin{align} P_{MD} + e^{\e} P_{FA} \ge 1 . \end{align} In conclusion, we have \begin{align} e^{\e} P_{MD} + P_{FA} &\ge 1 , \label{eq:famd1}\\ P_{MD} + e^{\e} P_{FA} &\ge 1 \label{eq:famd2}. \end{align} The $\e$-differential privacy constraint implies that in the context of hypothesis testing, $P_{FA}$ and $P_{MD}$ cannot both be too small. \subsubsection{Laplacian Mechanism} The standard approach to preserving $\e$-differential privacy is to perturb the query output by adding random noise with Laplacian distribution proportional to the sensitivity $\Delta$ of the query function $q$, where the sensitivity of a real-valued query function is defined as follows. \begin{definition}[Query Sensitivity \cite{DPsurvey}] For a real-valued query function $q: \database \rightarrow \R$, the sensitivity of $q$ is defined as \begin{align} \D := \max_{D_1,D_2 \in \database} | q(D_1) - q(D_2)|, \label{def:sensitivity} \end{align} where the maximum is over all $D_1,D_2$ differing in at most one element.
\end{definition} Formally, the Laplacian mechanism is: \begin{definition}[Laplacian Mechanism \cite{DMNS06}] For a real-valued query function $q: \database \rightarrow \R$ with sensitivity $\Delta$, Laplacian mechanism will output \begin{align} K(D) := q(D) + \text{Lap}(\frac{\D}{\e}), \end{align} where $\text{Lap}(\lambda)$ is a random variable with probability density function \begin{align} f(x) = \frac{1}{2\lambda} e^{-\frac{|x|}{\lambda}}, \quad \forall x \in \R. \end{align} \end{definition} Consider two neighboring datasets $D_1$ and $D_2$ where $|q(D_1) - q(D_2)| = \D$. It is easy to compute the tradeoff between the false alarm probability $P_{FA}$ and the missing detection probability $P_{MD}$ under Laplacian mechanism, which is \begin{align} P_{MD} = \begin{cases} 1 - e^{\e}P_{FA} & P_{FA} \in [0, \frac{1}{2} e^{-\e}) \\ \frac{e^{-\e}}{4P_{FA}} & P_{FA} \in [ \frac{1}{2} e^{-\e}, \frac{1}{2} ) \\ e^{-\e}(1 - P_{FA} ) & P_{FA} \in [\frac{1}{2}, 1] \end{cases}\label{eqn:regionLap} \end{align} Since its introduction in \cite{DMNS06}, the Laplacian mechanism has become the standard tool in differential privacy and has been used as the basic building block in a number of works on differential privacy analysis in other more complex problem settings, e.g., \cite{HLM12, MM09, Xiao11, Huang12, McSherry10, Li10, Barak07, DKMMN06, DL09, Roth10, LO11, Smith2011, CM08, continual, Ding11, Hardt2010, Geo12, Ka11, Mironov12bit, Sarathy2011, Xiao2011ireduct, Dankar12, Friedman10, zhang2012functional, lei2011differentially, wasserman2010, dwork2010pan, Guptadp2010, blum2011fast, hsu2012distributed, hsu2012dp, blocki2012johnson, hardt2012beyond, hardt2012private, gupta2012iterative, kasi13, karwa2011private, cormode2012differentially}. Given this near-routine use of the query-output independent adding of Laplacian noise, the following two questions are natural: \begin{itemize} \item Is query-output independent perturbation optimal? 
\item Assuming query-output independent perturbation, is the Laplacian noise distribution optimal? \end{itemize} In this work we answer the above two questions. Our main result is that given an $\e$-differential privacy constraint, under a general utility-maximization (equivalently, cost-minimization) model: \begin{itemize} \item adding query-output independent noise is indeed optimal (under a mild technical condition), \item the optimal noise distribution is {\em not} the Laplacian distribution; instead, the optimal one has a {\em staircase-shaped} probability density function. \end{itemize} These results are derived under the following settings: \begin{itemize} \item the domain of the query output is the entire real line or the set of integers; \item nothing more about the query function is known beyond its global sensitivity; \item either the local sensitivity \cite{NRS07} of the query function is unknown or it is the same as the global sensitivity (as in the important case of count queries). \end{itemize} If any of these conditions is violated (the output domain has sharp boundaries, or the local sensitivity deviates from the global sensitivity \cite{NRS07}, or we are restricted to specific query functions \cite{CM08}), then the optimal privacy mechanism need not be a query-output independent perturbation. \subsection{Problem Formulation} We formulate a utility-maximization (cost-minimization) problem under the differential privacy constraint. \subsubsection{Differential Privacy Constraint} A general randomized releasing mechanism $\KM$ is a family of noise probability distributions indexed by the query output (denoted by $t$), i.e., \begin{align} \KM = \{\p_t : t \in \R \}, \end{align} and given dataset $D$, the mechanism $\KM$ releases the query output $t = q(D)$ corrupted by additive random noise with probability distribution $\p_t$: \begin{align} \KM(D) = t + X_{t}, \end{align} where $X_{t}$ is a random variable with probability distribution $\p_{t}$.
The differential privacy constraint \eqref{eqn:dpgeneral} on $\KM$ is that for any $t_1,t_2 \in \R$ such that $|t_1 - t_2| \le \D $ (corresponding to the query outputs for two neighboring datasets), \begin{align} \nm_{t_1} (S) \le e^{\e} \nm_{t_2}(S + t_1 - t_2), \forall \; \text{measurable set} \; S \subset \R, \label{eqn:diffgeneralnoise1} \end{align} where for any $t \in \R$, $S+t \, := \, \{s+t \, | \, s \in S\}$. \subsubsection{Utility Model} The utility model we use in this work is a very general one, which is also used in the works by Ghosh, Roughgarden, and Sundararajan \cite{Ghosh09}, Gupte and Sundararajan \cite{minimax10}, and Brenner and Nissim \cite{Nissim10}. Consider a cost function $\loss(\cdot): \R \to \R$, which is a function of the additive noise. Given additive noise $x$, the cost is $\loss(x)$. Given query output $t \in \R$, the additive noise is a random variable with probability distribution $\p_t$, and thus the expected cost is \begin{align} \int_{x\in \R} \loss(x) \p_t (dx). \end{align} The objective is to minimize the worst-case cost over all possible query outputs $t \in \R$, i.e., \begin{align} \text{minimize}\; \sup_{t \in \R} \int_{x \in \R} \loss(x) \p_t(dx). \label{eqn:objective} \end{align} \subsubsection{Optimization Problem} Combining the differential privacy constraint \eqref{eqn:diffgeneralnoise1} and the objective function \eqref{eqn:objective}, we formulate a functional optimization problem: \begin{align} \mathop{\text{minimize}}\limits_{ \{\p_t\}_{t \in \R} } & \ \sup_{t \in \R} \int_{x \in \R} \loss(x) \p_t(dx) \label{eqn:objecinopt}\\ \text{subject to} & \ \nm_{t_1} (S) \le e^{\e} \nm_{t_2}(S + t_1 - t_2), \forall \ \text{measurable set} \ S\subseteq \R, \ \forall |t_1 - t_2| \le \D.
\label{eqn:diffgeneralnoise} \end{align} \subsection{An Overview of Our Results} \subsubsection{Optimal Noise Probability Distribution} When the query output domain is the real line or the set of integers, we show (subject to some mild technical conditions on the family of differentially private mechanisms) that adding query-output-independent noise is optimal. Thus we only need to determine the optimal noise probability distribution. Let $\p$ denote the probability distribution of the noise added to the query output. Then the optimization problem \eqref{eqn:objecinopt} and \eqref{eqn:diffgeneralnoise} reduces to \begin{align} \mathop{\text{minimize}}\limits_{ \p} & \ \int_{x \in \R} \loss(x) \p (dx) \\ \text{subject to} & \ \p(S) \le e^{\e} \p(S + d), \forall \ \text{measurable set} \ S\subseteq \R, \ \forall |d| \le \D. \end{align} Consider a staircase-shaped probability distribution with probability density function (p.d.f.) $f_{\gamma}(\cdot)$ defined as \begin{align} f_{\gamma}(x) = \begin{cases} a(\gamma) & x \in [0, \gamma \D) \\ e^{-\e} a(\gamma) & x \in [\gamma \D, \D) \\ e^{-k\e} f_{\gamma}(x - k\D) & x \in [ k \D, (k+1)\D) \; \text{for} \; k \in \N \\ f_{\gamma}(-x) & x<0 \end{cases} \end{align} where \begin{align} a(\gamma) \triangleq \frac{ 1 - e^{-\e}}{2 \D (\gamma + e^{-\e}(1-\gamma))} \end{align} is a normalizing constant ensuring $\int_{x\in \R} f_{\gamma}(x) dx = 1$. Our main result is \begin{theorem}\label{thm:main} If the cost function $\loss(\cdot)$ is symmetric and increasing, and $\sup_{x \ge T} \frac{\loss(x+1)}{\loss(x)} < + \infty$ for some $T>0$, then the optimal noise probability distribution has a staircase-shaped probability density function $f_{\gamma^*}(\cdot)$, where \begin{align} \gamma^* = \mathop{\arg \min} \limits_{\gamma \in [0,1] } \int_{x \in \R} \loss (x) f_{\gamma}(x) dx . \end{align} \end{theorem} We plot the probability density functions of the Laplacian mechanism and the staircase mechanism in Figure \ref{fig:probdf}.
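For concreteness, here is an illustrative implementation of $f_\gamma$ (the code and naming are ours), together with a numerical check that the density integrates to one and that its likelihood ratios over shifts of size at most $\D$ never exceed $e^{\e}$, which is exactly the constraint in the reduced optimization problem:

```python
import math

def staircase_pdf(x, eps, delta, gamma):
    # f_gamma from the definition above: height a(gamma)*b^k on [k*delta, (k+gamma)*delta)
    # and a(gamma)*b^(k+1) on [(k+gamma)*delta, (k+1)*delta), mirrored for x < 0
    b = math.exp(-eps)
    a = (1 - b) / (2 * delta * (gamma + b * (1 - gamma)))
    x = abs(x)
    k = int(x // delta)          # index of the period containing |x|
    r = x - k * delta            # offset within that period
    return a * (b ** k) * (1.0 if r < gamma * delta else b)

eps, delta, gamma = 1.0, 1.0, 0.25
# total mass via a midpoint rule on [-30, 30]; the tail beyond is ~ e^{-30}
step = 1e-3
mass = step * sum(staircase_pdf(-30.0 + (i + 0.5) * step, eps, delta, gamma)
                  for i in range(int(60.0 / step)))
# eps-DP constraint on the noise distribution: f(x) <= e^eps * f(x + d) for |d| <= delta
ratios = [staircase_pdf(x, eps, delta, gamma) / staircase_pdf(x + d, eps, delta, gamma)
          for x in [i * 0.01 - 5.0 for i in range(1001)]
          for d in (-delta, -0.4 * delta, 0.4 * delta, delta)]
print(abs(mass - 1.0) < 5e-3, max(ratios) <= math.exp(eps) + 1e-9)
```

The maximum ratio attains $e^{\e}$ (the constraint is tight), which is why no density can decay faster than the staircase within each period.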
Figure \ref{fig:fgamma} in Section \ref{sec:result} gives a precise description of the staircase mechanism. The staircase mechanism is specified by three parameters: $\e$, $\D$, and $\gamma^*$, which is determined by $\e$ and the cost function $\loss(\cdot)$. For certain classes of cost functions, there are closed-form expressions for the optimal $\gamma^*$. \begin{figure}[h] \begin{subfigure}[b]{0.5\linewidth} \centering \includegraphics[width=0.8\textwidth]{fig_laplace.pdf} \caption{ Laplace Mechanism } \end{subfigure} \begin{subfigure}[b]{0.5\linewidth} \centering \includegraphics[width=0.8 \textwidth]{fig_stair.pdf} \caption{ Staircase Mechanism} \end{subfigure} \caption{Probability Density Functions of the Laplacian Mechanism and the Staircase Mechanism} \label{fig:probdf} \end{figure} \subsubsection{Applications: Minimum Noise Magnitude and Noise Power} We apply our main result, Theorem \ref{thm:main}, to two typical cost functions, $\loss(x) = |x|$ and $\loss(x) = x^2$, which measure noise magnitude and noise power, respectively. We derive closed-form expressions for the optimal parameter $\gamma^*$ for these two cost functions. Comparing the optimal performance with that of the Laplacian mechanism, we show that in the high privacy regime ($\e$ small), the Laplacian mechanism is asymptotically optimal as $\e \to 0$; in the low privacy regime ($\e$ large), the minimum expectation of noise amplitude and the minimum noise power are $\Theta(\D e^{-\frac{\e}{2}})$ and $\Theta(\D^2 e^{-\frac{2\e}{3}})$ as $\e \to +\infty$, while the expectation of noise amplitude and the noise power of the Laplacian mechanism are $\frac{\D}{\e}$ and $\frac{2\D^2}{\e^2}$, respectively, where $\D$ is the sensitivity of the query function. We conclude that the gains are more pronounced in the low privacy regime.
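These $\Theta(\cdot)$ orders can be checked numerically. The sketch below is our own illustrative code (the limiting constants observed in the output, roughly $1$ for amplitude and $2^{-2/3} \approx 0.63$ for power, are empirical observations under this numerical setup, not claims taken from the text): it integrates the staircase density piecewise and minimizes over a geometric grid of $\gamma$, since $\gamma^* \to 0$ as $\e$ grows.

```python
import math

def staircase_moment(eps, delta, gamma, m, periods=200):
    # E[|X|^m] for the staircase density with parameter gamma (piecewise-exact)
    b = math.exp(-eps)
    a = (1 - b) / (2 * delta * (gamma + b * (1 - gamma)))
    total, bk = 0.0, 1.0
    for k in range(periods):
        lo, mid, hi = k * delta, (k + gamma) * delta, (k + 1) * delta
        total += a * bk * (mid ** (m + 1) - lo ** (m + 1)) / (m + 1)
        total += a * bk * b * (hi ** (m + 1) - mid ** (m + 1)) / (m + 1)
        bk *= b
    return 2.0 * total

def min_cost(eps, delta, m):
    # minimize over a geometric grid of gamma in [1e-6, 1]
    gammas = (10.0 ** (-6.0 + 6.0 * j / 2000.0) for j in range(2001))
    return min(staircase_moment(eps, delta, g, m) for g in gammas)

eps, delta = 16.0, 1.0
print(min_cost(eps, delta, 1) * math.exp(eps / 2))      # ratio to delta * e^{-eps/2}
print(min_cost(eps, delta, 2) * math.exp(2 * eps / 3))  # ratio to delta^2 * e^{-2eps/3}
```

Both printed ratios are order-one constants, consistent with the stated $\Theta(\D e^{-\e/2})$ and $\Theta(\D^2 e^{-2\e/3})$ scalings.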
\subsubsection{Extension to the Discrete Setting} Since query functions in many important practical applications are integer-valued, we also derive the optimal differentially private mechanisms for answering a single integer-valued query function. We show that adding query-output independent noise is optimal under a mild technical condition, and the optimal noise probability distribution has a staircase-shaped probability mass function, which can be viewed as the discrete variant of the staircase mechanism in the continuous setting. This result lets us directly compare our work with the existing works \cite{Ghosh09, minimax10} on integer-valued query functions. Our result shows that for integer-valued query functions, the optimal noise probability mass function is also staircase-shaped, and in the case where the sensitivity $\D = 1$, the optimal probability mass function reduces to the geometric distribution, which was derived in \cite{Ghosh09, minimax10}. Therefore, this result can be viewed as a generalization of \cite{Ghosh09, minimax10} in the discrete setting to query functions with arbitrary sensitivity. \subsection{Connection to the Literature}\label{subsec:connection} In this section, we discuss the relations between our results and some directly related works in the literature, and the implications of our results for other works. \subsubsection{Laplacian Mechanism vs Staircase Mechanism} The Laplacian mechanism is specified by two parameters, $\e$ and the query function sensitivity $\D$, which together completely characterize the differential privacy constraint. On the other hand, the staircase mechanism is specified by three parameters: $\e$, $\D$, and $\gamma^*$, which is determined by $\e$ and the utility function/cost function. For certain classes of utility/cost functions, there are closed-form expressions for the optimal $\gamma^*$.
From the two examples given in Section \ref{sec:application}, we can see that although the Laplacian mechanism is not strictly optimal, in the high privacy regime ($\e \to 0$) the Laplacian mechanism is asymptotically optimal: \begin{itemize} \item For the expectation of noise amplitude, the additive gap from the optimal value goes to 0 as $\e \to 0$. \item For noise power, the additive gap from the optimal value is upper bounded by a constant as $\e \to 0$. \end{itemize} However, in the low privacy regime ($\e \to +\infty$), the multiplicative gap from the optimal values can be arbitrarily large. We conclude that in the high privacy regime the Laplacian mechanism is nearly optimal, while in the low privacy regime significant improvement can be achieved by using the staircase mechanism. We plot the multiplicative gain of the staircase mechanism over the Laplacian mechanism for the expectation of noise amplitude and for noise power in Figure \ref{fig:comparison}, where $V_{\text{Optimal}}$ is the optimal (minimum) cost, achieved by the staircase mechanism, and $V_{Lap}$ is the cost of the Laplacian mechanism. We can see that for $\e \approx 10$, the staircase mechanism yields about 15-fold and 23-fold improvements for noise amplitude and noise power, respectively. While $\e \approx 10$ corresponds to very low privacy, our results show that low privacy can be had very cheaply (particularly when compared to the state-of-the-art Laplacian mechanism). Since the staircase mechanism is derived under the same problem setting as the Laplacian mechanism, the staircase mechanism can be applied {\em wherever} the Laplacian mechanism is used, and it performs {\em strictly better} than the Laplacian mechanism (and significantly better in low privacy scenarios).
\subsubsection{Relation to Shamai and Verdu \cite{SV92}} Shamai and Verdu \cite{SV92} consider the minimum-variance noise for a fixed value of the average of the false alarm and missed detection probabilities in binary hypothesis testing. In \cite{SV92}, the binary hypotheses correspond to the signal being in a binary set $\{ -\Delta, +\Delta\}$. Their solution involved the noise being discrete and, further, having a pmf on the integer lattice (scaled by $\Delta$). Our setting is related, but is differentiated via the following two key distinctions: \begin{itemize} \item Instead of a constraint on the sum of false alarm and missed detection probabilities, we have constraints on symmetric weighted combinations of the two error probabilities (as in Equations~\eqref{eq:famd1} and~\eqref{eq:famd2}). \item Instead of the binary hypotheses corresponding to the signal being in a binary set $\{-\Delta, +\Delta\}$, we consider all possible binary hypotheses for the signal to be in $\{ x_1, x_2\}$, where $x_1, x_2 \in [-\Delta, \Delta]$ are arbitrary. \end{itemize} \subsubsection{Relation to Ghosh et al. \cite{Ghosh09} } Ghosh, Roughgarden, and Sundararajan \cite{Ghosh09} show that for a single count query with sensitivity $\D = 1$, for a general class of utility functions, to minimize the expected cost under a Bayesian framework the optimal mechanism to preserve differential privacy is the geometric mechanism, which adds noise with a geometric distribution.
We discuss the relations and differences between \cite{Ghosh09} and our work in the following. \cite{Ghosh09} and our work are similar in that, given the query output, the cost function depends only on the additive noise magnitude and is an increasing function of it. On the other hand, there are two main differences: \begin{itemize} \item \cite{Ghosh09} works under a Bayesian setting, while ours minimizes the worst-case cost. \item \cite{Ghosh09} studies a count query, where the query output is integer-valued and bounded and the sensitivity is one. In our work, we first study general real-valued query functions, where the query output can take any real value, and then generalize the result to the discrete setting, where the query output is integer-valued. In both cases, the sensitivity of the query function can be arbitrary, not restricted to one. \end{itemize} \subsubsection{Relation to Gupte and Sundararajan \cite{minimax10} } Gupte and Sundararajan \cite{minimax10} derive the optimal noise probability distributions for a single count query with sensitivity $\D = 1$ for minimax (risk-averse) users. Their model is the same as the one in \cite{Ghosh09} except that their objective is to minimize the worst-case cost, the same as ours. \cite{minimax10} shows that although there is no universally optimal solution to the minimax optimization problem in \cite{minimax10} for a general class of cost functions, each solution (corresponding to a different cost function) can be derived from the same geometric mechanism via a random remapping. As in \cite{Ghosh09}, \cite{minimax10} assumes the query output is bounded. Our result shows that when the query sensitivity is one, without any boundedness knowledge about the query output, the optimal mechanism is to add random noise with a geometric distribution to the query output.
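As an illustrative sketch of this discrete setting (our own code and naming, not taken from \cite{Ghosh09} or \cite{minimax10}), the geometric mechanism for a sensitivity-one count query can be implemented by sampling two-sided geometric noise as the difference of two i.i.d. one-sided geometrics:

```python
import math
import random

def one_sided_geometric(alpha, rng):
    # Inverse-transform sample of Pr[G = g] = (1 - alpha) * alpha**g, g = 0, 1, 2, ...
    u = 1.0 - rng.random()  # u in (0, 1], avoids log(0)
    return int(math.floor(math.log(u) / math.log(alpha)))

def geometric_mechanism(true_count, eps, rng=None):
    # Releases true_count + N with Pr[N = k] = ((1-a)/(1+a)) * a**|k|, a = e^{-eps}.
    # The two-sided geometric is the difference of two i.i.d. one-sided geometrics.
    rng = rng or random.Random()
    alpha = math.exp(-eps)
    noise = one_sided_geometric(alpha, rng) - one_sided_geometric(alpha, rng)
    return true_count + noise

def two_sided_geometric_pmf(k, eps):
    alpha = math.exp(-eps)
    return (1 - alpha) / (1 + alpha) * alpha ** abs(k)

# adjacent-integer likelihood ratio is exactly e^eps: the eps-DP bound for Delta = 1
print(two_sided_geometric_pmf(0, 1.0) / two_sided_geometric_pmf(1, 1.0))
```

The noise has mean zero, and shifting the true count by one changes each output probability by a factor of at most $e^{\e}$, which is the differential privacy constraint for a count query.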
\subsubsection{Relation to Brenner and Nissim \cite{Nissim10} } While \cite{Ghosh09} shows that for a single count query with sensitivity $\D = 1$ there is a universally optimal mechanism for a general class of utility functions under a Bayesian framework, Brenner and Nissim \cite{Nissim10} show that for general query functions no universally optimal mechanisms exist. Indeed, this follows directly from our results: under our optimization framework, the optimal mechanism adds noise with a staircase-shaped probability distribution specified by three parameters $\e, \D$ and $\gamma^*$, where in general $\gamma^*$ depends on the cost function. For different cost functions, the optimal noise probability distributions have staircase-shaped probability density functions with different parameters. \subsubsection{Relation to Nissim, Raskhodnikova and Smith \cite{NRS07} } Nissim, Raskhodnikova and Smith \cite{NRS07} show that for certain nonlinear query functions, one can improve the accuracy by adding data-dependent noise calibrated to the smooth sensitivity of the query function, which is based on the local sensitivity of the query function. In our model, we use only the global sensitivity of the query function, and assume that the local sensitivity is the same as the global sensitivity, which holds for a general class of query functions, e.g., count and sum. \subsubsection{Relation to Hardt and Talwar \cite{geometry} } Hardt and Talwar \cite{geometry} study the tradeoff between privacy and error for answering a set of linear queries over a histogram in a differentially private way. The error is defined as the worst-case expectation of the $\ell^2$-norm of the noise. The lower bound given in \cite{geometry} is $\Omega( \e^{-1} d\sqrt{d})$, where $d$ is the number of linear queries.
An immediate consequence of our result is that for fixed $d$, as $\e \to +\infty$, an upper bound of $\Theta(e^{-\frac{\e}{3d}}d \sqrt{d})$ is achievable by adding independent staircase-shaped noise with privacy parameter $\frac{\e}{d}$ to each component. \subsubsection{Relation to Other Works} There are many existing works on improving the accuracy of answering more complex queries under differential privacy, in which the basic building block is the standard Laplacian mechanism. For example, Hay et al. \cite{Hay10} show that one can improve the accuracy for a general class of histogram queries by exploiting consistency constraints on the query output, and Li et al. \cite{Li10} study how to optimize linear counting queries under differential privacy by carefully choosing the set of linear queries to be answered. In these works, the error is measured by the mean squared error of the query output estimates, which corresponds to the variance of the noise added to the query output to preserve differential privacy. In terms of $\e$, the error bounds in these works scale as $\frac{1}{\e^2}$ because of the use of Laplacian noise. If the Laplacian distribution is replaced by the staircase distribution in these works, the error bound can be improved to $\Theta(e^{-C\e})$ (for some constant $C$ that depends on the number of queries) as $\e \to +\infty$ (corresponding to the low privacy regime). \subsection{Organization} The paper is organized as follows. We show the optimality of query-output independent perturbation in Section \ref{sec:optimality}, and present the optimal differentially private mechanism, the staircase mechanism, in Section \ref{sec:result}. In Section \ref{sec:application}, we apply our main result to derive the optimal noise probability distributions with minimum expectation of noise amplitude and minimum noise power, respectively, and compare their performance with the Laplacian mechanism.
Section \ref{sec:gammaproperty} presents the asymptotic properties of $\gamma^*$ in the staircase mechanism for momentum cost functions, and suggests a heuristic choice of $\gamma$ that appears to work well for a wide class of cost functions. Section \ref{sec:discrete} generalizes the staircase mechanism for integer-valued query function in the discrete setting, and Section \ref{sec:abstractsetting} extends the staircase mechanism to the abstract setting. Section \ref{sec:conclusion} concludes this paper. \subsection{Background on Differential Privacy} The basic problem setting in differential privacy for statistical database is as follows: suppose a dataset curator is in charge of a statistical database which consists of records of many individuals, and an analyst sends a query request to the curator to get some aggregate information about the whole database. Without any privacy concerns, the database curator can simply apply the query function to the dataset, compute the query output, and send the result to the analyst. However, to protect the privacy of individual data in the dataset, the dataset curator should use a randomized query-answering mechanism such that the probability distribution of the query output does not differ too much whether any individual record is in the database or not. Formally, consider a real-valued query function \begin{align} q: \database \rightarrow \R, \end{align} where $\database$ is the set of all possible datasets. The real-valued query function $q$ will be applied to a dataset, and query output is a real number. Two datasets $D_1, D_2 \in \database$ are called neighboring datasets if they differ in at most one element, i.e., one is a proper subset of the other and the larger dataset contains just one additional element \cite{DPsurvey}. A randomized query-answering mechanism $\KM$ for the query function $q$ will randomly output a number with probability distribution depends on query output $q(D)$, where $D$ is the dataset. 
\begin{definition}[$\e$-differential privacy \cite{DPsurvey}] A randomized mechanism $\KM$ gives $\e$-differential privacy if for all data sets $D_1$ and $D_2$ differing on at most one element, and all $S \subset \text{Range}(\KM)$, \begin{align} \text{Pr}[\KM(D_1) \in S] \le \exp(\e) \; \text{Pr}[\KM(D_2) \in S], \label{eqn:dpgeneral} \end{align} where $\KM(D)$ is the random output of the mechanism $\KM$ when the query function $q$ is applied to the dataset $D$. \end{definition} The differential privacy constraint \eqref{eqn:dpgeneral} essentially requires that for all neighboring datasets, the probability distributions of the output of the randomized mechanism should be approximately the same. Therefore, for any individual record, its presence or absence in the dataset will not significantly affect the output of the mechanism, which makes it hard for adversaries with arbitrary background knowledge to make inference on any individual from the released query output information. The parameter $\e \in (0, +\infty)$ quantifies how private the mechanism is: the smaller $\e$ is , the more private the randomized mechanism is. \subsubsection{Operational Meaning of $\e$-Differential Privacy in the Context of Hypothesis Testing} As shown by \cite{Zhou08}, one can interpret the differential privacy constraint \eqref{eqn:dpgeneral} in the context of hypothesis testing in terms of false alarm probability and missing detection probability. Indeed, consider a binary hypothesis testing problem over two neighboring datasets, $H_0: D_1 $ versus $H_1: D_2$, where an individual's record is in $D_2$ only. Given a decision rule, let $S$ be the decision region such that when the released output lies in $S$, $H_1$ will be rejected, and when the released output lies in $S^C$ (the complement of $S$), $H_0$ will be rejected. 
The false alarm probability $P_{FA}$ and the missing detection probability $P_{MD}$ can be written as \begin{align} P_{FA} &= P(K(D_1) \in S^C), \\ P_{MD} &= P(K(D_2) \in S). \end{align} Therefore, from \eqref{eqn:dpgeneral} we get \begin{align} 1 - P_{FA} \le e^{\e} P_{MD} , \end{align} and thus \begin{align} e^{\e} P_{MD} + P_{FA} \ge 1 . \end{align} Switching $D_1$ and $D_2$ in \eqref{eqn:dpgeneral} gives \begin{align} \text{Pr}[\KM(D_2) \in S] \le \exp(\e) \; \text{Pr}[\KM(D_1) \in S] . \end{align} Therefore, \begin{align} 1 - P_{MD} \le e^{\e} P_{FA} , \end{align} and thus \begin{align} P_{MD} + e^{\e} P_{FA} \ge 1 . \end{align} In conclusion, we have \begin{align} e^{\e} P_{MD} + P_{FA} &\ge 1 , \\ P_{MD} + e^{\e} P_{FA} &\ge 1 . \end{align} The $\e$-differential privacy constraint implies that in the context of hypothesis testing, $P_{FA}$ and $P_{MD}$ cannot both be too small. We plot the region of $P_{FA}$ and $P_{MD}$ under $\e$-differential privacy in Figure \ref{fig:famd}. \begin{figure}[t] \centering \includegraphics[scale=0.6]{fig_codes/epsilon_figure.pdf} \caption{The Region of $P_{FA}$ and $P_{MD}$ under $\e$-Differential Privacy} \label{fig:famd} \end{figure} \subsubsection{Laplacian Mechanism} The standard approach to preserving $\e$-differential privacy is to perturb the query output by adding random noise with Laplacian distribution proportional to the sensitivity $\Delta$ of the query function $q$, where the sensitivity of a real-valued query function is defined as follows. \begin{definition}[Query Sensitivity \cite{DPsurvey}] For a real-valued query function $q: \database \rightarrow \R$, the sensitivity of $q$ is defined as \begin{align} \D := \max_{D_1,D_2 \in \database} | q(D_1) - q(D_2)|, \label{def:sensitivity} \end{align} where the maximum is taken over all $D_1,D_2$ differing in at most one element.
\end{definition} Formally, the Laplacian mechanism is defined as follows. \begin{definition}[Laplacian Mechanism \cite{DMNS06}] For a real-valued query function $q: \database \rightarrow \R$ with sensitivity $\Delta$, the Laplacian mechanism outputs \begin{align} K(D) := q(D) + \text{Lap}(\frac{\D}{\e}), \end{align} where $\text{Lap}(\lambda)$ is a random variable with probability density function \begin{align} f(x) = \frac{1}{2\lambda} e^{-\frac{|x|}{\lambda}}, \quad \forall x \in \R. \end{align} \end{definition} Consider two neighboring datasets $D_1$ and $D_2$ with $|q(D_1) - q(D_2)| = \D$. It is easy to compute the tradeoff between the false alarm probability $P_{FA}$ and the missing detection probability $P_{MD}$ under the Laplacian mechanism, which is \begin{align} P_{MD} = \begin{cases} 1 - e^{\e}P_{FA} & P_{FA} \in [0, \frac{1}{2} e^{-\e}) \\ \frac{e^{-\e}}{4P_{FA}} & P_{FA} \in [ \frac{1}{2} e^{-\e}, \frac{1}{2} ) \\ e^{-\e}(1 - P_{FA} ) & P_{FA} \in [\frac{1}{2}, 1] \end{cases}\label{eqn:regionLap} \end{align} The region of $P_{FA}$ and $P_{MD}$ under the Laplacian mechanism for two neighboring datasets $D_1$ and $D_2$ such that $|q(D_1) - q(D_2)| = \D$ is plotted in Figure \ref{fig:regionLap}.
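The tradeoff above can be checked numerically. The following sketch (ours, not from the paper) evaluates $P_{FA}$ and $P_{MD}$ for threshold decision rules $S = (-\infty, c)$ using the closed-form Laplace CDF, compares them against the piecewise formula \eqref{eqn:regionLap}, and also confirms the two $\e$-differential-privacy region inequalities derived earlier:

```python
import math

def lap_cdf(x, mu, b):
    # Closed-form CDF of the Laplace distribution with mean mu and scale b.
    if x < mu:
        return 0.5 * math.exp((x - mu) / b)
    return 1.0 - 0.5 * math.exp(-(x - mu) / b)

def threshold_tradeoff(eps, delta, c):
    # Threshold rule: reject H1 when the released value is below c.
    # K(D1) = 0 + Lap(delta/eps), K(D2) = delta + Lap(delta/eps).
    b = delta / eps
    p_fa = 1.0 - lap_cdf(c, 0.0, b)    # P(K(D1) in S^C) = P(K(D1) >= c)
    p_md = lap_cdf(c, delta, b)        # P(K(D2) in S)   = P(K(D2) <  c)
    return p_fa, p_md

def region_lap(eps, p_fa):
    # The piecewise formula for P_MD as a function of P_FA.
    if p_fa < 0.5 * math.exp(-eps):
        return 1.0 - math.exp(eps) * p_fa
    if p_fa < 0.5:
        return math.exp(-eps) / (4.0 * p_fa)
    return math.exp(-eps) * (1.0 - p_fa)

eps, delta = 1.0, 1.0
checks = []
for i in range(-60, 61):               # sweep thresholds c over [-6, 6]
    p_fa, p_md = threshold_tradeoff(eps, delta, i * 0.1)
    checks.append(abs(p_md - region_lap(eps, p_fa)) < 1e-9)
    # region constraints from the hypothesis-testing interpretation
    checks.append(math.exp(eps) * p_md + p_fa >= 1.0 - 1e-9)
    checks.append(p_md + math.exp(eps) * p_fa >= 1.0 - 1e-9)
```

Threshold rules suffice here because the likelihood ratio of two shifted Laplace densities is monotone in the released value.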
\begin{figure}[t] \centering \includegraphics[scale=0.6]{fig_codes/Laplace_figure.pdf} \caption{The Region of $P_{FA}$ and $P_{MD}$ under the Laplacian Mechanism} \label{fig:regionLap} \end{figure} Since its introduction in \cite{DMNS06}, the Laplacian mechanism has become the standard tool in differential privacy and has been used as the basic building block in a number of works on differential privacy analysis in other more complex problem settings, e.g., \cite{HLM12, MM09, Xiao11, Huang12, McSherry10, Li10, Barak07, DKMMN06, DL09, Roth10, LO11, Smith2011, CM08, continual, Ding11, Hardt2010, Geo12, Ka11, Mironov12bit, Sarathy2011, Xiao2011ireduct, Dankar12, Friedman10, zhang2012functional, lei2011differentially, wasserman2010, dwork2010pan, Guptadp2010, blum2011fast, hsu2012distributed, hsu2012dp, blocki2012johnson, hardt2012beyond, hardt2012private, gupta2012iterative, kasi13, karwa2011private, cormode2012differentially}. Given this near-routine use of query-output-independent Laplacian noise, the following two questions are natural: \begin{itemize} \item Is query-output-independent perturbation optimal? \item Assuming query-output-independent perturbation, is the Laplacian noise distribution optimal? \end{itemize} In this work we answer the above two questions. Our main result is that given an $\e$-differential privacy constraint, under a general utility-maximization (equivalently, cost-minimization) model, for a single real-valued query function (assuming the local sensitivity is the same as the global sensitivity), \begin{itemize} \item adding query-output-independent noise is indeed optimal (under a mild technical condition), \item the optimal noise distribution is {\em not} the Laplacian distribution; instead, the optimal one has a {\em staircase-shaped} probability density function. \end{itemize} We also generalize the same result to the discrete setting, where the query output is integer-valued, and to more abstract settings.
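Before formulating the problem, a quick numeric sanity check (ours) that query-output-independent Laplacian noise with scale $\D/\e$ does satisfy $\e$-differential privacy in density-ratio form: shifting the density by any $|d| \le \D$ changes it by a factor of at most $e^{\e}$.

```python
import math

def lap_pdf(x, scale):
    # Density of the zero-mean Laplace distribution with the given scale.
    return math.exp(-abs(x) / scale) / (2.0 * scale)

eps, delta = 0.5, 2.0       # illustrative parameters
scale = delta / eps
worst = 0.0
for i in range(-4000, 4001):
    x = i * 0.01            # grid over [-40, 40]
    for d in (-delta, -0.5 * delta, 0.5 * delta, delta):
        worst = max(worst, lap_pdf(x, scale) / lap_pdf(x + d, scale))
# By the triangle inequality |x| - |x + d| <= |d|, the ratio never
# exceeds exp(|d| / scale) = e^eps, and it is attained at x = 0, |d| = delta.
```

The same kind of grid check is reused later for the staircase density.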
\subsection{Problem Formulation} We formulate a utility-maximization (cost-minimization) problem under the differential privacy constraint. \subsubsection{Differential Privacy Constraint} A general randomized releasing mechanism $\KM$ is a family of noise probability distributions indexed by the query output (denoted by $t$), i.e., \begin{align} \KM = \{\p_t : t \in \R \}, \end{align} and given a dataset $D$, the mechanism $\KM$ will release the query output $t = q(D)$ corrupted by additive random noise with probability distribution $\p_t$: \begin{align} \KM(D) = t + X_{t}, \end{align} where $X_{t}$ is a random variable with probability distribution $\p_{t}$. The differential privacy constraint \eqref{eqn:dpgeneral} on $\KM$ is that for any $t_1,t_2 \in \R$ such that $|t_1 - t_2| \le \D$ (corresponding to the query outputs for two neighboring datasets), \begin{align} \nm_{t_1} (S) \le e^{\e} \nm_{t_2}(S + t_1 - t_2), \forall \; \text{measurable set} \; S \subset \R, \label{eqn:diffgeneralnoise1} \end{align} where for any $t \in \R$, $S+t \, := \, \{s+t \, | \, s \in S\}$. \subsubsection{Utility Model} The utility model we use in this work is a very general one, which is also used in the works by Ghosh, Roughgarden, and Sundararajan \cite{Ghosh09}, Gupte and Sundararajan \cite{minimax10}, and Brenner and Nissim \cite{Nissim10}. Consider a cost function $\loss(\cdot): \R \to \R$, which is a function of the additive noise. Given additive noise $x$, the cost is $\loss(x)$. Given query output $t \in \R$, the additive noise is a random variable with probability distribution $\p_t$, and thus the expected cost is \begin{align} \int_{x\in \R} \loss(x) \p_t (dx). \end{align} The objective is to minimize the worst-case cost over all possible query outputs $t \in \R$, i.e., \begin{align} \text{minimize}\; \sup_{t \in \R} \int_{x \in \R} \loss(x) \p_t(dx).
\label{eqn:objective} \end{align} \subsubsection{Optimization Problem} Combining the differential privacy constraint \eqref{eqn:diffgeneralnoise1} and the objective function \eqref{eqn:objective}, we formulate a functional optimization problem: \begin{align} \mathop{\text{minimize}}\limits_{ \{\p_t\}_{t \in \R} } & \ \sup_{t \in \R} \int_{x \in \R} \loss(x) \p_t(dx) \label{eqn:objecinopt}\\ \text{subject to} & \ \nm_{t_1} (S) \le e^{\e} \nm_{t_2}(S + t_1 - t_2), \forall \ \text{measurable set} \ S\subseteq \R, \ \forall |t_1 - t_2| \le \D. \label{eqn:diffgeneralnoise} \end{align} \subsection{An Overview of Our Results} \subsubsection{Adding Query-Output-Independent Noise is Optimal} Our first result is that under a mild technical condition, adding query-output-independent noise is optimal, i.e., we can assume that $\p_t \equiv \p$ for all $t \in \R$ for some probability distribution $\p$. For any positive integer $n$, and for any positive real number $T$, define \begin{align} \KM_{T,n} \triangleq \{\; \{\nm_t\}_{t \in \R} \, |\, \{\nm_t\}_{t \in \R} \; \text{satisfies} \; \eqref{eqn:diffgeneralnoise}, \; & \nm_t = \nm_{k \frac{T}{n}}, \text{for}\; t \in [k\frac{T}{n}, (k+1)\frac{T}{n}), k \in \Z, \nonumber \\ & \text{and} \; \nm_{t+T} = \nm_{t}, \forall t \in \R \}. \end{align} \begin{theorem} \label{thm:maingeneral1} Given any family of probability distributions $\{\nm_t\}_{t \in \R} \in \cup_{T >0} \cup_{n \ge 1} \KM_{T,n}$, there exists a probability distribution $\p^*$ such that the family of probability distributions $\{\p^*_t\}_{t \in \R}$ with $\p^*_t \equiv \p^*$ satisfies the differential privacy constraint \eqref{eqn:diffgeneralnoise} and \begin{align} \sup_{t \in \R} \int_{x \in \R} \loss(x) \p^*_t(dx) \le \sup_{t \in \R} \int_{x \in \R} \loss(x) \nm_t(dx) .
\end{align} \end{theorem} Theorem \ref{thm:maingeneral1} states that if we assume the family of noise probability distributions is piecewise constant over $t$ (the length of the pieces can be arbitrarily small) and periodic over $t$ (the period can be arbitrary), then in the optimal mechanism we can assume that $\p_t$ does not depend on $t$. We conjecture that this technical condition can be removed. \subsubsection{Optimal Noise Probability Distribution} Due to Theorem \ref{thm:maingeneral1}, adding query-output-independent noise is optimal, and thus we only need to study what the optimal noise probability distribution is. Let $\p$ denote the probability distribution of the noise added to the query output. Then the optimization problem \eqref{eqn:objecinopt} and \eqref{eqn:diffgeneralnoise} is reduced to \begin{align} \mathop{\text{minimize}}\limits_{ \p} & \ \int_{x \in \R} \loss(x) \p (dx) \\ \text{subject to} & \ \p(S) \le e^{\e} \p(S + d), \forall \ \text{measurable set} \ S\subseteq \R, \ \forall |d| \le \D. \label{eqn:dpconstraintfinal} \end{align} Consider a staircase-shaped probability distribution with probability density function (p.d.f.) $f_{\gamma}(\cdot)$ defined as \begin{align} f_{\gamma}(x) = \begin{cases} a(\gamma) & x \in [0, \gamma \D) \\ e^{-\e} a(\gamma) & x \in [\gamma \D, \D) \\ e^{-k\e} f_{\gamma}(x - k\D) & x \in [ k \D, (k+1)\D) \; \text{for} \; k \in \N \\ f_{\gamma}(-x) & x<0 \end{cases}\label{eqn:deffgamma} \end{align} where \begin{align} a(\gamma) \triangleq \frac{ 1 - e^{-\e}}{2 \D (\gamma + e^{-\e}(1-\gamma))} \end{align} is a normalizing constant that makes $\int_{x\in \R} f_{\gamma}(x) dx = 1$.
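As a sanity check (ours, not part of the formal development), the following sketch transcribes $f_{\gamma}$, verifies the normalization by summing the exact per-piece areas, and confirms that the density satisfies the ratio form of the constraint \eqref{eqn:dpconstraintfinal}:

```python
import math

def staircase_pdf(x, eps, delta, gamma):
    # Direct transcription of f_gamma: value a(gamma) on [k*D, (k+gamma)*D),
    # e^{-eps} a(gamma) on [(k+gamma)*D, (k+1)*D), damped by e^{-k*eps},
    # and extended symmetrically to x < 0.
    a = (1.0 - math.exp(-eps)) / (2.0 * delta * (gamma + math.exp(-eps) * (1.0 - gamma)))
    x = abs(x)
    k = int(x // delta)
    frac = x - k * delta
    level = a if frac < gamma * delta else a * math.exp(-eps)
    return level * math.exp(-k * eps)

eps, delta, gamma = 1.0, 1.0, 0.4    # illustrative parameters
a = (1.0 - math.exp(-eps)) / (2.0 * delta * (gamma + math.exp(-eps) * (1.0 - gamma)))

# Normalization: the period [k*D, (k+1)*D) contributes
# e^{-k*eps} * a * D * (gamma + (1 - gamma) e^{-eps}) on each side of zero.
per_period = a * delta * (gamma + (1.0 - gamma) * math.exp(-eps))
total = 2.0 * sum(math.exp(-k * eps) * per_period for k in range(200))

# Privacy: f(x) <= e^eps * f(x + d) for every x and every |d| <= delta.
worst = 0.0
for i in range(-1500, 1500):
    x = i * 0.01 + 0.0005            # small offset avoids the jump points
    for d in (-delta, -0.37 * delta, 0.37 * delta, delta):
        worst = max(worst, staircase_pdf(x, eps, delta, gamma)
                    / staircase_pdf(x + d, eps, delta, gamma))
```

The worst-case ratio equals $e^{\e}$ exactly, attained when a shift by $\D$ crosses one full step of the staircase.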
Our main result is the following. \begin{theorem} If the cost function $\loss(\cdot)$ is symmetric and increasing, and $\sup_{x \ge T} \frac{\loss(x+1)}{\loss(x)} < + \infty$ for some $T>0$, then the optimal noise probability distribution has a staircase-shaped probability density function $f_{\gamma^*}(\cdot)$, where \begin{align} \gamma^* = \mathop{\arg \min} \limits_{\gamma \in [0,1] } \int_{x \in \R} \loss (x) f_{\gamma}(x) dx . \end{align} \end{theorem} We plot the probability density functions of the Laplacian mechanism and the staircase mechanism in Figure \ref{fig:probdf}. Figure \ref{fig:fgamma} in Section \ref{sec:result} gives a precise description of the staircase mechanism. The staircase mechanism is specified by three parameters: $\e$, $\D$, and $\gamma^*$, where $\gamma^*$ is determined by $\e$ and the cost function $\loss(\cdot)$. For certain classes of cost functions, there are closed-form expressions for the optimal $\gamma^*$. \begin{figure}[h] \begin{subfigure}[b]{0.5\linewidth} \centering \includegraphics[width=0.8\textwidth]{fig_laplace.pdf} \caption{ Laplacian Mechanism } \label{fig:compare1} \end{subfigure} \begin{subfigure}[b]{0.5\linewidth} \centering \includegraphics[width=0.8 \textwidth]{fig_stair.pdf} \caption{ Staircase Mechanism} \label{fig:compare2} \end{subfigure} \caption{Probability Density Functions of the Laplacian Mechanism and the Staircase Mechanism} \label{fig:probdf} \end{figure} \subsubsection{Applications: Minimum Noise Magnitude and Noise Power} We apply our main result, Theorem \ref{thm:main}, to two typical cost functions, $\loss(x) = |x|$ and $\loss(x) = x^2$, which measure noise magnitude and noise power, respectively. We derive closed-form expressions for the optimal parameter $\gamma^*$ for these two cost functions.
Comparing the optimal performance with that of the Laplacian mechanism, we show that in the high privacy regime ($\e$ is small), the Laplacian mechanism is asymptotically optimal as $\e \to 0$; in the low privacy regime ($\e$ is large), the minimum expectation of the noise amplitude and the minimum noise power are $\Theta(\D e^{-\frac{\e}{2}})$ and $\Theta(\D^2 e^{-\frac{2\e}{3}})$ as $\e \to +\infty$, while the expectation of the noise amplitude and the noise power under the Laplacian mechanism are $\frac{\D}{\e}$ and $\frac{2\D^2}{\e^2}$, respectively, where $\D$ is the sensitivity of the query function. We conclude that the gains are more pronounced in the low privacy regime. \subsubsection{Extension to the Discrete Setting} Since for many important practical applications query functions are integer-valued, we also derive the optimal differentially private mechanisms for answering a single integer-valued query function. We show that adding query-output-independent noise is optimal under a mild technical condition, and the optimal noise probability distribution has a staircase-shaped probability mass function, which can be viewed as the discrete variant of the staircase mechanism in the continuous setting. This result helps us directly compare our work with the existing works \cite{Ghosh09, minimax10} on integer-valued query functions. Our result shows that for an integer-valued query function, the optimal noise probability mass function is also staircase-shaped, and in the case where the sensitivity $\D = 1$, the optimal probability mass function reduces to the geometric distribution, which was derived in \cite{Ghosh09, minimax10}. Therefore, this result can be viewed as a generalization of \cite{Ghosh09, minimax10} in the discrete setting to query functions with arbitrary sensitivity.
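These comparisons are easy to reproduce numerically. The sketch below (our illustration; the paper derives closed forms instead) computes $\int_{x \in \R} \loss(x) f_{\gamma}(x) dx$ for $\loss(x)=|x|$ and $\loss(x)=x^2$ by exact per-piece integration, scans a grid of $\gamma$, and compares the minima against the Laplacian costs $\D/\e$ and $2\D^2/\e^2$ at $\e = 10$:

```python
import math

def staircase_cost(eps, delta, gamma, moment):
    # E[|X|^moment] for staircase noise, via exact per-piece integrals:
    # density a*b^k on [k*D, (k+gamma)*D), a*b^{k+1} on the rest of the period.
    b = math.exp(-eps)
    a = (1.0 - b) / (2.0 * delta * (gamma + b * (1.0 - gamma)))
    def piece(lo, hi):               # integral of x^moment over [lo, hi)
        return (hi ** (moment + 1) - lo ** (moment + 1)) / (moment + 1)
    total = 0.0
    for k in range(120):             # tail beyond 120 periods is negligible
        w = a * b ** k
        total += w * piece(k * delta, (k + gamma) * delta)
        total += w * b * piece((k + gamma) * delta, (k + 1) * delta)
    return 2.0 * total               # the density is symmetric about 0

eps, delta = 10.0, 1.0               # a low privacy regime, where gains are large
grid = [g / 2000.0 for g in range(1, 2000)]
amp_star = min(staircase_cost(eps, delta, g, 1) for g in grid)
pow_star = min(staircase_cost(eps, delta, g, 2) for g in grid)
amp_lap = delta / eps                # E|Lap(delta/eps)|
pow_lap = 2.0 * delta ** 2 / eps ** 2
gain_amp = amp_lap / amp_star
gain_pow = pow_lap / pow_star
```

At $\e = 10$ this reproduces gains of roughly 15x in expected noise amplitude and 23x in noise power over the Laplacian mechanism.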
\subsection{Connection to the Literature}\label{subsec:connection} In this section, we discuss the relations between our results and some directly related works in the literature, and the implications of our results for other works. \subsubsection{Laplacian Mechanism vs. Staircase Mechanism} The Laplacian mechanism is specified by two parameters, $\e$ and the query function sensitivity $\D$, which together completely characterize the differential privacy constraint. On the other hand, the staircase mechanism is specified by three parameters: $\e$, $\D$, and $\gamma^*$, where $\gamma^*$ is determined by $\e$ and the utility/cost function. For certain classes of utility/cost functions, there are closed-form expressions for the optimal $\gamma^*$. From the two examples given in Section \ref{sec:application}, we can see that although the Laplacian mechanism is not strictly optimal, in the high privacy regime ($\e \to 0$) the Laplacian mechanism is asymptotically optimal: \begin{itemize} \item For the expectation of the noise amplitude, the additive gap from the optimal value goes to 0 as $\e \to 0$. \item For the noise power, the additive gap from the optimal value is upper bounded by a constant as $\e \to 0$. \end{itemize} However, in the low privacy regime ($\e \to +\infty$), the multiplicative gap from the optimal values can be arbitrarily large. We conclude that in the high privacy regime, the Laplacian mechanism is nearly optimal, while in the low privacy regime significant improvement can be achieved by using the staircase mechanism. We plot the multiplicative gain of the staircase mechanism over the Laplacian mechanism for the expectation of the noise amplitude and the noise power in Figure \ref{fig:comparison}, where $V_{\text{Optimal}}$ is the optimal (minimum) cost, which is achieved by the staircase mechanism, and $V_{Lap}$ is the cost of the Laplacian mechanism.
We can see that for $\e \approx 10$, the staircase mechanism gives about a 15-fold and a 23-fold improvement in noise amplitude and noise power, respectively. Since the staircase mechanism is derived under the same problem setting as the Laplacian mechanism, the staircase mechanism can be applied wherever the Laplacian mechanism is used, and it performs strictly better than the Laplacian mechanism (and significantly better in low privacy scenarios). \begin{figure}[h] \begin{subfigure}[b]{0.5\linewidth} \centering \includegraphics[width=1.1\textwidth]{fig1.pdf} \caption{ $0 < \e \le 10 $} \label{fig:gain1} \end{subfigure} \begin{subfigure}[b]{0.5\linewidth} \centering \includegraphics[width=1.1\textwidth]{fig2.pdf} \caption{ $10 \le \e \le 20$} \label{fig:gain2} \end{subfigure} \caption{Multiplicative Gain of the Staircase Mechanism over the Laplacian Mechanism. } \label{fig:comparison} \end{figure} \subsubsection{Relation to Ghosh et al. \cite{Ghosh09} } Ghosh, Roughgarden, and Sundararajan \cite{Ghosh09} show that for a single count query with sensitivity $\D = 1$, for a general class of utility functions, to minimize the expected cost under a Bayesian framework the optimal mechanism to preserve differential privacy is the geometric mechanism, which adds noise with a geometric distribution. We discuss the relations and differences between \cite{Ghosh09} and our work in the following. Both \cite{Ghosh09} and our work are similar in that, given the query output, the cost function only depends on the additive noise magnitude, and is an increasing function of the noise magnitude. On the other hand, there are two main differences: \begin{itemize} \item \cite{Ghosh09} works under a Bayesian setting, while ours is to minimize the worst-case cost. \item \cite{Ghosh09} studies a count query where the query output is integer-valued and bounded, and the sensitivity is unity.
In our work, we first study general real-valued query functions, where the query output can take any real value, and then generalize the result to the discrete setting, where the query output is integer-valued. In both cases, the sensitivity of the query function can be arbitrary, not restricted to one. \end{itemize} \subsubsection{Relation to Gupte and Sundararajan \cite{minimax10} } Gupte and Sundararajan \cite{minimax10} derive the optimal noise probability distributions for a single count query with sensitivity $\D = 1$ for minimax (risk-averse) users. Their model is the same as the one in \cite{Ghosh09}, except that their objective is to minimize the worst-case cost, the same as our objective. \cite{minimax10} shows that although there is no universally optimal solution to the minimax optimization problem in \cite{minimax10} for a general class of cost functions, each solution (corresponding to a different cost function) can be derived from the same geometric mechanism by a random remapping. As in \cite{Ghosh09}, \cite{minimax10} assumes the query output is bounded. Our result shows that when the query sensitivity is one, without any boundedness assumption on the query output, the optimal mechanism is to add random noise with a geometric distribution to the query output. \subsubsection{Relation to Brenner and Nissim \cite{Nissim10} } While \cite{Ghosh09} shows that for a single count query with sensitivity $\D = 1$ there is a universally optimal mechanism for a general class of utility functions under a Bayesian framework, Brenner and Nissim \cite{Nissim10} show that for general query functions no universally optimal mechanisms exist. Indeed, this follows directly from our results: under our optimization framework, the optimal mechanism is to add noise with a staircase-shaped probability distribution, which is specified by the three parameters $\e, \D$ and $\gamma^*$, where in general $\gamma^*$ depends on the cost function.
Generally, for different cost functions, the optimal noise probability distributions have staircase-shaped probability density functions specified by different parameters. \subsubsection{Relation to Nissim, Raskhodnikova and Smith \cite{NRS07} } Nissim, Raskhodnikova and Smith \cite{NRS07} show that for certain nonlinear query functions, one can improve the accuracy by adding data-dependent noise calibrated to the smooth sensitivity of the query function, which is based on the local sensitivity of the query function. In our model, we use only the global sensitivity of the query function, and assume that the local sensitivity is the same as the global sensitivity, which holds for a general class of query functions, e.g., count and sum. \subsubsection{Relation to Hardt and Talwar \cite{geometry} } Hardt and Talwar \cite{geometry} study the tradeoff between privacy and error for answering a set of linear queries over a histogram in a differentially private way. The error is defined as the worst-case expectation of the $\ell^2$-norm of the noise. The lower bound given in \cite{geometry} is $\Omega( \e^{-1} d\sqrt{d})$, where $d$ is the number of linear queries. An immediate consequence of our result is that for fixed $d$, as $\e \to +\infty$, an upper bound of $\Theta(e^{-\frac{\e}{3d}}d \sqrt{d})$ is achievable by adding independent staircase-shaped noise with parameter $\frac{\e}{d}$ to each component. \subsubsection{Relation to Other Works} There are many existing works on how to improve the accuracy for answering more complex queries under differential privacy, in which the basic building block is the standard Laplacian mechanism. For example, Hay et al. \cite{Hay10} show that one can improve the accuracy for a general class of histogram queries by exploiting the consistency constraints on the query output, and Li et al.
\cite{Li10} study how to optimize linear counting queries under differential privacy by carefully choosing the set of linear queries to be answered. In these works, the error is measured by the mean squared error of the query output estimates, which corresponds to the variance of the noise added to the query output to preserve differential privacy. In terms of $\epsilon$, the error bound in these works scales linearly with $\frac{1}{\e^2}$ because of the use of Laplacian noise. If the Laplacian distribution is replaced by the staircase distribution in these works, one can improve the error bound to $\Theta(e^{-C\e})$ (for some constant $C$ which depends on the number of queries) as $\e \to +\infty$ (corresponding to the low privacy regime). \subsection{Organization} The paper is organized as follows. We show the optimality of query-output-independent perturbation in Section \ref{sec:optimality}, and present the optimal differentially private mechanism, the staircase mechanism, in Section \ref{sec:result}. In Section \ref{sec:application}, we apply our main result to derive the optimal noise probability distributions with minimum expectation of noise amplitude and minimum noise power, respectively, and compare their performance with the Laplacian mechanism. Section \ref{sec:gammaproperty} presents the asymptotic properties of $\gamma^*$ in the staircase mechanism for momentum cost functions, and suggests a heuristic choice of $\gamma$ that appears to work well for a wide class of cost functions. Section \ref{sec:discrete} generalizes the staircase mechanism to integer-valued query functions in the discrete setting, and Section \ref{sec:abstractsetting} extends the staircase mechanism to the abstract setting. Section \ref{sec:conclusion} concludes this paper.
\section{Introduction} \label{sec:intro} \input{introduction.tex} \section{Optimality of Query-Output Independent Perturbation} \label{sec:optimality} \input{optimality.tex} \section{Optimal Noise Probability Distribution} \label{sec:result} \input{mainresult.tex} \section{Applications} \label{sec:application} \input{application.tex} \section{Property of $\gamma^*$} \label{sec:gammaproperty} \input{gammaproperty.tex} \section{Extension to the Discrete Setting} \label{sec:discrete} \input{discrete.tex} \section{Extension to the Abstract Setting} \label{sec:abstractsetting} \input{abstractsetting.tex} \section{Conclusion} \label{sec:conclusion} \input{conclusion.tex} \appendices \section{Proof of Theorem \ref{thm:maingeneral}} \label{app:optimality} \input{optimalityproof.tex} \section{Proof of Theorem \ref{thm:main}}\label{sec:proof} \input{proof.tex} \section{Proof of Theorem \ref{thm:1}}\label{app:1} \input{app_p1.tex} \section{Proof of Theorem \ref{thm:3}}\label{app:3} \input{app_p3.tex} \section{Proof of Theorem \ref{thm:gammaprop}}\label{app:gammaprop} \input{proofgamma.tex} \section{Proof of Theorem \ref{thm:discrete1} and Theorem \ref{thm:discrete2}}\label{sec:discrete_proof} \input{discreteproof.tex} \section*{Acknowledgment}\label{sec:acknowledgment} \input{acknowledgment.tex} \bibliographystyle{IEEEtran} \subsection{Outline of Proof} The key idea of the proof is to use a sequence of probability distributions with piecewise constant probability density functions to approximate any probability distribution satisfying the differential privacy constraint \eqref{eqn:dpconstraintfinal}. The proof consists of eight steps in total, and in each step we narrow down the set of probability distributions in which the optimal probability distribution must lie: \begin{itemize} \item Step 1 proves that we only need to consider symmetric probability distributions.
\item Step 2 and Step 3 prove that we only need to consider probability distributions which have symmetric piecewise constant probability density functions. \item Step 4 proves that we only need to consider those symmetric piecewise constant probability density functions which are monotonically decreasing for $x \ge 0$. \item Step 5 proves that the optimal probability density function should periodically decay. \item Step 6, Step 7 and Step 8 prove that the optimal probability density function over the interval $[0, \D)$ is a step function, and they conclude the proof of Theorem \ref{thm:main}. \end{itemize} \subsection{Step 1} Define \begin{align} V^* \triangleq \inf_{\p \in \sP} \int_{x \in \R} \loss (x) \p(dx). \label{def:Vstar} \end{align} Our goal is to prove that $V^* = \inf\limits_{\gamma \in [0,1]} \int_{x \in \R} \loss (x) \p_{\gamma}(dx) $. If $V^* = + \infty$, then due to the definition of $V^*$, we have \begin{align} \inf_{\gamma \in [0,1]} \int_{x \in \R} \loss (x) \p_{\gamma}(dx) \ge V^* = + \infty, \end{align} and thus $\inf_{\gamma \in [0,1]} \int_{x \in \R} \loss (x) \p_{\gamma}(dx) = V^* = + \infty$. So we only need to consider the case $V^* < +\infty$, i.e., the case where $V^*$ is finite. Therefore, in the rest of the proof, we assume $V^*$ is finite. First we prove that we only need to consider symmetric probability measures. \begin{lemma}\label{lem:symmetric} Given $\p \in \sP$, define a symmetric probability distribution ${\p_{\text{sym} }}$ as \begin{align} {\p_{\text{sym} }}(S) \triangleq \frac{\p(S) + \p(-S)}{2}, \forall \ \text{measurable set} \ S \subseteq \R,\label{eqn:definesym} \end{align} where the set $ -S \triangleq \{ - x \ | \ x \in S \} $. Then ${\p_{\text{sym} }} \in \sP$, i.e., ${\p_{\text{sym} }}$ satisfies the differential privacy constraint \eqref{eqn:dpconstraintfinal}, and \begin{align} \int_{x \in \R} \loss (x) {\p_{\text{sym} }} (dx) = \int_{x \in \R} \loss (x) \p (dx).
\end{align} \end{lemma} \begin{IEEEproof} It is easy to verify that ${\p_{\text{sym} }}$ is a valid probability distribution. Due to the definition of ${\p_{\text{sym} }}$ in \eqref{eqn:definesym}, we have \begin{align} {\p_{\text{sym} }}(S) = \frac{\p(S) + \p(-S)}{2} = {\p_{\text{sym} }}(-S), \end{align} for any measurable set $S \subseteq \R$. Thus, ${\p_{\text{sym} }}$ is a symmetric probability distribution. Next, we show that ${\p_{\text{sym} }}$ satisfies \eqref{eqn:dpconstraintfinal}. Indeed, $\forall$ measurable set $S \subseteq \R$ and $\forall |d| \le \D$, \begin{align} {\p_{\text{sym} }}(S) &= \frac{\p(S) + \p(-S)}{2} \\ & \le \frac{e^{\e} \p(S+d) + e^{\e} \p(-S-d)}{2} \label{eqn:temp1} \\ & = \frac{e^{\e} \p(S+d) + e^{\e} \p(-(S+d))}{2} \\ & = e^{\e} {\p_{\text{sym} }}(S+d), \end{align} where in \eqref{eqn:temp1} we use the facts $\p(S) \le e^{\e} \p(S+d) $ and $\p(-S) \le e^{\e} \p(-S-d) $. Lastly, since $\loss(x)$ is symmetric, \begin{align} \int_{x \in \R} \loss (x) \p (dx) & = \int_{x \in \R} \frac{\loss (x) + \loss (-x)} {2} \p (dx) \\ &= \int_{x \in \R} \loss (x) {\p_{\text{sym} }} (dx). \end{align} \end{IEEEproof} Therefore, if we define \begin{align} \sP_{\text{sym}} \triangleq \{ {\p_{\text{sym} }} | \p \in \sP \}, \end{align} then due to Lemma \ref{lem:symmetric}, we have the following lemma. \begin{lemma} \begin{align} V^* = \inf_{\p \in \sP_{\text{sym}}} \int_{x \in \R} \loss (x) \p(dx) . \end{align} \end{lemma} \subsection{Step 2} Next we prove that for any probability distribution $\p$ satisfying the differential privacy constraint \eqref{eqn:dpconstraintfinal}, we have $\text{Pr}(\noise = x) = 0, \forall x \in \R $, and $\p([y,z]) \neq 0$ for any $y < z \in \R$. \begin{lemma}\label{lem:no_point_mass} $\forall \p \in \sP, \forall x \in \R$, $\p(\{x\}) = 0$. Moreover, for any $y < z \in \R$, $\p([y,z]) \neq 0$. \end{lemma} \begin{IEEEproof} Given $\p \in \sP$, suppose $\p(\{x_0\}) = p_0 >0$ for some $x_0 \in \R$.
Then for any $x \in [x_0, x_0 + \D]$, \begin{align} \p(\{x\}) \ge e^{-\e} p_0, \end{align} due to \eqref{eqn:dpconstraintfinal}. So $\p(\{x\})$ is strictly lower bounded by a positive constant for an uncountable number of $x$, and thus $\p([x_0, x_0 + \D]) = + \infty$, which contradicts the fact that $\p$ is a probability distribution. Therefore, $\forall \p \in \sP, \forall x \in \R$, $\p(\{x\}) = 0$. Suppose $\p([y,z]) = 0$ for some $y <z \in \R$. Then from \eqref{eqn:dpconstraintfinal} we have for any $|d| \le \D$, \begin{align} \p([y+d,z+d]) \le e^{\e} \p([y,z]) = 0, \end{align} and thus $\p([y+d,z+d]) = 0$. By induction, for any $k \in \Z$, $\p([y+kd, z+kd]) = 0$. Choosing $0 < d \le \min(\D, z-y)$, the intervals $[y+kd, z+kd]$, $k \in \Z$, cover $\R$, which implies that $\p((-\infty, +\infty)) =0 $, a contradiction. So for any $y < z \in \R$, $\p([y,z]) \neq 0$. \end{IEEEproof} \subsection{Step 3} In this subsection, we show that for any $\p \in \sP_{\text{sym}}$ with \begin{align} V(\p) \triangleq \int_{x \in \R} \loss(x) \p(dx) < + \infty, \end{align} we can use a sequence of probability measures $\{ \p_i \in \sP_{\text{sym}} \}_{i \ge 1}$ with symmetric piecewise constant probability density functions to approximate $\p$ with $\lim_{i \to +\infty} V(\p_i) = V(\p)$. \begin{lemma}\label{lem:approx} Given $\p \in \sP_{\text{sym}}$ with $V(\p) < + \infty$, for any positive integer $i \in \N$, define $\p_i$ as the probability distribution with a symmetric probability density function $f_i(x)$ defined as \begin{align} f_i(x) = \begin{cases} a_k \triangleq \frac{ \p([k\frac{\D}{i}, (k+1)\frac{\D}{i} ))}{\frac{\D}{i}} & x\in [k\frac{\D}{i}, (k+1)\frac{\D}{i} ) \; \text{for} \; k \in \N \\ f_i(-x) & x < 0 \end{cases} \end{align} Then $\p_i \in \sP_{\text{sym}}$ and \begin{align} \lim_{i \to +\infty} V(\p_i) = V(\p). \end{align} \end{lemma} \begin{IEEEproof} First we prove that $\p_i \in \sP_{\text{sym}}$, i.e., $\p_i$ is symmetric and satisfies the differential privacy constraint \eqref{eqn:dpconstraintfinal}.
By definition $f_i(x)$ is a symmetric and nonnegative function, and \begin{align} \int_{-\infty}^{+\infty} f_i (x) dx &= 2 \int_{0}^{+\infty} f_i (x) dx \\ &= 2 \int_{x \in [0, +\infty)} \p (dx) \\ &= 2 \int_{x \in (0, +\infty)} \p (dx) \label{eqn:limzero} \\ & = 1, \end{align} where in \eqref{eqn:limzero} we used the fact that $\p(\{0\}) = 0$ due to Lemma \ref{lem:no_point_mass}. In addition, due to Lemma \ref{lem:no_point_mass}, $a_k >0, \forall k \in \N$. So $f_i (x)$ is a valid symmetric probability density function, and thus $\p_i$ is a valid symmetric probability distribution. Define the density sequence of $\p_i$ as the sequence $\{a_0, a_1, a_2, \dots, a_n, \dots \}$. Since $\p$ satisfies \eqref{eqn:dpconstraintfinal}, it is easy to see that \begin{align} a_j \le e^{\e} a_{j+k} \; \text{and} \; a_{j+k} \le e^{\e} a_j, \forall j \ge 0, 0 \le k \le i. \end{align} Therefore, for any $x,y$ such that $|x-y| \le \D$, we have \begin{align} f_i(x) \le e^{\e} f_i(y) \; \text{and} \; f_i(y) \le e^{\e} f_i(x), \end{align} which implies that $\p_i$ satisfies \eqref{eqn:dpconstraintfinal}. Hence, $\p_i \in \sP_{\text{sym}}$. Next we show that \begin{align} \lim_{i \to +\infty} V(\p_i) = V(\p). \end{align} Since $\loss(x)$ satisfies Property \ref{property2}, there exists a constant $B > 0$ such that \begin{align} \loss(x+1) \le B \loss(x), \forall x \ge T. \end{align} Given $\delta >0$, since $V(\p)$ is finite, there exists an integer $T^* >T$ such that \begin{align} \int_{x \ge T^*} \loss(x) \p(dx) < \frac{\delta}{B}. \end{align} For any integers $i \ge 1 $ and $N \ge T^* $, \begin{align} \int_{x \in [N,N+1)} \loss(x) \p_i (dx) & \le \p_i([N,N+1)) \loss(N+1) \\ &= \p([N,N+1)) \loss(N+1) \\ &\le \int_{x \in [N,N+1)} B \loss(x) \p (dx). \end{align} Therefore, \begin{align} \int_{x \in [T^*, +\infty)} \loss(x) \p_i (dx) &\le \int_{x \in [T^*, +\infty)} B \loss(x) \p (dx) \\ &\le B \frac{\delta}{B} \\ &= \delta.
\end{align} For $x\in [0, T^*)$, $\loss(x)$ is a bounded function, and thus by the definition of the Riemann--Stieltjes integral, we have \begin{align} \lim_{i \to \infty} \int_{x \in [0, T^*)} \loss(x) \p_i (dx) = \int_{x \in [0, T^*)} \loss(x) \p (dx). \end{align} So there exists a sufficiently large integer $i^*$ such that for all $i \ge i^*$ \begin{align} \left | \int_{x \in [0, T^*)} \loss(x) \p_{i} (dx) - \int_{x \in [0, T^*)} \loss(x) \p (dx) \right | \le \delta. \end{align} Hence, for all $i \ge i^*$ \begin{align} &\; |V(\p_i) - V(\p)| \\ =&\; \left | \int_{x \in \R} \loss(x) \p_{i} (dx) - \int_{x \in \R} \loss(x) \p (dx) \right | \\ =&\; 2 \left |\int_{x \in [0, T^*)} \loss(x) \p_{i} (dx) - \int_{x \in [0, T^*)} \loss(x) \p(dx) + \int_{x \in [T^*, +\infty)} \loss(x) \p_{i} (dx) - \int_{x \in [T^*, +\infty)} \loss(x) \p (dx) \right | \\ \le &\; 2 \left |\int_{x \in [0, T^*)} \loss(x) \p_{i} (dx) - \int_{x \in [0, T^*)} \loss(x) \p(dx) \right | + 2 \int_{x \in [T^*, +\infty)} \loss(x) \p_{i} (dx) + 2 \int_{x \in [T^*, +\infty)} \loss(x) \p (dx) \\ \le &\; 2 (\delta + \delta + \frac{\delta}{B}) \\ = &\; (4+ \frac{2}{B}) \delta. \end{align} Therefore, \begin{align} \lim_{i \to +\infty} \int_{x \in \R } \loss(x) \p_i (dx) = \int_{x \in \R } \loss(x) \p (dx). \end{align} \end{IEEEproof} Define ${\sP_{i, \text{sym} }} \triangleq \{\p_i | \p \in \sP_{\text{sym}}\}$ for $ i \ge 1$, i.e., ${\sP_{i, \text{sym} }}$ is the set of probability distributions satisfying the differential privacy constraint \eqref{eqn:dpconstraintfinal} and having symmetric piecewise constant (over intervals $[k\frac{\D}{i}, (k+1)\frac{\D}{i} ) \; \forall k \in \N$ ) probability density functions. Due to Lemma \ref{lem:approx}, \begin{lemma}\label{lem:piecewiseconstant} \begin{align} V^* = \inf_{\p \in \cup_{i=1}^{\infty} {\sP_{i, \text{sym} }}} \int_{x \in \R } \loss(x) \p (dx).
\end{align} \end{lemma} Therefore, to characterize $V^*$, we only need to study probability distributions with symmetric and piecewise constant probability density functions. \subsection{Step 4} Next we show that indeed we only need to consider those probability distributions with symmetric piecewise constant probability density functions which are \emph{monotonically decreasing} when $x \ge 0$. \begin{lemma}\label{lem:monotone} Given $\p_a \in {\sP_{i, \text{sym} }}$ with symmetric piecewise constant probability density function $f(\cdot)$, let $\{a_0,a_1,\dots,a_n,\dots\}$ be the density sequence of $f(\cdot)$, i.e., \begin{align} f(x) = a_k, x \in [k\frac{\D}{i}, (k+1)\frac{\D}{i} ) \; \forall k \in \N. \end{align} Then we can construct a new probability distribution $\p_b \in {\sP_{i, \text{sym} }}$ whose probability density function is monotonically decreasing for $x \ge 0$, and \begin{align} \int_{x \in \R } \loss(x) \p_b (dx) \le \int_{x \in \R } \loss(x) \p_a(dx). \end{align} \end{lemma} \begin{proof} Since $a_k > 0, \forall k \in \N$, and \begin{align} \sum_{k=0}^{+\infty} a_k \frac{\D}{i} = \frac{1}{2}, \end{align} we have $\lim_{k \to +\infty} a_k = 0$. Given the density sequence $\{a_0,a_1,\dots,a_n,\dots \}$, construct a new monotonically decreasing density sequence $\{b_0,b_1,\dots,b_n,\dots \}$ and a bijective mapping $\pi: \N \to \N$ as follows: \begin{align} I_0 &= \argmax_{k \in \N} a_k, \label{eqn:max1}\\ \pi (0) &= \min_{n \in I_0} n, \text{i.e., the smallest element in} \; I_0, \\ b_0 &= a_{\pi(0)}, \\ \intertext{and for all $m \in \N$ with $m \ge 1$,} I_m &= \argmax_{k \in \N \backslash \{ \pi(j) | j< m \}} a_k, \label{eqn:max2} \\ \pi (m) &= \min_{n \in I_m} n, \text{i.e., the smallest element in} \; I_m, \\ b_m &= a_{\pi(m)}. \end{align} Since the sequence $\{a_k\}$ converges to 0, the maximum of $\{a_k\}$ always exists in \eqref{eqn:max1} and \eqref{eqn:max2}. Therefore, $I_m$ is well defined for all $m \in \N$.
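The greedy selection above (take the largest remaining value, breaking ties by the smallest index) is easy to simulate. The following sketch is our own illustration, not part of the proof: it uses a finite truncation of a density sequence of our choosing, with $e^{\e} = 2$ and $i = 3$, and checks that the rearranged sequence is monotone and still satisfies the ratio constraint on the truncated range.

```python
import numpy as np

# Illustration only: the greedy construction of {b_m} and the bijection pi
# from a (truncated) density sequence {a_k}; the example values are ours.
def decreasing_rearrangement(a):
    a = np.asarray(a, dtype=float)
    # A stable sort on -a keeps the original order among equal values,
    # which realises exactly the smallest-index tie-breaking rule for pi.
    pi = np.argsort(-a, kind="stable")
    return a[pi], pi

eps = np.log(2.0)   # privacy parameter, e^eps = 2
i = 3               # number of pieces per length-Delta interval
# satisfies the constraint a_j <= e^eps * a_{j+k} for 0 <= k <= i
a = [4.0, 3.0, 3.5, 2.5, 2.0, 1.75, 1.5, 1.25, 1.0]
b, pi = decreasing_rearrangement(a)

# b is monotonically decreasing, and (on the truncated range) the
# differential privacy ratio constraint b_k / b_{k+i} <= e^eps survives
assert all(b[k] >= b[k + 1] for k in range(len(b) - 1))
assert all(b[k] / b[k + i] <= np.exp(eps) + 1e-12 for k in range(len(b) - i))

# pairing larger densities with smaller values of an increasing loss
# (here: any nondecreasing weight vector) can only decrease the cost
w = np.arange(len(b), dtype=float)
assert w @ b <= w @ np.asarray(a) + 1e-12
```

The truncation is only for illustration; in the proof the argument is carried out on the full infinite sequence.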
Note that since $\sum_{k=0}^{\infty} a_k \frac{\D}{i} = \frac{1}{2}$ and the sequence $\{b_k\}_{k \in \N}$ is simply a permutation of $\{a_k\}_{k \in \N}$, $\sum_{k=0}^{\infty} b_k \frac{\D}{i} = \frac{1}{2}$. Therefore, if we define a function $g(\cdot)$ as \begin{align} g(x) = \begin{cases} b_k & x\in [k\frac{\D}{i}, (k+1)\frac{\D}{i} ) \; \text{for} \; k \in \N \\ g(-x) & x < 0 \end{cases} \end{align} then $g(\cdot)$ is a valid symmetric probability density function, and \begin{align} \int_{x\in \R} \loss(x) g(x) dx \le \int_{x\in \R} \loss(x) f(x) dx, \end{align} where the inequality follows from the rearrangement inequality: $\{b_k\}_{k \in \N}$ pairs the larger densities with the smaller values of $\loss(\cdot)$, which is monotonically increasing for $x \ge 0$. Next, we prove that the probability distribution $\p_b$ with probability density function $g(\cdot)$ satisfies the differential privacy constraint \eqref{eqn:dpconstraintfinal}. Since $\{b_k\}_{k \in \N}$ is a monotonically decreasing sequence, it is sufficient and necessary to prove that for all $k \in \N$, \begin{align} \frac{b_k}{b_{k+i}} \le e^{\e}. \end{align} To simplify notation, given $k$, we define \begin{align} a^*(k) = \min_{ k \le j \le k+i} a_j, \end{align} i.e., $a^*(k)$ denotes the smallest number among $\{ a_{k}, a_{k+1}, \dots, a_{k+i} \}$. First, when $k = 0$, it is easy to prove that $\frac{b_0}{b_{i}} \le e^{\e}$. Indeed, recall that $b_0 = a_{\pi(0)}$ and consider the $i+1$ consecutive numbers $\{a_{\pi(0)}, a_{\pi(0)+1}, \dots, a_{\pi(0)+i} \}$ in the original sequence $\{a_k\}_{k \in \N}$. Then $a^*(\pi(0)) \le b_i$, since $b_i$ is the $(i+1)$th largest number in the sequence $\{a_k\}_{k \in \N}$. Therefore, \begin{align} \frac{b_0}{b_{i}} = \frac{a_{\pi(0)}}{b_{i}} \le \frac{a_{\pi(0)}}{a^*(\pi(0))} \le e^{\e}. \end{align} For $k = 1$, $b_1 = a_{\pi(1)}$ and consider the $i+1$ consecutive numbers $\{a_{\pi(1)}, a_{\pi(1)+1}, \dots, a_{\pi(1)+i} \}$. If $\pi(0) \notin [\pi(1), \pi(1)+i]$, then $a^*(\pi(1)) \le b_{i+1}$, and thus \begin{align} \frac{b_1}{b_{i+1}} = \frac{a_{\pi(1)}}{b_{1+i}} \le \frac{a_{\pi(1)}}{a^*(\pi(1))} \le e^{\e}.
\end{align} If $\pi(0) \in [\pi(1), \pi(1)+i]$, then $a^*(\pi(0)) \le b_{i+1}$ and $\frac{a_{\pi(0)}}{a^*(\pi(0))} \le e^{\e}$. Therefore, \begin{align} \frac{b_1}{b_{i+1}} \le \frac{b_0}{b_{1+i}} \le \frac{b_0}{a^*(\pi(0))} \le e^{\e}. \end{align} Hence, $\frac{b_k}{b_{k+i}} \le e^{\e}$ holds for $k = 1$. In general, given $k$, we prove $\frac{b_k}{b_{k+i}} \le e^{\e}$ as follows. First, if $\pi(j) \notin [\pi(k), \pi(k)+i], \forall j <k$, then $a^*(\pi(k)) \le b_{k+i}$, and hence \begin{align} \frac{b_k}{b_{i+k}} = \frac{a_{\pi(k)}}{b_{i+k}} \le \frac{a_{\pi(k)}}{a^*(\pi(k))} \le e^{\e}. \end{align} If there exists $j < k$ with $\pi(j) \in [\pi(k)+1, \pi(k)+i]$, we use Algorithm \ref{algo:1} to compute a number $j^*$ such that $ j^* < k $ and $\pi(j) \notin [\pi(j^*)+1, \pi(j^*)+i], \forall j<k$. \begin{algorithm} \caption{} \label{algo:1} \begin{algorithmic} \State $j^* \gets k$ \While{ there exists some $j <k $ with $\pi(j) \in [\pi(j^*)+1, \pi(j^*)+i]$} \State $j^* \gets j$ \EndWhile \State Output $j^*$ \end{algorithmic} \end{algorithm} It is easy to show that the loop in Algorithm \ref{algo:1} will terminate after at most $k$ steps. After finding $j^*$, we have $j^* < k$, and $a^*(\pi(j^*)) \le b_{k+i}$. Therefore \begin{align} \frac{b_k}{b_{i+k}} \le \frac{a_{\pi(j^*)}}{b_{i+k}} \le \frac{a_{\pi(j^*)}}{a^*(\pi(j^*))} \le e^{\e}. \end{align} So $\frac{b_k}{b_{k+i}} \le e^{\e} $ holds for all $k \in \N$. Therefore, $\p_b \in {\sP_{i, \text{sym} }}$. This completes the proof of Lemma \ref{lem:monotone}. \end{proof} Therefore, if we define \begin{align} \pa \triangleq \{ \p | \p \in {\sP_{i, \text{sym} }}, \; \text{and} \; \text{the density sequence of}\; \p \; \text{is monotonically decreasing} \}, \end{align} then due to Lemma \ref{lem:monotone}, \begin{lemma}\label{lem:monotonelem} \begin{align} V^* = \inf_{\p \in \cup_{i=1}^{\infty} \pa} \int_{x \in \R} \loss(x) \p(dx) .
\end{align} \end{lemma} \subsection{Step 5} Next we show that among all symmetric piecewise constant probability density functions, we only need to consider those which are periodically decaying. More precisely, given a positive integer $i$, define \begin{align} \pb \triangleq \{ \p | \p \in \pa, \; \text{and} \; \p \text{ has density sequence}\; \{a_0,a_1,\dots,a_n,\dots\} \; \text{satisfying} \; \frac{a_k}{a_{k+i}} = e^{\e}, \forall k \in \N \}. \end{align} Then \begin{lemma}\label{lem:pd} \begin{align} V^* = \inf_{ \p \in \cup_{i=1}^{\infty} \pb} \int_{x \in \R} \loss(x) \p(dx). \end{align} \end{lemma} \begin{IEEEproof} Due to Lemma \ref{lem:monotonelem}, we only need to consider probability distributions with symmetric and piecewise constant probability density functions which are monotonically decreasing for $x \ge 0$. We first show that given $\p_a \in \pa$ with density sequence $\{a_0,a_1,\dots,a_n,\dots\}$, if $\frac{a_{0}}{a_{i}} < e^{\e}$, then we can construct a probability distribution $\p_b \in \pa$ with density sequence $\{b_0,b_1,\dots,b_n,\dots\}$ such that $\frac{b_{0}}{b_{i}} = e^{\e}$ and \begin{align} V(\p_a) \ge V(\p_b). \end{align} Define a new sequence $\{b_0,b_1,\dots,b_n,\dots\}$ by scaling up $a_0$ and scaling down $\{a_1,a_2,\dots\}$. More precisely, let $\delta = \frac{i}{2\D ( (\frac{i}{2\D}-a_0)e^{-\e}\frac{a_0}{a_i} + a_0 )} - 1 >0$, and set \begin{align} b_0 &= a_0 (1 + \delta), \\ b_k &= a_k ( 1 - \delta'), \forall \; k \ge 1, \end{align} where $ \delta' \triangleq \frac{a_0 \delta}{ \frac{i}{2\D} - a_0} >0$, and we have chosen $\delta$ such that $\frac{b_0}{b_i} = \frac{a_0}{a_i} \frac{\frac{i}{2\D} - a_0}{ \frac{i}{2\D(1+\delta)} - a_0} = e^{\e}$. It is easy to see that the sequence $\{b_0,b_1,\dots,b_n,\dots\}$ corresponds to a valid probability density function and also satisfies the differential privacy constraint \eqref{eqn:dpconstraintfinal}, i.e., \begin{align} \frac{b_k}{b_{k+i}} \le e^{\e}, \forall k \ge 0.
\end{align} Let $\p_b$ be the probability distribution with $\{b_0,b_1,\dots,b_n,\dots\}$ as the density sequence of its probability density function. Next we show $V(\p_b) \le V(\p_a)$. It is easy to compute $V(\p_a)$, which is \begin{align} V(\p_a) = 2 \left( a_0 \int_{0}^{\frac{\D}{i}} \loss(x) dx + \sum_{k=1}^{\infty} a_k \int_{k\frac{\D}{i}}^{(k+1)\frac{\D}{i}} \loss(x) dx \right). \end{align} Similarly, we can compute $V(\p_b)$ by \begin{align} V(\p_b) &= 2 \left( b_0 \int_{0}^{\frac{\D}{i}} \loss(x) dx + \sum_{k=1}^{\infty} b_k \int_{k\frac{\D}{i}}^{(k+1)\frac{\D}{i}} \loss(x) dx \right) \\ &= V(\p_a) + 2 \left(a_0 \delta \int_{0}^{\frac{\D}{i}} \loss(x) dx - \delta' \sum_{k=1}^{\infty} a_k \int_{k\frac{\D}{i}}^{(k+1)\frac{\D}{i}} \loss(x) dx \right) \\ &= V(\p_a) + 2 \frac{a_0 \delta}{ \frac{i}{2\D} - a_0 } \left( \sum_{k=1}^{\infty} a_k \int_{0}^{\frac{\D}{i}} \loss(x) dx - \sum_{k=1}^{\infty} a_k \int_{k\frac{\D}{i}}^{(k+1)\frac{\D}{i}} \loss(x) dx \right) \\ &= V(\p_a) + 2 \frac{a_0 \delta}{ \frac{i}{2\D} - a_0 } \sum_{k=1}^{\infty} a_k \left( \int_{0}^{\frac{\D}{i}} \loss(x) dx - \int_{k\frac{\D}{i}}^{(k+1)\frac{\D}{i}} \loss(x) dx \right) \\ &\le V(\p_a), \end{align} where in the last step we used the fact that $\left( \int_{0}^{\frac{\D}{i}} \loss(x) dx - \int_{k\frac{\D}{i}}^{(k+1)\frac{\D}{i}} \loss(x) dx \right) \le 0$, since $\loss(\cdot)$ is a monotonically increasing function for $x \ge 0$. Therefore, for given $i \in \N$, we only need to consider $\p \in \pa$ with density sequence $\{a_0,a_1,\dots,a_n,\dots\}$ satisfying $\frac{a_0}{a_i} = e^{\e}$. Next, we argue that among all probability distributions $\p \in \pa$ with density sequence $\{a_0,a_1,\dots,a_n,\dots\}$ satisfying $\frac{a_0}{a_i} = e^{\e}$, we only need to consider those probability distributions with density sequence also satisfying $\frac{a_1}{a_{i+1}} = e^{\e}$.
Given $\p_a \in \pa$ with density sequence $\{a_0,a_1,\dots,a_n,\dots\}$ satisfying $\frac{a_0}{a_i} = e^{\e}$ and $\frac{a_1}{a_{i+1}} < e^{\e}$, we can construct a new probability distribution $\p_b \in \pa$ with density sequence $\{b_0,b_1,\dots,b_n,\dots\}$ satisfying \begin{align} \frac{b_0}{b_i} &= e^{\e}, \\ \frac{b_1}{b_{i+1}} &= e^{\e}, \end{align} and $V(\p_a) \ge V(\p_b)$. First, it is easy to see that $a_1$ is strictly less than $a_0$: if $a_0 = a_1$, then $\frac{a_1}{a_{i+1}} = \frac{a_0}{a_{i+1}} \ge \frac{a_0}{a_i} = e^{\e}$, contradicting $\frac{a_1}{a_{i+1}} < e^{\e}$. Then we construct a new density sequence by increasing $a_1$ and decreasing $a_{i+1}$. More precisely, we define a new sequence $\{b_0,b_1,\dots,b_n,\dots\}$ as \begin{align} b_k &= a_k, \forall k \neq 1, k \neq i+1, \\ b_1 &= a_1 + \delta, \\ b_{i+1} &= a_{i+1} - \delta, \end{align} where $\delta = \frac{e^{\e}a_{i+1} - a_1}{1 + e^{\e}}$ and thus $\frac{b_1}{b_{i+1}} = e^{\e}$. It is easy to verify that $\{b_0,b_1,\dots,b_n,\dots\}$ is a valid probability density sequence and the corresponding probability distribution $\p_b$ satisfies the differential privacy constraint \eqref{eqn:dpconstraintfinal}. Moreover, $V(\p_a) \ge V(\p_b)$, since the construction shifts probability mass from the interval $[(i+1)\frac{\D}{i}, (i+2)\frac{\D}{i})$ to the interval $[\frac{\D}{i}, 2\frac{\D}{i})$, which is closer to the origin, and $\loss(\cdot)$ is monotonically increasing for $x \ge 0$. Therefore, we only need to consider $\p \in \pa$ with density sequences $\{a_0,a_1,\dots,a_n,\dots\}$ satisfying $\frac{a_0}{a_i} = e^{\e}$ and $\frac{a_1}{a_{i+1}} = e^{\e}$. Using the same argument, we can show that we only need to consider $\p \in \pa$ with density sequences $\{a_0,a_1,\dots,a_n,\dots\}$ satisfying \begin{align} \frac{a_k}{a_{i+k}} &= e^{\e}, \forall k \ge 0. \end{align} Therefore, \begin{align} V^* = \inf_{ \p \in \cup_{i=1}^{\infty} \pb} \int_{x \in \R} \loss(x) \p(dx). \end{align} \end{IEEEproof} Due to Lemma \ref{lem:pd}, we only need to consider probability distributions with symmetric, monotonically decreasing (for $x \ge 0$), and periodically decaying piecewise constant probability density functions.
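The first scaling step in the proof above (pushing $\frac{a_0}{a_i}$ up to $e^{\e}$) can be checked numerically. The sketch below is our own illustration, with a geometric density sequence of our choosing standing in for a member of $\pa$ (so that the tail sums are exact); it confirms that the stated $\delta$ and $\delta'$ give $\frac{b_0}{b_i} = e^{\e}$ while preserving the total mass $\sum_{k \ge 0} a_k = \frac{i}{2\D}$.

```python
import numpy as np

# Our own numerical check (illustration only) of the scaling step: starting
# from a monotone sequence with a_0 / a_i < e^eps, the choice of delta and
# delta' pushes the ratio b_0 / b_i up to exactly e^eps while keeping the
# half-line mass sum_k a_k = i / (2 * Delta) unchanged.
eps, i, Delta = np.log(2.0), 3, 1.0
t = i / (2 * Delta)               # = sum_{k>=0} a_k for a valid density sequence
r = 0.85                          # geometric decay; a_0 / a_i = r**(-i) < e^eps
a0 = t * (1 - r)                  # so that sum_{k>=0} a0 * r**k = t
ai = a0 * r**i
assert a0 / ai < np.exp(eps)

delta  = t / ((t - a0) * np.exp(-eps) * a0 / ai + a0) - 1
deltap = a0 * delta / (t - a0)
b0 = a0 * (1 + delta)             # a_0 scaled up
bi = ai * (1 - deltap)            # a_i (like every a_k, k >= 1) scaled down

assert delta > 0 and deltap > 0
assert np.isclose(b0 / bi, np.exp(eps))              # ratio pushed to e^eps
assert np.isclose(b0 + (1 - deltap) * (t - a0), t)   # total mass preserved
```

The geometric sequence is chosen only so that the infinite sums have closed forms; the identities verified here hold for any admissible sequence by the algebra in the proof.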
Because of the symmetry and periodic-decay properties, for this class of probability distributions, the probability density function over $\R$ is completely determined by the probability density function over the interval $[0, \D)$. Next, we study what the optimal probability density function should be over the interval $[0,\D)$. It turns out that the optimal probability density function over the interval $[0,\D)$ is a step function. We use the following three steps to prove this result. \subsection{Step 6} \begin{lemma}\label{lem:ratio} Consider a probability distribution $\p_a \in \pb$ ($i \ge 2$) with density sequence $\{a_0,a_1,\dots, a_n,\dots\}$, and $\frac{a_0}{a_{i-1}} < e^{\e}$. Then there exists a probability distribution $\p_b \in \pb$ with density sequence $\{b_0,b_1,\dots, b_n,\dots\}$ such that $\frac{b_0}{b_{i-1}} = e^{\e}$, and \begin{align} V(\p_b) \le V(\p_a). \end{align} \end{lemma} \begin{IEEEproof} For each $ 0 \le k \le (i-1)$, define \begin{align} w_k \triangleq \sum_{j=0}^{+\infty} e^{-j\e} \int_{(j + \frac{k}{i})\D }^{(j + \frac{k+1}{i})\D} \loss (x) d x. \label{eqn:sumw} \end{align} Since $\loss(\cdot)$ satisfies Property \ref{property2} and $V^* < \infty$, it is easy to show that the series in \eqref{eqn:sumw} converges, and thus $w_k$ is well defined for all $ 0 \le k \le (i-1)$. In addition, it is easy to see \begin{align} w_0 \le w_1 \le w_2 \le \cdots \le w_{i-1}, \end{align} since $\loss (x)$ is a monotonically increasing function when $x \ge 0 $. Then \begin{align} V(\p_a) = \int_{x \in \R} \loss (x) \p_a(dx) = 2 \sum_{k=0}^{i-1} w_k a_k. \end{align} Since $\frac{a_0}{a_{i-1}} < e^{\e}$, we can scale $a_0$ up and scale $\{a_1,\dots,a_{i-1}\}$ down to derive a new valid probability density function with smaller cost.
More precisely, define a new probability measure $\p_b \in \pb$ with density sequence $\{b_0,b_1,\dots, b_n,\dots\}$ via \begin{align} b_0 &\triangleq \gamma a_0, \\ b_k &\triangleq \gamma' a_k, \forall 1 \le k \le i-1, \end{align} for some $\gamma >1 $ and $\gamma' <1$ such that \begin{align} \frac{b_0}{b_{i-1}} = e^{\e}. \end{align} To make $\{b_0,b_1,\dots, b_n,\dots\}$ a valid density sequence, i.e., to make the integral of the corresponding probability density function over $\R$ equal to 1, we must have \begin{align} \sum_{k=0}^{i-1} b_k = \sum_{k=0}^{i-1} a_k = \frac{1-e^{-\e}}{2} \frac{i}{\D}. \end{align} Define $t \triangleq \frac{1-e^{-\e}}{2} \frac{i}{\D}$, then we have two linear equations in $\gamma$ and $\gamma'$: \begin{align} \gamma a_0 &= e^{\e} \gamma' a_{i-1} \label{eqn:g1} \\ \gamma a_0 + \gamma' (t - a_0) &= t. \label{eqn:g2} \end{align} From \eqref{eqn:g1} and \eqref{eqn:g2}, we can easily get \begin{align} \gamma &= \frac{e^{\e} t a_{i-1}}{a_0 (t-a_0+e^{\e}a_{i-1})} >1 \\ \gamma' &= \frac{t}{ t-a_0+e^{\e}a_{i-1}} <1. \end{align} Then we can verify that $V(\p_a) \ge V(\p_b)$. Indeed, \begin{align} &\; V(\p_a) - V(\p_b) \\ =&\; \int_{x \in \R} \loss (x) \p_a(dx) - \int_{x \in \R} \loss (x) \p_b(dx) \\ = &\; 2 \sum_{k=0}^{i-1} w_k a_k - 2 \sum_{k=0}^{i-1} w_k b_k \\ = &\; 2 \left((1 - \gamma) w_0 a_0 + (1 - \gamma') \sum_{k=1}^{i-1} w_k a_k \right) \\ \ge &\; 2 \left((1 - \gamma) w_0 a_0 + (1 - \gamma') \sum_{k=1}^{i-1} w_0 a_k \right) \\ = &\; 2 \left((1 - \gamma) w_0 a_0 + (1 - \gamma') w_0 (t - a_0 ) \right) \\ = &\; 2 w_0 \left(a_0 - \frac{a_{i-1}e^{\e}t}{t - a_0 + e^{\e}a_{i-1}} + (t - a_0)\frac{-a_0 + e^{\e}a_{i-1}}{t - a_0 + e^{\e}a_{i-1}} \right) \\ = &\; 0. \end{align} This completes the proof. \end{IEEEproof} Therefore, due to Lemma \ref{lem:ratio}, for all $i \ge 2$, we only need to consider probability distributions $\p \in \pb$ with density sequence $\{a_0,a_1,\dots, a_n,\dots\}$ satisfying $\frac{a_0}{a_{i-1}} = e^{\e}$.
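The closed forms for $\gamma$ and $\gamma'$ can be verified numerically. The following sketch is our own illustration with example values of our choosing ($i = 3$, $\e = \ln 2$); it checks the two linear relations, $\gamma > 1 > \gamma'$, the normalization, and that the cost $2\sum_k w_k a_k$ does not increase for any nondecreasing weights $w_k$.

```python
import numpy as np

# Our own numerical check of the closed forms for gamma and gamma' in Step 6;
# the sequence a below is an example of our choosing.
eps, i, Delta = np.log(2.0), 3, 1.0
t = (1 - np.exp(-eps)) / 2 * i / Delta   # = sum_{k=0}^{i-1} a_k
a = np.array([0.30, 0.25, 0.20])         # monotone, sums to t, a_0 / a_{i-1} < e^eps
assert np.isclose(a.sum(), t)

g  = np.exp(eps) * t * a[-1] / (a[0] * (t - a[0] + np.exp(eps) * a[-1]))
gp = t / (t - a[0] + np.exp(eps) * a[-1])
assert g > 1 and gp < 1

b = np.concatenate(([g * a[0]], gp * a[1:]))
assert np.isclose(b[0] / b[-1], np.exp(eps))   # b_0 / b_{i-1} = e^eps
assert np.isclose(b.sum(), t)                  # normalization preserved

# with any nondecreasing weights w_0 <= ... <= w_{i-1} (playing the role of
# the w_k), the cost 2 * sum_k w_k * b_k does not exceed 2 * sum_k w_k * a_k
w = np.array([1.0, 2.0, 3.0])
assert w @ b <= w @ a + 1e-12
```

Since the $w_k$ enter the cost only linearly, checking an arbitrary nondecreasing weight vector mirrors the inequality chain in the proof.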
More precisely, define \begin{align} \pc = \{ \p \in \pb | \p \; \text{has density sequence} \; \{a_0,a_1,\dots, a_n,\dots\} \; \text{satisfying} \; \frac{a_0}{a_{i-1}} = e^{\e} \}. \end{align} Then due to Lemma \ref{lem:ratio}, \begin{lemma}\label{lem:ratiolem} \begin{align} V^* = \inf_{\p \in \cup_{i=3}^{\infty}\pc} \int_{x \in \R} \loss (x) \p(dx). \end{align} \end{lemma} \subsection{Step 7} Next, we argue that for each probability distribution $\p \in \pc$ ($i \ge 3$) with density sequence $\{a_0,a_1,\dots, a_n,\dots\}$, we can assume that there exists an integer $1 \le k \le (i-2)$, such that \begin{align} a_j &= a_0, \forall 0 \le j <k, \label{eqn:property1} \\ a_j &= a_{i-1}, \forall k< j <i. \label{eqn:property2} \end{align} More precisely, \begin{lemma}\label{lem:binary} Consider a probability distribution $\p_a \in \pc$ ($i \ge 3$) with density sequence $\{a_0,a_1,\dots,a_n,\dots\}$. Then there exists a probability distribution $\p_b \in \pc$ with density sequence $\{b_0,b_1,\dots,b_n,\dots\}$ such that there exists an integer $1 \le k \le (i-2)$ with \begin{align} b_j &= a_0, \forall \; 0 \le j <k, \label{eqn:binary1}\\ b_j &= a_{i-1}, \forall \; k< j <i, \label{eqn:binary2} \end{align} and \begin{align} V (\p_b) \le V (\p_a). \label{eqn:binary3} \end{align} \end{lemma} \begin{IEEEproof} If there exists an integer $1 \le k \le (i-2)$ such that \begin{align} a_j &= a_0, \forall \; 0 \le j <k, \\ a_j &= a_{i-1}, \forall \; k< j <i, \end{align} then we can set $\p_b = \p_a$. Otherwise, let $k_1$ be the smallest integer in $\{0,1,2,\dots,i-1\}$ such that \begin{align} a_{k_1} \neq a_0, \end{align} and let $k_2$ be the largest integer in $\{0,1,2,\dots, i-1\}$ such that \begin{align} a_{k_2} \neq a_{i-1}. \end{align} It is easy to see that $k_1 < k_2$. Then we can increase $a_{k_1}$ and decrease $a_{k_2}$ simultaneously by the same amount to derive a new probability distribution $\p_b \in \pc$ with smaller cost.
Indeed, if \begin{align} a_0 - a_{k_1} \leq a_{k_2} - a_{i-1}, \end{align} then consider a probability distribution $\p_b \in \pc$ with density sequence $\{b_0,b_1,\dots,b_{i-1},\dots\}$ defined as \begin{align} b_j &= a_0, \forall 0 \le j \le k_1, \\ b_j &= a_j, \forall k_1 < j \le k_2-1, \\ b_{k_2} &= a_{k_2} - (a_0 - a_{k_1}), \\ b_j &= a_j, \forall k_2 < j \le i-1. \end{align} We can verify that $V (\p_a) \ge V (\p_b)$ via \begin{align} & \; V (\p_b) - V (\p_a) \\ =& \; \int_{x \in \R} \loss (x) \p_b(dx) - \int_{x \in \R} \loss (x) \p_a(dx) \\ = & \; 2 (w_{k_1} b_{k_1} + w_{k_2} b_{k_2} ) - 2 ( w_{k_1} a_{k_1} + w_{k_2} a_{k_2}) \\ = & \; 2 w_{k_1} (a_0 - a_{k_1}) + 2 w_{k_2} ( a_{k_2} - (a_0 - a_{k_1}) - a_{k_2} ) \\ = & \; 2 (a_0 - a_{k_1}) ( w_{k_1} - w_{k_2}) \\ \le & \; 0 , \end{align} where $w_{k_1}$ and $w_{k_2}$ are defined in \eqref{eqn:sumw}, and the last step uses $k_1 < k_2$, so that $w_{k_1} \le w_{k_2}$. If $a_0 - a_{k_1} \ge a_{k_2} - a_{i-1}$, then accordingly we can construct $\p_b \in \pc$ by setting \begin{align} b_j &= a_0, \forall 0 \le j < k_1, \\ b_{k_1} &= a_{k_1} + (a_{k_2} - a_{i-1}), \\ b_j &= a_j, \forall k_1 < j \le k_2-1, \\ b_{j} &= a_{i-1}, \forall k_2 \le j \le i-1. \end{align} Similarly, it is easy to verify that $V (\p_a) \ge V (\p_b)$. Continuing in this way, we eventually obtain a probability distribution $\p_b \in \pc$ with density sequence $\{b_0,b_1,\dots, b_n,\dots\}$ such that \eqref{eqn:binary1}, \eqref{eqn:binary2} and \eqref{eqn:binary3} hold. This completes the proof. \end{IEEEproof} Define \begin{align} \pd = \{ \p \in \pc \ |\ \p \; \text{has density sequence} \; \{a_0,a_1,\dots,a_n,\dots\} \; \text{satisfying} \; \eqref{eqn:binary1} \; \text{and} \; \eqref{eqn:binary2} \; \text{for some}\; 1 \le k \le (i-2) \}. \end{align} Then due to Lemma \ref{lem:binary}, \begin{lemma}\label{lem:stepfunc} \begin{align} V^* = \inf_{\p \in \cup_{i=3}^{\infty}\pd} \int_{ x \in \R} \loss (x) \p(dx) .
\end{align} \end{lemma} \subsection{Step 8} \begin{IEEEproof}[Proof of Theorem \ref{thm:main}] Since $\{ \p_{\gamma} | \gamma \in [0,1] \} \subseteq \sP$, we have \begin{align} V^* = \inf_{\p \in \sP} \int_{x \in \R} \loss (x) \p(dx) \le \inf_{\gamma \in [0,1]} \int_{x \in \R} \loss (x) \p_{\gamma}(dx) . \end{align} We prove the reverse direction in the following. We first prove that for any $\p \in \pd$ ($i \ge 3$), there exists $\gamma \in [0,1]$ such that \begin{align} \int_{x \in \R} \loss (x) \p_{\gamma}(dx) \le \int_{x \in \R} \loss (x) \p(dx). \end{align} Consider the density sequence $\{a_0,a_1,\dots, a_n,\dots\}$ of $\p$. Since $\p \in \pd$, there exists an integer $ 0 \le k \le i-2 $ such that \begin{align} a_j &= a_0, \forall 0 \le j< k, \\ a_j &= a_0 e^{-\e}, \forall k < j \le i-1. \end{align} Let \begin{align} {\gamma'} \triangleq \frac{\frac{1-e^{-\e}}{2\D} - a_0 e^{-\e}}{a_0(1-e^{-\e})} \in [0,1]. \end{align} Then $a({\gamma'}) = a_0$. It is easy to verify that \begin{align} k \frac{\D}{i} \le {\gamma'} \D \le (k+1) \frac{\D}{i}. \end{align} The probability density functions of $\p$ and $\p_{{\gamma'}}$ are the same when $ x \in [0, \frac{k}{i}\D) \cup [\frac{k+1}{i}\D, \D)$. Since the integral of both probability density functions over $[0, \D)$ is $\frac{1 - e^{-\e}}{2}$ due to the periodically decaying property, we have \begin{align} a_k \frac{\D}{i} = a_0 ({\gamma'} - \frac{k}{i}) \D + e^{-\e}a_0 ( \frac{k+1}{i} - {\gamma'})\D. \end{align} Define $\beta \triangleq i ( {\gamma'} - \frac{k}{i} ) \in [0,1]$. Then \begin{align} a_k = \beta a_0 + (1 - \beta) e^{-\e}a_0. \end{align} Define \begin{align} w_k^{(1)} &\triangleq \sum_{j=0}^{+\infty} e^{-j\e} \int_{(j + \frac{k}{i})\D }^{(j + {\gamma'}) \D} \loss (x) d x, \label{eqn:sumw1} \\ w_k^{(2)} &\triangleq \sum_{j=0}^{+\infty} e^{-j\e} \int_{(j + {\gamma'}) \D }^{(j + \frac{k+1}{i} )\D} \loss (x) d x. \label{eqn:sumw2} \end{align} Note that $w_k = w_k^{(1)} + w_k^{(2)}$.
Since $\loss(x)$ is monotonically increasing for $x \ge 0$, for each $j \ge 0$ the average value of $\loss$ over $[(j + {\gamma'})\D, (j + \frac{k+1}{i})\D)$ is at least its average value over $[(j + \frac{k}{i})\D, (j + {\gamma'})\D)$, and hence \begin{align} \frac{w_k^{(2)}}{w_k^{(1)}} \ge \frac{(j + \frac{k+1}{i} )\D - (j + {\gamma'}) \D }{(j + {\gamma'}) \D - (j + \frac{k}{i})\D} = \frac{\frac{k+1}{i} - {\gamma'}}{{\gamma'} - \frac{k}{i} }. \end{align} Therefore, \begin{align} & \int_{x \in \R} \loss (x) \p(dx) - \int_{x \in \R} \loss (x) \p_{{\gamma'}}(dx) \\ = & 2 w_{k} a_{k} - 2 \left( w_k^{(1)} a_0 + w_k^{(2)} a_0 e^{-\e}\right) \\ = & 2 \left(w_k^{(1)} + w_k^{(2)}\right) a_k - 2 \left( w_k^{(1)} a_0 + w_k^{(2)} a_0 e^{-\e}\right) \\ = & 2 ( a_k - a_0 e^{-\e} )w_k^{(2)} - 2 (a_0 - a_k ) w_k^{(1)} . \end{align} Since \begin{align} \frac{a_k - a_0 e^{-\e} }{a_0 - a_k} &= \frac{ \beta (a_0 - a_0 e^{-\e})}{(1-\beta) (a_0 - a_0 e^{-\e}) } \\ &= \frac{\beta}{1 - \beta} \\ &= \frac{{\gamma'} - \frac{k}{i}}{\frac{k+1}{i} - {\gamma'}} \\ &\ge \frac{w_k^{(1)}}{w_k^{(2)}}, \end{align} we have \begin{align} & \int_{x \in \R} \loss (x) \p(dx) - \int_{x \in \R} \loss (x) \p_{{\gamma'}}(dx) \\ = & 2 ( a_k - a_0 e^{-\e} )w_k^{(2)} - 2 (a_0 - a_k ) w_k^{(1)} \\ \ge & 0. \end{align} Therefore, \begin{align} V^* &= \inf_{\p \in \cup_{i=3}^{\infty}\pd} \int_{x \in \R} \loss (x) \p(dx) \\ & \ge \inf_{\gamma \in [0,1]} \int_{x \in \R} \loss (x) \p_{\gamma}(dx) . \end{align} We conclude \begin{align} V^* = \inf_{\p \in \sP} \int_{x \in \R} \loss (x) \p(dx) = \inf_{\gamma \in [0,1]} \int_{x \in \R} \loss (x) \p_{\gamma}(dx) = \inf_{\gamma \in [0,1]} \int_{x \in \R} \loss (x) f_{\gamma}(x) dx. \end{align} This completes the proof of Theorem \ref{thm:main}. \end{IEEEproof}
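As a final sanity check (our own illustration, not part of the proof), one can compare numerically the cost of a truncated staircase density against the corresponding member of $\pd$ for the loss $\loss(x) = |x|$, using the normalization $a(\gamma) = \frac{1-e^{-\e}}{2\D(\gamma + (1-\gamma)e^{-\e})}$ implied by the computations above; all parameter values below are example choices of ours.

```python
import numpy as np

# Our own numerical comparison for loss(x) = |x|: a truncated staircase
# density f_gamma versus the member of P_d with the same a_0 and intermediate
# level a_k = beta*a0 + (1-beta)*exp(-eps)*a0, as in Step 8.
eps, Delta, i, k, J = np.log(2.0), 1.0, 4, 1, 40   # J periods kept in the tail

def cost(edges, levels):
    # V = 2 * sum over pieces of level * int_piece x dx, for loss(x) = |x|
    e, l = np.asarray(edges), np.asarray(levels)
    return 2.0 * np.sum(l * (e[1:] ** 2 - e[:-1] ** 2) / 2.0)

def a_of(gamma):   # normalization: the mass of [0, Delta) equals (1 - e^-eps)/2
    return (1 - np.exp(-eps)) / (2 * Delta * (gamma + (1 - gamma) * np.exp(-eps)))

def staircase(gamma):
    a, edges, levels = a_of(gamma), [0.0], []
    for j in range(J):
        edges += [(j + gamma) * Delta, (j + 1) * Delta]
        levels += [a * np.exp(-j * eps), a * np.exp(-(j + 1) * eps)]
    return edges, levels

def pd_member(gamma):                   # requires k/i <= gamma <= (k+1)/i
    a0, beta = a_of(gamma), i * gamma - k
    c = ([a0] * k + [beta * a0 + (1 - beta) * np.exp(-eps) * a0]
         + [np.exp(-eps) * a0] * (i - k - 1))
    edges, levels = [0.0], []
    for j in range(J):
        for m in range(i):
            edges.append((j + (m + 1) / i) * Delta)
            levels.append(np.exp(-j * eps) * c[m])
    return edges, levels

gamma = (k + 0.6) / i                   # a gamma' inside [k/i, (k+1)/i]
V_stair, V_pd = cost(*staircase(gamma)), cost(*pd_member(gamma))
assert V_stair <= V_pd                  # Step 8: the staircase does no worse
```

The two densities agree outside the $k$-th piece of every period and carry the same mass on that piece, so the comparison isolates exactly the inequality established in Step 8.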
\section*{Introduction.} Mirror symmetry got its start in 1989 with work of Greene and Plesser \cite{GrPl} and Candelas, Lynker and Schimmrigk \cite{CLS}. These two works first observed the existence of pairs of Calabi-Yau manifolds exhibiting an exchange of Hodge numbers. Recall that by Yau's proof of the Calabi conjecture \cite{Yau}, a Calabi-Yau manifold is an $n$-dimensional complex manifold $X$ with a nowhere vanishing holomorphic $n$-form $\Omega$ and a Ricci-flat K\"ahler metric with K\"ahler form $\omega$. Ricci-flatness is equivalent to $\omega^n=C \Omega\wedge\bar\Omega$ for a constant $C$. The most famous example of a Calabi-Yau manifold is a smooth quintic three-fold $X\subseteq\PP^4$. The Hodge numbers of $X$ are $h^{1,1}(X)=1$ and $h^{1,2}(X)=101$, with topological Euler characteristic $-200$. The original construction of Greene and Plesser gave a mirror to $X$, as follows. Let $Y\subseteq \PP^4$ be given by the equation \[ x_0^5+\cdots+x_4^5=0, \] and let \[ G=\{(a_0,\ldots,a_4)\in\ZZ_5^5\,|\, \sum_i a_i=0\}. \] An element $(a_0,\ldots,a_4)\in G$ acts on $Y$ by \[ (x_0,\ldots,x_4)\mapsto (\xi^{a_0}x_0,\ldots,\xi^{a_4}x_4) \] for $\xi$ a primitive fifth root of unity. The quotient $Y/G$ is highly singular, but there is a resolution of singularities $\check X\rightarrow Y/G$ such that $\check X$ is also Calabi-Yau, and one finds $h^{1,1}(\check X)=101$ and $h^{1,2}(\check X)=1$, so that $\check X$ has topological Euler characteristic $200$. The relationship between these two Calabi-Yau manifolds proved to be much deeper than just this exchange of Hodge numbers. Pioneering work of Candelas, de la Ossa, Greene and Parkes \cite{COGP} performed an amazing calculation, following string-theoretic predictions which suggested that certain enumerative calculations on $X$ should give the same answer as certain period calculations on $\check X$. 
The calculations on $\check X$, though subtle, could be carried out: these involved integrals of the holomorphic form on $\check X$ over three-cycles as the complex structure on $\check X$ is varied. On the other hand, the corresponding calculations on $X$ involved numbers of rational curves on $X$ of each degree. For example, the number of lines on a generic quintic threefold is $2875$ and the number of conics is $609250$. String theory thus gave predictions for these numbers for every degree, an astonishing feat given that most of these numbers seemed far beyond the reach of algebraic geometry at the time. More generally, string theory introduced the concepts of the \emph{$A$-model} and \emph{$B$-model}. The $A$-model involves the symplectic geometry of Calabi-Yau manifolds. Properly defined, the counts of rational curves are in fact symplectic invariants, now known as Gromov-Witten invariants. The $B$-model, on the other hand, involves the complex geometry of Calabi-Yau manifolds. Holomorphic forms of course depend on the complex structure, so the period calculations mentioned above can be thought of as $B$-model calculations. Ultimately, string theory predicts an isomorphism between the $A$-model of a Calabi-Yau manifold $X$ and the $B$-model of its mirror, $\check X$. The equality of numerical invariants is then a consequence of this isomorphism. Proofs of these string-theoretic predictions of curve-counting invariants were given by Givental \cite{Givental} and Lian, Liu and Yau \cite{LLY}, with successively simpler proofs by many other researchers. However, all the proofs relied on the geometry of the ambient space $\PP^4$ in which the quintic is contained. Roughly speaking, one considers all rational curves in $\PP^4$, and tries to understand how to compute how many of these are contained in a given quintic hypersurface. 
This raised the question: \emph{is there some underlying intrinsic geometry to mirror symmetry?} Historically the first approach to an intrinsic formulation of mirror symmetry is Kontsevich's Homological Mirror Symmetry conjecture, stated in 1994 in \cite{KHMS}. This made mathematically precise the notion of an isomorphism between the $A$- and $B$-models. The homological mirror symmetry conjecture posits an isomorphism between two categories, the Fukaya category of Lagrangian submanifolds of $X$ (the $A$-model) and the derived category of coherent sheaves on the mirror $\check X$ (the $B$-model). Morally, this states that the symplectic geometry of $X$ is the same as the complex geometry of $\check X$. At the time this conjecture was made, however, there was no clear idea as to how such an isomorphism might be realised, nor did this conjecture state how to construct mirror pairs. The second approach is due to Strominger, Yau and Zaslow in their 1996 paper \cite{SYZ}. They made a remarkable proposal, based on new ideas in string theory, which gave a very concrete geometric interpretation for mirror symmetry. Let me summarize, very roughly, the physical argument they used here. Developments in string theory in the mid-1990s introduced the notion of \emph{Dirichlet branes}, or $D$-branes. These are submanifolds of space-time, with some additional data, which serve as boundary conditions for open strings, i.e., we allow open strings to propagate with their endpoints constrained to lie on a $D$-brane. Remembering that space-time, according to string theory, looks like $\RR^{1,3}\times X$, where $\RR^{1,3}$ is ordinary space-time and $X$ is a Calabi-Yau three-fold, we can split a $D$-brane into a product of a submanifold of $\RR^{1,3}$ and one on $X$. 
It turned out, simplifying a great deal, that there are two particular types of submanifolds on $X$ of interest: \emph{holomorphic} $D$-branes, i.e., holomorphic submanifolds with a holomorphic line bundle, and \emph{special Lagrangian} $D$-branes, which are \emph{special Lagrangian submanifolds} with flat $U(1)$-bundle: \begin{definition} Let $X$ be an $n$-dimensional Calabi-Yau manifold with $\omega$ the K\"ahler form of a Ricci-flat metric on $X$ and $\Omega$ a nowhere vanishing holomorphic $n$-form. Then a submanifold $M\subseteq X$ is \emph{special Lagrangian} if it is Lagrangian, i.e., $\dim_{\RR} M=\dim_{\CC} X$ and $\omega|_M=0$, and in addition, $\Im\Omega|_M=0$. \end{definition} Holomorphic $D$-branes can be viewed as $B$-model objects, and special Lagrangian $D$-branes as $A$-model objects. The isomorphism between the $B$-model on $X$ and the $A$-model on $\check X$ then suggests that the moduli space of holomorphic $D$-branes on $X$ should be isomorphic to the moduli space of special Lagrangian $D$-branes on $\check X$. (This is now seen as a physical manifestation of the homological mirror symmetry conjecture). Now $X$ itself is the moduli space of points on $X$. So each point on $X$ should correspond to a pair $(M,\nabla)$, where $M\subseteq\check X$ is a special Lagrangian submanifold and $\nabla$ is a flat $U(1)$-connection on $M$. A theorem of McLean \cite{McLean} tells us that the tangent space to the moduli space of special Lagrangian deformations of a special Lagrangian submanifold $M\subseteq \check X$ is $H^1(M,\RR)$. Of course, the moduli space of flat $U(1)$-connections modulo gauge equivalence on $M$ is the torus $H^1(M,\RR)/H^1(M,\ZZ)$. In order for this moduli space to be of the correct dimension, we need $\dim H^1(M,\RR)=n$, the complex dimension of $X$. This suggests that $X$ consists of a family of tori which are dual to a family of special Lagrangian tori on $\check X$. 
An elaboration of this argument yields the following conjecture: \begin{conjecture} \emph{The Strominger-Yau-Zaslow conjecture}. If $X$ and $\check X$ are a mirror pair of Calabi-Yau $n$-folds, then there exist fibrations $f:X\rightarrow B$ and $\check f:\check X\rightarrow B$ whose fibres are special Lagrangian, with general fibre an $n$-torus. Furthermore, these fibrations are dual, in the sense that canonically $X_b=H^1(\check X_b,\RR/\ZZ)$ and $\check X_b=H^1(X_b,\RR/\ZZ)$ whenever $X_b$ and $\check X_b$ are non-singular tori. \end{conjecture} This conjecture motivated a great deal of work in the five years following its introduction in 1996, some of which will be summarized in the following sections. There was a certain amount of success, as we shall see, with the conjecture proved for some cases, including the quintic three-fold, at the topological level. Further, the conjecture gave a solid framework for thinking about mirror symmetry at an intuitive level. However, work of Dominic Joyce demonstrated that the conjecture was unlikely to be literally true. Nevertheless, it is possible that weaker limiting forms of the conjecture still hold. In the first several sections of this survey, I will clarify the conjecture, review what is known about it, and state a weaker form which seems accessible. Most importantly, I will explain how the SYZ conjecture leads to the study of affine manifolds (manifolds whose transition functions are affine linear) and hence to an algebro-geometric interpretation of the conjecture, developed by Bernd Siebert and myself. This removes the hard analysis, and gives a powerful framework for understanding mirror symmetry at a conceptual level. The bulk of the paper is devoted to outlining this framework as developed over the last ten years. I explain how affine manifolds are related to degenerations of Calabi-Yau manifolds. Once one begins to consider degenerations, the log geometry of K.\ Kato and Fontaine--Illusie comes into the picture.
Conjecturally, the base of the SYZ fibration incorporates key combinatorial information about log structures on degenerations of Calabi-Yau manifolds. Log geometry then gives a connection with tropical geometry and log Gromov-Witten theory, which theoretically allows a description of $A$-model curve counting using tropical geometry. On the mirror side, we explain how again tropical geometry is used to describe complex structures. This identifies tropical geometry as the geometry underlying both sides of mirror symmetry, and guides us towards a conceptual understanding of mirror symmetry. We end with a description of recent work with Pandharipande and Siebert \cite{GPS} which provides a snapshot of the relationship between the two sides of mirror symmetry. \medskip I would like to thank the organizers of Current Developments in Mathematics 2012 for inviting me to take part in the conference, and Bernd Siebert, my collaborator on much of the work described here. Some of the material appearing in this article was first published in my article ``The Strominger-Yau-Zaslow conjecture: From torus fibrations to degenerations," in \emph{Algebraic Geometry: Seattle 2005}, edited by D. Abramovich, et al., Proceedings of Symposia in Pure Mathematics Vol. 80, part 1, 149-192, published by the American Mathematical Society. (c) 2009 by the American Mathematical Society. Finally, I would like to thank Lori Lejeune and the Clay Institute for Figure \ref{vanishingdisk}. \bigskip \section{Moduli of special Lagrangian submanifolds} \label{modulisection} The first step in understanding the SYZ conjecture is to examine the structures which arise on the base of a special Lagrangian fibration. These structures arise from McLean's theorem on the moduli space of special Lagrangian submanifolds \cite{McLean}, and these structures and their relationships were explained by Hitchin in \cite{Hit}. We outline some of these ideas here. 
McLean's theorem says that the moduli space of deformations of a compact special Lagrangian submanifold of a compact Calabi-Yau manifold $X$ is unobstructed. Further, the tangent space at the point of moduli space corresponding to a special Lagrangian $M\subseteq X$ is canonically isomorphic to the space of harmonic $1$-forms on $M$. This isomorphism is seen explicitly as follows. Let $\nu\in\Gamma(M,N_{M/X})$ be a normal vector field to $M$ in $X$. Then the contractions $(\iota(\nu)\omega)|_M$ and $(\iota(\nu)\Im\Omega)|_M$ are both well-defined forms on $M$: one needs to lift $\nu$ to a vector field on a neighbourhood of $M$, but the choice of lift is irrelevant because $\omega$ and $\Im\Omega$ restrict to zero on $M$. McLean shows that if $M$ is special Lagrangian then \[ \iota(\nu)\Im\Omega=-*\iota(\nu)\omega, \] where $*$ denotes the Hodge star operator on $M$. Furthermore, $\nu$ corresponds to an infinitesimal deformation preserving the special Lagrangian condition if and only if $d(\iota(\nu)\omega) =d(\iota(\nu)\Im\Omega)=0$. This gives the correspondence between harmonic $1$-forms and infinitesimal special Lagrangian deformations. Let $f:X\rightarrow B$ be a special Lagrangian fibration with torus fibres, and assume for now that all fibres of $f$ are non-singular. Then we obtain three structures on $B$, namely two affine structures and a metric, as we shall now see. \begin{definition} \label{affine} Let $B$ be an $n$-dimensional manifold. An {\it affine structure} on $B$ is given by an atlas $\{(U_i,\psi_i)\}$ of coordinate charts $\psi_i:U_i\rightarrow \RR^n$, whose transition functions $\psi_i\circ\psi_j^{-1}$ lie in ${\rm Aff}(\RR^n)$. We say the affine structure is \emph{tropical} if the transition functions lie in $\RR^n\rtimes GL(\ZZ^n)$, i.e., have integral linear part. We say the affine structure is {\it integral} if the transition functions lie in ${\rm Aff}(\ZZ^n)$.
If an affine manifold $B$ carries a Riemannian metric $g$, then we say the metric is \emph{affine K\"ahler} or \emph{Hessian} if $g$ is locally given by $g_{ij}=\partial^2K/\partial y_i\partial y_j$ for some convex function $K$ and $y_1,\ldots,y_n$ affine coordinates. \end{definition} We obtain the three structures as follows: \emph{Affine structure 1.} For a normal vector field $\nu$ to a fibre $X_b$ of $f$, $(\iota(\nu)\omega)|_{X_b}$ is a well-defined $1$-form on $X_b$, and we can compute its periods as follows. Let $U\subseteq B$ be a small open set, and suppose we have submanifolds $\gamma_1,\ldots,\gamma_n\subseteq f^{-1}(U)$ which are families of $1$-cycles over $U$ and such that $\gamma_1\cap X_b,\ldots,\gamma_n\cap X_b$ form a basis for $H_1(X_b,\ZZ)$ for each $b\in U$. Consider the $1$-forms $\omega_1,\ldots,\omega_n$ on $U$ defined by fibrewise integration: \[ \omega_i(\nu)=\int_{X_b\cap\gamma_i} \iota(\nu)\omega, \] for $\nu$ a tangent vector on $B$ at $b$, which we can lift to a normal vector field of $X_b$. We have $\omega_i=f_*(\omega|_{\gamma_i})$, and since $\omega$ is closed, so is $\omega_i$. Thus there are locally defined functions $y_1,\ldots,y_n$ on $U$ with $dy_i=\omega_i$. Furthermore, these functions are well-defined up to the choice of basis of $H_1(X_b,\ZZ)$ and constants. Finally, they give well-defined coordinates, as follows from the fact that $\nu\mapsto \iota(\nu)\omega$ yields an isomorphism of $\T_{B,b}$ with $H^1(X_b,\RR)$ by McLean's theorem. Thus $y_1,\ldots,y_n$ define local coordinates of a tropical affine structure on $B$. \emph{Affine structure 2.} We can play the same trick with $\Im\Omega$: choose submanifolds \[ \Gamma_1,\ldots,\Gamma_n\subseteq f^{-1}(U) \] which are families of $(n-1)$-cycles over $U$ and such that $\Gamma_1\cap X_b,\ldots,\Gamma_n\cap X_b$ form a basis for $H_{n-1}(X_b, \ZZ)$.
We define $\lambda_i$ by $\lambda_i=-f_*(\Im\Omega|_{\Gamma_i})$, or equivalently, \[ \lambda_i(\nu)=-\int_{X_b\cap\Gamma_i} \iota(\nu)\Im\Omega. \] Again $\lambda_1,\ldots,\lambda_n$ are closed $1$-forms, with $\lambda_i=d\check y_i$ locally, and again $\check y_1,\ldots,\check y_n$ are affine coordinates for a tropical affine structure on $B$. \emph{The McLean metric.} The Hodge metric on $H^1(X_b,\RR)$ is given by \[ g(\alpha,\beta)=\int_{X_b} \alpha\wedge *\beta \] for $\alpha$, $\beta$ harmonic $1$-forms, and hence induces a metric on $B$, which can be written as \[ g(\nu_1,\nu_2)=-\int_{X_b}\iota(\nu_1)\omega\wedge \iota(\nu_2)\Im\Omega. \] \medskip A crucial observation of Hitchin \cite{Hit} is that these structures are related by the Legendre transform: \begin{proposition} \label{hessianmetric} Let $y_1,\ldots,y_n$ be local affine coordinates on $B$ with respect to the affine structure induced by $\omega$. Then locally there is a function $K$ on $B$ such that \[ g(\partial/\partial y_i,\partial/\partial y_j)=\partial^2 K/\partial y_i \partial y_j. \] Furthermore, $\check y_i=\partial K/\partial y_i$ form a system of affine coordinates with respect to the affine structure induced by $\Im\Omega$, and if \[ \check K(\check y_1,\ldots,\check y_n)=\sum \check y_i y_i-K(y_1,\ldots,y_n) \] is the Legendre transform of $K$, then \[ y_i=\partial \check K/\partial\check y_i \] and \[ \partial^2\check K/\partial \check y_i\partial \check y_j=g(\partial/\partial\check y_i, \partial/\partial\check y_j). \] \end{proposition} \begin{proof} Take families $\gamma_1,\ldots,\gamma_n,\Gamma_1,\ldots,\Gamma_n$ as above over an open neighbourhood $U$ with the two bases being Poincar\'e dual, i.e., $(\gamma_i\cap X_b)\cdot(\Gamma_j\cap X_b)= \delta_{ij}$ for $b\in U$. Let $\gamma_1^*,\ldots,\gamma_n^*$ and $\Gamma_1^*,\ldots,\Gamma_n^*$ be the dual bases for $\Gamma(U,R^1f_*\ZZ)$ and $\Gamma(U,R^{n-1}f_*\ZZ)$ respectively. 
From the choice of $\gamma_i$'s, we get local coordinates $y_1,\ldots,y_n$ with $dy_i=\omega_i$, so in particular \[ \delta_{ij}=\omega_i(\partial/\partial y_j)=\int_{\gamma_i\cap X_b} \iota(\partial/\partial y_j)\omega, \] hence $\iota(\partial/\partial y_j)\omega$ defines the cohomology class $\gamma_j^*$ in $H^1(X_b,\RR)$. Similarly, let \[ g_{ij}=-\int_{\Gamma_i\cap X_b}\iota(\partial/\partial y_j)\Im \Omega; \] then $-\iota(\partial/\partial y_j)\Im\Omega$ defines the cohomology class $\sum_i g_{ij}\Gamma_i^*$ in $H^{n-1}(X_b,\RR)$, and $\lambda_i=\sum_j g_{ij}dy_j$. Thus \begin{eqnarray*} g(\partial/\partial y_j,\partial/\partial y_k)&=& -\int_{X_b}\iota(\partial/\partial y_j)\omega \wedge \iota(\partial/\partial y_k)\Im\Omega\\ &=&g_{jk}. \end{eqnarray*} On the other hand, let $\check y_1,\ldots,\check y_n$ be coordinates with $d\check y_i=\lambda_i$. Then \[ {\partial\check y_i/\partial y_j}=g_{ij}=g_{ji}={\partial\check y_j/ \partial y_i}, \] so $\sum\check y_i dy_i$ is a closed 1-form. Thus there exists locally a function $K$ such that $\partial K/\partial y_i=\check y_i$ and $\partial^2 K/\partial y_i\partial y_j=g(\partial/\partial y_i, \partial/\partial y_j)$. A simple calculation then confirms that $\partial\check K/\partial \check y_i =y_i$. On the other hand, \begin{eqnarray*} g(\partial/\partial\check y_i,\partial/\partial\check y_j)&=& g\left(\sum_k {\partial y_k\over\partial\check y_i}{\partial\over \partial y_k},\sum_\ell {\partial y_\ell\over\partial\check y_j} {\partial\over\partial y_\ell}\right)\\ &=&\sum_{k,\ell}{\partial y_k\over\partial\check y_i}{\partial y_\ell\over\partial \check y_j} g(\partial/\partial y_k,\partial/\partial y_\ell)\\ &=&\sum_{k,\ell} {\partial y_k\over\partial\check y_i}{\partial y_\ell\over \partial\check y_j}{\partial\check y_k\over\partial y_\ell}\\ &=&{\partial y_j\over\partial\check y_i}={\partial^2\check K\over \partial\check y_i\partial\check y_j}. 
\end{eqnarray*} \end{proof} Thus we introduce the notion of the \emph{Legendre transform} of an affine manifold with a multi-valued convex function. \begin{definition} \label{multivaluedconvex} Let $B$ be an affine manifold. A \emph{multi-valued} function $K$ on $B$ is a collection of functions on an open cover $\{(U_i,K_i)\}$ such that on $U_i\cap U_j$, $K_i-K_j$ is affine linear. We say $K$ is \emph{convex} if the Hessian $(\partial^2 K_i/\partial y_j\partial y_k)$ is positive definite for all $i$, in any, or equivalently all, affine coordinate systems $y_1, \ldots,y_n$. Given a pair $(B,K)$ of affine manifold and convex multi-valued function, the \emph{Legendre transform} of $(B,K)$ is a pair $(\check B, \check K)$ where $\check B$ is an affine structure on the underlying manifold of $B$ with coordinates given locally by $\check y_i=\partial K/\partial y_i$, and $\check K$ is defined by \[ \check K_i(\check y_1,\ldots,\check y_n)=\sum \check y_j y_j -K_i(y_1,\ldots,y_n). \] \end{definition} \begin{xca} Check that $\check K$ is also convex, and that the Legendre transform of $(\check B,\check K)$ is $(B,K)$. \end{xca} Curiously, this Legendre transform between affine manifolds with Hessian metric seems to have first appeared in a work in statistics predating mirror symmetry, see \cite{Amari}. \section{Semi-flat mirror symmetry} \label{semiflatsection} Let's forget about special Lagrangian fibrations for the moment. Instead, we will look at how the structures found on $B$ in the previous section give a toy version of mirror symmetry. \begin{definition} Let $B$ be a tropical affine manifold. \begin{enumerate} \item Denote by $\Lambda\subseteq\T_B$ the local system of lattices generated locally by $\partial/\partial y_1,\ldots,\partial/\partial y_n$, where $y_1,\ldots,y_n$ are local affine coordinates. This is well-defined because transition maps are in $\RR^n\rtimes GL_n(\ZZ)$. Set \[ X(B):=\T_B/\Lambda. \] This is a torus bundle over $B$. 
In addition, $X(B)$ carries a complex structure defined locally as follows. Let $U\subseteq B$ be an open set with affine coordinates $y_1,\ldots,y_n$, so $\T_U$ has coordinate functions $y_1,\ldots,y_n$, $x_1=dy_1,\ldots,x_n=dy_n$. Then \[ q_j=e^{2\pi i(x_j+iy_j)} \] gives a system of holomorphic coordinates on $\T_U/\Lambda|_U$, and the induced complex structure is independent of the choice of affine coordinates. This is called the \emph{semi-flat} complex structure on $X(B)$. Later we will need a variant of this: for $\epsilon>0$, set \[ X_{\epsilon}(B):=\T_B/\epsilon\Lambda. \] This has a complex structure with coordinates given by \[ q_j=e^{2\pi i(x_j+iy_j)/\epsilon}. \] (As we shall see later, the limit $\epsilon\rightarrow 0$ corresponds to a ``large complex structure limit.'') \item Define $\check\Lambda\subseteq\T^*_B$ to be the local system of lattices generated locally by $dy_1,\ldots,dy_n$, with $y_1,\ldots,y_n$ local affine coordinates. Set \[ \check X(B):=\T^*_B/\check\Lambda. \] Of course $\T^*_B$ carries a canonical symplectic structure, and this symplectic structure descends to $\check X(B)$. \end{enumerate} \qed \end{definition} We write $f:X(B)\rightarrow B$ and $\check f:\check X(B)\rightarrow B$ for these torus fibrations; these are clearly dual. Now suppose in addition we have a Hessian metric $g$ on $B$, with local potential function $K$. Then the following propositions show that in fact both $X(B)$ and $\check X(B)$ become K\"ahler manifolds. \begin{proposition} $K\circ f$ is a (local) K\"ahler potential on $X(B)$, defining a K\"ahler form $\omega=2i\partial\bar\partial(K\circ f)$. This metric is Ricci-flat if and only if $K$ satisfies the real Monge-Amp\`ere equation \[ \det {\partial^2 K\over \partial y_i\partial y_j}=constant.
\] \end{proposition} \begin{proof} Working locally with affine coordinates $(y_j)$ and complex coordinates \[ z_j={1\over 2\pi i}\log q_j=x_j+i y_j, \] we compute $\omega=2i\partial\bar\partial(K\circ f)={i\over 2} \sum {\partial^2 K\over \partial y_j\partial y_k} dz_j\wedge d\bar z_k$ which is clearly positive. Furthermore, if $\Omega=dz_1\wedge\cdots\wedge dz_n$, then $\omega^n$ is proportional to $\Omega\wedge\bar\Omega$ if and only if $\det (\partial^2 K/\partial y_j\partial y_k)$ is constant. \end{proof} We write this K\"ahler manifold as $X(B,K)$. Dually we have \begin{proposition} In local canonical coordinates $y_i,\check x_i$ on $\T^*_B$, the complex coordinate functions $z_j=\check x_j+i\partial K/\partial y_j$ on $\T^*_B$ induce a well-defined complex structure on $\check X(B)$, with respect to which the canonical symplectic form $\omega$ is the K\"ahler form of a metric. Furthermore this metric is Ricci-flat if and only if $K$ satisfies the real Monge-Amp\`ere equation \[ \det {\partial^2 K\over \partial y_j\partial y_k}=constant. \] \end{proposition} \begin{proof} It is easy to see that an affine linear change in the coordinates $y_j$ (and hence an appropriate change in the coordinates $\check x_j$) results in a linear change of the coordinates $z_j$, so they induce a well-defined complex structure invariant under $\check x_j\mapsto \check x_j+1$, and hence a complex structure on $\check X(B)$. Then one computes that \[ \omega=\sum d\check x_j\wedge dy_j={i\over 2}\sum g^{jk} dz_j\wedge d\bar z_k \] where $g_{ij}=\partial^2 K/\partial y_j\partial y_k$. Then the metric is Ricci-flat if and only if $\det(g^{jk})=constant$, if and only if $\det(g_{jk})=constant$. \end{proof} As before, we call this K\"ahler manifold $\check X(B,K)$. 
This motivates the definition \begin{definition} An affine manifold with metric of Hessian form is a \emph{Monge-Amp\`ere manifold} if the local potential function $K$ satisfies the Monge-Amp\`ere equation $\det(\partial^2K/\partial y_i\partial y_j)=constant$. \end{definition} Hessian and Monge-Amp\`ere manifolds were first studied by Cheng and Yau in \cite{ChengYau}. \begin{xca} \label{caniso} Show that the identification of $\T_B$ and $\T^*_B$ given by a Hessian metric induces a canonical isomorphism $X(B,K)\cong\check X(\check B,\check K)$ of K\"ahler manifolds, where $(\check B,\check K)$ is the Legendre transform of $(B,K)$. \end{xca} There is a key extra parameter which appears in mirror symmetry known as the $B$-field. This appears as a field in the non-linear sigma model with Calabi-Yau target space, and is required mathematically to make sense of mirror symmetry. Mirror symmetry roughly posits an isomorphism between the complex moduli space of a Calabi-Yau manifold $X$ and the K\"ahler moduli space of $\check X$. If one interprets the K\"ahler moduli space to mean the space of all Ricci-flat K\"ahler forms on $\check X$, then one obtains only a real manifold as moduli space, and one needs a complex manifold to match up with the complex moduli space of $X$. The $B$-field is interpreted as an element ${\bf B}\in H^2(\check X,\RR/\ZZ)$, and one views ${\bf B}+i\omega$ as a complexified K\"ahler class on $\check X$ for $\omega$ a K\"ahler class on $\check X$. In the context of our toy version of mirror symmetry, we view the $B$-field as an element ${\bf B}\in H^1(B,\Lambda_{\RR}/\Lambda)$, where $\Lambda_{\RR}=\Lambda\otimes_{\ZZ} \RR$. This does not quite agree with the above definition of the $B$-field, as this group does not necessarily coincide with $H^2(\check X,\RR/\ZZ)$. However, in many important cases, such as for simply connected Calabi-Yau threefolds with torsion-free integral cohomology, these two groups do coincide. 
More generally, including the case of K3 surfaces and abelian varieties, one would need to pass to generalized complex structures \cite{HitGen}, \cite{Gual}, \cite{Oren}, \cite{Clay}, \cite{Huy}, which we do not wish to do here. Noting that a section of $\Lambda_{\RR}/\Lambda$ over an open set $U$ can be viewed as a section of $\T_U/\Lambda|_U$, such a section acts on $\T_U/\Lambda|_U$ via translation, and this action is in fact holomorphic with respect to the semi-flat complex structure. Thus a \v Cech 1-cocycle $(U_{ij},\beta_{ij})$ representing ${\bf B}$ allows us to reglue $X(B)$ via translations over the intersections $U_{ij}$. This is done by identifying the open subsets $f^{-1}(U_{ij})\subseteq f^{-1}(U_i)$ and $f^{-1}(U_{ij})\subseteq f^{-1}(U_j)$ via the automorphism of $f^{-1}(U_{ij})$ given by translation by the section $\beta_{ij}$. This gives a new complex manifold $X(B,{\bf B})$. If in addition there is a multi-valued potential function $K$ defining a metric, these translations preserve the metric and yield a K\"ahler manifold $X(B,{\bf B},K)$. \medskip Thus the full toy version of mirror symmetry is as follows: \begin{construction}[The toy mirror symmetry construction] Suppose given an affine manifold $B$ with potential $K$ and $B$-fields ${\bf B} \in H^1(B,\Lambda_{\RR}/\Lambda)$, $\check {\bf B}\in H^1(B, \check\Lambda_{\RR}/\check\Lambda)$. It is not difficult to see, and you will have seen this already if you've done Exercise \ref{caniso}, that the local system $\check\Lambda$ defined using the affine structure on $B$ is the same as the local system $\Lambda$ defined using the affine structure on $\check B$. So we say the pair \[ (X(B,{\bf B},K),\check{\bf B}) \] is mirror to \[ (X(\check B,\check {\bf B},\check K),{\bf B}). \] \end{construction} This provides a reasonably fulfilling picture of mirror symmetry in a simple context.
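The simplest instance of the construction (a standard illustration, with ${\bf B}=\check{\bf B}=0$, not part of the text above) is worth recording:

```latex
Take $B=\RR^n/\ZZ^n$ with its standard affine structure and the
multi-valued potential
\[
K=\tfrac{1}{2}\sum_j y_j^2,
\]
well-defined as a multi-valued function since $K$ changes by an affine
linear function under $y_j\mapsto y_j+1$. Then $g_{jk}=\delta_{jk}$ and
$\det(g_{jk})=1$, so $(B,K)$ is Monge-Amp\`ere and is fixed by the
Legendre transform. Both $X(B,K)$ and $\check X(B,K)$ are the flat
complex torus $\CC^n/(\ZZ^n+i\ZZ^n)$: this square torus is self-mirror,
and rescaling $K$ rescales the K\"ahler class on one side while
rescaling the complex structure on the other.
```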
Many more aspects of mirror symmetry can be worked out in this semi-flat context, see \cite{Leung} and \cite{Clay}, Chapter 6. This semi-flat case is an ideal testing ground for concepts in mirror symmetry. However, ultimately this gives only limited insight into the general case. The only compact Calabi-Yau manifolds with semi-flat Ricci-flat metric which arise in this way are complex tori (shown by Cheng and Yau in \cite{ChengYau}). To deal with more interesting cases, we need to allow singular fibres, and hence singularities in the affine structure of $B$. The existence of singular fibres is fundamental for the most interesting aspects of mirror symmetry. \section{Affine manifolds with singularities} \label{affsection} To deal with singular fibres, we define \begin{definition} A \emph{(tropical, integral) affine manifold with singularities} is a $(C^0)$ manifold $B$ with an open subset $B_0\subseteq B$ which carries a (tropical, integral) affine structure, and such that $\Gamma:=B \setminus B_0$ is a locally finite union of locally closed submanifolds of codimension $\ge 2$. \end{definition} Here we will give a relatively simple construction of such affine manifolds with singularities; a broader class of examples is given in \cite{GBB}; see also \cite{HZ} and \cite{HZ3}. Let $\Delta$ be a reflexive polytope in $M_{\RR}=M\otimes_{\ZZ}\RR$, where $M=\ZZ^n$. This means that $\Delta$ is a lattice polytope with a unique interior integral point $0\in\Delta$, and the polar dual polytope \[ \nabla:=\{n\in N_{\RR}|\hbox{$\langle m,n\rangle\ge -1$ for all $m\in\Delta$} \} \] is also a lattice polytope. Let $B=\partial\Delta$, and let $\P$ be a decomposition of $B$ into lattice polytopes, i.e., $\P$ is a set of lattice polytopes contained in $B$ such that (1) $B=\bigcup_{\sigma\in\P} \sigma$; (2) $\sigma_1,\sigma_2 \in \P$ implies $\sigma_1\cap\sigma_2$ lies in $\P$ and is a face of both $\sigma_1$ and $\sigma_2$; (3) if $\sigma\in\P$, any face of $\sigma$ lies in $\P$.
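A two-dimensional illustration of reflexivity (a standard example, not taken from the text) may help:

```latex
The triangle
\[
\Delta=\mathrm{conv}\{(1,0),(0,1),(-1,-1)\}\subseteq M_{\RR}=\RR^2,
\]
the reflexive polytope associated with the projective plane, has $0$ as
its unique interior lattice point, and
\[
\nabla=\{n\in N_{\RR}\mid \hbox{$\langle m,n\rangle\ge -1$ for all
$m\in\Delta$}\}
=\mathrm{conv}\{(2,-1),(-1,2),(-1,-1)\}
\]
is again a lattice polytope, so $\Delta$ is reflexive. Here
$B=\partial\Delta$ is a circle, and in dimension one there is no room
for a discriminant locus, so the construction yields a non-singular
integral affine structure on $B$.
```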
We now define a structure of integral affine manifold with singularities on $B$, with discriminant locus $\Gamma\subseteq B$ defined as follows. Let $\Bar(\P)$ denote the first barycentric subdivision of $\P$ and let $\Gamma\subseteq B$ be the union of all simplices of $\Bar(\P)$ neither containing a vertex of $\P$ (a zero-dimensional cell) nor intersecting the interior of a maximal cell of $\P$. Setting $B_0:=B\setminus\Gamma$, we define an affine structure on $B_0$ as follows. $B_0$ has an open cover \[ \{W_{\sigma}|\hbox{$\sigma\in\P$ maximal}\}\cup \{W_v|\hbox{$v\in\P$ a vertex}\} \] where $W_{\sigma}=\Int(\sigma)$, the interior of $\sigma$, and \[ W_v=\bigcup_{\tau\in\Bar(\P)\atop v\in\tau}\Int(\tau) \] is the (open) star of $v$ in $\Bar(\P)$. We define an affine chart \[ \psi_{\sigma}:W_{\sigma}\hookrightarrow\AA_{\sigma}\subseteq M_{\RR} \] given by the inclusion of $W_{\sigma}$ in $\AA_{\sigma}$, which denotes the unique $(n-1)$-dimensional affine hyperplane in $M_{\RR}$ containing $\sigma$. Also, take $\psi_v:W_v\rightarrow M_{\RR}/\RR v$ to be the projection. One checks easily that for $v\in\sigma$, $\psi_{\sigma}\circ\psi_v^{-1}$ is integral affine linear (integrality follows from reflexivity of $\Delta$!) so $B$ is an integral affine manifold with singularities. \begin{example} \label{quintic} Let $\Delta\subseteq\RR^4$ be the convex hull of the points \begin{align*} &(-1,-1,-1,-1),\cr & (4,-1,-1,-1),\cr & (-1,4,-1,-1),\cr & (-1,-1,4,-1),\cr & (-1,-1,-1,4). \end{align*} Choose a triangulation $\P$ of $B=\partial\Delta$ into standard simplices; this can be done in a regular way so that the restriction of $\P$ to each two-dimensional face of $\Delta$ is as given by the light lines in Figure \ref{quinticdisc}. This gives a discriminant locus $\Gamma$ depicted by the dark lines in the figure; the line segments coming out of the boundary of the two-face are meant to illustrate the pieces of discriminant locus contained in adjacent two-faces.
The discriminant locus there is not contained in the plane of the two-face. In particular, the discriminant locus is not planar at the vertices of $\Gamma$ on the edges of $\Delta$ with respect to the affine structure we define. Note $\Gamma$ is a trivalent graph, with two types of trivalent vertices, the non-planar ones just mentioned and the planar vertices contained in the interior of two-faces. \begin{figure} \includegraphics{quintic} \caption{} \label{quinticdisc} \end{figure} For an affine manifold, the monodromy of the local system $\Lambda$ is an important feature of the affine structure. In this example, it is very useful to analyze this monodromy around loops about the discriminant locus. If $v$ is a vertex of $\Gamma$ contained in the interior of a two-face of $\Delta$, one can consider loops based near $v$ in $B_0$ around the three line segments of $\Gamma$ adjacent to $v$. It is an enjoyable exercise to calculate that these monodromy matrices take the form, in a suitable basis, \[ T_1=\begin{pmatrix} 1&0&0\\1&1&0\\0&0&1\end{pmatrix}, T_2=\begin{pmatrix} 1&0&0\\0&1&0\\1&0&1\end{pmatrix}, T_3=\begin{pmatrix} 1&0&0\\-1&1&0\\-1&0&1\end{pmatrix}. \] They are computed by studying the composition of transition maps between charts that a loop passes through. These matrices can be viewed as specifying the obstruction to extending the affine structure across a neighbourhood of $v$ in $\Gamma$. Of course, the monodromy of $\check\Lambda$ is the transpose inverse of these matrices. Similarly, if $v$ is a vertex of $\Gamma$ contained in an edge of $\Delta$, then the monodromy will take the form \[ T_1=\begin{pmatrix} 1&-1&0\\0&1&0\\0&0&1\end{pmatrix}, T_2=\begin{pmatrix} 1&0&-1\\0&1&0\\0&0&1\end{pmatrix}, T_3=\begin{pmatrix} 1&1&1\\0&1&0\\0&0&1\end{pmatrix}. \] So we see that the monodromies at the two types of vertices are interchanged between $\Lambda$ and $\check\Lambda$.
\end{example} One main result of \cite{TMS} is \begin{theorem} If $B$ is a three-dimensional tropical affine manifold with singularities such that $\Gamma$ is trivalent and the monodromy of $\Lambda$ at each vertex is one of the above two types, then $f_0:X(B_0)\rightarrow B_0$ can be compactified to a topological fibration $f:X(B)\rightarrow B$. Dually, $\check f_0:\check X(B_0) \rightarrow B_0$ can be compactified to a topological fibration $\check f:\check X(B)\rightarrow B$. Both $X(B)$ and $\check X(B)$ are topological manifolds. \end{theorem} We won't give any details here of how this is carried out, but it is not particularly difficult, as long as one restricts to the category of topological (not $C^{\infty}$) manifolds. However, it is interesting to look at the singular fibres we need to add. If $b\in\Gamma$ is a point which is not a vertex of $\Gamma$, then $f^{-1}(b)$ is homeomorphic to $I_1\times S^1$, where $I_1$ denotes a Kodaira type $I_1$ elliptic curve, i.e., a pinched torus. If $v$ is a vertex of $\Gamma$, with monodromy of the first type, then $f^{-1}(v)=S^1\times S^1\times S^1/\sim$, with $(a,b,c)\sim (a',b',c')$ if $(a,b,c)=(a',b',c')$ or $a=a'=1$, where $S^1$ is identified with the unit circle in $\CC$. This is the three-dimensional analogue of a pinched torus, and $\chi(f^{-1}(v))=+1$. We call this a \emph{positive} fibre. If $v$ is a vertex of $\Gamma$, with monodromy of the second type, then $f^{-1}(v)$ can be described as $S^1\times S^1\times S^1/\sim$, with $(a,b,c)\sim (a',b',c')$ if $(a,b,c)=(a',b',c')$ or $a=a'=1$, $b=b'$, or $a=a',b=b'=1$. The singular locus of this fibre is a figure eight, and $\chi(f^{-1}(v))=-1$. We call this a \emph{negative} fibre. So we see a very concrete local consequence of SYZ duality: in the compactifications $X(B)$ and $\check X(B)$, the positive and negative fibres are interchanged. 
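As a quick consistency check on the monodromy matrices listed in Example \ref{quintic} (a computation added here, not in the original text): near a trivalent vertex of $\Gamma$, a loop encircling all three legs is, with compatible orientations and basepoint, contractible in $B_0$, so the product $T_1T_2T_3$ must be the identity at both types of vertices. Moreover, in these particular bases the inverse transposes of the first triple are exactly the second triple, exhibiting the interchange of vertex types under duality:

```python
import numpy as np

# Monodromy of Lambda around the three legs of Gamma at the two kinds of
# trivalent vertices (matrices as listed in the quintic example).
positive = [np.array(T) for T in (          # vertex inside a two-face
    [[1, 0, 0], [1, 1, 0], [0, 0, 1]],
    [[1, 0, 0], [0, 1, 0], [1, 0, 1]],
    [[1, 0, 0], [-1, 1, 0], [-1, 0, 1]],
)]
negative = [np.array(T) for T in (          # vertex on an edge of Delta
    [[1, -1, 0], [0, 1, 0], [0, 0, 1]],
    [[1, 0, -1], [0, 1, 0], [0, 0, 1]],
    [[1, 1, 1], [0, 1, 0], [0, 0, 1]],
)]

I3 = np.eye(3, dtype=int)

# a loop around all three legs is trivial, so each product is trivial
assert (positive[0] @ positive[1] @ positive[2] == I3).all()
assert (negative[0] @ negative[1] @ negative[2] == I3).all()

# the monodromy of the dual local system is the inverse transpose; here
# the inverse transposes of the "positive" triple are the "negative" ones
for Tp, Tn in zip(positive, negative):
    inv_t = np.rint(np.linalg.inv(Tp)).astype(int).T
    assert np.array_equal(inv_t, Tn)

# each matrix is integral with determinant 1
for T in positive + negative:
    assert round(np.linalg.det(T)) == 1
```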
Of course, this results in the observation that the Euler characteristic changes sign under mirror symmetry for Calabi-Yau threefolds. \begin{example} Continuing with Example \ref{quintic}, it was proved in \cite{TMS} that $\check X(B)$ is homeomorphic to the quintic and $X(B)$ is homeomorphic to the mirror quintic. Modulo a paper \cite{tori} whose appearance has been long-delayed because of other, more pressing, projects, the results of \cite{GBB} imply that the SYZ conjecture holds for all complete intersections in toric varieties at a topological level. \end{example} W.-D.\ Ruan in \cite{Ruan} gave a description of \emph{Lagrangian} torus fibrations for hypersurfaces in toric varieties using a symplectic flow argument, and his construction should coincide with a \emph{symplectic} compactification of the symplectic manifolds $\check X(B_0)$. In the three-dimensional case, such a symplectic compactification has been constructed by Ricardo Casta\~no-Bernard and Diego Matessi \cite{CastMat}. If this compactification is applied to the affine manifolds with singularities described here, the resulting symplectic manifolds should be symplectomorphic to the corresponding toric hypersurface, but this has not yet been shown. \section{Tropical geometry} \label{tropgeomsect} Recalling that mirror symmetry is supposed to allow us to count curves, let us discuss at an intuitive level how the picture so far gives us insight into this question. Let $B$ be a tropical affine manifold. Then as we saw, $X(B)$ carries the semi-flat complex structure, and it is easy to describe some complex submanifolds of $X(B)$ as follows. Let $L\subseteq B$ be a linear subspace with rational slope, i.e., the tangent space $\shT_{L,b}$ to $L$ at any $b\in L$ can be written as $M\otimes_{\ZZ} \RR$ for some sublattice $M\subseteq \Lambda_b$. Then we obtain a submanifold \[ X(L):=\shT_L/(\shT_L\cap \Lambda)\subseteq X(B). \] One checks easily that this is a complex submanifold. 
For example, if $B=\RR^n$, so that $X(B)$ is just an algebraic torus $(\CC^*)^n$ with coordinates $q_1,\ldots,q_n$, and $L\subseteq B$ is a codimension $p$ affine linear subspace defined by equations \[ \sum_j c_{ij}y_j=d_i,\quad 1\le i\le p, \] with $c_{ij}\in\ZZ$, $d_i\in\RR$, then the corresponding submanifold of $X(B)$ is the subtorus given by the equations \[ \prod_j q_j^{c_{ij}}= e^{-2\pi d_i},\quad 1\le i\le p. \] Of course, subtori of tori are not particularly interesting. How might we build more complicated submanifolds? Let us focus on curves, where we take the linear submanifolds of $B$ to be of dimension one. Then if we take $L$ to be a line segment, ray, or line, $X(L)$ is a cylinder, with or without boundary in the various cases. We can then try to glue such cylinders together to obtain more complicated curves. For example, imagine we are given rays meeting at a point $b\in B=\RR^2$ as pictured in Figure \ref{tropicalline}. Take primitive integral tangent vectors $v_1,v_2,v_3\in\RR^2$ to $L_1,L_2$ and $L_3$ pointing outwards from the point $b$ where the three segments intersect. Now we have the three cylinders $X(L_i)$ which do not match up over $b$: the fibre $f^{-1}(b)=\RR^2/\ZZ^2$ intersects $X(L_i)$ in a circle $\RR v_i/\ZZ v_i$. These circles are represented in $H_1(f^{-1}(b),\ZZ)= \Lambda_b$ precisely by the vectors $v_1,v_2,v_3$, and so the condition that the circles bound a surface in $f^{-1}(b)$ is that $v_1+v_2+v_3=0$. Thus, if this condition holds, we can glue in a surface $S$ contained in $f^{-1}(b)$ so that $X(L_1)\cup X(L_2)\cup X(L_3)\cup S$ now has no boundary at $b$. Of course, it is very far from being a holomorphic submanifold. The expectation, however, is that this sort of object can be deformed to a nearby holomorphic curve. \begin{figure} \input{trivalent.pstex_t} \caption{} \label{tropicalline} \end{figure} Precisely, continuing with the above example, suppose $b=0$ and $v_1=(1,0)$, $v_2=(0,1)$ and $v_3=(-1,-1)$.
With holomorphic coordinates $q_1,q_2$ on $X(B)=(\CC^*)^2$, consider the curve $C\subseteq (\CC^*)^2$ defined by $1+q_1+q_2=0$. Look at the image of this curve under the map $f:X(B)\rightarrow B$, which here can be written explicitly as $(q_1,q_2)\mapsto {-1\over 2\pi}(\log |q_1|, \log |q_2|)$. One finds that one obtains a thickening of the trivalent graph above, typically known as an amoeba. Further, if one considers not $X(B)$ but $X_{\epsilon}(B)$, where now holomorphic coordinates are given by $q_j=e^{2\pi i(x_j+iy_j)/\epsilon}$ and $f_{\epsilon}:X_{\epsilon}(B) \rightarrow B$ is given by $(q_1,q_2)\mapsto -{\epsilon\over 2\pi} (\log |q_1|,\log |q_2|)$, one finds that as $\epsilon\rightarrow 0$, $f_{\epsilon}(C)$ converges to the trivalent graph in the above figure. In this sense the trivalent graph on $B$ is a limiting version of curves on a family of varieties tending towards a large complex structure limit. This basic picture for curves in algebraic tori is now very well studied. In particular, this study spawned the subject of \emph{tropical geometry}. The word \emph{tropical} is motivated by the role that the \emph{tropical semiring} plays. This is the semiring $(\RR,\oplus, \odot)$ where addition and multiplication are given by \begin{align*} a\oplus b:= {} & \min(a,b)\\ a\odot b:= {} & a+b. \end{align*} The word ``tropical'' is used in honor of the Brazilian mathematician Imre Simon, who pioneered use of this semi-ring. We now consider polynomials over the tropical semiring, as follows. Let $S\subseteq \ZZ^n$ be a finite subset, and consider tropical polynomials on $\RR^n$ of the form \[ g:=\sum_{(p_1,\ldots,p_n)\in S} c_{p_1\ldots p_n} x_1^{p_1}\cdots x_n^{p_n} \] where the coefficients lie in $\RR$ and the operations are in the tropical semiring. Then $g$ is a convex piecewise linear function on $\RR^n$, and the locus where $g$ is not linear is called a \emph{tropical hypersurface}. In particular, in the case $n=2$, we obtain a tropical curve. 
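To make the semiring operations concrete, here is a short Python sketch (the helper names `trop_eval` and `on_corner_locus` are ours, not standard) that evaluates a tropical polynomial as a minimum of affine functions and tests whether a point lies on the corner locus, i.e., on the tropical hypersurface:

```python
# Hedged sketch: a tropical polynomial sum_S c_p ⊙ x^p evaluates to
# min over p in S of (c_p + <p, x>), since tropical addition is min
# and tropical multiplication is ordinary addition.

def trop_eval(coeffs_exps, x):
    """Evaluate a tropical polynomial given as a list of (exponent, coefficient)."""
    return min(c + sum(p_i * x_i for p_i, x_i in zip(p, x))
               for p, c in coeffs_exps)

def on_corner_locus(coeffs_exps, x, tol=1e-9):
    """The tropical hypersurface: points where the minimum is achieved
    by at least two monomials, i.e., where the piecewise linear function
    fails to be linear."""
    vals = sorted(c + sum(p_i * x_i for p_i, x_i in zip(p, x))
                  for p, c in coeffs_exps)
    return vals[1] - vals[0] < tol

# g = 0 ⊕ x1 ⊕ x2: monomials 1, x1, x2, each with tropical coefficient 0.
g = [((0, 0), 0.0), ((1, 0), 0.0), ((0, 1), 0.0)]

print(trop_eval(g, (2.0, 3.0)))        # constant term wins: 0.0
print(trop_eval(g, (-1.0, 4.0)))       # the x1 term wins: -1.0
print(on_corner_locus(g, (0.0, 5.0)))  # on the ray x1 = 0 <= x2: True
print(on_corner_locus(g, (-2.0, -2.0)))  # on the ray x1 = x2 <= 0: True
print(on_corner_locus(g, (1.0, 2.0)))  # strict minimum, smooth region: False
```

The corner locus of this particular $g$ is exactly the trivalent graph of Figure \ref{tropicalline}: the three rays where two of the three affine functions tie.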
In the example of Figure \ref{tropicalline}, the relevant tropical polynomial could be taken to be $0\oplus x_1\oplus x_2$. While the tropical semiring has been used extensively in tropical geometry, it is not so convenient for us to view our tropical curves on $B$ as being defined by equations, since typically these curves will be of high codimension. Instead, it is better to follow Mikhalkin \cite{Mik} and use parameterized tropical curves. The domain of a parameterized tropical curve will be a weighted graph. In what follows, $\overline{\Gamma}$ will denote a connected graph. Such a graph can be viewed in two different ways. First, it can be viewed as a purely combinatorial object, i.e., a set $\overline{\Gamma}^{[0]}$ of vertices and a set $\overline{\Gamma}^{[1]}$ of edges consisting of unordered pairs of elements of $\overline{\Gamma}^{[0]}$, indicating the endpoints of an edge. We can also view $\overline{\Gamma}$ as the topological realization of the graph, i.e., a topological space which is the union of line segments corresponding to the edges. We shall confuse these two viewpoints at will. We will then denote by $\Gamma$ the topological space obtained from $\overline{\Gamma}$ by deleting the univalent vertices of $\overline{\Gamma}$, so that $\Gamma$ may have some non-compact edges. We also take $\overline{\Gamma}$ to come with a \emph{weight function}, a map \[ w:\overline{\Gamma}^{[1]}\rightarrow \NN=\{0,1,2,\ldots\}. \] Replacing $\RR^n$ with a general tropical affine manifold $B$, we now arrive at the following definition: \begin{definition} A parameterized tropical curve in $B$ is a continuous map \[ h:\Gamma\rightarrow B \] where $\Gamma$ is obtained from a graph $\overline{\Gamma}$ as above, satisfying the following two properties: \begin{enumerate} \item If $E\in\Gamma^{[1]}$ and $w(E)=0$, then $h|_E$ is constant; otherwise $h|_E$ is a proper embedding of $E$ into $B$ as a line segment, ray or line of rational slope. 
\item \emph{The balancing condition}. Let $V\in\overline{\Gamma}^{[0]}$ be a vertex with valency larger than $1$, with adjacent edges $E_1,\ldots,E_{\ell}$. Let $v_i\in \Lambda_{h(V)}$ be a primitive tangent vector to $h(E_i)$ at $h(V)$, pointing away from $h(V)$. Then \[ \sum_{i=1}^{\ell} w(E_i)v_i=0. \] \end{enumerate} \end{definition} Here the balancing condition is just expressing the topological requirement that the boundaries of the various cylinders $X(h(E_i))\subseteq X(B)$ can be connected up with a surface contained in the fibre of $X(B)\rightarrow B$ over $h(V)$. The weights can be interpreted as taking the cylinders $X(h(E_i))$ with multiplicity. An important question then arises: \begin{question} When can a given parameterized tropical curve be viewed as a limit of holomorphic curves in $X_{\epsilon}(B)$ as $\epsilon\rightarrow 0$? \end{question} This question has attracted a great deal of attention when $B=\RR^n$, with completely satisfactory results in the case $n=2$ (Answer: always), and less complete results when $n\ge 3$. The $n=2$ case was first treated by Mikhalkin \cite{Mik}, and results in all dimensions were first obtained by Nishinou and Siebert \cite{NS}. In particular, Mikhalkin proved that in this two-dimensional case, one can calculate numbers of curves of a given degree and genus passing through a fixed set of points, showing that difficult holomorphic enumerative problems can be solved by a purely combinatorial approach. This work gives hope that one can really count curves combinatorially in much more general settings. In the two-dimensional case, again, my own work \cite{GP2} showed that the mirror side (for mirror symmetry for $\PP^2$) could also be interpreted tropically, giving a completely tropical interpretation of mirror symmetry for $\PP^2$. So far we have not considered the case that $B$ has singularities.
In case $B$ has singularities, we expect that one should be able to relax the balancing condition when a vertex falls inside of a point of the singular locus, and in particular one can allow univalent vertices which map to the singular locus. The reason for this is that once we compactify $X(B_0)$ to $X(B)$, one expects to find holomorphic disks fibering over line segments emanating from singular points: see Figure \ref{vanishingdisk} for a depiction of this when $B$ is two-dimensional, having isolated singularities. \begin{figure} \centerline{\epsfbox{fig6.5.eps}} \caption{} \label{vanishingdisk} \end{figure} We will avoid giving a precise definition of what a tropical curve should mean in the case that $B$ has singularities, largely because it is not clear yet what the precise definition should be. Hopefully, though, this discussion makes it clear that at an intuitive level, counting curves should be something which can be done on $B$. \section{The problems with the SYZ conjecture, and how to get around them} \label{problemsection} The discussion of \S\ref{affsection} demonstrates that the SYZ conjecture gives a beautiful description of mirror symmetry at a purely topological level. This, by itself, can often be useful, but fails to get at the original hard differential geometric conjecture and fails to give insight into why mirror symmetry counts curves. In order for the full-strength version of the SYZ conjecture to hold, the strong version of duality for topological torus fibrations we saw in \S\ref{affsection} should continue to hold at the special Lagrangian level. This would mean that a mirror pair $X,\check X$ would possess special Lagrangian torus fibrations $f:X\rightarrow B$ and $\check f:\check X\rightarrow B$ with codimension two discriminant loci, and the discriminant loci of $f$ and $\check f$ would coincide. These fibrations would then be dual away from the discriminant locus. 
There are examples of special Lagrangian fibrations on non-compact toric varieties $X$ with discriminant locus looking very similar to what we have described in the topological case. In particular, if $X$ is an $n$-dimensional Ricci-flat K\"ahler manifold with a $T^{n-1}$-action preserving the metric and holomorphic $n$-form, then $X$ will have a very nice special Lagrangian fibration with codimension two discriminant locus. (See \cite{SLAGex} and \cite{Gold}). However, Dominic Joyce (see \cite{Joyce} and other papers cited therein) began studying some three-dimensional $S^1$-invariant examples, and discovered quite different behaviour. There is an argument in \cite{SlagII} that if a special Lagrangian fibration is $C^{\infty}$, then the discriminant locus will be (Hausdorff) codimension two. However, Joyce discovered examples which were not differentiable, but only piecewise differentiable, and furthermore, had a codimension one discriminant locus: \begin{example} Define $F:\CC^3\rightarrow \RR\times\CC$ by $F(z_1,z_2,z_3)=(a,c)$ with $2a=|z_1|^2-|z_2|^2$ and \[ c=\begin{cases} z_3&a=z_1=z_2=0\\ z_3-\bar z_1\bar z_2/|z_1|& a\ge 0, z_1\not=0\\ z_3-\bar z_1\bar z_2/|z_2|&a<0. \end{cases} \] It is easy to see that if $a\not=0$, then $F^{-1}(a,c)$ is homeomorphic to $\RR^2\times S^1$, while if $a=0$, then $F^{-1}(a,c)$ is a cone over $T^2$: essentially, one copy of $S^1$ in $\RR^2\times S^1$ collapses to a point. In addition, all fibres of this map are special Lagrangian, and it is obviously only piecewise smooth. The discriminant locus is the entire plane given by $a=0$. \end{example} This example forces a reevaluation of the strong form of the SYZ conjecture. In further work Joyce found evidence for a more likely picture for general special Lagrangian fibrations in three dimensions. The discriminant locus, instead of being a codimension two graph, will be a codimension one blob. 
Typically the union of the singular points of singular fibres will be a Riemann surface, and it will map to an amoeba-shaped set in $B$, i.e., the discriminant locus looks like the picture on the right rather than the left in Figure \ref{fatdisc}, and will be a fattening of the old picture of a codimension two discriminant. \begin{figure} \includegraphics{fatdisc} \caption{} \label{fatdisc} \end{figure} Joyce made some additional arguments to suggest that this fattened discriminant locus must look fundamentally different in a neighbourhood of the two basic types of vertices we saw in \S\ref{affsection}, with the two types of vertices expected to appear pretty much as depicted in Figure \ref{fatdisc}. Thus the strong form of duality mentioned above, where we expect the discriminant loci of the special Lagrangian fibrations on a mirror pair to be the same, cannot hold. If this is the case, one needs to replace this strong form of duality with a weaker form. It seems likely that the best way to rephrase the SYZ conjecture is in a limiting form. Mirror symmetry as we currently understand it has to do with degenerations of Calabi-Yau manifolds. Given a flat family $f:\X \rightarrow D$ over a disk $D$, with the fibre $\X_0$ over $0$ singular and all other fibres $n$-dimensional Calabi-Yau manifolds, we say the family is \emph{maximally unipotent} if the monodromy transformation $T:H^n(\X_t,\QQ)\rightarrow H^n(\X_t,\QQ)$ ($t\in D$ non-zero) satisfies $(T-I)^{n+1}=0$ but $(T-I)^n\not=0$. It is a standard expectation of mirror symmetry that mirrors should be associated to maximally unipotent degenerations of Calabi-Yau manifolds. In particular, given two different maximally unipotent degenerations in a single complex moduli space for some Calabi-Yau manifold, one might obtain different mirror manifolds. 
Such degenerations are usually called ``large complex structure limits'' in the physics literature, although sometimes this phrase is used to impose some additional conditions on the degeneration, see \cite{Morr}. We recall the definition of Gromov-Hausdorff convergence, a notion of convergence of a sequence of metric spaces. \begin{definition} Let $(X,d_X)$, $(Y,d_Y)$ be two compact metric spaces. Suppose there exist maps $f:X\rightarrow Y$ and $g:Y\rightarrow X$ (not necessarily continuous) such that for all $x_1,x_2\in X$, $$|d_X(x_1,x_2)-d_Y(f(x_1),f(x_2))|<\epsilon$$ and for all $x\in X$, $$d_X(x,g\circ f(x))<\epsilon,$$ and the two symmetric properties for $Y$ hold. Then we say the Gromov--Hausdorff distance between $X$ and $Y$ is at most $\epsilon$. The Gromov--Hausdorff distance $d_{GH}(X,Y)$ is the infimum of all such $\epsilon$. \end{definition} It follows from results of Gromov (see for example \cite{Petersen}, pg. 281, Cor. 1.11) that the space of compact Ricci-flat manifolds with diameter $\le C$ is precompact with respect to Gromov-Hausdorff distance, i.e., any sequence of such manifolds has a subsequence converging with respect to the Gromov-Hausdorff distance to a metric space. This metric space could be quite bad; this is quite outside the realm of algebraic geometry! Nevertheless, this raises the following natural question. Given a maximally unipotent degeneration of Calabi-Yau manifolds $\X\rightarrow D$, take a sequence $t_i\in D$ converging to $0$, and consider a sequence $(\X_{t_i}, g_{t_i})$, where $g_{t_i}$ is a choice of Ricci-flat metric chosen so that $Diam(g_{t_i})$ remains bounded. What is the Gromov-Hausdorff limit of $(\X_{t_i},g_{t_i})$, or the limit of some convergent subsequence? \begin{example} Consider a degenerating family of elliptic curves parameterized by $t$, given by $\CC/(\ZZ+\ZZ\tau)$ where $1$ and $\tau={1\over 2\pi i}\log t$ are periods of the elliptic curves.
If we take $t$ approaching $0$ along the positive real axis, then we can just view this as a family of elliptic curves $\X_{\alpha}$ with period $1$ and $i\alpha$ with $\alpha\rightarrow\infty$. If we take the standard Euclidean metric $g$ on $\X_{\alpha}$, then the diameter of $\X_{\alpha}$ is unbounded. To obtain a bounded diameter, we replace $g$ by $g/\alpha^2$; equivalently, we can keep $g$ fixed on $\CC$ but change the periods of the elliptic curve to $1/\alpha, i$. It then becomes clear that the Gromov-Hausdorff limit of such a sequence of elliptic curves is a circle $\RR/\ZZ$. \end{example} This simple example motivates the first conjecture about maximally unipotent degenerations, conjectured independently by myself and Wilson on the one hand \cite{GrWi} and Kontsevich and Soibelman \cite{KS} on the other. \begin{conjecture} \label{GWKSconj} Let $\X\rightarrow D$ be a maximally unipotent degeneration of simply-connected Calabi-Yau manifolds with full $SU(n)$ holonomy, $t_i\in D$ with $t_i\rightarrow 0$, and let $g_i$ be a Ricci-flat metric on $\X_{t_i}$ normalized to have fixed diameter $C$. Then a convergent subsequence of $(\X_{t_i},g_i)$ converges to a metric space $(X_{\infty},d_{\infty})$, where $X_{\infty}$ is homeomorphic to $S^n$. Furthermore, $d_{\infty}$ is induced by a Riemannian metric on $X_{\infty}\setminus\Gamma$, where $\Gamma\subseteq X_{\infty}$ is a set of codimension two. \end{conjecture} Here the topology of the limit depends on the nature of the non-singular fibres $\X_t$; for example, if instead $\X_t$ was hyperk\"ahler, then we would expect the limit to be a projective space. Also, even in the case of full $SU(n)$ holonomy, if $\X_t$ is not simply connected, we would expect limits such as $\QQ$-homology spheres to arise. Conjecture \ref{GWKSconj} is directly inspired by the SYZ conjecture. Suppose we had special Lagrangian fibrations $f_i:\X_{t_i}\rightarrow B_i$. 
Then as the maximally unipotent degeneration is approached, one can see that the volume of the fibres of these fibrations goes to zero. This would suggest these fibres collapse, hopefully leaving the base as the limit. This conjecture was proved by myself and Wilson in 2000 for K3 surfaces in \cite{GrWi}. The proof relied on a number of pleasant facts about K3 surfaces. First, they are hyperk\"ahler manifolds, and a special Lagrangian torus fibration becomes an elliptic fibration after a hyperk\"ahler rotation of the complex structure. Since it is easy to construct elliptic fibrations on K3 surfaces, and indeed such a fibration arises from the data of the maximally unipotent degeneration, it is easy to obtain a special Lagrangian fibration. Once this is done, one needs to carry out a detailed analysis of the behaviour of Ricci-flat metrics in the limit. This is done by creating good approximations to the Ricci-flat metric, using the existence of explicit local models for these metrics near singular fibres of special Lagrangian fibrations in complex dimension two. Most of the techniques used are not available in higher dimension. However, much more recently, weaker collapsing results in the hyperk\"ahler case were obtained in work with V.\ Tosatti and Y.\ Zhang in \cite{GTZ}, assuming the existence of abelian variety fibrations analogous to the elliptic fibrations in the K3 case. Rather than getting an explicit approximate Ricci-flat metric, we make use of a priori estimates of Tosatti in \cite{Tos}. In the general Calabi-Yau case, the only progress towards the conjecture has been work of Zhang in \cite{Zhang}, showing existence of special Lagrangian fibrations in regions of Calabi-Yau manifolds with bounded injectivity radius and sectional curvature, and deducing local collapsing from the existence of such fibrations. The motivation for Conjecture \ref{GWKSconj} from SYZ also provides a limiting form of the conjecture.
There are any number of problems with trying to prove the existence of special Lagrangian fibrations on Calabi-Yau manifolds. Even the existence of a single special Lagrangian torus near a maximally unipotent degeneration is unknown, but we expect it should be easier to find them as we approach the maximally unipotent point. Furthermore, even if we find a special Lagrangian torus, we know that it moves in an $n$-dimensional family, but we don't know its deformations fill out the entire manifold. In addition, there is no guarantee that even if it does, we obtain a foliation of the manifold: nearby special Lagrangian submanifolds may intersect. (For an example, see \cite{Matessi}.) So instead, we will just look at the moduli space of special Lagrangian tori. Given a maximally unipotent degeneration of Calabi-Yau manifolds of dimension $n$, it is known that the image of $(T-I)^n:H_n(\X_t,\QQ) \rightarrow H_n(\X_t,\QQ)$ is a one-dimensional subspace $W_0$. Suppose, given a sequence $t_i$ with $t_i \rightarrow 0$ as $i\rightarrow \infty$, that for $t_i$ sufficiently close to zero, there is a special Lagrangian $T^n$ whose homology class generates $W_0$. This is where we expect to find fibres of a special Lagrangian fibration associated to a maximally unipotent degeneration. Let $B_{0,i}$ be the moduli space of deformations of this torus; every point of $B_{0,i}$ corresponds to a smooth special Lagrangian torus in $\X_{t_i}$. This manifold then comes equipped with the McLean metric and affine structures defined in \S 2. One can then compactify $B_{0,i}\subseteq B_i$ (probably by taking the closure of $B_{0,i}$ in the space of special Lagrangian currents; the details aren't important here). This gives a series of metric spaces $(B_i,d_i)$ with the metric $d_i$ induced by the McLean metric. If the McLean metric is normalized to keep the diameter of $B_i$ constant independent of $i$, then we can hope that $(B_i,d_i)$ converges to a compact metric space $(B_{\infty},d_{\infty})$.
Here then is the limiting form of SYZ: \begin{conjecture} If $(\X_{t_i},g_i)$ converges to $(X_{\infty},g_{\infty})$ and $(B_i,d_i)$ is non-empty for large $i$ and converges to $(B_{\infty},d_{\infty})$, then $B_{\infty}$ and $X_{\infty}$ are isometric up to scaling. Furthermore, there is a subspace $B_{\infty,0} \subseteq B_{\infty}$ with $\Gamma:=B_{\infty}\setminus B_{\infty,0}$ of Hausdorff codimension 2 in $B_{\infty}$ such that $B_{\infty,0}$ is a Monge-Amp\`ere manifold, with the Monge-Amp\`ere metric inducing $d_{\infty}$ on $B_{\infty,0}$. \end{conjecture} Essentially what this is saying is that as we approach the maximally unipotent degeneration, we expect to have a special Lagrangian fibration on larger and larger subsets of $\X_{t_i}$. Furthermore, in the limit, the codimension one discriminant locus suggested by Joyce converges to a codimension two discriminant locus, and (the not necessarily Monge-Amp\`ere, see \cite{Matessi}) Hessian metrics on $B_{0,i}$ converge to a Monge-Amp\`ere metric. The main point I want to get at here is that it is likely the SYZ conjecture is only ``approximately'' correct, and one needs to look at the limit to have a hope of proving anything. On the other hand, the above conjecture seems likely to be accessible by currently understood techniques. I remain hopeful that this conjecture will be proved, though much additional work will be necessary. How do we do mirror symmetry using this modified version of the SYZ conjecture? Essentially, we would follow these steps: \begin{enumerate} \item We begin with a maximally unipotent degeneration of Calabi-Yau manifolds $\X\rightarrow D$, along with a choice of polarization. This gives us a K\"ahler class $[\omega_t]\in H^2(\X_t,\RR)$ for each $t\in D\setminus 0$, represented by $\omega_t$ the K\"ahler form of a Ricci-flat metric $g_t$. 
\item Identify the Gromov-Hausdorff limit of a sequence $(\X_{t_i}, r_ig_{t_i})$ where $t_i\rightarrow 0$ and $r_i$ is a scale factor which keeps the diameter of $\X_{t_i}$ constant. The limit will be, if the above conjectures work, an affine manifold with singularities $B$ along with a Monge-Amp\`ere metric. \item Perform a Legendre transform to obtain a new affine manifold with singularities $\check B$, though with the same metric. \item Try to construct a compactification of $X_{\epsilon}(\check B_0)$ for small $\epsilon>0$ to obtain a complex manifold $X_{\epsilon}(\check B)$. This will be the mirror manifold. \end{enumerate} As we shall see, we do not expect that we will need the full strength of steps (2) and (3) to carry out mirror symmetry; some way of identifying the base $B$ will be sufficient. Nevertheless, (2) is interesting from the point of view of understanding the differential geometry of Ricci-flat K\"ahler manifolds. Step (4), on the other hand, is crucial, and we need to elaborate on this last step a bit more. The problem is that while we expect that it should be possible in general to construct symplectic compactifications of the symplectic manifold $\check X(B_0)$ (and hence get the mirror as a symplectic manifold, see \cite{CastMat} for the three-dimensional case), we don't expect to be able to compactify $X_{\epsilon}(\check B_0)$ as a complex manifold. Instead, the expectation is that a small deformation of $X_{\epsilon}(\check B_0)$ is necessary before it can be compactified. Furthermore, this small deformation is critically important in mirror symmetry: \emph{it is this small deformation which provides the $B$-model instanton corrections}.
Because this last item is so important, let's give it a name: \begin{question}[The reconstruction problem, Version I] \label{reconstruct1} Given a tropical affine manifold with singularities $B$, construct a complex manifold $X_{\epsilon}(B)$ which is a compactification of a small deformation of $X_{\epsilon}(B_0)$. \end{question} We will return to this question later in the paper. However, I do not wish to dwell further on the differential-geometric versions of the SYZ conjecture here. Instead I will move on to describing how the above discussion motivated the algebro-geometric program developed by myself and Siebert for understanding mirror symmetry, and then describe recent work and ideas coming out of this program. \section{Gromov-Hausdorff limits, algebraic degenerations, and mirror symmetry} We now have two notions of limit: the familiar algebro-geometric notion of a degenerating family $\X\rightarrow D$ over a disk on the one hand, and the Gromov-Hausdorff limit on the other. In 2000 Kontsevich and Soibelman had an important insight (see \cite{KS}) into the connection between these two. In this section I will give a rough idea of how and why this works. Very roughly speaking, the Gromov-Hausdorff limit $(\X_{t_i},g_{t_i})$ as $t_i\rightarrow 0$, or equivalently, the base of the putative SYZ fibration, should coincide, topologically, with the \emph{dual intersection complex} of the singular fibre $\X_0$. More precisely, in a relatively simple situation, suppose $f:\X\rightarrow D$ is relatively minimal (in the sense of Mori) and normal crossings, with $\X_0$ having irreducible components $X_1,\ldots,X_m$. The dual intersection complex of $\X_0$ is the simplicial complex with vertices $v_1,\ldots,v_m$, and which contains a simplex $\langle v_{i_0},\ldots, v_{i_p}\rangle$ if $X_{i_0}\cap\cdots\cap X_{i_p}\not=\emptyset$. 
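As a toy illustration of this definition, the following Python sketch (our own representation, not from the text) builds the dual intersection complex from a list of the nonempty intersections of components of $\X_0$. For a Kodaira $I_4$ fibre, a cycle of four rational curves, the result is four vertices and four edges, a triangulation of a circle:

```python
# Sketch: the dual intersection complex of a normal crossings fibre X_0,
# given the maximal nonempty intersections of its components X_1,...,X_m.
# A simplex <v_{i_0},...,v_{i_p}> is included whenever
# X_{i_0} ∩ ... ∩ X_{i_p} is nonempty; we close the input under subsets.
from itertools import combinations

def dual_intersection_complex(maximal_intersections):
    simplices = set()
    for S in maximal_intersections:
        for r in range(1, len(S) + 1):
            for T in combinations(sorted(S), r):
                simplices.add(frozenset(T))
    return simplices

# A Kodaira I_4 fibre: a cycle of four rational curves, X_i meeting
# X_{i+1 mod 4} in a point.
I4 = [(i, (i + 1) % 4) for i in range(4)]
cx = dual_intersection_complex(I4)
vertices = [s for s in cx if len(s) == 1]
edges = [s for s in cx if len(s) == 2]
print(len(vertices), len(edges))  # 4 4: a 4-cycle, topologically a circle
```

This matches the discussion below, where the base associated to an $I_n$ degeneration of elliptic curves is the circle $B=\RR/n\ZZ$.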
The idea that the dual intersection complex should play a role in describing the base of the SYZ fibration was perhaps first suggested by Leung and Vafa in \cite{LV}. Let us explain roughly why this should be, first by looking at a standard family of degenerating elliptic curves with periods $1$ and ${n\over 2\pi i} \log t$ for $n$ a positive integer. Such a family over the punctured disk is extended to a family over the disk by adding a Kodaira type $I_n$ (a cycle of $n$ rational curves) fibre over the origin. Taking a sequence $t_i\rightarrow 0$ with $t_i$ real and positive gives a sequence of elliptic curves of the form $X_{\epsilon_i}(B)$ where $B=\RR/n\ZZ$ and $\epsilon_i=-{2\pi\over\ln t_i}$. In addition, the metric on $X_{\epsilon_i}(B)$, properly scaled, comes from the constant Hessian metric on $B$. So we wish to explain how $B$ is related to the geometry near the singular fibre. To this end, let $X_1,\ldots,X_n$ be the irreducible components of $\X_0$; these are all $\PP^1$'s. Let $P_1,\ldots,P_n$ be the singular points of $\X_0$. We'll consider two sorts of open sets in $\X$. For the first type, choose a coordinate $z$ on $X_i$, with $P_i$ given by $z=0$ and $P_{i+1}$ given by $z=\infty$. Let $U_i\subseteq X_i$ be the open set $\{z\,|\,\delta\le |z| \le 1/\delta\}$ for some small fixed $\delta$. Then one can find a neighbourhood $\widetilde U_i$ of $U_i$ in $\X$ such that $\widetilde U_i$ is biholomorphic to $U_i\times D_{\rho}$ for $\rho>0$ sufficiently small, $D_{\rho}$ a disk of radius $\rho$ in $\CC$, and $f|_{\widetilde U_i}$ is the projection onto $D_{\rho}$. On the other hand, each $P_i$ has a neighbourhood $\widetilde V_i$ in $\X$ biholomorphic to a polydisk $\{(z_1,z_2)\in\CC^2\,|\,|z_1|\le \delta', |z_2|\le\delta'\}$ on which $f$ takes the form $z_1z_2$. 
If $\delta$ and $\delta'$ are chosen correctly, then for $t$ sufficiently close to zero, \[ \{\widetilde V_i\cap\X_t\,|\,1\le i\le n\}\cup \{\widetilde U_i\cap\X_t\,|\,1\le i\le n\} \] form an open cover of $\X_t$. Now each of the sets in this open cover can be written as $X_{\epsilon}(U)$ for some $U$ a one-dimensional (non-compact) affine manifold and $\epsilon=-2\pi/\ln|t|$. If $U$ is an open interval $(a,b)\subseteq \RR$, then $X_{\epsilon}(U)$ is biholomorphic to the annulus \[ \{z\in\CC\,|\, e^{-2\pi b/\epsilon}\le |z|\le e^{-2\pi a/\epsilon}\} \] as $q=e^{2\pi i(x+i y)/\epsilon}$ is a holomorphic coordinate on $X_{\epsilon}((a,b))$. Thus \[ \widetilde U_i\cap \X_t\cong X_{\epsilon}\left(\left({\epsilon\ln\delta\over 2\pi}, -{\epsilon\ln\delta\over 2\pi}\right)\right) \] with $\epsilon=-2\pi/\ln|t|$. As $t\rightarrow 0$, the interval $(\epsilon\ln\delta/2\pi, -\epsilon\ln\delta/2\pi)$ shrinks to a point. So $\widetilde U_i\cap \X_t$ is a smaller and smaller open subset of $\X_t$ as $t\rightarrow 0$ when we view things in this way. This argument suggests that every irreducible component should be associated to a point on $B$. Now look at $\widetilde V_i\cap\X_t$. This is \begin{eqnarray*} \{(z_1,z_2)\in\CC^2\,|\,|z_1|,|z_2|<\delta', z_1z_2=t\} &\cong&\{z\in\CC\,|\,|t|/\delta'\le |z|\le \delta'\}\\ &\cong& X_{\epsilon}\left(\left({-\epsilon\over 2\pi}\ln\delta', {\epsilon\over 2\pi} (\ln\delta'-\ln |t|)\right)\right) \end{eqnarray*} with $\epsilon=-2\pi/\ln|t|$. This interval approaches the unit interval $(0,1)$ as $t\rightarrow 0$. So the open set $\widetilde V_i\cap \X_t$ ends up being a large portion of $\X_t$. We end up with $\X_t$, for small $t$, being a union of open sets of the form $X_{\epsilon}((i+\epsilon',i+1-\epsilon'))$ (i.e., $\widetilde V_i\cap\X_t$) and $X_{\epsilon}((i-\epsilon'',i+\epsilon''))$ (i.e., $\widetilde U_i\cap \X_t$) for $\epsilon'$, $\epsilon''$ sufficiently small. These should glue, at least approximately, to give $X_{\epsilon}(B)$.
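The interval arithmetic in this computation is easy to check numerically. The following sketch (the function name is ours) computes the endpoints $\left({-\epsilon\over 2\pi}\ln\delta', {\epsilon\over 2\pi}(\ln\delta'-\ln|t|)\right)$ for $\widetilde V_i\cap\X_t$ and confirms they approach $(0,1)$ as $t\rightarrow 0$:

```python
# With q = e^{2 pi i (x + i y)/epsilon} one has |q| = e^{-2 pi y / epsilon},
# so an annulus in |q| corresponds to an interval in y.  Here we compute
# the interval for V_i ∩ X_t as above, with delta' the polydisk radius.
import math

def interval_for_V(t, delta_prime):
    eps = -2 * math.pi / math.log(abs(t))   # epsilon = -2 pi / ln|t|
    a = -(eps / (2 * math.pi)) * math.log(delta_prime)
    b = (eps / (2 * math.pi)) * (math.log(delta_prime) - math.log(abs(t)))
    return a, b

for t in (1e-2, 1e-4, 1e-8):
    print(interval_for_V(t, delta_prime=0.5))
# The intervals grow towards the unit interval (0, 1) as t -> 0.
```

For instance, with $\delta'=0.5$ and $t=10^{-8}$ the interval is roughly $(0.038, 0.962)$, already close to $(0,1)$.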
So we see that irreducible components of $\X_0$ seem to coincide with points on $B$, but intersections of components coincide with lines. In this way we see the dual intersection complex emerge. \medskip Let us make one more observation before beginning with rigorous results in the next section. Suppose more generally we had a \emph{Gorenstein toroidal crossings} degeneration of Calabi-Yau manifolds $f:\X\rightarrow D$ (see \cite{ss}). This means that every point $x\in\X$ has a neighbourhood isomorphic to an open set in an affine Gorenstein (i.e., the canonical class is a Cartier divisor) toric variety, with $f$ given locally by a monomial which vanishes exactly to order $1$ on each codimension one toric stratum. This is a generalization of the notion of normal crossings. Very roughly, the above argument suggests that each irreducible component of the central fibre will correspond to a point of the Gromov-Hausdorff limit. The following exercise shows what kind of contribution to $B$ to expect from a point $x\in\X_0$ which is a zero-dimensional stratum in $\X_0$. \begin{xca} \label{gorensteinlimit} Suppose that there is a point $x\in\X_0$ which has a neighbourhood isomorphic to a neighbourhood of a dimension zero torus orbit of an affine Gorenstein toric variety $Y_x$. Such an affine variety is specified as follows. Set $M=\ZZ^n$, $M_{\RR}=M\otimes_{\ZZ}\RR$, $N=\Hom_{\ZZ}(M,\ZZ)$, $N_{\RR}=N\otimes_{\ZZ}\RR$ with $n=\dim\X_t$. Then there is a lattice polytope $\sigma\subseteq M_{\RR}$, $C(\sigma):=\{(rm,r)\,|\, m\in\sigma,r\ge 0\}\subseteq M_{\RR}\oplus\RR$, $P:=\dual{C(\sigma)}\cap (N\oplus\ZZ)$ the monoid determined by the dual of the cone $C(\sigma)$, $Y_x =\Spec \CC[P]$, and finally $f$ coincides with the monomial $z^{(0,1)}$. Now let us take a small neighbourhood of $x$ of the form \[ \widetilde U_{\delta}=\{y\in \Spec \CC[P]\,|\,\hbox{$|z^p|<\delta$ for all $p\in P$}\}. 
\] This is an open set as the condition $|z^p|<\delta$ can be tested on a finite generating set for $P$, provided that $\delta<1$. Then show that for a given $t$, $|t|<1$ and $\epsilon=-2\pi/\log|t|$, if \[ \sigma_t:=\{m\in M_{\RR}\,|\,\hbox{$\langle p,(m,1)\rangle>{\log\delta\over \log |t|}$ for all $p\in P$}\}, \] then \[ f^{-1}(t)\cap \widetilde U_{\delta}\cong X_{\epsilon}(\sigma_t). \] Note that \[ \sigma:=\{m\in M_{\RR}\,|\,\hbox{$\langle p,(m,1)\rangle\ge 0$ for all $p\in P$}\}, \] so $\sigma_t$ is an open subset of $\sigma$, and as $t\rightarrow 0$, $\sigma_t$ converges to the interior of $\sigma$. \qed \end{xca} This observation hopefully motivates the basic construction of the next section. \section{Toric degenerations, the intersection complex and its dual} I will now introduce the basic objects of the program developed by myself and Siebert to understand mirror symmetry in an algebro-geometric context. This program was announced in \cite{Announce}, and has been developed further in a series of papers \cite{PartI}, \cite{PartII}, \cite{Annals}, \cite{GBB}, \cite{JAMS}, \cite{GPS}. The motivation for this program came from two different directions. The first, which was largely my motivation, was the discussion of the limiting form of the SYZ conjecture of the previous sections. The second arose in work of Schr\"oer and Siebert \cite{ssKod}, \cite{ss}, which led Siebert to the idea that log structures on degenerations of Calabi-Yau manifolds would allow one to view mirror symmetry as an operation performed on degenerate Calabi-Yau varieties. Siebert observed that at a combinatorial level, mirror symmetry exchanged data pertaining to the log structure and a polarization. This will be explained more clearly in the following section, when I introduce log structures. 
Together, Siebert and I realised that the combinatorial data he was considering could be encoded naturally in the dual intersection complex of the degeneration, which we saw in the previous section appears to be the base of the SYZ fibration. The combinatorial interchange of data necessary for mirror symmetry then corresponded to a discrete Legendre transform on the dual intersection complex. It became apparent that this approach provided an algebro-geometrization of the SYZ conjecture. To set this up properly, one has to consider what kind of degenerations to allow. They should be maximally unipotent, of course, but there can be many different birational models of degenerations. Below we define the notion of \emph{toric degeneration}. The class of toric degenerations may seem rather restrictive, but it appears to be the largest class of degenerations closed under mirror symmetry: one can construct the mirror of a toric degeneration as a toric degeneration. It does not appear that there is any other natural family of degenerations with this property. Much of the material in this section comes from \cite{PartI}, \S 4. Roughly put, a toric degeneration of Calabi-Yau varieties is a degeneration whose central fibre is a union of toric varieties glued along toric strata, and the total space of the degeneration is, off of some well-behaved set $Z$ contained in the central fibre, locally toric with the family locally given by a monomial. The precise technical definition is as follows. \begin{definition} \label{toricdegen} Let $f:\X\rightarrow D$ be a proper flat family of relative dimension $n$, where $D$ is a disk and $\X$ is a complex analytic space (not necessarily non-singular). We say $f$ is a {\it toric degeneration} of Calabi-Yau varieties if \begin{enumerate} \item $\X_t$ is an irreducible normal Calabi-Yau variety with only canonical singularities for $t\not=0$. (The reader may like to assume $\X_t$ is smooth for $t\not=0$). 
\item If $\nu:\widetilde\X_0\to\X_0$ is the normalization, then $\widetilde\X_0$ is a disjoint union of toric varieties, the conductor locus $C\subseteq\widetilde\X_0$ is reduced, and the map $C\to\nu(C)$ is unramified and generically two-to-one. (The conductor locus is a naturally defined scheme structure on the set where $\nu$ is not an isomorphism.) The square \[\begin{CD} C@>>> \widetilde\X_0\\ @VVV @VV{\nu}V\\ \nu(C)@>>> \X_0 \end{CD}\] is cartesian and cocartesian. \item $\X_0$ is a reduced Gorenstein space and the conductor locus $C$ restricted to each irreducible component of $\widetilde\X_0$ is the union of all toric Weil divisors of that component. \item There exists a closed subset $Z\subseteq\X$ of relative codimension $\ge 2$ such that $Z$ satisfies the following properties: $Z$ does not contain the image under $\nu$ of any toric stratum of $\widetilde\X_0$, and for any point $x\in \X\setminus Z$, there is a neighbourhood $\widetilde U_x$ (in the analytic topology) of $x$, an $n+1$-dimensional affine toric variety $Y_x$, a regular function $f_x$ on $Y_x$ given by a monomial, and a commutative diagram $$\begin{matrix} \widetilde U_x&\mapright{\psi_x}&Y_x\cr \mapdown{f|_{\widetilde U_{x}}}&&\mapdown{f_x}\cr D'&\mapright{\varphi_x}&\CC\cr \end{matrix}$$ where $\psi_x$ and $\varphi_x$ are open embeddings and $D'\subseteq D$. Furthermore, $f_x$ vanishes precisely once on each toric divisor of $Y_x$. \end{enumerate} \end{definition} \begin{example} \label{quarticexample} Take $\X$ to be defined by the equation $tf_4+z_0z_1z_2z_3=0$ in $\PP^3\times D$, where $D$ is a disk with coordinate $t$ and $f_4$ is a general homogeneous quartic polynomial on $\PP^3$. It is easy to see that $\X$ is singular at the locus \[ \{t=f_4=0\}\cap Sing(\X_0). \] As $\X_0$ is the coordinate tetrahedron, the singular locus of $\X_0$ consists of the six coordinate lines of $\PP^3$, and $\X$ has four singular points along each such line, for a total of 24 singular points. 
Take $Z=Sing(\X)$. Then away from $Z$, the projection $\X\rightarrow D$ is normal crossings, which yields condition (4) of the definition of toric degeneration. It is easy to see all other conditions are satisfied. \end{example} Given a toric degeneration $f:\X\rightarrow D$, we can build the \emph{dual intersection complex} $(B,\P)$ of $f$, as follows. Here $B$ is an integral affine manifold with singularities, and $\P$ is a \emph{polyhedral decomposition} of $B$, i.e., a decomposition of $B$ into lattice polytopes. In fact, we will construct $B$ as a union of lattice polytopes. Specifically, let the normalisation of $\X_0$, $\widetilde \X_0$, be written as a disjoint union $\coprod X_i$ of toric varieties $X_i$, $\nu:\widetilde\X_0\rightarrow\X_0$ the normalisation. The {\it strata} of $\X_0$ are the elements of the set \[ Strata(\X_0)=\{\nu(S)\,|\, \hbox{$S$ is a toric stratum of $X_i$ for some $i$}\}. \] Here by toric stratum we mean the closure of a $(\CC^*)^n$ orbit. Let $\{x\}\in Strata(\X_0)$ be a zero-dimensional stratum. Applying Definition \ref{toricdegen},(4), to a neighbourhood of $x$, there is a toric variety $Y_x$ such that in a neighbourhood of $x$, $f:\X\rightarrow D$ is locally isomorphic to $f_x:Y_x\rightarrow\CC$, where $f_x$ is given by a monomial. Now the condition that $f_x$ vanishes precisely once along each toric divisor of $Y_x$ is the statement that $Y_x$ is Gorenstein, and as such, it arises as in Exercise \ref{gorensteinlimit}. Indeed, let $M,N$ be as in Exercise \ref{gorensteinlimit}, with $\rank M=\dim\X_0$. Then there is a lattice polytope $\sigma_x\subseteq M_{\RR}$ such that $C(\sigma_x) =\{(rm,r)\,|\,m\in\sigma_x, r\ge0\}$ is the cone defining the toric variety $Y_x$. As we saw in Exercise \ref{gorensteinlimit}, a small neighbourhood of $x$ in $\X$ should contribute a copy of $\sigma_x$ to $B$, which provides the motivation for our construction.
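In Example \ref{quarticexample} the polytopes $\sigma_x$ can be computed directly; the following sketch fixes coordinates purely for illustration.

```latex
At the zero-dimensional stratum $x=[1:0:0:0]$ of Example
\ref{quarticexample}, a general $f_4$ does not vanish, so in affine
coordinates $z_i/z_0$ we may solve $t=-z_1z_2z_3/f_4$; thus locally
$Y_x=\AA^3=\Spec\CC[\NN^3]$ with $f_x$ the monomial $z_1z_2z_3$.
Taking $\sigma_x=\Conv\{(0,0),(1,0),(0,1)\}\subseteq\RR^2$, the cone
$C(\sigma_x)$ is generated by $(0,0,1)$, $(1,0,1)$, $(0,1,1)$, and its dual
is the smooth cone generated by $(1,0,0)$, $(0,1,0)$, $(-1,-1,1)$, so that
$P\cong\NN^3$ and $(0,0,1)$ is the sum of the three generators, i.e.,
$z^{(0,0,1)}=z_1z_2z_3$. Hence each of the four zero-dimensional strata
contributes a standard $2$-simplex to $B$, and gluing along the six edges
corresponding to the coordinate lines of $\PP^3$ yields the boundary of a
tetrahedron.
```

This agrees with the description of the dual intersection complex of the quartic degeneration given below.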
We can now describe how to construct $B$ by gluing together the polytopes \[ \{\sigma_x\,|\, \{x\}\in Strata(\X_0)\}. \] We will do this in the case that every irreducible component of $\X_0$ is in fact itself normal so that $\nu:X_i\rightarrow \nu(X_i)$ is an isomorphism. The reader may be able to imagine the more general construction. With this normality assumption, there is a one-to-one inclusion reversing correspondence between faces of $\sigma_x$ and elements of $Strata(\X_0)$ containing $x$. We can then identify faces of $\sigma_x$ and $\sigma_{x'}$ if they correspond to the same stratum of $\X_0$. Some argument is necessary to show that this identification can be done via an integral affine transformation, but again this is not difficult. Making these identifications, one obtains $B$. One can then prove \begin{lemma} If $\X_0$ is complex $n$-dimensional, then $B$ is a real $n$-dimensional manifold. \end{lemma} See \cite{PartI}, Proposition 4.10 for a proof. Now so far $B$ is just a topological manifold, constructed by gluing together lattice polytopes. Let \[ \P=\{\sigma\subseteq B| \hbox{$\sigma$ is a face of $\sigma_x$ for some zero-dimensional stratum $x$}\}. \] There is a one-to-one inclusion reversing correspondence between strata of $\X_0$ and elements of $\P$. It only remains to give $B$ an affine structure with singularities. In fact, I shall describe somewhat more structure on $B$ derived from $\X_0$ which in particular gives an affine structure with singularities on $B$. First, for $\tau\in\P$, let \[ U_{\tau}:=\bigcup_{\{\sigma\in\P\,|\,\tau\subseteq\sigma\}}\Int(\sigma). \] A \emph{fan structure along $\tau\in\P$} is a continuous map $S_{\tau}: U_{\tau}\rightarrow\RR^k$ such that \begin{enumerate} \item $S_{\tau}^{-1}(0)=\Int(\tau)$. \item If $e:\tau\rightarrow\sigma$ is an inclusion then $S_{\tau}|_{\Int\sigma}$ is an integral affine submersion onto its image.
\item The collection of cones \[ \{K_e:=\RR_{\ge 0} S_{\tau}(\sigma\cap U_{\tau})\,|\, e:\tau\rightarrow\sigma\} \] defines a finite fan $\Sigma_{\tau}$ in $\RR^k$. \end{enumerate} Two fan structures $S_{\tau},S'_{\tau}:U_{\tau}\rightarrow\RR^k$ are considered \emph{equivalent} if they differ only by an integral linear transformation of $\RR^k$. If $S_{\tau}:U_{\tau}\rightarrow\RR^k$ is a fan structure along $\tau\in\P$ and $\sigma\supseteq \tau$ then $U_{\sigma}\subseteq U_{\tau}$. The fan structure along $\sigma$ \emph{induced by $S_{\tau}$} is the composition \[ U_{\sigma}\mapright{} U_{\tau} \mapright{S_{\tau}}\RR^k\mapright{} \RR^k/L_{\sigma}\cong \RR^{\ell} \] where $L_{\sigma}\subseteq\RR^k$ is the linear span of $S_{\tau}(\sigma)$. \begin{definition} An \emph{integral tropical manifold of dimension $n$} is a pair $(B,\P)$ as above along with a choice of fan structure $S_v$ at each vertex $v$ of $\P$, with the property that if $v,w\in\tau$, then the fan structures along $\tau$ induced by $S_v$ and $S_w$ are equivalent. \end{definition} Such data gives $B$ the structure of an integral affine manifold with singularities. Let $\Gamma\subseteq B$ be the union of those cells of $\Bar(\P)$ (the first barycentric subdivision of $\P$) which are not contained in maximal cells of $\P$ nor contain vertices of $\P$. Then $B_0:=B\setminus \Gamma$ can be covered by \[ \{\Int(\sigma)\,|\,\sigma\in\P_{\max}\}\cup \{W_v\,|\, \hbox{$v\in\P$ a vertex}\} \] for certain open neighbourhoods $W_v$ of $v\in B$ contained in $U_v$. We define an affine structure on $B_0$ by giving $\Int(\sigma)$ the natural affine structure given by $\sigma$ being a lattice polytope, while $S_v:U_v\rightarrow \RR^n$ restricts to an affine chart on $W_v$. Finally, the point is that the structure of $\X_0$ gives rise to an integral tropical manifold structure on $(B,\P)$. 
Indeed, each vertex $v\in \P$ corresponds to an irreducible component $X_v$ of $\X_0$ and this irreducible component is a toric variety with fan $\Sigma_v$ in $\RR^n$. Furthermore, there is a one-to-one correspondence between $p$-dimensional cones of $\Sigma_v$ and $p$-dimensional cells of $\P$ containing $v$ as a vertex, as they both correspond to strata of $\X_0$ contained in $X_v$. There is then a continuous map \[ \psi_v:U_v\rightarrow \RR^n \] which takes $U_v\cap\sigma$, for any $\sigma\in\P$ containing $v$ as a vertex, into the corresponding cone of $\Sigma_v$ \emph{integral affine linearly}. Such a map is uniquely determined by the combinatorial correspondence and the requirement that it be integral affine linear on each cell. These maps define a fan structure at each vertex. Furthermore, these fan structures are compatible in the sense that if $v,w\in\tau$, the two induced fan structures on $U_{\tau}$ are equivalent. This follows because there is a well-defined fan $\Sigma_{\tau}$ defining the stratum corresponding to $\tau$. \begin{example} Let $f:\X\rightarrow D$ be a degeneration of elliptic curves to an $I_n$ fibre. Then $B$ is the circle $\RR/n\ZZ$, decomposed by $\P$ into $n$ line segments of length one. \end{example} \begin{example} Continuing with Example \ref{quarticexample}, the dual intersection complex is the boundary of a tetrahedron, with each face affine isomorphic to a standard two-simplex, and the affine structure near each vertex makes the polyhedral decomposition look locally like the fan for $\PP^2$. There is one singularity at the barycenter of each edge, and one can calculate that the monodromy of $\Lambda$ about each of these singularities is $\begin{pmatrix} 1&4\\ 0&1\end{pmatrix}$ in a suitable basis. \end{example} \begin{example} Consider the polytope $\Delta$ of Example \ref{quintic}. The dual polytope $\nabla$ is the convex hull of the points $(-1,-1,-1,-1),(1,0,0,0),\ldots, (0,0,0,1)$. 
The corresponding projective toric variety $\PP_{\nabla}$ has a crepant resolution $X_{\Sigma}\rightarrow\PP_{\nabla}$ where $\Sigma$ is the fan consisting of cones over all elements of the decomposition $\P$ of $\partial\Delta$ as described in Example \ref{quintic}. Consider in $\PP_{\nabla}\times\AA^1$ the degenerating family $\shX\rightarrow\AA^1$ of Calabi-Yau manifolds given by \[ s_0+t\sum_{m\in \nabla\cap \ZZ^4} c_ms_m=0 \] where $s_m$ is the section of $\shO_{\PP_{\nabla}}(1)$ corresponding to $m\in\nabla\cap\ZZ^4$. Let $\widetilde\shX$ be the proper transform of $\shX$ in $X_{\Sigma}\times\AA^1$. Then the family $\widetilde\shX\rightarrow \AA^1$ is a toric degeneration with general fibre the mirror quintic, and its dual intersection complex is the affine manifold $B$ constructed in Example \ref{quintic}. \end{example} Is the dual intersection complex the right affine manifold with singularities? The following theorem provides evidence for this, and gives the connection between this construction and the SYZ conjecture. \begin{theorem} \label{complextheorem} Let $\X\rightarrow D$ be a toric degeneration, with dual intersection complex $(B,\P)$. Then there is an open set $U\subseteq B$ such that $B\setminus U$ retracts onto the discriminant locus $\Gamma$ of $B$, and an open subset $\U_t$ of $\shX_t$ which is biholomorphic to a small deformation of a twist of $X_{\epsilon}(U)$, where $\epsilon=O(-1/\ln|t|)$. \end{theorem} We will not be precise here about what we mean by small deformation; by twist, we mean a twist of the complex structure of $X_{\epsilon}(U)$ by a $B$-field. See \cite{Announce} for a much more precise statement; the above statement is meant to give a feel for what is true. The proof, along with much more precise statements, will eventually appear in \cite{tori}. 
\medskip If $\X\rightarrow D$ is a \emph{polarized} toric degeneration, i.e., if there is a relatively ample line bundle $\shL$ on $\X$, then we can construct another integral tropical manifold $(\check B,\check\P)$, which we call the \emph{intersection complex}, as follows. For each irreducible component $X_i$ of $\X_0$, $\shL|_{X_i}$ is an ample line bundle on a toric variety. Let $\check\sigma_i\subseteq N_{\RR}$ denote the Newton polytope of this line bundle. There is then a one-to-one inclusion preserving correspondence between strata of $\X_0$ contained in $X_i$ and faces of $\check\sigma_i$. We can then glue together the $\check\sigma_i$'s in the obvious way: if $Y$ is a codimension one stratum of $\X_0$, it is contained in two irreducible components $X_i$ and $X_j$, and defines faces of $\check\sigma_i$ and $\check\sigma_j$. These faces are affine isomorphic because they are both the Newton polytope of $\shL|_Y$, and we can then identify them in the canonical way. Thus we obtain a topological space $\check B$ with a polyhedral decomposition $\check\P$. To define the fan structure at a vertex $v\in\check\P$, note that such a vertex corresponds to a zero-dimensional stratum of $\X_0$, giving rise to a maximal cell $\sigma_v$ of the dual intersection complex. Take the fan structure at $v$ to be defined using the normal fan $\check\Sigma_v$ to $\sigma_v$. Then there is a one-to-one inclusion preserving correspondence between cones in $\check\Sigma_v$ and strata of $\X_0$ containing the stratum corresponding to $v$. This correspondence allows us to define a fan structure \[ S_v:U_v\rightarrow \RR^n \] which takes $U_v\cap\check\sigma$, for any $\check\sigma\in\check\P$ containing $v$ as a vertex, into the corresponding cone of $\check\Sigma_v$. One checks easily that this set of fan structures satisfies the definition of integral tropical manifold, and hence defines the intersection complex $(\check B,\check\P)$.
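For instance, one may return to Example \ref{quarticexample} and polarize it by restricting $\shO_{\PP^3}(1)$ from the ambient $\PP^3\times D$ (a choice of polarization made here only for illustration).

```latex
Each irreducible component $X_i\cong\PP^2$ of $\X_0$ carries
$\shL|_{X_i}\cong\shO_{\PP^2}(1)$, whose Newton polytope $\check\sigma_i$ is
a standard $2$-simplex. Gluing the four simplices along the edges
corresponding to the six coordinate lines of $\PP^3$ yields
$\check B=\partial\Delta^3$, the boundary of a standard $3$-simplex, with
$\check\P$ its decomposition into the four facets. The fan structure at a
vertex of $\check\P$ (a zero-dimensional stratum of $\X_0$) is the normal
fan of a standard $2$-simplex, i.e., again the fan of $\PP^2$.
```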
Analogously to Theorem \ref{complextheorem}, we expect \begin{conjecture} Let $\X\rightarrow D$ be a polarized toric degeneration, with intersection complex $(\check B,\check\P)$. Let $\omega_t$ be a K\"ahler form on $\X_t$ representing the first Chern class of the polarization. Then there is an open set $\check U\subseteq \check B$ such that $\check B\setminus \check U$ retracts onto the discriminant locus $\Gamma$ of $\check B$, such that $\X_t$ is a symplectic compactification of $\check X(\check U)$ for any $t$. \end{conjecture} I don't expect this to be particularly difficult: it should be amenable to the techniques of W.-D.\ Ruan \cite{RuanJSG}, but such an approach has not been carried out in general. \medskip The relationship between the intersection complex and the dual intersection complex can be made more precise by introducing multi-valued piecewise linear functions, in analogy with the multi-valued convex functions of Definition \ref{multivaluedconvex}. \begin{definition} Let $(B,\P)$ be an integral tropical manifold. Then a \emph{multi-valued piecewise linear function} $\varphi$ on $B$ is a collection of continuous functions on an open cover $\{(U_i,\varphi_i)\}$ such that $\varphi_i$ is affine linear on each cell of $\P$ intersecting $U_i$, and on $U_i\cap U_j$, $\varphi_i-\varphi_j$ is affine linear. Furthermore, for any $\tau\in\P$, let $S_{\tau}:U_{\tau} \rightarrow \RR^k$ be the induced fan structure. Then there is a piecewise linear function $\varphi_{\tau}$ on the fan $\Sigma_{\tau}$ such that on $U_i\cap U_{\tau}$, $\varphi_i-\varphi_{\tau}\circ S_{\tau}$ is affine linear. Here we will always assume that each linear part of $\varphi_i$ has differential in $\check\Lambda$, i.e., $\varphi_i$ has integral slopes. 
\end{definition} The rather technical condition on the local behaviour of each $\varphi_i$ on $U_{\tau}$ comes from the idea that such a multi-valued piecewise linear function is really just a collection of piecewise linear functions on the fans $\Sigma_{\tau}$ given by the fan structure of $(B,\P)$. These functions need to satisfy some compatibility conditions, and this compatibility is motivated by the following discussion. Suppose we are given a polarized toric degeneration $\X\rightarrow D$. We in fact obtain a multi-valued piecewise linear function $\varphi$ on the dual intersection complex $(B,\P)$ as follows. Restricting to any toric stratum $X_{\tau}$, $\shL|_{X_{\tau}}$ is determined completely by an integral piecewise linear function $\varphi_{\tau}$ on $\Sigma_{\tau}$, well-defined up to a choice of linear function. Pulling back this piecewise linear function via $S_{\tau}$ to $U_{\tau}$, we obtain a collection of piecewise linear functions $\{(U_{\tau},\varphi_{\tau}\circ S_{\tau})\,|\,\tau\in\P\}$. The fact that $(\shL|_{X_{\tau}})|_{X_{\sigma}}=\shL|_{X_{\sigma}}$ for $\tau\subseteq\sigma$ implies that on overlaps $\varphi_{\sigma} \circ S_{\sigma}$ and $\varphi_{\tau}\circ S_{\tau}$ differ by at most an affine linear function. So $\{(U_{\tau}, \varphi_{\tau}\circ S_{\tau})\}$ defines a multi-valued piecewise linear function. The last condition in the definition of multi-valued piecewise linear function then reflects the need for the function to be locally a pull-back of a function via $S_{\sigma}$ in a neighbourhood of $\sigma$. If $\shL$ is ample, then the piecewise linear function determined by $\shL|_{X_{\sigma}}$ is strictly convex. So we say a multi-valued piecewise linear function is \emph{strictly convex} if $\varphi_{\tau}$ is strictly convex for each $\tau\in\P$. 
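As a simple hedged illustration, consider the $I_n$ example above; the particular local representatives written here are one possible choice.

```latex
Let $(B,\P)$ be the dual intersection complex of a degeneration of elliptic
curves to an $I_n$ fibre, so $B=\RR/n\ZZ$ with vertices at the integers. A
multi-valued piecewise linear function $\varphi$ is determined, up to affine
linear functions, by its local representatives at the vertices: at the
vertex $k$, the fan $\Sigma_k$ is the fan of $\PP^1$, and one may take
$\varphi_k(y)=\ell_k\max(0,y)$ for an integer $\ell_k$, the change of slope
of $\varphi$ at $k$. Then $\varphi$ is strictly convex if and only if every
$\ell_k>0$, and for a polarized degeneration $\ell_k$ is the degree of
$\shL$ restricted to the component $\PP^1$ corresponding to the vertex $k$.
```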
As a consequence, if $\shX\rightarrow D$ is a polarized toric degeneration, we will write $(B,\P,\varphi)$ for the data of the dual intersection complex and the induced multi-valued function $\varphi$. We call this triple the dual intersection complex of the polarized degeneration. \medskip Now suppose we are given abstractly a triple $(B,\P,\varphi)$ with $(B,\P)$ an integral tropical manifold and $\varphi$ a strictly convex multi-valued piecewise linear function on $B$. Then we construct the \emph{discrete Legendre transform} $(\check B,\check\P,\check\varphi)$ of $(B,\P,\varphi)$ as follows. $\check B$ will be constructed by gluing together Newton polytopes. If we view, for $v$ a vertex of $\P$, the fan $\Sigma_v$ as living in $M_{\RR}$, then the Newton polytope of $\varphi_v$ is \[ \check v:=\{x\in N_{\RR}\,|\,\langle x,y\rangle\ge-\varphi_v(y) \quad\forall y\in M_{\RR}\}. \] There is a one-to-one inclusion reversing correspondence between faces of $\check v$ and cells of $\P$ containing $v$. Furthermore, if $\sigma$ is the smallest cell of $\P$ containing two vertices $v$ and $v'$, then the corresponding faces of $\check v$ and $\check v'$ are integral affine isomorphic, as they are both isomorphic to the Newton polytope of $\varphi_{\sigma}$. Thus we can glue $\check v$ and $\check v'$ along this common face. After making all these identifications, we obtain a cell complex $(\check B,\check\P)$, which is really just the dual cell complex of $(B,\P)$. This is given an integral tropical structure by taking the fan structure at a vertex $\check\sigma$, for $\sigma\in\P_{\max}$, to be given by the normal fan to $\sigma$. Finally, the function $\varphi$ has a discrete Legendre transform $\check\varphi$ on $(\check B,\check\P)$. 
We have no choice but to define $\check\varphi$ in a neighbourhood of a vertex $\check\sigma\in \check\P$ dual to a maximal cell $\sigma\in\P$ to be a piecewise linear function whose Newton polytope is $\sigma$, i.e., \[ \check\varphi_{\check\sigma}(y) =-\inf\{\langle y,x\rangle\,|\,x\in\sigma\subseteq M_{\RR}\}. \] This gives $(\check B,\check\P,\check\varphi)$, the discrete Legendre transform of $(B,\P,\varphi)$. If $B$ is $\RR^n$, then this coincides with the classical notion of discrete Legendre transform. The discrete Legendre transform has several relevant properties: \begin{itemize} \item The discrete Legendre transform of $(\check B,\check\P,\check\varphi)$ is $(B,\P,\varphi)$. \item If we view the underlying topological spaces $B$ and $\check B$ as identified by being the underlying space of dual cell complexes, then $\Lambda_{B_0}\cong \check\Lambda_{\check B_0}$ and $\check\Lambda_{B_0}\cong\Lambda_{\check B_0}$, where the subscript denotes which affine structure is being used to define $\Lambda$ or $\check\Lambda$. \end{itemize} This hopefully makes it clear that the discrete Legendre transform is a suitable replacement for the duality provided by the Legendre transform of \S 2. \medskip Note in particular that if $\shX\rightarrow D$ is a polarized toric degeneration, with dual intersection complex $(B,\P,\varphi)$, then the discrete Legendre transform $(\check B,\check\P,\check\varphi)$ satisfies the condition that $(\check B,\check\P)$ is the intersection complex of the polarized degeneration. The function $\check\varphi$ is some extra information on $\check B$, which from the definition of discrete Legendre transform encodes the cells of $\P$. These cells of the dual intersection complex were defined using the local toric structure of $\shX\rightarrow D$. So $\check\varphi$ can be seen as carrying information about this local toric structure. 
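A small computation, with the data chosen purely for illustration, shows how the Newton polytope $\check v$ and the function $\check\varphi$ behave in the simplest two-dimensional case.

```latex
Let $v$ be a vertex with $\Sigma_v$ the fan of $\PP^2$, with rays generated
by $(1,0)$, $(0,1)$, $(-1,-1)$, and let
$\varphi_v(y)=\max(0,-y_1,-y_2)$, the strictly convex piecewise linear
function taking the values $0,0,1$ on the three ray generators. The
inequalities $\langle x,y\rangle\ge -\varphi_v(y)$ reduce, on the three
maximal cones, to $x_1\ge 0$, $x_2\ge 0$ and $x_1+x_2\le 1$, so $\check v$
is the standard $2$-simplex, the Newton polytope of $\shO_{\PP^2}(1)$.
Conversely, for $\sigma$ the standard $2$-simplex,
\[
\check\varphi_{\check\sigma}(y)
=-\inf\{\langle y,x\rangle\,|\,x\in\sigma\}
=-\min(0,y_1,y_2)=\max(0,-y_1,-y_2),
\]
recovering $\varphi_v$, in accordance with the involutivity of the discrete
Legendre transform.
```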
We will say $(\check B,\check\P,\check\varphi)$ is the intersection complex of the polarized toric degeneration $\shX\rightarrow D$. So we see that for $(B,\P,\varphi)$, $\P$ carries information about the log structure and $\varphi$ carries information about the polarization, but for $(\check B,\check\P,\check\varphi)$, $\check\P$ carries information about the polarization and $\check\varphi$ carries information about the log structure. Mirror symmetry interchanges these two pieces of information! We can now state an \emph{algebro-geometric SYZ procedure}. In analogy with the procedure suggested in \S 5, we could follow these steps: \begin{enumerate} \item We begin with a toric degeneration of Calabi-Yau manifolds $\X\rightarrow D$ with an ample polarization. \item Construct the dual intersection complex $(B,\P,\varphi)$ from this data, as explained above. \item Perform the discrete Legendre transform to obtain $(\check B, \check\P,\check\varphi)$. \item Try to construct a polarized degeneration of Calabi-Yau manifolds $\check\X\rightarrow D$ whose dual intersection complex is $(\check B,\check\P,\check\varphi)$, or whose intersection complex is $(B,\P,\varphi)$. \end{enumerate} \begin{example} The discrete Legendre transform enables us to reproduce Batyrev duality \cite{Bat}. Let $\Delta\subseteq M_{\RR}$ be a reflexive polytope, $\nabla\subseteq N_{\RR}$ the polar dual, and assume $0\in\Delta$ is the unique interior point. We then obtain two toric degenerations given by the equations \[ s_0+t\sum_{m\in M\cap\Delta} c_ms_m=0,\quad\quad s_0+t\sum_{n\in N\cap \nabla} c_ns_n=0 \] in $\PP_{\Delta}\times\AA^1$ and $\PP_{\nabla}\times\AA^1$ respectively, with $s_m$ ($s_n$) the section of $\shO_{\PP_{\Delta}}(1)$ corresponding to $m$ (the section of $\shO_{\PP_{\nabla}}(1)$ corresponding to $n$). It is easy to check that the dual intersection complexes of these two degenerations are given as follows. 
For the first degeneration, $B=\partial\nabla$ with polyhedral decomposition given by the proper faces of $\nabla$. The fan structure at each vertex $v$ is given by projection $U_v\hookrightarrow N_{\RR}\rightarrow N_{\RR}/\RR v$. For the second degeneration, one uses $\Delta$ instead of $\nabla$. One can then check that if one polarizes the two degenerations using $\shO_{\PP_{\Delta}}(1)$ and $\shO_{\PP_{\nabla}}(1)$ respectively, then the corresponding triples $(B,\P,\varphi)$ are Legendre dual. Thus Batyrev duality is a special case of this general approach to a mirror construction. For a much more general construction which works for the Batyrev-Borisov construction \cite{BB} of mirrors of complete intersection Calabi-Yaus in toric varieties, see \cite{GBB}. \end{example} The only step missing in this mirror symmetry algorithm is the last: \begin{question}[The reconstruction problem, Version II] \label{reconstruct2} Given $(B,\P,\varphi)$, is it possible to construct a polarized toric degeneration $\X\rightarrow D$ whose intersection complex is $(B,\P,\varphi)$? \end{question} One could hope to solve this problem via naive deformation theory, by constructing the central fibre $\X_0$ from the data $(B,\P,\varphi)$, and then deforming this to find a smoothing. However, as initially observed in the normal crossings case by Kawamata and Namikawa in \cite{KN}, one needs to put some additional structure on $\X_0$ before it has good deformation theory. This structure is a \emph{log structure}, and introducing log structures allows us to study many aspects of mirror symmetry directly on the degenerate fibre itself. So let us turn to a review of the theory of logarithmic structures. \section{Log structures} We review the notion of log structures of Fontaine-Illusie and Kato (\cite{Illu}, \cite{K.Kato}). These play a key role in trying to understand mirror symmetry via degenerations. 
\begin{definition} A log structure on a scheme (or analytic space) $X$ is a (unital) homomorphism $$\alpha_X:\shM_X\rightarrow \O_X$$ of sheaves of (multiplicative and commutative) monoids inducing an isomorphism $\alpha_X^{-1}(\O_X^{\times})\rightarrow \O_X^{\times}$. The monoid structure on $\O_X$ is given by multiplication. The triple $(X,\shM_X,\alpha_X)$ is then called a {\it log space}. We often write the whole package as $X^{\dagger}$. \end{definition} A morphism of log spaces $F:X^{\dagger}\rightarrow Y^{\dagger}$ consists of a morphism $\underline{F}:X\rightarrow Y$ of underlying spaces together with a homomorphism $F^{\#}:\underline{F}^{-1}(\shM_Y) \rightarrow\shM_X$ commuting with the structure homomorphisms: $$\alpha_X\circ F^{\#}=\underline{F}^*\circ\alpha_Y.$$ The key examples: \begin{examples} \label{logexamples} (1) Let $X$ be a scheme and $Y\subseteq X$ a closed subset of codimension one. Denote by $j:X\setminus Y\rightarrow X$ the inclusion. Then the inclusion $$\alpha_X:\shM_X=j_*(\O_{X\setminus Y}^{\times})\cap\O_X\rightarrow \O_X$$ of the sheaf of regular functions invertible off of $Y$ is a log structure on $X$. This is called a \emph{divisorial log structure} on $X$. (2) A {\it prelog structure}, i.e., an arbitrary homomorphism of sheaves of monoids $\varphi:\shP\rightarrow\O_X$, defines an associated log structure $\shM_X$ by $$\shM_X=(\shP\oplus\O_X^{\times})/\{(p,\varphi(p)^{-1})\,|\,p\in \varphi^{-1}(\O_X^{\times})\}$$ and $\alpha_X(p,h)=h\cdot\varphi(p)$. (3) If $f:X\rightarrow Y$ is a morphism of schemes and $\alpha_Y:\shM_Y \rightarrow\O_Y$ is a log structure on $Y$, then the prelog structure $f^{-1}(\shM_Y)\rightarrow\O_X$ given as the composition of $\alpha_Y:f^{-1}(\shM_Y)\rightarrow f^{-1}\shO_Y$ and $f^*:f^{-1}\shO_Y \rightarrow \shO_X$ defines an associated log structure on $X$, the {\it pull-back log structure}. (4) In (1) we can pull back the log structure on $X$ to $Y$ using (3). 
Thus in particular, if $\X\rightarrow D$ is a toric degeneration, the inclusion $\X_0\subseteq\X$ gives a log structure on $\X$ and an induced log structure on $\X_0$. Similarly the inclusion $0\in D$ gives a log structure on $D$ and an induced one on $0$. Here $\M_0=\CC^{\times}\oplus\NN$, where $\NN$ is the (additive) monoid of natural (non-negative) numbers, and $$\alpha_0(h,n)=\begin{cases}h& n=0\\ 0&n\not=0.\end{cases}$$ $0^{\dagger}$ is usually called the \emph{standard log point}. We then have log morphisms $\X^{\dagger}\rightarrow D^{\dagger}$ and $\X_0^{\dagger}\rightarrow 0^{\dagger}$. (5) If $\sigma\subseteq M_{\RR}=\RR^n$ is a strictly convex rational polyhedral cone, $\dual{\sigma}\subseteq N_{\RR}$ the dual cone, let $P=\dual{\sigma}\cap N$: this is a monoid under addition. The affine toric variety defined by $\sigma$ can be written as $X=\Spec \CC[P]$. We then have a pre-log structure induced by the homomorphism of monoids $$P\rightarrow \CC[P]$$ given by $p\mapsto z^p$. There is then an associated log structure on $X$. This is in fact the same as the log structure induced by $\partial X\subseteq X$, where $\partial X$ is the toric boundary of $X$, i.e., the union of toric divisors of $X$. If $p\in P$, then the monomial $z^p$ defines a map $f:X\rightarrow \Spec \CC[\NN]=\AA^1$ which is a log morphism with the log structure on $\Spec \CC[\NN]$ induced similarly by $\NN\rightarrow\CC[\NN]$. The fibre $X_0=\Spec \CC[P]/(z^p)$ is a subscheme of $X$, there is an induced log structure on $X_0$, and a map $X_0^{\dagger} \rightarrow 0^{\dagger}$ as in (4). The log morphism $f$ is an example of a \emph{log smooth} morphism, see Definition \ref{logsmooth}. Condition (4) of Definition \ref{toricdegen} in fact implies that locally, away from $Z$, $\X^{\dagger}$ and $\X_0^{\dagger}$ are of the above form. 
So we should view $\X^{\dagger}\rightarrow D^{\dagger}$ as log smooth away from $Z$, and from the log point of view, $\X_0^{\dagger}$ can be treated much like a non-singular scheme away from $Z$. (6) Given a monoid $P$ as in (5) and a morphism $X\rightarrow \Spec\CC[P]$, we can pull back the log structure defined above on $\Spec\CC[P]$ to $X$. If $X^{\dagger}$ is a log scheme which \'etale locally can be described in this way, we say $X^{\dagger}$ is a \emph{fine saturated log scheme}. The adjective ``fine'' tells us it is locally described via maps to schemes of the form $\Spec\CC[P]$ where $P$ is a finitely generated \emph{integral} monoid, i.e., the canonical homomorphism $P\rightarrow P^{\gp}$ is an injection. The adjective ``saturated'' tells us the monoid $P$ is saturated. This means that $P$ is integral and whenever $p\in P^{\gp}$ satisfies $mp\in P$ for some $m>0$, $p\in P$. Such monoids arise, e.g., as the intersection of a rational polyhedral cone with a lattice. Most of the literature on log geometry tends to apply only to fine log structures. In the key example of $\shX_0^{\dagger}\rightarrow 0^{\dagger}$, the log structure is fine saturated away from the set $Z$. However, it is not in general fine along $Z$, and this tends to cause many technical problems as new techniques have to be developed to deal properly with the log structure along $Z$. \qed \end{examples} The notion of log smoothness generalizes the morphisms of Examples \ref{logexamples}, (5): \begin{definition} \label{logsmooth} A morphism $f:X^{\dagger}\rightarrow Y^{\dagger}$ of fine log schemes is \emph{log smooth} if \'etale locally on $X$ and $Y$ it fits into a commutative diagram \[ \xymatrix@C=30pt { X\ar[r]\ar[d]&\Spec \ZZ[P]\ar[d]\\ Y\ar[r]&\Spec\ZZ[Q] } \] with the following properties: \begin{enumerate} \item The canonical log structure on $\Spec \ZZ[P]$ and $\Spec\ZZ[Q]$ of Examples \ref{logexamples}, (5), pull-back to the log structures on $X$ and $Y$ respectively. 
\item The induced morphism \[ X\rightarrow Y\times_{\Spec\ZZ[Q]}\Spec\ZZ[P] \] is a smooth morphism of schemes. \item The right-hand vertical arrow is induced by a monoid homomorphism $Q\rightarrow P$ with $\ker(Q^{\gp}\rightarrow P^{\gp})$ and the torsion part of $\coker(Q^{\gp}\rightarrow P^{\gp})$ finite groups of orders invertible on $X$. Here $P^{\gp}$ denotes the Grothendieck group of $P$. \end{enumerate} \end{definition} Log smooth morphisms include, in the simplest case, normal crossings morphisms. \medskip On a log scheme $X^{\dagger}$ there is always an exact sequence \[ 1\mapright{} \O_X^{\times}\mapright{\alpha^{-1}}\M_X\mapright{} \overline{\M}_X\mapright{}0, \] where we write the quotient sheaf of monoids $\overline{\M}_X$ additively. We call $\overline{\M}_X$ the \emph{ghost sheaf} of the log structure. I like to view $\overline{\M}_X$ as specifying the combinatorial information associated to the log structure. For example, if $X^{\dagger}$ is induced by the Cartier divisor $Y\subseteq X$ with $X$ normal, then the stalk $\overline{\M}_{X,x}$ at $x\in X$ is the monoid of effective Cartier divisors on a neighbourhood of $x$ supported on $Y$. It is useful for understanding pull-backs of log structures to note that if $f:Y\rightarrow X$ is a morphism with $X$ carrying a log structure, and $Y$ is given the pull-back log structure, then $\overline{\M}_Y=f^{-1}\overline{\M}_X$. In the case that $\M_X$ is induced by an inclusion of $Y\subseteq X$, $\overline{\M}_X$ is supported on $Y$, so we can equate $\overline{\M}_X$ and $\overline{\M}_Y$, the ghost sheaves for the divisorial log structure on $X$ and its restriction to $Y$. \begin{xca} \label{ghostexercise} Show that in Example \ref{logexamples}, (5), $\overline{\M}_{X,x}= P$ if $\dim\sigma=\dim M_{\RR}$ and $x$ is the unique zero-dimensional torus orbit of $X$. 
More generally, \[ \overline{\M}_{X,x}={\dual{\tau}\cap N\over \tau^{\perp}\cap N} =\Hom_{monoid}(\tau\cap M,\NN), \] when $x\in X$ is in the torus orbit corresponding to a face $\tau$ of $\sigma$. In particular, $\tau$ can be recovered as $\Hom_{monoid}(\overline{\M}_{X,x},\RR_{\ge 0})$, where $\RR_{\ge 0}$ is the additive monoid of non-negative real numbers. \qed \end{xca} In the sections which follow, the key logarithmic spaces we consider will be those arising from toric degenerations $\shX\rightarrow D$. As above, the central fibre $\shX_0\subseteq\shX$ induces a divisorial log structure on $\shX$, and restricting gives a log scheme $\shX_0^\dagger$ along with a morphism $\shX_0^{\dagger}\rightarrow 0^{\dagger}$ which is log smooth off of the bad set $Z\subseteq \shX_0$. We can now elaborate on the philosophy we wish to take with the following diagram: \begin{center} \input{philo1.pstex_t} \end{center} There are two sides to mirror symmetry. The $A$-model side involves counting curves: we wish to count curves in the general fibre of a toric degeneration $\shX\rightarrow D$. There are good reasons to believe that this count can in fact be performed on $\shX_0^{\dagger}$, using a theory of logarithmic Gromov-Witten invariants: see \S\ref{Amodelsection}. The hope is that $\shX_0$ is a sufficiently combinatorial object so that such a count can be carried out in a combinatorial manner. The $B$-side involves deformations of complex structure. The idea is that to understand deformations of complex structure, we should start with the central fibre $\shX_0^{\dagger}$ and try to construct smoothings, i.e., construct a toric degeneration with this central fibre. The log structure is necessary to find a unique smoothing. If this smoothing can be described sufficiently explicitly, then again one should be able to extract the necessary periods for the $B$-model calculations purely in terms of combinatorics. 
So log geometry will play an important role on both sides of mirror symmetry, but as the above suggests, there should be some combinatorial objects underlying both calculations. In fact, log geometry is closely related to tropical geometry. We will explore in the following sections how tropical geometry controls both the $A$- and $B$-model sides of the above picture, completing the above diagram: \bigskip \begin{center} \input{philo2.pstex_t} \end{center} \section{The $A$-model and tropical geometry} \label{Amodelsection} The first link between log geometry and tropical geometry comes from an elementary combinatorial construction. Given a log scheme $X^{\dagger}$, we can construct the \emph{tropicalization} of $X^{\dagger}$, as follows. For each geometric point $\bar\eta$ of $X$, we have a monoid $\overline\M_{X,\bar\eta}$, and hence a cone $C_{\bar \eta}:= \Hom(\overline\M_{X,\bar\eta},\RR_{\ge 0})$ (where here $\Hom$ denotes monoid homomorphisms and $\RR_{\ge 0}$ is given the additive monoid structure). Further, if $\bar\eta$ is in the closure of $\bar\eta'$, there is a generization map $\overline\M_{X,\bar\eta}\rightarrow \overline\M_{X,\bar\eta'}$.\footnote{Since we need to work in the \'etale topology, there can actually be a number of generization maps. For example, if $X$ is a nodal cubic, then there are two generization maps from $\bar\eta$ the node to $\bar\eta'$ the generic point.} Dualizing, this gives maps $C_{\bar\eta'}\rightarrow C_{\bar\eta}$. If the log structure on $X$ is fine, then these maps are inclusions of faces of strictly convex rational polyhedral cones. We can then form a cell complex by making identifications given by these inclusions of faces, obtaining a polyhedral cone complex $\Trop(X^{\dagger})$. Actually, in general this may not really make sense as a cell complex because the generization maps may induce many strange self-identifications on faces, but in the situations we want to describe here, this will not cause a problem. 
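As a simple check of this construction, consider $X=\AA^2$ with the divisorial log structure coming from its toric boundary, as in Examples \ref{logexamples}, (5). By Exercise \ref{ghostexercise}, the stalks of $\overline{\M}_X$ are $0$ at points of the big torus orbit, $\NN$ at points of the two axes away from the origin, and $\NN^2$ at the origin. Dualizing gives cones $$C_{\bar\eta}=0,\quad \RR_{\ge 0},\quad \RR^2_{\ge 0}$$ respectively, and the generization maps identify the first two types of cones with faces of the third, so that $$\Trop(X^{\dagger})=\RR^2_{\ge 0}=\sigma,$$ recovering the cone defining $X$.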
This construction is functorial, so if $f:X^{\dagger}\rightarrow Y^{\dagger}$ is a morphism of log schemes, then we obtain $\Trop(f): \Trop(X^{\dagger})\rightarrow \Trop(Y^{\dagger})$. For example, consider the case of a toric degeneration $\shX\rightarrow D$. As we saw in the previous section, this gives a morphism of log schemes $\shX_0^{\dagger}\rightarrow 0^{\dagger}$. The bad set $Z\subseteq \shX_0$ is precisely the locus where the log structure on $\shX_0$ is not fine. Thus we can apply the above tropicalization construction to $\shX_0^{\dagger}\setminus Z\rightarrow 0^{\dagger}$. Now $\Trop(0^{\dagger})=\RR_{\ge 0}$ is a ray. On the other hand, if $x\in\shX_0$ is a zero-dimensional stratum, locally a neighbourhood of $x$ looks like the fibre over $0$ of $f_x:Y_x\rightarrow\CC$ where $Y_x$ is a toric variety defined by $C(\sigma_x)\subseteq M_{\RR}\oplus\RR$ for a lattice polytope $\sigma_x\subseteq M_{\RR}$, and the morphism $Y_x\rightarrow\CC$ is given by the projection $M_{\RR}\oplus\RR\rightarrow\RR$. Then $\overline\M_{\shX_0,x}=C(\sigma_x)^{\vee}\cap (N\oplus \ZZ)$, as follows from Exercise \ref{ghostexercise}, and $C_x=C(\sigma_x)$. In particular, the induced map $\Trop(f): C_x\rightarrow \Trop(0^{\dagger})$ has fibre $\Trop(f)^{-1}(1)=\sigma_x$. From this, one checks easily that \[ \Trop(f):\Trop(\shX_0^{\dagger}\setminus Z)\rightarrow \Trop(0^{\dagger})=\RR_{\ge 0} \] has fibre \[ \Trop(f)^{-1}(1)=B, \] with $B$ coming with the polyhedral decomposition $\P$. So the dual intersection complex comes from a very general construction. In particular, note that $B$ only depends on $\shX_0^{\dagger}$, not on $\shX$ (although this is obvious without knowing this general construction). Now let us turn to the $A$-model, which for the purposes of this discussion means counting curves on Calabi-Yau manifolds. Suppose we have a toric degeneration $\shX\rightarrow D$. We would like to count curves on the general fibre. 
Can we do so by counting curves on $\shX_0$ instead, where the problem might have a more combinatorial nature? This question has a long history. The first work on this kind of question was due to Li and Ruan \cite{LR} and Ionel and Parker \cite{IP1},\cite{IP2}. Essentially they considered a situation where one has a degeneration $\shX\rightarrow D$ where the special fibre $\shX_0=X_1\cup X_2$ is a normal crossings union of two smooth irreducible components. They showed that there was a theory of Gromov-Witten invariants of $\shX_0$, and that it gave the same answer as Gromov-Witten theory on a general fibre. Further, they gave gluing formulas, which stated that the Gromov-Witten invariants of $\shX_0$ could be computed using the Gromov-Witten invariants of the two pairs $(X_i,X_1\cap X_2)$, $i=1,2$. Here the Gromov-Witten theory associated to a pair $(X,D)$ where $D\subseteq X$ is a smooth divisor is the theory of \emph{relative} Gromov-Witten invariants, where one considers curves in $X$ with some imposed orders of tangency at points on the curve with $D$. This gluing formula has proven to be a very powerful tool in Gromov-Witten theory. In 2001, Bernd Siebert \cite{STalk} proposed using log geometry to generalize these results. Meanwhile, Jun Li was working on an algebro-geometric approach to the Li-Ruan and Ionel-Parker theories (which were carried out using symplectic techniques). He gave a satisfactory algebro-geometric definition of relative Gromov-Witten invariants and reproved the gluing formula, using a few techniques from log geometry. However, the theory possesses a technical difficulty. In Gromov-Witten theory, it is standard that one allows the domain curves to develop bubbles. But in relative Gromov-Witten theory, it is also necessary to allow the target space $X$ to develop bubbles. This occurs when an irreducible component of the domain curve falls into the divisor $D$, so that the order of tangency with $D$ becomes meaningless. 
So the actual target space for a relative stable map might be $X$ with a chain of $\PP^1$-bundles over $D$ glued to $D\subseteq X$. This often makes the analysis more difficult, and was a major stumbling block for extending these techniques to more complicated degenerations. Several solutions to this problem were completed in 2011. Brett Parker in \cite{Park1}, \cite{Park2} provided a completely new category, the category of \emph{exploded manifolds}, in which to study Gromov-Witten theory. These manifolds carry information similar to that carried by log spaces, but form a somewhat more flexible and ``softer'' category in which to work. In \cite{Park2} he provides a definition of Gromov-Witten invariants in this setting and gives a gluing formula. Also, Siebert and I \cite{JAMS} completed a theory of logarithmic Gromov-Witten invariants, as did Abramovich and Chen \cite{Chen},\cite{AC}, working with Siebert's original suggestion. I will summarize the basic ideas here. \begin{definition} A \emph{log curve} over a fine saturated log scheme $W^{\dagger}$ is a fine saturated log scheme $C^{\dagger}$ with a morphism $C^{\dagger}\rightarrow W^{\dagger}$ which is flat of relative dimension one, log smooth, and with all geometric fibres reduced. \end{definition} Here log smoothness implies that the geometric fibres of $C\rightarrow W$ are nodal curves, which is pleasant as this is precisely the sort of curve which is allowed as the domain of a stable map. The log structure can also be viewed as incorporating marked points. For example, given a smooth curve $C$ over $W=\Spec \CC$, one can take a finite number of points $x_1,\ldots,x_k\in C$ and give $C$ the divisorial log structure associated to the subset $\{x_1,\ldots,x_k\}\subseteq C$. Then $C^{\dagger}$ is log smooth over $W$ with the trivial log structure $\M_W=\O_W^{\times}$. \begin{definition} Let $X^{\dagger}\rightarrow S^{\dagger}$ be a morphism of fine saturated log schemes.
A \emph{log curve in $X^{\dagger}$ with base $W^{\dagger}$} is a log curve $C^{\dagger}/W^{\dagger}$ together with a morphism $f:C^{\dagger}\rightarrow X^{\dagger}$ fitting into a commutative diagram of log schemes \[ \xymatrix@C=30pt { C^{\dagger}\ar[r]^f\ar[d] &X^{\dagger}\ar[d]\\ W^{\dagger}\ar[r]&S^{\dagger}} \] A log curve in $X^{\dagger}$ is a \emph{stable log map} if for every geometric point $\bar w\rightarrow W$, the restriction of $f$ to the underlying marked curve $C_{\bar w}\rightarrow\bar w$ is an ordinary stable map. We write the data as $(C^{\dagger}/W^{\dagger},f)$. This definition can be further decorated in the usual way by labelling marked points. \end{definition} The main work of \cite{JAMS} is to construct a well-behaved moduli space of stable log maps. There is a technical issue which arises whenever one tries to construct a moduli space of log objects; this was explored by Martin Olsson in his thesis \cite{Ols}. The problem is as follows. Suppose we are given a stable log map with domain $\pi:(C,\M_C)\rightarrow (W,\M_W)$. Then $\pi':(C,\M_C\oplus \NN^r)\rightarrow (W,\M_W\oplus\NN^r)$ also gives the domain of a stable log map. Here the structure map $\alpha_C$ (or $\alpha_W$) takes the value $0$ on the non-zero elements of the constant sheaf $\NN^r$, and the map $\pi'$ acting on monoids just takes $\NN^r$ isomorphically to $\NN^r$. The new map $f^{\#}$ is the composition of the old $f^{\#}:f^{-1}\M_X\rightarrow \M_C$ and the inclusion $\M_C\rightarrow \M_C\oplus\NN^r$. As a result, a single stable log map gives rise to a countable number of other maps, so the stack of stable log maps has no chance of being finite type, and hence cannot be proper. The solution is to identify log structures on $W$ which are universal in a suitable sense. 
In the above example, all the log curves in question arise as a cartesian diagram of log schemes: \[ \xymatrix@C=30pt { (C,\M_C\oplus\NN^r)\ar[r]\ar[d]&(C,\M_C)\ar[d]\\ (W,\M_W\oplus\NN^r)\ar[r]&(W,\M_W) } \] Thus all these extraneous log curves can be viewed as obtained by pull-back from the initial choice of log curve via a logarithmic base-change. To solve this problem, we introduce a property of stable log maps called \emph{basic}. I do not wish to give the definition here, as it is very involved, but the important properties of basic stable log maps are universality and boundedness, as expressed in the following two theorems, a summary of the main results of \cite{JAMS}: \begin{theorem} Given a stable log map $(C^{\dagger}/W^{\dagger},f)$, there is a basic stable log map $(C_b^{\dagger}/W_b^{\dagger},f_b)$ fitting into a commutative diagram \[ \xymatrix@C=30pt { C^{\dagger}\ar[r]\ar[d]&C_b^{\dagger}\ar[r]\ar[d]& X^{\dagger}\ar[d]\\ W^{\dagger}\ar[r]&W_b^{\dagger}\ar[r]&S^{\dagger} } \] where the left-hand square is cartesian in the category of fine saturated log schemes and the maps $W\rightarrow W_b$ and $C\rightarrow C_b$ of underlying schemes are isomorphisms. Furthermore, $(C_b^{\dagger}/W_b^{\dagger},f_b)$ and the maps in the above diagram are determined by $(C^{\dagger}/W^{\dagger},f)$ uniquely up to unique isomorphism. \end{theorem} \begin{theorem} Let $\cM(X^{\dagger}/S^{\dagger})$ denote the stack of basic stable log maps in $X^{\dagger}$ over $S^{\dagger}$. Then: \begin{enumerate} \item $\cM(X^{\dagger}/S^{\dagger})$ is a Deligne-Mumford stack. \item Let $\beta$ denote a choice of genus $g$, number of marked points $k$, homology class in $H_2(X,\ZZ)$, along with a collection of tangency data for the marked points (this notion can be made precise).
Let $\cM(X^{\dagger}/S^{\dagger},\beta)$ denote the substack of $\cM(X^{\dagger}/S^{\dagger})$ of basic stable log maps of curves of genus $g$ and $k$ marked points, representing the given homology class, and satisfying the given tangency conditions. Then modulo some technical hypotheses on $X^{\dagger}$, $\cM(X^{\dagger}/S^{\dagger},\beta)$ is proper over $S$ if $X$ is proper over $S$. \item Assuming further that $X^{\dagger}\rightarrow S^{\dagger}$ is log smooth, $\cM(X^{\dagger}/S^{\dagger},\beta)$ carries a virtual fundamental class, allowing for the definition of logarithmic Gromov-Witten invariants. \end{enumerate} \end{theorem} Similar results were also obtained by Abramovich and Chen in \cite{AC},\cite{Chen}. This is a promising start to the problem of understanding the $A$-model by working entirely on the central fibre of a toric degeneration. There are, however, still two major gaps in the theory which need to be filled. First, one needs an analogue of the gluing formula. This should allow us to break down a calculation of curves on the central fibre of a degeneration into simpler pieces. This is expected to be quite subtle, however, and is still work in progress. I will say a bit more shortly about what one expects such a formula to look like. Second, as observed earlier, the central fibre of a toric degeneration $\shX_0^{\dagger}\rightarrow 0^{\dagger}$ is only fine saturated off of the set $Z$. As a result none of the above theorems about stable log maps apply. It is quite likely that even the definition of stable log map is not the correct one in this case. So the theory still needs to be extended. This is also work in progress of Michael Kasa. Let us return to the tropicalization functor. Suppose we have a degeneration $q:\shX\rightarrow D$, which we assume to be log smooth (say a normal crossings degeneration), so that we don't have to worry about the singular set $Z$ where the log structure on $\shX$ is not fine. 
As usual, this gives $\shX_0^{\dagger} \rightarrow 0^{\dagger}$. Suppose we have a basic stable log map over a point, i.e., a diagram \[ \xymatrix@C=30pt {C^{\dagger}\ar[r]^f\ar[d]_\pi& \shX_0^{\dagger}\ar[d]^q\\ W^{\dagger}=(\Spec\CC, Q\oplus\CC^{\times})\ar[r]_(0.8)g&0^{\dagger} } \] Here $Q$ is a monoid given by $Q=\sigma_Q\cap \ZZ^n$ for a strictly convex rational polyhedral cone $\sigma_Q$, and the log structure on $W$ is given by $\alpha:Q\oplus\CC^\times\rightarrow\CC$ defined by \[ \alpha(p,s)=\begin{cases} s & p=0\\ 0 & p\not=0. \end{cases} \] Here, the monoid $Q$ is determined by the fact that the curve is basic. We then tropicalize this, obtaining a diagram \[ \xymatrix@C=30pt {\Trop(C^{\dagger})\ar[r]^{\Trop(f)}\ar[d]_{\Trop(\pi)}& \Trop(\shX_0^{\dagger})\ar[d]^{\Trop(q)}\\ \Trop(W^{\dagger})=\sigma_Q^{\vee}\ar[r]_{\Trop(g)}& \Trop(0^{\dagger})=\RR_{\ge 0} } \] The fibres of $\Trop(\pi)$ are in general one-dimensional graphs, while $\Trop(q)^{-1}(1)$ is the dual intersection complex $B$ of $\shX_0^{\dagger}$. (In general, this is only a polyhedral complex and does not carry an affine structure in codimension one, unlike the case of a toric degeneration.) Thus $\Trop(g)^{-1}(1)\subseteq \sigma_Q^{\vee}$ can be viewed as a space parameterizing maps from graphs (fibres of $\Trop(\pi)$) into $B$. These will be tropical curves. In fact, where $B$ does carry an affine structure, these curves satisfy the tropical balancing condition. The fundamental property that the monoid $Q$ associated with the basic log structure must satisfy is that $\Trop(g)^{-1}(1)$ must parameterize all tropical curves in $B$ of the same ``combinatorial type''. This makes precise the correspondence between tropical curves and log curves. We can also describe the expected shape of a gluing formula, in keeping with the formula developed by Brett Parker in his setting \cite{Park2}. One considers tropical curves in $B$ as above.
These in general move in families, but there will be, for any given set of data $\beta$, a finite number of tropical curves representing $\beta$ which cannot be deformed without changing the domain graph. We call such tropical curves \emph{rigid}. The actual moduli space $\cM(\shX_0^{\dagger}/0^{\dagger},\beta)$ can then be viewed as having a ``decomposition into virtual irreducible components'' indexed by these rigid curves. Furthermore, the ``virtual irreducible component'' associated to any rigid curve can be further related to moduli spaces of curves associated to each vertex of the tropical curve. This should ultimately allow an expression for the Gromov-Witten invariants of $\shX_0^{\dagger}/0^{\dagger}$, and hence the Gromov-Witten invariants of a smoothing of $\shX_0^{\dagger}$, in terms of much simpler invariants. This is an ongoing joint project with Abramovich, Chen and Siebert. \section{The $B$-model and tropical geometry} \label{Bmodelsect} Let us turn to the $B$-model, and understand how tropical geometry may be visible in the variation of complex structures which is necessary for $B$-model computations. This problem is closely related to the reconstruction problem, as stated in Question \ref{reconstruct2}. If, given $(B,\P,\varphi)$, one can find an explicit description of a toric degeneration $\shX\rightarrow D$ with dual intersection complex $(B,\P,\varphi)$, then one can use this explicit description to calculate periods and extract $B$-model predictions for the mirror. Before describing the solution to this problem, let me give a bit of history of the reconstruction problem. The version as stated in Question \ref{reconstruct1} was first studied by Fukaya in \cite{Fukaya}. There he considered directly the question of perturbing the complex structure on $X_{\epsilon}(B_0)$ by looking at the Kodaira-Spencer equation governing deformations of complex structure.
Arguing informally in the case that $\dim B=2$, he suggested that the perturbations should be concentrated along trees made of gradient flow lines, with the lines emanating initially from singular points of $B$. This gave the first hint that a nice solution to the reconstruction problem might actually see something related to curves. However, Fukaya's work contained no definite theorems, and the analysis looked likely to be very difficult. In 2004, Siebert and I were considering how to solve the reconstruction problem using our program. Given $(B,\P)$, we had shown in \cite{PartI} how to construct log schemes $X_0(B,\P,s)^{\dagger}$ along with a morphism to $0^{\dagger}$ which had all the properties one would want for a central fibre of a toric degeneration $\shX_0^{\dagger} \rightarrow 0^{\dagger}$. Our original hope was that a generalization of the Bogomolov-Tian-Todorov unobstructedness theorem would allow us to show such log schemes smoothed. In particular, Kawamata and Namikawa \cite{KN} had had success with this point of view in the normal crossings case. While this approach works easily in dimension 2, we couldn't make it work in higher dimension. Furthermore, this approach fails to give the explicit description of the smoothing which would be needed to describe the $B$-model. As a consequence, we turned towards a more explicit approach, which involved gluing together explicit local models. While we were working on this approach, Kontsevich and Soibelman in \cite{KS2} got around the difficult analysis of Fukaya's approach by replacing complex manifolds with rigid analytic manifolds. They were able to show that given a tropical affine surface $B$ with $24$ singularities of focus-focus type (the simplest type of singularity which occurs in affine surfaces, to be described shortly) one could construct a rigid analytic K3 surface $X^{an}(B)$. This was done by gluing together standard pieces via automorphisms attached to lines on $B$. 
These lines were given as gradient flow lines, giving a similar, but much more precise, picture to the one given by Fukaya. Combining our approach of gluing local models with one of the central ideas of Kontsevich and Soibelman's work \cite{KS2}, we were then able to complete a construction in all dimensions, giving a satisfactory solution to the reconstruction problem within algebraic geometry. This was carried out in \cite{Annals}. Before surveying this approach, let me make a philosophical remark. Note that when we were discussing the $A$-model, we observed that tropical curves on $B$ should correspond to holomorphic curves on $X(B)$. If we want to see these same tropical curves playing a role on the $B$-model of the mirror, then we should think of the $B$-model not on a complex manifold of the form $X(B)$, but rather on $\check X(B)$. This is a slightly confusing reversal of roles. Normally counting curves is done in the symplectic category, here expected to mean on the symplectic manifold $\check X(B)$, while anything having to do with complex structures should be done on $X(B)$. This reversal can be explained as follows. If we were to study pseudo-holomorphic curves on $\check X(B)$, we would need to put an almost complex structure on $\check X(B)$. One way to do this is to choose a metric on $B$; this induces an almost complex structure on $\check X(B)$ constant on fibres of the torus fibration, generalizing the construction of a complex structure from a Hessian metric described in \S\ref{semiflatsection}. Then in a suitable adiabatic limit where the almost complex structure is rescaled, pseudo-holomorphic curves are expected to tend towards trees of gradient flow lines. If the chosen metric was in fact Hessian, these gradient flow lines would in fact be straight lines with respect to the Legendre dual affine structure, so these trees of gradient flow lines can be viewed as a generalization of tropical curves. 
However, tropical geometry is linear and much easier to control. We take the attitude that we should work on the side in which tropical geometry appears. Indeed, this turns out to be very helpful. Given this, we can then present a somewhat revised version of the mirror symmetry program: \begin{enumerate} \item We begin with a toric degeneration of Calabi-Yau manifolds $\shX\rightarrow D$ with an ample polarization. \item Construct the dual intersection complex $(B,\P,\varphi)$ from this data. \item Construct a new toric degeneration $\check\shX\rightarrow D$ whose \emph{intersection complex} is $(B,\P,\varphi)$. This degeneration should be controlled by tropical data. \item Understand genus $0$ holomorphic curves (or whatever other aspect of the $A$-model one is interested in) on the general fibre of $\shX\rightarrow D$ in terms of tropical geometry of $B$. \item Understand the variation of Hodge structures for $\check\shX\rightarrow D$ in terms of tropical geometry of $B$. \item Use the fact that the $A$- and $B$-models of $\shX$ and $\check\shX$ respectively are controlled by the same tropical geometry on $B$ to prove mirror symmetry. \end{enumerate} Here we outline the completion of step (3) as carried out in \cite{Annals}. The first step is as follows. Given $(B,\P,\varphi)$, we wish to construct the central fibre $\shX_0^{\dagger}$ of the degeneration. This in fact was carried out in \S 5 of \cite{PartI}, under certain genericity assumptions on the singular locus of $B$. As a scheme, it is fairly obvious what $\shX_0$ should be. For each maximal cell $\sigma$, one has an associated projective toric variety $\PP_{\sigma}$ with Newton polytope $\sigma$. Any face $\tau\subseteq\sigma$ specifies a toric stratum $\PP_{\tau}\subseteq\PP_{\sigma}$, and given $\tau=\sigma_1\cap\sigma_2$ for $\sigma_1,\sigma_2$ maximal, we can glue together the toric strata $\PP_{\tau}\subseteq\PP_{\sigma_1},\PP_{\sigma_2}$ in a torus equivariant manner.
There is of course a whole family of possible gluings, parameterized by what we call \emph{closed gluing data} in \cite{PartI}. Given closed gluing data $s$, we obtain a scheme $\check X_0(B,\P,s)$. Now $\check X_0(B,\P,s)$ cannot be a central fibre of a toric degeneration unless it carries a log structure of the correct sort. There are many reasons this may not happen. If $s$ is poorly chosen, there may be zero-dimensional strata of $\check X_0(B,\P,s)$ which do not have neighbourhoods locally \'etale isomorphic to the toric boundary of an affine toric variety; this is a minimal prerequisite. As a result, we have to restrict attention to closed gluing data induced by what we call \emph{open gluing data}. Explicitly, each vertex $v$ of $\P$ defines local models $V(v)\subseteq U(v)$ as follows. The piecewise linear function $\varphi$ is defined locally up to affine linear functions. Choose a representative $\varphi_v$ for $\varphi$ in a neighbourhood of $v$ which takes the value $0$ at $v$. By extending the function linearly on each cell, we can view this as a piecewise linear function on the fan $\Sigma_v$, viewed as a fan in some $\RR^n$. We can then set \[ P_v:=\{(m,r)\in \ZZ^n\times\ZZ\,|\,r\ge \varphi_v(m)\}. \] Noting that $(0,1)\in P_v$, we set \begin{align*} U(v):={} & \Spec \CC[P_v],\\ V(v):={} & \Spec \CC[P_v]/(z^{(0,1)}). \end{align*} Note that $z^{(0,1)}$ vanishes to order one on every toric divisor of $U(v)$, so in fact $V(v)$ is the toric boundary of $U(v)$. It turns out, as we show in \cite{PartI}, that a necessary condition for $\check X_0(B,\P,s)$ to be the central fibre of a toric degeneration is that it is obtained by dividing out $\coprod_{v\in\P} V(v)$ by an equivalence relation. In other words, we are gluing together the $V(v)$'s along Zariski open subsets to obtain a scheme.\footnote{\cite{PartI} allowed the case that the cells of $\P$ self-intersect. 
As a consequence, the equivalence relation is merely \'etale and one obtains an algebraic space.} Again, there is some choice of gluing, but now the gluing data are given by equivariant identifications of open subsets of the various $V(v)$'s. We call this \emph{open gluing data}. The advantage of using open gluing data is that each $V(v)$ carries a log structure induced by the divisorial log structure $V(v)\subseteq U(v)$. These log structures are not identified under the open gluing maps, but the ghost sheaves of the log structures are isomorphic. So the ghost sheaves $\overline{\M}_{V(v)}$ glue to give a ghost sheaf of monoids $\overline{\M}_{\check X_0(B,\P,s)}$. Thus we see how $\varphi$ influences the log structure. One then tries to construct a log structure with this ghost sheaf. This is done in \cite{PartI} by building suitable extensions of the ghost sheaf by $\shO_{\check X_0(B,\P,s)}^{\times}$, and this extension depends on some moduli (which may in general be empty). The good situation is that one can find a closed subset $Z\subseteq\check X_0(B,\P,s)$ of complex codimension at least two and a log structure on $\check X_0(B,\P,s)$ along with a morphism $\check X_0(B,\P,s)^{\dagger}\rightarrow 0^{\dagger}$ which is log smooth away from $Z$. Furthermore, the ghost sheaf on $\check X_0(B,\P,s)\setminus Z$ should be the given ghost sheaf of monoids $\overline{\M}_{\check X_0(B,\P,s)}$ restricted to $\check X_0(B,\P,s)\setminus Z$. We call such a log scheme with morphism to $0^{\dagger}$ a \emph{log Calabi-Yau space}. The technical heart of \cite{PartI} is an explicit classification of log Calabi-Yau spaces with given intersection complex $(B,\P,\varphi)$, modulo some assumptions on the singularities of $B$ called \emph{simplicity}. The definition of simplicity is rather involved, so we will not give it here, but it essentially says that not too much topology of $\check X(B)$ (or $X(B)$) can be hiding over the singular locus of $B$.
A main result of \cite{PartI} (Theorem 5.4) is then \begin{theorem} Given $(B,\P,\varphi)$ simple, the set of log Calabi-Yau spaces with intersection complex $(B,\P,\varphi)$ modulo isomorphisms preserving $B$ (i.e., isomorphisms which do not interchange irreducible components) is $H^1(B,i_*\check \Lambda\otimes \CC^{\times})$. An isomorphism is said to preserve $B$ if it induces the identity on the intersection complex. \end{theorem} So the moduli space is an algebraic torus (or a disjoint union of algebraic tori) of dimension equal to $\dim_\CC H^1(B,i_*\check\Lambda\otimes \CC)$. In \cite{PartII}, we in fact show the dimension of this torus is the dimension of $H^1(\shX_t,\shT_{\shX_t})\cong H^{n-1,1}(\shX_t)$ for a smooth fibre $\shX_t$ of a smoothing $\shX\rightarrow D$ of $X_0(B,\P,s)^{\dagger}$. This is the expected dimension, as this latter vector space is the tangent space to the moduli space of $\shX_t$. Now assume given a log Calabi-Yau space $X_0:=X_0(B,\P,s)^{\dagger} \rightarrow 0^{\dagger}$. Our goal is to use the log structure to provide ``initial conditions'' to produce $k$-th order deformations $X_k \rightarrow\Spec\CC[t]/(t^{k+1})$, order by order. To do so, we will glue together standard thickenings of ``pieces'' of $X_0$, modifying standard gluings by a complicated system of data we call a \emph{structure}. First, the ``pieces'' of $X_0$ we consider are toric open affine subsets of strata of $X_0$. Recall that strata of $X_0$ are indexed by cells $\tau\in\P$, corresponding to a projective toric variety $\PP_{\tau}$. Recall also that if $\omega\subseteq\tau$, the normal cone to $\tau$ along $\omega$ is a cone in the fan defining $\PP_{\tau}$ and hence defines an open affine subset of $\PP_{\tau}$. We call this open affine subset $V_{\omega,\tau}\subseteq\PP_{\tau}$; note \[ V_{\omega,\tau}=\PP_{\tau}\setminus \bigcup_{\rho\subseteq\tau\atop \omega\not\subseteq\rho} \PP_{\rho}.
\] For example, if $\omega$ is a vertex of $\tau$, then $V_{\omega,\tau}$ is the standard toric open affine subset of $\PP_{\tau}$ containing the zero-dimensional stratum of $\PP_{\tau}$ corresponding to $\omega$. Second, what are the thickenings of the sets $V_{\omega,\tau}$? These can be described explicitly as follows. Choose a point $x$ in the interior of $\omega$ not contained in the singular locus $\Gamma$ of $B$. We obtain a fan $\Sigma_x$ in the tangent space $\Lambda_x\otimes_{\ZZ}\RR$ of not necessarily strictly convex cones consisting of the tangent cones at $x$ of each cell $\sigma$ containing $\omega$. We can choose a representative $\varphi_x$ for $\varphi$ in a small neighbourhood of $x$ which is zero along $\omega$, and this can then be extended linearly on each cone of $\Sigma_x$ to view $\varphi_x$ as a piecewise linear function $\varphi_x:\Lambda_x\otimes\RR\rightarrow\RR$. This in turn defines a monoid \[ P_x:=\{(m,r)\in \Lambda_x\times \ZZ\,|\, r\ge \varphi_x(m)\}, \] completely analogous to the definition of $P_v$. For each maximal cell $\sigma$ containing $\tau$, let $n_{\sigma} \in\check\Lambda_x$ denote the slope of $\varphi_x$ restricted to the tangent cone of $\sigma$. We then define a monomial ideal in the ring $\CC[P_x]$ given by \[ I_{\omega,\tau}^{>k}=\langle z^{(m,r)}\,|\, \hbox{ $(m,r)\in P_x$, $r-\langle n_{\sigma},m\rangle > k$ for some $\sigma\in\P_{\max}$ with $\sigma\supseteq\tau$}\rangle. \] Then the desired standard thickening of $V_{\omega,\tau}$ is \[ V^k_{\omega,\tau}:=\Spec\CC[P_x]/I_{\omega,\tau}^{>k}. \] One checks easily that if $k=0$, this recovers $V_{\omega,\tau}$, and if $k>0$, then the reduced space of $V^k_{\omega,\tau}$ is $V_{\omega,\tau}$. Thus this is indeed a thickening of $V_{\omega,\tau}$. There is one point we have to be quite careful about. 
This definition would appear to depend on the point $x$, and identifications of different tangent spaces $\Lambda_x$, $\Lambda_{x'}$ via parallel transport depend on the path because of the presence of the singular locus. We deal with this issue not by choosing a specific point $x$, but choosing a specific maximal reference cell $\sigma$ containing $\tau$. We then can identify any $\Lambda_x$ with $\Lambda_{\sigma}$, the well-defined tangent space to $\sigma$, via parallel transport from $x$ directly into $\sigma$. We will notate this additional choice of reference cell by writing $V^k_{\omega,\tau,\sigma}$. A different choice of reference cell $\sigma'$ gives a space $V^k_{\omega,\tau,\sigma'}$ abstractly, but not canonically, isomorphic to $V^k_{\omega,\tau,\sigma}$. This will prove important below. We also use the notation for the coordinate rings \[ R^k_{\omega,\tau,\sigma}:=\CC[P_x]/I_{\omega,\tau}^{>k}, \] again keeping in mind this choice of reference cell. There are also natural gluings between these various thickened schemes. One notes that given $\tau_1\subseteq\tau_2\subseteq\tau_3$ there are natural surjections \[ R^k_{\tau_1,\tau_3,\sigma}\rightarrow R^k_{\tau_1,\tau_2,\sigma} \] giving a closed embedding $V^k_{\tau_1,\tau_2,\sigma}\rightarrow V^k_{\tau_1,\tau_3,\sigma}$, and natural inclusions \[ R^k_{\tau_1,\tau_3,\sigma}\rightarrow R^k_{\tau_2,\tau_3,\sigma}, \] giving open embeddings $V^k_{\tau_2,\tau_3,\sigma} \rightarrow V^k_{\tau_1,\tau_3,\sigma}$. If $B$ has no singularities, then the reference cell $\sigma$ is not important, and we drop this from the notation in this case. In particular, it is easy to check that if we take, say, $\tau_1$ to be a fixed vertex $v$, and we take the limit of the directed system $\{V^k_{v,\tau}\,|\,v\in\tau\}$ of schemes, we obtain a $k$-th order thickening $V^k(v)$ of $V(v)$ given by $V^k(v)=U(v)\times_{\AA^1} \Spec\CC[t]/(t^{k+1})$, with $U(v)\rightarrow\AA^1$ the morphism given by $z^{(0,1)}$. 
This is precisely the kind of vanilla smoothing the log structure leads us to expect. Note we can write this direct limit of schemes as \[ \Spec \lim_{\longleftarrow\atop\tau} R^k_{v,\tau}. \] The basic idea then will be to modify the various maps above by some additional data. To understand why we need these modifications, let us consider the single most important example, that of an isolated singularity of focus-focus type in a two-dimensional $B$. We suppose $\P$ contains two maximal cells $\sigma_1,\sigma_2$, with $\sigma_1\cap\sigma_2=\tau$, as depicted in Figure \ref{TwoTriangles}. Note that the intersection of the two coordinate charts is $(\sigma_1\cup\sigma_2)\setminus\tau$, and the transition map is then the identity on $\sigma_1\setminus\tau$ and is given by the linear transformation $\begin{pmatrix} 1&0\\ 1&1\end{pmatrix}$ on $\sigma_2\setminus \tau$. Together, these two charts define an integral affine structure on $(\sigma_1\cup\sigma_2)\setminus\Gamma$, where $\Gamma=\{p\}$ is the common point of the two cuts. \begin{figure} \input{TwoTriangles.pstex_t} \caption{The fundamental example. The diagram shows the affine embeddings of two charts, obtained by cutting the union of two triangles as indicated in two different ways. Each triangle is a standard simplex.} \label{TwoTriangles} \end{figure} We then take $\varphi$ to be single-valued, identically $0$ on $\sigma_1$ and taking the value $1$ at the right-hand vertex. One now finds \begin{align*} R^k_{\tau,\sigma_1,\sigma_1}= {} & \CC[x,y,w^{\pm 1}]/(y^{k+1})\\ R^k_{\tau,\sigma_2,\sigma_2}= {} & \CC[x,y,w^{\pm 1}]/(x^{k+1})\\ R^k_{\tau,\tau,\sigma_i}= {} & \CC[x,y,w^{\pm 1}]/(x^{k+1},y^{k+1}). \end{align*} Here, if we use the chart on the left, i.e., choose a point $s$ below $p$ and work in $P_s\subseteq\Lambda_s\oplus\ZZ$, the variables $x,y$ and $w$ are identified with elements of $\CC[P_s]$ as \[ x=z^{(-1,0,0)},\quad y=z^{(1,0,1)},\quad w=z^{(0,1,0)}. 
\] We have the natural surjections $R^k_{\tau,\sigma_i,\sigma_i} \rightarrow R^k_{\tau,\tau,\sigma_i}$, and we identify $R^k_{\tau,\tau,\sigma_1}$ with $R^k_{\tau,\tau,\sigma_2}$ by identifying $\Lambda_{\sigma_1}$ and $\Lambda_{\sigma_2}$ by parallel transport through $s$. Since we have written everything in the left-hand chart, where $\Lambda_{\sigma_1}$ and $\Lambda_{\sigma_2}$ are identified via parallel transport through $s$, this identification is the trivial one. We can thus glue together the coordinate rings of the thickenings as \[ R^k_{\tau,\sigma_1,\sigma_1}\times_{R^k_{\tau,\tau,\sigma_i}} R^k_{\tau,\sigma_2,\sigma_2}. \] This fibred product of rings is easily seen to be isomorphic to the ring \[ \CC[X,Y,W^{\pm 1},t]/(t-XY, t^{k+1}), \] where $X=(x,x)$, $Y=(y,y)$, $W=(w,w)$, and $t=(xy,xy)$ as elements of the Cartesian product of rings. On the other hand, suppose we instead identified $R^k_{\tau,\tau,\sigma_1}$ and $R^k_{\tau,\tau,\sigma_2}$ by parallel transport through a point $s'$ lying above $p$. To do this, we can work in the right-hand chart. Again, $x,y$ and $w$ are defined using the tangent vectors $(-1,0), (1,0)$ and $(0,1)$ in $\sigma_1$, and these are transported to the same tangent vectors in $\sigma_2$ in the second chart. However, to compare this with our original description of $R^k_{\tau, \tau,\sigma_2}$, we need to think of these as tangent vectors in $\sigma_2$ in the original chart, i.e., the left-hand chart. There, these tangent vectors are $(-1,1)$, $(1,-1)$ and $(0,1)$ respectively. Thus we obtain an isomorphism $R^k_{\tau,\tau,\sigma_1}\rightarrow R^k_{\tau,\tau,\sigma_2}$ given by \begin{equation} \label{auto1} x\mapsto xw,\quad y\mapsto yw^{-1}, \quad w\mapsto w. 
\end{equation} Using this identification, we obtain a composed map $R^k_{\tau,\sigma_1, \sigma_1}\rightarrow R^k_{\tau,\tau,\sigma_1}\rightarrow R^k_{\tau,\tau, \sigma_2}$, leading to a fibred product \[ R^k_{\tau,\sigma_1,\sigma_1}\times_{R^k_{\tau,\tau,\sigma_2}} R^k_{\tau,\sigma_2,\sigma_2} \cong \CC[X,Y,W^{\pm 1},t]/(XY-tW, t^{k+1}), \] where now \[ X=(x,xw),\quad Y=(yw,y),\quad W=(w,w), \quad t=(xy,xy). \] Note that while this new ring is abstractly isomorphic to the previous ring, there is no isomorphism as $\CC[t]/(t^{k+1})$-algebras. So the gluing is not well-defined, and this is caused by the singularities of $B$. The correct smoothing in this case will depend on the choice of log structure, but in any event we expect it should be a family of the form $\Spec \CC[X,Y,W^{\pm 1},t]/(XY-f(W)t)$ for some function $f(W)$ which vanishes along the $W$-axis precisely at the points where the given log structure on $X_0(B,\P,s)$ is not fine. Clearly $f$ is then determined by the log structure up to invertible functions. Let us take for the sake of this example the function $f(W)=1+W$, noting that $f(W)=1+W^{-1}$ would do just as well. We can now modify the gluings using Figure \ref{TwoTriangles2}. \begin{figure} \input{TwoTriangles2.pstex_t} \caption{} \label{TwoTriangles2} \end{figure} In this figure, we have drawn two rays contained in $\tau$ emanating from the singular point, and labelled these two arrows with the functions $1+w^{-1}$ and $1+w$ respectively. These rays tell us that if we try to identify $R^k_{\tau,\tau,\sigma_1}$ with $R^k_{\tau,\tau, \sigma_2}$ using parallel transport between the two maximal cells, we need to modify the identification via an automorphism given by the crossing of one of these rays. Here, we will get different automorphisms depending on whether we cross above or below the singularity $p$. 
If we cross below, the ray tells us to use an automorphism of $R^k_{\tau,\tau,\sigma_1}$ given by \begin{equation} \label{auto2} x\mapsto x(1+w),\quad y\mapsto y(1+w)^{-1},\quad w\mapsto w, \end{equation} while if we cross above the singularity, we use the automorphism \begin{equation} \label{auto3} x\mapsto x(1+w^{-1}),\quad y\mapsto y(1+w^{-1})^{-1},\quad w\mapsto w. \end{equation} Actually, note that neither $1+w$ nor $1+w^{-1}$ is invertible in $R^k_{\tau,\tau,\sigma_i}$, so we need to modify this ring by localizing it at $1+w$ (or equivalently $1+w^{-1}$). Let's see how this affects the fibred products $R^k_{\tau,\sigma_1,\sigma_1} \times_{R^k_{\tau,\tau,\sigma_2}} R^k_{\tau,\sigma_2,\sigma_2}$. If we use parallel transport below the singular point, then the map $R^k_{\tau,\sigma_1,\sigma_1}\rightarrow R^k_{\tau,\tau,\sigma_2}$ is just given by \eqref{auto2}, while $R^k_{\tau,\sigma_2,\sigma_2} \rightarrow R^k_{\tau,\tau,\sigma_2}$ remains the canonical one. One then finds \[ R^k_{\tau,\sigma_1,\sigma_1} \times_{R^k_{\tau,\tau,\sigma_2}} R^k_{\tau,\sigma_2,\sigma_2} \cong \CC[X,Y,W^{\pm 1},t]/(XY-(1+W)t, t^{k+1}), \] with \[ X=(x,x(1+w)),\quad Y=(y(1+w),y),\quad W=(w,w),\quad t=(xy,xy). \] On the other hand, if we use parallel transport above the singular point, we need to compose the automorphism \eqref{auto3} with the isomorphism \eqref{auto1}, giving a map $R^k_{\tau,\sigma_1,\sigma_1} \rightarrow R^k_{\tau,\tau,\sigma_2}$ given by \[ x\mapsto xw(1+w^{-1})=x(1+w),\quad y\mapsto yw^{-1}(1+w^{-1})^{-1} =y(1+w)^{-1},\quad w\mapsto w. \] Thus this map is exactly the same as \eqref{auto2}, and hence we get the same fibred product. The glued thickenings are independent of choices. The introduction of the extra automorphisms removes the problems caused by monodromy. This is a very local situation. The next problem which arises is that more globally, we need to propagate the automorphisms attached to the rays. 
Indeed, imagine now that the picture we are looking at is contained in a more complex situation, as on the left-hand side of Figure \ref{proprays}. Here we have two singularities, and rays emanate in each direction from each singularity. Let us follow the rule that any identification of rings which involves parallel transport through a ray must be modified by the appropriate automorphism as described above. Then looking at the vertex $v_1$, say, we need to glue together five irreducible components, but only one of these gluings is modified. These gluings would not be compatible. To correct for this, one can extend the ray indefinitely, and ``parallel transport'' the automorphism along the ray. There is a precise sense in which this can be done. This is shown on the right-hand picture in Figure \ref{proprays}, with the dotted lines showing the extension of the rays. Now if crossing a ray in one direction produces the inverse of the automorphism given by crossing the ray in the other direction, one finds that the gluings at the vertices $v_1$ and $v_2$ have now become compatible. A new problem arises, however, at the intersection point of the two rays. Again, when we try to identify various rings using parallel transport and automorphisms induced by crossing rays, we don't want the choice to depend on the particular path we take. Because in general the two automorphisms attached to the rays don't commute, we again have trouble at the point of intersection. This is in fact where our thinking stood in early 2004, shortly before the release of Kontsevich and Soibelman's paper \cite{KS2}. The solution to this problem, really the key part of Kontsevich and Soibelman's argument, is to add new rays emanating from the point of intersection of the old rays, as depicted in Figure \ref{ksrays}. 
These rays are added in such a way as to guarantee that the composition of automorphisms given by a loop around the intersection point is in fact the identity, and thus the identifications will be independent of the choice of path. \begin{figure} \input{proprays.pstex_t} \caption{} \label{proprays} \end{figure} \begin{figure} \input{ksrays.pstex_t} \caption{} \label{ksrays} \end{figure} The description here is somewhat vague, but demonstrates the basic idea. We have seen how we obtain our degeneration by gluing together basic pieces. Apart from the difference in basic pieces, in two dimensions the main distinction between our approach and the one taken by Kontsevich and Soibelman in \cite{KS2} is that we work in the affine structure dual to the one \cite{KS2} works with. They propagate automorphisms along gradient flow lines, but we are able to propagate automorphisms along straight lines with respect to the affine structure. This saves a great deal of trouble in higher dimensions, where gradient flow lines would be much more difficult to control; this is what makes it possible for us to obtain results in all dimensions. We of course have not made it particularly clear how we really encode automorphisms and how they propagate, but we will make at least the first point clearer in the next section. For the second point, the main thing is that they propagate along straight lines; this in fact is crucial for guaranteeing that the automorphisms don't start to involve monomials with poles on irreducible components of $X_0$. So here we see something which looks tropical already, with the union of rays looking like a tropical tree. Again, we will make this more precise in the next section. In higher dimensions, the argument becomes much more subtle. Instead of rays carrying automorphisms, codimension one walls carry automorphisms, and one needs to be very careful about how these walls propagate. 
Furthermore, there are great technical difficulties concerning convergence of the algorithm near the discriminant locus. This was handled in \cite{KS2} in two dimensions via an argument showing that the new rays added can be guaranteed to avoid a neighbourhood of each singularity, but this is done by choosing the metric carefully. In higher dimensions this argument is no longer available, and instead we used algebraic methods to prove convergence. All these difficulties were overcome in \cite{Annals}. In \cite{Gbook} I wrote down a complete version of the proof in two dimensions; this has the advantage of avoiding most of the really technical issues. Hopefully, \cite{Gbook} provides a gentler entry point into the ideas outlined here than the main paper \cite{Annals}. We now turn to a more precise description of the automorphisms involved, and give evidence that the description of the explicit deformations (which we view as $B$-model information) really encodes $A$-model information on the mirror. \section{The tropical vertex} To simplify the discussion, we will work in this section only with the simplest rings which occur in the previous section, of the form $R^k_{\sigma,\sigma,\sigma}$ where $\sigma$ is a maximal (two-dimensional) cell. This ring is isomorphic to $\CC[x^{\pm 1},y^{\pm 1},t]/(t^{k+1})$. Let us work formally instead, setting \[ R=\CC[x^{\pm 1},y^{\pm 1}]\lfor t\rfor. \] This is the ring of formal power series in $t$ whose coefficients are Laurent polynomials in $x$ and $y$. Let $f\in R$ be of the form \[ f=1+tx^ay^b\cdot g(x^ay^b,t),\quad g(z,t)\in \CC[z]\lfor t\rfor. \] Then this defines an automorphism $\theta_{(a,b),f}$ of $R$ as a $\CC\lfor t\rfor$-algebra given by \[ \theta_{(a,b),f}(x)= x \cdot f^{b},\quad \theta_{(a,b),f}(y)=y\cdot f^{-a}. \] Note that $\theta_{(a,b),f}^{-1}=\theta_{(a,b),f^{-1}}$. These automorphisms have the further property that they preserve the holomorphic symplectic form ${dx\over x}\wedge {dy\over y}$. 
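This last invariance is a short direct computation; the following sketch (a standard calculation, written out here for convenience rather than taken from the references) records it, writing $\operatorname{dlog}u:=du/u$.

```latex
% Pull back the form under \theta=\theta_{(a,b),f}: since
% \theta(x)=xf^b and \theta(y)=yf^{-a},
\theta^*\left({dx\over x}\wedge{dy\over y}\right)
  =\left(\operatorname{dlog}x+b\operatorname{dlog}f\right)\wedge
   \left(\operatorname{dlog}y-a\operatorname{dlog}f\right)
  ={dx\over x}\wedge{dy\over y}
   -\operatorname{dlog}(x^ay^b)\wedge\operatorname{dlog}f.
% Since f=1+tz\,g(z,t) depends on x,y only through z=x^ay^b, we have
% df=(\partial f/\partial z)\,dz with dz=z\cdot\operatorname{dlog}(x^ay^b),
% so \operatorname{dlog}f is a multiple of \operatorname{dlog}(x^ay^b)
% and the last term vanishes.
```

The key point is simply that $f$ is a function of the single monomial $z^{(a,b)}=x^ay^b$ (and $t$).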
We define the \emph{tropical vertex group} $\VV$ to be the completion with respect to the maximal ideal $(t)\subseteq\CC\lfor t\rfor$ of the subgroup of $\CC\lfor t\rfor$-algebra automorphisms of $R$ generated by all such automorphisms. Note that an infinite product is defined in $\VV$ only if, for every $k>0$, all but finitely many factors are trivial modulo $t^k$. This is a slight modification of a group originally introduced by Kontsevich and Soibelman in \cite{KS2}. We now describe a local version of the rays described in the previous section. For convenience, set $M=\ZZ^2$, $M_{\RR}=M\otimes_{\ZZ}\RR$, and identify $\CC[x^{\pm 1},y^{\pm 1}]$ with $\CC[M]$. \begin{definition} A \emph{ray} or \emph{line} in $M_{\RR}$ is a pair $(\fod,f_{\fod})$, where $\fod=\RR_{\le 0}m$ if $\fod$ is a ray and $\fod=\RR m$ if $\fod$ is a line, for some $m\in M\setminus\{0\}$, and \[ f_{\fod}=1+tz^m\cdot g(z^m,t)\in R,\quad g(z,t)\in \CC[z]\lfor t\rfor. \] A \emph{scattering diagram} $\foD$ is a collection of rays and lines $\{(\fod,f_{\fod})\}$ with the property that for any $k>0$, $f_{\fod}\equiv 1 \mod t^k$ for all but a finite number of elements of $\foD$. \end{definition} Given a scattering diagram $\foD$, let \[ \Supp\foD=\bigcup_{(\fod,f_{\fod})\in\foD} \fod. \] If we are given a path $\gamma:[0,1]\rightarrow M_{\RR}\setminus\{0\}$ with $\gamma(0),\gamma(1)\not\in\Supp(\foD)$ and $\gamma$ transversal to each ray or line it crosses, then we can define the \emph{path-ordered product} $\theta_{\gamma,\foD}\in\VV$, a composition of the automorphisms associated to the rays and lines that $\gamma$ crosses. We define the path-ordered product modulo $t^k$ for each $k>0$, and take the limit. For any given $k>0$, we can find numbers \[ 0<t_1\le t_2\le \cdots\le t_s<1 \] and elements $\fod_i\in\foD$ with $f_{\fod_i}\not\equiv 1\mod t^k$ such that $\gamma(t_i)\in\fod_i$, $\fod_i\not=\fod_j$ if $t_i=t_j$, and with $s$ taken as large as possible. 
For each $i$ define $\theta_{\fod_i}$ as follows. Let $n\in N=\Hom(M,\ZZ)$ be the unique primitive element which vanishes on $\fod_i$ and is negative on $\gamma'(t_i)$. Then $\theta_{\fod_i}$ is the automorphism \[ \theta_{\fod_i}(z^m)=z^mf_{\fod_i}^{\langle n,m\rangle}. \] Note this is of the form $\theta_{(a,b),f_{\fod_i}}$ for a suitable choice of $(a,b)$. We then define \[ \theta^k_{\gamma,\foD}=\theta_{\fod_s}\circ\cdots\circ\theta_{\fod_1}. \] Note that if $\gamma$ crosses two rays at the same time, the order doesn't matter as one checks easily that two automorphisms commute if they are associated with the same underlying $\fod\subseteq M_{\RR}$. Finally, we define \[ \theta_{\gamma,\foD}=\lim_{k\rightarrow\infty} \theta^k_{\gamma,\foD}. \] We can then express the essential lemma of \cite{KS2} in this context: \begin{proposition} \label{KSLemma} Let $\foD$ be a scattering diagram. Then there is a scattering diagram $\Scatter(\foD)$ such that $\Scatter(\foD)\setminus\foD$ consists just of rays and $\theta_{\gamma,\Scatter(\foD)}$ is the identity for any loop $\gamma$ around the origin. \end{proposition} The proof is very simple and algorithmic; I give a quick outline. One constructs a sequence of scattering diagrams $\foD_1=\foD,\foD_2,\ldots$ with the property that $\theta_{\gamma,\foD_k}\equiv\id \mod t^{k}$. This is clearly true for $\foD_1$, so we proceed inductively, assuming we have constructed $\foD_k$. Then one shows (by looking at the Lie algebra of $\VV$) that \begin{align*} \theta_{\gamma,\foD_k}(x)\equiv {} & x\Big(1+\sum_{i=1}^n b_ic_it^kx^{a_i}y^{b_i}\Big)\mod t^{k+1}\\ \theta_{\gamma,\foD_k}(y)\equiv {} & y\Big(1-\sum_{i=1}^n a_ic_it^kx^{a_i}y^{b_i}\Big)\mod t^{k+1} \end{align*} for integers $a_i,b_i$ (with $a_i,b_i$ not both zero) and $c_i\in\CC$. 
Then one obtains $\foD_{k+1}$ by adding rays \[ (\RR_{\le 0}(a_i,b_i), 1\pm c_it^kx^{a_i}y^{b_i}),\quad 1\le i\le n \] with the sign chosen so that when $\gamma$ crosses this ray, it produces the automorphism \[ x\mapsto x(1-b_ic_it^kx^{a_i}y^{b_i})\mod t^{k+1},\quad y\mapsto y(1+a_ic_it^kx^{a_i}y^{b_i})\mod t^{k+1}. \] Since this automorphism will commute with all other automorphisms in $\foD_k$ modulo $t^{k+1}$, inserting these rays will precisely cancel out the contributions to $\theta_{\gamma,\foD_k}$ to order $k$, and thus $\theta_{\gamma,\foD_{k+1}}\equiv \id\mod t^{k+1}$. It is very easy to program this algorithm and explore these scattering diagrams. They appear to have a very rich and fascinating structure. The following simple examples show their complexity. \begin{example} \label{scatdiagexample} Consider the case that \[ \foD=\{(\RR (1,0), (1+tx^{-1})^\ell), (\RR (0,1), (1+ty^{-1})^\ell)\} \] for $\ell$ some positive integer. For $\ell=1$, it is easy to check that \[ \Scatter(\foD)\setminus\foD=\{(\RR_{\ge 0}(1,1),1+t^2x^{-1}y^{-1})\}. \] Figure \ref{oneone} shows explicitly what the automorphisms are as one traverses the depicted loop; the reader can easily check that the composition of the five automorphisms is the identity. \begin{figure} \input{oneone.pstex_t} \caption{$\Scatter(\foD)$ for $\ell=1$. Here the automorphisms are given explicitly, and the identity $\theta_{\gamma,\Scatter(\foD)}$ is just the composition of the given automorphisms.} \label{oneone} \end{figure} If $\ell=2$, then one finds \begin{eqnarray*} \Scatter(\foD) \setminus\foD=&&\{(\RR_{\ge 0}(n+1,n),(1+t^{2n+1}x^{-(n+1)}y^{-n})^2)| n\in\ZZ, n\ge 1\}\\ &\cup& \{(\RR_{\ge 0}(n,n+1),(1+t^{2n+1}x^{-n}y^{-(n+1)})^2)| n\in\ZZ, n\ge 1\}\\ &\cup&\{(\RR_{\ge 0}(1,1), (1-t^2x^{-1}y^{-1})^{-4})\}. \end{eqnarray*} This was first found experimentally by myself and Siebert via a computer program, and the first verification of this was given in \cite{GMN}. 
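As remarked above, it is easy to program this algorithm. The following minimal Python sketch (the encoding and all names are my own, not from any of the cited papers) implements the wall-crossing automorphisms on $\CC[x^{\pm 1},y^{\pm 1}][t]/(t^N)$ and checks, for $\ell=1$, that after inserting the single ray $(\RR_{\ge 0}(1,1),1+t^2x^{-1}y^{-1})$ the composition of the five automorphisms around a loop is the identity.

```python
# Elements of C[x^{±1}, y^{±1}][t]/(t^N), encoded as {(a, b, k): coeff}
# for the monomial x^a y^b t^k.
N = 6  # truncation order

def mul(f, g):
    h = {}
    for (a1, b1, k1), c1 in f.items():
        for (a2, b2, k2), c2 in g.items():
            if k1 + k2 < N:
                key = (a1 + a2, b1 + b2, k1 + k2)
                h[key] = h.get(key, 0) + c1 * c2
    return {k: c for k, c in h.items() if c != 0}

def power(f, n):
    # f = 1 + u with u divisible by t, so f^{-1} = sum_j (-u)^j mod t^N
    if n < 0:
        u = {k: c for k, c in f.items() if k != (0, 0, 0)}
        inv, term = {(0, 0, 0): 1}, {(0, 0, 0): 1}
        for _ in range(N):
            term = mul(term, {k: -c for k, c in u.items()})
            for k, c in term.items():
                inv[k] = inv.get(k, 0) + c
        f, n = {k: c for k, c in inv.items() if c != 0}, -n
    r = {(0, 0, 0): 1}
    for _ in range(n):
        r = mul(r, f)
    return r

def cross(f_wall, n_vec):
    # Wall crossing: z^m -> z^m * f_wall^{<n, m>}, with n the primitive
    # normal to the wall, chosen negative on the direction of travel.
    def theta(g):
        out = {}
        for (a, b, k), c in g.items():
            e = n_vec[0] * a + n_vec[1] * b
            for key, cc in mul({(a, b, k): c}, power(f_wall, e)).items():
                out[key] = out.get(key, 0) + cc
        return {k: c for k, c in out.items() if c != 0}
    return theta

fx = {(0, 0, 0): 1, (-1, 0, 1): 1}    # 1 + t x^{-1} on the line R(1,0)
fy = {(0, 0, 0): 1, (0, -1, 1): 1}    # 1 + t y^{-1} on the line R(0,1)
fd = {(0, 0, 0): 1, (-1, -1, 2): 1}   # 1 + t^2 x^{-1} y^{-1} on R_{>=0}(1,1)

# Crossings met by a counterclockwise loop starting just below the
# positive x-axis, with the appropriate normal n for each crossing.
loop = [cross(fx, (0, -1)), cross(fd, (1, -1)), cross(fy, (1, 0)),
        cross(fx, (0, 1)), cross(fy, (-1, 0))]

def around(g):
    for theta in loop:
        g = theta(g)
    return g

x, y = {(1, 0, 0): 1}, {(0, 1, 0): 1}
assert around(x) == x and around(y) == y  # the loop acts as the identity
```

The same machinery, run with a higher truncation order and the initial functions raised to the $\ell$-th power, can be used to explore the $\ell=2$ and $\ell=3$ diagrams experimentally.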
It also follows immediately from the results of \cite{GPS}, which will be explained in what follows. If $\ell=3$, the situation becomes even more complicated. First, as noticed by Kontsevich, $\Scatter(\foD)$ has a certain periodicity. Namely, \[ (\RR_{\ge 0} (m_1,m_2),f(x^{-m_1}y^{-m_2})) \in\Scatter(\foD) \] if and only if \[ (\RR_{\ge 0} (3m_1-m_2,m_1),f(x^{-(3m_1-m_2)}y^{-m_1})) \in\Scatter(\foD), \] provided that $m_1,m_2$ and $3m_1-m_2$ are all positive. In addition, there are rays with support $\RR_{\ge 0} (3,1)$ and $\RR_{\ge 0} (1,3)$, hence by the periodicity, there are also rays with support \[ \RR_{\ge 0}(8,3),\ \RR_{\ge 0}(21,8),\ \ldots \ \ \ \text{and}\ \ \ \RR_{\ge 0}(3,8),\ \RR_{\ge 0}(8,21),\ \ldots \] which converge to the rays of slope $(3\pm\sqrt{5})/2$, corresponding to the two distinct eigenspaces of the linear transformation $\begin{pmatrix}3&-1\\1&0\end{pmatrix}$. Each of these rays is of the form \[ \big(\RR_{\ge 0}(m_1,m_2), (1+t^{m_1+m_2}x^{-m_1}y^{-m_2})^3\big). \] These are the only rays appearing outside of the cone generated by the rays of slope $(3\pm\sqrt{5})/2$. On the other hand, inside this cone, every rational slope occurs, and the attached functions are very complicated. For example, the function attached to the ray of slope $1$ is \[ \left(\sum_{n=0}^{\infty} {1\over 3n+1}\begin{pmatrix}4n\\ n\end{pmatrix} t^{2n}x^{-n}y^{-n}\right)^9. \] Again, Siebert and I found this form via computer experiment, but it was verified by Reineke in \cite{Rei}. Recently, Kontsevich has shown the functions attached to all these rays are algebraic. For example, if $g$ denotes the $9$-th root of the above function, it satisfies the equation \[ t^2x^{-1}y^{-1}g^4-g+1=0. \] This series of examples also makes contact with a number of other interesting objects. 
On the one hand, Reineke in \cite{Rei} gave an interpretation of the attached functions in terms of Euler characteristics of moduli spaces of representations of the Kronecker $\ell$-quiver, the quiver with two vertices and $\ell$ arrows between them. On the other hand, these diagrams are also closely related to the cluster algebras defined by these quivers. This connection will be studied in more detail in work with Hacking, Keel, and Kontsevich \cite{GHKK}. \end{example} We will now explain the enumerative interpretation for the functions which arise in $\Scatter(\foD)$. To motivate this, let us return to the tropical interpretation of \S\ref{tropgeomsect}. Begin, say, with a tropical manifold $B$ which corresponds to a K3 surface, as depicted in Figure \ref{tropicaldisk}, along with what we will call a \emph{tropical disk}. This is almost a tropical curve, but it just ends at the point $P$ without any balancing condition at $P$; meanwhile, it has other legs terminating at the singularities of $B$. This is legal behaviour as explained at the end of \S\ref{tropgeomsect}. Following the description at the end of \S4, one can imagine disks over each leg terminating at a singular point. Where these legs meet, one would like to glue these disks together and continue along a cylinder over the segment adjacent to $P$. Terminating at $P$, we roughly obtain a disk in $X(B)$ with boundary contained in the torus fibre over $P$, as depicted. It is natural to ask how many ways the initial disks (possibly taking multiple covers of these disks) can be glued together to give a new disk. \begin{figure} \input{tropicaldisk.pstex_t} \caption{A tropical disk on an affine K3 surface. Here the $\times$'s indicate singular points, while the disk ``ends'' at the point $P$.} \label{tropicaldisk} \end{figure} Now compare this picture with what we have seen on the mirror side. Our explicit degeneration really gives, as generic fibre, something like $\check X(B)$. 
However, it is controlled by similar tropical information: rays emanate from the singularities in the monodromy invariant direction, just as in the case of the tropical curves. They collide, and the Kontsevich-Soibelman result in Proposition \ref{KSLemma} gives new rays. So one may hope that this process precisely reflects holomorphic disks in $X(B)$ with boundary on fibres of $X(B)\rightarrow B$. It is also worth mentioning work of Auroux \cite{Aur}, which makes more precise the notion that the complex structure on one side should be determined by holomorphic disks on the other. This also provides a posteriori support for the idea that there must be an enumerative interpretation for the process of generating new rays. It is usually difficult to work with holomorphic disks. It is often easier to translate problems involving holomorphic disks into problems involving genuine Gromov-Witten invariants. We can do so for the problems being discussed here. Here then is the enumerative interpretation, in the simplest situation, as explained in \cite{GPS}. Suppose we are given distinct non-zero primitive vectors $m_1,\ldots,m_p \in M$ and positive integers $\ell_1,\ldots,\ell_p$. Consider the scattering diagram \[ \foD=\{(\RR m_i, (1+tz^{-m_i})^{\ell_i})\,|\,1\le i\le p\}. \] Let $(\fod,f_{\fod})\in\Scatter(\foD)\setminus\foD$. We can always assume that this is the only ray in $\Scatter(\foD)\setminus\foD$ with a given underlying ray $\fod$. This is because if there are rays $(\fod_1,f_{\fod_1}),(\fod_2,f_{\fod_2}),\ldots$ in $\Scatter(\foD) \setminus\foD$ with $\fod_1=\fod_2=\cdots$, we can replace this collection of rays with a single ray $(\fod_1,\prod f_{\fod_i})$ without affecting $\theta_{\gamma,\Scatter(\foD)}$. With this assumption, $f_{\fod}$ is uniquely determined by $\foD$. We wish to interpret $f_{\fod}$ enumeratively. To do this, consider a complete fan $\Sigma$ in $M_{\RR}$ whose one-dimensional rays are \[ \RR_{\le 0}m_1,\ldots,\RR_{\le 0}m_p, \fod. 
\] Assume for the sake of simplicity in this discussion that $\fod$ does not coincide with the other $p$ rays. Let $X$ be the toric variety defined by $\Sigma$, with toric divisors $D_1,\ldots,D_p,D_{\out}$ corresponding to the above rays. Next, choose $\ell_i$ general points on the divisor $D_i$, say labelled $P_{i1},\ldots,P_{i\ell_i}$. Let $\nu:\widetilde X\rightarrow X$ be the blow-up of these $\sum_i \ell_i$ points, with exceptional divisor $E_{ij}$ over $P_{ij}$. Let $\widetilde D_i,\widetilde D_{\out}$ denote the proper transforms of $D_i,D_{\out}$. In what follows, we will use the notation ${\bf P}_i=(p_{i1},\cdots, p_{i\ell_i})$ for a partition of length $\ell_i$ of some non-negative integer $|{\bf P}_i| =p_{i1}+\cdots+p_{i\ell_i}$, allowing some of the $p_{ij}$'s to be zero. Fix a class $\beta\in H^2(X,\ZZ)$ with the property that $a_i:=\beta\cdot D_i$ are non-negative and $k:=\beta\cdot D_{\out}$ is positive. It is an easy exercise in toric geometry that this implies a relationship \[ \sum_{i=1}^p a_i m_i=km_{\out}, \] where $m_{\out}$ is a primitive generator of $\fod$. If one chooses a collection of partitions ${\bf P}=({\bf P}_1,\ldots,{\bf P}_p)$ where ${\bf P}_i$ is a partition of $a_i$, let \[ \beta_{\bf P}:=\nu^*\beta-\sum_{i=1}^p \sum_{j=1}^{\ell_i} p_{ij}E_{ij}. \] This can be thought of as the class of a curve on $X$ which passes through the point $P_{ij}$ precisely $p_{ij}$ times. We would now like to associate a number to this cohomology class. This will be a Gromov-Witten count of one-pointed rational curves in $\widetilde X$ which (1) represent the class $\beta_{\bf P}$; (2) are tangent to $\widetilde D_{\out}$ at the marked point with order $k$; and (3) are otherwise disjoint from any of the divisors $\widetilde D_i$. This is a relative Gromov-Witten invariant. However, the classical theory of relative Gromov-Witten invariants works relative to a smooth divisor, and of course the union of the boundary divisors here is singular. 
One can instead encode the above conditions using log Gromov-Witten theory. At the time \cite{GPS} was written, log Gromov-Witten theory was not yet available, and as a consequence, we used a technical work-around to reduce to the classical theory. I give this description here since it does not require knowing log Gromov-Witten theory. One defines $\widetilde X^o:=\widetilde X\setminus \bigcup_{i=1}^p \widetilde D_i$. One then considers the moduli space $\foM(\widetilde X^o/\widetilde D^o_{\out}, \beta_{\bf P})$ of relative stable maps of genus zero with target space $\widetilde X^o$, relative to the divisor $\widetilde D^o_{\out}=\widetilde D_{\out}\cap \widetilde X^o$. These curves have one marked point with order of tangency $k$ with $\widetilde D_{\out}^o$. The only problem is that the target space is non-proper, but one shows this causes no difficulty because the moduli space is nevertheless proper. One finds it has virtual dimension zero, and since it carries a virtual fundamental class, we can define \[ N_{\bf P}:=\int_{[\foM(\widetilde X^o/\widetilde D^o_{\out},\beta_{\bf P})]^{vir}} 1 \in\QQ. \] We can then state the enumerative result (\cite{GPS}): \begin{theorem} \label{mainGPStheorem} We have \[ \log f_{\fod}=\sum_{\beta}\sum_{\bf P} k(\beta) N_{\bf P} t^{\sum_i |{\bf P}_i|} z^{-k(\beta)m_{\out}}, \] where the sum is over all $\beta\in H^2(X,\ZZ)$ with $\beta\cdot D_i\ge 0$, $k(\beta):=\beta\cdot D_{\out}>0$, and partitions ${\bf P}$ with $|{\bf P}_i|=\beta\cdot D_i$. \end{theorem} \begin{example} Returning to Example \ref{scatdiagexample}, consider the function $f_{\fod}$ attached to the ray of slope $1$ for the cases $\ell=1,2$ and $3$. In each case, the surface $X$ is $\PP^2$, with coordinate axes $D_1,D_2$ and $D_{\out}$. Then $\widetilde X$ is obtained by blowing up $\ell$ points on each of $D_1$ and $D_2$. 
Considering first the case of $\ell=1$, we note that for $\beta=dH$, the class of a degree $d$ curve in $\PP^2$, the only relevant choice of ${\bf P}$ is ${\bf P}_1=d$, ${\bf P}_2=d$, and thus we have \[ \beta_{\bf P}=d\nu^* H - d E_{11}-d E_{21}. \] This represents the class of a curve of degree $d$ passing through the two blown-up points $d$ times each. It is easy to see that the only choice for such a curve is a $d$-fold cover of a line passing through the two points. Furthermore, this cover must be totally ramified over $D_{\out}$ to guarantee the required order of tangency with $D_{\out}$. This requires a virtual count, and the relevant localization calculations are carried out in \cite{GPS}, giving a value of $N_{{\bf P}}=(-1)^{d+1}/d^2$. Thus we get \[ \log f_{\fod}=\sum_{d=1}^{\infty} d \left({(-1)^{d+1}\over d^2}\right) t^{2d}x^{-d}y^{-d}. \] Exponentiating one finds $f_{\fod}=1+t^2x^{-1}y^{-1}$, agreeing with Example \ref{scatdiagexample}. So here we are just counting the one line through two points in $\PP^2$ along with certain multiple covers of this line. Going to $\ell=2$, and $\beta=dH$, one finds four choices for the partition in the case $d=1$, ${\bf P}=(1+0,1+0), (1+0,0+1), (0+1,1+0)$, and $(0+1,0+1)$. Each corresponds to a choice of one point on each of $D_1$, $D_2$, and one has one line through each of these pairs of points. Thus $N_{\bf P}=1$ for each choice of such ${\bf P}$. As in the case $\ell=1$, each of these lines also contributes to higher degree via multiple covers, with, say, ${\bf P}=(d+0,d+0)$ contributing $N_{\bf P}=(-1)^{d+1}/d^2$. For $d=2$, one sees there are no curves for ${\bf P}=(2+0,1+1)$, say, as this would require a conic with a node on $D_1$ and tangent to $D_{\out}$; such does not exist. But with ${\bf P}=(1+1,1+1)$, we look at conics passing through all four points and tangent to $D_{\out}$. It is very easy to see there are two such conics. 
One can then check that the only other curves contributing are multiple covers of one of the four lines or two conics. The multiple cover contribution for conics is actually different from that for lines, because the order of tangency with $D_{\out}$ is different. It turns out the correct contribution for a $d$-fold cover of a conic is $1/d^2$. Hence we find \[ \log f_{\fod}=4\sum_{d=1}^{\infty} d\left({(-1)^{d+1}\over d^2}\right)t^{2d}x^{-d}y^{-d} +2\sum_{d=1}^{\infty} 2d\left(1\over d^2\right)t^{4d}x^{-2d}y^{-2d} \] and exponentiating we get \[ f_{\fod}={(1+t^2x^{-1}y^{-1})^4\over (1-t^4x^{-2}y^{-2})^4}=(1-t^2x^{-1}y^{-1})^{-4}. \] In the case that $\ell=3$, one expects $3\times 3=9$ lines, as there is one line passing through each pair of choices of one point on $D_1$ and one point on $D_2$. For conics, one has double covers of these lines, for a contribution of $-9/4$, and $2\times 3\times 3=18$ conics. Here one needs to choose two points on $D_1$ and two points on $D_2$, and then there are two conics passing through these four points tangent to $D_{\out}$. For cubics, there is the contribution of triple covers of lines, for a total of $9/9$, and a number of contributions from plane cubics. It turns out that for ${\bf P}=(1+1+1,1+1+1)$, $N_{\bf P}=18$. Note this gives a count of nodal plane cubics passing through $6$ fixed points and for which $D_{\out}$ is a tritangent. On the other hand, for ${\bf P}=(1+2+0,1+1+1)$, $N_{\bf P}=3$. Note that there are a total of $12$ partitions of this shape. This latter count represents nodal cubics with the node at one of the chosen points, passing also through four other chosen points, with $D_{\out}$ being tritangent. One concludes that \[ \log f_{\fod}=9 t^2x^{-1}y^{-1}+2(-9/4+18)t^4x^{-2}y^{-2}+3(9/9+54)t^6x^{-3}y^{-3}+ \cdots. \] A direct comparison with the value given in Example \ref{scatdiagexample} gives agreement.
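The exponentiations carried out in this example all reduce to the elementary logarithm series; writing $u=t^2x^{-1}y^{-1}$, one has

```latex
\[
\sum_{d\ge 1}d\,\frac{(-1)^{d+1}}{d^{2}}\,u^{d}=\log(1+u),
\qquad
\sum_{d\ge 1}2d\,\frac{1}{d^{2}}\,u^{2d}=-2\log(1-u^{2}),
\]
% so that, e.g., in the case \ell=2,
\[
\log f_{\fod}=4\log(1+u)-4\log\bigl(1-u^{2}\bigr)=-4\log(1-u),
\]
% recovering f_{\fod}=(1-t^{2}x^{-1}y^{-1})^{-4}.
```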
\end{example} We end this section with brief additional motivation for Theorem \ref{mainGPStheorem} and a word about the proof. Suppose we have a piece of an integral affine manifold as depicted in Figure \ref{affinemanpiece}. Here we imagine a situation with two singular points in a surface, with local monodromy around the singularities contained in the horizontal and vertical line segments being $\begin{pmatrix}1&\ell_1\\ 0&1\end{pmatrix}$ and $\begin{pmatrix}1&\ell_2\\ 0&1\end{pmatrix}$ in suitably chosen bases (different for each segment). This is a slightly more general situation than was considered in \S\ref{Bmodelsect}, where we only discussed singularities with monodromy of the form $\begin{pmatrix}1&1\\ 0&1\end{pmatrix}$. Nevertheless, the techniques of that section still apply, but the functions attached to the initial rays emanating from the singularities towards the central vertex $v$ can be taken to be of the form $(1+x^{-1})^{\ell_1}$ and $(1+y^{-1})^{\ell_2}$. This is roughly the shape of the examples discussed above. Applying the scattering procedure would then produce a smoothing of $\check X_0(B,\P,s)$. However, on the mirror side, we interpret $B$ as a dual intersection complex, which means it should arise from a degeneration $\shX\rightarrow D$ where the central fibre $\shX_0$ has an irreducible component $Y_v$ isomorphic to $\PP^2$ (corresponding to the vertex $v$). Furthermore, the total space $\shX$ should have $\ell_1+\ell_2$ ordinary double points lying on the toric boundary of $Y_v$. If one blows up the Weil divisor $Y_v$ inside of $\shX$, one obtains a small resolution $\widetilde\shX\rightarrow \shX$ of these ordinary double points, and in particular, the proper transform $\widetilde Y_v$ of $Y_v$ is the blow-up of $Y_v$ at the points $Y_v\cap \Sing(\shX)$. This operation blows up $\ell_1$ points on one coordinate axis of $Y_v$ and $\ell_2$ on the other. This is exactly the same surface considered in Theorem \ref{mainGPStheorem}. 
Now consider the kind of curves on $\widetilde Y_v$ counted by Theorem \ref{mainGPStheorem}. These are curves in $\widetilde Y_v$ which only intersect the third coordinate axis at one point. These can be viewed as curves in $\widetilde\shX_0$, but not ones which deform to holomorphic curves in a general fibre of the family $\widetilde\shX\rightarrow D$. Rather, roughly, we expect such curves to deform to holomorphic disks, with the point of intersection with the singular locus of $\widetilde\shX_0$ (i.e., the point of intersection with the third axis of $\widetilde Y_v$) expanding into an $S^1$, giving the boundary of the holomorphic disk. Approximately, we expect this boundary to lie in a fibre of an SYZ fibration on a general fibre of the family $\widetilde\shX\rightarrow D$. The homology class of this boundary inside the fibre is determined by the order of tangency of the curve with the third axis. This correspondence between the relative curves considered in Theorem \ref{mainGPStheorem} and such holomorphic disks is only a moral one; there is no proof yet that we are really counting such holomorphic disks. However, this argument served as the primary motivation for Theorem \ref{mainGPStheorem}. Finally, as far as the proof is concerned, there are several steps. First, we show that scattering diagrams can be deformed to look like a union of tropical curves, and use a variant of Mikhalkin's fundamental curve-counting results \cite{Mik} as developed by Nishinou and Siebert \cite{NS} to show that scattering diagrams perform certain curve counts on toric surfaces. This is then related to the Gromov-Witten counts of the blown-up surfaces using Jun Li's gluing formula \cite{Li2}. \begin{figure} \input{affinemanpiece.pstex_t} \caption{A piece of an integral affine manifold with two singular points.} \label{affinemanpiece} \end{figure} \section{Other recent results and the future} I will close with a brief discussion of applications and future developments of the methods discussed here.
Recently a variant of the smoothing mechanism described here was used by myself, Hacking and Keel \cite{GHK} to give a very general construction of mirrors of pairs $(Y,D)$ where $Y$ is a rational surface and $D$ is an effective anti-canonical divisor forming a cycle of rational curves. We make use of \cite{GPS} to write down what we call the \emph{canonical scattering diagram}, which can be described entirely in terms of the Gromov-Witten theory of the pair $(Y,D)$ (and more specifically, counts of curves intersecting $D$ at only one point). This scattering diagram determines the mirror family. However, there is an additional crucial tool used to partially compactify the family constructed. This is necessary because unlike the affine manifolds considered in this paper, the natural one to associate to the pair $(Y,D)$ has a singularity at a vertex of the polyhedral decomposition. There is no local model for a smoothing at this vertex, and as a consequence, one constructs families which are ``missing'' a point. To add this point back, one needs to be sure there are enough functions on the family constructed, and it turns out homological mirror symmetry suggests a natural way to construct such functions. This can be done tropically, creating what we call \emph{theta functions}. The same construction applied to the case of degenerating abelian varieties indeed produces ordinary theta functions, and we anticipate the functions we construct in these other contexts will be similarly useful. See \cite{GStheta} for a survey of these ideas. The construction of \cite{GHK} then also solves a problem which pre-dates mirror symmetry. In particular, we prove a conjecture of Looijenga concerning smoothability of cusp singularities. Theta functions can be viewed as canonical bases for rings of functions on an affine variety or spaces of sections of line bundles on projective varieties. 
As such, they make contact with canonical bases in cluster algebra theory, providing a framework for constructing canonical bases of cluster algebras. We also expect that the techniques for surface pairs $(Y,D)$ will generalize. Indeed, a mirror partner to any maximally unipotent normal crossings degeneration of K3 surfaces can be constructed along similar lines, in work in progress with Hacking, Keel and Siebert. The expectation is that with an additional helping of log Gromov-Witten theory, one should be able to write down a general construction in all dimensions for mirror partners to maximally unipotent degenerations of Calabi-Yau manifolds. There still remains the question of extracting enumerative information from periods, which provided the original excitement in mirror symmetry. Here we showed how enumerative geometry can be reflected in the mirror, but in a rather local way. We expect that it should be possible to carry out the computation of period integrals to extract genus zero Gromov-Witten invariants of the mirror, but some technical issues remain in this direction. Nevertheless, the program of understanding mirror symmetry via degenerations, inspired by the SYZ conjecture, seems to provide a powerful framework for thinking about mirror symmetry inside the realm of algebraic and tropical geometry.
\section{Introduction} One of the key features of relativistic (massless) free particle states is that they evolve, at least in effectively one-dimensional situations, in time without spreading. This in turn relies on the linear nature of the relativistic dispersion relation which for photons reads $\omega(k)=ck$, with $c$ the speed of light. The condensed matter relativistic-particle analog is found in the low energy approximation (long wavelength) of single layer graphene where the chiral massless particles move with a speed $v_F\approx c/300$ \cite{novoselov1}. This linear spectrum provides graphene with remarkable transport properties such as high mobility\cite{geim}, Klein tunneling\cite{guinearmp} and unconventional spin Hall effect\cite{sarmarmp,kane}. The latter stems from the interplay between intrinsic spin-orbit interaction and the coupling extrinsically induced by external gate voltages or an appropriate substrate. This extrinsic coupling\cite{macdonald,guinea,stauber}, the so-called Rashba spin-orbit coupling (RSOC), has also been found to give rise to spin polarization \cite{rashbapol} and relaxation \cite{fabianrelax,guinearelax} effects. Although the role of static RSOC on graphene has been extensively discussed in the literature, to the best of our knowledge, the role of periodically driven time dependent RSOC on graphene samples has not been analysed so far. Yet, recent works have focused on the dynamical features of charge currents induced by means of time dependent extrinsic spin-orbit interaction on mesoscopic semiconductor quantum rings where Rabi oscillations are shown to appear as well as collapse and revival phenomena\cite{peeters1}. The main motivation of our work is twofold: First of all, we are interested in determining the feasibility of ac driven fields to generate and modulate a finite spin polarization of carriers in graphene for states that under static conditions remain unpolarized.
In addition, we are also interested in the dynamical modulation of the effective Rashba coupling strength $\Lambda=\lambda/\hbar\Omega$ which would allow us to explore regimes beyond the static limit domain. Taking advantage of the periodicity of the problem, the evolution equations can be solved via Floquet theory. A standard approach here consists of expressing the Hamiltonian in a Fourier mode expansion leading to an infinite-dimensional eigenvalue problem for the so-called quasi-energies \cite{milena,chu}. This quasi-energy spectrum carries nontrivial information on the topological nature of the system under study\cite{topological1}, and for semiconductor quantum wells with a zincblende structure it has been recently shown that ac driving can induce a topological phase transition\cite{fti}. Practically, in order to treat the infinite eigenvalue problem one has to truncate at an order of the harmonic expansion chosen appropriately to yield well-converged results. An alternative approach to the Floquet problem which does not rely on Fourier expansions has been devised by Magnus\cite{magnus1}. This method appears to be somewhat less popular and amounts to formulating the time evolution operator as the exponential of a series of nested commutators. It has the virtue of both preserving unitarity at any order in the series expansion (in contrast to truncation of the Dyson series within a perturbative approach) and avoiding the infinite-dimensional eigenvalue problem. Following\cite{magnus2} we will make use of the Magnus expansion approach combined with Floquet theory in order to generate semi-analytical solutions of the dynamics induced by periodic RSOC. Since RSOC couples the spin and pseudospin degrees of freedom the problem is, for a given wave vector, generically four-dimensional. However, by an appropriate unitary transformation, the evolution equations can be recast as a set of two equivalent two-level Schr\"{o}dinger equations related by time reversal.
In this way we can explicitly analyse which static states get dynamically coupled. Our main results are the following: The ac driven RSOC induces a quasi-energy spectrum where the original gap due to static spin-orbit coupling is dynamically closed. In particular, at the Dirac point ($\vec k=0$) the dynamics is exactly solvable with zero quasi-energy Floquet states. This quasi-energy spectrum is twofold degenerate as a consequence of time reversal invariance of spin-orbit interaction, and the closing of the original gap is due to the destructive interference induced among the initially uncoupled positive and negative energy RSOC eigenstates. Then we show that a sizeable alternating out of plane spin polarization ensues for states that under static conditions remain unpolarized. We also find that the uniform interference pattern shown by the autocorrelation function for static RSOC gets distorted due to the interlevel mixing of the static eigenstates, which dynamically modulates the relative phases that add up in the quantum revivals of the autocorrelation function. In the driven case quantum revivals (suppressions) are directly correlated to spin up (down) phases of the out of plane spin polarization. Since the autocorrelation function is related to the Fourier transform of the local density of states\cite{kramer}, and because spin probes can be more demanding in practical implementations than charge detection, its evaluation yields useful indirect information on the spin degree of polarization. We believe these findings have the potential to provide interesting new strategies to dynamically control spin properties of charge carriers in graphene for future spintronics applications. The paper is organized as follows. In section {\rm II} we describe the spectrum for static RSOC and introduce the model hamiltonian for periodically driven RSOC. Here we also present the exact solution to the dynamical equations corresponding to the Dirac point $k=0$.
The main results of the Floquet-Magnus approach for the semi-analytical solution of the evolution operator at finite momentum are presented in section {\rm III}. Next, in section {\rm IV} we compare the quasi-energy spectrum obtained through the Magnus approach to that given by making a rotating wave approximation. We also evaluate and discuss the out of plane spin polarization as well as the onset of quantum revivals for the autocorrelation function. Finally, in section {\rm V} we give some concluding remarks and discuss an experimental scenario where our results could be tested. \section{Model} We consider a graphene monolayer sample subject to a periodic time dependent spin-orbit interaction of the Rashba type. In graphene the RSOC interaction emerges as a consequence of $\sigma$ and $\pi$ orbital mixing\cite{yao} and stems from the induced electric field due to the substrate over which the graphene sample lies or by applied gate voltages. Then, a periodic modulation can in principle be implemented by means of time dependent gate voltages or by the induced time varying electric field within a parallel plate capacitor coupled to an LC circuit. Under these circumstances the induced RSOC perturbation could be given a periodic time dependence $V(t)=\lambda(t)\vec{s}\centerdot(\hat{z}\times\vec{\sigma})$, where the driving amplitude will be assumed to be periodic, $\lambda(t+T)=\lambda(t)$, with $\lambda(0)=\lambda_R$, the coupling strength in the static case. Concerning energy scales, the values of the intrinsic and extrinsic spin-orbit coupling parameters $\Delta$ and $\lambda_R$ in graphene have been obtained by tight binding \cite{guinea,zarea} and band structure calculations \cite{macdonald,yao}. These gave estimates in the range $10^{-6}-10^{-5} {\rm e V}$, much smaller than any other energy scale in the problem (kinetic, interaction and disorder).
However, the RSOC strength has recently been reported\cite{rashbaexp} to be of order $\lambda_R\approx 0.2\tau$, where $\tau\approx 2.8 {\rm e V}$ is the value of the first-neighbor hopping parameter for graphene within a tight binding approach. The formulation of the problem is as follows. In momentum space, and taking into account the energy scales of spin-orbit coupling, we can work within the so-called long wavelength approximation, where the total hamiltonian for monolayer graphene in presence of time dependent RSOC can be described by the $8\times8$ hamiltonian\cite{kane} \begin{equation} \mathcal{H}(\vec{k},t)=\hbar v_F(\sigma_x\tau_zk_x+\sigma_yk_y)s_0+\lambda(t)(\sigma_x\tau_zs_y-\sigma_ys_x),\label{dirac0} \end{equation} where $v_{F}\sim 10^6 {\rm m/s}$ is the Fermi velocity in graphene, $\vec{\sigma}=(\sigma_x,\sigma_y,\sigma_z)$ is a vector of Pauli matrices, with $\sigma_z=\pm1$ describing states on sublattice $A$ ($B$), the so-called pseudospin degree of freedom, whereas $\tau_z=\pm1$ describes the Dirac points ${\bf K}$ and ${\bf K}'$, respectively. In addition, $\vec{k}=(k_x,k_y)$ is the momentum measured from the ${\bf K}$ point and $s_i$ ({\rm i=0,x,y,z}) represents the real spin degree of freedom, with $s_0$ the identity matrix. Finally, $\lambda(t)$ gives the time dependence of the RSOC, and we have neglected the intrinsic spin-orbit contributions. Now since RSOC does not mix the valleys, we can focus on any of the two Dirac points, say ${\bf K}$, and then the results for the ${\bf K'}$ point are found by the substitution $k_x\rightarrow -k_x$. Yet, we will formulate the problem in an isotropic way such that the results for the ${\bf K'}$ Dirac point will immediately follow. Before dealing with the time dependent problem we summarize the main results for static RSOC.
The spectrum of the noninteracting Hamiltonian \begin{equation} \mathcal{H}_0=\hbar v_{F}\vec{\sigma}\centerdot\vec{k} \end{equation} is given by the linear dispersion relation \begin{equation} \epsilon^0_{\sigma}(k)=\sigma\hbar v_F\sqrt{k^2_x+k^2_y}\equiv\sigma\hbar v_F k, \end{equation} whereas its eigenbasis is spanned by the spinors \begin{equation} |\phi_\sigma(\vec{k})\rangle=\frac{1}{\sqrt{2}}\left( \begin{array}{c} 1\\ \sigma e^{i\theta} \end{array} \right) \end{equation} with $\tan\theta=k_y/k_x$, and $\sigma= 1$ ($-1$) describes electron (hole) states. When a static RSOC interaction term is present the hamiltonian near the $\bf{K}$ point reads \begin{equation} \mathcal{H}(\vec{k})=\hbar v_F(\sigma_xk_x+\sigma_yk_y)s_0+\lambda_R(\sigma_xs_y-\sigma_ys_x)\label{dirac01} \end{equation} and the energy spectrum changes to $\pm\epsilon_\pm$ with \begin{equation}\label{rso} \epsilon_{\pm}(k)=\pm\lambda_R+\sqrt{\lambda^2_R+(\hbar v_F k)^2}. \end{equation} Since RSOC mixes the $\sigma$ and $\pi$ atomic orbitals it induces a gap $\delta_0=2\lambda_R$ at the Dirac point $k=0$. The static Rashba hamiltonian in Eq.(\ref{dirac01}) is diagonalized by the unitary transformation $U(\vec{k})$ given explicitly as \begin{widetext} \begin{equation}\label{unitary} U(\vec{k})= \frac{1}{\sqrt{2}}\left( \begin{array}{cccc} - i e^{i\theta}\sin\gamma_+&-\cos\gamma_+&i\cos\gamma_+& e^{-i\theta}\sin\gamma_+ \\ - i e^{i\theta}\sin\gamma_-&\cos\gamma_-&-i\cos\gamma_-& e^{-i\theta}\sin\gamma_- \\ i e^{i\theta}\sin\gamma_-&-\cos\gamma_-&-i\cos\gamma_-& e^{-i\theta}\sin\gamma_- \\ i e^{i\theta}\sin\gamma_+&\cos\gamma_+&i\cos\gamma_+& e^{-i\theta}\sin\gamma_+ \end{array} \right), \end{equation} \end{widetext} where $\cos\gamma_\pm=\epsilon_\pm/\sqrt{(\hbar v_F k)^2+\epsilon^2_\pm}$. In this basis the static RSOC hamiltonian reads \begin{equation} \tilde{\mathcal{H}}(\vec{k})=\textrm{Diag}\{-\epsilon_+,\epsilon_-,-\epsilon_-,\epsilon_+\}.
\end{equation} This particular choice of basis will simplify the calculations that follow. We are interested in analyzing the emergent dynamics of Dirac fermions in monolayer graphene when the amplitude of RSOC is a periodically varying function of time, $\lambda(t)=\lambda_R\cos(\Omega t)$, with $\lambda_R$ and $\Omega$ the amplitude and frequency of the driving term. Then we would have to deal with the following $4\times4$ evolution equations \begin{equation}\label{time1} i\hbar\partial_t\Psi(\vec{k},t)=\mathcal{H}(\vec{k},t)\Psi(\vec{k},t). \end{equation} However, if we make use of the unitary transformation (\ref{unitary}) the time dependent hamiltonian (\ref{dirac0}) becomes isotropic and block-diagonal \begin{equation}\label{interaction} \tilde{\mathcal{H}}(k,t)=\left( \begin{array}{cc} h_-(k,t)&0 \\ 0 & h_+(k,t) \end{array} \right), \end{equation} with both sub-blocks periodic functions of time, i.e. $h_\pm(k,t+T)=h_\pm(k,t)$. Therefore the unitary transformation (\ref{unitary}) considerably simplifies the solution of the evolution equations by recasting the problem as a pair of $2\times 2$ two-level problems related by time reversal. In addition, it has the physically appealing feature of clearly identifying the subset of states which are dynamically coupled. Let us then focus on the upper block $h_-(k,t)$ which reads \begin{widetext} \begin{equation}\label{interaction0} h_-(k,t)=-\frac{2}{\epsilon_-+\epsilon_+}\left( \begin{array}{cc} (\hbar v_F k)^2+\lambda(t)\epsilon_+&\hbar v_F k [\lambda_R-\lambda(t)]\\ \hbar v_F k [\lambda_R-\lambda(t)]&-(\hbar v_F k)^2+\lambda(t)\epsilon_- \end{array} \right), \end{equation} \end{widetext} whereas the lower block is obtained by changing the sign of the amplitude, $\lambda_R\rightarrow-\lambda_R$. Because of this symmetry relation between the two subspaces, their quasi-energy spectra are identical. This is to be expected since RSOC is time reversal invariant.
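As a quick numerical consistency check of the static spectrum $\pm\epsilon_\pm$ quoted above, the $4\times4$ hamiltonian of Eq.(\ref{dirac01}) can be assembled from Kronecker products and diagonalized directly (a sketch in units $\hbar v_F=1$; the values of $k_x$, $k_y$ and $\lambda_R$ below are arbitrary):

```python
import numpy as np

# Sketch (units hbar*v_F = 1; lam_R and the k values are arbitrary): build the
# static 4x4 Rashba hamiltonian of Eq. (dirac01) as Kronecker products
# (sublattice) x (spin) and check its spectrum against +/- epsilon_{+/-}.
s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

def H_static(kx, ky, lam_R):
    kinetic = kx * np.kron(sx, s0) + ky * np.kron(sy, s0)
    rashba = lam_R * (np.kron(sx, sy) - np.kron(sy, sx))
    return kinetic + rashba

kx, ky, lam_R = 0.3, 0.4, 0.1
k = np.hypot(kx, ky)
E = np.sqrt(lam_R**2 + k**2)
eps_plus, eps_minus = lam_R + E, -lam_R + E   # Eq. (rso)

evals = np.sort(np.linalg.eigvalsh(H_static(kx, ky, lam_R)))
expected = np.sort([eps_plus, -eps_plus, eps_minus, -eps_minus])
print(np.allclose(evals, expected))  # True
```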
First of all, we notice that in the static limit $\lambda(t)\rightarrow\lambda_R$ the reduced hamiltonian (\ref{interaction0}) becomes diagonal \begin{equation} h_-(k)=\left( \begin{array}{cc} -\epsilon_+&0 \\ 0&\epsilon_- \end{array} \right) \end{equation} We also note that at the Dirac point $k=0$, one gets \begin{equation} h_-(0,t)=\left( \begin{array}{cc} -2\lambda(t)&0 \\ 0&0 \end{array} \right). \end{equation} In this case the resulting dynamics \begin{equation} i\hbar\partial_t|\phi(t)\rangle=h_-(0,t)|\phi(t)\rangle \end{equation} is exactly solved by the eigenspinors \begin{eqnarray}\label{zeroenergy} |\phi_1(t)\rangle&=&(e^{2if(t)},0)\\ \nonumber |\phi_2(t)\rangle&=&(0,1) \end{eqnarray} where \begin{equation} f(t)=\frac{1}{\hbar}\int^t_0d t'\lambda(t'). \end{equation} As will be discussed below, these solutions correspond to zero quasi-energy Floquet states. The corresponding evolution operator is diagonal and given as \begin{equation}\label{exact} U_-(0,t)=e^{i f(t)}\textrm{Diag}\{e^{i f(t)},e^{-i f(t)}\}. \end{equation} \section{Magnus-Floquet approach} Although we have shown that at $k= 0$ the dynamics is exactly solvable, this is no longer true for finite $k$. We therefore need to resort to approximate solutions. As we discuss below, a semi-analytical approach known as the Magnus-Floquet expansion is suitable for dealing with the dynamical equations of periodically driven systems. Since the Magnus-Floquet approach is not so popular in the literature we now briefly summarize its main results (see reference \cite{magnus2} for more detailed derivations). In the language of differential equations, the matrix solution $S(t)$ of an $n$-dimensional system of dynamical evolution equations (here we omit the orbital degrees of freedom for ease of notation), \begin{equation}\label{flo1} \partial_t\Psi(t)=A(t)\Psi(t) \end{equation} i.e.
a matrix that satisfies \begin{equation}\label{flo2} \partial_tS(t)=A(t)S(t) \end{equation} is called a fundamental matrix solution if all its columns are linearly independent. If, in addition, there is a time $t=t_0$ such that $S(t_0)$ is the identity matrix, then $S(t)$ is called a principal fundamental matrix solution. To solve Eq.(\ref{flo2}) Magnus\cite{magnus1} proposed to find exponential solutions to the evolution operator in the form \begin{equation} S(t)=e^{M(t)} \end{equation} and then wrote $M(t)$ as an infinite series \begin{equation} M(t)=\sum_{j=1}^\infty M_j(t), \end{equation} where each term $M_j(t)$ is given as a combination of nested commutators, with the first terms reading as \begin{widetext} \begin{eqnarray} M_1(t)&=&\int_0^t A(t_1)dt_1\\ M_2(t)&=&\frac{1}{2}\int_0^tdt_1\int^{t_1}_0[A(t_1),A(t_2)] dt_2\\ M_3(t)&=&\frac{1}{6}\int_0^tdt_1\int^{t_1}_0dt_2\int^{t_2}_0([A(t_1),[A(t_2),A(t_3)]]+[A(t_3),[A(t_2),A(t_1)]]) dt_3\\ &&\vdots\nonumber \end{eqnarray} \end{widetext} On the other hand, for periodic driving \cite{milena,chu} $A(t+T)=A(t)$, Floquet's theorem states that the principal fundamental solution of the dynamical equations can be written as \begin{equation} S(t)=P(t)e^{tF} \end{equation} where $P$ and $F$ are $n\times n$ matrices, such that $P(t)$ is periodic, $P(t+T)=P(t)$, and $F$ is time independent. Floquet's theorem is the time dependent analog of Bloch's theorem in solid state physics for spatially periodic structures, and it provides a time dependent transformation such that the so-called Floquet states evolve according to the time independent matrix $F$. This time dependent transformation is implemented by $P(t)$. One important remark is in order: although the interaction $A(t)$ is periodic, the corresponding evolution matrix $S(t)$ is not, i.e. $S(t+T)\neq S(t)$. In fact, $S(T)$ carries nontrivial information on the dynamics of the periodic system. The eigenvalues of $F$ are called Floquet exponents $\rho$.
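To make the first two Magnus terms concrete, the following numerical sketch compares the exact one-period propagator of a generic driven two-level system (an illustrative hamiltonian, not the $h_-(k,t)$ of this paper; all parameter values are arbitrary) with the truncated Magnus approximations $e^{M_1}$ and $e^{M_1+M_2}$:

```python
import numpy as np

# Illustrative sketch (not the paper's h_-(k,t)): for a driven two-level
# system H(t) = lam*sin(Omega*t)*sigma_x + Delta*sigma_z, with A(t) = -i*H(t),
# compare the exact one-period propagator S(T) against the Magnus
# approximations exp(M_1) and exp(M_1 + M_2). Parameters are arbitrary.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
lam, Delta, Omega = 0.05, 0.05, 1.0
T = 2 * np.pi / Omega

def A(t):
    return -1j * (lam * np.sin(Omega * t) * sx + Delta * sz)

def expm_antiherm(M):
    # exp(M) for anti-Hermitian M via the spectral theorem applied to 1j*M
    w, V = np.linalg.eigh(1j * M)
    return V @ np.diag(np.exp(-1j * w)) @ V.conj().T

N = 4000
dt = T / N
ts = (np.arange(N) + 0.5) * dt

# "Exact" propagator: ordered product of short-time exponentials
S = np.eye(2, dtype=complex)
for t in ts:
    S = expm_antiherm(A(t) * dt) @ S

# First two Magnus terms by Riemann quadrature:
# M_1 = int_0^T A dt,  M_2 = (1/2) int_0^T dt1 int_0^t1 [A(t1), A(t2)] dt2
As = np.array([A(t) for t in ts])
M1 = As.sum(axis=0) * dt
Bs = np.cumsum(As, axis=0) * dt          # Bs[i] ~ int_0^{t_i} A(t2) dt2
M2 = 0.5 * dt * sum(As[i] @ Bs[i] - Bs[i] @ As[i] for i in range(N))

err1 = np.linalg.norm(expm_antiherm(M1) - S)
err2 = np.linalg.norm(expm_antiherm(M1 + M2) - S)
print(err1 > err2)  # True: including M_2 improves the approximation
```

Note that every truncation $e^{M_1+\cdots+M_j}$ is exactly unitary, in contrast with a truncated Dyson series.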
These Floquet exponents can be found by diagonalizing $S(T)=e^{TF}$. Yet, they are not uniquely defined since $\rho\rightarrow\rho+2in\pi/T$ leaves $S(T)$ invariant. In order to determine those exponents one standard approach consists of performing an expansion in the (infinite) eigenbasis of time periodic functions $\xi_N(t)=e^{iN\Omega t}$ (Fourier modes). In that periodic basis the evolution operator is diagonalized, and the Floquet exponents are obtained from the logarithms, divided by $T$, of the eigenvalues of the evolution operator evaluated at $t=T$, i.e. of $S(T)$. Then, to deal with the infinite eigenvalue problem one resorts to a truncation procedure in order to determine the Floquet exponents. The Magnus approach avoids the need to solve the infinite dimensional eigenvalue problem and has the physical virtue of preserving unitarity of the evolved state to any order in the expansion. The connection between the Magnus expansion and Floquet's theorem is found in reference \cite{magnus2}, where the authors present a solution of the evolution equations that consists of writing the periodic part $P(t)$ as an exponential \begin{equation} P(t)=e^{\Omega(t)}\qquad \Omega(t+T)=\Omega(t), \end{equation} and then expanding both operators $\Omega(t)$ (not to be confused with the driving frequency $\Omega$) and $F$ in power series \begin{equation}\label{terms} \Omega(t)=\sum_{j=1}^\infty \Omega_j(t),\qquad F=\sum_{j=1}^\infty F_j. \end{equation} Now, since $S(t)$ is by construction a principal fundamental matrix solution, $P(T)=P(0)$ is the identity matrix and one gets for all values of $j$ \begin{equation} F_j=M_j(T)/T.
\end{equation} Introducing the Bernoulli numbers $B_l$ ($B_0=1$, $B_1=-1/2$, $B_2=1/6,B_4=-1/30\dots$) such that $B_{2m+1}=0$ for $m\ge 1$, the exponent operator term contributions $\Omega_j(t)$ satisfy a recurrence relation in terms of two auxiliary time dependent operators $W(t)$ and $T(t)$, according to the relations \begin{equation} \partial_t\Omega_j(t)=\sum^{j-1}_{l=0}\frac{B_l}{l!}(W^{(l)}_j(t)+(-1)^{l+1}T^{(l)}_j(t))\qquad (j\ge 1). \end{equation} In turn, the $W's$ and $T's$ are given through the iterative relations \begin{eqnarray} W^{(l)}_j&=&\sum^{j-l}_{m=1}[\Omega_m,W^{(l-1)}_{j-m}]\quad (1\le l\le j-1)\\ T^{(l)}_j&=&\sum^{j-l}_{m=1}[\Omega_m,T^{(l-1)}_{j-m}]\quad (1\le l\le j-1)\\ W^{(0)}_1&=&A,\qquad W^{(0)}_j=0\quad (j>1)\\ T^{(0)}_j&=& F_j,\qquad (j>0). \end{eqnarray} In practical calculations, the relations in the last two lines serve to initialize the iterative procedure. \section{Results and discussion} In order to apply the previous results to our problem one just has to make the following identifications: $S(t)\rightarrow U_-(k,t)$, $A(t)\rightarrow-ih_-(k,t)/\hbar$. Then, the characteristic exponents are proportional to the quasi-energies, $\rho=-iq_n$. \begin{figure}[ht] \begin{center} \includegraphics[width=9cm]{fig1.png} \label{figure1} \caption{(Color online) First Brillouin zone of the quasi-energy spectrum as a function of the dimensionless noninteracting quasiparticle energy. Due to the dynamical interlevel mixing the static gap $\delta_0$ gets closed (colored, thick continuous lines) as compared to the static interacting spectrum (gray, thin, dashed lines). Colored arrows depict the limit $\Lambda\rightarrow 0$ where the highly oscillatory contributions tend to cancel and the noninteracting spectrum is recovered (black, thin, dashed lines). The Floquet Fourier solutions (colored, thick, dashed lines) show qualitative agreement with the Magnus result; however, they converge more slowly for small $\Lambda$.
We have expressed all quantities in units of $\tau=2.8{\rm eV}$, the first neighbor hopping parameter within a tight-binding approach, and set $\Omega=1$.} \end{center} \end{figure} For our calculation we choose a time dependence of the form $\lambda(t)=\lambda_R\cos{\Omega t}$, where $\Omega$ is the frequency of the driving and $\lambda_R$ the RSOC strength, and proceed to evaluate $F_j$ by the iteration procedure described in the previous section. We have found a characteristic behavior of the (dimensionless) quasi-energies $\varepsilon_\pm=q_\pm/\hbar\Omega$, which does not change qualitatively when one goes beyond third order in the Magnus-Floquet expansion (in the appendix we briefly describe the calculations up to fifth order). For finite $\Omega$ they explicitly read as \begin{eqnarray}\label{quasi} \varepsilon_\pm&=&\pm\sqrt{\kappa^2 (16\kappa^2\Lambda^2+\Lambda^4-2\Lambda^2 + 1)}, \end{eqnarray} where we have defined the dimensionless quantities $\kappa=v_F k/\Omega$ and $\Lambda=\lambda_R/\hbar\Omega$. In {\rm FIG.1} we show the typical behavior of the quasi-energies $\varepsilon_\pm$ (blue and red, thick continuous lines) given in equations (\ref{quasi}) as a function of the dimensionless noninteracting quasiparticle energy $\epsilon^0$. We also show (gray, thin, dashed lines) the corresponding static eigenvalues $\epsilon_\pm$ of the RSOC hamiltonian as described by $h_-(k)$, as well as the eigenenergies of the noninteracting hamiltonian (thin, dashed, black lines). We have also included (red and blue, thick, dashed lines) the result from a $20$-mode Fourier expansion of the quasi-energies. We have expressed all quantities in units of the first neighbor hopping parameter $\tau=2.8{\rm eV}$ and for finite $\Omega$ have set $\Omega=1$ and vary the effective coupling $\Lambda$. This is true for all the figures within the dynamical case.
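The gap closing at the Dirac point and the $\Lambda\rightarrow 0$ limit visible in {\rm FIG.1} can also be verified by brute force, without any expansion. The sketch below (units $\hbar=v_F=\Omega=1$, so that the momentum variable is $\kappa$ itself; the parameter values are arbitrary) propagates $h_-(k,t)$ of Eq.(\ref{interaction0}) over one period and extracts the quasi-energies from the eigenphases of the monodromy operator:

```python
import numpy as np

# Sketch in units hbar = v_F = Omega = 1 (so kappa = k and Lambda = lam_R);
# the parameter values below are arbitrary. We propagate h_-(kappa, t) of
# Eq. (interaction0) over one period T = 2*pi, then extract the quasi-energies
# from the eigenphases of the monodromy operator U_-(kappa, T).
def h_minus(kappa, Lam, t):
    lam_t = Lam * np.cos(t)                  # lambda(t) = lam_R cos(Omega t)
    E = np.sqrt(Lam**2 + kappa**2)
    eps_p, eps_m = Lam + E, -Lam + E         # epsilon_+, epsilon_-
    off = kappa * (Lam - lam_t)
    return (-2.0 / (eps_m + eps_p)) * np.array(
        [[kappa**2 + lam_t * eps_p, off],
         [off, -kappa**2 + lam_t * eps_m]], dtype=complex)

def quasi_energies(kappa, Lam, n_steps=5000):
    T = 2.0 * np.pi
    dt = T / n_steps
    U = np.eye(2, dtype=complex)
    for n in range(n_steps):
        w, V = np.linalg.eigh(h_minus(kappa, Lam, (n + 0.5) * dt))
        U = (V @ np.diag(np.exp(-1j * w * dt)) @ V.conj().T) @ U
    return np.sort(-np.angle(np.linalg.eigvals(U)) / T)

print(quasi_energies(0.0, 0.3))   # ~ (0, 0): the static gap closes at k = 0
print(quasi_energies(0.2, 0.01))  # ~ (-0.2, 0.2): noninteracting limit
```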
Since $h_-(k,t)$ mixes the static eigenstates of $h_-(k)$, the relative phases among them are dynamically modulated, giving rise to interference phenomena, and we find that this leads to a dynamical closing of the original gap $\delta_0$. Therefore, the exact solutions at $k=0$ correspond to vanishing quasi-energies (modulo $\hbar\Omega$). We further find (see arrows in {\rm FIG.1}) that in the limit $\Lambda\rightarrow 0$, corresponding to a highly oscillating field, $\varepsilon_\pm\rightarrow\epsilon^0_\pm$, because the influence of the driving then quickly vanishes on average and one intuitively expects to recover the non-interacting linear spectrum. The result from a $20$-mode Fourier expansion shows qualitative agreement with the Magnus-Floquet approach; however, a small discrepancy is found, and the Magnus result converges faster in the limit of highly oscillating fields $\Lambda\rightarrow 0$, as depicted by thick arrows in {\rm FIG.1}. Within the Magnus-Floquet approach the evolution operator $U_-(\kappa,t)$ corresponding to the hamiltonian $h_-(\kappa,t)$ is found to be given as \begin{equation} U_-(\kappa,t)=e^{if(t)}e^{i\vec{V}(\kappa,t)\cdot\vec{p}}e^{i\vec{v}(\kappa,t)\cdot\vec{p}} \end{equation} where $\vec{p}$ is a vector of Pauli matrices and the non-vanishing components of the vector $\vec{V}$ are given as \begin{widetext} \begin{eqnarray} V_x(\kappa,t)&=&\frac{-\kappa\Lambda\sin{\Omega t}}{{\sqrt{\kappa^2+\Lambda^2}}}(4\kappa^2-2\Lambda^2+1+\Lambda^2\cos{\Omega t})\\ V_y(\kappa,t)&=& 2\kappa\Lambda(1-\cos{\Omega t})\\ V_z(\kappa,t)&=&\frac{\Lambda^2\sin{\Omega t}}{{\sqrt{\kappa^2+\Lambda^2}}}\left(6\kappa^2+1-\kappa^2\cos{\Omega t}\right) \end{eqnarray} \end{widetext} whereas those of $\vec{v}$ are \begin{eqnarray} v_x(\kappa,t)&=&\frac{t\kappa\Lambda}{{\sqrt{\kappa^2+\Lambda^2}}}\left(1+4\kappa^2-\Lambda^2 \right)\\ v_z(\kappa,t)&=&\frac{t\kappa^2}{{\sqrt{\kappa^2+\Lambda^2}}}\left(1-5\Lambda^2\right).
\end{eqnarray} At the Dirac point $\kappa= 0$ we find that $v_x,v_z,V_x,V_y$ all vanish, whereas $V_z=\Lambda\sin{\Omega t}=f(t)$, and thus we recover the exact solution (\ref{exact}). Before we proceed to evaluate other physical quantities of interest we make a brief digression on the importance of taking into account the full time dependence of the RSOC. In particular, due to the smallness of the RSOC strength one can try a rotating wave approximation (RWA), where for a given finite value of $k$ only terms near resonance ($\Omega\sim 2 v_F k$) are kept in the interacting hamiltonian. Now we show that for the present model this approach gives unphysical results; for instance, it predicts a gap opening at $k=0$. This in turn would imply finite quasi-energy mode contributions at the Dirac point, which contradicts the previously described exact result. Since the results are easily found within the $4$-dimensional formulation we briefly return to the original basis and make a full dimensional discussion. Within the RWA, the hamiltonian reads (here we omit the spatial degrees of freedom for ease of notation) \begin{equation} \mathcal{H}^{\textrm{RWA}}(t)=\mathcal{H}_0+i\lambda(\sigma_+s_-e^{i\Omega t}-\sigma_-s_+e^{-i\Omega t}), \end{equation} which for a given value of $k$ describes near-resonance ($\Omega\sim 2 v_F k$) spin and pseudospin flipping processes and neglects the so-called secular, or counter-rotating, terms that oscillate rapidly. In this case the solution is exact and the adimensional quasi-energy spectrum reads \begin{eqnarray} \varepsilon^{\textrm{RWA}}_{\sigma s}=\frac{s}{2}\sqrt{\delta^2_{res}+g_\sigma(\kappa)} \end{eqnarray} where $\delta_{res}=2\kappa- 1$ describes the resonance and $g_\sigma(\kappa)=\Lambda^2-\kappa+2\sigma\sqrt{\kappa(\Lambda^2+1)+\Lambda^4}$. \begin{figure}[ht] \begin{center} \includegraphics[width=9cm]{fig2.png} \label{figure2} \caption{(Color online) Quasi-energies as a function of momenta.
Gray continuous lines correspond to the static limit. The solution within this approach exhibits an unphysical gap opened at $\kappa=0$ (see main text).} \end{center} \end{figure} Evaluating at $\kappa=0$ and finite $\Omega$ one gets the gaps $\Delta=\sqrt{\Lambda^2+1/4}$ for the originally gapped states and a dynamically opened gap $\Delta_{dyn}=1$ for the statically degenerate states. These results are in disagreement with the exact solution to the full equations (\ref{zeroenergy}), where we had found zero energy solutions at $\kappa=0$, so this approach is not suitable to describe the dynamical features of the periodic driving. In {\rm FIG.2} we depict the quasi-energy spectrum as a function of the non-interacting energies $\epsilon^0$ for this RWA solution. After this brief discussion we turn our attention back to the Magnus-Floquet solution in order to get additional information on the dynamical behavior induced on the system. This can be found by evaluating some other physical quantities of interest from an experimental point of view. First we analyse the out of plane spin polarization $\mathcal{S}_z(\vec{k},t)=\langle\Psi(\vec{k},t)|S_z|\Psi(\vec{k},t)\rangle$, where $S_z=(\hbar/2)\sigma_0\otimes s_z$ and $\sigma_0$ is the identity matrix in pseudospin space. Using again the transformation (\ref{unitary}), the out of plane spin polarization operator reads $S_z(\vec{k})=U(k)S_zU^\dagger(\vec{k})$ and is found to be isotropic and block anti-diagonal \begin{equation} S_z(\vec{k})=\left(\begin{array}{cc} 0&s_-\\ s_+&0 \end{array} \right) \end{equation} with $s_\pm$ given explicitly as \begin{equation} s_\pm=-\frac{\hbar}{2\sqrt{\kappa^2+\Lambda^2}}(\kappa p_0\pm i\Lambda p_y) \end{equation} with $p_i$ a vector of Pauli matrices and $p_0$ the two dimensional identity matrix. The anti-diagonal structure of $\tilde{S}_z$ in this basis reflects the fact that spin polarization is not conserved in the presence of RSOC.
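The non-conservation of spin expressed by this anti-diagonal structure can be checked with a few lines of linear algebra. The explicit $4\times 4$ forms below are an assumption for illustration only (the standard low-energy kinetic term and the usual Rashba form, with $\hbar=v_F=1$ and momentum along $x$); the point is simply that $S_z$ commutes with the kinetic part but not with the Rashba term:

```python
import numpy as np

# Pauli matrices and the 2x2 identity
s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

Sz = 0.5 * np.kron(s0, sz)                  # out-of-plane spin S_z (hbar = 1)

# Hypothetical explicit 4x4 forms for illustration (hbar = v_F = 1,
# momentum along x): kinetic pseudospin part and the usual Rashba term
k, lam = 0.7, 0.3
H0 = k * np.kron(sx, s0)
HR = lam * (np.kron(sx, sy) - np.kron(sy, sx))

def comm(a, b):
    return a @ b - b @ a

assert np.allclose(comm(Sz, H0), 0)         # spin conserved without RSOC
assert not np.allclose(comm(Sz, HR), 0)     # spin NOT conserved with RSOC
```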
\begin{figure}[ht] \begin{center} \includegraphics[width=9cm]{fig3.png} \label{figure3} \caption{(Color online) Density plot showing the out of plane spin polarization plotted against time (in units of $\hbar/\tau$) for the exact solution at $\kappa=0$. The periodicity is inherited from the driving field. The vertical axis indicates the normalized adimensional field strength $\Lambda$, and one gets up ($+$) and down ($-$) spin phases equally separated by the zeroes of $f(t)$.} \end{center} \end{figure} \begin{figure}[ht] \begin{center} \includegraphics[width=9cm]{fig4.png} \label{figure4} \caption{(Color online) Density plot showing the time behavior of the out of plane spin polarization for the semi-analytical Magnus-Floquet solution at finite $\kappa=1$. Alternating maxima and minima appear due to interlevel mixing among different static eigenstates. This is also manifested by the loss of symmetry with respect to $t=\pi$ and stems from the driven modulated phases, since dynamical interlevel mixing leads to interference effects.} \end{center} \end{figure} Then we have to evaluate \begin{eqnarray} \mathcal{S}_z(\vec{k},t)=\langle\Psi(k,0)|U^\dagger(k,t)S_z(k)U(k,t)|\Psi(k,0)\rangle, \end{eqnarray} for any initially prepared state $|\Psi(k,0)\rangle$. Next we separate explicitly the four spinor $|\Psi(k,0)\rangle$ into upper and lower components as \begin{equation} |\Psi(k,0)\rangle=\frac{1}{\sqrt{2}}\left(\begin{array}{c} \psi_-\\ \psi_+ \end{array} \right) \end{equation} where $\psi_\pm$ are normalized two dimensional spinors.
After some algebra one gets for the spin polarization, in terms of adimensional parameters, \begin{equation} \mathcal{S}_z(\kappa,t)=\Re\left[\psi^\dagger_-e^{-2if(t)}e^{-i\vec{v}\cdot\vec{p}}e^{-i\vec{V}\cdot\vec{p}}s_-e^{i\vec{V}\cdot\vec{p}}e^{i\vec{v}\cdot\vec{p}}\psi_+\right]/2 \end{equation} \begin{figure}[ht] \begin{center} \includegraphics[width=9cm]{fig5.png} \label{figure5} \caption{(Color online) Density plot showing the behavior of the absolute value of the autocorrelation function plotted against time (in units of $\hbar/\tau$) and strength $\Lambda$ for the exact solution at $\kappa=0$. As expected, only for large values of the interaction strength does the evolved state depart considerably from the initial state configuration (see main text). } \end{center} \end{figure} \begin{figure}[ht] \begin{center} \includegraphics[width=9cm]{fig6.png} \label{figure6} \caption{(Color online) Density plot showing the behavior of the absolute value of the autocorrelation function against time (in units of $\hbar/\tau$) and RSOC strength $\Lambda$ for the Magnus-Floquet solution at $\kappa=1$. The phase interference effects discussed previously are again manifest; the onset of recurrences at small $\Lambda$ is a consequence of interlevel mixing.} \end{center} \end{figure} For a finite value of $\kappa$ we now choose the initial spinor configuration $\psi_\pm=(\pm i,1)/\sqrt{2}$, in such a way that the out of plane polarization vanishes, $\langle\Psi(k,t)|S_z(k)|\Psi(k,t)\rangle=0$, for a static RSOC. In {\rm FIG.3} we show a density plot of the resulting out of plane spin polarization for the exact solution $\kappa=0$. In this case, the only relevant parameter is the adimensional amplitude of the driving field $\Lambda$. As expected, for $\Lambda=0$ the system remains unpolarized for all values of the adimensional time $t$ (given in units of $\hbar/\tau$).
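At the Dirac point the exact solution reduces to $U_-(0,t)=e^{if(t)}e^{if(t)p_z}$ with $f(t)=\Lambda\sin\Omega t$, so the polarization formula above can be evaluated in a few lines. A sketch with $\hbar=\Omega=1$ and the initial spinors $\psi_\pm=(\pm i,1)/\sqrt{2}$; it reproduces the features of {\rm FIG.3}, vanishing for $\Lambda=0$ and at the zeroes of $f(t)$:

```python
import numpy as np

py = np.array([[0, -1j], [1j, 0]], dtype=complex)   # Pauli matrix p_y

def spin_polarization(t, lam, hbar=1.0):
    """S_z(kappa=0, t) from the exact Dirac-point solution
    U = e^{i f} e^{i f p_z}, with f(t) = lam*sin(t) (Omega = 1)."""
    f = lam * np.sin(t)
    s_minus = (1j * hbar / 2) * py                  # s_- = +(i hbar/2) p_y at kappa = 0
    exp_p = np.diag([np.exp(1j * f), np.exp(-1j * f)])   # e^{+i f p_z}
    exp_m = np.diag([np.exp(-1j * f), np.exp(1j * f)])   # e^{-i f p_z}
    psi_m = np.array([-1j, 1]) / np.sqrt(2)         # psi_- = (-i, 1)/sqrt(2)
    psi_p = np.array([+1j, 1]) / np.sqrt(2)         # psi_+ = (+i, 1)/sqrt(2)
    M = np.exp(-2j * f) * exp_m @ s_minus @ exp_p
    return float(np.real(psi_m.conj() @ M @ psi_p)) / 2

# Undriven case: the state stays unpolarized at all times
assert all(abs(spin_polarization(t, 0.0)) < 1e-12 for t in np.linspace(0, 6, 25))
# Finite drive: the polarization vanishes at the zeroes of f(t) ...
assert abs(spin_polarization(np.pi, 0.5)) < 1e-12
# ... but is finite in between
assert abs(spin_polarization(np.pi / 2, 0.5)) > 1e-3
```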
For finite values of the effective coupling, an alternating pattern of spin phases (denoted as $+$ and $-$, representing up and down, respectively) is seen to appear as time evolves. They are symmetrically distributed around the values $t=n\pi$, where $f(t)$, and thus the relative phases among the static RSOC eigenstates, vanish. However, as shown in figure {\rm FIG.4}, once $\kappa$ is finite this panorama qualitatively changes. In this case, the additional interference due to level mixing induces a pattern of alternating maxima $(+)$ and minima $(-)$ for $t_n=n\pi$, $n\in\mathbb{N}$. The reason for such a behavior is that for a given $t_n$ the evolution operator is given by $e^{TF}$, and thus the polarization maxima and minima $\mathcal{S}(\kappa,T)=\pm1/2$ depend essentially on the quasienergy spectrum properties. Increasing $\Lambda$ then brings these alternating maxima and minima closer together. When we move to the next period (corresponding to $t=4\pi$ in figure {\rm FIG.4}) the number of alternating maxima and minima is doubled, and so on. Therefore, the dynamical coupling produces a non-vanishing out of plane spin polarization with an oscillating time pattern resulting from the mixing of the static eigenstates, such that the changing relative phases, modulated by the time dependent interaction, prevent total destructive interference from occurring. Therefore, ac-driven RSOC provides a suitable means to dynamically control the degree of spin polarization and could in principle serve to generate non-vanishing and non-trivial spin polarized phases in otherwise unpolarized states in monolayer graphene. In order to complement the physical picture of the spin polarization scenario just described, we evaluated the autocorrelation function $A(\vec{\kappa},t)$. This is given by the projection of the evolved state $|\Psi(\vec{\kappa},t)\rangle$ onto a given (in principle arbitrary) initial spinor configuration $|\Psi(\vec{\kappa},0)\rangle$, i.e.
\begin{equation} A(\vec{\kappa},t) =\langle\Psi(\vec{\kappa},0)|\Psi(\vec{\kappa},t)\rangle. \end{equation} The absolute value of the autocorrelation function provides information on the so-called recurrences or quantum revivals of the dynamics, i.e. those values of the time parameter for which such an overlap is a maximum. In addition, its Fourier transform is proportional to the local density of states\cite{kramer}. In figure {\rm FIG.5} ({\rm FIG.6}) we plot the absolute value of $A(\kappa,t)$ obtained by means of the exact (semi-analytical) evolution operators. For the exact solution shown in figure {\rm FIG.5}, only large values of $\Lambda$ induce an appreciable phase change and the system remains mostly correlated to the initial state. As for the case of the out of plane spin polarization, for $t=t_n$ the autocorrelation gives maxima, corresponding to red (dark gray) zones in the figure. These signal the return of the system to the initial vanishing spin polarization. Maxima and minima of spin polarization correspond to partial quantum revivals. Given the definition of the autocorrelation as the probability that the system returns to its initial state, we find that for $\Lambda=0$, giving $\varepsilon=\kappa$, the loci of $A(\kappa,t_n)=1$ correspond to vanishing spin polarization and are thus equally spaced with $\delta t=\pi$. On the contrary, the maxima and minima of spin polarization correspond in this case to quantum suppressions and are shown as blue (dark) zones in the figure. As soon as we move towards finite values of $\kappa$ ({\rm FIG.6}) interference phenomena come again into play and we find recurrences, represented as red (dark gray) zones, and suppressions, described by purple (black) zones. These arise because of the constructive and destructive interference effects described previously and are modulated by the ac driving.
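For the exact Dirac-point propagator the autocorrelation can be written down directly: with $U_-(0,t)=\mathrm{diag}(e^{2if(t)},1)$ and, say, an equal-weight initial spinor (a hypothetical choice for illustration), one finds $|A|=|\cos f(t)|$, so full revivals occur exactly at the zeroes of $f$, $t_n=n\pi$, and only large drive strengths suppress the overlap appreciably. A minimal sketch ($\hbar=\Omega=1$):

```python
import numpy as np

def autocorrelation(t, lam):
    """A(kappa=0, t) = <psi(0)| U(0,t) |psi(0)> with the exact Dirac-point
    propagator U = diag(e^{2 i f}, 1), f(t) = lam*sin(t), for the
    (hypothetical) equal-weight initial spinor (1, 1)/sqrt(2)."""
    f = lam * np.sin(t)
    U = np.diag([np.exp(2j * f), 1.0])
    psi0 = np.array([1.0, 1.0]) / np.sqrt(2)
    return psi0.conj() @ U @ psi0             # equals e^{i f} cos f

# Full revivals |A| = 1 at the zeroes of f(t), i.e. t_n = n*pi
assert abs(abs(autocorrelation(np.pi, 0.5)) - 1) < 1e-12
# Weak driving barely dephases the state ...
assert abs(autocorrelation(np.pi / 2, 0.1)) > 0.99
# ... while strong driving can fully suppress the overlap
assert abs(autocorrelation(np.pi / 2, np.pi / 2)) < 1e-12
```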
\begin{figure}[ht] \begin{center} \includegraphics[width=9cm]{fig7.png} \label{figure7} \caption{(Color online) Density plot showing the behavior of the absolute value of the autocorrelation function versus normalized time and RSOC strength $\Lambda$ for the static interaction at $\kappa=0$. In this case, quantum revivals are separated by the inverse of the gap $\delta_0=2\lambda$, in accordance with the Heisenberg uncertainty principle.} \end{center} \end{figure} \begin{figure}[ht] \begin{center} \includegraphics[width=9cm]{fig8.png} \label{figure8} \caption{(Color online) For finite values of momentum, $\kappa=1$, the recurrences are modulated, signaling the coexistence of revivals for times inversely proportional to the static gap $\delta_0=2\lambda$ and those corresponding to the larger energy separation $\delta=\sqrt{k^2+\lambda^2}$, which correspond to a smaller time and are seen in the figure as a modulated alternating pattern of maxima and minima on each recurrence curve of the $k=0$ case. } \end{center} \end{figure} To check that this physics is different from the static scenario, in figures {\rm FIG.7} and {\rm FIG.8} we depict the corresponding contour plots for a time independent RSOC interaction. For the $\kappa=0$ case shown in {\rm FIG.7}, the alternating pattern of quantum revivals and suppressions is separated by the inverse of the energy gap $\delta_0$. This is due to the fact that in this situation the relative phase among the non interacting eigenstates is set by the energy separation among them, i.e. $\delta_0$, and according to Heisenberg's time-energy uncertainty relation one should have $\delta t$ inversely proportional to the energy separation. As soon as $\kappa$ is finite, a quasi homogeneous pattern of recurrences is seen.
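The uncertainty-relation argument can be illustrated with a static two-level toy model (an illustration, not the full $4$-dimensional hamiltonian): an equal superposition of eigenstates split by a gap $\delta_0$ has $|A(t)|=|\cos(\delta_0 t/2)|$ in units with $\hbar=1$, so successive revivals are spaced by $\delta t=2\pi/\delta_0$, inversely proportional to the gap:

```python
import numpy as np

def static_autocorrelation(t, gap):
    """|A(t)| for an equal superposition of two static eigenstates split
    by `gap` (hbar = 1): phases e^{-/+ i gap t / 2} give |cos(gap*t/2)|."""
    amp = 0.5 * (np.exp(-1j * gap * t / 2) + np.exp(+1j * gap * t / 2))
    return abs(amp)

gap = 2 * 0.4                      # static gap delta_0 = 2*lambda with lambda = 0.4
revival_spacing = 2 * np.pi / gap  # spacing between successive |A| = 1 revivals

assert abs(static_autocorrelation(revival_spacing, gap) - 1) < 1e-12
assert abs(static_autocorrelation(0.5 * revival_spacing, gap)) < 1e-12  # full suppression halfway
```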
Therefore, although in the static case the spin polarization vanishes for all values of the RSOC strength, recurrences still appear, indicating that there is no longer a correlation between $S_z$ and $A(t)$; for the system under study these quantities are only correlated under time dependent driving. Two comments are in order here. The first one concerns the validity of the Dirac fermion hamiltonian approximation in dealing with spin-orbit related phenomena in monolayer graphene. As discussed in reference\cite{yao}, the large gap of the hamiltonian describing the $\sigma$ orbitals implies that the effective hamiltonian including SOC is essentially given by the long wavelength or Dirac hamiltonian considered in our model. In addition, M. Gmitra et al.\cite{mitra} used first principles calculations to discuss the relevance of spin-orbit related physics in graphene near and beyond the Dirac points. The second point to remark is the role of localized impurities. As has been discussed in\cite{castro}, the presence of local impurities can enhance the value of the static intrinsic spin-orbit coupling strength because of the induced $sp^3$ distortion, which leads to a hybridization of the $\pi$ and $\sigma$ orbitals; as shown in reference\cite{mitra}, the RSOC only responds to this $\sigma-\pi$ hybridization. Therefore, we would expect our results to be robust, or even enhanced, if localized impurities were included in the model. Now we would like to compare with other proposals for the dynamical modulation of energy gaps under ac driving in graphene. For instance, Oka and Aoki\cite{oka} found that circularly polarized intense laser fields can induce a photovoltaic effect in graphene, that is, a Hall effect without magnetic fields. This in turn relies on the gap opening at the Dirac point $k=0$.
However, as shown recently by Zhou and Wu\cite{wu} in analyzing the optical response of graphene under intense THz fields, the contributions from large momenta cause the dynamically opened effective gaps to close again. Our model could in principle be mapped onto their scenario of a linearly polarized radiation field. Yet, in both works \cite{wu,foa} the case of linear polarization leads to a linear quasienergy spectrum. Therefore, from the semi-parabolic quasi-energy spectrum shown in figure (FIG.\ref{figure1}) we can infer that the bending of the quasienergy spectrum makes the spin-orbit driven scenario qualitatively different from these other approaches to the dynamical control of graphene electronic properties. Our argument relies on the fact that the topological properties of periodically driven systems are characterized by the quasienergy spectrum\cite{topological1}, so we can conclude that the physics of ac driven spin-orbit coupling does reveal new and interesting electronic properties which are absent in the static regime. \section{Conclusions} We have investigated the role of periodically driven RSOC in monolayer graphene and recast the original $4$-dimensional problem as an equivalent set of two two-level problems. Due to the induced modulation of the relative phases among the static eigenstates we found a closing of the static gap at the Dirac point $k=0$. This result is in agreement with the available exact solution and differs from the RWA, where an unphysical gap is seen to appear. This physical picture is confirmed through a Fourier mode expansion, and we found that the Magnus-Floquet approach indeed has the advantage of providing the quasienergy spectrum with less computational effort.
We also found that the generation and manipulation of an out of plane spin polarization for otherwise spin-unpolarized states requires the time dependent driving, and should be realizable within this setup. Due to the induced interlevel mixing among the static eigenstates we found a set of alternating positive and negative spin phases, in clear distinction to the well separated spin phases at the Dirac point corresponding to the exact solution. The dynamical onset of quantum revivals, described through the autocorrelation function, is directly correlated to the appearance of maxima for either spin phase. In the static case, however, such a correlation does not ensue, since the spin polarization vanishes identically whereas the quantum revivals are still present. Concerning the actual experimental realization of our proposal, we believe it could be implemented by means of Magnetic Resonance Force Microscopy, as reported in\cite{rugal}. Within this scheme, single spin polarization could be detected by means of the frequency shift induced on a cantilever that is used to scan the sample. The sign of the cantilever's frequency shift can be associated to the spin polarization. This detection is achieved by means of low intensity magnetic fields under resonant conditions, thus no magnetic coupling terms need to be included in the description of the dynamics of the charge carriers in graphene. In this way, although RSOC in graphene has in principle a small static strength, the time dependent effective phenomena produce some interesting spin controlling strategies which we believe could provide a route to new implementations of graphene in spintronic devices with appropriate spin detection techniques. {\it{Acknowledgments}--} Z.Z.S. thanks the Alexander von Humboldt Foundation (Germany) for a grant, and A.L. thanks DAAD-FUNDAYACUCHO for financial support. This work has been supported by Deutsche Forschungsgemeinschaft via GRK 1570.
\section{Introduction} This paper is concerned with the combinatorial structure of the polytopes of probability models which are widely studied in quantum information and foundations. Much current study focusses on the \emph{no-signalling polytope} \cite{popescu2014nonlocality,pironio2011extremal}, comprising those probability models whose correlations are consistent with the constraints imposed by relativity, and on characterizing the set of quantum correlations \cite{NavascuesPironioAcin2008}, which is contained within this polytope. As is well known, quantum correlations exceed those which can be achieved using classical, ``local realistic'' models. While Bell's original proof of this fact \cite{Bell-thm} used the detailed probabilistic structure of the models arising from quantum mechanics to show that they violated the Bell inequalities which hold for classical models, subsequent proofs by Greenberger, Horne and Zeilinger \cite{GHZ,GHSZ90}, Hardy \cite{Hardy92:nonlocality1,Hardy93:nonlocality2}, and others \cite{cabello2001bell,zimba1993bell,Mermin90:QuantumMysteriesRevisited-SimplifiedGHZ1}, were inequality-free, and even probability-free. In \cite{AbramskyBrandenburger}, a hierarchy of forms of non-locality, or more generally contextuality, was defined. The higher levels, of \emph{logical contextuality} (generalizing Hardy arguments), \emph{strong contextuality} (generalizing GHZ arguments), and \emph{All-versus-Nothing contextuality} \cite{AbramskyEtAl:ContextualityCohomologyAndParadox}, use only the \emph{possibilistic} information from the probability models. That is, they need only the information about which events are possible (probability greater than zero) or impossible (probability zero). In other words, they only refer to the \emph{supports} of the probability distributions. Passing from probability models to their supports (the ``possibilistic collapse'') evidently loses a great deal of information. 
In this paper, we show that nevertheless, the \emph{combinatorial structure} of the no-signalling polytope is completely captured by the possibilistically collapsed models, thus confirming that much structural information can in fact be gained from these apparently much simpler models. In more precise terms, the \emph{combinatorial type} of a polytope is given by its \emph{face lattice} \cite{ziegler1995lectures}, the set of faces of the polytope, ordered by inclusion. These face lattices have a rich structure, and have been extensively studied in combinatorics. Our main result can be stated as follows. \begin{theorem} Fix a ``measurement scenario'', specifying a set of variables which can be observed or measured, the possible outcomes of these measurements, and which variables are compatible and can be measured together. We can then define a polytope $\mathcal{N}$ of no-signalling probability models over this scenario. Call the face lattice of this polytope $\mathcal{F}$. Now let $\mathcal{S}$ be the set of supports or possibilistic collapses of the models in $\mathcal{N}$. $\mathcal{S}$ is naturally ordered by context-wise inclusion of supports. Then there is an order-isomorphism \[ \mathcal{F} \cong \mathcal{S} . \] \end{theorem} Thus the combinatorial type of the polytope $\mathcal{N}$ is completely determined by its possibilistic collapse. This result has a number of interesting corollaries. For example: \begin{itemize} \item All the models in the relative interior of each face $F \in \mathcal{F}$ have the same support. \item The vertices of $\mathcal{N}$ are exactly the probability models with minimal support in $\mathcal{N}$. Moreover, there is only one probability model in $\mathcal{N}$ for each such minimal support. \item The vertices of $\mathcal{N}$ can be written as the disjoint union of the local, deterministic models --- the vertices of the polytope of classical models --- and the strongly contextual models with minimal support. 
\item Thus the extremal contextual probability models are completely determined by their supports. \end{itemize} In fact, this result applies to a much wider class of polytopes. Note that the no-signalling polytope, for any given measurement scenario, is defined by the following types of constraint: \begin{itemize} \item Non-negativity \item Linear equations: namely normalization, and the no-signalling conditions. \end{itemize} In geometric terms, this says that $\mathcal{N} = \mathcal{H}_{\geq \mathbf{0}} \cap \mathsf{Aff} (\mathcal{N})$, where $\mathsf{Aff}(\mathcal{N})$ is the affine subspace generated by $\mathcal{N}$, and $\mathcal{H}_{\geq \mathbf{0}}$ is the non-negative orthant, \textit{i.e.}~ the set of all vectors $\mathbf{v}$ with $\mathbf{v} \geq \mathbf{0}$. It is a standard result that every linear program can be put in a ``standard form'' of this kind \cite{matousek2007understanding}, so that its associated polytope of constraints is of the type we are considering. Now our theorem in fact applies at this level of generality. \begin{theorem}[General Version] Let $P$ be a polytope such that $P = \mathcal{H}_{\geq \mathbf{0}} \cap \mathsf{Aff}(P)$. Let $\mathcal{F}(P)$ be the face lattice of $P$. Let $\mathcal{S}(P)$ be the set of ``supports'' of points in $P$, \textit{i.e.}~ $0/1$-vectors where each positive component of $\mathbf{v} \in P$ is mapped to 1, while each zero component is left fixed. $\mathcal{S}(P)$ is naturally ordered componentwise, with $0 < 1$. Then there is an order-isomorphism \[ \mathcal{F}(P) \cong \mathcal{S}(P) . \] \end{theorem} The structure of the remainder of the paper is as follows. In Section~2, we provide background on measurement scenarios and empirical models. In Section~3 we review some basic notions on partial orders and lattices, and in Section~4, we review some standard material on polytopes. 
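The support map of the General Version can be illustrated concretely. In the toy example below (the standard $2$-simplex, a hypothetical choice that is of the required form $P=\mathcal{H}_{\geq \mathbf{0}} \cap \mathsf{Aff}(P)$), every point in the relative interior of a face has the same support, namely the componentwise join of the supports of the vertices spanning that face:

```python
from fractions import Fraction

def support(v):
    """Possibilistic collapse: map each positive component to 1, keep zeroes."""
    return tuple(1 if x > 0 else 0 for x in v)

def convex_mix(points, weights):
    """Convex combination of points (exact rational arithmetic)."""
    return tuple(sum(w * x for w, x in zip(weights, col))
                 for col in zip(*points))

# Vertices of the standard 2-simplex, a toy polytope of the required form
v1, v2, v3 = (1, 0, 0), (0, 1, 0), (0, 0, 1)

# A point in the relative interior of the face spanned by v1 and v2
p = convex_mix([v1, v2], [Fraction(1, 3), Fraction(2, 3)])
assert support(p) == (1, 1, 0)   # the join (componentwise OR) of the vertex supports

# Every point in the relative interior of that face has the same support
q = convex_mix([v1, v2], [Fraction(1, 2), Fraction(1, 2)])
assert support(q) == support(p)
```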
We prove our main result in Section~5, and apply it to probability polytopes, in particular no-signalling polytopes, in Section~6. \section{Measurement Scenarios and Empirical Models} \label{empsec} We shall give a brief introduction to the basic notions of measurement scenarios and empirical models, as developed in the sheaf-theoretic approach to contextuality and non-locality. For further discussion, motivation and technical details, see \cite{AbramskyBrandenburger,AbramskyEtAl:ContextualityCohomologyAndParadox}. An extended introduction and overview is given in \cite{Abr15}. A \emph{measurement scenario} is a triple $(X, \mathcal{M}, O)$ where: \begin{itemize} \item $X$ is a set of variables which can be measured, observed or evaluated \item $\mathcal{M}$ is a family of sets of variables, those which can be measured together. These form the \emph{contexts}. \item $O$ is a set of possible outcomes or values for the variables. \end{itemize} In this paper, we shall only consider \emph{finite} measurement scenarios, where the sets $X$, $O$, and hence also $\mathcal{M}$ are finite. This allows us to avoid measure-theoretic technicalities, while capturing the primary objects of interest in quantum information and foundations. Given a measurement scenario $(X, \mathcal{M}, O)$, an \emph{empirical model} for this scenario is a family $\{ e_C \}_{C \in \mathcal{M}}$ of probability distributions \[ e_C \in \mathsf{Prob}(O^C), \quad C \in \mathcal{M} . \] Here we write $\mathsf{Prob}(X)$ for the set of probability distributions on a finite set $X$. Such distributions are simply represented by functions $d: X \rightarrow [0, 1]$ which are normalized: \[ \sum_{x \in X} d(x) = 1 . \] The probability of an event $S \subseteq X$ is then given by \[ d(S) \; = \; \sum_{x \in S} d(x) . \] The set $O^C$ is the set of all assignments $s : C \rightarrow O$ of outcomes to the variables in the context $C$. 
Such an assignment represents a \emph{joint outcome} of measuring all the variables in the context. Given $e_C \in \mathsf{Prob}(O^C)$, and a subset $U \subseteq C$, we have the operation of restricting $e_C$ to $U$ by \emph{marginalization}: \[ e_C |_{U} \in \mathsf{Prob}(O^U) , \] defined by: \[ e_C |_{U}(s) \; = \; \sum_{t \in O^C, \; t |_U = s} e_C(t) . \] Here $t |_U$ is the restriction of the assignment $t$ to the variables in $U$. We say that an empirical model $\{ e_C \}_{C \in \mathcal{M}}$ is \emph{no-signalling} if for all $C, C' \in \mathcal{M}$: \[ e_C |_{C \cap C'} \; = \; e_{C'} |_{C \cap C'} . \] Thus an empirical model is no-signalling if the marginals from any pair of contexts to their overlap agree. This corresponds to a general form of an important physical principle. Suppose that $C = \{ a, b \}$, and $C' = \{ a, b' \}$, where $a$ is a variable measured by an agent Alice, while $b$ and $b'$ are variables measured by Bob, who may be spacelike separated from Alice. Then under relativistic constraints, Bob's choice of measurement --- $b$ or $b'$ --- should not be able to affect the distribution Alice observes on the outcomes from her measurement of $a$. This is captured by saying that the distribution on $\{ a \} = \{ a, b \} \cap \{ a, b' \}$ is the same whether we marginalize from the distribution $e_C$, or the distribution $e_{C'}$. The general form of this constraint is shown to be satisfied by the empirical models arising from quantum mechanics in \cite{AbramskyBrandenburger}. \textbf{Example} Consider the following table: \begin{center} \begin{tabular}{ll|ccccc} A & B & $(0, 0)$ & $(1, 0)$ & $(0, 1)$ & $(1, 1)$ & \\ \hline $a$ & $b$ & $0$ & $1/2$ & $1/2$ & $0$ & \\ $a'$ & $b$ & $3/8$ & $1/8$ & $1/8$ & $3/8$ & \\ $a$ & $b'$ & $3/8$ & $1/8$ & $1/8$ & $3/8$ & \\ $a'$ & $b'$ & $3/8$ & $1/8$ & $1/8$ & $3/8$ & \end{tabular} \end{center} This represents a situation where Alice and Bob can each choose measurement settings and observe outcomes. 
Alice can choose the settings $a$ or $a'$, while, independently, Bob can choose $b$ or $b'$. The total set of variables is $X = \{ a, a', b, b' \}$. The \emph{measurement contexts} are \[ \mathcal{M} \; = \; \{\{ a, b \}, \;\; \{ a', b\}, \;\; \{ a, b' \}, \;\; \{ a', b' \} \}.\] Each measurement has possible outcomes in $O = \{ 0, 1 \}$. The matrix entry at row $(a', b)$ and column $(0,1)$ corresponds to the \emph{event } \[ \{ a' \mapsto 0, \; b \mapsto 1 \} . \] Each row of the table specifies a \emph{probability distribution} $e_C$ on events $O^C$ for a given choice of measurements $C$. Thus the table directly corresponds to an empirical model on the measurement scenario $(X, \mathcal{M}, O)$. We verify that this table satisfies the no-signalling condition. Consider the following schematic representation of the table \begin{center} \begin{tabular}{ll|ccccc} A & B & $(0, 0)$ & $(1, 0)$ & $(0, 1)$ & $(1, 1)$ & \\ \hline $a$ & $b$ & $c$ & $d$ & $e$ & $f$ & \\ $a'$ & $b$ & $g$ & $h$ & $i$ & $j$ & \\ $a$ & $b'$ & $k$ & $l$ & $m$ & $n$ & \\ $a'$ & $b'$ & $o$ & $p$ & $q$ & $r$ & \end{tabular} \end{center} where we have labelled the entries with the letters $c$, \ldots , $r$. The no-signalling conditions for the non-empty intersections of contexts are given by the following equations: \begin{center} \begin{tabular}{lclclcl} $c + e = k + m$, & $\quad$ & $d + f = l + n$, & $\quad$ & $g + i = o + q$, & $\quad$ & $h + j = p + r$ \\ $c + d = g + h$, & $\quad$ & $e + f = i + j$, & $\quad$ & $k + l = o + p$, & $\quad$ & $m + n = q + r$ \\ \end{tabular} \end{center} We see that, for example, the first equation is verified in the table above, since $0 + 1/2 = 3/8 + 1/8$. This table has the additional property that it can be realized by a quantum state and appropriate choices of local observables. It thus provides the basis for a proof of Bell's theorem \cite{Bell-thm}. See \cite{AbramskyBrandenburger} for an extended discussion. 
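The eight no-signalling equations above can be verified mechanically: they say precisely that each party's marginal distribution is independent of the other party's choice of setting. A short sketch checking the example table, using exact rational arithmetic to avoid rounding issues:

```python
from fractions import Fraction as F

# Rows: contexts; columns: joint outcomes (0,0), (1,0), (0,1), (1,1)
table = {
    ('a',  'b' ): [F(0), F(1, 2), F(1, 2), F(0)],
    ('a1', 'b' ): [F(3, 8), F(1, 8), F(1, 8), F(3, 8)],
    ('a',  'b1'): [F(3, 8), F(1, 8), F(1, 8), F(3, 8)],
    ('a1', 'b1'): [F(3, 8), F(1, 8), F(1, 8), F(3, 8)],
}

def marginal(row, party, outcome):
    """Marginal probability that `party` (0 = Alice, 1 = Bob) sees `outcome`,
    summing over the other party's outcomes."""
    outcomes = [(0, 0), (1, 0), (0, 1), (1, 1)]
    return sum(p for p, o in zip(row, outcomes) if o[party] == outcome)

# No-signalling: Alice's marginal must not depend on Bob's setting, and vice versa
for x in ('a', 'a1'):
    for o in (0, 1):
        assert marginal(table[(x, 'b')], 0, o) == marginal(table[(x, 'b1')], 0, o)
for y in ('b', 'b1'):
    for o in (0, 1):
        assert marginal(table[('a', y)], 1, o) == marginal(table[('a1', y)], 1, o)
```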
Situations with any number of parties, measurement settings for each party, and outcomes can be represented similarly by empirical models on measurement scenarios. \textbf{Example} A different kind of example will illustrate the wide scope of our notion of empirical models over measurement scenarios. Consider the measurement scenario $(X, \mathcal{M}, O)$, where: \begin{itemize} \item $X$ is a set of 18 variables, $\{ A, \ldots , R \}$. \item $\mathcal{M} = \{ U_1 , \ldots , U_9 \}$, where the columns $U_i$ are the contexts: \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline $U_1$ & $U_2$ & $U_3$ & $U_4$ & $U_5$ & $U_6$ & $U_7$ & $U_8$ & $U_9$ \\ \hline\hline $A$ & $A$ & $H$ & $H$ & $B$ & $I$ & $P$ & $P$ & $Q$ \\ \hline $B$ & $E$ & $I$ & $K$ & $E$ & $K$ & $Q$ & $R$ & $R$ \\ \hline $C$ & $F$ & $C$ & $G$ & $M$ & $N$ & $D$ & $F$ & $M$ \\ \hline $D$ & $G$ & $J$ & $L$ & $N$ & $O$ & $J$ & $L$ & $O$ \\ \hline \end{tabular} \end{center} \item $O = \{ 0, 1 \}$. \end{itemize} The empirical model has the property that the support of the distribution $e_C$ for each context $C$ consists of those assignments $s : C \rightarrow O$ which assign $1$ to exactly one variable in $C$. This example shows that \emph{Kochen-Specker constructions} \cite{kochen1975problem,cabello1996bell} also fall within the scope of our definitions. See \cite{AbramskyBrandenburger} for further discussion. \subsection*{Strong Contextuality} The previous example also exhibits the property of \emph{strong contextuality}, which is a key part of the hierarchy of degrees of contextuality identified and studied in \cite{AbramskyBrandenburger}. Let $e$ be an empirical model over a measurement scenario $(X, \mathcal{M}, O)$. We say that a global assignment $s : X \rightarrow O$ is consistent with the support of $e$ if $s |_C \in \mathsf{supp}\, e_C$ for all $C \in \mathcal{M}$. Here $\mathsf{supp}\, e_C := \{ s \in O^C \mid e_C(s) > 0 \}$.
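For the 18-variable example above, no global assignment is consistent with the supports: each variable occurs in exactly two of the nine contexts, so summing the "exactly one 1 per context" condition over all contexts would make an even number equal to 9. This parity argument can be confirmed exhaustively; the following Python sketch (our encoding of the columns) searches all $2^{18}$ global assignments:

```python
from itertools import product

# The nine contexts of the 18-variable example above (columns U1..U9).
contexts = [
    "ABCD", "AEFG", "HICJ", "HKGL", "BEMN",
    "IKNO", "PQDJ", "PRFL", "QRMO",
]
variables = sorted(set("".join(contexts)))
assert len(variables) == 18

def consistent(assignment):
    """True if every context contains exactly one variable mapped to 1."""
    return all(sum(assignment[v] for v in C) == 1 for C in contexts)

solutions = 0
for bits in product((0, 1), repeat=18):
    assignment = dict(zip(variables, bits))
    if consistent(assignment):
        solutions += 1

print("consistent global assignments:", solutions)  # prints 0
```

The count is zero, which is exactly the strong contextuality property defined next.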
We define $e$ to be \emph{strongly contextual} if there is no global assignment consistent with its support. One can show that any empirical model satisfying the specification of the previous example is strongly contextual in this sense \cite{AbramskyBrandenburger}. \subsection*{Vector representation of empirical models} Each distribution $e_C$ can be specified by a list of $\card{O^C}$ numbers in $[0, 1]$, one for each assignment $s \in O^C$. We can concatenate these lists to represent the empirical model as a whole by a vector in $\mathbb{R}^n$, where \[ n \; := \; \sum_{C \in \mathcal{M}} \card{O^C} . \] We shall henceforth pass freely from empirical models to their representation as real vectors without further comment. \section{Interlude on partial orders and lattices} We briefly pause to review some basic notions on partial orders and lattices. We refer to \cite{davey2002introduction} for additional background. We recall that a partial order is a set $P$ equipped with a binary relation $\leq$ which is reflexive, antisymmetric and transitive. An element $\bot \in P$ is the least or bottom element if $\bot \leq x$ for all $x \in P$. Given two elements $x, y \in P$, their join or least upper bound, if it exists, is an element $z$ such that $x \leq z$, $y \leq z$, and whenever $x \leq w$ and $y \leq w$, then $z \leq w$. Note that such an element $z$ is necessarily unique if it exists; in this case, we write it as $x \vee y$. If a poset $P$ has a least element, and a join $x \vee y$ for all elements $x, y \in P$, then we say that $P$ is a join semilattice. Note that for any set $X$, the powerset $\mathcal{P}(X)$, ordered by inclusion, is a join semilattice, with the join being given by set union. The least element is the empty set. The notion of meet or greatest lower bound is defined in the obvious dual fashion. Notation is $x \wedge y$ for the meet of elements $x$, $y$ if it exists. Also, we write $\top$ for the greatest or top element of a poset, if it exists. 
In the case of $\mathcal{P}(X)$, the meet is given by set intersection, and the top element is $X$. If a poset has a meet for every pair of elements, and a top element, it is a meet semilattice. If a poset is both a join semilattice, and a meet semilattice, then it is a lattice. Note that a join semilattice has a least upper bound for every finite subset of its elements. For subsets with more than two elements, this can be constructed by iterating the binary join operation. Now suppose that we have a \emph{finite} join semilattice $P$. Then we can construct the meet of any subset $S \subseteq P$, by taking the join of the set $L$ of all the lower bounds of $S$ in $P$. Thus $P$ is in fact a lattice. A similar statement holds for a finite meet semilattice. \section{Background on Polytopes} We review some standard definitions and results on polytopes. Our primary references for this section will be \cite{rockafellar2015convex} and \cite{ziegler1995lectures}. We shall work concretely in $\mathbb{R}^n$. We write $d(\mathbf{x},\mathbf{y})$ for the Euclidean distance function on $\mathbb{R}^n$, and $\mathbf{x} \cdot \mathbf{y}$ for the inner product of vectors $\mathbf{x}, \mathbf{y} \in \Real^n$. Vectors are ordered componentwise: \[\mathbf{x} \leq \mathbf{y} \; \; \Longleftrightarrow \; \; (\forall i : 1 \leq i \leq n) \; \mathbf{x}_i \leq \mathbf{y}_i . \] A \emph{closed half-space} in $\Real^n$ is a set of the form \[ \{ \mathbf{x} \mid \mathbf{a} \cdot \mathbf{x} \geq b \} \] for some $\mathbf{a} \in \Real^n$, $b \in \mathbb{R}$. An \emph{$\mathcal{H}$-polytope} is a bounded intersection of a finite set of closed half-spaces in $\Real^n$. A \emph{$\mathcal{V}$-polytope} is the convex hull $\mathsf{Conv}(S)$ of a finite set of points $S \subseteq \Real^n$. The Fundamental Theorem of polytopes \cite{ziegler1995lectures} says that these two notions coincide; we shall refer simply to polytopes. 
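As a toy illustration of the two presentations (our example, not part of the main development): the standard simplex in $\Real^3$ is the $\mathcal{V}$-polytope $\mathsf{Conv}\{ e_1, e_2, e_3 \}$ and also the $\mathcal{H}$-polytope cut out by $x_i \geq 0$ together with the pair of half-spaces expressing $x_1 + x_2 + x_3 = 1$. The following Python sketch checks one direction of the coincidence on a grid of convex combinations:

```python
from fractions import Fraction as F
from itertools import product

# V-description: the triangle Conv{e1, e2, e3} in R^3.
vertices = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]

def hull_point(lam):
    """Convex combination sum_i lam_i * e_i of the three vertices."""
    return tuple(sum(l * v[i] for l, v in zip(lam, vertices)) for i in range(3))

# H-description of the same set: x_i >= 0 for each i, together with the
# two opposite half-spaces expressing x1 + x2 + x3 = 1.
def in_h(x):
    return all(xi >= 0 for xi in x) and sum(x) == 1

# Every convex combination of the vertices satisfies the H-description.
weights = [F(i, 4) for i in range(5)]
lams = [l for l in product(weights, repeat=3) if sum(l) == 1]
assert all(in_h(hull_point(lam)) for lam in lams)
print("checked", len(lams), "convex combinations")
```

The converse inclusion, and the general equivalence, is exactly the content of the Fundamental Theorem cited above.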
An \emph{affine combination} of points $\mathbf{x}_1, \ldots , \mathbf{x}_k$ in $\Real^n$ is an expression of the form \[ \sum_{i=1}^{k} \lambda_i \mathbf{x}_i , \qquad \sum_{i=1}^{k} \lambda_i = 1 . \] We write $\mathsf{Aff}(S)$ for the set of all affine combinations of points of $S \subseteq \Real^n$; this is the affine subspace generated by $S$. Note that, if a linear equation $\mathbf{a} \cdot \mathbf{x} = b$ is satisfied by all the elements of $S$, then it is satisfied by all the elements of $\mathsf{Aff}(S)$. A linear inequality $\mathbf{a} \cdot \mathbf{x} \geq b$ is \emph{valid} for a polytope $P$ if it is satisfied by every $\mathbf{x} \in P$. Such a valid inequality defines a \emph{face} $F$ of $P$: \[ F \; := \; \{ \mathbf{x} \in P \mid \mathbf{a} \cdot \mathbf{x} = b \} . \] Any face is itself a polytope. We write $\mathcal{F}(P)$ for the set of faces of $P$, and $\FF^{+}(P)$ for the set of non-empty faces. The \emph{relative interior} of a polytope $P$ consists of those points $\mathbf{x} \in P$ such that, for some $\epsilon > 0$, $\mathbf{y} \in P$ whenever $\mathbf{y} \in \mathsf{Aff}(P)$ and $d(\mathbf{x},\mathbf{y}) \leq \epsilon$. We write $\mathsf{relint}\, P$ for the relative interior of $P$. We will use the following characterization of the relative interior (\cite{rockafellar2015convex}[Theorem 6.4]). \begin{theorem} \label{relintthm} A point $\mathbf{x} \in P$ is in $\mathsf{relint}\, P$ if and only if for all $\mathbf{y} \in P$, for some $\mu > 1$, $\mu \mathbf{x} + (1-\mu) \mathbf{y}$ is in $P$. \end{theorem} Intuitively, this says that a point $\mathbf{x}$ of $P$ is in the relative interior if the line segment $[\mathbf{y}, \mathbf{x}]$ from any point $\mathbf{y}$ of $P$ to $\mathbf{x}$ can be extended beyond $\mathbf{x}$ while remaining in $P$. We will also use the following basic result (\cite{rockafellar2015convex}[Theorem 18.2]). 
\begin{theorem} Every polytope $P$ can be written as the disjoint union of the relative interiors of its non-empty faces: \[ P \; = \; \bigsqcup_{F \in \FF^{+}(P)} \mathsf{relint}\, F . \] \end{theorem} This means that for any polytope $P$ we can define a map \[ \mathsf{carr}\, : P \rTo \FF^{+}(P) \] which assigns to each point $\mathbf{x}$ of $P$ its \emph{carrier face} --- the unique face $F$ such that $\mathbf{x} \in \mathsf{relint}\, F$. We can regard $\mathcal{F}(P)$ as partially ordered by set inclusion. The following result is standard \cite{ziegler1995lectures}. \begin{theorem} $\mathcal{F}(P)$ is a finite lattice. It is atomistic --- every element is the join of the atoms below it --- and coatomistic --- every element is the meet of the coatoms above it.\footnote{In the literature on polytopes, the terms ``atomic'' and ``coatomic'' are used, but these have a different meaning in the lattice theory literature \cite{davey2002introduction}.} It is graded --- all maximal chains have the same length. \end{theorem} Note that meets in $\mathcal{F}(P)$ are simply given by intersection of faces, while joins $F \vee G$ are defined indirectly, as the intersection of all faces containing both $F$ and $G$. We call $\mathcal{F}(P)$ the \emph{face lattice} of $P$; it is also referred to as the \emph{combinatorial type} of $P$. Two polytopes with isomorphic face lattices are \emph{combinatorially equivalent}. We say that a polytope $P$ is in \emph{standard form} if $P = \mathcal{H}_{\geq \mathbf{0}} \cap \mathsf{Aff}(P)$, where $\mathcal{H}_{\geq \mathbf{0}} := \{ \mathbf{x} \in \Real^n \mid \mathbf{x} \geq \mathbf{0} \}$ is the non-negative orthant. This means that $P$ is defined by a set of linear equations, together with the non-negativity constraint. It is a standard result of linear programming that every linear program can be put in this form \cite{matousek2007understanding}. \begin{proposition} \label{facesfprop} If $P$ is in standard form, so is every face $F$ of $P$. \end{proposition} \begin{proof} Let $F = \{ \mathbf{x} \in P \mid \mathbf{a} \cdot \mathbf{x} = b \}$.
Since $F \subseteq P$, $F \subseteq \mathcal{H}_{\geq \mathbf{0}} \cap \mathsf{Aff}(F)$. Conversely, since $\mathsf{Aff}(F) \subseteq \mathsf{Aff}(P)$, any $\mathbf{x} \in \mathcal{H}_{\geq \mathbf{0}} \cap \mathsf{Aff}(F)$ is in $P$, while $\mathbf{x} \in \mathsf{Aff}(F)$ implies $\mathbf{a} \cdot \mathbf{x} = b$. Hence $\mathbf{x} \in F$, as required. \end{proof} We can define a map $\mathsf{supp}\, : \mathcal{H}_{\geq \mathbf{0}} \rTo \{ 0, 1\}^n$: \[ (\mathsf{supp}\, \mathbf{x})_i = \left\{ \begin{array}{ll} 0, & \mathbf{x}_i = 0 \\ 1, & \mathbf{x}_i > 0 \end{array} \right. \] We call this the \emph{support} of a non-negative vector. In the case of a probability vector, this gives the usual notion of support of a distribution. In this case, we can also speak of \emph{possibilities}; an outcome is possible if it has positive probability, impossible if it has zero probability. For a polytope $P$ in standard form, we define $\mathcal{S}(P) := \{ \mathsf{supp}\, \mathbf{x} \mid \mathbf{x} \in P \}$. Since $\mathcal{S}(P)$ is a subset of $\Real^n$, it inherits the componentwise partial order on vectors. We write $\mathcal{S}(P)_{\bot}$ for the result of adjoining a least element to this partially ordered set. We make the following observation. \begin{proposition} Let $P$ be a polytope in standard form. If $\mathbf{0} \in P$, then $P = \{ \mathbf{0} \}$. \end{proposition} \begin{proof} If $\mathbf{0} \in P$, then the equations defining $\mathsf{Aff}(P)$ must all be homogeneous. If any non-zero $\mathbf{v}$ is in $P$, the positive half-ray generated by $\mathbf{v}$ will lie in $P$, contradicting the boundedness of $P$. \end{proof} Thus if $\card{P} > 1$, we can define $\mathcal{S}(P)_{\bot} := \mathcal{S}(P) \cup \{ \mathbf{0} \}$ with the componentwise order. 
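To make the support construction concrete, here is a small Python sketch (our example): sampling rational points of the standard simplex in $\Real^3$, which is a polytope in standard form, and collecting their supports.

```python
from fractions import Fraction as F
from itertools import product

def supp(x):
    """Support of a non-negative vector, as a 0/1 tuple."""
    return tuple(1 if xi > 0 else 0 for xi in x)

# Rational sample points of the standard simplex in R^3, a polytope in
# standard form: x >= 0 together with x1 + x2 + x3 = 1.
grid = [F(i, 4) for i in range(5)]
points = [x for x in product(grid, repeat=3) if sum(x) == 1]

supports = {supp(x) for x in points}
print(sorted(supports))
# All seven non-zero 0/1 vectors occur; adjoining the bottom element gives
# an 8-element lattice, matching the face lattice of a triangle
# (3 vertices, 3 edges, the whole triangle, and the empty face).
```

The agreement with the triangle's face lattice noted in the final comment is an instance of the main theorem proved below.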
We can define the join of two elements $\mathbf{u}, \mathbf{v}$ as the componentwise boolean disjunction (or equivalently, the supremum in $\mathbb{R}$): \[ (\mathbf{u} \vee \mathbf{v})_i \; := \; \mathbf{u}_i \vee \mathbf{v}_i . \] This is clearly the join (\textit{i.e.}~pairwise supremum) in $\{ 0, 1 \}^n$. We have the following simple result: \begin{proposition} \label{convsuppprop} Let $P$ be a polytope in standard form, $\mathbf{x}, \mathbf{y} \in P$, $0 < \lambda < 1$. Then \[ \mathsf{supp}\, (\lambda \mathbf{x} + (1-\lambda) \mathbf{y}) \; = \; \mathsf{supp}\, \mathbf{x} \vee \mathsf{supp}\, \mathbf{y} . \] \end{proposition} \begin{proposition} $\mathcal{S}(P)_{\bot}$ is a finite lattice. \end{proposition} \begin{proof} The cases where $\card{P} \leq 1$ are trivial. Otherwise, the previous proposition shows that $\mathcal{S}(P)$ is closed under the join operation. This makes $\mathcal{S}(P)_{\bot}$ a join semilattice, and since it is finite, it is a lattice. \end{proof} \section{Results} We fix a polytope $P$ in standard form. Given $\mathbf{x}$ in $P$, we define a vector $\xx^{\sigma}$ in $\Real^n$: \[ \xx^{\sigma}_i = \left\{ \begin{array}{ll} 0, & \mathbf{x}_i > 0 \\ 1, & \mathbf{x}_i = 0 \end{array} \right. \] Clearly $\xx^{\sigma} \cdot \mathbf{z} \geq 0$ is valid for all $\mathbf{z} \in P$, and defines a face \begin{align*} F_{\xx} \, =\, & \setdef{\mathbf{z} \in P}{\xx^{\sigma} \cdot \mathbf{z} = 0}\\ = \, & \setdef{\mathbf{z} \in P}{\mathsf{supp}\,{\mathbf{z}} \leq \mathsf{supp}\,{\mathbf{x}}} . \end{align*} \begin{proposition} \label{carrprop} For all $\mathbf{x}$ in $P$, $\mathsf{carr}\, \mathbf{x} = F_{\xx}$. \end{proposition} \begin{proof} We show that $\mathbf{x} \in \mathsf{relint}\, F_{\xx}$, using Theorem~\ref{relintthm}. Given $\mathbf{z} \in F_{\xx}$, for each $i$ with $\mathbf{z}_i > 0$, for some $\epsilon_i > 0$, $\mathbf{x}_i \geq \epsilon_i \mathbf{z}_i$. Let $\epsilon = \min_i \epsilon_i$, $\mu = 1 + \epsilon$.
Then $\mathbf{v} := \mu \mathbf{x} + (1-\mu) \mathbf{z} \geq \mathbf{0}$, while since $\mathbf{v}$ is an affine combination of points in $F_{\xx}$, $\mathbf{v} \in \mathsf{Aff}(F_{\xx})$. Thus by Proposition~\ref{facesfprop}, $\mathbf{v} \in F_{\xx}$, as required. \end{proof} The following is immediate from the definition of $\xx^{\sigma}$. \begin{proposition} \label{orderequivprop} For all $\mathbf{x}, \mathbf{y} \in P$: \[ F_{\xx} \, \subseteq \, F_{\yy} \; \Longleftrightarrow \; \mathsf{supp}\, \mathbf{x} \leq \mathsf{supp}\, \mathbf{y} . \] \end{proposition} We shall use a simple general result. \begin{proposition} \label{triangprop} Let $X$ be a set, $Q$ and $R$ posets, and $f : X \repi Q$, $g : X \repi R$ surjective maps. Then the following are equivalent: \begin{enumerate} \item For all $x, y \in X$: $f(x) \leq f(y) \; \Longleftrightarrow \; g(x) \leq g(y)$. \item There is a unique order-isomorphism $s : Q \rTo^{\cong} R$ such that the following diagram commutes: \begin{diagram} & & X & & \\ & \ldepi^{f} & & \rdepi^{g} & \\ Q & & \rTo_{s}^{\cong} & & R \end{diagram} \end{enumerate} \end{proposition} \begin{proof} Assume (1). Given $q \in Q$, since $f$ is surjective, $q = f(x)$ for some $x \in X$. Define $s(q) := g(x)$. The forward implication in (1) implies that $f(x) = f(y) \; \Rightarrow \; g(x) = g(y)$, so this is well-defined. It also implies that $s$ is monotonic. The converse implication in (1) implies that $s$ is order-reflecting. Hence it is an order-isomorphism onto its image, which is $R$ by surjectivity of $g$. This isomorphism is unique by surjectivity of $f$. The converse direction, from (2) to (1), is immediate: \[ f(x) \leq f(y) \; \Longleftrightarrow \; s(f(x)) \leq s(f(y)) \; \Longleftrightarrow \; g(x) \leq g(y) . \] \end{proof} We now come to our main result.
\begin{theorem} There is an order-isomorphism $\mathcal{F}(P) \cong \mathcal{S}(P)_{\bot}$ between the face lattice and the support lattice of $P$, which sends a face to the support of any point in its relative interior. \end{theorem} \begin{proof} We apply Propositions~\ref{triangprop}, \ref{carrprop} and \ref{orderequivprop} to the diagram \begin{diagram} & & P & & \\ & \ldepi^{\mathsf{carr}\,} & & \rdepi^{\mathsf{supp}\,} & \\ \FF^{+}(P) & & & & \mathcal{S}(P) \end{diagram} This gives an order-isomorphism $\FF^{+}(P) \cong \mathcal{S}(P)$, which extends to a lattice isomorphism $\mathcal{F}(P) \cong \mathcal{S}(P)_{\bot}$ by mapping the empty face to the bottom element of $\mathcal{S}(P)_{\bot}$. \end{proof} We now draw some corollaries of this result. \begin{corollary} \begin{enumerate} \item Two points $\mathbf{x}$, $\mathbf{y}$ of $P$ have the same support if and only if they are in the relative interior of the same face. \item The vertices of $P$ are exactly those points with minimal support. \item A point $\mathbf{x}$ of $P$ is a vertex if and only if it is the unique point of $P$ with support $\mathsf{supp}\, \mathbf{x}$. \end{enumerate} \end{corollary} \begin{proof} (1) follows immediately from the definition of the order-isomorphism $s$. The atoms of $\mathcal{F}(P)$ are the 0-dimensional faces $\{ \mathbf{v} \}$ corresponding to the vertices of $P$; these are mapped bijectively by $s$ to the atoms of $\mathcal{S}(P)$, which are the minimal supports, which yields (2). Since the vertices of $P$ are exactly those points which are the unique elements of their carrier faces, (3) follows from (1). \end{proof} One interesting observation about our result is that we have shown an isomorphism between two lattices whose concrete presentations look rather different: \begin{itemize} \item For $\mathcal{F}(P)$, meet is given simply by intersection, while join is defined indirectly, as the intersection of all upper bounds. 
\item For $\mathcal{S}(P)_{\bot}$, join is simply defined pointwise, while meet is defined indirectly, as the supremum of all lower bounds. \end{itemize} \section{Application to Probability Polytopes} Let $\Sigma = (X, \mathcal{M}, O)$ be a measurement scenario, and $\mathcal{N}(\Sigma)$ the set of no-signalling models over $\Sigma$. As explained in Section~\ref{empsec}, we can view empirical models as vectors in $\Real^n$, where $n$ depends on the measurement scenario. Thus we can view $\mathcal{N}(\Sigma)$ as a subset of $\Real^n$. The points of $\mathcal{N}(\Sigma)$ are those vectors of non-negative numbers satisfying the normalization equations, one for each $C \in \mathcal{M}$: \[ \sum_{s \in O^C} e_C(s) = 1 \] and the no-signalling equations, one for each pair $C, C' \in \mathcal{M}$ and $s \in O^{C \cap C'}$: \[ \sum_{t \in O^C, \; t |_{C \cap C'} = s} e_C(t) \; = \; \sum_{t' \in O^{C'}, \; t' |_{C \cap C'} = s} e_{C'}(t') . \] We have written these in the notation of empirical models on measurement scenarios, but of course they can be ``flattened out'' into linear equations on the corresponding vectors. Thus $\mathcal{N}(\Sigma)$ is specified by a set of linear equations, together with the non-negativity constraints. Hence $\mathcal{N}(\Sigma)$ is a polytope in standard form, and the results of the previous section apply. We restate the results explicitly for $\mathcal{N}(\Sigma)$. \begin{theorem} \label{mainnsthm} Let $\Sigma$ be any measurement scenario. There is an order-isomorphism \[ \mathcal{F}(\mathcal{N}(\Sigma)) \cong \mathcal{S}(\mathcal{N}(\Sigma))_{\bot} \] between the face lattice and the support lattice of $\mathcal{N}(\Sigma)$. \end{theorem} The same result will hold for any polytope of probability models which can be put in standard form. \begin{corollary} \begin{enumerate} \item Two empirical models in $\mathcal{N}(\Sigma)$ have the same support if and only if they are in the relative interior of the same face.
\item The vertices of $\mathcal{N}(\Sigma)$ are exactly those no-signalling empirical models with minimal support. \item A no-signalling empirical model $e$ is a vertex if and only if it is the unique model in $\mathcal{N}(\Sigma)$ with support $\mathsf{supp}\, e$. \item The empirical models in the relative interior of $\mathcal{N}(\Sigma)$ are exactly those with full support. \end{enumerate} \end{corollary} We also have the following result specific to the no-signalling case. \begin{proposition} The vertices of $\mathcal{N}(\Sigma)$ can be written as a disjoint union \[ V(\Sigma) \; = \; \mathsf{LD}(\Sigma) \, \sqcup \, \mathsf{MSC}(\Sigma) \] of the local deterministic models $\mathsf{LD}(\Sigma)$, and $\mathsf{MSC}(\Sigma)$, the strongly contextual models with minimal support. The polytope $\mathcal{L}(\Sigma)$ of models realized by local hidden variables is given by the convex hull $\mathsf{Conv} (\mathsf{LD}(\Sigma))$, while every strongly contextual model over $\Sigma$ is a convex combination of vertices in $\mathsf{MSC}(\Sigma)$, and belongs to a face of $\mathcal{N}(\Sigma)$ containing only strongly contextual models. \end{proposition} \begin{proof} By \cite{AbramskyBrandenburger}[Proposition 6.3], a model is strongly contextual if and only if it has no convex decomposition in which one component is a local model. Hence no local deterministic vertex can occur in a convex decomposition of a strongly contextual model $e$, so $e$ must be in the polytope $\mathsf{Conv}(\mathsf{MSC}(\Sigma))$. Let $F = \mathsf{carr}\, e$. Any model $e' \in F$ has $\mathsf{supp}\, e' \leq \mathsf{supp}\, e$ by Theorem~\ref{mainnsthm}; hence the strong contextuality of $e$ implies that of $e'$. \end{proof} In these cases of probabilistic models, the supports have their usual interpretation as supports of probability distributions. Moreover, they acquire a conceptual significance as expressing \emph{possibilistic} information.
We can also view possibilistic models as well-motivated objects in their own right \cite{Abramsky:RelationalHiddenVariables,AbramskyBrandenburger,MansfieldFritz11:Hardy}. Boolean logic for combining possibilities replaces arithmetic formulas for calculating with probabilities. In more precise terms, we use the notion of \emph{commutative semiring}, which is an algebraic structure $(R, {+}, 0, {\cdot}, 1)$, where $(R, {+}, 0)$ and $(R, {\cdot}, 1)$ are commutative monoids satisfying the distributive law: \[ a \cdot (b + c) = a\cdot b + a \cdot c . \] Examples include the non-negative reals $\Real_{\geq 0}$ with the usual addition and multiplication, and the booleans $\mathbb{B} = \{ 0, 1 \}$ with disjunction and conjunction playing the r\^oles of addition and multiplication respectively. Now an $R$-distribution on a finite set $X$ is given by a map \[ d : X \rTo R \] satisfying the normalization condition \[ \sum_{x \in X} d(x) \; = \; 1 . \] In the case of $R = \Real_{\geq 0}$, we recover the usual notion of probability distribution, while in the boolean case, a distribution is simply the characteristic function of a non-empty subset. We can define marginalization of distributions over any commutative semiring $R$, exactly as we did in the usual probabilistic case, and hence obtain a notion of no-signalling empirical model over a measurement scenario $\Sigma$ with respect to $R$. We write $\mathcal{N}(\Sigma, R)$ for the set of all such models. In the case of $R = \Real_{\geq 0}$, we recover the notion of probabilistic no-signalling model we have already seen, while in the boolean case, we obtain a notion of \emph{possibilistic no-signalling empirical model}. We can now re-interpret the support map as a \emph{possibilistic collapse}, which maps a probabilistic model to a possibilistic one. Some obvious questions arise: \begin{itemize} \item Are probabilistic no-signalling models mapped to possibilistic ones?
\item Does every possibilistic no-signalling model arise as the possibilistic collapse of a probabilistic no-signalling model? \end{itemize} These are answered by the results from \cite{Abramsky:RelationalHiddenVariables,AbramskyBrandenburger}, which show that the answer to the first question is positive, while the answer to the second is negative. The first point follows from the fact that there is a (unique) semiring homomorphism $h : \Real_{\geq 0} \rTo \{ 0, 1 \}$, which when applied pointwise to an empirical model yields the possibilistic collapse. The fact that $h$ is a homomorphism means that the no-signalling equations are preserved. For the second point, we give an explicit counter-example in \cite{Abramsky:RelationalHiddenVariables}[Proposition 9.1]. In the light of these results, we have the following situation. The support map can be viewed as the possibilistic collapse \[ \mathsf{poss}\, : \mathcal{N}(\Sigma, \Real_{\geq 0}) \rTo \mathcal{N}(\Sigma, \mathbb{B}) . \] We know that this map is not surjective. We pose the following open question. \begin{question} Can we give an \emph{intrinsic characterization} of the image of the possibilistic collapse map, using only possibilistic notions? \end{question} It is tempting to attempt to answer this question by conjecturing that the minimal models in $\mathcal{N}(\Sigma,\mathbb{B})$ are in the image of the possibilistic collapse, and hence form the atoms of $\mathcal{S}(\mathcal{N}(\Sigma))$. Since $\mathcal{S}(\mathcal{N}(\Sigma))$ is atomistic, this would mean that the supports of the probabilistic models can all be expressed as joins of the minimal possibilistic models. This would provide a complete determination of the combinatorial structure of the no-signalling polytope by purely possibilistic information.
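The possibilistic collapse, and the preservation of no-signalling under it, can be illustrated on the table from the first example (a sketch; the encoding is ours). Applying the homomorphism $h$ pointwise turns each probability into a boolean, and boolean marginals, computed with disjunction in place of addition, still agree on overlaps:

```python
from fractions import Fraction as F
from itertools import combinations

# The probabilistic table from the first example; columns (0,0),(1,0),(0,1),(1,1).
model = {
    ('a', 'b'):   [F(0), F(1, 2), F(1, 2), F(0)],
    ("a'", 'b'):  [F(3, 8), F(1, 8), F(1, 8), F(3, 8)],
    ('a', "b'"):  [F(3, 8), F(1, 8), F(1, 8), F(3, 8)],
    ("a'", "b'"): [F(3, 8), F(1, 8), F(1, 8), F(3, 8)],
}
outcomes = [(0, 0), (1, 0), (0, 1), (1, 1)]

# Possibilistic collapse: the semiring homomorphism h applied pointwise.
h = lambda p: 1 if p > 0 else 0
poss = {C: [h(p) for p in row] for C, row in model.items()}

def bool_marginal(C, x, v):
    """Boolean marginal: is value v for variable x possible in context C?"""
    i = C.index(x)
    return max(b for o, b in zip(outcomes, poss[C]) if o[i] == v)

# The collapse still satisfies the (boolean) no-signalling equations.
for C, C2 in combinations(poss, 2):
    for x in set(C) & set(C2):
        for v in (0, 1):
            assert bool_marginal(C, x, v) == bool_marginal(C2, x, v)
print("possibilistic collapse is no-signalling")
```

Here `max` over booleans plays the r\^ole of addition in $\mathbb{B}$, which is exactly why the homomorphism property transports the no-signalling equations.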
The situation turns out to be somewhat more complicated, however, as we can demonstrate with the following possibilistic empirical model, which provides a counter-example to the tempting conjecture. We shall now describe a simple example of a minimal possibilistic model $s$ which does not arise as the support of any probabilistic model; i.e.~such that there exists no probabilistic model $e$ for which $\mathsf{poss}(e) = s$. Note that this also implies that there cannot be any probabilistic model with strictly smaller support, since otherwise its possibilistic collapse would contradict the minimality of $s$. The measurement scenario is described as follows. \begin{align*} X &= \{A,B,C,D\} \\ \mathcal{M} &= \left\{ \{A,B\}, \{A,C\}, \{A,D\}, \{B,C\}, \{B,D\}, \{C,D\} \right\} \\ O &= \{0,1,2\} \end{align*} The possibilistic model $s$ is then defined by the following possible sections whose coefficients we label $a,b,\dots,o$. \begin{equation*} \begin{array}{lcccc} AB & \mapsto & 00, & 10, & 21 \\ & & a & b & c \\ AC & \mapsto & 00, & 11, & 21 \\ & & d & e & f \\ AD & \mapsto & 01, & 10, & 21 \\ & & k & l & m \\ BC & \mapsto & 00, & 11 & \\ & & g & h & \\ BD & \mapsto & 00, & 11 & \\ & & i & j & \\ CD & \mapsto & 01, & 10 & \\ & & n & o & \\ \end{array} \end{equation*} The model is also depicted in the bundle diagram of figure \ref{fig:model}. What we see at the bottom in this representation is the \emph{base space} of the variables in $X$. There is an edge between two variables when they can be measured together. Above each vertex is a \emph{fibre} of those values which can be assigned to the variable. We represent the values in the order $0$, $1$, $2$. There is an edge between values in adjacent fibres precisely when the corresponding \emph{joint outcome} is possible. 
For example, in the context $AD$ the joint outcome $01$ is possible, so there is an edge connecting these values in the respective fibres above $A$ and $D$; while $00$ is not possible, so there is no corresponding edge. For further examples of how empirical models are represented as bundle diagrams, see \cite{AbramskyEtAl:ContextualityCohomologyAndParadox}. \begin{figure} \caption{\label{fig:model} The possibilistic model $s$.} \begin{center} \begin{tikzpicture}[x=50pt,y=50pt,thick,label distance=-0.25em,baseline=(current bounding box.center),scale=.75] \node (e) at (4,-0.2) {}; \node (n) at (2,1.5) {}; \node (T) at (0,3.5) {}; \node (u) at (0,0.4) {}; \node [inner sep=0em] (a) at (0,0) {}; \node [inner sep=0em] (b) at ($ (a) + (n) $) {}; \node [inner sep=0em] (c) at ($ (a) + (n) - (e) $) {}; \node [inner sep=0em] (d) at ($ (a) + (e) $) {}; \node [inner sep=0em] (c') at ($ (a) - (e) $) {}; \node [inner sep=0em] (d') at ($ (a) + (e) - 1.5*(n) $) {}; \node [inner sep=0em] (a0) at ($ (a) + (T) $) {}; \node [inner sep=0em] (a1) at ($ (a0) + (u) $) {}; \node [inner sep=0em] (a2) at ($ (a0) + 2*(u) $) {}; \node [inner sep=0em] (b0) at ($ (b) + (T) $) {}; \node [inner sep=0em] (b1) at ($ (b0) + (u) $) {}; \node [inner sep=0em] (c0) at ($ (c) + (T) $) {}; \node [inner sep=0em] (c1) at ($ (c0) + (u) $) {}; \node [inner sep=0em] (d0) at ($ (d) + (T) $) {}; \node [inner sep=0em] (d1) at ($ (d0) + (u) $) {}; \node [inner sep=0em] (c'0) at ($ (c') + (T) $) {}; \node [inner sep=0em] (c'1) at ($ (c'0) + (u) $) {}; \node [inner sep=0em] (d'0) at ($ (d') + (T) $) {}; \node [inner sep=0em] (d'1) at ($ (d'0) + (u) $) {}; \draw (a) -- (b) -- (c) -- (a) -- (d) -- (b) -- (a); \draw (c) .. controls (c') and (d') .. 
(d); \draw [dotted] (a2) -- (a); \draw [dotted] (b1) -- (b); \draw [dotted] (c1) -- (c); \draw [dotted] (d1) -- (d); \node [inner sep=0.2em,label=left:{$A$}] at (a) {$\bullet$}; \node [inner sep=0.2em,label=right:{$B$}] at (b) {$\bullet$}; \node [inner sep=0.2em,label=left:{$C$}] at (c) {$\bullet$}; \node [inner sep=0.2em,label=right:{$D$}] at (d) {$\bullet$}; \draw (b0) -- (c0); \draw (b1) -- (c1); \draw [line width=3.2pt,white] (c0) -- (a0); \draw [line width=3.2pt,white] (c1) -- (a1); \draw [line width=3.2pt,white] (c1) -- (a2); \draw (c0) -- (a0); \draw (c1) -- (a1); \draw (c1) -- (a2); \draw (b0) -- (d0); \draw (b1) -- (d1); \draw [line width=3.2pt,white] (a0) -- (b0); \draw [line width=3.2pt,white] (a1) -- (b0); \draw [line width=3.2pt,white] (a2) -- (b1); \draw (a0) -- (b0); \draw (a1) -- (b0); \draw (a2) -- (b1); \draw [line width=3.2pt,white] (a0) -- (d1); \draw [line width=3.2pt,white] (a1) -- (d0); \draw [line width=3.2pt,white] (a2) -- (d1); \draw (a0) -- (d1); \draw (a1) -- (d0); \draw (a2) -- (d1); \draw [line width=3.2pt,white] (c0) .. controls (c'0) and (d'1) .. (d1); \draw [line width=3.2pt,white] (c1) .. controls (c'1) and (d'0) .. (d0); \draw (c0) .. controls (c'0) and (d'1) .. (d1); \draw (c1) .. controls (c'1) and (d'0) .. (d0); \node [inner sep=0.2em] at (a0) {$\bullet$}; \node [inner sep=0.2em] at (a1) {$\bullet$}; \node [inner sep=0.2em] at (a2) {$\bullet$}; \node [inner sep=0.2em] at (b0) {$\bullet$}; \node [inner sep=0.2em] at (b1) {$\bullet$}; \node [inner sep=0.2em] at (c0) {$\bullet$}; \node [inner sep=0.2em] at (c1) {$\bullet$}; \node [inner sep=0.2em] at (d0) {$\bullet$}; \node [inner sep=0.2em] at (d1) {$\bullet$}; \end{tikzpicture} \end{center} \end{figure} We first observe that no-signalling requires all coefficients to be equated. For example, the no-signalling conditions for the overlapping contexts $AB$, $AC$ translate into the equations \[ a = d, \quad b = e, \quad c = f. 
\] Continuing in this fashion, we obtain the following equations: \[ \begin{array}{llllllll} a = k, & b = l, & g = i, & h = j, & c = n, & d = k, & e = l, & f = m \\ c = h, & h = o, & g = n, & i = o, & j = n, & c = j, & l = o, & d = n. \end{array} \] These equations imply that all the coefficients must be equated. Minimality of the model can be deduced from the fact that if the coefficient for any of the possible sections is set to zero then consistency requires that all coefficients must be set to zero. Having equated all coefficients, the only remaining non-trivial equation is \begin{equation*} a = a + a. \end{equation*} Since this arises from a possibilistic model, it evidently has a non-zero solution in the booleans, namely $a = 1$. Observe, however, that there exists no non-zero real solution. This implies that there exists no probabilistic model $e$ such that $\mathsf{supp}(e) = s$. Further, we note that such possibilistic models can also arise in ``Bell-type'' measurement scenarios of the kind described in the first example in Section~2. In \cite{MansfieldBarbosa:QPL2013} a method was introduced for constructing Bell-type models from models of more general kinds such that the degree of contextuality is preserved. Under this construction, the model $s$ described above yields a model $s_{\text{Bell}}$ for two agents, each of whom has four measurement settings and three possible outcomes per measurement, which again has the property that there exists no probabilistic model $e$ such that $\mathsf{supp}(e) = s_{\text{Bell}}$. The new measurement scenario is described as follows. \begin{align*} X_{\text{Bell}} &= \{A_1,B_1,C_1,D_1,A_2,B_2,C_2,D_2\} \\ \mathcal{M}_{\text{Bell}} &= \{A_1,B_1,C_1,D_1\} \times \{A_2,B_2,C_2,D_2\} \\ O &= \{0,1,2\} \end{align*} The possibilistic model $s_{\text{Bell}}$ has possible sections defined as follows (cf.~figure \ref{fig:bellmodel}).
\begin{equation*} \begin{array}{lllclll} A_1A_2 & & & \mapsto & 00, & 11, & 22 \\ B_1B_2, & C_1C_2, & D_1D_2 & \mapsto & 00, & 11 & \\ A_1B_2, & A_2B_1 & & \mapsto & 00, & 10, & 21 \\ A_1C_2, & A_2C_1 & & \mapsto & 00, & 11, & 21 \\ A_1D_2, & A_2D_1 & & \mapsto & 01, & 10, & 21 \\ B_1C_2, & B_2C_1 & & \mapsto & 00, & 11 & \\ B_1D_2, & B_2D_1 & & \mapsto & 00, & 11 & \\ C_1D_2, & C_2D_1 & & \mapsto & 01, & 10 & \end{array} \end{equation*} \begin{figure} \caption{\label{fig:bellmodel} The possibilistic model $s_\text{Bell}$.} \begin{center} \begin{tikzpicture}[x=50pt,y=50pt,thick,label distance=-0.25em,baseline=(current bounding box.center),scale=.45] \node (e) at (4.5,-0.2) {}; \node (n) at (2,4) {}; \node (T) at (0,6) {}; \node (u) at (0,0.5) {}; \node [inner sep=0em] (a) at (0,0) {}; \node [inner sep=0em] (b) at ($ (a) + (e) $) {}; \node [inner sep=0em] (c) at ($ (a) + 2*(e) $) {}; \node [inner sep=0em] (d) at ($ (a) + 3*(e) $) {}; \node [inner sep=0em] (a') at ($ (a) + (n) $) {}; \node [inner sep=0em] (b') at ($ (b) + (n) $) {}; \node [inner sep=0em] (c') at ($ (c) + (n) $) {}; \node [inner sep=0em] (d') at ($ (d) + (n) $) {}; \node [inner sep=0em] (a0) at ($ (a) + (T) $) {}; \node [inner sep=0em] (a1) at ($ (a0) + (u) $) {}; \node [inner sep=0em] (a2) at ($ (a0) + 2*(u) $) {}; \node [inner sep=0em] (a'0) at ($ (a0) + (n) $) {}; \node [inner sep=0em] (a'1) at ($ (a1) + (n) $) {}; \node [inner sep=0em] (a'2) at ($ (a2) + (n) $) {}; \node [inner sep=0em] (b0) at ($ (b) + (T) $) {}; \node [inner sep=0em] (b1) at ($ (b0) + (u) $) {}; \node [inner sep=0em] (b'0) at ($ (b0) + (n) $) {}; \node [inner sep=0em] (b'1) at ($ (b1) + (n) $) {}; \node [inner sep=0em] (c0) at ($ (c) + (T) $) {}; \node [inner sep=0em] (c1) at ($ (c0) + (u) $) {}; \node [inner sep=0em] (c'0) at ($ (c0) + (n) $) {}; \node [inner sep=0em] (c'1) at ($ (c1) + (n) $) {}; \node [inner sep=0em] (d0) at ($ (d) + (T) $) {}; \node [inner sep=0em] (d1) at ($ (d0) + (u) $) {}; \node [inner sep=0em] (d'0) at 
($ (d0) + (n) $) {}; \node [inner sep=0em] (d'1) at ($ (d1) + (n) $) {}; \draw (a) -- (a'); \draw (a) -- (b'); \draw (a) -- (c'); \draw (a) -- (d'); \draw (b) -- (a'); \draw (b) -- (b'); \draw (b) -- (c'); \draw (b) -- (d'); \draw (c) -- (a'); \draw (c) -- (b'); \draw (c) -- (c'); \draw (c) -- (d'); \draw (d) -- (a'); \draw (d) -- (b'); \draw (d) -- (c'); \draw (d) -- (d'); \draw [dotted] (a2) -- (a); \draw [dotted] (b1) -- (b); \draw [dotted] (c1) -- (c); \draw [dotted] (d1) -- (d); \draw [dotted] (a'2) -- (a'); \draw [dotted] (b'1) -- (b'); \draw [dotted] (c'1) -- (c'); \draw [dotted] (d'1) -- (d'); \node [inner sep=0.2em,label=below:{$A_1$}] at (a) {$\bullet$}; \node [inner sep=0.2em,label=below:{$B_1$}] at (b) {$\bullet$}; \node [inner sep=0.2em,label=below:{$C_1$}] at (c) {$\bullet$}; \node [inner sep=0.2em,label=below:{$D_1$}] at (d) {$\bullet$}; \node [inner sep=0.2em,label=left:{$A_2$}] at (a') {$\bullet$}; \node [inner sep=0.2em,label=left:{$B_2$}] at (b') {$\bullet$}; \node [inner sep=0.2em,label=right:{$C_2$}] at (c') {$\bullet$}; \node [inner sep=0.2em,label=right:{$D_2$}] at (d') {$\bullet$}; \draw [thin] (a0) -- (c'0); \draw [thin] (b0) -- (d'0); \draw [thin] (a0) -- (b'0); \draw [thin] (b0) -- (c'0); \draw [thin] (a0) -- (a'0); \draw [thin] (b0) -- (b'0); \draw [thin] (c0) -- (c'0); \draw [thin] (d0) -- (d'0); \draw [thin] (b0) -- (a'0); \draw [thin] (c0) -- (b'0); \draw [thin] (c0) -- (a'0); \draw [thin] (d0) -- (b'0); \draw [thin] (a0) -- (d'1); \draw [thin] (d1) -- (a'0); \draw [thin] (a1) -- (b'0); \draw [thin] (d1) -- (c'0); \draw [thin] (c0) -- (d'1); \draw [thin] (b0) -- (a'1); \draw [thin] (a1) -- (d'0); \draw [thin] (c1) -- (d'0); \draw [thin] (a1) -- (c'1); \draw [thin] (b1) -- (d'1); \draw [thin] (b1) -- (c'1); \draw [thin] (a1) -- (a'1); \draw [thin] (b1) -- (b'1); \draw [thin] (c1) -- (c'1); \draw [thin] (d1) -- (d'1); \draw [thin] (c1) -- (a'1); \draw [thin] (d1) -- (b'1); \draw [thin] (c1) -- (b'1); \draw [thin] (a2) -- (a'2); \draw 
[thin] (a2) -- (b'1); \draw [thin] (a2) -- (c'1); \draw [thin] (a2) -- (d'1); \draw [thin] (b1) -- (a'2); \draw [thin] (c1) -- (a'2); \draw [thin] (c1) -- (d'0); \draw [thin] (d1) -- (a'2); \draw [thin] (d0) -- (c'1); \node [inner sep=0.2em] at (a0) {$\bullet$}; \node [inner sep=0.2em] at (a1) {$\bullet$}; \node [inner sep=0.2em] at (a2) {$\bullet$}; \node [inner sep=0.2em] at (b0) {$\bullet$}; \node [inner sep=0.2em] at (b1) {$\bullet$}; \node [inner sep=0.2em] at (c0) {$\bullet$}; \node [inner sep=0.2em] at (c1) {$\bullet$}; \node [inner sep=0.2em] at (d0) {$\bullet$}; \node [inner sep=0.2em] at (d1) {$\bullet$}; \node [inner sep=0.2em] at (a'0) {$\bullet$}; \node [inner sep=0.2em] at (a'1) {$\bullet$}; \node [inner sep=0.2em] at (a'2) {$\bullet$}; \node [inner sep=0.2em] at (b'0) {$\bullet$}; \node [inner sep=0.2em] at (b'1) {$\bullet$}; \node [inner sep=0.2em] at (c'0) {$\bullet$}; \node [inner sep=0.2em] at (c'1) {$\bullet$}; \node [inner sep=0.2em] at (d'0) {$\bullet$}; \node [inner sep=0.2em] at (d'1) {$\bullet$}; \end{tikzpicture} \end{center} \end{figure} Note that maximal contexts in $s$ each contain two elements, so that to model these in an $n$-partite scenario requires $n=2$. The sets $X_{\text{Bell}}$ and $\mathcal{M}_{\text{Bell}}$ together with $O$ define the smallest bipartite measurement scenario which models the contexts $\mathcal{M}$ in bipartite form. For all $P,Q \in X$, the possible sections of $s_\text{Bell}$ at the context $P_1Q_2$ are defined either by the possible sections of $s$ at the context $PQ$ in the case that $P \neq Q$, or by the diagonal sections $\{oo \mid o \in \mathsf{supp}(s_{\{P\}})\}$ in the case that $P=Q$. The equivalence in terms of contextuality of the original and constructed Bell models was established in \cite{MansfieldBarbosa:QPL2013} and covers the type of contextuality (strong, logical, probabilistic) as well as the degree to which the model violates the analogous contextual inequalities. 
For our present purposes, it is also clear that the system of consistency equations yielded by a constructed Bell model is equivalent to the system of consistency equations for the original model, once the simple identifications of coefficients arising from the diagonalised contexts are taken into account. \section*{Acknowledgements} Support from the following is gratefully acknowledged: EPSRC EP/K015478/1, Quantum Mathematics and Computation (SA); Fondation Sciences Math\'{e}matiques de Paris, postdoctoral research grant eotpFIELD15RPOMT-FSMP1, Contextual Semantics for Quantum Theory (SM); FCT -- Funda\c{c}\~ao para a Ci\^encia e Tecnologia (Portuguese Foundation for Science and Technology), PhD grant SFRH/BD/94945/2013 (RSB); John Templeton Foundation, Categorical Unification (RSB); U.S. AFOSR FA9550-12-1-0136, Topological and Game-Semantic Methods for Understanding Cyber-Security (SA, KK); the Oxford Martin School James Martin Program on Bio-inspired Quantum Technologies, (SA, SM, RSB); and Templeton World Charity Foundation (RL). We are grateful to Janne Kujala for a number of helpful comments which led to improvements and clarifications in the presentation. We also thank the anonymous journal referee for their comments. \bibliographystyle{apalike}
\section{Transformations and dynamics} One of the biggest and most fundamental questions in relativity theory is what it means for something to be ``relativistic''. Newton assumed an absolute reference frame and defined relative motion as the difference of two absolute motion vectors, to be observed from outside the absolute frame. Galileo's idea was to reject the absolute frame, associate a reference frame to each body, and let each body observe the other bodies' motion inside its own reference frame. This concept came with the restriction to ``inertial reference frames'', by which are meant reference frames that move with constant speed relative to each other. This condition was added when it was discovered that accelerating reference frames do not share the same laws of physics. For example, in classical dynamics, adopting mutually accelerating frames violates the law of action-reaction. The condition of sharing the same laws of physics was taken as fundamental for the theory of relativity and is called the ``principle of relativity''. To represent the motion of inertial reference frames, Galilean relativity theory introduced a naturally associated ``spacetime'' coordinate transformation, called the Galilean transformation: \begin{eqnarray*} t^{\prime } &=&t \\ x^{\prime } &=&x-vt \end{eqnarray*} where $v$ is the relative speed between the two inertial frames involved. With this, it was possible to restate the principle of relativity as ``The laws of physics must be invariant under the Galilean transformation.'' Knowing that the adoption of inertial reference frames violates the third law (the law of action-reaction) of Newton's dynamics, it seems natural to ask whether the second law and the law of gravity are invariant under the Galilean transformation. Curiously, this question was never considered. However, as we will present below, the answer is ``yes'' for the Galilean transformation. 
\begin{equation*} F=m\frac{d^{2}x}{dt^{2}}\quad \implies \quad F=m\frac{d^{2}(x-vt)}{dt^{2}}=m\frac{d^{2}x}{dt^{2}}. \end{equation*} \begin{equation*} F=\frac{GmM}{(x_{m}-x_{M})^{2}}\quad \implies \quad F=\frac{GmM}{((x_{m}-vt)-(x_{M}-vt))^{2}}=\frac{GmM}{(x_{m}-x_{M})^{2}}. \end{equation*} Hence, in the case of the Galilean transformation, the damage inflicted by relativization is limited to the loss of the third law. \textbf{Remark (1)} \textit{By an identical argument we can show that the Coulomb force law is invariant under the Galilean transformation.} To the Galilean relativity theory Einstein added an extra axiom of the constancy of the speed of light, which says that the speed of light is the constant $c$ in any inertial reference frame, in other words $c+v=c$.\cite{Ein} The result is what we now call the special theory of relativity. The simplest and most effective refutation of this claim came from Anderton \cite{And} of the Natural Philosophy Alliance, who incisively pointed out that ``if $c+v=c$ is true then $c$ is not a speed.'' We present the following argument to back up Anderton's argument. Assume that $v$ is the absolute speed of the Michelson-Morley apparatus in the absolute space. Even if $c+v$ is the absolute speed of the light moving towards the mirror, the effect of this cancels out because the mirror is also moving with speed $v$ in the absolute frame. By the same token, though the reflected light moves with speed $c-v$ towards the emitter of light, as the emitter is moving with speed $v$, the effect of $v$ cancels out. So, we will never detect this $v$. The constancy axiom thus came from an incorrect interpretation of the Michelson-Morley experiment. From it, Einstein derived the Lorentz transformation between two inertial reference frames. 
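The cancellation argument can be made concrete with a small numerical sketch. It implements the argument exactly as stated here (light at absolute speed $c+v$ outward and $c-v$ on the return leg, emitter and mirror both drifting at $v$); the arm length and sample speeds are arbitrary choices of ours.

```python
# Numerical sketch of the cancellation argument as stated above: emitter and
# mirror both drift at absolute speed v, while the light is taken to move at
# c + v outward and c - v on the return leg.
c = 3.0e8    # speed of light (m/s)
L = 10.0     # emitter-to-mirror arm length (m)

def round_trip_time(v):
    t_out = L / ((c + v) - v)    # closing speed on the outward leg is c
    t_back = L / ((c - v) + v)   # closing speed on the return leg is c
    return t_out + t_back

# The absolute drift speed v cancels: the round trip always takes 2L/c,
# so on this reading the interferometer could never reveal v.
for v in (0.0, 1.0e4, 3.0e7):
    assert abs(round_trip_time(v) - 2.0 * L / c) < 1e-15
```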
Before Einstein, Lorentz derived a coordinate transformation only between the absolute aether frame and an observer's frame moving in the absolute frame, assuming that the length of a matter moving in the absolute frame contracts (also referred to as ``Lorentz-FitzGerald contraction'').\cite{Lor} Einstein generalized this result of Lorentz to the setting of arbitrary inertial reference frames without involving the absolute frame. It goes as follows: \begin{eqnarray*} x^{\prime } &=&\frac{(x-vt)}{\sqrt{1-(v/c)^{2}}} \\ t^{\prime } &=&\frac{1}{\sqrt{1-(v/c)^{2}}}\left( t-\frac{vx}{c^{2}}\right) . \end{eqnarray*} With this, Einstein rewrote the principle of relativity to read that ``the laws of physics must be invariant under the choice of inertial reference frames'', which was later taken to mean invariance under the Lorentz transformations. Before this, the same words referred, though never explicitly, to invariance under the Galilean transformations. Being the extension of the Galilean relativity theory, this theory violates the third law of dynamics. Interestingly, the second law and the law of gravity also fail to be invariant under the Lorentz transformation, as we have \begin{equation*} F=m\frac{d^{2}x}{dt^{2}}\quad \implies \quad F=m\frac{d^{2}}{dt^{2}}\frac{(x-vt)}{\sqrt{1-(v/c)^{2}}}\neq m\frac{d^{2}x}{dt^{2}}. \end{equation*} \begin{equation*} F=\frac{GmM}{(x_{m}-x_{M})^{2}}\quad \implies \quad F=\frac{GmM}{\left( \frac{(x_{m}-vt)}{\sqrt{1-(v/c)^{2}}}-\frac{(x_{M}-vt)}{\sqrt{1-(v/c)^{2}}}\right) ^{2}}=\frac{GmM}{\left( \frac{(x_{m}-x_{M})}{\sqrt{1-(v/c)^{2}}}\right) ^{2}}\neq \frac{GmM}{(x_{m}-x_{M})^{2}}. \end{equation*} This means that under Einsteinian relativisation all axioms of Newtonian dynamics fail except the first axiom, the Law of Inertia. 
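The non-invariance computed above can also be checked numerically. The following sketch is illustrative only: it uses natural units ($c=1$), an arbitrary sample trajectory $x(t)=t^{3}$, and a finite-difference second derivative; all of these are our own choices.

```python
from math import sqrt

# Numerical check: the Galilean map x -> x - v t leaves d^2x/dt^2 unchanged,
# while the Lorentz spatial map x -> (x - v t)/sqrt(1 - (v/c)^2) rescales it
# by the (constant) gamma factor.
c, v = 1.0, 0.6
gamma = 1.0 / sqrt(1.0 - (v / c) ** 2)   # = 1.25 for v = 0.6 c

def x(t):
    return t ** 3                         # acceleration d^2x/dt^2 = 6 t

def second_derivative(f, t, h=1e-3):
    # central finite difference, exact for cubics up to rounding error
    return (f(t + h) - 2.0 * f(t) + f(t - h)) / h ** 2

t0 = 2.0
a = second_derivative(x, t0)                                     # ~ 6 t0 = 12
a_gal = second_derivative(lambda t: x(t) - v * t, t0)            # unchanged
a_lor = second_derivative(lambda t: gamma * (x(t) - v * t), t0)  # gamma * a

assert abs(a_gal - a) < 1e-6          # Galilean: second law invariant
assert abs(a_lor - gamma * a) < 1e-6  # Lorentz: rescaled by gamma
assert abs(a_lor - a) > 1.0           # hence not invariant
```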
The damage inflicted by relativisation is thus much greater under Einsteinian relativisation than under Galilean relativisation. \textbf{Remark (2)} \textit{Again, by an identical argument we can show that the Lorentz transformation fails to conserve Coulomb's law.} \section{Transformations and electromagnetism} One of the major reasons for the introduction of the Lorentz transformation was that Lorentz discovered that Maxwell's electromagnetic wave equation is not invariant under the Galilean transformation. Lorentz then found a better coordinate transformation, now called the Lorentz transformation, which does preserve Maxwell's electromagnetic wave equation.\cite{Lor} Einstein generalized this result by showing that under the Lorentz transformation all basic equations of Maxwell's electromagnetic field theory are invariant.\cite{Ein} This became the vindication for the claim that Maxwell's electromagnetic field theory is relativistic in the Einsteinian sense. Though wave equations are not conserved under Galilean transformations, wave functions are transformed into wave functions through Galilean transformations. So, it is not quite clear that we needed Lorentz transformations to begin with. If the issue of relativity is just the changing of reference frames, then the Galilean transformation is the most natural transformation representing the choice of inertial reference frames. The dominant argument for supporting the Lorentz transformation is twofold: \begin{enumerate} \item The Lorentz transformation maps electromagnetic wave equations into electromagnetic wave equations while Galilean transformations fail to do so. The right response to this is that wave mechanics and particle mechanics are entirely different theories covering entirely different issues of physics. The Galilean transformation came from the consideration of relativising particle kinematics. 
It is therefore entirely expected that this transformation does not deal with physical waves, which exist on continuous wave media. In wave mechanics, it is not particles that move in the direction of the wave; it is the localized vibration energy of the medium that moves. For this reason, wave mechanics has no naturally associated momentum, the product of mass and speed. The hypothetical momentum of waves came from de Broglie's relativistic theory of waves.\cite{DeB} \item The Lorentz transformation transforms Maxwell's electromagnetic equations into equivalent equations. We will discuss later the issue of whether Galilean transformations do the same. \end{enumerate} Maxwell reduced the entire electromagnetic theory to the following field plus current equations: \begin{equation*} \mathbf{\nabla \times E}=-\frac{1}{c}\frac{\partial \mathbf{H}}{\partial t},\quad \mathbf{\nabla \cdot E}=4\pi \rho ,\quad \mathbf{\nabla \cdot H}=0,\quad \mathbf{J}=\rho \mathbf{v},\quad \mathbf{\nabla \times H}=\frac{4\pi }{c}\mathbf{J}+\frac{1}{c}\frac{\partial \mathbf{E}}{\partial t} \end{equation*} where $c=\frac{1}{\sqrt{\epsilon _{0}\mu _{0}}}$. Here $\rho $ is the electric charge density and $\mathbf{J}$ is the conducting current created by the charge density moving with speed $\mathbf{v}$. Maxwell obtained the last equation, called the ``Generalized Amp\`{e}re's Law'', under the assumption that the charges in $\mathbf{J}$ move with constant speed $\mathbf{v}$. This restriction was removed later. This introduction of $\mathbf{J}$ into the field theory is problematic. According to the definition of the electric force field, charges placed in a field will not affect the field, meaning that the placed charges will not affect the other charges that create the electric field. Ontologically, it is hard to understand $\mathbf{J}$ at this basic level. How is it possible that mutually repelling electrons can form a coherent bundle? 
A possible answer says that this is possible inside a conductor. But there is a vicious circularity here: conductors are objects to be studied in material science, where we already have to use the theory of electromagnetism. This problem somewhat resembles the problem of the tension rope in classical dynamics. The rope is supposed to be a massless entity that moves under acceleration; then it must be the case that all such ropes move with infinite speed. How does it connect two bodies? Here is yet another consequence of the logical inconsistency of Maxwell's electromagnetic field theory discussed above. Maxwell obtained the electromagnetic wave equation \begin{equation*} \mathbf{\nabla }^{2}\mathbf{E}=\frac{1}{c^{2}}\frac{\partial ^{2}\mathbf{E}}{\partial t^{2}}\qquad \textrm{and}\qquad \mathbf{\nabla }^{2}\mathbf{H}=\frac{1}{c^{2}}\frac{\partial ^{2}\mathbf{H}}{\partial t^{2}} \end{equation*} under the condition that $\mathbf{J}=0$ (meaning that there is no conducting current). From this wave equation, he calculated the speed of the electromagnetic wave in vacuum to be $c=1/\sqrt{\varepsilon _{0}\mu _{0}}$. Interestingly, Maxwell also showed that electromagnetic waves are created by accelerating charges. But do accelerating charges not constitute a current? According to the mathematical definition of a current, even a single moving charge is a current. In fact, radio engineers postulate that a closed circular circuit conducting electrons produces an electromagnetic wave, and that one cycle of an electron corresponds to one cycle of the produced electromagnetic wave. This is because electrons in the circuit are under centripetal acceleration. Lorentz found out that the Galilean transformation of the electromagnetic wave equations does not lead to wave equations. This led him to try his own Lorentz transformation instead, and he showed that the Lorentz transformation maps the electromagnetic wave equation to an electromagnetic wave equation. 
\textbf{Remark (3)} \textit{This claim by Lorentz is to be refuted later in \textbf{Section 6}, however.} The above formed a base for the claim that the Galilean transformation is invalid and should be replaced by the Lorentz transformation, despite all the other problems that Lorentz transformations create, some of which we discussed in the previous section. \section{Transformation of waves vs.\ transformation of wave equations} The argument that the Galilean transformation is invalid and should be replaced by the Lorentz transformation should be reconsidered due to the following legitimate argument. The wave equation in one dimension without sources, with propagation speed $k$, is \begin{equation*} \mathbf{\nabla }^{2}\phi =\frac{1}{k^{2}}\frac{\partial ^{2}\phi }{\partial t^{2}}. \end{equation*} It is well known that when we Galilean transform this equation the result is not a wave equation anymore. The general solution of this wave equation is \begin{equation*} \phi (x,t)=\psi _{+}(x-kt)+\psi _{-}(x+kt). \end{equation*} The first term is a wave propagating with speed $+k$, and the second one with $-k$. Substituting $x-vt$ for $x$, the Galilean transformation turns this general solution into \begin{equation*} \phi ^{\prime }(x,t)=\psi _{+}(x-(k+v)t)+\psi _{-}(x+(k-v)t). \end{equation*} This is a superposition of two waves and so it is a wave. This wave has a different speed from the original wave. But is that not exactly what is expected under a Galilean change of frame? We can support this argument using Fourier expansion too. A sinusoidal wave is transformed into a sinusoidal wave; so a Fourier expansion which describes a wave also transforms into a Fourier series which represents a wave. However, when we Lorentz transform a wave function, we do not obtain a wave function in general. For example, it is well known that a sinusoidal wave will be transformed into a sinusoidal wave by the Lorentz transformation if the wave amplitude is the solution of a wave equation which is invariant under the Lorentz transformation. 
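The claim that the Galilean substitution still yields a superposition of two traveling waves, now with shifted speeds $k+v$ and $-(k-v)$, can be checked numerically. The sketch below follows the substitution convention $x\mapsto x-vt$ used above; the profile functions and sample values are arbitrary choices of ours.

```python
from math import exp, sin

# The Galilean substitution x -> x - v t applied to the general solution
# psi_+(x - k t) + psi_-(x + k t) gives two traveling terms with shifted
# speeds: psi_+(x - (k+v) t) and psi_-(x + (k-v) t).
k, v = 2.0, 0.5

def psi_plus(u):  return sin(u)        # right-moving profile (arbitrary)
def psi_minus(u): return exp(-u * u)   # left-moving profile (arbitrary)

def right_term(x, t): return psi_plus((x - v * t) - k * t)   # psi_+(x-(k+v)t)
def left_term(x, t):  return psi_minus((x - v * t) + k * t)  # psi_-(x+(k-v)t)

# Each term is constant along its shifted characteristic line, i.e. it is
# still a traveling wave, just with a Doppler-shifted speed.
x0, t0, dt = 0.3, 1.0, 0.7
assert abs(right_term(x0 + (k + v) * dt, t0 + dt) - right_term(x0, t0)) < 1e-9
assert abs(left_term(x0 - (k - v) * dt, t0 + dt) - left_term(x0, t0)) < 1e-9
```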
For this reason, the relativistic theory of waves assumes that the wave equations are invariant under the Lorentz transformation. Under this assumption, we have the invariance under the Lorentz transformation of the phase of a plane wave \begin{equation*} \mathbf{k}^{\prime }\cdot \mathbf{r}^{\prime }-\omega ^{\prime }t^{\prime }=\mathbf{k}\cdot \mathbf{r}-\omega t \end{equation*} where $\mathbf{k}$ is the wave vector, $\mathbf{r}$ is a position vector and $\omega $ is the frequency. With this invariance, we obtain the following relativistic wave transformation equations: \begin{eqnarray*} k_{x}^{\prime } &=&\frac{1}{\sqrt{1-(v/c)^{2}}}\left( k_{x}-v\frac{\omega }{c^{2}}\right) \\ k_{y}^{\prime } &=&k_{y} \\ k_{z}^{\prime } &=&k_{z} \\ \omega ^{\prime } &=&\frac{1}{\sqrt{1-(v/c)^{2}}}\left( \omega -vk_{x}\right) . \end{eqnarray*} So it is clear that the theory of Lorentz transformation of waves is not so universal. It applies only to the waves that are invariant under the transformation. And it has been claimed that electromagnetic waves are examples of such waves. Clearly it has the same kind of deficiency as the Galilean transformation of waves, if not worse. \textbf{Remark (4)} \textit{It is clear that the phase $\mathbf{k}\cdot \mathbf{r}-\omega t$ is Galilean invariant for any wave.} \textbf{Remark (5)} \textit{We will show later in \textbf{Section 6} that, contrary to what has been claimed by Lorentz and Einstein, Lorentz transformations do not conserve any wave equations, electromagnetic or non-electromagnetic.} De Broglie used this transformation of ``relativistic'' waves and Einstein's relativistic theory of photons to create a relativistic wave-particle duality. 
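The stated invariance of the plane-wave phase under these transformation equations can be verified directly. The sketch below works in one spatial dimension with natural units ($c=1$); the event and wave values are arbitrary choices of ours.

```python
from math import sqrt

# Direct check of the phase invariance k' x' - omega' t' = k x - omega t,
# using the relativistic wave transformation equations quoted above.
c, v = 1.0, 0.6
gamma = 1.0 / sqrt(1.0 - (v / c) ** 2)

x, t = 2.0, 3.0          # an arbitrary event
kx, omega = 1.7, 0.9     # arbitrary wave vector component and frequency

# Lorentz transformation of the event coordinates
xp = gamma * (x - v * t)
tp = gamma * (t - v * x / c ** 2)

# the relativistic wave transformation equations
kxp = gamma * (kx - v * omega / c ** 2)
omegap = gamma * (omega - v * kx)

phase = kx * x - omega * t
phase_p = kxp * xp - omegap * tp
assert abs(phase_p - phase) < 1e-12   # the phase is invariant
```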
Observing an analogy between the above discussed relativistic wave transformation and the relativistic transformation of energy and momentum \begin{eqnarray*} p_{x}^{\prime } &=&\frac{1}{\sqrt{1-(v/c)^{2}}}\left( p_{x}-v\frac{E}{c^{2}}\right) \\ p_{y}^{\prime } &=&p_{y} \\ p_{z}^{\prime } &=&p_{z} \\ E^{\prime } &=&\frac{1}{\sqrt{1-(v/c)^{2}}}\left( E-vp_{x}\right) \end{eqnarray*} applied to the relativistic theory of photons \begin{equation*} E=hf=pc\qquad p=h/\lambda \end{equation*} where $\lambda $ is the wave length, de Broglie obtained the following relativistic equations for relativistic waves: \begin{equation*} \mathbf{p}=\hslash \mathbf{k}\qquad E=\hslash \omega . \end{equation*} This is how the wave-particle duality of quantum mechanics was introduced. It was unfortunate that all of this was done without noticing the following contradiction coming from the relativistic theory of photons: \begin{equation*} E=\sqrt{(cp)^{2}+(m_{0})^{2}c^{4}}=cp=\frac{m_{0}vc}{\sqrt{1-\left( \frac{v}{c}\right) ^{2}}}=\frac{0}{0}cv=c^{2}hf=hf. \end{equation*} The reason why electromagnetic wave equations are invariant under the Lorentz transformations is that Lorentz transformations were obtained within the context of Maxwell's electromagnetic field theory. Moreover, as we have shown above, the Lorentz transformation fails to map the axioms of Newton's dynamics into axioms of Newton's dynamics. What does this mean? A naturally expected answer would be that the concept of relativity is not applicable to dynamics, as it violates the law of action-reaction. But as we have seen in the first section, the Galilean transformation, which is also a coordinate transformation associated with inertial reference frames, conserves the second law and the gravitational law. The only deficiency of the Galilean transformation we know of is that it violates the third law of dynamics. 
This should mean that Lorentz transformations are more problematic than Galilean transformations, though both of them are invalid, being mathematical representations of the invalid concept of relatively moving reference frames, which violates the third law. However, in practice, in most cases of dynamics around us, we can assume that one mass is far too large for the reaction on it from a smaller mass to matter, and so the conservation of the second law and the gravitational law is enough to obtain a reasonable approximation. The action-reaction problem becomes important when we consider astronomical-scale masses under gravitational acceleration. \textbf{Remark (6)} \textit{It is a tradition of pure mathematics and logic to make sure that a result is always accompanied by the restrictions imposed in its derivation. Coordinate transformations are applicable only to inertial frames; no accelerating frame should be transformed. Yet in the discussion of the transformation of waves we completely forget that physical waves come from acceleration. This type of error appears in many places in theoretical physics, unfortunately. For example, the well-tested concept of centrifugal force comes from the abusive use of the reference frame of an orbiting body.} \section{Lorentz transformation and speed $c$} What does it mean that Einstein proved that all axioms of Maxwell's electromagnetic field theory are Lorentz invariant? This transformation was built by Lorentz only for the electromagnetic field theory, as a solution to the problem that the Michelson-Morley experiment introduced into the theory of electromagnetic fields. That $c$ is the speed of the light wave, an electromagnetic wave, in the other reference frame gave it an advantage. In other theories of physics, where the speed of light is not the issue, it is hard to imagine that this $c$ will play the role it played in Maxwell's electromagnetic field theory. 
Therefore, not surprisingly, the Lorentz transformation fails to preserve the law of gravity and the second law of classical dynamics. Maxwell's electromagnetic field theory is incomplete as a description of electrodynamics. Electrodynamics assumes classical dynamics as an underlying theory; this part was omitted by Maxwell and his proponents. This problem surfaced explicitly in the Lorentz force. The Lorentz force, which depends also upon the speed, contradicts the second law, which asserts that the force should depend upon acceleration, not speed. On the one hand the Galilean transformation preserves the second law and the gravitational law, and on the other hand the Lorentz transformation fails to do so. This important fact has never been noticed before. Einstein faced problems in showing the invariance of all axioms of Maxwell under the Lorentz transformation. Some of the axioms had to be translated into equivalent ones. This is rather expected. The problem is that $\mathbf{J}$ is not a force field. It is a different monstrosity that was challenged when Maxwell put it in his axiomatic theory of electromagnetic fields. $\mathbf{J}$ is defined in terms of $\mathbf{v}$. The Lorentz transformation has a problem with transforming $\mathbf{v}$, more specifically, with the relativistic addition of speeds. The argument goes as follows. Assume three inertial frames $F1$, $F2$ and $F3$. Let $v$ and $v^{\prime }$ be the mutual speeds between $F1$ and $F2$, and between $F2$ and $F3$, respectively. As these speeds are used to define the gamma factor, they must be pre-relativistic speeds, i.e.\ classical speeds. So, $v+v^{\prime }$ is the mutual speed between $F1$ and $F3$. The Lorentz transformation $L$ between $F1$ and $F3$ is defined as \begin{equation*} x^{\prime }=\frac{(x-(v+v^{\prime })t)}{\sqrt{1-(v+v^{\prime })^{2}/c^{2}}},\quad y^{\prime }=y,\quad z^{\prime }=z,\quad t^{\prime }=\frac{\left( t-\frac{(v+v^{\prime })x}{c^{2}}\right) }{\sqrt{1-(v+v^{\prime })^{2}/c^{2}}}. 
\end{equation*} Let $L$ and $L^{\prime }$ be the Lorentz transformations from $F1$ to $F2$ and from $F2$ to $F3$, \begin{equation*} x^{\prime }=\frac{(x-vt)}{\sqrt{1-v^{2}/c^{2}}},\quad y^{\prime }=y,\quad z^{\prime }=z,\quad t^{\prime }=\frac{\left( t-\frac{vx}{c^{2}}\right) }{\sqrt{1-v^{2}/c^{2}}} \end{equation*} and \begin{equation*} x"=\frac{(x^{\prime }-v^{\prime }t^{\prime })}{\sqrt{1-(v^{\prime })^{2}/c^{2}}},\quad y"=y^{\prime },\quad z"=z^{\prime },\quad t"=\frac{\left( t^{\prime }-\frac{v^{\prime }x^{\prime }}{c^{2}}\right) }{\sqrt{1-(v^{\prime })^{2}/c^{2}}}, \end{equation*} respectively. It is clear that $L\neq L^{\prime }\circ L$, where $L^{\prime }\circ L$ is the mathematical composition of $L$ and $L^{\prime }$. The mainstream contends that, in the special theory of relativity, calculating $v+v^{\prime }$ is the wrong thing to do. It is claimed that this addition should be replaced by the relativistic addition (transformation) of speeds \begin{equation*} v\oplus v^{\prime }=\frac{v+v^{\prime }}{1+vv^{\prime }/c^{2}}. \end{equation*} This is to say that $L$ should be \begin{equation*} x"=\frac{(x-(v\oplus v^{\prime })t)}{\sqrt{1-(v+v^{\prime })^{2}/c^{2}}},\quad y"=y,\quad z"=z,\quad t"=\frac{\left( t-\frac{(v\oplus v^{\prime })x}{c^{2}}\right) }{\sqrt{1-(v+v^{\prime })^{2}/c^{2}}}. \end{equation*} If so, then why is $v+v^{\prime }$ not replaced by $v\oplus v^{\prime }$ in the gamma factor? We also have a problem with $(v\oplus v^{\prime })$ in the Lorentz transformation: the Lorentz transformation is the agent that introduces the concept of relativity, and one cannot use the already relativistic concept $v\oplus v^{\prime }$ to define such a transformation. This is a conceptual vicious circularity. The sheer fact that this kind of addition works for composing Lorentz transformations does not justify its use. 
Unfortunately this version of $L$ still fails $L=L^{\prime }\circ L$. Mathematically, Lorentz transformations are linear transformations from 4D space to itself, so it is highly irregular that the algebraic composition of such transformations is not the desired transformation. Conceptually, $(v+v^{\prime })$ is the classically measured speed. The reason why $(v\oplus v^{\prime })$ was introduced in the numerator part is that the classical addition $(v+v^{\prime })$ does not work for the relativistic addition of speeds. As the relativistic addition $(v\oplus v^{\prime })$ is introduced by a relativistic argument, it is viciously circular to use this relativistic version in the gamma factor. Moreover, mathematically we have a problem too. It is naturally expected that $x"/t"$ will serve as the observed speed $v\oplus v^{\prime }$. There is no way to prove that they are equivalent. Here is a simple calculation that leads to this conclusion. \begin{equation*} \frac{x"}{t"}=\frac{(x-(v\oplus v^{\prime })t)}{\left( t-\frac{(v\oplus v^{\prime })x}{c^{2}}\right) }=\frac{c^{2}(x-(v\oplus v^{\prime })t)}{c^{2}t-(v\oplus v^{\prime })x}=\frac{c^{2}x-c^{2}(v\oplus v^{\prime })t}{c^{2}t-(v\oplus v^{\prime })x}. \end{equation*} Note that even if we replace the classical $v+v^{\prime }$ in the gamma factor with the relativistic $v\oplus v^{\prime }$, the above calculation holds. Now assume that \begin{equation*} \frac{c^{2}x-c^{2}(v\oplus v^{\prime })t}{c^{2}t-(v\oplus v^{\prime })x}=v\oplus v^{\prime }. \end{equation*} Then we have \begin{eqnarray*} c^{2}x-c^{2}(v\oplus v^{\prime })t &=&(v\oplus v^{\prime })(c^{2}t-(v\oplus v^{\prime })x) \\ c^{2}x &=&2c^{2}(v\oplus v^{\prime })t-(v\oplus v^{\prime })^{2}x \\ c^{2}x+(v\oplus v^{\prime })^{2}x &=&2c^{2}(v\oplus v^{\prime })t \\ \frac{x}{t} &=&\frac{2c^{2}(v\oplus v^{\prime })}{c^{2}+(v\oplus v^{\prime })^{2}}. \end{eqnarray*} There is no reason for this to be true. 
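The conclusion of this calculation can be checked numerically: requiring $x"/t"$ to equal the composed speed forces $x/t$ to one special value, which generally differs from the composed speed itself. The sketch below treats the composed speed as an arbitrary parameter $w$ standing for $v\oplus v^{\prime }$, with $|w|<c$; all numerical values are our own choices.

```python
# If x"/t" is required to equal a composed speed w, solving
# c^2 x - c^2 w t = w (c^2 t - w x) for x forces the ratio x/t to the
# single value 2 c^2 w / (c^2 + w^2), which is generally not w.
c = 1.0
w = 0.7    # stands for v (+) v'; arbitrary value with |w| < c
t = 1.0
x = 2.0 * c ** 2 * w * t / (c ** 2 + w ** 2)   # the forced value of x

# The forced x does satisfy the assumed equality x"/t" = w ...
assert abs((x - w * t) / (t - w * x / c ** 2) - w) < 1e-12
# ... but the required ratio x/t is then a special value different from w.
assert abs(x / t - w) > 1e-3
```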
Empirically speaking, even though it could be possible to measure $v$, what about $v^{\prime }$? It is next to impossible to measure $v+v^{\prime }$, is it not? If $v$ is the speed of a star A moving away from us in distance and if $v^{\prime }$ is the speed of another star B moving away from A, then how can we measure $v^{\prime }$ and so $v+v^{\prime }$? In the case of the axioms of electromagnetic fields, there is no $\mathbf{v}$ involved, and this is why there was no difficulty for the invariance of the field equations of Maxwell. Despite all of this, there are some positive results concerning the transformation of wave equations. Unlike the Galilean transformation, the Lorentz transformation maps electromagnetic wave equations into electromagnetic wave equations. The reason for this is straightforward: this transformation assumes that the constant $c$ is the speed of light in any inertial reference frame. \textbf{Remark (7)} \textit{Again, the above claim is false. As we will see later in \textbf{Section 6}, the claim that Lorentz transformations map wave equations into wave equations is false even for electromagnetic wave equations.} \section{Is Schr\"{o}dinger's wave equation relativistic?} \subsection{De Broglie relation} De Broglie obtained the following relativistic transformation of a plane wave for a wave invariant under the Lorentz transformation (we call it a ``relativistic wave"): \begin{equation*} k_{x}^{\prime }=\frac{1}{\sqrt{1-(v/c)^{2}}}\left( k_{x}-v\frac{\omega }{c^{2}}\right) ,\text{\quad }k_{y}^{\prime }=k_{y},\text{\quad }k_{z}^{\prime }=k_{z},\text{\quad }\omega ^{\prime }=\frac{\omega -vk_{x}}{\sqrt{1-(v/c)^{2}}} \end{equation*} where $\mathbf{k}=(k_{x},k_{y},k_{z})$ is the wave vector and $\omega $ is the frequency. We denote the wave number $\left\vert \mathbf{k}\right\vert $ by $k$.
This restriction to relativistic waves is in place because otherwise the wave phase $\mathbf{k}\cdot \mathbf{r}-\omega t$ would not be invariant under the Lorentz transformation. Using the analogy between this and the momentum-energy transformation \begin{equation*} p_{x}^{\prime }=\frac{1}{\sqrt{1-(v/c)^{2}}}\left( p_{x}-v\frac{E}{c^{2}}\right) ,\text{\quad }p_{y}^{\prime }=p_{y},\text{\quad }p_{z}^{\prime }=p_{z},\text{\quad }E^{\prime }=\frac{E-vp_{x}}{\sqrt{1-(v/c)^{2}}} \end{equation*} where $\mathbf{p}=(p_{x},p_{y},p_{z})$ is the momentum vector and $E$ is the energy, de Broglie proposed the following association between a particle and a wave (called a matter wave): \begin{equation*} \mathbf{p}=\hslash \mathbf{k},\quad E=\hslash \omega \end{equation*} where $\hslash $ is the reduced Planck constant $h/2\pi $. This is called the de Broglie relation. Though it resembles Einstein's particle-wave duality \begin{equation*} E=hf=pc,\quad p=h/\lambda \end{equation*} where $\lambda $ is the wave length, there is a fundamental difference. There are several issues to be discussed. \begin{enumerate} \item Unlike the photon-light duality, where the speed of the photon and that of light are equal, the phase speed of the matter wave and the speed of the particle can be different. \item De Broglie further assumed that, associated with a particle with speed $V$, was a wave having phase speed $w=\omega /k$. This association requires further explanation. \item De Broglie also assumed that the energy in the wave traveled along with a group speed $v_{g}=d\omega /dk$, which was identical to the particle's speed $V$. Here it is not quite clear what he meant by ``energy" in the wave. The de Broglie relation above is only a hypothesis based upon the above-mentioned analogy between the wave vector-frequency transformation and the momentum-energy transformation. Certainly this does not yield the concept of energy in the wave.
\end{enumerate} De Broglie assumed that $\omega ^{2}/c^{2}-k^{2}$ is invariant under the relativistic transformation, as an analogy to the relativistic invariance of $E^{2}/c^{2}-p^{2}$. Thus, \begin{equation*} \omega ^{2}/c^{2}-k^{2}=constant=C. \end{equation*} From this, it follows that \begin{equation*} \frac{2\omega }{c^{2}}\frac{d\omega }{dk}-2k=0. \end{equation*} This leads to \begin{equation*} v_{g}=\frac{d\omega }{dk}=\frac{c^{2}k}{\omega }. \end{equation*} As the phase speed is $w=\omega /k,$ we have \begin{equation*} v_{g}=\frac{c^{2}}{w}. \end{equation*} It now follows that either the phase speed $w$ or the group speed $v_{g}$ could exceed $c$, but not both. We do not know what this means for the special theory of relativity, which asserts that nothing moves faster than the speed $c$. All of this is relative to the \textit{analogy-based hypothesis} that a particle with speed $v$ has a wave dual called a matter wave whose group speed is $v_{g}$ and whose phase speed is $w=\omega /k$. A particle in motion carries energy, and so it is expected that the wave dual of this particle also carries energy of the same amount if the energy conservation law is to be respected. But according to wave mechanics, for a wave to carry energy it has to have a wave medium. A concern we have is that de Broglie's wave is a mathematical wave that appears to have no wave medium, just as the electromagnetic wave carries energy without having a wave medium. We have already pointed out that the electromagnetic field which carries electromagnetic waves is a fiction, a counter-factual modality that plays no ontological role in physics. So, what happened to the energy issue of the matter wave? This is not the case with de Broglie waves. However, it is not clear why we have to choose the Lorentz transformed version over the Galilean transformed version.
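The group-speed computation above can be checked symbolically. The sketch below (sympy assumed available) solves the dispersion relation $\omega ^{2}/c^{2}-k^{2}=C$ for $\omega (k)$ and verifies $v_{g}=c^{2}k/\omega $, and hence $v_{g}w=c^{2}$:

```python
import sympy as sp

k, c, C = sp.symbols('k c C', positive=True)

# dispersion relation omega^2/c^2 - k^2 = C, solved for the positive omega(k)
omega = c * sp.sqrt(k**2 + C)

v_group = sp.diff(omega, k)   # v_g = d(omega)/dk
v_phase = omega / k           # w = omega / k

assert sp.simplify(v_group - c**2 * k / omega) == 0  # v_g = c^2 k / omega
assert sp.simplify(v_group * v_phase - c**2) == 0    # v_g * w = c^2
```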
That the Galilean transformation of a wave function is a wave function seems to suggest that the theory of the Lorentz transformation of wave functions is rather self-serving. That the Lorentz transformation came from time dilation and length contraction, which cause paradoxes (contradictions), seems to suggest that there are more fundamental things that have to be reexamined in the theory of Lorentz transformations. Indeed, almost all waves that wave mechanics works on are not relativistic. The only familiar waves that are relativistic are the electromagnetic waves. But this is overshadowed by the fact that the electromagnetic theory, which gave birth to the electromagnetic waves, is not relativistic either. The most basic axioms of this theory, Coulomb's laws, are not relativistic, as we have established above. So, the claim that electromagnetic waves are relativistic strongly suggests that the theory of electromagnetism is inconsistent. \textbf{Remark (8)} \textit{As we will see later in \textbf{Section 6}, the claim that Lorentz transformations map wave equations into wave equations is false even for electromagnetic wave equations.} \subsection{Schr\"{o}dinger's wave equation} Schr\"{o}dinger used Hamilton's energy dynamics for the particle theory and applied de Broglie's pilot wave theory to produce a wave-particle duality that looks after the energy issue of de Broglie's relation.\cite{Sch}\cite{DeB} All waves propagating along the $x$-axis obey the following wave equation \begin{equation*} \frac{\partial ^{2}\Psi }{\partial x^{2}}=\frac{1}{\omega ^{2}}\frac{\partial ^{2}\Psi }{\partial t^{2}} \end{equation*} where $\Psi (x,t)$ is the wave function and $\omega $ is the wave speed. Here, we consider the wave function $\Psi $ whose squared modulus yields the probability of locating a particle at any point in the space. We consider only systems whose total energy $E$ is constant and whose particles move along the $x$-axis and are bound in space.
Then the frequency associated via the de Broglie relation, which is hypothetical and relativistic, with the bound particle is also constant, and we can take the wave function $\Psi (x,t)$ to be \begin{equation*} \Psi (x,t)=\psi (x)f(t). \end{equation*} As the frequency is assumed to be precisely defined, \begin{equation*} f(t)=\cos 2\pi \nu t. \end{equation*} So, we have \begin{equation*} \frac{\partial ^{2}\psi }{\partial x^{2}}=-\left( \frac{2\pi }{\lambda }\right) ^{2}\psi =-\left( \frac{p}{\hslash }\right) ^{2}\psi \end{equation*} where the wave length is $\lambda =\omega /\nu $ and the momentum of the particle is $p=h/\lambda $. We take the particle of mass $m$ to be interacting with its surroundings through a potential-energy function $V(x)$. The total energy of the system is given by \begin{equation*} E=E_{k}+V=\frac{p^{2}}{2m}+V \end{equation*} where $E_{k}$ is the kinetic energy of the particle. Then we have \begin{equation*} p^{2}=2m(E-V) \end{equation*} and so \begin{equation*} \frac{\hslash ^{2}}{2m}\frac{\partial ^{2}\psi }{\partial x^{2}}+(E-V)\psi =0. \end{equation*} Equivalently, \begin{equation*} -\frac{\hslash ^{2}}{2m}\frac{\partial ^{2}\psi }{\partial x^{2}}+V\psi =E\psi ,\quad \text{or, for the full wave function,}\quad i\hslash \frac{\partial \Psi }{\partial t}=-\frac{\hslash ^{2}}{2m}\frac{\partial ^{2}\Psi }{\partial x^{2}}+V\Psi . \end{equation*} This equation is called the (non-relativistic) Schr\"{o}dinger wave equation, as the energy equation involves the non-relativistic mass $m$ and it is not invariant under the Lorentz transformation. This does not mean that quantum mechanics is a non-relativistic theory, however. The derivation of the Schr\"{o}dinger wave equation involved the de Broglie relation, which is nothing but a relativistic theory. We observe an issue with the above argument.
It is claimed that \begin{equation*} \frac{\partial ^{2}\Psi }{\partial x^{2}}=\frac{1}{\omega ^{2}}\frac{\partial ^{2}\Psi }{\partial t^{2}} \end{equation*} is the wave equation and \begin{equation*} -\frac{\hslash ^{2}}{2m}\frac{\partial ^{2}\Psi }{\partial x^{2}}+V\Psi =i\hslash \frac{\partial \Psi }{\partial t} \end{equation*} is given as its example. This obviously becomes a wave equation only when $V=0$. As pointed out above, Schr\"{o}dinger's wave equation is in fact not a wave equation, as it is irreconcilable with the classical equation for waves. A further observation of its form indicates similarities between Schr\"{o}dinger's equation and the diffusion equation used in describing density fluctuations in materials due to diffusion. The diffusion equation is given as \begin{equation*} \frac{\partial \phi (x,t)}{\partial t}=\nabla \cdot \left( D(\phi ,x)\nabla \phi (x,t)\right) \end{equation*} where $\phi (x,t)$ is the density of the diffusing material at position $x$ and time $t$, and $D(\phi ,x)$ is the diffusion coefficient for density $\phi $ at position $x$. When $D$ is constant, the equation reduces to \begin{equation*} \frac{\partial \phi (x,t)}{\partial t}=D\nabla ^{2}\phi (x,t), \end{equation*} which is a partial differential equation with a first derivative in time and a second derivative in position, just like Schr\"{o}dinger's equation. This particular form of the diffusion equation was proposed by Fourier in 1822 to describe the heat distribution in a given region of a material over a particular time and hence is sometimes referred to as the ``heat equation".\cite{Fou} A crucial difference between the Schr\"{o}dinger equation and the diffusion equation is that the coefficient in the latter ($D$) is real, while in the former it is complex. Consider for instance the equation for a free particle: \begin{equation*} \frac{\partial \psi (x,t)}{\partial t}=\frac{i\hslash }{2m}\nabla ^{2}\psi (x,t).
\end{equation*} This difference makes the solutions of the diffusion equation decay with time (gradient), while the solutions of Schr\"{o}dinger's equation oscillate (wave). Note, however, that originally, before the Born interpretation became common, Schr\"{o}dinger attempted to interpret the wave function as an electronic charge distribution in space (with charge density at position $x$ and time $t$ proportional to $|\psi|^2$): ``the charge of the electron is not concentrated in a point, but is spread out through the whole space [...] the charge is nevertheless restricted to a domain of, say, a few Angstroms, the wavefunction $\psi$ practically vanishing at greater distance from the nucleus."\cite{Sch} This would suggest that he treated the charge density as having the character of a radiation, indicating certain gradient properties. It is unfortunate that ``wave equation" became the common name for Schr\"{o}dinger's equation, thereby confusing the classical wave equation with a formalism lacking grounding in ontology. It appears that the Schr\"{o}dinger equation is an attempt at merging the concept of wave-particle duality with the treatment of electronic charges in terms of a density distribution. Regarding the former, Schr\"{o}dinger himself had reservations about the veracity of matter waves, justifying the concept by stating that neglecting de Broglie's waves leads to serious difficulties in atomic mechanics. Regarding the latter, Schr\"{o}dinger himself noticed that this interpretation of the wave function does not work for systems of multiple electrons.\cite{Sch2} Nevertheless, Schr\"{o}dinger was aware of this problem and tried, unsuccessfully, to make his wave equation relativistic. Later, Gordon, Klein and Dirac attempted to resolve this problem in the development of quantum electrodynamics. With all of this, it is clear why Schr\"{o}dinger failed to show that his wave equation for particles is relativistic.
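The decay-versus-oscillation contrast noted above can be made concrete on a single Fourier mode $e^{ikx}$: the heat equation multiplies it by the real factor $e^{-Dk^{2}t}$, while the free Schr\"{o}dinger equation multiplies it by a pure phase. A minimal numerical sketch, in arbitrary units, using the exact mode solutions:

```python
import numpy as np

k, t = 2.0, 1.5   # arbitrary wave number and elapsed time
D = 0.7           # real diffusion coefficient
a = 0.7           # stands in for hbar/2m; the Schroedinger coefficient is i*a

# a mode exp(i k x) evolves by exp(lambda * t) with lambda = coeff * (i k)^2
amp_diffusion = np.exp(-D * k**2 * t)      # real, decaying factor
amp_schroed = np.exp(-1j * a * k**2 * t)   # pure phase factor

assert amp_diffusion < 1.0                 # diffusion: amplitude decays
assert np.isclose(abs(amp_schroed), 1.0)   # Schroedinger: modulus constant
```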
To make the matter even worse, Schr\"{o}dinger's equation is not relativistic, and thus a quantization of such an equation is impossible, because de Broglie's quantization of waves worked only for relativistic waves. Instead of relativising Schr\"{o}dinger's wave equation, Gordon and Klein quantized the relativistic energy-momentum equation of Einstein by replacing the energy variable and the momentum variable with the quantum energy operator and the quantum momentum operator. This, however, does not make the Schr\"{o}dinger wave equation relativistic and therefore does not compensate for the deficiency stated above. \textbf{Remark (9)} \textit{The energy-momentum relation is a consequence of the relativistic energy equation $e=mc^{2}$, which is false. Here, $m$ is the relativistic mass $m_{0}/\sqrt{1-(v/c)^{2}}$, which is obtained through a thought experiment that assumed that $v$ is constant. With this Einstein obtained the relativistic momentum $P=mv$. By taking a time derivative, Einstein then obtained the relativistic second law $F=dP/dt=vdm/dt+mdv/dt$. This led him to conclude that $e=mc^{2}$. Unfortunately, $v$ is a constant, which leads to $e=0$ instead. As Einstein pointed out, if $e=mc^{2}$ fails, the entire modern physics fails.} Dirac took advantage of the Gordon-Klein equation and derived a relativistic theory of electrons which yielded the positron and opened a gate to quantum electrodynamics, which is considered the most successful theory of physics in history. \section{Are wave equations really relativistic?} Now we have reached the point where the question has to be asked whether the wave equations really represent the waves that appear in physics correctly. Another question is whether the Lorentz transformation, which seemingly maps electromagnetic wave equations to electromagnetic wave equations, does so with ontological background.
The Galilean transformation fails to conserve electromagnetic wave equations while the Lorentz transformation conserves them; this seems to be the only reason why the Lorentz transformation replaced the Galilean transformation. The Galilean relativity theory was rejected (apart from the faulty interpretation of the Michelson-Morley experiment) because it failed to map electromagnetic wave equations to electromagnetic wave equations. So, it can be safely said that, as far as the wave theory is concerned, it was the failure to conserve electromagnetic wave equations which dethroned the Galilean transformation. Here we have to ask whether the wave equations are the basic axioms of physical theories. Clearly not. They are the product of the basic axioms under certain circumstances. So, logically there is no convincing reason why such secondary equations must be conserved under coordinate transformations. But to make the argument more articulate, let us discuss the issue in a more general setting. We apply the change of variables $x^{\prime }=\gamma (x-vt)$, $t^{\prime }=\gamma (t-vx/c^{2})$, with $\gamma =1/\sqrt{1-(v/c)^{2}}$, to the wave equation by the chain rule:
\begin{eqnarray*} \frac{\partial \psi(x',t')}{\partial x} &=& \frac{\partial \psi(x',t')}{\partial x'}\frac{\partial x'}{\partial x} + \frac{\partial \psi(x',t')}{\partial t'}\frac{\partial t'}{\partial x} \\ &=& \frac{\partial \psi(x',t')}{\partial x'}\frac{\partial \gamma(x-vt)}{\partial x} + \frac{\partial \psi(x',t')}{\partial t'}\frac{\partial \gamma(t-\frac{vx}{c^2})}{\partial x} \\ &=& \gamma\frac{\partial \psi(x',t')}{\partial x'} - \frac{\gamma v}{c^2}\frac{\partial \psi(x',t')}{\partial t'} \end{eqnarray*} Similarly, \begin{eqnarray*} \frac{\partial \psi(x',t')}{\partial t} &=& -\gamma v\frac{\partial \psi(x',t')}{\partial x'} + \gamma \frac{\partial \psi(x',t')}{\partial t'} \end{eqnarray*} Then, \begin{eqnarray*} \frac{\partial^2 \psi(x',t')}{\partial x^2} &=& \left(\gamma\frac{\partial}{\partial x'} - \frac{\gamma v}{c^2}\frac{\partial}{\partial t'} \right)\left(\gamma\frac{\partial}{\partial x'} - \frac{\gamma v}{c^2}\frac{\partial}{\partial t'} \right)\psi(x',t') \\ &=& \gamma^2 \frac{\partial^2 \psi(x',t')}{\partial x'^2} -2\frac{\gamma^2 v}{c^2}\frac{\partial^2 \psi(x',t')}{\partial x' \partial t'} + \frac{\gamma^2 v^2}{c^4}\frac{\partial^2 \psi(x',t')}{\partial t'^2} \end{eqnarray*} And similarly, \begin{eqnarray*} \frac{\partial^2 \psi(x',t')}{\partial t^2} &=& \gamma^2 v^2 \frac{\partial^2 \psi(x',t')}{\partial x'^2} -2\gamma^2 v \frac{\partial^2 \psi(x',t')}{\partial x' \partial t'} + \gamma^2 \frac{\partial^2 \psi(x',t')}{\partial t'^2} \end{eqnarray*} With this, the wave equation now becomes \begin{equation*} \gamma ^{2}\frac{\partial ^{2}\psi }{\partial x^{\prime 2}}-2\frac{\gamma ^{2}v}{c^{2}}\frac{\partial ^{2}\psi }{\partial x^{\prime }\partial t^{\prime }}+\frac{\gamma ^{2}v^{2}}{c^{4}}\frac{\partial ^{2}\psi }{\partial t^{\prime 2}}=\frac{1}{\omega ^{2}}\left( \gamma ^{2}v^{2}\frac{\partial ^{2}\psi }{\partial x^{\prime 2}}-2\gamma ^{2}v\frac{\partial ^{2}\psi }{\partial x^{\prime }\partial t^{\prime }}+\gamma ^{2}\frac{\partial ^{2}\psi }{\partial t^{\prime 2}}\right) .
\end{equation*} This is valid only under the condition $v=c=\omega $. The second equality ($c=\omega $) comes from the fact that $\omega $ is the wave speed. The first ($v=c$) implies that the frame speed is $c$, which is not possible in the special theory of relativity. This means that Einstein's claim that the electromagnetic wave equation is invariant under the Lorentz transformation is invalid. It is a well understood fact that there is no reference frame for light, on pain of contradiction. We summarize the results thus far as follows: \textbf{Conclusion (1)} \textit{The Lorentz transformation fails to conserve all wave equations, including electromagnetic wave equations.} \textbf{Conclusion (2)} \textit{The Lorentz transformation serves no imaginable purpose. It fails to conserve the third law, the second law, the gravitational law, and Coulomb's law. Now we also know that it does not conserve even the electromagnetic wave equations. This removes the claim that Einsteinian relativity theory is more appropriate than Galilean relativity theory. Naturally, the Lorentz transformation does not conserve wave functions either.} \textbf{Conclusion (3)} \textit{All of this is a consequence of the wrong interpretation of the Michelson-Morley experiment. As we demonstrated previously \cite{Kan3}, the Michelson-Morley experiment showed that one cannot detect $v$ in $c+v$ in the way we measure the speed of light. Hence it appears that the problems in modern physics started from the Michelson-Morley experiment.} The Galilean transformation conserves all of the basic laws and constructs of physics except the third law and the wave equation. The only issue with this transformation is that it is based upon the faulty concept of moving reference frames. To see the problem with moving reference frames, assume a train runs on a track. When the tip of the train's power pole touches the power line at point A, a spark occurs at A.
An observer located in the train straight down from the tip A of the power pole will observe that this light comes straight down to him/her from point A. But as point A is also a stationary point of the power line, the observer will also see that the light reaches him/her diagonally from point A on the power line.\cite{Kan2} Mathematically this problem can be explained as follows: in geometry one cannot move any point in a geometric space, as doing so breaks the metric structure of geometric spaces. One cannot move the point 5 to the position of the point 3 and \textit{vice versa}, as this breaks the metric topology of the real number line. This is why Newton did not attempt to move any geometric points. Instead he reduced a physical body to a point body and moved it inside a geometric space. If we cannot move even a single point in a geometric space, how can we move the entire space itself inside another space? If we move the point 5 on a real line, then what is the function that describes such motion? The topologist Ren\'{e} Thom pointed out that there is no point in geometry. In geometry we must assume a mysterious linear ordering among the real points. This makes the geometry a continuum. The mathematical logician (and founder of model theory) Abraham Robinson expressed this in terms of infinitesimals: points are all ``glued" together by invisible infinitesimals. In the end, standard real analysis and infinitesimal calculus do the same thing. \section{Relativistic transformations and 4D spacetime} A motion in the 3D space is a function $f(t)=(x,y,z)$. This can be expressed as a line graph in the 4D spacetime. If the speed of the motion is constant, the graph is a straight line, and if it is under acceleration, then the graph is a curved line. When we apply a Galilean transformation to this graph, the resulting graph is a translated line: \begin{equation*} f(t)=(x-vt,y,z).
\end{equation*} However, when we apply a Lorentz transformation to this graph, due to time dilation and length contraction combined, the resulting graph becomes incomprehensible and is unusable for the purposes of physics. In symbols, the resulting ``motion" becomes the graph of \begin{equation*} f\left( \frac{1}{\sqrt{1-(v/c)^{2}}}\left( t-\frac{vx}{c^{2}}\right) \right) =\left( \frac{1}{\sqrt{1-(v/c)^{2}}}\left( x-vt\right) ,y,z\right) . \end{equation*} This is expected, as under the Lorentz transformation the time and space coordinates are interdependent. \section{Dirac's aether theory} As can be seen in the vortex theory, which we will discuss in the next section, the whole philosophy of aether theory is to ``squeeze out" particles from a continuum. At the most fundamental level, as Ren\'{e} Thom pointed out, this is impossible, as it destroys the topology of the continuum. The difficulty the classical aether theory had is naturally expected because of the nature of the continuum. Dirac was the first to realize this paradigm upon the quantum field, which is the quantization of the classical field using the mathematical tool of Fourier expansion.\cite{Dir2} In this method, Dirac did not create a geometric point as the quantum. He created a finite approximation of an infinite Fourier expansion as a particle. So, Dirac's particles are infinitary objects described by waves. This project started with a new theory of photons proposed by Planck, which Planck himself did not take seriously and presented as a purely mathematical convention arising from graph fitting, as a last resort to resolve the mystery of the blackbody radiation. Planck presented the argument that if one accepts that the minimum energy carried by the electromagnetic wave is $hf$, where $f$ is the frequency of the wave and $h$ is a constant, which is now called the Planck constant, the infamous blackbody radiation problem is resolved.
So, Planck proposed that the light wave of frequency $f$ carries energy in amounts $nhf$, where $n$ is a natural number. Under the assumption that the speed $v$ of light in vacuum without conducting current is the constant $c$, which came from Maxwell's theory of electromagnetism, Einstein concluded that the mass of Planck's particle (the photon) must be $0$ to avoid the relativistic energy \begin{equation*} e=mc^{2}=\frac{m_{0}}{\sqrt{1-(v/c)^{2}}}c^{2} \end{equation*} of the photon becoming undefined (or divergent), where $m_{0}$ is the rest mass of the photon. With this convention, Einstein took the energy equation for the photon to become \begin{equation*} e=0/0=hf. \end{equation*} \textbf{Remark (10)} \textit{Einstein thought that since $0/0$ is equivalent to $0x=0$, and for the latter $x$ can be any number, $0/0$ can be any number, and he chose it to be $hf$. However, the difference between the two is that the former involves division by $0$, which is not allowed in mathematics, while the latter does not.} As discussed above, this leads to yet another contradiction. The relativistic energy equation $e=mc^{2}$ leads to the famous relativistic energy-momentum relation $e=\sqrt{(cp)^{2}+(m_{0}c^{2})^{2}}$, which in turn leads to the following contradiction \begin{equation*} e=\sqrt{(cp)^{2}+(m_{0})^{2}c^{4}}=cp=\frac{m_{0}vc}{\sqrt{1-\left( \frac{v}{c}\right) ^{2}}}=\frac{0}{0}c^{2}=c^{2}hf=hf. \end{equation*} Logically speaking, the real problem with the Planck-Einstein photon theory is that the issue of blackbody radiation was an empirical refutation of the classical electromagnetic field theory of Maxwell. The convention Planck and Einstein presented, which turned out to be invalid as we have shown, did not repair the deficiency of Maxwell's theory. No change was made to Maxwell's theory after Planck's proposal. So, these two mutually contradicting theories were combined together to produce another theory that makes opportunistic choices.
Namely, when it comes to most of the classical part of electromagnetism, it uses the original Maxwell theory, and when it comes to the issue of the light waves, it chooses the Planck-Einstein addition, which contradicts Maxwell. Dirac does not appear to have known of this fatal error of Planck-Einstein. But he was rightly unhappy with the \textit{ad hoc} nature of the process of obtaining the equation $e=0/0=hf$. He concluded that obtaining photons from the electromagnetic wave equation is the wrong thing to do. So instead, Dirac presented the photons through a Fourier expansion of the vector potential. In this way, he managed to obtain a richer theory of photons. However, deducing photons through a Fourier expansion of the vector potential lacks ontology. Also, the quantization of the charges and currents in Maxwell's theory remained to be reviewed. Dirac's solution to the problem of correctly quantizing charges and particles was to rely upon the Schr\"{o}dinger wave equations. He found it impossible to quantize charges and particles directly, as they are already particles. According to the basic principle of wave-particle duality, namely the de Broglie relation, quantum particles must come from waves. So, Dirac first converted particles such as charges into the Schr\"{o}dinger wave equations. Instead of going through von Neumann's quantization, Dirac used Fourier expansion of the solutions of the wave equations to create quantized particles. Particle interactions were modeled through the interference of the wave equations derived from the particles. Through this process Dirac obtained more particle varieties and more interesting operators on particles, such as the annihilation operators. Unfortunately, as we have discussed in the section ``Is Schr\"{o}dinger's wave equation relativistic?", Schr\"{o}dinger's wave equations are not relativistic, meaning that they are not invariant under the Lorentz transformation in general.
This means that Dirac's claim that his new theory of quantum electrodynamics is relativistic is false, as his quantization uses Schr\"{o}dinger's wave equations. To make the matter even more confusing, Schr\"{o}dinger's wave equation was obtained by applying the de Broglie relation to the classical Hamiltonian theory of particles, and this relation is relativistic upon the assumption that the wave of de Broglie is relativistic (meaning Lorentz-invariant). Schr\"{o}dinger's wave equation is not relativistic because Schr\"{o}dinger misunderstood what de Broglie did. Schr\"{o}dinger used the de Broglie relation to convert the classical Hamiltonian energy equation into a wave function. De Broglie did not associate a particle with a wave equation, however. Indeed, what he did was the opposite: he associated a particle having relativistic energy and relativistic momentum with waves. His wave-particle duality is a one-way association. Moreover, de Broglie had to assume that the wave equation in his theory is relativistic, meaning that it is invariant under the Lorentz transformation. For such a relativistic wave equation, he associated a particle with energy and momentum. In this way, Schr\"{o}dinger's wave-particle duality is fundamentally flawed, the invalidity of relativity theory aside. After all, as the relativity theory is inconsistent, there is no point in considering whether Dirac's theory is relativistic or not. Moreover, Dirac quantized electromagnetic fields, which are not physical reality but a modality, through Fourier expansion to obtain photons. This makes Dirac's photons suffer from the same conceptual obscurity as Planck-Einstein's photons, which are also the product of quantizing (in a different way) electromagnetic waves, which are modal waves.
Regarding Feynman's quantum electrodynamics: despite some improvements, such as leaving Hamiltonians behind and moving to the Lagrangian, this theory did not resolve the problem associated with Schr\"{o}dinger's wave equations discussed just above. The Gordon-Klein quantization of the invalid relativistic energy-momentum equation of Einstein does not offer any solution to the fundamental problem that Schr\"{o}dinger's wave equation is not relativistic. Replacing the relativistic energy variable and the relativistic momentum variable with the energy operator and the momentum operator in the faulty relativistic energy-momentum relation is not what a quantization should entail. Below we posit some major questions regarding how quantum physics led to quantum electrodynamics. \begin{enumerate} \item There are many concepts of quantization: Planck-Einstein quantization of electromagnetic waves; de Broglie's quantization, associating relativistic waves with a particle with momentum and energy through the analogy between the transformation of relativistic waves and the transformation of energy-momentum; Schr\"{o}dinger's quantization, converting classical particle equations into wave equations using de Broglie's relation; Dirac's quantization of electromagnetic fields through Fourier expansion; Dirac's quantization of particles expressed as Schr\"{o}dinger wave equations through Fourier expansion; the Gordon-Klein quantization of Einstein's energy-momentum relation; etc. Yet there is no study of how they are related. \item De Broglie's quantization, which plays a key role in many of the quantizations listed above, is not properly understood. This quantization works only for relativistic waves that are invariant under the Lorentz transformation. As the momentum-energy of the de Broglie particle is related to the transformation of relativistic waves only through analogy, we cannot find a way to obtain a wave that is relativistic and represents the original particle.
As Schr\"{o}dinger's wave was created using this obscure de Broglie relation, its validity is questionable. This makes Dirac's quantization of Schr\"{o}dinger's wave questionable as well. \item When it comes to the Gordon-Klein equation, which is accepted as an alternative to the failed attempt at making Schr\"{o}dinger's wave mechanics relativistic, this is an attempt to quantize a relativistic relation in an unprecedented way. Does replacing classical variables with corresponding Hermitian operators make the classical theory quantum? \item It is important to ask these questions instead of experimentally trying to verify the predictions of these incoherent theories, where the core discussion is based only upon analogy and the wrong assumption that Schr\"{o}dinger's wave equations are relativistic. On top of it, as quantum theory is inherently probabilistic, its experimental verification is highly compromised. The verification is done as the statistical calculation of standard deviations. So, the claimed accuracy of the expensive experiments on particles is verified in the same way we evaluate the accuracy of the prediction of the life span of automobiles. \end{enumerate} Regarding (iii) above: the Gordon-Klein equation does not conserve probability, the conservation of which is a major requirement imposed by the usual interpretation of quantum mechanics. Quantum mechanics interprets the square of the modulus of a wave function's amplitudes as probabilities. For that reason, Schr\"{o}dinger's equation was constructed to make sure that the wave function remains normalized at every point in time. This unfortunately is not the case for the Klein-Gordon equation. It cannot therefore be seen as a valid replacement for a relativistic version of Schr\"{o}dinger's equation.
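This failure can already be seen on a single spatial mode. The sketch below (arbitrary units) compares a Schr\"{o}dinger plane-wave mode, whose modulus is constant in time, with a Klein-Gordon mode started from real initial data, whose squared modulus oscillates; this is an illustration only, not the full argument:

```python
import numpy as np

k, m, c, hbar = 1.0, 1.0, 1.0, 1.0   # arbitrary units
ts = np.linspace(0.0, 5.0, 200)

# Schroedinger mode psi = exp(ikx) f(t), with f(t) = exp(-i E t / hbar)
E = hbar**2 * k**2 / (2.0 * m)
f_schroed = np.exp(-1j * E * ts / hbar)

# Klein-Gordon mode with f(0) = 1, f'(0) = 0: f(t) = cos(Omega t),
# where Omega = c * sqrt(k^2 + (m c / hbar)^2)
Omega = c * np.sqrt(k**2 + (m * c / hbar) ** 2)
f_kg = np.cos(Omega * ts)

assert np.allclose(np.abs(f_schroed), 1.0)  # |f|^2 constant: norm conserved
assert np.ptp(np.abs(f_kg) ** 2) > 0.5      # |f|^2 oscillates: norm not conserved
```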
In order to conserve probability, a time evolution equation needs to satisfy the following condition with regard to a wave function \begin{equation*} \int \left\vert \psi (x,t)\right\vert ^{2}dx=1. \end{equation*} Furthermore, as the conservation must hold at any point in time, it has to be independent of the time evolution. This is to say that the Gordon-Klein equation must satisfy the following equation as well \begin{equation*} \frac{\partial }{\partial t}\int \left\vert \psi (x,t)\right\vert ^{2}dx=0. \end{equation*} Now consider the Gordon-Klein equation \begin{equation*} \frac{\hslash ^{2}}{c^{2}}\frac{\partial ^{2}}{\partial t^{2}}\psi (x,t)=(\hslash ^{2}\nabla ^{2}-m^{2}c^{2})\psi (x,t). \end{equation*} Since it involves the second derivative with respect to time, it is clear that the time derivative of the norm will in general not vanish. Hence, the expression will not produce the required value $0$, and therefore the Gordon-Klein equation clearly does not describe the probability wave that the Schr\"{o}dinger equation describes. The most important issue is that relativity theory as per Einstein is false and there is no point in trying to make classical theories relativistic. Classical theories such as the theory of electromagnetism have their own problems. Relativity theory is a wrong answer to the problems of classical electromagnetic theory. Considering the fact that the theory of relativity came from a wrong interpretation of the Michelson-Morley experiment, the entire quantum theory must be reevaluated. \section{Aether theory} \subsection{Gravitational aether} \textrm{The classical aether theory proposed by Descartes is yet another example of a continuum medium producing atoms (particles) through type lowering. Here a vortex, which is a substructure of the universal medium aether, is supposed to be the atom which will induce the inter-atomic forces. 
We do not know how far we can push this idea forward, as from the start this idea leads to a contradiction. Here is a brief discussion of the basic ideas of Descartes on aether theory:} \begin{enumerate} \item \emph{Proposition}\textrm{: } \emph{``No empty space can exist, therefore space must be filled with matter.''}\textrm{\ \quad \smallskip Descartes is saying that there is no such thing as a purely geometric space, e.g. the 3D space, then. As Newton made clear, no matter what we place in a geometric space, the space itself remains a geometric space. Otherwise we cannot even define a motion, which is, as Newton said, a function from time to space. Another way of nailing down the problem is that by saying ``space must be filled with matter'' Descartes is already assuming that space is a container that can be empty. Yet he is claiming that such a thing does not exist. } \item \emph{Proposition}\textrm{: } \emph{``Each part of this matter is inclined to move in straight paths, but because the parts are close to each other, their interaction makes every part perform a circular motion. The parts performing this circular motion are called vortices. They are often called `atoms'. Descartes also assumes that rough matter resists the circular movement more strongly than fine matter.''} \quad \textrm{It appears that just this claim requires a massive amount of physics.} \textrm{It requires a fully developed and articulate theory of fluids. It is clear that the theory of fluids should be something much more advanced than particle-based dynamics. In fact, we have a serious problem with the transition from particle dynamics to fluid dynamics. Indeed, it is becoming increasingly clear that fluid dynamics is a very different theory from particle dynamics. So, it appears that before we venture into aether theory we must have a solid understanding of what fluid mechanics is about. We certainly do not have a clear notion of fluid dynamics and its relation with particle dynamics yet. 
} \item \emph{Proposition}\textrm{: } \emph{``Due to centrifugal force, matter tends towards the outer edges of the vortex, which causes a condensation of this matter there. The rough matter cannot follow this movement due to its greater inertia---so, due to the pressure of the condensed outer matter, those parts will be pushed into the center of the vortex. This inward pressure is gravity.'' \ \quad }\textrm{There is no such thing as a centrifugal force; this is why it is called a fictitious force. This is a good example of how the violation of Galileo's principle of relativity occurs when we consider reference frames under acceleration, and it is why we do not allow reference frames under acceleration. The effect of the so-called centrifugal force appears only when we consider an object floating inside a container that is rotating about a centre of rotation. This body tries to stay where it is while the centripetal force pulls the container toward the centre. If a body is fixed to the body of the orbiting container, it will not feel any centrifugal effect. After all, this kind of situation is not theorizable, as classical particle dynamics does not allow us to consider things like an orbiting container that has an inside space. To be precise, every object must be a point object in classical dynamics. } \end{enumerate} \textrm{Building upon the ideas of Descartes, Huygens presented a more articulate vortex theory. The following is a short discussion of his work: } \begin{enumerate} \item \emph{Proposition}\textrm{: } \emph{``Huygens assumed that the free moving aether particles are pushed back at the outer borders of the vortex, causing a greater concentration of fine matter at the outer borders of the vortices. This causes the fine matter to press the rough matter into the center of the vortex.'' \quad\ }\textrm{It is not clear how this distribution of the fine matter (aether particles) will occur. This argument requires a full theory of particle-based fluid mechanics. 
It requires a very advanced theory to explicate this process. More fundamentally, due to the very concept of a fluid, a particle-based fluid is untenable. Particles and fluids cannot be unified: the former are discrete and the latter are continuous. As space is a continuum, no matter how densely we pack the particles, we still have empty spaces in between the packed particles. The only aether we can think of must be a continuum fluid. } \item \emph{Proposition}\textrm{: } \emph{``According to Huygens the centrifugal force is equal to the force that acts in the opposite direction of the centripetal force.''} \quad \quad \textrm{Again, this claim needs a fully developed fluid theory. Huygens' definitions of the centripetal and centrifugal forces are different from the standard Newtonian versions. Newton's version is simple and clear. There is no such thing as a centrifugal force; it is a misunderstanding of the fact that a free body inside a container will remain where it is despite the motion of the container under acceleration. This ``force'' arose when Newton's successors misunderstood Newton's theory and introduced the reference frame.} \item \emph{Proposition}\textrm{: } \emph{``Huygens also assumed that ``bodies'', whatever they may be, must consist mostly of ``empty space'' so that the aether can penetrate them.'' \ \quad }\textrm{Yet Huygens assumed that there is no such thing as empty space.}\emph{\ }\textrm{Moreover, there is no definition of what a body is.} \item \emph{Proposition}\textrm{: } \emph{``He further concluded that the aether moves much faster than the falling bodies.'' \ \quad \smallskip }\textrm{A more fundamental question is: what is causing the motion of the aether (aether particles), such as the fine matter and rough matter? The same question can be asked about the motion of bodies. 
} \item \emph{Proposition}\textrm{: } \emph{``His theory could not explicate Newton's law of gravity, the inverse square law. Huygens tried to deal with this problem by assuming that the speed of the aether is smaller at greater distances.'' \ \quad }\textrm{Again, the same problem as above: we do not know what the speed of the aether is until we learn what causes the motion of the aether. } \end{enumerate} \textrm{The overall judgment on Huygens' aether theory is that it failed to explicate the dynamics of the aether itself. The required theory appears to be something even more complex than what we know as fluid dynamics. Fluid dynamics is a derivative of Newton's dynamics that came with great compromises. Fluid dynamics is continuum dynamics, as a fluid is a continuum. The compromises made include, for example, that ``pressure'' is a highly questionable derivative of Newton's force as a vector. In dynamics, force is applicable only to a point mass, because a force is a pointed arrow. This concept was extended from a point to an area or to a volume, reversing the direction Newton took to make physics possible, which was to reduce a continuum body to a point body. This compromise, combined with Newton's mechanics, created fluid dynamics. Therefore it is difficult to imagine that a theory of aether can be framed without using Newtonian mechanics, as was the case with fluid dynamics. } \subsection{Electromagnetic aether} \textrm{We have discussed the difficulty in producing continuum dynamics (fluid dynamics) from the particle dynamics of Newton. Aether theory starts with a super-fluid structure called aether and then, from the aether, induces particles called atoms.} \textrm{This is yet another example of type lowering taking place in theoretical physics, as the fluid is treated as a continuum made from points. So, the trouble associated with type lowering manifests itself in any aether theory. 
} \textrm{\ Maxwell was compelled to reject his aether theory and accept the field formulation of electromagnetism due to Heaviside and Hertz as a shortcut to resolving the problem of nonlinearity. The problem here is that the theory of electromagnetic fields is not ontological, as the concept of a force field is not a reality. It is a counterfactual modality, as we discussed above. Moreover, it was not understood that force field theory violates the law of action-reaction and that it is a modal theory. Newton was aware of this and did not use the concept of a force field.} \textrm{ As we showed, the Lorentz transformation does not conserve the electromagnetic wave equations, and so, by proof by contradiction, we can conclude that it does not conserve all of Maxwell's axioms, contrary to the claim by Einstein that all of Maxwell's axioms are Lorentz-invariant. If all of Maxwell's axioms were relativistic, as Einstein attempted to prove, then the electromagnetic wave equations would also have to be relativistic. This is to say that \emph{as Maxwell's electromagnetic wave equations are not relativistic, the theory of the electromagnetic field as per Maxwell is not relativistic.} This is consistent with Einstein's failure to prove that Lorentz transformations conserve all of Maxwell's axioms. } \textrm{Now let us go back to the aether theory which Maxwell tried to build in order to interpret the axiomatic electromagnetic field theory. Maxwell was reluctant to accept Heaviside's and Hertz's axiomatic approach of compiling experimental lab results as the vector equations of electromagnetic fields. From our viewpoint, Maxwell was correct in this reluctance, as we understand that force fields violate the third law of Newton. The record shows that even after accepting the axiomatic force field approach in producing his axioms of electromagnetic field theory, Maxwell was still attempting to push forward with his aether theory. 
One of the reasons for this was that his theory of electromagnetic waves required a medium, as all waves in physics need a medium, while the axiomatic theory does not provide one. } \textrm{There are several factors that made it very difficult for this project of Maxwell's to succeed. In order to succeed, we have to consider at least the following issues: } \begin{enumerate} \item \textrm{ Electromagnetic force must respect the law of action-reaction. This is in conflict with electromagnetic force fields, which violate the law of action-reaction. Maxwell's field equations did not produce a solution here. This means that the right ontological theory of electromagnetism, if any, will not agree with the description of Maxwell's electromagnetic field axioms, which ignore the third law. } \item \textrm{Exactly the same argument applies regarding modality. As discussed above, force fields are not ontological reality; they are all counterfactual modalities. However, the desired ontological theory of electromagnetism cannot be a modality of any kind. } \item \textrm{All of the above is to question whether the desired aether theory, if any, would be a modal theory or not. The answer naturally is ``no''. A modal theory does not describe physical reality. The concept of force fields must be rejected from this point of view. There is no such thing as an electromagnetic wave, as there is no such ontological reality as a modal wave. The correct view of what we call an ``electromagnetic wave'' is the transmission at a distance of the vibration of an electromagnetic force to a location where there is a charge, with no transmission in between. This is to say that in reality there is no such thing as electromagnetic waves. } \end{enumerate} \textrm{All of this suggests that trying to find an aether model for Maxwell's electromagnetic field equations is futile. 
Instead, we should focus on developing the unjustly abandoned Gauss-Weber action-at-a-distance theory of electromagnetism \cite{Wes}, which does not use the problematic field equations. The reason why Newton's dynamics is a little less problematic is that it is not a force-field-based theory.} \textbf{Remark (11)} \textit{The electromagnetic force depends on the speed of the charge in both the Maxwell-Lorentz formalism (known as the Lorentz force) and the Gauss-Weber formalism. This makes electrodynamics inconsistent, as it violates the second law, which is a most important axiom of any dynamics. Therefore it is not quite clear how we can put the gravitational aether and the electromagnetic aether together. } \textrm{From a logical perspective, it is clear that an aether theory, if any, would be more complex than the theory that the given aether theory attempts to explicate. So, attempts to lay out an aether-theoretic foundation for a physical theory will tend to be viciously circular. And even if not, there is no obvious way to verify such a meta-theory theoretically or empirically. One may say that we can do so through an empirical verification of the theory for which the given aether theory is to lay the foundation. But then it is nothing but vicious circularity. } \section{Type lowering in mathematics} The problem of type lowering, which we discussed above in the context of theoretical physics, also appeared in mathematics, in more acute forms. We will discuss some of these cases here. \subsection{Scott's model of lambda calculus \cite{Sco}} Church developed a symbolic reductional calculus called the $\lambda $\emph{-calculus} that described the theory of applying a function of one variable to another such function. By defining natural numbers as a special collection of such symbolic functions, Church simulated the universal Turing machine, showing that his calculus has the same computational power as that of Turing machines. 
Since its invention, however, this reductional calculus needed a proof of consistency. Dana Scott presented an interpretation of this symbolic calculus by considering the set equation \begin{equation*} D=[D\rightarrow D] \end{equation*} where the right hand side represents the set of all functions from the set $D$ to itself. It is a well-known fact that for any set $D$, $[D\rightarrow D]$ is larger than $D$. So, Scott introduced a complete lattice structure and restricted the elements of $[D\rightarrow D]$ to order-continuous functions. In this way he managed to cut down the size of $[D\rightarrow D]$ and establish a complete order isomorphism between the left hand side and the right hand side of the above equation, presenting the ``first model of $\lambda $-calculus''. This success came with a price. We now identify a natural number with an element of $D$, which is infinitary. Therefore in Scott's calculus we cannot decide if two natural numbers are equal. In Scott's model there are many terms that semantically denote natural numbers but do not syntactically reduce to them, and we cannot find this out using the syntactic reduction of the calculus. Logicians call such natural numbers recursive natural numbers. \subsection{Universal set theory CFG} Consider the solution of the following set equation \begin{equation*} S=[S\rightarrow T] \end{equation*} where $T$ is the truth value set $\{true,false\}$. Unfortunately this equation has no solution, as the right hand side is again larger than the left hand side. Russell presented this problem as the famous ``set as one'' (left hand side of the equation) versus ``set as many'' (right hand side of the equation) paradox. This tells us that the claim that a set can be fully described by its characteristic function is not correct. This is yet another paradox of set theory. The method Scott developed to solve $D=[D\rightarrow D]$ gives us a solution as the collection of order-continuous functions. 
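The size-reduction effect of restricting $[D\rightarrow D]$ to order-respecting maps can already be seen on a toy finite poset (our own illustration, not Scott's actual construction, which needs infinite complete lattices): on the three-element chain $0\leq 1\leq 2$ there are $27$ self-maps but only $10$ monotone ones.

```python
from itertools import product

# Toy illustration on the 3-element chain 0 <= 1 <= 2: enumerate every
# self-map and count how many respect the order.  (Scott's construction
# uses order-*continuous* maps on infinite complete lattices; on a finite
# chain continuity coincides with monotonicity.)
D = (0, 1, 2)

def is_monotone(f):
    # f is a tuple; f[i] is the image of element i.
    return all(f[i] <= f[j] for i in D for j in D if i <= j)

all_maps = list(product(D, repeat=len(D)))            # every map D -> D
monotone_maps = [f for f in all_maps if is_monotone(f)]
counts = (len(all_maps), len(monotone_maps))          # (27, 10)
```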
But it is a well-known fact that sets as characteristic functions in mathematics are not ``order continuous'', though they are ``order monotonic''. So, we have to solve the equation over the collection of order-monotonic functions from $S$ to $T$. Apostoli and Kanda \cite{Apo} found a solution as a set of monotonic functions from $S$ to $T$. In this way the authors obtained the first consistent universal set theory, which has the set universe $S$. But this set theory, called CFG, has a drawback. CFG cannot identify two sets on the basis of the ``extensional identity'', which says that two sets are equal if and only if they are made of exactly the same member sets. The so-called axiom of extensionality fails. It is replaced by the axiom of indiscernibility, which says that two sets are equal if and only if they belong to exactly the same members of $S$. The loss of the extensional membership relation makes the theory unusable in everyday mathematics for working mathematicians. This is yet another price we pay for type lowering. \subsection{Type lowering in recursion theory \cite{Rog}} Recursion theory is a branch of mathematical logic, developed by G\"{o}del, in which we define computable partial functions of natural numbers as functional programs over natural numbers. Using Turing machines, G\"{o}del showed how to calculate a natural number which uniquely represents a functional program. This process is called the G\"{o}del numbering of partial recursive functions, and it certainly is a type lowering from the type of computable functions to natural numbers. Here each computable function is represented by infinitely many natural numbers, each of which represents a functional program that computes the computable function. This implies that there are infinitely many recursive programs for each computable function. However, on pain of contradiction, given two natural numbers, we cannot computationally decide whether the functions denoted by these two numbers are the same or not. 
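A toy version of this numbering (our own sketch, with program source texts standing in for Turing-machine codes) makes the point concrete: padding a program changes its number but not the function it computes, so each function receives infinitely many indices, while checking sample inputs can only give evidence of equality, never a decision procedure.

```python
# Toy "Goedel numbering" sketch (not the actual construction discussed in
# the text): encode a program's source text as an integer via its UTF-8 bytes.
def godel_number(src: str) -> int:
    return int.from_bytes(src.encode("utf-8"), "big")

src1 = "lambda n: n + 1"
src2 = "lambda n: n + 1  # padded with a harmless comment"

f1, f2 = eval(src1), eval(src2)

different_indices = godel_number(src1) != godel_number(src2)
same_on_samples = all(f1(n) == f2(n) for n in range(100))
# Two different numbers, one function -- and arbitrarily long padding
# yields infinitely many numbers for the same function.
```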
So, we lose the identity of computable functions. \section{Type lifting in mathematics and particle-based physics} Understanding all of these fundamental difficulties that the top-down approach creates, mathematicians took the bottom-up approach as a better methodology for building mathematical theories. A good example is the development of the theory of real numbers. It goes as follows: \begin{enumerate} \item Natural numbers: closed under the operations $+$ and $\times .$ \item Integers: closed under the operations $+,\times $ and $-.$ \item Rational numbers are precisely the fractions $n/m$ of integers where $m\neq 0:$ closed under $+,-,\times $ and $\div .$ They are precisely the repeating infinite decimals. \item Irrational numbers are the non-repeating infinite decimals. \item Real numbers are precisely the collection of all rational numbers and irrational numbers. Real numbers are also closed under $+,-,\times ,$ $\div .$ Moreover, they are closed under bounded limits. \end{enumerate} From this definition of the real numbers we can prove that they form a ``bounded complete ordered field''. This is because the algebra $(\mathcal{R},+,-,\times ,\div ,\leq )$ is an ordered field with the linear ordering $\leq $ and is closed under bounded limits. Here $\mathcal{R}$ is the set of all real numbers. The mentioned closure properties can readily be proved, except for the bounded limit, which requires a little more work. This is the simplest way of developing the theory of real numbers so that we can turn it into a calculus (mathematical analysis). One cannot rely upon an intuitive, simplistic understanding of real numbers to develop calculus by replacing the concept of limit with geometric intuition. This type lifting (or bottom-up) approach is based upon the same philosophy as atomism in physics, where the most basic physical entities are atoms and from atoms we build more complex physical entities. \section*{References}
\section{\label{sec:level1}Introduction} In recent times the study of quantum systems that interact with their surroundings has attracted a lot of attention in different fields, ranging from condensed matter \cite{weiss,Hu,Rammer,Kamenev}, quantum information \cite{Banerjee:2019b}, subatomic physics \cite{Banerjee:2014vga,Dixit:2019lsg,Naikoo:2019eec,Naikoo:2018amb,Naikoo:2018vug,Alok:2015iua}, quantum dissipative systems \cite{Chakrabarty:2018dov} and holography \cite{Chakrabarty:2019aeu, Jana:2020vyx} to cosmology \cite{Akhtar:2019qdn, Maldacena:2012xp, Kanno:2014lma, Kanno:2014ifa, Kanno:2014bma, Kanno:2015lja, Kanno:2016gas, Kanno:2016qcc, Kanno:2017dci, Albrecht:2018prr, Kanno:2018cuk, Kanno:2019gqw, Kanno:2020lpi, Choudhury:2018fpj, Choudhury:2017qyl, Choudhury:2017bou, Martin:2012pea, Martin:2015qta, Martin:2016tbd, Martin:2016nrr, Martin:2017zxs, Martin:2018zbe, Green:2020whw, Yu:2011eq, Benatti:2004ee, Tian:2014jha, Huang:2017yjt, Choudhury:2018rjl, Kiefer:2008ku, Choudhury:2018bcf, Breuer:2003Ox, Banerjee:2010An, Banerjee:2011QI, Banerjee:2019b}. Here our interest is in probing the curvature of the static patch of De Sitter space, as well as the Cosmological Constant, from the spectroscopic Lamb shift \cite{Zhou:2010nb, Tian:2016uwp, Bhattacherjee:2019eml}. The wave equation and Hawking radiation in De Sitter space-time have been studied in \cite{Polarski, Polarski:1989iu}. The system under consideration is an open quantum system of $N$ entangled spins which are weakly coupled to their environment, modelled by a massless scalar field minimally coupled to the static patch of De Sitter space-time. We are interested in studying how the Lamb shift of the entangled states of the system can be used to probe the curvature of the static patch of De Sitter space-time, as well as the Cosmological Constant, as the number of spins becomes very large in the thermodynamic limit, in realistic physical situations. 
One can design such a thought-experimental condensed-matter analogue-gravity \cite{Barcelo:2005fc, Visser:2001fe} setup, measuring the spectroscopic shift of an open quantum system in a quantum laboratory, to get a proper estimate of the curvature of the static patch of De Sitter space as well as the Cosmological Constant without recourse to any cosmological observation. This is the main highlight of this work, where our claim is that, without performing any cosmological observation, one can measure the value of the Cosmological Constant from quantum spectroscopy of open systems. We show from our analysis that the obtained value of the Cosmological Constant is perfectly consistent with the present-day observed central value, $\Lambda_{\rm observed}\sim 2.89\times 10^{-122}$ in Planckian units \cite{Aghanim:2018eyx}, and is completely independent of the number of entangled spins. Computational details, and some relevant material, are expounded in a number of Appendices. A detailed calculation of the $N$-point Wightman function is given in Appendix A and its Hilbert transformation in Appendix B. We have also added Appendices C and D, which show the detailed construction of the quantum mechanical states by providing explicit examples of $2$- and $3$-spin systems. Next, in Appendix E we present a generalised version of Appendices C and D with an arbitrary number $N$ of spins. We also discuss the thermodynamic large-$N$ limit and the flat-space limit of the spectroscopic shifts in Appendices F and G. Finally, in Appendix H, we provide a detailed derivation of the bath scalar field Hamiltonian in the static patch of De Sitter space. 
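As a quick order-of-magnitude sketch (our own arithmetic, using the relations $\alpha=\sqrt{3/\Lambda}$, $T_{\rm GH}=1/2\pi\alpha$ and $R=12/\alpha^2$ quoted later in the text), the observed value translates into a De Sitter radius and a Gibbons-Hawking temperature in Planck units as follows:

```python
import math

# Order-of-magnitude sketch in Planck units, using the quoted central
# value of the Cosmological Constant and the relations alpha = sqrt(3/Lambda),
# T_GH = 1/(2 pi alpha) and R = 12/alpha^2 from the text.
Lambda_obs = 2.89e-122                  # quoted observed central value
alpha = math.sqrt(3.0 / Lambda_obs)     # De Sitter radius, ~ 1.0e61
T_GH = 1.0 / (2.0 * math.pi * alpha)    # Gibbons-Hawking temperature, ~ 1.6e-62
R = 12.0 / alpha**2                     # Ricci scalar, equal to 4 * Lambda
```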
The open quantum setup can be described by the following Hamiltonian: \begin{equation} H_{\text{T}} = H_{\text{S}}\otimes {I}_{\text{2,B}} + {I}_{\text{2,S}}\otimes H_{\text{B}} + H_{\text{I}}, \end{equation} where $H_{\text{S}}$, $H_{\text{B}}$ and $H_{\text{I}}$ describe the Hamiltonians of the spin system, the bath and the interaction between them, respectively. Also, ${I}_{\text{2,S}}$ and ${I}_{\text{2,B}}$ are the identity operators for the system and bath, respectively. We choose our spin Hamiltonian in such a way that the individual Pauli matrices may be oriented arbitrarily in space. In the present context, the $N$-spin system Hamiltonian is described by: \begin{equation} \label{system} H_{S} = \frac{\omega}{2} \sum_{\delta=1}^{N}\sum^{3}_{i=1} n^\delta_i.\sigma^{\delta}_{i}, \end{equation} where the $n^\delta_i$ represent the unit vectors along any arbitrary $i(=1,2,3)$-th direction for $\delta=1,\cdots,N$. Also, the $\sigma^{\delta}_i$, $(i=1,2,3)$, are the three usual Pauli matrices for each particle, characterized by the particle number index $\delta$. The free rescaled scalar field, minimally coupled to the static De Sitter background, is considered as the bath and is described by the following Hamiltonian: \begin{widetext}\begin{eqnarray}\label{wqw} &&H_{ B}=\int^{\infty}_{0} dr~\int ^{\pi}_{0}d\theta~\int^{2\pi}_{0}d\phi~\left[\frac{\Pi^2_{\Phi}}{2}+\frac{r^2\sin^2\theta}{2}\left\{r^2~(\partial_{r}\Phi)^2+ \displaystyle \frac{1}{\displaystyle\left(1-\frac{r^2}{\alpha^2}\right)}\left((\partial_{\theta}\Phi)^2+\frac{1}{\sin^2\theta}(\partial_{\phi}\Phi)^2\right)\right\}\right].~~~~~~~ \end{eqnarray} \end{widetext} The details of this Hamiltonian are provided in Appendix~\ref{Appendix:H}. Here, $\Pi_{\Phi}$ represents the momentum canonically conjugate to the scalar field $\Phi(x)$ in the static De Sitter patch. 
As our choice of background classical geometry we have considered the static De Sitter patch, as our prime objective is to apply the present methodology to real-world cosmological observations. The static De Sitter metric (which we will define later) explicitly contains the Cosmological Constant term, which is one of the prime measurable quantities at late times (mostly at the present day) in cosmology. Using this analogue-gravity thought experiment performed with $N$ spins, our objective is to measure the value of the Cosmological Constant at the present day, indirectly, from the spectroscopic shift formula. The choice of De Sitter space as the background geometry comes from the assumption of identifying our universe with an exponentially expanding flat universe. A proof concerning the validity of this approximation is beyond the scope of this work. For this purpose we have only taken the observed value of the Cosmological Constant to check the consistency of our findings from this methodology. Not only the numerical value of the Cosmological Constant, but also the curvature of the static patch of De Sitter space can be further constrained using the present methodology. The interaction between the $N$-spin system and the thermal bath plays a crucial role in the dynamics of the open quantum system. For the model being considered, the interaction between the system of $N$ entangled spins and the bath is given by: \begin{eqnarray} H_{I}= \mu \sum_{\delta=1}^{N}\sum^{3}_{i=1} (n^\delta_i.\sigma^{\delta}_{i})\Phi(x^{\delta}),~~~~ \end{eqnarray} where the parameter $\mu$ represents the coupling between the system and the bath and is taken to be sufficiently small. Also, it is important to note that in the interaction Hamiltonian we have restricted ourselves to the quadratic contribution. Higher-order non-linear interactions are avoided for the sake of simplicity, but in a generalised case one can include such contributions in the present analysis. 
The normalized $N$-spin entangled states for the system Hamiltonian are given by: \begin{widetext} \begin{eqnarray} &&|{G}\rangle \propto \sum_{\delta,\eta=1,\delta<\eta}^{N} |{g_\delta}\rangle \otimes |{g_\eta}\rangle ,~~~ |{E}\rangle \propto \sum_{\delta,\eta=1,\delta<\eta}^{N}|{e_\delta}\rangle \otimes |{e_\eta}\rangle ,~~|{S}\rangle, |{A}\rangle \propto \sum_{\delta,\eta=1,\delta<\eta}^{N}\frac{1}{\sqrt{2}}\left( |{e_\delta}\rangle \otimes |{g_\eta}\rangle \pm |{g_\delta}\rangle \otimes |{e_\eta}\rangle\right)~,~~~~~ \end{eqnarray} \end{widetext} where $|{g_\delta}\rangle,|{e_\eta}\rangle~\forall~\delta,\eta=1,\cdots,N$ are the eigenvectors of the individual atoms corresponding to the ground (lower-energy) and excited (higher-energy) states. Here we also define the proportionality constant of the normalization factor as: \begin{eqnarray} {\cal N}_{\rm norm}=\frac{1}{\sqrt{{}^N C_2}}=\sqrt{\frac{2(N-2)!}{N!}}.\end{eqnarray} The normalization constant has been fixed by taking the inner products between elements of the direct product space, with the restriction that the inner product only acts between elements belonging to the same Hilbert space of the open quantum system under consideration. Some examples of the construction of these states for the $2$- and $3$-spin cases are provided in Appendix~\ref{Appendix:C}. At the starting point we assume separable initial conditions, i.e., the total density matrix $\rho_{T}$ at the initial time scale $\tau=\tau_0$ factorizes as $\rho_{T}(\tau_0)=\rho_{S}(\tau_0) \otimes \rho_{B}(\tau_0)$, where $\rho_{S}(\tau_0) $ and $\rho_{B}(\tau_0) $ are the system and bath density matrices at the initial time $\tau=\tau_0$, respectively. As the system evolves with time, it starts interacting with its surroundings, which we have treated as a thermal bath modelled by a massless scalar field placed in the static De Sitter background. 
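The two forms of the normalization constant quoted above can be cross-checked numerically (a trivial but reassuring sketch of our own):

```python
from math import comb, factorial, isclose, sqrt

# Check that 1/sqrt(C(N,2)) and sqrt(2 (N-2)!/N!) agree, as quoted for the
# normalization constant of the N-spin entangled states.
def n_norm_binomial(N: int) -> float:
    return 1.0 / sqrt(comb(N, 2))

def n_norm_factorial(N: int) -> float:
    return sqrt(2.0 * factorial(N - 2) / factorial(N))

agree = all(isclose(n_norm_binomial(N), n_norm_factorial(N), rel_tol=1e-12)
            for N in range(2, 30))
```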
Since we are interested in the dynamics of our system of interest (the subsystem) made of the $N$ spins, we consider its reduced density matrix by taking the partial trace over the thermal bath, i.e., $\rho_{ S}(\tau) = {\rm Tr}_{B} [\rho_{T}(\tau)]$. Though the joint evolution of the total system plus bath is unitary, the reduced dynamics of the system of interest is not. The non-unitary dissipative time evolution of the reduced density matrix of the subsystem in the weak coupling limit can be described by the GKSL (Gorini Kossakowski Sudarshan Lindblad) master equation \cite{Akhtar:2019qdn}, $\partial_{\tau} \rho_{S}(\tau)= -i[H_{ \rm eff},\rho_{ S}(\tau)] + \mathcal{L}[\rho_{S}(\tau)]$, where $\mathcal{L}[\rho_{S}(\tau)]$ is the Lindbladian operator which captures the effects of quantum dissipation and non-unitarity. The effective Hamiltonian for the present model is $H_{\text{eff}}=H_{\text{S}}+H_{\text{LS}}$, where $H_{\text{LS}}(\tau)$ is the Lamb shift Hamiltonian, given by: \begin{equation} H_{\text{LS}} = - \frac{i}{2}\sum_{\delta,\eta=1}^{N}\sum_{i,j=1}^{3}H^{(\delta \eta)}_{ij}(n^\delta_i.\sigma^{\delta}_{i})(n^\eta_j.\sigma^{\eta}_{j}). \end{equation} We consider the interaction between two spins at a time, which can be implemented in terms of the Pauli operators as $\sigma^{\delta}_{i}=\sigma_i \otimes I_{2} $ (for the first spin of the interacting pair), $\sigma^{\delta}_{i}=I_{2} \otimes \sigma_i$ (for the second spin of the interacting pair) and $\sigma^{\delta}_{i}=I_{2} \otimes I_{2}$ (for all other non-interacting spins). To clarify the notation, we provide a 3-spin entangled example here\footnote{For a simple three-spin system, considering pairwise entanglement, we have three possibilities: 1 and 2 are entangled (3 is neutral), 2 and 3 are entangled (1 is neutral), or 1 and 3 are entangled (2 is neutral). Consider the case when 1 and 3 are entangled and 2 is neutral. 
Here $\sigma^{1 }_{i}=\sigma_i \otimes I_{2} $ (the upper index denotes the spin number, $i$ (=1,2,3) denotes the direction cosines and $\sigma_{i}$ are the usual Pauli matrices), $\sigma^{2}_{i}=I_{2} \otimes I_{2}$ (for spin 2) and $\sigma^{3}_{i}=I_{2} \otimes \sigma_i$ (for spin 3).}. In the Lamb shift, the time-dependent coefficient matrix $H^{(\delta \eta)}_{ij}(\tau)$ can be obtained from the Hilbert transform of the $N$ spin Wightman function, which is computed in the static de Sitter patch, described by the following 4D infinitesimal line element: \begin{widetext} \begin{eqnarray} \label{metric} ds^{2}=\left(1-\frac{r^{2}}{\alpha^{2}}\right)dt^{2}-\frac{1}{\displaystyle \left(1-\frac{r^{2}}{\alpha^{2}}\right)}dr^{2}-r^{2}d\Omega_2~~~~~{\rm where}~~~~~~d\Omega_2=\left(d\theta^2+\sin^2\theta~ d\phi^2\right)~~{\rm with}~~\alpha=\sqrt{\frac{3}{\Lambda}}. \end{eqnarray} \end{widetext} where $\Lambda>0$ is the 4D Cosmological Constant in the static de Sitter patch. We use the Schwinger-Keldysh technique to determine the entries of the $N$ spin Wightman function \footnote{The effect of dS spacetime enters through the Wightman functions.}, which are basically two point functions in quantum field theory at finite temperature. Consequently, the diagonal entries (auto-correlations) of the $N$ spin Wightman function are calculated as: \begin{eqnarray} \label{auto} G^{\alpha \alpha}(x,x') &=& G^{\beta \beta}(x,x')=- \frac{1}{16\pi^{2}k^{2}\sinh^{2}f(\Delta\tau,k)},~~~~~~ \end{eqnarray} where we define $f(\Delta\tau,k)=\left(\Delta \tau/2k-i\epsilon\right)$ and $\epsilon$ is an infinitesimal contour deformation parameter.
Also the off-diagonal (cross-correlation) components of the $N$ spin Wightman function can be computed as: \begin{eqnarray} \label{cross} G^{\alpha\beta}(x,x') &=& G^{\beta \alpha}(x,x')\nonumber\\ &=&\frac{-(16\pi^{2}k^{2})^{-1}}{\left\{\sinh^2f(\Delta\tau,k)-\frac{r^{2}}{k^{2}}\sin^2\left(\frac{\Delta \theta}{2}\right)\right\}}.~~~~~~ \end{eqnarray} Here the parameter $k$ can be expressed as, \begin{eqnarray} k=\sqrt{g_{00}}\,\alpha=\sqrt{\alpha^2-r^2}=\sqrt{3/\Lambda-r^2}>0. \end{eqnarray} Further, the curvature of the static de Sitter patch can be expressed in terms of the Ricci scalar, given by $R=12/\alpha^2.$ This directly implies that one can probe the Cosmological Constant from the static de Sitter patch using the spectroscopic shift. The shifts for the $N$ identical entangled spins can be physically interpreted as the energy shift obtained for each individual spin immersed in a thermal bath, described by the temperature, \begin{eqnarray} T=\frac{1}{\beta}=\frac{1}{2\pi k}=\sqrt{T^2_{\rm GH}+T^2_{\rm Unruh}},\end{eqnarray} (with Planck's constant $\hbar=1$ and Boltzmann constant $k_{B}=1$) where the {\it Gibbons-Hawking} and {\it Unruh} temperatures are defined as \cite{Tian:2016uwp,Bhattacherjee:2019eml}: \begin{widetext}\begin{eqnarray} T_{\rm GH}=\frac{1}{2\pi \alpha},~T_{\rm Unruh}=\frac{a}{2\pi},~~{\rm with}~a=\frac{r}{\alpha^2}\left(1-\frac{r^2}{\alpha^2}\right)^{-1/2}.\end{eqnarray} \end{widetext} When the spins are localised at $r=0$, we have $a=0$, which in turn implies $T=T_{\rm GH}$. Here the temperature of the bath $T$ can also be interpreted as the equilibrium temperature which can be obtained by solving the GKSL master equation for the thermal density matrix in the large time limit. As the non-unitary system evolves in time it initially goes out of equilibrium, and if we wait long enough, the system is expected to reach thermal equilibrium.
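The quadrature relation between the two temperatures quoted above follows in one line from the definitions of $a$ and $k$:

```latex
T^{2}_{\rm GH}+T^{2}_{\rm Unruh}
=\frac{1}{4\pi^{2}\alpha^{2}}\left[1+\frac{r^{2}/\alpha^{2}}{1-r^{2}/\alpha^{2}}\right]
=\frac{1}{4\pi^{2}\alpha^{2}}\left(1-\frac{r^{2}}{\alpha^{2}}\right)^{-1}
=\frac{1}{4\pi^{2}\left(\alpha^{2}-r^{2}\right)}
=\frac{1}{4\pi^{2}k^{2}}=T^{2}.
```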
The $N$ dependence enters through the states, the matrix $H^{\delta\eta}_{ij}$ and the direction cosines of the alignment of each spin. The generic Lamb shifts are given by, $\delta E_{\Psi} = \langle \Psi| H_{LS} | \Psi \rangle,$ where $|\Psi\rangle$ is any possible entangled state. Here the spectral shifts for the $N$ spins are derived as: \begin{eqnarray} \frac{\delta E_{Y}^{N}}{2\Gamma_{1;{\cal DC}}^N}=\frac{\delta E_{S}^{N}}{\Gamma_{2;{\cal DC}}^N}=-\frac{\delta E_{A}^{N} }{\Gamma_{3;{\cal DC}}^N}=-{\cal F}(L,k,\omega_0) /{\cal N}^2_{\rm norm}, ~~~~ \end{eqnarray} where $Y$ represents the ground and the excited states, and $S$ and $A$ the symmetric and antisymmetric states, respectively. Here, $\Gamma^N_{i;{\cal DC}}~\forall~i=1,2,3$ represent the direction-cosine-dependent angular factors which appear because we have considered an arbitrary orientation of the $N$ identical spins. These angular factors become extremely complicated to write down for an arbitrary number $N$ of spins. Explicit expressions of the angular factors for the 2 and 3 spin cases are provided in Appendix~\ref{Appendix:D}. Because of this fact it is also expected that as we approach the large $N$ limit we get extremely complicated expressions. For all the spectral shifts we get an overall common factor of ${\cal N}^{-2}_{\rm norm}={}^NC_2=N!/(2(N-2)!)$, which originates from the expectation value of the Lamb shift Hamiltonian. Here we introduce a spectral function ${\cal F}(L,k,\omega_0)$, given by, \begin{eqnarray}\label{sdsd} {\cal F}(L,k,\omega_0)&=&{\cal E}(L,k)\cos\left(2\omega_0 k\sinh^{-1}\left(L/2k\right)\right),~~~ \end{eqnarray} where, we define: \begin{eqnarray} {\cal E}(L,k)=\mu^2/(8\pi L\sqrt{1+(L/2k)^2}). \end{eqnarray} In this context, $L$ represents the Euclidean distance between any entangled pair of spins, given by $L=2r\sin(\Delta \theta/2)$, where $\Delta \theta $ represents the angular separation, which we have assumed to be the same for all the spins.
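The expression for $L$ quoted above is simply the chord length between two spins sitting at radius $r$ with angular separation $\Delta\theta$:

```latex
L^{2}=2r^{2}\left(1-\cos\Delta\theta\right)
=4r^{2}\sin^{2}\!\left(\frac{\Delta\theta}{2}\right)
\quad\Longrightarrow\quad
L=2r\sin\!\left(\frac{\Delta\theta}{2}\right).
```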
With respect to the length scales $L$ and $k$, we have two asymptotic regimes, $L\gg k$ and $L\ll k$. In the $L\gg k$ limit, the effect of the curvature of the static patch of de Sitter space is dominant, and from the metric stated in eq.~(\ref{metric}), at the horizon $r=\alpha$ we have $k=0$. As a result, at the horizon the limit $L\gg k$ reduces to $L\gg 0$, which implies that the effect of the curvature of the static patch of de Sitter space can be probed exactly at the horizon of the metric stated in eq.~(\ref{metric}). This computation can be similarly performed for a near-horizon region where one can take $r=\alpha-\Delta$. Therefore, for a near-horizon region one can write, $k=\sqrt{\alpha^2-(\alpha-\Delta)^2}=\sqrt{(2\alpha-\Delta)\Delta}$. In the near-horizon case, we can write $L\gg k=\sqrt{(2\alpha-\Delta)\Delta}$ and this again implies that the effect of the curvature of the static patch of de Sitter space can be probed in the near-horizon region as well. In the other limit $L\ll k$, the curvature of the static patch of de Sitter space is not distinguishable and one can treat the space-time as a flat one, which is described by the following metric: \begin{equation} \label{flatmetric} ds^2=dt^2-(dr^2+r^2 d\Omega^2). \end{equation} So, in this region, where the space-time geometry is described in terms of a flat space-time metric, the notion of a horizon does not exist. Hence, at the horizon $r=\alpha$, where $k=0$ for the metric stated in eq.~(\ref{metric}), the limit $L\ll k$ is not valid, as we cannot make any sense of $L\ll 0$ in our computation. The behaviour of the spectral function in these asymptotic limits can be seen to be: \begin{widetext} \begingroup \large \begin{eqnarray} {\cal F}(L,k,\omega_0)= \left\{ \begin{array}{lr} \displaystyle \frac{\mu^2 k}{4\pi L^2}\cos\left(2\omega_0 k\ln\left(L/2k\right)\right), ~~~~~~~~~~~~~~~~~~~~~~& \text{$L\gg k$}\\ \\ \displaystyle \frac{\mu^2}{8\pi L}\cos\left(\omega_0 L\right).
~~~~~~~~~~~~~~~~~~~~~~~~~& \text{$L\ll k$} \end{array} \right.~~~~~ \end{eqnarray} \endgroup \end{widetext} For a realistic situation we take the large $N$ limit, using the {\it Stirling-Gosper} approximation, as a result of which the normalization factor can be written as: \begin{widetext} \begin{eqnarray} &&{\cal N}_{\rm norm}~ \underrightarrow{\rm Large~N}~\widehat{{\cal N}_{\rm norm}} \approx\sqrt{2}\left(1-\frac{2}{\left(N+\frac{1}{6}\right)}\right)^{1/4}\left(\frac{N}{e}\right)^{-\displaystyle\frac{N}{2}}\left(\frac{N-2}{e}\right)^{N/2-1}\sqrt{\frac{\displaystyle1-\frac{2}{\left(N+\frac{1}{12}\right)}}{\displaystyle\left(1-\frac{2}{N}\right)}}.~~~~ \end{eqnarray} \end{widetext} Here we use: \begin{eqnarray} N!\sim \sqrt{\left(2N+\frac{1}{3}\right)\pi }\left(\frac{N}{e}\right)^{N}\left(1+\frac{1}{12N}\right).\end{eqnarray} In general, when dealing with a large number of degrees of freedom, instead of taking the direct $N\rightarrow \infty$ limit in the combinatorial formula appearing in ${\cal N}_{\rm norm}$, Stirling's approximation is very useful for correctly estimating the factorials. This approximation allows us to evaluate $N!$ in the large $N$ limit. If we evaluate ${\cal N}_{\rm norm}$ in the large $N$ limit using Stirling's approximation, we obtain a mathematically consistent result showing that ${\cal N}_{\rm norm}$ remains non-zero for any large but finite $N$, which cannot be seen by naively taking $N\rightarrow \infty$ in the formula for the normalization factor ${\cal N}_{\rm norm}$. Such reasoning is frequently used in the statistical description of QFT, where the theory is studied as an $O(1/N)$ perturbation theory, which helps in understanding its behaviour not only at $N\rightarrow \infty$, but also in the intermediate regime where the weak coupling description holds.
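As a cross-check on the {\it Stirling-Gosper} form, the exact combinatorial expression already fixes the leading large $N$ behaviour:

```latex
{\cal N}_{\rm norm}=\sqrt{\frac{2(N-2)!}{N!}}=\sqrt{\frac{2}{N(N-1)}}
~\xrightarrow{~N\gg 1~}~
\frac{\sqrt{2}}{N}\left(1+\frac{1}{2N}+\cdots\right),
```

so $\widehat{{\cal N}_{\rm norm}}$ must reproduce the $\sqrt{2}/N$ falloff at leading order, while the overall factor ${\cal N}^{-2}_{\rm norm}={}^{N}C_{2}=N(N-1)/2$ in the shifts grows quadratically with the number of spins.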
Actually, within the framework of QFT, strong coupling behaviour is very difficult to study; hence a usual approach consists of translating the original theory into the weak coupling regime and solving it by taking into account $O(1/N)$ perturbation theory. The good part of this approximation technique is that it helps to understand the intermediate weak coupling behaviour in terms of Feynman amplitudes, and at the perturbative level those diagrams are computable and exactly solvable. To demonstrate the power of such techniques we cite an example, Chern-Simons matter theory, where the general approach is to solve the theory in $O(1/N)$ perturbation theory to see its behaviour in the weak coupling regime \cite{Choudhury:2018iwf,Choudhury:2017tax,Jain:2013gza,Giombi:2011kc}. More discussion on this large $N$ approximation is given in Appendix~\ref{Appendix:F}. Taking this into account, the shifts can be approximately derived in the large $N$ limit as: \begin{eqnarray} \frac{\widehat{\delta E_{Y}^{N}}}{2\Gamma_{1;{\cal DC}}^N}=\frac{\widehat{\delta E_{S}^{N}}}{\Gamma_{2;{\cal DC}}^N}=-\frac{\widehat{\delta E_{A}^{N}} }{\Gamma_{3;{\cal DC}}^N}=-{\cal F}(L,k,\omega_0) /\widehat{{\cal N}_{\rm norm}}^2. ~~~~~~~~ \end{eqnarray} \begin{figure*}[htb] \includegraphics[width=17cm,height=8.7cm]{N.pdf} \caption{Behaviour of the spectroscopic shifts with the number of entangled spins. Here we fix $\mu=0.1$, $L=10$ and $\omega_0=1$ for the given value of the curvature $R=1.714$. } \label{fig:1} \end{figure*} \begin{figure*}[htb] \includegraphics[width=17cm,height=8.7cm]{N5.pdf} \caption{Behaviour of the spectroscopic shifts with the Cosmological Constant for a small number of entangled spins (N=5). Here we fix $\mu=0.1$, $L=10$ and $\omega_0=1$.
} \label{fig:2} \end{figure*} \begin{figure*}[htb] \includegraphics[width=17cm, height=8.7cm]{N50000.pdf} \caption{Behaviour of the spectroscopic shifts with the Cosmological Constant for a large number of entangled spins (N=50000). Here we fix $\mu=0.1$, $L=10$ and $\omega_0=1$.} \label{fig:3} \end{figure*} \begin{figure*}[htb] \includegraphics[width=17cm,height=8.7cm]{Small.pdf} \caption{Behaviour of the spectroscopic shifts with the Euclidean distance for a small number of entangled spins (N=5). Here we fix $\mu=0.1$ and $\omega_0=1$ for the given value of the curvature $R=1.714$.} \label{fig:4} \end{figure*} \begin{figure*}[htb] \includegraphics[width=17cm, height=8.7cm]{Large.pdf} \caption{Behaviour of the spectroscopic shifts with the Euclidean distance for a large number of entangled spins (N=50000). Here we fix $\mu=0.1$, $\omega_0=1$ for the given value of the curvature $R=1.714$.} \label{fig:5} \end{figure*} In the large $N$ limit, the behaviour of ${\cal F}(L,k,\omega_0)$ remains unchanged, as the Euclidean distance $L$, the curvature-dependent length scale $k$ and the frequency $\omega_0$ of the $N$ identical spins are not controlled by $N$. Also, for large $N$ the rescaled normalization factor $N{\cal N}_{\rm norm}$ asymptotically saturates to $\sqrt{2}\left(1+1/(2N)\right)$. In fig.~(\ref{fig:1}), the behaviour of the shifts with the number of entangled spins is depicted. From the plot it is clear that the present prescription does not hold for $N=1$. For $N=2$ the shifts vary rapidly and reach a peak value. As $N$ increases the shifts gradually decrease and for large $N$ saturate to a constant value. Such variations in $N$, however, do not affect the prediction of the value of the Cosmological Constant from the spectroscopic studies. This crucial issue is explicitly discussed below.
Here it is important to note that the scaling in these plots is different because of the presence of ${\cal F}(L,k,\omega_0)$, which we have fixed by fixing $L$, $k$ and $\omega_0$. From this plot one can study the $N$ dependent behaviour of the shifts. In fig.~(\ref{fig:2}) and fig.~(\ref{fig:3}), the behaviour of the shifts with respect to the Cosmological Constant is depicted, for a given small and large $N$, respectively. From the behaviour of both plots, it is quite clear that the nature of the spectroscopic shifts, when studied with respect to variation of the Cosmological Constant, is independent of the number of entangled pairs of spins present in the system. The insets of fig.~(\ref{fig:2}) and fig.~(\ref{fig:3}) suggest that even on probing very small, fine-tuned values of the Cosmological Constant, the behaviour of the spectroscopic shifts is identical irrespective of the number of entangled pairs of spins over a certain range of the Cosmological Constant. Thus we see that on probing the value of the Cosmological Constant, which is nowadays accepted to be of order ${\cal O}(10^{-122})$, we get a finite value of the spectroscopic shifts out of the present analysis. One can further say that, instead of predicting the observationally consistent value of the Cosmological Constant, our analysis is able to predict a tiny window within which the observed value will lie. From fig.~(\ref{fig:4}) and fig.~(\ref{fig:5}) we again observe that the behaviour of the spectroscopic shifts is independent of the number of entangled pairs of spins $N$ (small or large), for the Cosmological Constant fixed at the observed value. It is clearly observed from the plots that the shifts fluctuate with large amplitude for very small values of $L$ and, as we increase the value of $L$, decay very fast and saturate to negligibly small values at asymptotically large $L$.
One might wonder about the utility of performing the analysis with a small number of entangled spins, when cosmological considerations generally involve a large number of degrees of freedom. Though we are mainly interested in working in the thermodynamic limit, which can be achieved through a large number of degrees of freedom, the analysis with a small number of degrees of freedom brings out the independence of the obtained result from the number of entangled spins. ~\footnote{We thank the referee for providing useful suggestions on this crucial issue.} Two natural length scales emerge in the problem: one from the system, namely $L$, the Euclidean distance between two neighbouring spins, and another from the bath, namely $k$, which is related to the curvature and the Cosmological Constant. An interplay between these two scales leads to rich dynamical consequences. For $L \ll k$, one can find an inertial frame where the laws of Minkowski space-time are valid and the present shifts reduce to the flat space limit result. A more detailed discussion on this issue is given in Appendix~\ref{Appendix:G}. For $L \gg k$, the curvature of the static patch of de Sitter space-time dominates and plays a non-trivial role in the spectral shifts. Here, the spectral shifts vary as $L^{-2}$ and depend explicitly on $k$. These are related to the Cosmological Constant $\Lambda$ and can be further linked to the equilibrium temperature of the bath. For this reason we will focus on the distances $L\gg k$ to have a non-trivial effect. For $L \ll k$, the spectral shifts vary as $L^{-1}$ and are independent of $k$ or $\Lambda$, so the shifts are essentially the same as those obtained in the Minkowski case. The presence of $k$ in the shifts for $L\gg k$ reflects the presence of $\Lambda$ in the de Sitter static patch, which is, of course, absent in the other limit, i.e., $L\ll k$.
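The two scalings quoted above can be read off directly from the envelope ${\cal E}(L,k)$ of the spectral function defined in eq.~(\ref{sdsd}):

```latex
{\cal E}(L,k)=\frac{\mu^{2}}{8\pi L\sqrt{1+(L/2k)^{2}}}
\approx
\begin{cases}
\dfrac{\mu^{2}k}{4\pi L^{2}}, & L\gg k\quad \left(\sqrt{1+(L/2k)^{2}}\approx L/2k\right),\\[2ex]
\dfrac{\mu^{2}}{8\pi L}, & L\ll k\quad \left(\sqrt{1+(L/2k)^{2}}\approx 1\right),
\end{cases}
```

while the phase $2\omega_{0}k\sinh^{-1}(L/2k)$ grows only logarithmically in $L/k$ for $L\gg k$ and reduces to $\omega_{0}L$ for $L\ll k$, reproducing the two asymptotic branches of ${\cal F}(L,k,\omega_0)$ given earlier.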
We have found $\Lambda \sim {\cal O}(10^{-122})$ in Planckian units; this corresponds to almost constant shifts, which is consistent with the observed value, $\Lambda_{\rm observed}\sim 2.89\times 10^{-122}$ in Planckian units \cite{Aghanim:2018eyx}. On the other hand, a Cosmological Constant in the region $\Lambda\gtrsim 0.05$ is not allowed, as it gives an initial oscillation with a very small but fast decaying amplitude of the shifts. After crossing this region all the shifts approach zero asymptotically, from which we will not get any information about $\Lambda$. Hence, the observationally relevant feature will come from very small $\Lambda$, where all the shifts vary very slowly in the $L\gg k$ case. Additionally, using the present analysis one can further constrain the curvature of the static patch to a very tiny value, $R\sim {\cal O}(10^{-122})$, corresponding to $\Lambda \sim {\cal O}(10^{-122})$. Finally, our theoretical analysis predicts a range of values of the Cosmological Constant which does not depend on the number of entangled spins and is also consistent with the observed bound on the Cosmological Constant obtained from other observational probes. From this analysis one can comment on a range in which the value of the observable can lie, an issue of obvious interest. Here, we observe that the Cosmological Constant predicted from the Lamb shift spectroscopy can have a value between ${\cal O}(10^{-130})$ and ${\cal O}(10^{-10})$, which is consistent with the observed central value, ${\cal O}(10^{-122})$. Hence, we can say that our theoretical analysis predicts the Cosmological Constant within a certain window. Generally, if an analysis using CMB data is carried out, predicting a particular value of the Cosmological Constant is possible, along with the cosmic variance from the CMB, but this is difficult to achieve from a theoretical calculation. However, Fisher information techniques could be useful in this regard \cite{Choudhury:2020dpf}.
The analogue gravity thought-experiment set-up discussed here helps us probe such an important, tiny, fine-tuned number from a theoretical perspective. In conclusion, we have studied an indirect detection mechanism for the observationally relevant Cosmological Constant from the shifts obtained in a realistic model of an open system consisting of $N$ entangled spins. For this purpose, we have utilized the superposition principle along with an equal Euclidean distance between all the spins. In this work we found that: (a) the shifts are not sensitive to the number of spins $N$; (b) a correct prediction of a range of the observationally consistent Cosmological Constant \cite{Aghanim:2018eyx} can be made in the region where the Euclidean distance between any pair of the spins is large compared to the length scale $k$ (i.e., $L\gg k$), irrespective of the number of entangled spins; and (c) flat space effects are dominant in the region where the Euclidean distance between any pair of the spins is small compared to the length scale $k$ (i.e., $L\ll k$). {\bf Acknowledgement:}~SC would like to acknowledge the Junior Scientist position at the Max Planck Institute for Gravitational Physics, Potsdam, and the J. C. Bose Visiting Scientist position at NISER, Bhubaneswar. SP acknowledges the J. C. Bose National Fellowship for support of his research. SC, NG and RND would like to thank NISER Bhubaneswar, IISER Mohali and IIT Bombay, respectively, for providing fellowships. Last but not least, we would like to acknowledge our debt to the people of various parts of the world for their generous and steady support of research in the natural sciences. Finally, we would like to thank the referees and the editors for many useful suggestions which greatly improved the manuscript. {\bf Important note:} A detailed supplementary material is added just after the references to clarify all the background material related to the present research problem.
A few additional plots and results are also discussed in this supplementary material to strengthen our study.
\section{Introduction and Motivations} Due to their many desirable properties such as sparse multiscale representations and fast transforms, orthogonal wavelets have been employed in many applications such as signal/image processing and numerical algorithms (\cite{daubook}). As a generalization of an orthogonal wavelet, a tight framelet (also called a tight wavelet frame) preserves almost all the desirable properties of an orthogonal wavelet and offers many extra new features such as directionality and redundant representations in applications (e.g., \cite{chs02,dhrs03,ds10,hz14,shen10} and many references therein). Before explaining the motivations of this paper, let us recall the definition of tight framelets. For a function $f$ defined on the real line $\R$, we shall adopt the following notation: \[ f_{\gl;k}:=|\gl|^{1/2}f(\gl\cdot-k),\qquad \gl \in \R\bs\{0\}, k\in \R. \] For square integrable functions $\eta,\psi^1,\ldots,\psi^s\in \Lp{2}$, we say that $\{\eta;\psi^1,\ldots,\psi^s\}$ is \emph{a tight framelet} in $\Lp{2}$ if every function $f\in \Lp{2}$ has the following multiscale representation: \be \label{tf} f=\sum_{k\in \Z} \la f, \eta(\cdot-k)\ra \eta(\cdot-k)+ \sum_{j=0}^\infty \sum_{\ell=1}^s \sum_{k\in \Z} \la f, \psi^\ell_{2^j;k}\ra \psi^\ell_{2^j;k} \ee with the series converging unconditionally in $\Lp{2}$. Moreover, if $\{\eta;\psi^1,\ldots,\psi^s\}$ is a tight framelet in $\Lp{2}$, then $\{\psi^1,\ldots,\psi^s\}$ is \emph{a homogeneous tight framelet} in $\Lp{2}$ (e.g., see \cite[Proposition~4]{han12} and \cite{han97,rs97}), i.e., \be \label{htf} f= \sum_{j\in \Z} \sum_{\ell=1}^s \sum_{k\in \Z} \la f, \psi^\ell_{2^j;k}\ra \psi^\ell_{2^j;k},\qquad \forall\, f\in \Lp{2} \ee with the series converging unconditionally in $\Lp{2}$. By $\lp{0}$ we denote the space of all finitely supported sequences on $\Z$.
In this paper we are interested in compactly supported generating framelet functions $\psi^1,\ldots,\psi^s$, which are derived from a compactly supported \emph{refinable function} $\phi$ satisfying \be \label{reffunc} \phi=2\sum_{k\in \Z} a(k) \phi(2\cdot-k) \ee for some finitely supported sequence/filter $a\in \lp{0}$. For a filter $a=\{a(k)\}_{k\in \Z} \in \lp{0}$, we define its associated Laurent polynomial to be $\pa(z):= \sum_{k\in \Z} a(k)z^k$ for $z\in \C\bs\{0\}$. Suppose that a filter $a\in \lp{0}$ satisfies $\sum_{k\in \Z} a(k)=1$, i.e., $\pa(1)=1$. Using the Fourier transform, we obtain a refinable function/distribution $\phi$ through \be \label{phi} \wh{\phi}(\xi):=\prod_{j=1}^\infty \pa(e^{-i 2^{-j}\xi}),\qquad \xi\in \R, \ee where the Fourier transform used in this paper is defined to be $\wh{f}(\xi):=\int_{\R} f(x) e^{-ix\xi} dx$ for $f\in \Lp{1}$ and can be naturally extended to square integrable functions and tempered distributions. It is trivial to check that $\wh{\phi}(2\xi)=\pa(e^{-i\xi})\wh{\phi}(\xi)$, which is equivalent to \eqref{reffunc}. Suppose that the refinable function $\phi$ associated with low-pass filter $a$ belongs to $\Lp{2}$. A general procedure called oblique extension principle (OEP) has been introduced in \cite{dhrs03} and independently in \cite{chs02} for constructing compactly supported tight framelets from the refinable function $\phi$. For $\theta,b_1,\ldots,b_s\in \lp{0}$, we define \be \label{eta:psi} \eta:=\sum_{k\in \Z} \theta(k) \phi(\cdot-k) \quad \mbox{and}\quad \psi^\ell:=2\sum_{k\in \Z} b_\ell(k)\phi(2\cdot-k),\qquad \ell=1,\ldots,s. \ee Since $\phi\in \Lp{2}$ and all filters are finitely supported, we have $\eta,\psi^1,\ldots,\psi^s\in \Lp{2}$. 
Then $\{\eta;\psi^1,\ldots,\psi^s\}$ is a tight framelet in $\Lp{2}$ (e.g., see \cite[Theorem~6.4.2]{hanbook} and \cite{chs02,dhrs03,dh04,han03,han12}) if and only if $\pTh(1)=1$ and $\{a;b_1,\ldots,b_s\}_\Theta$ is \emph{an (OEP-based) tight framelet filter bank} satisfying \begin{align} &\pTh(z^2) \pa(z) \pa^\star(z)+\pb_1(z) \pb_1^\star(z)+\cdots+\pb_s(z)\pb_s^\star(z)=\pTh(z), \qquad z\in \C\bs\{0\}, \label{tffb:1}\\ &\pTh(z^2) \pa(z) \pa^\star(-z)+\pb_1(z) \pb_1^\star(-z)+\cdots+\pb_s(z)\pb_s^\star(-z)=0, \qquad z\in \C\bs\{0\},\label{tffb:0} \end{align} where $\pTh(z):=\pth(z)\pth^\star(z)$. Here we define $\pu^\star(z):=\sum_{k\in \Z} \ol{u(k)}^\tp z^{-k}$ for a finitely supported (matrix-valued) sequence $u:=\{u(k)\}_{k\in \Z}: \Z\rightarrow \C^{m\times n}$. Notice that $ \pu^\star(z) = [\pu(z)]^\star := \overline{\pu(z)}^\tp $ for all $ z\in \T $. Therefore, the task of constructing a tight framelet is reduced to constructing a tight framelet filter bank. In fact, it is known in \cite[Theorem~4.5.4]{hanbook} that every tight framelet $\{\eta;\psi^1,\ldots,\psi^s\}$ in $\Lp{2}$ must come from a refinable function $\phi$ through the refinable structure in \eqref{eta:psi}. One-dimensional tight framelets and tight framelet filter banks have been extensively investigated and constructed in the literature; to mention only a few, see \cite{ch00,chs02,daubook,dhrs03,DonShe:2007Pseudo-splines,han03,han15,hanbook,HanMo:2005Symmetric,jiang03,js15,sel01} and references therein. One of the most important features of wavelets is the sparse multiscale representations in \eqref{tf} and \eqref{htf}. The sparsity of the representations in \eqref{tf} and \eqref{htf} comes from the vanishing moments of the framelet/wavelet generators $\psi^1,\ldots,\psi^s$ in \eqref{eta:psi}, e.g., see \cite{daubook}. For a compactly supported function $\psi\in \Lp{2}$, we say that $\psi$ has \emph{$m$ vanishing moments} if $\int_{\R} x^j \psi(x) dx=0$ for all $j=0,\ldots, m-1$.
If in addition $\psi=2\sum_{k\in \Z} b(k)\phi(2\cdot-k)$ with $b\in \lp{0}$ and $\wh{\phi}(0)\ne 0$, then one can easily deduce that $\psi$ has $m$ vanishing moments if and only if the filter $b$ has $m$ vanishing moments, i.e., $\sum_{k\in \Z} k^j b(k)=0$ for all $j=0,\ldots,m-1$. We define $\vmo(b):=m$ with $m$ being the largest such integer. For convenience, we also define $\vmo(\pb(z)):=\vmo(b)$. The notion of vanishing moments is closely related to sum rules. For a filter $a\in \lp{0}$, we say that $a$ has \emph{$n$ sum rules} (\cite{daubook}) if \be \label{sr} \sum_{k\in \Z} a(2k)(2k)^j=\sum_{k\in \Z} a(1+2k) (1+2k)^j, \qquad \forall\, j=0,\ldots,n-1. \ee Note that $a$ has $n$ sum rules if and only if $\pa(z)=(1+z)^n \pu(z)$ for some Laurent polynomial $\pu$. We define $\sr(\pa(z)):=\sr(a):=n$ with $n$ being the largest such integer. If $\{a;b_1,\ldots,b_s\}_\Theta$ is a tight framelet filter bank with $ \pTh(1)\pa(1)\neq 0 $, then one can easily deduce from \eqref{tffb:1} and \eqref{tffb:0} (e.g., see \cite[Proposition 3.3.1]{hanbook} and \cite{chs02,dhrs03,han15}) that \be \label{tf:vm:sr} \min(\vmo(b_1),\ldots,\vmo(b_s))=\min(\sr(a), \tfrac{1}{2}\vmo(\pTh(z)-\pTh(z^2)\pa(z)\pa^\star(z))). \ee For a given low-pass filter $a$, the role of the filter $\Theta$ is to increase the vanishing moments of $\pTh(z)-\pTh(z^2)\pa(z)\pa^\star(z)$ so that all the high-pass filters $b_1,\ldots,b_s$ have high orders of vanishing moments. 
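All of the notions introduced so far can be checked on the classical Haar example (restated here in the present notation, with $\pTh(z)=1$): take $a(0)=a(1)=\frac{1}{2}$ and $b_1(0)=\frac{1}{2}$, $b_1(1)=-\frac{1}{2}$, so that $\pa(z)=\frac{1+z}{2}$ and $\pb_1(z)=\frac{1-z}{2}$. The infinite product in \eqref{phi} telescopes to the Fourier transform of $\phi=\chi_{[0,1)}$, the identities \eqref{tffb:1} and \eqref{tffb:0} hold with $s=1$, and the vanishing moments match the sum rules:

```latex
\wh{\phi}(\xi)=\prod_{j=1}^{\infty}\frac{1+e^{-i2^{-j}\xi}}{2}
=\lim_{J\to\infty}\frac{1-e^{-i\xi}}{2^{J}\left(1-e^{-i2^{-J}\xi}\right)}
=\frac{1-e^{-i\xi}}{i\xi},

\pa(z)\pa^\star(z)+\pb_1(z)\pb_1^\star(z)
=\frac{(2+z+z^{-1})+(2-z-z^{-1})}{4}=1,
\qquad
\pa(z)\pa^\star(-z)+\pb_1(z)\pb_1^\star(-z)
=\frac{(z-z^{-1})+(z^{-1}-z)}{4}=0,

\vmo(b_1)=1=\sr(a),\qquad
\pTh(z)-\pTh(z^2)\pa(z)\pa^\star(z)=\tfrac{1}{4}(1-z)(1-z^{-1}).
```

The last Laurent polynomial has a double zero at $z=1$, i.e., two vanishing moments, so both sides of \eqref{tf:vm:sr} equal $1$.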
Note that the equations in \eqref{tffb:1} and \eqref{tffb:0} for a tight framelet filter bank $\{a;b_1,\ldots,b_s\}_\Theta$ can be equivalently expressed in the matrix form: \be \label{tffb} \left[\begin{matrix} \pb_1(z) &\cdots &\pb_s(z)\\ \pb_1(-z) &\cdots &\pb_s(-z)\end{matrix}\right] \left[\begin{matrix} \pb_1(z) &\cdots &\pb_s(z)\\ \pb_1(-z) &\cdots &\pb_s(-z)\end{matrix}\right]^\star =\cM_{\pa,\pTh}(z) \ee with \be \label{cM} \cM_{\pa,\pTh}(z):= \left[ \begin{matrix} \pTh(z)-\pTh(z^2) \pa(z)\pa^\star(z) &-\pTh(z^2)\pa(z)\pa^\star(-z)\\ -\pTh(z^2) \pa(-z)\pa^\star(z) &\pTh(-z)-\pTh(z^2)\pa(-z)\pa^\star(-z)\end{matrix} \right]. \ee Recall that an $ n\times n $ Hermitian matrix $ A $ is called \emph{positive semidefinite}, denoted by $A\ge 0$, if and only if $ x^\star A x \geqslant 0 $ for all $ x\in \C^n $. Obviously, \eqref{tffb} implies $\cM_{\pa,\pTh}(z)\ge 0$ for all $z\in \T:=\{\zeta \in \C \setsp |\zeta|=1\}$, which is known (see \cite{chs02,dhrs03}, \cite[Lemma~6]{han15} and \cite[Lemma~1.4.5]{hanbook}) to be equivalent to that $\pTh(z)\ge 0$ for all $ z\in \T $ and \be \label{det:cM} \det(\cM_{\pa,\pTh}(z))= \pTh(z)\pTh(-z)-\pTh(z^2)[\pTh(-z)\pa(z)\pa^\star(z)+ \pTh(z)\pa(-z)\pa^\star(-z)]\ge 0,\qquad \forall\, z\in \T. \ee Consequently, by the Fej\'er-Riesz lemma, there exists a Laurent polynomial $\pth(z)$ such that $\pth(z)\pth^\star(z)=\pTh(z)$ so that we can define the function $\eta$ in \eqref{eta:psi}. One often can construct a filter $\Theta\in \lp{0}$ so that $\pTh(z)\ge 0$ for all $z\in \T$ and $\vmo(\pTh(z)-\pTh(z^2)\pa(z)\pa^\star(z))$ is reasonably high. However, many pairs of filters $a, \Theta\in \lp{0}$ do not satisfy the condition in \eqref{det:cM}. Let us provide an example here for the most popular choice of $\pTh(z)=1$. Let $u\in \lp{0}$ be a filter such that $|\pu(z_0)|^2+|\pu(-z_0)|^2<1$ for some $z_0\in \T$. Let $a\in \lp{0}$ be an arbitrary dual filter of $u$, that is, $\pa(z)\pu^\star(z)+\pa(-z)\pu^\star(-z)=1$ for all $z\in \T$.
Consequently, by the Cauchy-Schwarz inequality, we have $1\le (|\pa(z_0)|^2+|\pa(-z_0)|^2) (|\pu(z_0)|^2+|\pu(-z_0)|^2)$, from which we have $|\pa(z_0)|^2+|\pa(-z_0)|^2 \ge (|\pu(z_0)|^2+|\pu(-z_0)|^2)^{-1}>1$. Therefore, by $\pTh(z)=1$, we have $\det(\cM_{a,\Theta}(z))=1-\pa(z)\pa^\star(z)-\pa(-z)\pa^\star(-z)$ but $\det(\cM_{a,\Theta}(z_0))<0$. This shows that the condition in \eqref{det:cM} fails for many filters $a$ even with the most popular and simplest choice of $\pTh(z)=1$. Hence, a tight framelet cannot be derived from the refinable function associated with the filter $a$ and $\pTh(z)=1$. Also, some papers try to design general $ \pTh(z) $ to guarantee $ \cM_{\pa, \pTh}(z) \geqslant 0 $ for all $ z \in \T $. However, in order to prove the existence of such $ \pTh(z) $, they have to put additional assumptions on the spectral radius of the transition operator associated with the low-pass filter $ a $, or the stability of the integer shifts of the refinable function $ \phi $, e.g., see \cite{chs02,dhrs03,HanMo:2005Symmetric}. This motivates us to introduce the notion of quasi-tight framelet filter banks. Let $\Theta,a,b_1,\ldots,b_s\in \lp{0}$ and $\eps_1,\ldots,\eps_s\in \{-1,1\}$. We say that $\{a;b_1,\ldots,b_s\}_{\Theta, (\epsilon_1,\ldots,\epsilon_s)}$ is \emph{a quasi-tight framelet filter bank} if \be \label{qtffb} \left[\begin{matrix} \pb_1(z) &\cdots &\pb_s(z)\\ \pb_1(-z) &\cdots &\pb_s(-z)\end{matrix}\right] \left[\begin{matrix} \eps_1 & &\\ &\ddots &\\ & &\eps_s\end{matrix}\right] \left[\begin{matrix} \pb_1(z) &\cdots &\pb_s(z)\\ \pb_1(-z) &\cdots &\pb_s(-z)\end{matrix}\right]^\star =\cM_{\pa,\pTh}(z), \ee where $\cM_{\pa,\pTh}$ is defined in \eqref{cM}. Hence, a tight framelet filter bank is a special case of a quasi-tight framelet filter bank with $\eps_1=\cdots=\eps_s = 1$. We call $ \epsilon_\ell \in \{-1, 1\} $ the \emph{signature} of the filter $ b_\ell $, $ \ell = 1,\ldots,s $. 
Moreover, it is straightforward to observe that $\{a;b_1,\ldots,b_s\}_{\Theta, (\epsilon_1,\ldots,\epsilon_s)}$ is a quasi-tight framelet filter bank if and only if $\{a;b_1,\ldots,b_s\}_{-\Theta, (-\epsilon_1, \ldots, -\epsilon_s)}$ is a quasi-tight framelet filter bank. Assume that $\pa(1)=1$ and $\phi\in \Lp{2}$ with $\phi$ being defined in \eqref{phi}. Write $\pTh(z)=\tilde{\pth}(z)\pth^\star(z)$ for some $\tilde{\theta}, \theta\in \lp{0}$. Define $\eta,\psi^1,\ldots,\psi^s$ as in \eqref{eta:psi} and $\tilde{\eta}:=\sum_{k\in\Z} \tilde{\theta}(k) \phi(\cdot-k)$. If in addition $\pTh(z)\ge 0$ for all $z\in \T$, then by Fej\'er-Riesz lemma we can always choose $\tilde{\pth}(z)=\pth(z)$ so that $\tilde{\eta}=\eta$. If $\pTh(1)=1$ and $\pb_1(1)=\cdots=\pb_s(1)=0$, by \cite[Theorems~4.1.9 and 6.4.1]{hanbook} and \cite[Theorem~2.3]{han03}, then $\{\eta,\tilde{\eta};\psi^1,\ldots,\psi^s\}_{ (\eps_1,\ldots,\eps_s)}$ is \emph{a quasi-tight framelet} in $\Lp{2}$, that is, for all $ f \in \Lp{2} $, \be \label{qtf} f=\sum_{k\in \Z} \la f, \tilde{\eta}(\cdot-k)\ra \eta(\cdot-k)+ \sum_{j=0}^\infty \sum_{\ell=1}^s \sum_{k\in \Z} \eps_\ell \la f, \psi^\ell_{2^j;k}\ra \psi^\ell_{2^j;k} \ee with the series converging unconditionally in $\Lp{2}$ and the underlying system being a Bessel sequence in $\Lp{2}$. By \cite[Proposition~4]{han12}, it follows directly from \eqref{qtf} that $\{\psi^1,\ldots,\psi^s\}_{(\eps_1,\ldots,\eps_s)}$ is \emph{a homogeneous quasi-tight framelet} in $\Lp{2}$, that is, \be \label{hqtf} f=\sum_{j\in \Z} \sum_{\ell=1}^s \sum_{k\in \Z} \eps_\ell \la f, \psi^\ell_{2^j;k}\ra \psi^\ell_{2^j;k},\qquad \forall\, f\in \Lp{2} \ee with the series converging unconditionally in $\Lp{2}$ and the underlying system being a Bessel sequence in $\Lp{2}$. The multiscale representations in \eqref{qtf} and \eqref{hqtf} using a quasi-tight framelet are very similar to those in \eqref{tf} and \eqref{htf} under a tight framelet. 
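A degenerate but instructive special case of \eqref{qtffb} (the Haar filter $a(0)=a(1)=\frac{1}{2}$ with $\pTh(z)=1$, $s=1$ and $\eps_1=1$; a worked check rather than a new construction): here $\pa(z)\pa^\star(z)+\pa(-z)\pa^\star(-z)=1$ on $\T$, so

```latex
\det(\cM_{\pa,1}(z))=1-\pa(z)\pa^\star(z)-\pa(-z)\pa^\star(-z)=0,
\qquad
\operatorname{tr}(\cM_{\pa,1}(z))=2-\pa(z)\pa^\star(z)-\pa(-z)\pa^\star(-z)=1,
```

hence $\cM_{\pa,1}(z)$ has eigenvalues $0$ and $1$ for every $z\in\T$, and \eqref{qtffb} is solved by the single Haar high-pass filter $\pb_1(z)=\frac{1-z}{2}$ with signature $\eps_1=1$, i.e., a quasi-tight framelet filter bank that is in fact tight.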
Therefore, a quasi-tight framelet is a special class of dual framelets in $\Lp{2}$ but behaves almost identically to a tight framelet with the exception of possible sign changes of framelet coefficients. An example of quasi-tight framelets and quasi-tight framelet filter banks was probably first observed in \cite[Example~3.2.2]{hanbook} and was obtained by applying the general algorithm in \cite{han15} for constructing dual framelet filter banks. The equations in \eqref{qtffb} for a quasi-tight framelet filter bank are intrinsically linked to the problem of matrix spectral factorization, which we shall study extensively in this paper. Moreover, similar to the identity in \eqref{tf:vm:sr} for a tight framelet filter bank, if $\{a;b_1,\ldots,b_s\}_{\Theta,(\epsilon_1,\ldots, \epsilon_s)}$ is a quasi-tight framelet filter bank, then we have \be \label{qtf:vm:sr} \min(\vmo(b_1),\ldots,\vmo(b_s))\le \min(\sr(a), \tfrac{1}{2}\vmo(\pTh(z)-\pTh(z^2)\pa(z)\pa^\star(z))). \ee That is, the highest possible order of vanishing moments achieved by a quasi-tight framelet filter bank derived from given filters $a,\Theta\in \lp{0}$ is $\min(\sr(a), \tfrac{1}{2}\vmo(\pTh(z)-\pTh(z^2)\pa(z)\pa^\star(z)))$. As demonstrated in \cite[Theorem~7]{han15} and \cite[Theorem~1.4.7]{hanbook}, for general filters $a,\Theta\in \lp{0}$, $\det(\cM_{a,\Theta}(z))$ is often not identically zero and the minimum number $s$ of high-pass filters in a quasi-tight framelet filter bank is at least $2$. Given a Laurent polynomial $ \pp(z) $, for simplicity, we use $ \pp(z)\equiv 0 $ ($ \pp(z)\not\equiv 0 $) to indicate that $ \pp(z) $ is (is not) identically zero. For an $n\times n$ square matrix $\pA(z)$ of Laurent polynomials, its spectrum $\sigma(\pA)$ is defined to be \be \label{sigmaA} \sigma(\pA):=\{z\in\C\setminus \{0\} \setsp \det(\pA(z))=0\}.
\ee If $ \pA^\star(z) = \pA(z) $, then $ \pA(z) $ is a Hermite matrix for all $ z\in \T $ and we call such $ \pA(z) $ \emph{a Hermite matrix of Laurent polynomials}. In this case, for all $ z\in \T $, all the eigenvalues of $\pA(z)$ are real numbers and hence, we define $\nu_+(\pA(z))$ to be the number of positive eigenvalues of the matrix $\pA(z)$, and define $\nu_-(\pA(z))$ to be the number of negative eigenvalues of the matrix $\pA(z)$. In particular, for filters $a,\Theta\in \lp{0}$, we define \be \label{saTheta} s_{a,\Theta}^+ := \max_{z\in \T}\nu_+(\cM_{\pa,\pTh}(z)), \qquad s_{a,\Theta}^- := \max_{z\in \T}\nu_-(\cM_{\pa,\pTh}(z)), \qquad \mbox{and}\quad s_{a,\Theta} := s_{a,\Theta}^+ + s_{a,\Theta}^-, \ee where the matrix $\cM_{\pa,\pTh}(z)$ is defined in \eqref{cM}. Through the study of the generalized matrix spectral factorization in \eqref{qtffb}, we now state the main result obtained in this paper on quasi-tight framelets with the minimum number of generators and the highest possible order of vanishing moments derived from any arbitrarily given filters $a,\Theta\in \lp{0}$. \begin{theorem}\label{thm:qtf} Let $a,\Theta\in \lp{0}\bs\{0\}$ be two finitely supported not-identically-zero filters such that $\pTh^\star=\pTh$. Let $n_b$ be any positive integer satisfying \be \label{nb} 1\le n_b\le \min(\sr(a), \tfrac{1}{2} \vmo(\pTh(z)-\pTh(z^2) \pa(z)\pa^\star(z))). \ee Let $\cM_{\pa,\pTh}(z)$ be defined in \eqref{cM} and the quantities $s_{a,\Theta}^+, s_{a,\Theta}^-, s_{a,\Theta}$ be defined in \eqref{saTheta}. Define $s:=s_{a,\Theta}$. Then there exist $b_1,\ldots,b_s\in \lp{0}$ and $ \eps_1=\ldots= \eps_{s_{a,\Theta}^+} = 1 $, $\eps_{s_{a,\Theta}^+ +1}=\ldots=\eps_s=-1$ such that $\{a;b_1,\ldots,b_s\}_{\Theta, (\epsilon_1,\ldots, \epsilon_s)}$ is a quasi-tight framelet filter bank with $ \min\{\vmo(b_1),\ldots, \vmo(b_s)\}\geqslant n_b $. 
Moreover, for $1\le s<s_{a,\Theta}$, there does not exist a quasi-tight framelet filter bank $\{a;b_1,\ldots,b_s\}_{\Theta, (\epsilon_1, \ldots, \epsilon_s)}$ with $b_1,\ldots,b_s\in \lp{0}$ and $\eps_1,\ldots,\eps_s\in\{-1,1\}$. Furthermore, if $\pa(1)=\pTh(1)=1$ and $\phi\in \Lp{2}$ with $\phi$ being defined in \eqref{phi}, then $\{\eta,\tilde{\eta};\psi^1,\ldots,\psi^s\}_ {(\eps_1,\ldots,\eps_s)}$ is a quasi-tight framelet in $\Lp{2}$, where $\eta,\psi^1,\ldots,\psi^s\in \Lp{2}$ are defined in \eqref{eta:psi} and $\tilde{\eta}:=\sum_{k\in \Z} \tilde{\theta}(k)\phi(\cdot-k)$ with $\tilde{\pth}(z)\pth^\star(z)=\pTh(z)$. \end{theorem} Since quasi-tight framelets preserve most desirable properties of tight framelets and enjoy great flexibility as demonstrated in Theorem~\ref{thm:qtf}, we expect that quasi-tight framelets will be as useful as tight framelets in applications. We also mention that our investigation on quasi-tight framelets is much more involved than the study of tight framelets in \cite{chs02,dhrs03,han15,sel01}, and the approach taken in these papers for tight framelets does not carry over to general quasi-tight framelets. Our proof of Theorem~\ref{thm:qtf} is constructive and we shall provide an algorithm to construct the filters in Theorem~\ref{thm:qtf}. To prove Theorem~\ref{thm:qtf} on quasi-tight framelet filter banks, we shall establish two main results on generalized matrix spectral factorizations. If $ A $ is an $ n\times n $ Hermite matrix, its \emph{signature} $ \operatorname{sig}(A) $ is defined as \[ \operatorname{sig}(A) := \nu_+(A) - \nu_-(A), \] where $\nu_+(A)$ and $\nu_-(A)$ are the numbers of its positive and negative eigenvalues, respectively. For a Hermite matrix $ \pA(z) $ of Laurent polynomials, we say that it has constant signature if $ \operatorname{sig}(\pA(z)) $ is constant for all $ z\in \T\bs\sigma(\pA)$.
In this situation, we can easily see that $ \nu_+(\pA(z)) $ and $ \nu_-(\pA(z)) $ remain constant for all $ z\in \T\bs\sigma(\pA) $. For Hermite matrices of Laurent polynomials with constant signature, we have the following result on the generalized spectral factorization problem. \begin{theorem} \label{thm:const-sig} Let $\pA(z)$ be an $n \times n$ Hermite matrix of Laurent polynomials such that $\det(\pA(z))$ is not identically zero. If $\nu_+(\pA(z)) = \nu_+$ and $\nu_-(\pA(z)) = \nu_-$ for all $ z\in \T\setminus\sigma(\pA)$ for some nonnegative integers $ \nu_+ $ and $ \nu_- $, then there exists an $n \times n$ matrix $\pU(z)$ of Laurent polynomials such that $\pA(z)=\pU(z)\pD\pU^\star(z)$, where $\pD:=\diag(\mathbf{I}_{\nu_+}, -\mathbf{I}_{\nu_-})$ is an $n \times n$ constant diagonal matrix. \end{theorem} If $\pA(z)\ge 0$ for all $z\in \T$, then it is trivial that $\nu_+(\pA(z))=n$ and $\nu_-(\pA(z))=0$ for all $z\in \T\bs \sigma(\pA)$. Therefore, for the special case $\pA(z)\ge 0$ for all $z\in \T$, Theorem~\ref{thm:const-sig} reduces to the standard result on matrix spectral factorization (also known as the Matrix-valued Fej\'er-Riesz Lemma) for nonnegative Hermite matrices of Laurent polynomials, which has been extensively studied in the literature, e.g., see \cite{RosRov:1985Hardy,HarHogSun:2004The-matrix-valued,EphJanLag:2009A-simple} and many references therein. This classical result on matrix spectral factorization plays a key role in the construction of tight framelets and tight framelet filter banks with two (non-symmetric) high-pass filters, e.g., see \cite{chs02,dhrs03,sel01} and references therein. For a general Hermite matrix of Laurent polynomials, we have \begin{theorem} \label{thm:nonconst-sig} Let $\pA(z)$ be an $n \times n$ Hermite matrix of Laurent polynomials such that $\det(\pA(z))$ is not identically zero.
Then there exists some $n \times m$ matrix $\pU(z)$ of Laurent polynomials such that $\pA(z)=\pU(z)\pD\pU^\star(z)$ holds with $\pD =\diag( \mathbf{I}_{m_1}, -\mathbf{I}_{m_2})$ and $m:=m_1 + m_2$ if and only if \begin{equation} \label{eq:LargeSig} m_1 \geqslant \max_{z\in\T}\nu_+(\pA(z)), \qquad m_2 \geqslant \max_{z\in\T}\nu_-(\pA(z)). \end{equation} \end{theorem} The above Theorems~\ref{thm:const-sig} and~\ref{thm:nonconst-sig} play a key role in our proof of Theorem~\ref{thm:qtf} and our study on quasi-tight framelets and quasi-tight framelet filter banks. Moreover, our proofs of Theorems~\ref{thm:const-sig} and~\ref{thm:nonconst-sig} are constructive and supplemented by step-by-step algorithms. We also mention that the generalized matrix spectral factorization problem for matrices of polynomials has been extensively investigated in the engineering literature, for example, see \cite{GohLanRod:1980Spectral,GohLanRod:1982Factorization,RanRod:1994Factorization,RanZiz:1997On-self-adjoint} and many references therein. However, there are barely any references on the generalized matrix spectral factorization problem for matrices of Laurent polynomials. Although our constructive proofs share some similarities with the polynomial results \cite{GohLanRod:1982Factorization,RanRod:1994Factorization}, many new ideas and techniques are needed in order to handle the generalized matrix spectral factorization problem for matrices of Laurent polynomials. The structure of the paper is as follows. In Section~2 we shall prove Theorem~\ref{thm:qtf} using Theorems~\ref{thm:const-sig} and~\ref{thm:nonconst-sig} on generalized matrix spectral factorization. In Section~3 we shall provide a few examples of quasi-tight framelet filter banks and quasi-tight framelets in $\Lp{2}$ to illustrate our main results on quasi-tight framelets. In Section~4 we shall prove Theorem~\ref{thm:const-sig} on generalized matrix spectral factorization with constant signature. For improved readability, a few technical results for proving Theorem~\ref{thm:const-sig} are presented in the Appendix. In Section~5, we shall prove Theorem~\ref{thm:nonconst-sig}. Finally, in Section~6 we shall briefly discuss some extensions of our results to one-dimensional quasi-tight framelets with a general dilation factor.
\section{Proof of Theorem~\ref{thm:qtf} on Quasi-tight Framelets} In this section, we shall prove Theorem~\ref{thm:qtf} using Theorems~\ref{thm:const-sig} and \ref{thm:nonconst-sig} on generalized matrix spectral factorization. The proofs of Theorems~\ref{thm:const-sig} and \ref{thm:nonconst-sig} will be presented in Sections~4 and 5. Before proving Theorem~\ref{thm:qtf}, we need the following lemma. \begin{lemma} \label{lem:maxminT} Let $ \pA(z) $ be an $ n\times n $ Hermite matrix of Laurent polynomials. Then \[ \max_{z\in \T}\nu_+(\pA(z)) = \max_{z\in \T \bs B}\nu_+(\pA(z)), \qquad \max_{z\in \T}\nu_-(\pA(z)) = \max_{z\in \T \bs B}\nu_-(\pA(z)) \] for any finite subset $B$ of $\T$. \end{lemma} \begin{proof} Define $ n_+ := \max_{z\in \T} \nu_+(\pA(z)) $. Then there exists some $ z_0 \in \T $, such that $ \nu_+(\pA(z_0)) = n_+ $. Since $ \pA(z) $ is an $ n\times n $ Hermite matrix of Laurent polynomials, its $ n $ eigenvalues $ \lambda_1(z), \ldots, \lambda_n(z) $, which are all the roots of the polynomial $ \det(\lambda\mathbf{I_n} - \pA(z)) $, can be chosen as real-valued continuous functions on $ \T $. (They are actually algebraic functions which are globally analytic.) Therefore, there exists a neighborhood $ U(z_0) $ of $ z_0 $ on $ \T $, such that $ \nu_+(\pA(z)) = n_+ $ for all $ z\in U(z_0) $. As $ U(z_0) $ contains infinitely many points, the set $ U(z_0)\setminus B $ must be nonempty. This implies that \[ \max_{z\in \T \bs B} \nu_+(\pA(z)) \geqslant n_+ = \max_{z\in \T} \nu_+(\pA(z)). \] Since $ \T\bs B $ is a subset of $ \T $, we trivially have $\max_{z\in \T \bs B} \nu_+(\pA(z))\le n_+$. This proves $\max_{z\in \T}\nu_+(\pA(z)) = \max_{z\in \T \bs B}\nu_+(\pA(z))$. The identity $ \max_{z\in \T}\nu_-(\pA(z)) = \max_{z\in \T \bs B}\nu_-(\pA(z)) $ can be proved similarly. 
\end{proof} For a Laurent polynomial $ \pp(z)\not\equiv 0 $ and $ z_0\in \C\setminus\{0\} $, we define $\mz(\pp(z), z_0) $ to be the multiplicity of the root of $ \pp(z) $ at $ z_0 $. That is, $ \mz(\pp(z), z_0) $ is the nonnegative integer such that $ (z-z_0)^{\mz(\pp(z), z_0)} \mid \pp(z) $ but $ (z-z_0)^{\mz(\pp(z), z_0)+1} \nmid \pp(z) $.
Hence, the orders of vanishing moments and sum rules of a Laurent polynomial $ \pp(z) $ can be equivalently expressed by \[ \vmo(\pp(z)) = \mz(\pp(z), 1) \qquad \mbox{and} \quad \sr(\pp(z)) = \mz(\pp(z), -1). \] Also, recall that for a finitely supported sequence $ u \in \lp{0} $ and $ \gamma \in \Z $, \emph{its $ \gamma$-coset sequence $ u^{[\gamma]} $} is defined to be $ u^{[\gamma]}:=\{u(\gamma + 2k)\}_{k\in \Z} $. In terms of Laurent polynomials, we have $ \pu(z) = \pu^{[0]}(z^2) + z \pu^{[1]}(z^2) $. Moreover, \[ \begin{bmatrix} \pb_1(z) & \cdots & \pb_s(z) \\ \pb_1(-z) & \cdots & \pb_s(-z) \end{bmatrix} = \begin{bmatrix} 1 & z \\ 1 & -z \end{bmatrix} \begin{bmatrix} \pb_1^{[0]}(z^2) & \cdots & \pb_s^{[0]}(z^2) \\ \pb_1^{[1]}(z^2) & \cdots & \pb_s^{[1]}(z^2) \end{bmatrix}, \] where the last $ 2\times s $ matrix is called the \emph{polyphase matrix} of the filter bank $ \{\pb_1, \ldots, \pb_s\} $. We now prove Theorem~\ref{thm:qtf} using Theorems~\ref{thm:const-sig} and~\ref{thm:nonconst-sig}. \begin{proof}[Proof of Theorem~\ref{thm:qtf}] Since all high-pass filters must have at least $n_b$ vanishing moments, we can write \begin{equation} \label{eq:bCoset} \pb_\ell(z) = (1-z^{-1})^{n_b}\mathring{\pb}_\ell(z), \qquad \ell = 1,\ldots, s \end{equation} for some Laurent polynomials $\mathring{\pb}_1(z), \ldots, \mathring{\pb}_s(z)$. 
Then $\{a;b_1,\ldots,b_s\}_{\Theta,(\eps_1,\ldots,\eps_s)}$ is a quasi-tight framelet filter bank satisfying \eqref{qtffb} and \eqref{eq:bCoset} if and only if \begin{equation} \label{eq:qtfbnb} \begin{bmatrix} \mathring{\pb}_1(z) & \cdots & \mathring{\pb}_s(z) \\ \mathring{\pb}_1(-z) & \cdots & \mathring{\pb}_s(-z) \end{bmatrix} \begin{bmatrix} \epsilon_1 & & \\ & \ddots & \\ & & \epsilon_s \end{bmatrix} \begin{bmatrix} \mathring{\pb}_1(z) & \cdots & \mathring{\pb}_s(z) \\ \mathring{\pb}_1(-z) & \cdots & \mathring{\pb}_s(-z) \end{bmatrix}^\star = \cM_{\pa, \pTh|n_b}(z), \end{equation} where \begin{align} \label{eq:VMfactor} \cM_{\pa, \pTh|n_b}(z) :=& \begin{bmatrix} (1-z^{-1})^{-n_b} & \\ & (1+z^{-1})^{-n_b} \end{bmatrix} \cM_{\pa, \pTh}(z) \begin{bmatrix} (1-z)^{-n_b} & \\ & (1+z)^{-n_b} \end{bmatrix} \\ =& \begin{bmatrix} \pA(z) & \pB(z) \\ \pB(-z) & \pA(-z) \end{bmatrix}, \notag \end{align} with \begin{equation} \label{eq:DefAB} \pA(z):= \frac{\pTh(z) - \pTh(z^2)\pa(z)\pa^\star(z)}{(1-z)^{n_b}(1-z^{-1})^{n_b}}, \qquad \pB(z):= \frac{-\pTh(z^2)\pa(z)\pa^\star(-z)}{(1+z)^{n_b}(1-z^{-1})^{n_b}}. \end{equation} Note that according to \eqref{nb}, we have $ 2n_b \leqslant \mz(\pTh(z) - \pTh(z^2)\pa(z)\pa^\star(z), ~1) $ and $ n_b \leqslant \mz(\pa(z), ~ -1) = \mz(\pa^\star(-z), ~ 1) $. Hence $ \pA(z) $ and $ \pB(z) $ are well-defined Laurent polynomials. 
Using the coset sequences, we know that \eqref{eq:qtfbnb} is equivalent to \begin{equation} \label{eq:PRpolyphase} \begin{bmatrix} \mathring{\pb}_1^{[0]}(z) & \cdots & \mathring{\pb}_s^{[0]}(z) \\ \mathring{\pb}_1^{[1]}(z) & \cdots & \mathring{\pb}_s^{[1]}(z) \end{bmatrix} \begin{bmatrix} \epsilon_1 & & \\ & \ddots & \\ & & \epsilon_s \end{bmatrix} \begin{bmatrix} \mathring{\pb}_1^{[0]}(z) & \cdots & \mathring{\pb}_s^{[0]}(z) \\ \mathring{\pb}_1^{[1]}(z) & \cdots & \mathring{\pb}_s^{[1]}(z) \end{bmatrix}^\star = \cN_{\pa, \pTh|n_b}(z), \end{equation} where $\cN_{\pa, \pTh|n_b}(z)$ is calculated from: \begin{equation} \label{eq:DefN1} \cM_{\pa, \pTh|n_b}(z) = \begin{bmatrix} 1 & z \\ 1 & -z \end{bmatrix} \cN_{\pa, \pTh|n_b}(z^2) \begin{bmatrix} 1 & z \\ 1 & -z \end{bmatrix}^\star. \end{equation} That is, \begin{equation} \label{eq:DefN2} \cN_{\pa, \pTh|n_b}(z):=\frac{1}{2} \begin{bmatrix} \pA^{[0]}(z) + \pB^{[0]}(z) & z(\pA^{[1]}(z)-\pB^{[1]}(z)) \\ \pA^{[1]}(z) + \pB^{[1]}(z) & \pA^{[0]}(z) - \pB^{[0]}(z) \end{bmatrix}, \end{equation} where $\pA(z)$ and $\pB(z)$ are defined in \eqref{eq:DefAB}. Hence, the existence of a quasi-tight framelet filter bank $ \{a; b_1, \ldots, b_s\}_{\Theta, (\epsilon_1, \ldots, \epsilon_s)} $ with $ n_b $ vanishing moments necessarily implies a generalized spectral factorization in \eqref{eq:PRpolyphase} for the matrix $ \cN_{\pa, \pTh|n_b}(z) $ of Laurent polynomials. According to Theorem~\ref{thm:nonconst-sig}, the existence of the generalized spectral factorization in \eqref{eq:PRpolyphase} implies that the number $ s_+ $ of times that $ ``+1"$ appears in $ \{\epsilon_1,\ldots,\epsilon_s\} $ and the number $ s_- $ of times that $ ``-1"$ appears in $ \{\epsilon_1,\ldots,\epsilon_s\} $ must satisfy \begin{equation} \label{eq:s+s-} s_+ \geqslant \max_{z\in \T}\nu_+(\cN_{\pa, \pTh|n_b}(z)) , \qquad \mbox{and} \quad s_- \geqslant \max_{z\in \T}\nu_-(\cN_{\pa, \pTh|n_b}(z)). 
\end{equation} By \eqref{eq:VMfactor} and \eqref{eq:DefN1}, we know that \[ \cM_{\pa, \pTh}(z) = \pP(z) \cN_{\pa, \pTh|n_b}(z^2) \pP^\star(z)\quad \mbox{with} \quad \pP(z) := \begin{bmatrix} (1-z^{-1})^{n_b} & \\ & (1+z^{-1})^{n_b} \end{bmatrix} \begin{bmatrix} 1 & z \\ 1 & -z \end{bmatrix}. \] Since $ \det(\pP(z)) = -2 z (1-z^{-1})^{n_b} (1+z^{-1})^{n_b} $, we observe $ \sigma(\pP) \subseteq \{-1, 1\} $. Hence, $ \sigma(\pP) $ is a finite set. For $ z\in \T \bs \sigma(\pP) $, the matrix $ \pP(z) $ is a nonsingular matrix. By Sylvester's law of inertia, we get from $ \cM_{\pa, \pTh}(z) = \pP(z)\cN_{\pa, \pTh|n_b}(z^2)\pP^\star(z) $ that $$ \nu_+(\cM_{\pa, \pTh}(z)) = \nu_+(\cN_{\pa, \pTh|n_b}(z^2)), \qquad \nu_-(\cM_{\pa, \pTh}(z)) = \nu_-(\cN_{\pa, \pTh|n_b}(z^2)), \qquad \forall z \in \T\bs \sigma(\pP). $$ According to Lemma~\ref{lem:maxminT}, we have \begin{align*} s_{a, \Theta}^+ =& \max_{z\in \T} \nu_+(\cM_{\pa, \pTh}(z)) = \max_{z\in \T \bs\sigma(\pP)} \nu_+(\cM_{\pa, \pTh}(z)) \\ =& \max_{z\in \T \bs\sigma(\pP)} \nu_+(\cN_{\pa, \pTh|n_b}(z^2)) = \max_{z\in \T} \nu_+(\cN_{\pa, \pTh|n_b}(z^2)) = \max_{z\in \T} \nu_+(\cN_{\pa, \pTh|n_b}(z)). \end{align*} Similarly, $ s_{a, \Theta}^- = \max_{z\in \T} \nu_-(\cN_{\pa, \pTh|n_b}(z)) $. Therefore, from \eqref{eq:s+s-} we know that the generalized spectral factorization in \eqref{eq:PRpolyphase} implies \begin{equation} \label{eq:s+s-2} s_+ \geqslant s_{a, \Theta}^+ , \qquad s_- \geqslant s_{a, \Theta}^-, \qquad \mbox{and} \quad s = s_+ + s_- \geqslant s_{a, \Theta}^+ + s_{a, \Theta}^- = s_{a, \Theta} . \end{equation} Hence, by Theorem~\ref{thm:nonconst-sig}, for $1\le s<s_{a,\Theta}$, there does not exist a quasi-tight framelet filter bank $\{a;b_1,\ldots,b_s\}_{\Theta, (\epsilon_1, \ldots, \epsilon_s)}$ with $b_1,\ldots,b_s\in \lp{0}$ and $\eps_1,\ldots,\eps_s\in\{-1,1\}$. 
On the other hand, given filters $ a, \Theta \in \lp{0}\bs\{0\} $, $ \Theta^\star = \Theta $, and a positive integer $ n_b $ satisfying \eqref{nb}, we can calculate the matrix $ \cN_{\pa, \pTh|n_b}(z) $ of Laurent polynomials from \eqref{eq:DefAB} and \eqref{eq:DefN2}. By $ \Theta^\star(z) = \Theta(z) $, we deduce from \eqref{eq:DefAB} that $ \pA^\star(z) = \pA(z) $ and $ \pB^\star(z) = \pB(-z) $. Plugging these identities into $ \pA^{[0]}(z^2) = \tfrac{1}{2}\left(\pA(z) + \pA(-z)\right) $, $ \pA^{[1]}(z^2) = \tfrac{1}{2z}\left(\pA(z) - \pA(-z)\right)$, $ \pB^{[0]}(z^2) = \tfrac{1}{2}\left(\pB(z) + \pB(-z)\right) $, and $ \pB^{[1]}(z^2) = \tfrac{1}{2z}\left(\pB(z)-\pB(-z)\right)$, we can easily verify that \[ \pA^{[0]^\star}(z) = \pA^{[0]}(z),\qquad \pA^{[1]^\star}(z) = z\pA^{[1]}(z), \qquad \pB^{[0]^\star}(z) = \pB^{[0]}(z),\qquad \mbox{and}\quad \pB^{[1]^\star}(z) = -z\pB^{[1]}(z). \] Using the above four equations, we deduce from \eqref{eq:DefN2} that $ \cN_{\pa, \pTh|n_b}^\star(z) = \cN_{\pa, \pTh|n_b}(z) $. That is, $ \cN_{\pa, \pTh|n_b}(z) $ is a Hermite matrix of Laurent polynomials. As we calculated, $ s_{a, \Theta}^+ = \max_{z\in \T} \nu_+(\cN_{\pa, \pTh|n_b}(z)) $ and $ s_{a, \Theta}^- = \max_{z\in \T} \nu_-(\cN_{\pa, \pTh|n_b}(z)) $. Take $ s := s_{a, \Theta} = s_{a, \Theta}^+ + s_{a, \Theta}^- $. According to Theorem~\ref{thm:nonconst-sig}, we can choose $ \eps_1=\cdots= \eps_{s_{a,\Theta}^+} = 1 $, $\eps_{s_{a,\Theta}^+ +1}=\cdots=\eps_{s}=-1$, and find a generalized spectral factorization of $ \cN_{\pa, \pTh|n_b}(z) $ as $ \cN_{\pa, \pTh|n_b}(z) = \pU(z) \mbox{diag}(\eps_1,\ldots,\eps_s) \pU^\star(z)$, where $ \pU(z) $ is a $ 2\times s $ matrix of Laurent polynomials. Define Laurent polynomials $ \mathring{\pb}_1(z), \ldots, \mathring{\pb}_{s}(z) $ by $ \begin{bmatrix} \mathring{\pb}_1^{[0]}(z) & \cdots & \mathring{\pb}_{s}^{[0]}(z) \\ \mathring{\pb}_1^{[1]}(z) & \cdots & \mathring{\pb}_{s}^{[1]}(z) \end{bmatrix} := \pU(z) $. 
Thus, \eqref{eq:PRpolyphase} holds. Multiplying $ \begin{bmatrix} 1 & z \\ 1 & -z \end{bmatrix} $ and $ \begin{bmatrix} 1 & z \\ 1 & -z \end{bmatrix}^\star $ on the left and right sides of $ \cN_{\pa, \pTh|n_b}(z^2) $ respectively, we see that \eqref{eq:PRpolyphase} is equivalent to \eqref{eq:qtfbnb} with $ \cM_{\pa, \pTh|n_b}(z) $ being defined in \eqref{eq:VMfactor}. Defining the Laurent polynomials $ \pb_1(z), \ldots, \pb_s(z) $ as in \eqref{eq:bCoset}, we conclude from \eqref{eq:qtfbnb} that $ \{a; b_1, \ldots, b_s\}_{\Theta, (\epsilon_1, \ldots, \epsilon_s)} $ is a quasi-tight framelet filter bank with $ \min\{\vmo(b_1),\ldots, \vmo(b_s)\}\geqslant n_b $. This proves the existence of a quasi-tight framelet filter bank with the minimum number of high-pass filters and high vanishing moments. \end{proof} By Theorem~\ref{thm:qtf}, we see that the minimum numbers of high-pass filters with positive and negative signatures in a quasi-tight framelet filter bank are exactly $s_{a, \Theta}^+ $ and $ s_{a, \Theta}^-$, which are defined in \eqref{saTheta}. We now determine these quantities explicitly for any given filters $a,\Theta \in \lp{0}\bs\{0\}$. Note that the matrix $\cM_{a,\Theta}$ cannot be identically zero. If $\det(\cM_{a,\Theta}(z))$ is identically zero, then one of the following two cases must happen: \begin{enumerate} \item[(1)] $ \pTh(z)\ge 0 $ for all $z\in \T$ if and only if $ s_{a, \Theta}^+ = 1 $ and $ s_{a, \Theta}^- = 0 $; \item[(2)] $\pTh(z)\le 0 $ for all $z\in \T$ if and only if $ s_{a, \Theta}^+ = 0 $ and $ s_{a, \Theta}^- = 1$. \end{enumerate} Since $ \det(\cM_{\pa, \pTh}(z))\equiv 0$ and $ \cM_{\pa, \pTh}(z) = -\cM_{\pa, -\pTh}(z) $, by \cite[Lemma~1.4.5]{hanbook} or \cite[Lemma~6]{han15}, we conclude that $\pTh(z)\ge 0$ (or $\pTh(z)\le 0$) for all $z\in \T$ if and only if $\cM_{\pa, \pTh}(z)\ge 0$ (or $\cM_{\pa, \pTh}(z)\le 0$) for all $z\in \T$. Note that $0$ must be an eigenvalue of $\cM_{\pa, \pTh}(z)$ by $\det(\cM_{\pa, \pTh}(z))=0$.
Hence, if $\cM_{\pa, \pTh}(z)\ge 0$ (or $\cM_{\pa, \pTh}(z)\le 0$) for all $z\in \T$, then the other eigenvalue of $\cM_{\pa, \pTh}(z)$ must be nonnegative (or non-positive) and cannot be identically zero, since $\cM_{\pa, \pTh}$ cannot be identically zero. This proves items (1) and (2). We now prove that $\pTh$ cannot change signs on $\T$. By our assumptions $\pTh^\star=\pTh$ and \begin{equation}\label{detM} \det(\cM_{a,\Theta}(z))= \pTh(z)\pTh(-z)-\pTh(z^2)[\pTh(-z)\pa(z)\pa^\star(z)+ \pTh(z)\pa(-z)\pa^\star(-z)]=0, \end{equation} we conclude (see \cite[Theorem~1.4.7]{hanbook} and \cite[Theorem~7]{han15}) that $\pTh(z)\in \R$ for $z\in \T$ and $\pTh(z)\pTh(-z) = \gl \pTh(z^2)$ for some nonzero real number $\gl$. Consequently, we have $\fth(z)\fth(-z)=\fth(z^2)$ with $\fth(z):=\pTh(z)/\gl$ and the above identity in \eqref{detM} is equivalent to \[ \fth(-z)\pu(z)+\fth(z)\pu(-z)=1\qquad \mbox{with}\quad \pu(z):=\pa(z)\pa^\star(z). \] Since $\pu(z)\ge 0$ for all $z\in \T$, by the above identity, if $\fth(z_0)<0$ for some $z_0\in \T$, then we must have $\fth(-z_0)>0$ and consequently $\fth(z_0^2)=\fth(z_0)\fth(-z_0)<0$. By induction, for any $z_0\in \T$, if $\fth(z_0)<0$, then we must have $\fth(z_0^{2^j})<0$ for all $j\in \N$. If $\pTh$ changes signs on $\T$, then by continuity $\fth(e^{-i\xi})<0$ for all $\xi$ in some interval $(c,d)$ with $c<d$. Then the above argument shows that $\fth(e^{-i\xi})<0$ for all $\xi\in (2^jc,2^jd)$ and all $j\in \N$; since $2^j(d-c)>2\pi$ for all sufficiently large $j$, the interval $(2^jc,2^jd)$ eventually covers a full period of $\xi\mapsto e^{-i\xi}$. Therefore, we must have $\gl^{-1}\pTh(z) = \fth(z) <0$ for all $z\in \T$, a contradiction to our assumption. This proves that $\pTh$ cannot change signs on $\T$.
If $\det(\cM_{a,\Theta}(z))$ is not identically zero, then one of the following four cases must happen: \begin{enumerate} \item[(3)] $\pTh(z)\ge 0$ and $\det(\cM_{a,\Theta}(z))\ge 0$ for all $z\in \T$ if and only if $ s_{a, \Theta}^+ = 2 $ and $ s_{a, \Theta}^- = 0 $; \item[(4)] $\pTh(z)\le 0$ and $\det(\cM_{a,\Theta}(z))\ge 0$ for all $z\in \T$ if and only if $ s_{a, \Theta}^+ = 0 $ and $ s_{a, \Theta}^- = 2 $; \item[(5)] $\det(\cM_{a,\Theta}(z))\le 0$ for all $z\in \T$ if and only if $ s_{a, \Theta}^+ = 1 $ and $ s_{a, \Theta}^- = 1 $; \item[(6)] Otherwise (i.e., beyond the above three cases in items (3)--(5)), $s_{a,\Theta} = s_{a,\Theta}^+ + s_{a,\Theta}^->2$. \end{enumerate} Since $ \cM_{\pa, \pTh}(z) = -\cM_{\pa, -\pTh}(z)$, items (3) and (4) are direct consequences of \cite[Theorem~1.4.5]{hanbook} and \cite[Theorem~7]{han15}. Since $ \det(\cM_{\pa, \pTh}(z)) $ is the product of its two eigenvalues, we know that $\det(\cM_{a,\Theta}(z))\le 0$ for all $z\in \T$ if and only if for all $ z\in \T\bs\sigma(\cM_{\pa, \pTh}) $, $ \nu_+(\cM_{\pa, \pTh}(z)) = \nu_-(\cM_{\pa, \pTh}(z)) = 1 $. Because $\sigma(\cM_{\pa, \pTh}) $ is a finite set, we conclude from Lemma~\ref{lem:maxminT} that this is equivalent to $ s_{a, \Theta}^+ = s_{a, \Theta}^- = 1 $. This proves item (5). Hence, items (1) and (2) characterize all the cases for $s_{a,\Theta}=1$, while items (3)--(5) characterize all the cases for $s_{a,\Theta}=2$. Note that items (1) and (3) lead to tight framelet filter banks $\{a; b_1, \ldots, b_{s_{a,\Theta}}\}_\Theta$, and items (2) and (4) lead to tight framelet filter banks $ \{a; b_1, \ldots, b_{s_{a,\Theta}}\}_{-\Theta}$, with $s_{a,\Theta}\in \{1,2\}$. Items (5) and (6) lead to quasi-tight framelet filter banks which cannot be changed into tight framelet filter banks.
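The case distinctions in items (1)--(6) can be tested numerically by sampling $\det(\cM_{\pa,\pTh}(z))$ and $\pTh(z)$ on the unit circle. The following sketch is our own illustration (the B-spline filter in the demo is an assumed choice, not an example from the text); it returns $(s_{a,\Theta}^+, s_{a,\Theta}^-)$ when one of items (1)--(5) applies:

```python
import numpy as np

# Classify (a, Theta) into items (1)-(6) by sampling on the unit circle.
# Returns (s+, s-) when determined by items (1)-(5), else None for item (6).
def classify(a, Th, n=4096):
    z = np.exp(1j * np.linspace(-np.pi, np.pi, n, endpoint=False))
    A = lambda w: Th(w) - Th(w**2) * a(w) * np.conj(a(w))  # diagonal entry of M
    B = lambda w: -Th(w**2) * a(w) * np.conj(a(-w))        # off-diagonal entry
    det = (A(z) * A(-z) - B(z) * B(-z)).real               # det(M) is real on T
    th, tol = Th(z).real, 1e-9
    pos, neg = np.all(th >= -tol), np.all(th <= tol)
    if np.all(np.abs(det) < tol):          # det identically zero: items (1)/(2)
        return (1, 0) if pos else (0, 1)   # Theta cannot change signs here
    if np.all(det >= -tol) and pos:        # item (3)
        return (2, 0)
    if np.all(det >= -tol) and neg:        # item (4)
        return (0, 2)
    if np.all(det <= tol):                 # item (5)
        return (1, 1)
    return None                            # item (6): s_{a,Theta} > 2

# Demo with an assumed illustrative choice: a(z) = (1+z)^2/4, Theta(z) = 1.
a = lambda z: (1 + z)**2 / 4
Th = lambda z: np.ones_like(z)
assert classify(a, Th) == (2, 0)           # item (3): s+ = 2, s- = 0
```

Such a sampled check only suggests the correct case; the exact classification of a concrete pair $(a,\Theta)$ still follows from the symbolic arguments above.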
For the most popular choice $\pTh(z) = 1$, according to the above discussion, one of the following four cases must happen: \begin{enumerate} \item[(i)] $ \det(\cM_{\pa, 1}(z)) \equiv 0 $ for all $ z \in \T $ if and only if $ s_{a, \Theta}^+ = 1 $ and $ s_{a, \Theta}^- = 0 $; \item[(ii)] $ \det(\cM_{\pa, 1}(z)) \geqslant 0$ for all $ z \in \T $ with $ \det(\cM_{\pa, 1}(z)) \not\equiv 0 $ if and only if $ s_{a, \Theta}^+ = 2 $ and $ s_{a, \Theta}^- = 0 $; \item[(iii)] $ \det(\cM_{\pa, 1}(z)) \leqslant 0$ for all $ z \in \T $ with $ \det(\cM_{\pa, 1}(z)) \not\equiv 0 $ if and only if $ s_{a, \Theta}^+ = 1 $ and $ s_{a, \Theta}^- = 1 $; \item[(iv)] $\det(\cM_{\pa, 1}(z)) $ changes signs on $\T$ if and only if $ s_{a, \Theta}^+ = 2 $ and $ s_{a, \Theta}^- = 1 $. \end{enumerate} \section{Examples of Quasi-tight Framelets and Quasi-tight Framelet Filter Banks} In this section, we provide some examples of quasi-tight framelet filter banks and quasi-tight framelets. Since tight framelet filter banks have been extensively studied and constructed in the literature, according to our discussion at the end of Section~2, we only provide examples for cases (5) and (6) (i.e., either $\det(\cM_{a,\Theta}(z))\le 0$ or it changes signs on $\T$), which lead to truly quasi-tight framelet filter banks. In order to obtain a quasi-tight framelet in $ \Lp{2} $, we have to check the technical condition that the refinable function $ \phi $ (defined in \eqref{phi}) associated with the low-pass filter $ a $ belongs to $ \Lp{2} $. Let $ a \in \lp{0} $ with $ \pa(1) = 1 $ and let $ m:= \sr(\pa) $ be the order of the sum rules of the low-pass filter $a$. Then we can write $ \pa(z) = (1+z)^m \mathring{\pa}(z) $, where $ \mathring{\pa}(-1)\neq 0 $. Let $ w\in \lp{0} $ be the sequence determined by $ \pw(z):= \mathring{\pa}(z)\mathring{\pa}^\star(z) $, whose highest and lowest degrees are $ K $ and $ -K $, respectively.
We now recall a technical quantity (e.g., see \cite[(2.0.7)]{hanbook}): \begin{equation}\label{smaM} \sm(a):=-\tfrac{1}{2}-\log_{2} \sqrt{\rho(a)}, \end{equation} where $ \rho(a) $ denotes the spectral radius of the square matrix $ (w(2j - k))_{-K \leqslant j,k \leqslant K} $. Let $\phi$ be defined in \eqref{phi}. If $\sm(a)>0$, then $\phi\in \Lp{2}$ and moreover, $\int_{\R} |\wh{\phi}(\xi)|^2 (1+|\xi|^2)^\tau d\xi<\infty$ for all $0\le \tau<\sm(a)$. The following example shows that for some low-pass filters $a$, one can never obtain a finitely supported tight framelet filter bank, but one can easily construct a quasi-tight framelet filter bank. \begin{example} \label{ex:ThetaVM1} {\rm Consider a low-pass filter $ a $ given by \[ \pa(z) = \tfrac{1}{26}z^{-2}(z+1)(z^2 - z + 1)(9z^2-5z+9). \] Note that $ |\pa(e^{-i2\pi/3})| = \frac{14}{13} > 1 $ and $ |\pa(e^{-i2\pi/3})| \not\in\{ 2^j\setsp j \in \N \} $. By \cite[Proposition 4.4]{HanMo:2005Symmetric}, there does not exist a (rational) Laurent polynomial $ \Theta $ with real coefficients such that $ \cM_{a,\Theta}(z)\ge 0$ for all $ z\in \T $. Therefore, using the Oblique Extension Principle, one cannot construct a real-valued tight framelet filter bank from such a low-pass filter $a$. Note that $\sr(\pa) =1$ and $\vmo(1-\pa\pa^\star)=2$. Taking $ \Theta(z) = 1 $ and $ n_b = 1 $, we see from Figure~\ref{fig:ThetaVM1} that $ \det(\cM_{\pa, 1}(z))$ changes signs on $\T$. Hence, $ s_{a, \Theta}^+ = 2 $ and $ s_{a, \Theta}^- = 1 $. We have a quasi-tight framelet filter bank $ \{a; b_1, b_2, b_3\}_{\Theta, (1, 1, -1)} $ as follows: \begin{align*} \pb_1(z) &= \tfrac{1}{52}z^{-2}(z-1)(63z^4+28z^3+100z^2+28z+63),\\ \pb_2(z) &= \tfrac{\sqrt{2}}{97344}z^{-2}(z-1)(9z^2+4z+9)(3645z^4 - 1034z^2+3645), \\ \pb_3(z) &= \tfrac{\sqrt{2}}{97344}z^{-2}(z-1)(9z^2+4z+9)(3645z^4 + 9782z^2+3645), \end{align*} with $ \vmo(b_1) = \vmo(b_2) = \vmo(b_3) = 1 $.
Since $ \sm(a) \approx 0.7693 $, the refinable function $ \phi $ defined in \eqref{phi} belongs to $ \Lp{2} $. Therefore, $\{\phi, \phi; \psi^1,\psi^2,\psi^3\}_{(1,1,-1)}$ is a quasi-tight framelet in $\Lp{2}$ and $\{\psi^1, \psi^2, \psi^3\}_{(1, 1, -1)} $ is a homogeneous quasi-tight framelet in $\Lp{2}$, where $ \psi^1, \psi^2, \psi^3 $ are defined in \eqref{eta:psi} and have at least one vanishing moment. }\end{example} \begin{figure}[hbt] \centering \begin{subfigure}[]{0.18\textwidth} \includegraphics[width=\textwidth, height=0.8\textwidth]{pics/ThetaVM1Phi.eps} \caption{$\phi$}\end{subfigure} \begin{subfigure}[]{0.18\textwidth} \includegraphics[width=\textwidth, height=0.8\textwidth]{pics/ThetaVM1Psi1.eps} \caption{$\psi^1$}\end{subfigure} \begin{subfigure}[]{0.18\textwidth} \includegraphics[width=\textwidth, height=0.8\textwidth]{pics/ThetaVM1Psi2.eps} \caption{$\psi^2$} \end{subfigure} \begin{subfigure}[]{0.18\textwidth} \includegraphics[width=\textwidth, height=0.8\textwidth]{pics/ThetaVM1Psi3.eps} \caption{$ \psi^3 $ } \end{subfigure} \begin{subfigure}[]{0.18\textwidth} \includegraphics[width=\textwidth, height=0.8\textwidth]{pics/ThetaVM1det.eps} \caption{$\det(\cM_{\pa,1})$}\end{subfigure} \caption{ The quasi-tight framelet $\{\phi,\phi; \psi^1,\psi^2,\psi^3\}_{(1,1,-1)}$ and the homogeneous quasi-tight framelet $\{\psi^1,\psi^2,\psi^3\}_{(1,1,-1)}$ in $\Lp{2}$ obtained in Example~\ref{ex:ThetaVM1}. (A) is the refinable function $\phi\in \Lp{2}$. (B) --(D) are the framelet functions $\psi^1$, $ \psi^2 $ and $ \psi^3$. (E) is $\det(\cM_{\pa, 1}(e^{-i\xi}))$ for $ \xi\in [-\pi, \pi] $, where the dashed line is the horizontal axis. }\label{fig:ThetaVM1} \end{figure} \begin{example} \label{ex:ThetaInterp} {\rm Consider $\Theta(z)=\tfrac{1}{2}(z+\tfrac{1}{z}) $ and the interpolatory low-pass filter \[ \pa(z) = \tfrac{1}{2} + \tfrac{3}{8} (z + z^{-1}) - \tfrac{1}{8}(z^3 + z^{-3}). 
\] We see from Figure~\ref{fig:ThetaInterp} that $\det(\cM_{\pa, \pTh}(z)) \leqslant 0$ for all $ z\in \T $. Therefore, $ s_{a, \Theta}^+ = s_{a, \Theta}^- = 1 $. Note that $ \sr(\pa) = 2 $ and $ \vmo(\pTh(z) - \pTh(z^2)\pa(z)\pa^\star(z)) = 4 $. Hence, the maximum order of vanishing moments is two. Taking $ n_b = 2 $, we obtain a quasi-tight framelet filter bank $ \{a; b_1, b_2\}_{\Theta, (1, -1)} $ as follows: \begin{align*} \pb_1(z) &= \tfrac{1}{32}z^{-3}(z-1)^2 (z^6+2z^5-4z^4-14z^3-23z^2-16z-8), \\ \pb_2(z) &= -\tfrac{1}{32}z^{-3}(z-1)^2 (z^4+2z^3+4z^2+2z+9), \end{align*} with $ \vmo(b_1) = \vmo(b_2) = 2 $. Since $ \sm(a) = 1 $, the refinable function $ \phi $ defined in \eqref{phi} belongs to $ \Lp{2} $. Define $\tilde{\eta}:=(\phi(\cdot+1)+\phi(\cdot-1))/2$. Therefore, $\{\phi,\tilde{\eta};\psi^1,\psi^2\}_{(1,-1)}$ is a quasi-tight framelet in $\Lp{2}$ and $ \{\psi^1, \psi^2\}_{(1, -1)} $ is a homogeneous quasi-tight framelet in $ \Lp{2} $, where $ \psi^1, \psi^2 $ are defined in \eqref{eta:psi} and have at least two vanishing moments. }\end{example} \begin{figure}[htb!]
\centering \begin{subfigure}[]{0.18\textwidth} \includegraphics[width=\textwidth, height=0.8\textwidth]{pics/ThetaInterpPhi.eps} \caption{$\phi$} \end{subfigure} \begin{subfigure}[]{0.18\textwidth} \includegraphics[width=\textwidth, height=0.8\textwidth]{pics/ThetaInterpEta.eps} \caption{$\tilde{\eta}$} \end{subfigure} \begin{subfigure}[]{0.18\textwidth} \includegraphics[width=\textwidth, height=0.8\textwidth]{pics/ThetaInterpPsi1.eps} \caption{$\psi^1$} \end{subfigure} \begin{subfigure}[]{0.18\textwidth} \includegraphics[width=\textwidth, height=0.8\textwidth]{pics/ThetaInterpPsi2.eps} \caption{$\psi^2$} \end{subfigure} \begin{subfigure}[]{0.18\textwidth} \includegraphics[width=\textwidth, height=0.8\textwidth]{pics/ThetaInterpdet.eps} \caption{$ \det(\cM_{\pa,\pTh}) $ } \end{subfigure} \caption{ The quasi-tight framelet $\{\phi,\tilde{\eta}; \psi^1,\psi^2\}_{(1,-1)}$ and the homogeneous quasi-tight framelet $\{\psi^1,\psi^2\}_{(1,-1)}$ in $\Lp{2}$ obtained in Example~\ref{ex:ThetaInterp}. (A) is the refinable function $\phi\in \Lp{2}$. (B) is the function $\tilde{\eta}:=(\phi(\cdot+1)+\phi(\cdot-1))/2$. (C) and (D) are the framelet functions $\psi^1$ and $ \psi^2 $. (E) is $\det(\cM_{\pa, \pTh}(e^{-i\xi}))$ for $ \xi\in [-\pi, \pi] $. } \label{fig:ThetaInterp} \end{figure} \begin{comment} \begin{example} \label{ex:Two26} Choose $\Theta = \td$ and the low-pass filter $$ \pa(z) = \frac{1}{64}(z^4-8z^3+30z^2-8z+1)(1+z)^2 z^{-3}. $$ Notice that $ \sr(\pa)=2 $ and $ \vmo(1-\pa\pa^\star)=6 $. Hence, the maximum order of vanishing moments we can achieve is two. Take $n_b=2$, then the constructed quasi-tight framelet filter bank $\{\ta; \tb_1, \tb_2\}_{\Theta, (1, -1)}$ is given by: \begin{align*} \pb_1(z) =& \tfrac{(6\sqrt{17}+17\sqrt{2})}{544z^3} (z-1)^2(z^2-4z+1)(z^2+35-6\sqrt{34}), \\ \pb_2(z) =& \tfrac{\sqrt{17}}{1088} (z-1)^2(5z^4-20z^3+78z^2-20z+5). \end{align*} We have $ \vmo(\tb_1) = \vmo(\tb_2) = 2 $.
Since $\sm_2(a) \approx 0.5573$, we know that the refinable function $ \phi $ defined in \eqref{phi} belongs to $ \Lp{2} $. Therefore, $ \{\psi^1, \psi^2\}_{(1, -1)} $ is a homogeneous quasi-tight framelet in $ \Lp{2} $ with $ 2 $ vanishing moments, where $ \psi^1, \psi^2 $ are defined in \eqref{eta:psi}. \begin{figure}[h!] \centering \begin{subfigure}[]{0.24\textwidth} \includegraphics[width=\textwidth, height=0.8\textwidth]{pics/Two26StemLow} \caption{$\ta$} \end{subfigure} \begin{subfigure}[]{0.24\textwidth} \includegraphics[width=\textwidth, height=0.8\textwidth]{pics/Two26StemHi1} \caption{$\tb_1$} \end{subfigure} \begin{subfigure}[]{0.24\textwidth} \includegraphics[width=\textwidth, height=0.8\textwidth]{pics/Two26StemHi2} \caption{$\tb_2$} \end{subfigure} \begin{subfigure}[]{0.24\textwidth} \includegraphics[width=\textwidth, height=0.8\textwidth]{pics/Two26det} \caption{$ \det(\cM_{\pa,1}(e^{-i\xi})) $ } \end{subfigure} \\ \begin{subfigure}[]{0.24\textwidth} \includegraphics[width=\textwidth, height=0.8\textwidth]{pics/Two26phi} \caption{$\phi$} \end{subfigure} \begin{subfigure}[]{0.24\textwidth} \includegraphics[width=\textwidth, height=0.8\textwidth]{pics/Two26psi1} \caption{$\psi^1$} \end{subfigure} \begin{subfigure}[]{0.24\textwidth} \includegraphics[width=\textwidth, height=0.8\textwidth]{pics/Two26psi2} \caption{$\psi^2$} \end{subfigure} \begin{subfigure}[]{0.24\textwidth} \includegraphics[width=\textwidth, height=0.8\textwidth]{pics/Two26freq} \caption{$ |\widehat{\ta}|, |\widehat{\tb_1}|, |\widehat{\tb_2}|$ } \end{subfigure} \caption{In Example~\ref{ex:Two26}: (A),(B) and (C) graphs of the filters $ \ta, \tb_1$ and $\tb_2 $. (D) $ \det(\cM_{\pa,1}(e^{-i\xi})) $ for $ \xi \in [-\pi,\pi] $. (E) refinable function $\phi$ defined in \eqref{phi}. (F) and (G) wavelet functions $\psi^1, \psi^2$ defined in \eqref{eta:psi}. 
(H) magnitude of $|\ha(\xi)|$ (in solid line), $|\hb_1(\xi)|$ (in dotted line) and $|\hb_2(\xi)|$ (in dashed line) for $ \xi\in [-\pi, \pi] $. } \end{figure} \end{example} \end{comment} \begin{example} \label{ex:Two32} {\rm Consider $\pTh(z) = 1$ and the low-pass filter \[ \pa(z) = -\tfrac{1}{16}z^{-2} (z+1)^3 ( z^2 - 4z + 1 ). \] We see from Figure~\ref{fig:Two32} that $\det(\cM_{\pa, 1}(z))\leqslant 0 $ for all $ z\in \T $. Hence, $ s_{a, \Theta}^+ = s_{a, \Theta}^- = 1 $. Note that $ \sr(\pa)=3 $ and $ \vmo(1-\pa\pa^\star)=2 $. Taking $n_b=1$, we obtain a quasi-tight framelet filter bank $\{\ta; \tb_1, \tb_2\}_{\Theta, (1, -1)}$ as follows: \begin{align*} \pb_1(z) =& \tfrac{-\sqrt{2}}{512z^2} (z-1)(16z^5-271z^4+16z^3+16z^2-1), \\ \pb_2(z) =& \tfrac{\sqrt{2}}{512z^2} (z-1)(16z^5+241z^4+16z^3+16z^2-1), \end{align*} with $ \vmo(\tb_1) = \vmo(\tb_2) = 1 $. Since $\sm(a) \approx 1.4408$, the refinable function $ \phi $ defined in \eqref{phi} belongs to $ \Lp{2} $. Therefore, $\{\phi,\phi; \psi^1,\psi^2\}_{(1,-1)}$ is a quasi-tight framelet in $\Lp{2}$ and $\{\psi^1, \psi^2\}_{(1, -1)} $ is a homogeneous quasi-tight framelet in $\Lp{2} $, where $ \psi^1, \psi^2 $ are defined in \eqref{eta:psi} and have one vanishing moment. } \end{example} \begin{figure}[h!]
\centering \begin{subfigure}[]{0.24\textwidth} \includegraphics[width=\textwidth, height=0.8\textwidth]{pics/Two32Phi.eps} \caption{$\phi$} \end{subfigure} \begin{subfigure}[]{0.24\textwidth} \includegraphics[width=\textwidth, height=0.8\textwidth]{pics/Two32Psi1.eps} \caption{$\psi^1$} \end{subfigure} \begin{subfigure}[]{0.24\textwidth} \includegraphics[width=\textwidth, height=0.8\textwidth]{pics/Two32Psi2.eps} \caption{$\psi^2$} \end{subfigure} \begin{subfigure}[]{0.24\textwidth} \includegraphics[width=\textwidth, height=0.8\textwidth]{pics/Two32det.eps} \caption{$ \det(\cM_{\pa,1}(e^{-i\xi})) $ } \end{subfigure} \caption{ The quasi-tight framelet $\{\phi,\phi; \psi^1,\psi^2\}_{(1,-1)}$ and the homogeneous quasi-tight framelet $\{\psi^1,\psi^2\}_{(1,-1)}$ in $\Lp{2}$ obtained in Example~\ref{ex:Two32}. (A) is the refinable function $\phi\in \Lp{2}$. (B) and (C) are the framelet functions $\psi^1$ and $ \psi^2 $. (D) is $\det(\cM_{\pa, 1}(e^{-i\xi}))$ for $ \xi\in [-\pi, \pi] $. }\label{fig:Two32} \end{figure} \begin{comment} \begin{example} \label{ex:Two34} Choose $\Theta = \td$ and the low-pass filter $$ \ha(\xi) = \frac{1+e^{-i\xi}}{2}\cos^2(\xi/2) \left( 1+\frac{3}{2}\sin^2(\xi/2) + 2\sin^4(\xi/2) \right). $$ Notice that $ \sr(\pa)=3 $ and $ \vmo(1-\pa\pa^\star)=4 $. Hence, the maximum order of vanishing moments we can achieve is two. Take $n_b=2$, then the constructed quasi-tight framelet filter bank $\{\ta; \tb_1, \tb_2\}_{\Theta, \{1, -1\}}$ is given by: \begin{align*} \pb_1(z) =& \tfrac{\sqrt{119327}}{7636928z^3} (z-1)^2(289z^5-578z^4-5667z^3+17981z^2+9342z-28223), \\ \pb_2(z) =& \tfrac{-\sqrt{119327}}{1909232z^3} (z-1)^2(23z^3-46z^2-418z+1365)\left((\sqrt{17}+1)z^2-\sqrt{17}+1\right). \end{align*} We have $ \vmo(\tb_1) = \vmo(\tb_2) = 2 $. Since $\sm_2(a) \approx 1.1268$, we know that the refinable function $ \phi $ defined in \eqref{phi} belongs to $ \Lp{2} $. 
Therefore, $ \{\psi^1, \psi^2\}_{(1, -1)} $ is a homogeneous quasi-tight framelet in $ \Lp{2} $ with $ 2 $ vanishing moments, where $ \psi^1, \psi^2 $ are defined in \eqref{eta:psi}. \begin{figure}[h!] \centering \begin{subfigure}[]{0.24\textwidth} \includegraphics[width=\textwidth, height=0.8\textwidth]{pics/Two34StemLow} \caption{$\ta$} \end{subfigure} \begin{subfigure}[]{0.24\textwidth} \includegraphics[width=\textwidth, height=0.8\textwidth]{pics/Two34StemHi1} \caption{$\tb_1$} \end{subfigure} \begin{subfigure}[]{0.24\textwidth} \includegraphics[width=\textwidth, height=0.8\textwidth]{pics/Two34StemHi2} \caption{$\tb_2$} \end{subfigure} \begin{subfigure}[]{0.24\textwidth} \includegraphics[width=\textwidth, height=0.8\textwidth]{pics/Two34det} \caption{$ \det(\cM_{\pa,1}(e^{-i\xi})) $} \end{subfigure} \\ \begin{subfigure}[]{0.24\textwidth} \includegraphics[width=\textwidth, height=0.8\textwidth]{pics/Two34phi} \caption{$\phi$} \end{subfigure} \begin{subfigure}[]{0.24\textwidth} \includegraphics[width=\textwidth, height=0.8\textwidth]{pics/Two34psi1} \caption{$\psi^1$} \end{subfigure} \begin{subfigure}[]{0.24\textwidth} \includegraphics[width=\textwidth, height=0.8\textwidth]{pics/Two34psi2} \caption{$\psi^2$} \end{subfigure} \begin{subfigure}[]{0.24\textwidth} \includegraphics[width=\textwidth, height=0.8\textwidth]{pics/Two34freq} \caption{$ |\widehat{\ta}|, |\widehat{\tb_1}|, |\widehat{\tb_2}|$ } \end{subfigure} \caption{In Example~\ref{ex:Two34}: (A),(B) and (C) graphs of the filters $ \ta, \tb_1$ and $\tb_2 $. (D) $ \det(\cM_{\pa,1}(e^{-i\xi})) $ for $ \xi \in [-\pi,\pi] $. (E) refinable function $\phi$ defined in \eqref{phi}. (F) and (G) wavelet functions $\psi^1, \psi^2$ defined in \eqref{eta:psi}. (H) magnitude of $|\ha(\xi)|$ (in solid line), $|\hb_1(\xi)|$ (in dotted line) and $|\hb_2(\xi)|$ (in dashed line) for $ \xi\in [-\pi, \pi] $. 
} \end{figure} \end{example} \begin{example} \label{ex:Two38} {\rm Consider $\Theta = \td$ and the low-pass filter $$ \ha(\xi) = \frac{1+e^{-i\xi}}{2}\cos^2(\xi/2) \left( 1+\frac{3}{2}\sin^2(\xi/2) + \frac{15}{8}\sin^4(\xi/2) + \frac{35}{16}\sin^6(\xi/2) \right). $$ Note that $ \sr(\pa)=3 $ and $ \vmo(1-\pa\pa^\star)=8 $. Hence, the maximum order of vanishing moments is three. Taking $n_b=3$, we obtain a quasi-tight framelet filter bank $\{\ta; \tb_1, \tb_2\}_{\Theta, \{1, -1\}}$ as follows: \begin{align*} \pb_1(z) =& \tfrac{-(z-1)^3}{94208z^4} (1015z^6-3480z^5+14361z^4-30512z^3+14361z^2-3480z+1015), \\ \pb_2(z) =& \tfrac{15\sqrt{7}(z-1)^3}{188416z^4} (21z^4-72z^3+134z^2-72z+21)\left( (2\sqrt{7}+\sqrt{23})z^2 + 2\sqrt{7}-\sqrt{23} \right) \end{align*} with $ \vmo(\tb_1) = \vmo(\tb_2) = 3 $. Since $\sm_2(a) \approx 0.8297$, the refinable function $ \phi $ defined in \eqref{phi} belongs to $ \Lp{2} $. Therefore, $\{\phi,\phi;\psi^1,\psi^2\}_{(1,-1)}$ is a quasi-tight framelet in $\Lp{2}$ and $ \{\psi^1, \psi^2\}_{(1, -1)} $ is a homogeneous quasi-tight framelet in $ \Lp{2} $, where $ \psi^1, \psi^2 $ are defined in \eqref{eta:psi} and have $3$ vanishing moments. }\end{example} \begin{figure}[h!] 
\centering \begin{subfigure}[]{0.24\textwidth} \includegraphics[width=\textwidth, height=0.8\textwidth]{pics/Two38StemLow} \caption{$\ta$} \end{subfigure} \begin{subfigure}[]{0.24\textwidth} \includegraphics[width=\textwidth, height=0.8\textwidth]{pics/Two38StemHi1} \caption{$\tb_1$} \end{subfigure} \begin{subfigure}[]{0.24\textwidth} \includegraphics[width=\textwidth, height=0.8\textwidth]{pics/Two38StemHi2} \caption{$\tb_2$} \end{subfigure} \begin{subfigure}[]{0.24\textwidth} \includegraphics[width=\textwidth, height=0.8\textwidth]{pics/Two38det} \caption{$ \det(\cM_{\pa,1}(e^{-i\xi})) $} \end{subfigure} \\ \begin{subfigure}[]{0.24\textwidth} \includegraphics[width=\textwidth, height=0.8\textwidth]{pics/Two38phi} \caption{$\phi$} \end{subfigure} \begin{subfigure}[]{0.24\textwidth} \includegraphics[width=\textwidth, height=0.8\textwidth]{pics/Two38psi1} \caption{$\psi^1$} \end{subfigure} \begin{subfigure}[]{0.24\textwidth} \includegraphics[width=\textwidth, height=0.8\textwidth]{pics/Two38psi2} \caption{$\psi^2$} \end{subfigure} \begin{subfigure}[]{0.24\textwidth} \includegraphics[width=\textwidth, height=0.8\textwidth]{pics/Two38freq} \caption{$ |\widehat{\ta}|, |\widehat{\tb_1}|, |\widehat{\tb_2}|$ } \end{subfigure} \caption{In Example~\ref{ex:Two38}: (A),(B) and (C) graphs of the filters $ \ta, \tb_1$ and $\tb_2 $. (D) $ \det(\cM_{\pa,1}(e^{-i\xi})) $ for $ \xi \in [-\pi,\pi] $. (E) refinable function $\phi$ defined in \eqref{phi}. (F) and (G) wavelet functions $\psi^1, \psi^2$ defined in \eqref{eta:psi}. (H) magnitude of $|\ha(\xi)|$ (in solid line), $|\hb_1(\xi)|$ (in dotted line) and $|\hb_2(\xi)|$ (in dashed line) for $ \xi\in [-\pi, \pi] $. } \end{figure} \end{example} \end{comment} \begin{example} \label{ex:Two44} {\rm Consider $\pTh(z) = 1$ and the low-pass filter $$ \pa(z) = \tfrac{1}{64}z^{-4}(z+1)^4 (z^4 - 6z^3 + 14 z^2 - 6z + 1). $$ We see from Figure~\ref{fig:Two44} that $ \det(\cM_{\pa, 1}(z))\leqslant 0 $ for all $ z\in \T $. 
Hence, $ s_{a, \Theta}^+ = s_{a, \Theta}^- = 1 $. Note that $ \sr(\pa)=4 $ and $ \vmo(1-\pa\pa^\star)=4 $. Thus, the maximum order of vanishing moments is two. Taking $n_b=2$, we obtain a quasi-tight framelet filter bank $\{\ta; \tb_1, \tb_2\}_{\Theta, (1, -1)}$ as follows: \begin{align*} \pb_1(z) =& \tfrac{\sqrt{2}}{16z^2}(z-1)^2 \left( (2+\sqrt{3})z^4 +2-\sqrt{3} \right), \\ \pb_2(z) =& \tfrac{1}{64z^4} (z-1)^2 (z^6+11z^4+8z^3+11z^2+1), \end{align*} with $ \vmo(\tb_1) = \vmo(\tb_2) = 2 $. Since $\sm(a) \approx 1.6297$, the refinable function $ \phi $ defined in \eqref{phi} belongs to $ \Lp{2} $. Therefore, $\{\phi,\phi;\psi^1,\psi^2\}_{(1,-1)}$ is a quasi-tight framelet in $\Lp{2}$ and $ \{\psi^1, \psi^2\}_{(1, -1)} $ is a homogeneous quasi-tight framelet in $ \Lp{2} $, where $ \psi^1, \psi^2 $ are defined in \eqref{eta:psi} and have two vanishing moments. } \end{example} \begin{figure}[htb!] \centering \begin{subfigure}[]{0.24\textwidth} \includegraphics[width=\textwidth, height=0.8\textwidth]{pics/Two44Phi.eps} \caption{$\phi$} \end{subfigure} \begin{subfigure}[]{0.24\textwidth} \includegraphics[width=\textwidth, height=0.8\textwidth]{pics/Two44Psi1.eps} \caption{$\psi^1$} \end{subfigure} \begin{subfigure}[]{0.24\textwidth} \includegraphics[width=\textwidth, height=0.8\textwidth]{pics/Two44Psi2.eps} \caption{$\psi^2$} \end{subfigure} \begin{subfigure}[]{0.24\textwidth} \includegraphics[width=\textwidth, height=0.8\textwidth]{pics/Two44det.eps} \caption{$ \det(\cM_{\pa,1}(e^{-i\xi})) $} \end{subfigure} \caption{ The quasi-tight framelet $\{\phi,\phi; \psi^1,\psi^2\}_{(1,-1)}$ in $\Lp{2}$ and the homogeneous quasi-tight framelet $\{\psi^1,\psi^2\}_{(1,-1)}$ in $\Lp{2}$ obtained in Example~\ref{ex:Two44}. (A) is the refinable function $\phi\in \Lp{2}$. (B) and (C) are the framelet functions $\psi^1$ and $ \psi^2 $. (D) is $\det(\cM_{\pa, 1}(e^{-i\xi}))$ for $ \xi\in [-\pi, \pi] $.
}\label{fig:Two44} \end{figure} \begin{comment} \begin{example} \label{ex:Two48} Choose $\Theta = \td$ and the low-pass filter $$ \ha(\xi) = \cos^4(\xi/2) \left( 1+2\sin^2(\xi/2) + 3 \sin^4(\xi/2) + 4 \sin^6(\xi/2) \right). $$ Notice that $ \sr(\pa)=4 $ and $ \vmo(1-\pa\pa^\star)=8 $. Hence, the maximum order of vanishing moments we can achieve is four. Take $n_b=4$, then the constructed quasi-tight framelet filter bank $\{\ta; \tb_1, \tb_2\}_{\Theta, \{1, -1\}}$ is given by: \begin{align*} \pb_1(z) =& \tfrac{\sqrt{82}(z-1)^4}{2624z^5} (z^4-z^3+6z^2-z+1) \left( (3\sqrt{5}+\sqrt{41})z^2 +3\sqrt{5}-\sqrt{41} \right), \\ \pb_2(z) =& \tfrac{-\sqrt{41}(z-1)^4}{10496z^5} (13z^6-13z^5+255z^4-190z^3+255z^2-13z+13). \end{align*} We have $ \vmo(\tb_1) = \vmo(\tb_2) = 4 $. Since $\sm_2(a) \approx 1.3516$, we know that the refinable function $ \phi $ defined in \eqref{phi} belongs to $ \Lp{2} $. Therefore, $ \{\psi^1, \psi^2\}_{(1, -1)} $ is a homogeneous quasi-tight framelet in $ \Lp{2} $ with $ 4 $ vanishing moments, where $ \psi^1, \psi^2 $ are defined in \eqref{eta:psi}. \begin{figure}[h!] 
\centering \begin{subfigure}[]{0.24\textwidth} \includegraphics[width=\textwidth, height=0.8\textwidth]{pics/Two48StemLow} \caption{$\ta$} \end{subfigure} \begin{subfigure}[]{0.24\textwidth} \includegraphics[width=\textwidth, height=0.8\textwidth]{pics/Two48StemHi1} \caption{$\tb_1$} \end{subfigure} \begin{subfigure}[]{0.24\textwidth} \includegraphics[width=\textwidth, height=0.8\textwidth]{pics/Two48StemHi2} \caption{$\tb_2$} \end{subfigure} \begin{subfigure}[]{0.24\textwidth} \includegraphics[width=\textwidth, height=0.8\textwidth]{pics/Two48det} \caption{$ \det(\cM_{\pa,1}(e^{-i\xi})) $} \end{subfigure} \\ \begin{subfigure}[]{0.24\textwidth} \includegraphics[width=\textwidth, height=0.8\textwidth]{pics/Two48phi} \caption{$\phi$} \end{subfigure} \begin{subfigure}[]{0.24\textwidth} \includegraphics[width=\textwidth, height=0.8\textwidth]{pics/Two48psi1} \caption{$\psi^1$} \end{subfigure} \begin{subfigure}[]{0.24\textwidth} \includegraphics[width=\textwidth, height=0.8\textwidth]{pics/Two48psi2} \caption{$\psi^2$} \end{subfigure} \begin{subfigure}[]{0.24\textwidth} \includegraphics[width=\textwidth, height=0.8\textwidth]{pics/Two48freq} \caption{$ |\widehat{\ta}|, |\widehat{\tb_1}|, |\widehat{\tb_2}|$ } \end{subfigure} \caption{In Example~\ref{ex:Two48}: (A),(B) and (C) graphs of the filters $ \ta, \tb_1$ and $\tb_2 $. (D) $ \det(\cM_{\pa,1}(e^{-i\xi})) $ for $ \xi \in [-\pi,\pi] $. (E) refinable function $\phi$ defined in \eqref{phi}. (F) and (G) wavelet functions $\psi^1, \psi^2$ defined in \eqref{eta:psi}. (H) magnitude of $|\ha(\xi)|$ (in solid line), $|\hb_1(\xi)|$ (in dotted line) and $|\hb_2(\xi)|$ (in dashed line) for $ \xi\in [-\pi, \pi] $.} \end{figure} \end{example} \end{comment} \begin{example} \label{ex:ThreeHigh22} {\rm Consider $\pTh(z) = 1$ and the low-pass filter \[ \pa(z) = \tfrac{2-\sqrt{6}}{8}z^{-2}(z+1)^2(z^2 - (4+\sqrt{6})z + 1). \] We see from Figure~\ref{fig:Three22} that $ \det(\cM_{\pa, 1}(z)) $ changes sign on $\T $. 
Hence, $ s_{a, \Theta}^+ = 2 $ and $ s_{a, \Theta}^- = 1 $. Note that $ \sr(\pa)=2 $ and $ \vmo(1-\pa\pa^\star)=2 $. Therefore, the maximum order of vanishing moments is one. Taking $n_b=1$, we obtain a quasi-tight framelet filter bank $\{\ta; \tb_1, \tb_2, \tb_3\}_{\Theta, (1, 1, -1)}$ as follows: \begin{align*} \pb_1(z) =& \tfrac{\sqrt{10}}{40}(z-1) z^{-2} \left( (\sqrt{6}-3)z^3 + (2\sqrt{6}-3)z^2 + (2\sqrt{6}-1)z - \sqrt{6}-1 \right),\\ \pb_2(z) =& \tfrac{\sqrt{10}}{40}(z-1) z^{-2} \left( (\sqrt{6}-2)z^3 + (\sqrt{6}-4)z^2 + (\sqrt{6}-2)z + \sqrt{6}-4 \right),\\ \pb_3(z) =& \tfrac{\sqrt{10}}{40}(z-1)^2 z^{-2} \left( (2-\sqrt{6})z^2 + (6-2\sqrt{6}) z + 4 - \sqrt{6} \right), \end{align*} with $ \vmo(\tb_1) = \vmo(\tb_2) = 1 $ and $ \vmo(\tb_3) = 2 $. Since $\sm(a) \approx 0.9382$, the refinable function $ \phi $ defined in \eqref{phi} belongs to $ \Lp{2} $. Therefore, $\{\phi,\phi;\psi^1,\psi^2,\psi^3\}_{(1,1,-1)}$ is a quasi-tight framelet in $\Lp{2}$ and $ \{\psi^1, \psi^2, \psi^3\}_{(1, 1, -1)} $ is a homogeneous quasi-tight framelet in $ \Lp{2} $, where $ \psi^1, \psi^2, \psi^3 $ are defined in \eqref{eta:psi} and have one vanishing moment. } \end{example} \begin{figure}[h!]
\centering \begin{subfigure}[]{0.18\textwidth} \includegraphics[width=\textwidth, height=0.8\textwidth]{pics/Three22Phi.eps} \caption{$\phi$} \end{subfigure} \begin{subfigure}[]{0.18\textwidth} \includegraphics[width=\textwidth, height=0.8\textwidth]{pics/Three22Psi1.eps} \caption{$\psi^1$} \end{subfigure} \begin{subfigure}[]{0.18\textwidth} \includegraphics[width=\textwidth, height=0.8\textwidth]{pics/Three22Psi2.eps} \caption{$\psi^2$} \end{subfigure} \begin{subfigure}[]{0.18\textwidth} \includegraphics[width=\textwidth, height=0.8\textwidth]{pics/Three22Psi3.eps} \caption{$\psi^3 $ } \end{subfigure} \begin{subfigure}[]{0.18\textwidth} \includegraphics[width=\textwidth, height=0.8\textwidth]{pics/Three22det.eps} \caption{$ \det(\cM_{\pa, 1})$} \end{subfigure} \caption{ The quasi-tight framelet $\{\phi,\phi; \psi^1,\psi^2,\psi^3\}_{(1,1,-1)}$ in $\Lp{2}$ and the homogeneous quasi-tight framelet $\{\psi^1,\psi^2,\psi^3\}_{(1,1,-1)}$ in $\Lp{2}$ obtained in Example~\ref{ex:ThreeHigh22}. (A) is the refinable function $\phi\in \Lp{2}$. (B), (C), and (D) are the framelet functions $\psi^1$, $ \psi^2 $ and $ \psi^3$. (E) is $\det(\cM_{\pa, 1}(e^{-i\xi}))$ for $ \xi\in [-\pi, \pi] $. } \label{fig:Three22} \end{figure} \begin{comment} \begin{example} \label{ex:ThreeHigh32} Choose $\hTh(\xi) = 1$ and the low-pass filter $$ \ha(\xi) = \frac{1+e^{-i\xi}}{2}\cos^2(\xi/2) \left( 1 + \left(2\sqrt{15}-6\right)\sin^2(\xi/2) \right). $$ Notice that $ \sr(\pa)=3 $, $ \vmo(1-\pa\pa^\star)=2 $. Hence, the maximum order of vanishing moments we can achieve is one. From the frequency plot of $ \det(\cM_{\pa, 1}(e^{-i\xi})) $ in Figure~\ref{fig:Three32}(I), we see that $ s_{a,\Theta}^+=2 $ and $ s_{a, \Theta}^- = 1 $. Hence we need at least three high-pass filters to construct a quasi-tight framelet filter bank. 
Take $n_b=1$, then the constructed quasi-tight framelet filter bank $\{\ta; \tb_1, \tb_2, \tb_3\}_{\Theta, \{1, 1, -1\}}$ is given by: \begin{align*} \tb_1 =& \tfrac{1}{16} \{\sqrt{15}+3,~ -\sqrt{15}-5,~ 2\sqrt{15},~ -2\sqrt{15}, ~\sqrt{15}+5, ~ -3 - \sqrt{15} \}_{[-2,3]},\\ \tb_2 =& \tfrac{\sqrt[4]{15}\sqrt{2}}{8} \{\sqrt{3}, ~ -\sqrt{5}, ~\sqrt{5}, ~-\sqrt{3} \}_{[0, 3]}, \\ \tb_3 =& \tfrac{\sqrt[4]{15}}{8} \{ \sqrt{3}, ~-\sqrt{5}, ~\sqrt{5}+\sqrt{3}, ~ -\sqrt{5}-\sqrt{3}, ~\sqrt{5}, ~-\sqrt{3} \}_{[-2,3]}. \end{align*} We have $ \vmo(\tb_1) = \vmo(\tb_2) = \vmo(\tb_3) = 1 $. Since $\sm_2(a) \approx 1.5420$, we know that the refinable function $ \phi $ defined in \eqref{phi} belongs to $ \Lp{2} $. Therefore, $ \{\psi^1, \psi^2, \psi^3\}_{(1, 1, -1)} $ is a homogeneous quasi-tight framelet in $ \Lp{2} $ with at least $ 1 $ vanishing moments, where $ \psi^1, \psi^2, \psi^3 $ are defined in \eqref{eta:psi}. \begin{figure}[h!] \centering \begin{subfigure}[]{0.24\textwidth} \includegraphics[width=\textwidth, height=0.8\textwidth]{pics/Three32StemLow} \caption{$\ta$} \end{subfigure} \begin{subfigure}[]{0.24\textwidth} \includegraphics[width=\textwidth, height=0.8\textwidth]{pics/Three32StemHi1} \caption{$\tb_1$} \end{subfigure} \begin{subfigure}[]{0.24\textwidth} \includegraphics[width=\textwidth, height=0.8\textwidth]{pics/Three32StemHi2} \caption{$\tb_2$} \end{subfigure} \begin{subfigure}[]{0.24\textwidth} \includegraphics[width=\textwidth, height=0.8\textwidth]{pics/Three32StemHi3} \caption{$\tb_3 $ } \end{subfigure} \\ \begin{subfigure}[]{0.24\textwidth} \includegraphics[width=\textwidth, height=0.8\textwidth]{pics/Three32Phi} \caption{$\phi$} \end{subfigure} \begin{subfigure}[]{0.24\textwidth} \includegraphics[width=\textwidth, height=0.8\textwidth]{pics/Three32Psi1} \caption{$\psi^1$} \end{subfigure} \begin{subfigure}[]{0.24\textwidth} \includegraphics[width=\textwidth, height=0.8\textwidth]{pics/Three32Psi2} \caption{$\psi^2$} \end{subfigure} 
\begin{subfigure}[]{0.24\textwidth} \includegraphics[width=\textwidth, height=0.8\textwidth]{pics/Three32Psi3} \caption{$\psi^3 $ } \end{subfigure} \\ \begin{subfigure}[]{0.24\textwidth} \includegraphics[width=\textwidth, height=0.8\textwidth]{pics/Three32det} \caption{$\det(\cM_{\pa,1}(e^{-i\xi}))$} \end{subfigure} \begin{subfigure}[]{0.24\textwidth} \includegraphics[width=\textwidth, height=0.8\textwidth]{pics/Three32freq} \caption{$ |\widehat{\ta}|, |\widehat{\tb_1}|, |\widehat{\tb_2}|, |\widehat{\tb_3}| $} \end{subfigure} \caption{In Example~\ref{ex:ThreeHigh32}: (A) - (D) graphs of the filters $\ta,\tb_1,\tb_2$ and $\tb_3 $. (E) refinable function $\phi$. (F) - (H) wavelet functions $\psi^1$, $ \psi^2 $ and $ \psi^3 $. (I) $\det(\cM_{\pa,1}(e^{-i\xi}))$ for $ \xi\in [-\pi, \pi] $, where the blue dashed line is $ y=0 $. (J) $ |\ha(\xi)| $ (in solid line), $|\hb_1(\xi)|$ (in dotted line), $|\hb_2(\xi)|$ (in dashed line) and $ |\hb_3(\xi)| $ (in dash-dotted line) for $ \xi\in [-\pi, \pi] $. } \label{fig:Three32} \end{figure} \end{example} \begin{example} \label{ex:ThreeHigh34} Choose $\hTh(\xi) = 1$ and the low-pass filter $$ \ha(\xi) = \frac{1+e^{-i\xi}}{2}\cos^2(\xi/2) \left( 1 + \tfrac{3}{2}\sin^2(\xi/2) + \tfrac{6}{5}\sin^4(\xi/2) \right). $$ Notice that $ \sr(\pa)=3 $, $ \vmo(1-\pa\pa^\star)=4 $. Hence, the maximum order of vanishing moments we can achieve is two. From the frequency plot of $ \det(\cM_{\pa, 1}(e^{-i\xi}))$ in Figure~\ref{fig:Three34}(I), we see that $ s_{a,\Theta}^+=2 $ and $ s_{a, \Theta}^- = 1 $. Hence we need at least three high-pass filters to construct a quasi-tight framelet filter bank. 
Take $n_b=2$, then the constructed quasi-tight framelet filter bank $\{\ta; \tb_1, \tb_2, \tb_3\}_{\Theta, \{1, 1, -1\}}$ is given by: \begin{align*} \tb_1 =& \tfrac{\sqrt{2741311}}{877219520} \{ 55379, -224754, 191376, 85039, -118449, 6864, 5454, -909 \}_{[-3,4]},\\ \tb_2 =& \tfrac{3\sqrt{8223933}}{109652440} \{ 2122, -4652, 3943, -2608, 1357, -127, -42, 7 \}_{[-3, 4]}, \\ \tb_3 =& \tfrac{3\sqrt{8223933}}{54826220} \{ 1061, -2326, 1441, -141, -42, 7 \}_{[-3,2]}. \end{align*} We have $ \vmo(\tb_1) = \vmo(\tb_2) = \vmo(\tb_3) = 2 $. Since $\sm_2(a) \approx 1.3125$, we know that the refinable function $ \phi $ defined in \eqref{phi} belongs to $ \Lp{2} $. Therefore, $ \{\psi^1, \psi^2, \psi^3\}_{(1, -1, 1)} $ is a homogeneous quasi-tight framelet in $ \Lp{2} $ with at least $ 2 $ vanishing moments, where $ \psi^1, \psi^2, \psi^3 $ are defined in \eqref{eta:psi}. \begin{figure}[h!] \centering \begin{subfigure}[]{0.24\textwidth} \includegraphics[width=\textwidth, height=0.8\textwidth]{pics/Three34StemLow} \caption{$\ta$} \end{subfigure} \begin{subfigure}[]{0.24\textwidth} \includegraphics[width=\textwidth, height=0.8\textwidth]{pics/Three34StemHi1} \caption{$\tb_1$} \end{subfigure} \begin{subfigure}[]{0.24\textwidth} \includegraphics[width=\textwidth, height=0.8\textwidth]{pics/Three34StemHi2} \caption{$\tb_2$} \end{subfigure} \begin{subfigure}[]{0.24\textwidth} \includegraphics[width=\textwidth, height=0.8\textwidth]{pics/Three34StemHi3} \caption{$\tb_3 $ } \end{subfigure} \\ \begin{subfigure}[]{0.24\textwidth} \includegraphics[width=\textwidth, height=0.8\textwidth]{pics/Three34Phi} \caption{$\phi$} \end{subfigure} \begin{subfigure}[]{0.24\textwidth} \includegraphics[width=\textwidth, height=0.8\textwidth]{pics/Three34Psi1} \caption{$\psi^1$} \end{subfigure} \begin{subfigure}[]{0.24\textwidth} \includegraphics[width=\textwidth, height=0.8\textwidth]{pics/Three34Psi2} \caption{$\psi^2$} \end{subfigure} \begin{subfigure}[]{0.24\textwidth} 
\includegraphics[width=\textwidth, height=0.8\textwidth]{pics/Three34Psi3} \caption{$\psi^3 $ } \end{subfigure} \\ \begin{subfigure}[]{0.24\textwidth} \includegraphics[width=\textwidth, height=0.8\textwidth]{pics/Three34det} \caption{$\det(\cM_{\pa, 1}(e^{-i\xi}))$} \end{subfigure} \begin{subfigure}[]{0.24\textwidth} \includegraphics[width=\textwidth, height=0.8\textwidth]{pics/Three34freq} \caption{$ |\widehat{\ta}|, |\widehat{\tb_1}|, |\widehat{\tb_2}|, |\widehat{\tb_3}| $} \end{subfigure} \caption{In Example~\ref{ex:ThreeHigh34}: (A) - (D) graphs of the filters $\ta,\tb_1,\tb_2$ and $\tb_3 $. (E) refinable function $\phi$. (F) - (H) wavelet functions $\psi^1$, $ \psi^2 $ and $ \psi^3 $. (I) $\det(\cM_{\pa, 1}(e^{-i\xi}))$ for $ \xi\in [-\pi, \pi] $, where the blue dashed line is $ y=0 $. (J) $ |\ha(\xi)| $ (in solid line), $|\hb_1(\xi)|$ (in dotted line), $|\hb_2(\xi)|$ (in dashed line) and $ |\hb_3(\xi)| $ (in dash-dotted line) for $ \xi\in [-\pi, \pi] $. } \label{fig:Three34} \end{figure} \end{example} \begin{example} \label{ex:ThetaTwo32} Take the same low-pass filter $ a $ as in Example~\ref{ex:Two32}, and $$ \hTh(\xi) = 1-\tfrac{1}{3}\sin^2(\xi/2) + \tfrac{1}{3}\sin^4(\xi/2). $$ Then we have $ \sr(\pa) = 3 $ and $ \vmo(\pTh(z) - \pTh(z^2)\pa(z)\pa^\star(z)) = 6 $. Hence, the maximum order of vanishing moments we can achieve is three. From the eigenvalues plot of $ \cM_{\pa, \pTh}(e^{-i\xi}) $ in Figure~\ref{fig:ThetaTwo32}(G), we see that $ s_{a, \Theta}^+ = s_{a, \Theta}^- = 1$. That is, we need at least $ 2 $ high-pass filters to construct a quasi-tight framelet filter bank. Take $ n_b = 3 $, then the constructed quasi-tight framelet filter bank $\{\ta; \tb_1, \tb_2\}_{\Theta, \{1, -1\}}$ is given by: \begin{figure}[h!] 
\centering \begin{subfigure}[]{0.24\textwidth} \includegraphics[width=\textwidth, height=0.8\textwidth]{pics/ThetaTwo32StemTheta} \caption{$\Theta$} \end{subfigure} \begin{subfigure}[]{0.24\textwidth} \includegraphics[width=\textwidth, height=0.8\textwidth]{pics/ThetaTwo32StemHi1} \caption{$\tb_1$} \end{subfigure} \begin{subfigure}[]{0.24\textwidth} \includegraphics[width=\textwidth, height=0.8\textwidth]{pics/ThetaTwo32StemHi2} \caption{$\tb_2$} \end{subfigure} \begin{subfigure}[]{0.24\textwidth} \includegraphics[width=\textwidth, height=0.8\textwidth]{pics/ThetaTwo32freq} \caption{$|\ha|,|\hb_1|,|\hb_2|,|\hTh| $ } \end{subfigure} \\ \begin{subfigure}[]{0.24\textwidth} \includegraphics[width=\textwidth, height=0.8\textwidth]{pics/ThetaTwo32Psi1} \caption{$\psi^1$} \end{subfigure} \begin{subfigure}[]{0.24\textwidth} \includegraphics[width=\textwidth, height=0.8\textwidth]{pics/ThetaTwo32Psi2} \caption{$\psi^2$} \end{subfigure} \begin{subfigure}[]{0.24\textwidth} \includegraphics[width=\textwidth, height=0.8\textwidth]{pics/ThetaTwo32eiv} \caption{$\lambda_1(\xi), \lambda_2(\xi)$} \end{subfigure} \caption{In Example~\ref{ex:ThetaTwo32}: (A) - (C) graphs of the filters $\tb_1,\tb_2$ and $\Theta $. (D) $ |\ha(\xi)| $ (in solid line), $|\hb_1(\xi)|$ (in dotted line), $|\hb_2(\xi)|$ (in dashed line) and $ |\hTh(\xi)| $ (in dash-dotted line) for $ \xi\in [-\pi, \pi] $. (E)(F) wavelet functions $\psi^1$, and $ \psi^2 $. (G) $ 2 $ eigenvalues of $\cM_{\pa, \pTh}(e^{-i\xi})$ for $ \xi\in [-\pi, \pi] $, where the blue dashed line is $ y=0 $. } \label{fig:ThetaTwo32} \end{figure} \end{example} \end{comment} \section{Proof of Theorem~\ref{thm:const-sig} on Generalized Spectral Factorization for Matrices with Constant Signature} In this section, we shall prove Theorem~\ref{thm:const-sig} on generalized matrix spectral factorization for Hermite matrices of Laurent polynomials with constant signature. 
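Before turning to the proof, it may help to record the scalar case $n=1$ (a small illustration of ours, not from the paper): a $1\times 1$ Hermite matrix of Laurent polynomials is a single Laurent polynomial $\pA(z)$ that is real-valued on $\T$, constant signature means that $\pA(z)$ does not change sign on $\T$, and the factorization reduces to the classical Fej\'er--Riesz lemma. For instance,

```latex
\pA(z) = z + z^{-1} - 2 = -(1-z)(1-z^{-1}) = \pU(z)\, \pD\, \pU^\star(z),
\qquad \pU(z) = 1-z, \quad \pD = -1,
```

since $\pA(z) = -|1-z|^2 \leqslant 0$ for all $z\in\T$, so that $(\nu_+,\nu_-) = (0,1)$.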
To improve presentation and readability, the proofs of several auxiliary results for proving Theorem~\ref{thm:const-sig} are deferred to the Appendix. We call an $ n\times n $ square matrix $ \pU(z) $ of Laurent polynomials \emph{unimodular} if $ \det(\pU(z))\not\equiv 0 $ is a monomial (a Laurent polynomial with exactly one term). $ \pU(z) $ is unimodular if and only if there exists a unique $ n \times n $ matrix $ \pU^{-1}(z) = \tfrac{1}{\det(\pU(z))}\operatorname{adj}(\pU(z)) $ of Laurent polynomials such that $ \pU(z)\pU^{-1}(z) = \pU^{-1}(z)\pU(z) = \mathbf{I}_n $. To prove Theorem~\ref{thm:const-sig}, we first show that it holds under the additional condition that $\det(\pA(z))$ is a nonzero monomial. The general case of Theorem~\ref{thm:const-sig} will then be proved by extracting the nontrivial factors of $\det(\pA(z))$ one by one. \begin{theorem}[Unimodular Case for Theorem~\ref{thm:const-sig}] \label{thm:unimodular} For an $n \times n$ Hermite matrix $\pA(z)$ of Laurent polynomials with a nonzero monomial $\det(\pA(z))$, there exists an $n \times n$ matrix $\pU(z)$ of Laurent polynomials such that $\pA(z)=\pU(z)\pD\pU^\star(z)$, where $\pD:=\diag(\mathbf{I}_{\nu_+}, -\mathbf{I}_{\nu_-})$ and $n=\nu_++\nu_-$ for some nonnegative integers $ \nu_+$ and $ \nu_- $. \end{theorem} Theorem~\ref{thm:unimodular} is known for rings with involution (e.g., see \cite{Lyu:1973Factorization,Lyu:1973Factorizationa,Cop:1972Linear,Djo:1976Hermitian}), including rings of (Laurent) polynomials as special cases. To provide a self-contained proof of Theorem~\ref{thm:unimodular}, we present Algorithm~\ref{algo:unimodular}, which constructs the desired matrices $ \pU(z) $ and $ \pD $ in Theorem~\ref{thm:unimodular}, and we show that Algorithm~\ref{algo:unimodular} is feasible and terminates in finitely many steps.
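As a quick sanity check (not part of the proofs), the unimodularity criterion can be verified symbolically. The following sympy sketch uses an arbitrary illustrative $2\times 2$ matrix, not one taken from this paper, to confirm that a matrix whose determinant is a nonzero monomial has an inverse whose entries are again Laurent polynomials:

```python
from sympy import symbols, Matrix, simplify, eye

z = symbols('z')

# A hypothetical 2x2 matrix of Laurent polynomials;
# det(U) = (z + 1) - z*(1/z) = z, a nonzero monomial, so U is unimodular.
U = Matrix([[1, z], [1/z, z + 1]])
d = simplify(U.det())
assert d == z

# Its inverse adj(U)/det(U) again has Laurent polynomial entries,
# and U * U^{-1} = U^{-1} * U = I.
Uinv = simplify(U.inv())
assert simplify(U * Uinv) == eye(2)
assert simplify(Uinv * U) == eye(2)
```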
For a Laurent polynomial $ \pu(z) \not\equiv 0 $, we use $ \deg(\pu(z)) $ to denote its highest degree, and use $ \operatorname{ldeg}(\pu(z)) $ to denote its lowest degree. We define the length of $ \pu(z) $ as $ \len(\pu) := \deg(\pu) - \operatorname{ldeg}(\pu) $, and the interval $ \fs(\pu(z)):=[\operatorname{ldeg}(\pu(z)), ~\deg(\pu(z))] $. If $ \pu(z)\equiv 0 $, then we define $ \len(\pu) := -\infty $ and $ \fs(\pu) $ to be the empty set. A $ k\times k $ matrix $ \pQ(z) $ of Laurent polynomials is called \emph{diagonally dominant at the diagonal entry $s$} if \begin{enumerate} \item for all $ i\neq s $: \begin{equation} \label{def:DiagDominant1} \fs(\pQ_{i,s}(z))\subsetneq \fs(\pQ_{s,s}(z))\quad \mbox{and}~ \fs(\pQ_{s,i}(z))\subsetneq \fs(\pQ_{s,s}(z)); \end{equation} \item for all $ i > s$: \begin{equation} \label{def:DiagDominant2} \deg(\pQ_{s, i}(z)) < \deg(\pQ_{s,s}(z)). \end{equation} \end{enumerate} $ \pQ(z) $ is called \emph{diagonally dominant} if it is diagonally dominant at all its diagonal entries $ s = 1,\ldots,k $. The idea behind Algorithm~\ref{algo:unimodular} is similar to that of \cite{GohLanRod:1982Factorization} for polynomial matrices. To improve the readability of Algorithm~\ref{algo:unimodular}, we provide some auxiliary lemmas, whose algorithmic proofs, given in the Appendix, serve as sub-steps of Algorithm~\ref{algo:unimodular}. \begin{lemma} \label{lemma:EmptySpect2} Let $ \pQ(z) $ be a $ k\times k $ Hermite matrix of Laurent polynomials such that its $(1,1)$-entry $ \pQ_{1,1}(z)\not\equiv 0 $ and \begin{equation} \label{eq:lemmaDiagIncreasing} \len(\pQ_{1,1}(z))\leqslant \len(\pQ_{2,2}(z))\leqslant\cdots\leqslant \len(\pQ_{k,k}(z)). \end{equation} Suppose that $ \pQ(z) $ is diagonally dominant at its first $ s $ diagonal entries for some $ s<k $. (If $ \pQ(z) $ is not diagonally dominant at its first diagonal entry, then just take $ s = 0 $.)
Then there exists a $ k\times k $ unimodular matrix $ \pU(z) $ of Laurent polynomials such that $ \widetilde{\pQ}(z):= \pU(z)\pQ(z)\pU^\star(z) $ is diagonally dominant at its first $ (s+1) $ diagonal entries, while the top left $ (s+1)\times (s+1) $ submatrix of $ \widetilde{\pQ}(z) $ is the same as that of $ \pQ(z) $. \end{lemma} \begin{lemma} \label{lemma:EmptySpect3} Let $ \pQ(z) $ be a $ k\times k $ unimodular Hermite matrix of Laurent polynomials. If its first diagonal entry $ \pQ_{1,1}(z) \equiv 0 $, then there exist a $ k\times k $ unimodular matrix $ \pU(z) $ of Laurent polynomials and a $ (k-1)\times (k-1) $ matrix $ \widetilde{\pQ}(z) $ of Laurent polynomials such that \[ \pQ(z) = \pU(z)\begin{bmatrix} 1 & \\ & \widetilde{\pQ}(z) \end{bmatrix}\pU^\star(z). \] \end{lemma} We are now ready to present Algorithm~\ref{algo:unimodular} below to prove Theorem~\ref{thm:unimodular}. Its structure and idea consist of three main steps. \begin{enumerate} \item If the first diagonal entry of the $ n\times n$ Hermite matrix $ \pA(z) $ is identically zero, then we apply Lemma~\ref{lemma:EmptySpect3} to find a unimodular matrix $ \pU(z) $ such that $ \pA(z) = \pU(z)\begin{bmatrix} 1 & \\ & \widetilde{\pA}(z) \end{bmatrix}\pU^\star(z) $ holds. Hence, the problem is reduced to solving the generalized matrix spectral factorization of the $ (n-1)\times (n-1) $ matrix $ \widetilde{\pA}(z) $. \item If the first diagonal entry of $ \pA(z) $ is not identically zero, then we can repeatedly apply Lemma~\ref{lemma:EmptySpect2} to reduce $ \pA(z) $ to a diagonally dominant matrix. \item If $ \pA(z) $ is a unimodular diagonally dominant matrix of Laurent polynomials, then it must be a constant diagonal matrix, and we can solve its spectral factorization directly.
\end{enumerate} \begin{algorithm} \label{algo:unimodular} Let $\pA(z)$ be an $n \times n$ Hermite matrix of Laurent polynomials such that $\det(\pA(z))$ is a nonzero monomial. \begin{enumerate} \item[(S0)] Initialization. Set $\pU(z) := \mathbf{I}_n$ to be the $n \times n$ identity matrix. Let $\pQ(z) := \pA(z)$ and $k:=n$. \item[(S1)] Find a permutation matrix $\widetilde{\pU}$ such that $\widetilde{\pQ}(z):=\widetilde{\pU}\pQ(z)\widetilde{\pU}^\star$ satisfies $$ \len(\widetilde{\pQ}_{1,1}(z))\leqslant \len(\widetilde{\pQ}_{2,2}(z))\leqslant\cdots\leqslant \len(\widetilde{\pQ}_{k,k}(z)).$$ Update/replace $\pU(z)$ by $\pU(z) \diag(\mathbf{I}_{n-k}, \widetilde{\pU}^{-1})$ and $\pQ(z):=\widetilde{\pQ}(z) $. \item[(S2)] If the first diagonal entry $\pQ_{1,1}(z)\not\equiv 0$, then go to step (S3). Otherwise, apply Lemma~\ref{lemma:EmptySpect3} to find a $ k\times k $ unimodular matrix $ \widetilde{\pU}(z) $ of Laurent polynomials such that $ \widetilde{\pU}(z)\pQ(z)\widetilde{\pU}^\star(z) =\diag(1, \widetilde{\pQ}(z)) $ for some $ (k-1)\times (k-1) $ matrix $ \widetilde{\pQ}(z) $ of Laurent polynomials. Update/replace $\pU(z)$ by $\pU(z) \diag(\mathbf{I}_{n-k}, \widetilde{\pU}^{-1}(z))$ and $\pQ(z):=\widetilde{\pQ}(z)$. Set $k:=k-1$ and restart from (S1). \item[(S3)] For $\pQ_{1,1}(z) \not\equiv 0$, if $ \pQ(z) $ is a diagonally dominant matrix, then go to step (S5). Otherwise, find the largest number $ s $ such that $ \pQ(z) $ is diagonally dominant at its first $ s $ diagonal entries. If it is not diagonally dominant at the first diagonal entry, then just take $ s = 0 $. Apply Lemma~\ref{lemma:EmptySpect2} to find a $ k\times k $ unimodular matrix $ \widetilde{\pU}(z) $ of Laurent polynomials such that $ \widetilde{\pQ}(z):= \widetilde{\pU}(z)\pQ(z)\widetilde{\pU}^\star(z) $ is diagonally dominant at its first $ (s+1) $ diagonal entries. Update/replace $\pU(z)$ by $\pU(z) \diag(\mathbf{I}_{n-k}, \widetilde{\pU}^{-1}(z))$ and $\pQ(z):=\widetilde{\pQ}(z)$. 
\item[(S4)] If the lengths of the diagonal entries of $\pQ(z)$ are no longer non-decreasing, that is, if $$ \len(\pQ_{1,1}(z))\leqslant \len(\pQ_{2,2}(z))\leqslant\cdots\leqslant \len(\pQ_{k,k}(z))$$ is not satisfied, then restart from (S1) to sort them again. Otherwise, repeat from (S3). \item[(S5)] If $\pQ(z)$ is diagonally dominant, then $\pQ(z)$ must be a constant diagonal matrix, that is, $\pQ = \diag(\lambda_1, \ldots, \lambda_k) $, where $ \lambda_j \neq 0 $ for all $ j = 1,\ldots, k $. Without loss of generality, we assume that the first $ (k-\nu_-) $ of the $ \lambda_j $ are positive and the last $ \nu_- $ of them are negative. Define $ \widetilde{\pU}:=\diag(\sqrt{|\lambda_1|}, \ldots, \sqrt{|\lambda_k|}) $. We have $ \pQ= \widetilde{\pU}\diag(\mathbf{I}_{k-\nu_-}, -\mathbf{I}_{\nu_-}) \widetilde{\pU}^\star $. Update/replace $\pU(z)$ by $\pU(z) \diag(\mathbf{I}_{n-k},\widetilde{\pU}) $, and define $\pD:= \diag(\mathbf{I}_{n-k}, \mathbf{I}_{k-\nu_-}, -\mathbf{I}_{\nu_-})$. The output $\pU(z)$ and $\pD$ then satisfy $ \pA(z) = \pU(z)\pD\pU^\star(z) $ and all the requirements in Theorem~\ref{thm:unimodular}. \end{enumerate} \end{algorithm} \begin{proof} It is easy to see that after the initialization step (S0), we have \begin{equation} \label{eq:UQU} \pA(z) = \pU(z)\begin{bmatrix} \mathbf{I}_{n-k} & \\ & \pQ(z) \end{bmatrix}\pU^\star(z). \end{equation} Each time we update $\pU(z)$ and $\pQ(z)$ in steps (S1), (S2), (S3) and (S5), we are actually factoring out some matrices from the original $\pQ(z)$. Hence, by induction, \eqref{eq:UQU} always holds during the whole process of the algorithm. So if the algorithm terminates in (S5), the decomposition $\pA(z)=\pU(z)\pD\pU^\star(z)$ must hold. We now prove that all the steps of the algorithm are feasible and that the algorithm terminates after finitely many steps. The feasibility of steps (S2) and (S3) is proved by Lemmas~\ref{lemma:EmptySpect3} and~\ref{lemma:EmptySpect2}, respectively.
In (S5), we know that if $\pQ(z)$ is diagonally dominant, then $\len(\det(\pQ(z)))=\sum_{l=1}^{k}\len(\pQ_{l,l}(z))$. By \eqref{eq:UQU}, we deduce $\det(\pA(z)) = \det(\pU(z))\det(\pQ(z))\det(\pU^\star(z))$, which implies $\det(\pQ(z))\mid\det(\pA(z))$. Since $\det(\pA(z))$ is a nonzero monomial, $\det(\pQ(z))$ is a nonzero monomial. Hence $\sum_{l=1}^{k}\len(\pQ_{l,l}(z))=0$, which forces all the diagonal entries of $ \pQ(z) $ to be monomials. Since $\pQ(z)$ is a Hermite matrix, each diagonal entry satisfies $\pQ_{l,l}^\star(z) = \pQ_{l,l}(z)$, and a monomial with this property must be a nonzero (real) constant. Because $\pQ(z)$ is diagonally dominant, $\pQ(z)$ must be a constant diagonal matrix. Finally, we prove that the algorithm stops after finitely many iterations. The algorithm might restart from (S1) in (S2) and (S4), or restart from (S3) in (S4). When the restart from (S1) in (S2) occurs, the size $ k $ of $\pQ(z)$ decreases by $1$. By \eqref{eq:UQU}, this can happen only a finite number of times. In order to show that the algorithm can only restart from (S1) in (S4) finitely many times, let us use the lexicographic order on sequences of length $k$. For any two sequences of nonnegative integers of length $k$: $\{\alpha_j\}_{j=1}^k,\{\beta_j\}_{j=1}^k \in \NN^k$, we say that $\{\alpha_j\}_{j=1}^k $ is less than $\{\beta_j\}_{j=1}^k$ if there exists some index $j_0\in \{1,\ldots, k\}$ such that $\alpha_j=\beta_j$ for all $j<j_0$ and $\alpha_{j_0} < \beta_{j_0}$; and $\{\alpha_j\}_{j=1}^k $ is equal to $\{\beta_j\}_{j=1}^k$ if $\alpha_j=\beta_j$ for all $j=1,\ldots,k$. It is easy to see that $\NN^k$ is a well-ordered set under this lexicographic order. Every time the algorithm restarts from (S1) in (S4), the sequence $\{\len(\pQ_{i,i}(z))\}_{i=1}^k\in \NN^k$ strictly decreases in the lexicographic order. Since it is bounded below by the sequence $ \{0, \ldots,0\} $, such restarts can occur only finitely many times.
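The terminal step (S5) amounts to a signed square-root factorization of a constant diagonal matrix. This can be sketched numerically as follows; the diagonal values are hypothetical, and the sign pattern is absorbed into $\pD$ directly rather than reordered as in (S5):

```python
import numpy as np

# Hypothetical nonzero diagonal of the constant matrix Q reached in step (S5).
lam = np.array([4.0, 9.0, -1.0, -16.0])
Q = np.diag(lam)

# U carries sqrt(|lambda_j|); D carries only the signature entries +-1.
U = np.diag(np.sqrt(np.abs(lam)))
D = np.diag(np.sign(lam))

# Q = U D U^star, with D a constant diagonal of +-1, as required.
assert np.allclose(U @ D @ U.conj().T, Q)
assert np.allclose(np.abs(np.diag(D)), 1.0)
```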
Every time the algorithm restarts from (S3) in (S4), $s$ increases by at least $1$, until the matrix $ \pQ(z) $ becomes diagonally dominant. So these iterations can only happen a finite number of times. This completes the proof of Algorithm~\ref{algo:unimodular} and Theorem~\ref{thm:unimodular}. \end{proof} To complete the proof of Theorem~\ref{thm:const-sig}, we need to extract the nontrivial factors of $\det(\pA(z))$. To do so, let us first recall some necessary notation. An $n \times n$ matrix $\pA(z)$ of Laurent polynomials can be factorized into $$ \pA(z) = \mathsf{E}(z)\pD(z)\mathsf{F}(z), $$ where $\mathsf{E}(z)$ and $\mathsf{F}(z)$ are unimodular matrices of Laurent polynomials, and $\pD(z)=\diag(d_1(z), \ldots, d_n(z))$ is a diagonal matrix of Laurent polynomials with $d_j(z)\mid d_{j+1}(z)$ for all $j=1,\ldots,n-1$. $\pD(z)$ is called the \emph{Smith Normal Form} of $\pA(z)$. Moreover, we normalize each $d_j(z)$ by requiring that its leading coefficient be $1$ and its constant term be nonzero. The polynomials $d_j(z)$ in the Smith normal form are called the \emph{invariant polynomials} of $\pA(z)$. For all $ k = 1,\ldots, n $, the product $ \prod_{j = 1}^{k} d_j(z)$ is essentially the greatest common divisor (gcd) of all the determinants of $k \times k$ submatrices in $\pA(z)$. Let us factor the invariant polynomials over $\C$ as follows: $$ d_j(z) = \prod_{k=1}^{n_j}(z-z_{j,k})^{\alpha_{j,k}}, \qquad j=1,\ldots, n. $$ The factors $(z-z_{j,k})^{\alpha_{j,k}}$, $k=1,2,\ldots, n_j$, $j=1,2,\ldots, n$, where each factor could repeat as many times as it occurs, are called the \emph{elementary divisors} of $\pA(z)$. For each $ j=1,2,\ldots, n $, since we require $d_j(z)$ to have a nonzero constant term, $d_j(z)$ has no root at $0$. Thus no factor of the form $z^{\alpha_{j,k}}$ appears among the elementary divisors.
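The gcd-of-minors characterization of the invariant polynomials can be checked symbolically. The sympy sketch below does so for the matrix $\pA(z)$ of Example~\ref{ex:SingleRoot} later in this section, after multiplying by the monomial $z^2$ (which changes the entries only by units of the Laurent polynomial ring):

```python
from sympy import symbols, gcd, simplify

z = symbols('z')

# Entries of z^2 * A(z) for the matrix in Example ex:SingleRoot.
a11 = z*(z - 1)**2
a12 = z**2*(z - 1)*(z + 1)
a21 = (1 - z)*(1 + z)
a22 = -z*(z - 1)**2

# d1 = gcd of all 1x1 minors (sympy returns the monic gcd).
d1 = gcd(gcd(a11, a12), gcd(a21, a22))
assert d1 == z - 1

# d1*d2 = gcd of the single 2x2 minor, i.e. the determinant, up to a monomial
# and a nonzero constant; here det = 4*z**3*(z-1)**2, so d2 = z - 1 as well.
det = simplify(a11*a22 - a12*a21)
assert simplify(det - 4*z**3*(z - 1)**2) == 0
```

This is consistent with the Smith Normal Form $\pD(z) = \diag(z-1,\, z-1)$ stated in Example~\ref{ex:SingleRoot}.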
Also, by $d_j(z)\mid d_{j+1}(z)$ for all $j=1,\ldots,n-1$, we see that the Smith Normal Form $\pD(z)$ of $\pA(z)$ is uniquely determined by its elementary divisors. Observe that $\det(\pA(z)) = \det(\mathsf{E}(z))\det(\pD(z))\det(\mathsf{F}(z))$. Since both $\det(\mathsf{E}(z))$ and $\det(\mathsf{F}(z))$ are nonzero monomials, we see that the determinant of $\pA(z)$ is essentially the product of all its invariant polynomials, or the product of all its elementary divisors, up to a multiplicative nonzero monomial: \begin{equation} \label{eq:DetEleDiv} \det(\pA(z)) = c_A z^{k_A} \prod_{j=1}^n d_j(z) = c_A z^{k_A} \prod_{j=1}^n \prod_{k=1}^{n_j}(z-z_{j,k})^{\alpha_{j,k}}, \end{equation} for some nonzero constant $c_A\in\C$ and some integer $k_A\in \Z$. To prove the general case of Theorem~\ref{thm:const-sig}, we need to show that if $ \pA(z) $ is not unimodular, then its elementary divisors can be factored out. For this purpose, we establish the following auxiliary results, Theorems~\ref{thm:extract1} and~\ref{thm:extract2}. Theorem~\ref{thm:extract1} deals with the elementary divisor $ (z-z_0)^\alpha $ in the case $ z_0 \not\in \T $ or the case $ z_0 \in\T $ but $ \alpha \geqslant 2 $. Theorem~\ref{thm:extract2} handles the elementary divisors with $ z_0\in \T $ and $ \alpha = 1 $. \begin{theorem} \label{thm:extract1} Let $\pA(z)$ be an $n \times n$ Hermite matrix of Laurent polynomials with $\len(\det(\pA(z)))>0$. If $\pA(z)$ has some elementary divisor $(z-z_0)^\alpha$ satisfying one of the two conditions: \begin{enumerate} \item $z_0\in \left(\C\setminus\{0\}\right)\setminus\T$, \item $z_0\in \T$ and $\alpha\geqslant 2$, \end{enumerate} then there exist two $n \times n$ matrices $\pU(z)$ and $\widetilde{\pA}(z)$ of Laurent polynomials such that $\pA(z)=\pU(z)\widetilde{\pA}(z)\pU^\star(z)$, where $\widetilde{\pA}^\star(z) = \widetilde{\pA}(z)$ and $\len(\det(\widetilde{\pA}(z)))\leqslant \len(\det(\pA(z)))-2$.
\end{theorem} \begin{proof} Denote the invariant polynomials of $\pA(z)$ by $d_1(z), \ldots, d_n(z) $. Then there exist unimodular matrices $\mathsf{E}(z)$ and $\mathsf{F}(z)$ of Laurent polynomials such that $$ \pA(z) = \mathsf{E}(z)\diag(d_1(z), \ldots, d_n(z)) \mathsf{F}(z). $$ Define $\mathring{\pA}(z)$ as \begin{equation} \label{eq:smith} \mathring{\pA}(z):=\mathsf{E}^{-1}(z)\pA(z)\mathsf{E}^{-\star}(z) = \diag(d_1(z), \ldots, d_n(z)) \mathsf{F}(z)\mathsf{E}^{-\star}(z). \end{equation} Since $(z-z_0)^\alpha$ is an elementary divisor of $\pA(z)$, there exists some $d_k(z)$ such that $(z-z_0)^\alpha \mid d_k(z)$. Hence $(z-z_0)^\alpha$ divides the $k$-th row of $\mathring{\pA}(z)$. Also, $\mathring{\pA}(z)$ being a Hermite matrix implies that $ \left((z-z_0)^\alpha\right)^\star=(z^{-1}-\overline{z_0})^\alpha = (-\overline{z_0})^{\alpha}z^{-\alpha}(z-\overline{z_0}^{-1})^\alpha $ divides the $k$-th column of $\mathring{\pA}(z)$. In the following, we will show that in the items (1) and (2), we can factor out $ (z-z_0)^\beta $ from the $ k $-th row of $ \mathring{\pA}(z) $, and factor out $ ((z-z_0)^\beta)^\star $ from the $ k $-th column of $ \mathring{\pA}(z) $ simultaneously, where $ \beta = \alpha $ in item (1) and $ \beta = \lfloor \alpha /2 \rfloor$ in item (2). For item (1), we have $z_0\not\in \T$, and hence $\overline{z_0}^{-1}\neq z_0$. So $(z-z_0)^\alpha$ and $ (z-\overline{z_0}^{-1})^\alpha $ are different polynomials. Since they divide the $k$-th row and the $k$-th column of $\mathring{\pA}(z)$ respectively, we deduce that $ (z-z_0)^\alpha (z-\overline{z_0}^{-1})^\alpha $ (or equivalently $\left((z-z_0)^\alpha\right)^\star (z-z_0)^\alpha$) divides the $(k, k)$-entry of the matrix $\mathring{\pA}(z)$. So we can factor out $(z-z_0)^\alpha$ from the $k$-th row and factor out $\left((z-z_0)^\alpha\right)^\star $ from the $k$-th column of $\mathring{\pA}(z)$ simultaneously. 
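The two adjoint identities used in this proof, namely $\left((z-z_0)^\alpha\right)^\star = (-\overline{z_0})^{\alpha}z^{-\alpha}(z-\overline{z_0}^{-1})^\alpha$ for item (1) and the on-circle collapse used for item (2), are easy to confirm symbolically. In the sympy sketch below, $z_0 = 2+i$, $\alpha = 3$ and $\beta = 2$ are arbitrary illustrative choices:

```python
from sympy import symbols, conjugate, expand, I

z = symbols('z')

# Off the unit circle: p(z) = (z - z0)^alpha has p*(z) = (1/z - conj(z0))^alpha.
z0 = 2 + I
alpha = 3
lhs = (1/z - conjugate(z0))**alpha
rhs = (-conjugate(z0))**alpha * z**(-alpha) * (z - 1/conjugate(z0))**alpha
assert expand(lhs - rhs) == 0

# On the unit circle, conj(w0) = 1/w0, and the product of (z - w0)^beta with
# its star collapses onto the single factor (z - w0)^(2*beta):
w0 = symbols('w0')   # stands for a point on T, so its conjugate is 1/w0
beta = 2
lhs2 = (z - w0)**beta * (1/z - 1/w0)**beta
rhs2 = (-1/w0)**beta * z**(-beta) * (z - w0)**(2*beta)
assert expand(lhs2 - rhs2) == 0
```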
Use $ \pD_{k, \alpha}(z) $ to denote the $ n\times n $ diagonal matrix with the $k$-th diagonal entry equal to $(z-z_0)^\alpha$, and all other diagonal entries equal to 1, i.e., \begin{equation} \label{eq:Diagk} \pD_{k, \alpha}(z) := \diag(1,\ldots, 1, (z-z_0)^\alpha, 1, \ldots, 1). \end{equation} We get $\mathring{\pA}(z)= \pD_{k, \alpha}(z)\widetilde{\pA}(z)\pD_{k, \alpha}^\star(z)$, where $\widetilde{\pA}(z)$ is an $n \times n$ Hermite matrix of Laurent polynomials. So $\pA(z)$ can be written as $$ \pA(z) = \mathsf{E}(z) \mathring{\pA}(z) \mathsf{E}^\star(z) = \mathsf{E}(z)\pD_{k, \alpha}(z)\widetilde{\pA}(z)\pD_{k, \alpha}^\star(z)\mathsf{E}^\star(z). $$ Let $\pU(z):=\mathsf{E}(z)\pD_{k, \alpha}(z)$. Then we get $ \pA(z)=\pU(z)\widetilde{\pA}(z)\pU^\star(z)$. Since $\det(\mathsf{E}(z))$ is a nonzero monomial and $\det(\pU(z)) = \det(\mathsf{E}(z))\det(\pD_{k, \alpha}(z))$, we conclude that $\len(\det(\pU(z)))= \len(\det( \pD_{k, \alpha}(z) )) =\alpha$. So \begin{align*} \len(\det(\widetilde{\pA}(z))) &= \len(\det(\pA(z))) - \len(\det(\pU(z))) - \len(\det(\pU^\star(z))) \\ &= \len(\det(\pA(z))) -\alpha -\alpha \leqslant \len(\det(\pA(z))) -2. \end{align*} For item (2), we have $ \alpha \geqslant 2 $. Let $\beta:=\lfloor \alpha /2 \rfloor$ be the largest integer that is no larger than $ \alpha/2 $. Then $\beta\geqslant 1$ and $2\beta \leqslant \alpha$. From $\beta \leqslant \alpha$, we see that $(z-z_0)^\beta$ divides the $k$-th row and $((z-z_0)^\beta)^\star $ divides the $k$-th column of $\mathring{\pA}(z)$. For $z_0\in \T$, $|z_0|^2=1$ implies $\overline{z_0}^{-1} = z_0 $. So $ (z-\overline{z_0}^{-1})^\beta $ and $(z-z_0)^\beta$ are the same polynomial. Since $2\beta\leqslant \alpha$ and $ (z-z_0)^\alpha $ divides the $(k,k)$-entry of $\mathring{\pA}(z)$, we get $(z-z_0)^{\beta}((z-z_0)^\beta)^\star =(-\overline{z_0})^{\beta}z^{-\beta}(z-z_0)^{2\beta} $ which divides the $(k,k)$-entry of $\mathring{\pA}(z)$. 
So we can factor out $(z-z_0)^\beta$ from the $k$-th row and $((z-z_0)^\beta)^\star$ from the $k$-th column at the same time to get $\mathring{\pA}(z) = \pD_{k, \beta}(z)\widetilde{\pA}(z)\pD_{k, \beta}^\star(z)$, where $\widetilde{\pA}(z)$ is an $n \times n$ Hermite matrix of Laurent polynomials and $\pD_{k, \beta}(z)$ is defined as in \eqref{eq:Diagk}. Using arguments similar to those for item (1), we get $$ \pA(z) = \mathsf{E}(z) \mathring{\pA}(z) \mathsf{E}^\star(z) = \mathsf{E}(z)\pD_{k, \beta}(z)\widetilde{\pA}(z)\pD_{k, \beta}^\star(z)\mathsf{E}^\star(z) = \pU(z)\widetilde{\pA}(z)\pU^\star(z), $$ where $\pU(z):=\mathsf{E}(z)\pD_{k, \beta}(z)$. Because $\len(\det(\pU(z)))= \len(\det(\pD_{k, \beta}(z))) = \beta$, we have \begin{align*} \len(\det(\widetilde{\pA}(z))) & = \len(\det(\pA(z))) - \len(\det(\pU(z))) - \len(\det(\pU^\star(z))) \\ & = \len(\det(\pA(z))) -\beta -\beta \leqslant \len(\det(\pA(z))) -2. \end{align*} This completes the proof. \end{proof} If the Hermite matrix $ \pA(z) \geqslant 0 $ for all $ z\in \T $, then we can prove that all the elementary divisors of $ \pA(z) $ must fall under item (1) or item (2) of Theorem~\ref{thm:extract1}. In fact, if $ \pA(z) $ is positive semidefinite for all $ z\in\T $, then all its elementary divisors $(z-z_0)^\alpha $ with $z_0\in \T$ have even multiplicity $\alpha$; see Corollary \ref{cor:SPDEvenEleDiv} later in this paper. However, the case $ z_0\in \T $ with $\alpha=1$ can indeed occur if the matrix $ \pA(z) $ is not positive semidefinite. This is the main difficulty in the proof of the generalized spectral factorization of matrices with constant signature, in comparison to the proof of the standard matrix-valued Fej\'er-Riesz lemma, as demonstrated by the following example.
\begin{example} \label{ex:SingleRoot} {\rm Consider the matrix \[ \pA(z)= \left[ \begin {array}{cc} { z^{-1}\left( z-1 \right)^{2}}& \left( z-1 \right) \left( z+1 \right) \\ \noalign{\medskip} \left( {z}^{-1}-1 \right) \left( {z}^{-1}+1 \right) & -{z^{-1}\left( z-1 \right)^{2}} \end {array} \right]. \] By direct calculation we have $\pA^\star(z) =\pA(z)$ and $\det(\pA(z)) = \frac{4(z-1)^2}{z}=-\pd(z)\pd^\star(z) \leqslant 0$ for all $ z\in \T $, where $\pd(z) = 2(z-1)$. Since the determinant is equal to the product of all the eigenvalues of $ \pA(z) $, we know that $ \nu_+( \pA(z)) = \nu_-( \pA(z)) = 1$ for all $z\in\T\setminus\sigma(\pA)$. Hence, the signature of $ \pA(z) $ is constant for all $ z\in\T\setminus\sigma(\pA)$. As to the Smith Normal Form of $ \pA(z) $, let $$ \mathsf{E}(z):= \left[ \begin {array}{cc} \frac{-2z^3+4z^2+z-1}{z}& 2z\left(2-z \right) \\ \noalign{\medskip}{\frac {2\,{z}^{3}-z-1}{{ z}^{2}}}&2\,z\end {array} \right] ,\quad \mathsf{F}(z):= \left[ \begin {array}{cc} 1&2\,{z}^{2}-z\\ \noalign{\medskip}-1& z+1-2z^2\end {array} \right],\quad \pD(z):= \left[ \begin{array}{cc} z-1 & 0\\ \noalign{\medskip}0&{ z-1 } \end {array} \right]. $$ We can directly verify that $\pA(z) = \mathsf{E}(z)\pD(z)\mathsf{F}(z)$ and $\mathsf{E}(z), \mathsf{F}(z)$ are both unimodular matrices. So $\pD(z)$ is the Smith Normal Form of $\pA(z)$. Hence $\pA(z)$ has two elementary divisors being $(z-1)$. } \end{example} The following theorem handles the elementary divisors $ (z-z_0)^\alpha $ with $ z_0 \in \T $ and $ \alpha = 1 $. \begin{theorem} \label{thm:extract2} Let $\pA(z)$ be an $n \times n$ Hermite matrix of Laurent polynomials. 
If $\pA(z)$ satisfies \begin{enumerate} \item $\operatorname{sig}(\pA(z))$ is constant for all $z\in \T \setminus \sigma(\pA)$; \item there exists some $z_0\in \sigma(\pA) \cap \T$ such that all the elementary divisors of $\pA(z)$ with root $ z_0 $ have degree equal to $1$, \end{enumerate} then there exist two $n \times n$ matrices $\pU(z)$ and $\widetilde{\pA}(z)$ of Laurent polynomials such that $\pA(z)=\pU(z)\widetilde{\pA}(z)\pU^\star(z)$, where $\widetilde{\pA}^\star(z) = \widetilde{\pA}(z)$ and $\len(\det(\widetilde{\pA}(z)))\leqslant \len(\det(\pA(z)))-2$. \end{theorem} We need the following result to prove Theorem~\ref{thm:extract2}; it connects the eigenvalues of $ \pA(z) $ with its invariant polynomials. Let us recall the big $ \bo $ notation for real analytic functions. For an analytic function $ f(\xi) $, we say that $ f(\xi) = \bo((\xi - \xi_0)^n) $ as $ \xi \rightarrow \xi_0 $ if the $ k $-th derivative $ f^{(k)}(\xi_0) = 0$ for all $ 0 \leqslant k < n $. We also abuse the notation $ \mz $ for the multiplicity of a root of a Laurent polynomial: for an analytic function $ f(\xi) $, we use $ \mz(f, \xi_0) $ to denote the largest integer $ n $ such that $ f(\xi) = \bo((\xi-\xi_0)^n) $ as $\xi\to \xi_0$. \begin{theorem} \label{thm:partialMultiplicity} Suppose that $ \pA(z) $ is an $ n\times n $ Hermite matrix of Laurent polynomials and $ z_0 = e^{-i\xi_0} \in \T $ with $ \xi_0\in \R $. Let $ \pd_1(z), \ldots, \pd_n(z) $ be the invariant polynomials of $ \pA(z) $ and define the sequence $ \{\alpha_j\}_{j=1}^n $ by \[ \alpha_j := \mz(\pd_j(z), z_0), \qquad j = 1,\ldots, n. \] Also, we can find $ n $ analytic functions $ \lambda_1(\xi), \ldots, \lambda_n(\xi) $ for $ \xi \in \R $, which are the eigenvalues of the analytic matrix $ \pA(e^{-i\xi}) $. Define the sequence $ \{\beta_j\}_{j=1}^n $ by \[ \beta_j := \mz(\lambda_j(\xi), \xi_0), \qquad j = 1,\ldots, n. \] Without loss of generality, we can assume $ \beta_1 \leqslant \cdots \leqslant \beta_n $.
Then the sequence $ \{\alpha_j\}_{j=1}^n $ and the sequence $ \{\beta_j\}_{j=1}^n $ must be the same. \end{theorem} \begin{proof} The invariant polynomials satisfy $\pd_j(z)\mid\pd_{j+1}(z)$ for all $j=1,2,\ldots,n-1$. Hence, $\alpha_1\leqslant\cdots\leqslant\alpha_n $. There exist $ n\times n $ unimodular matrices of Laurent polynomials $\mathsf{E}(z)$ and $\mathsf{F}(z)$ such that \begin{equation} \label{eq:smith2} \pA(z) = \mathsf{E}(z)\diag(\pd_1(z), \pd_2(z), \ldots, \pd_n(z)) \mathsf{F}(z) \end{equation} holds. Take $z=e^{-i\xi}$, $ \xi\in\R $. We see that the invariant polynomials $\pd_j(e^{-i\xi})$ are analytic functions of $\xi\in \R$ and $\mz(\pd_j(e^{-i\xi}), \xi_0) = \mz(\pd_j(z), z_0) = \alpha_j$ for all $j=1,2,\ldots,n$. Write $\pd_j(e^{-i\xi}) = (\xi-\xi_0)^{\alpha_j}\widetilde{d_j}(\xi)$ with $\widetilde{d_j}(\xi_0)\neq 0$. We can rewrite equation \eqref{eq:smith2} as follows: \begin{align*} \pA(e^{-i\xi}) =& \mathsf{E}(e^{-i\xi})\diag(\pd_1(e^{-i\xi}), \ldots, \pd_n(e^{-i\xi})) \mathsf{F}(e^{-i\xi}) \\ = & \mathsf{E}(e^{-i\xi}) \diag((\xi-\xi_0)^{\alpha_1},\ldots,(\xi-\xi_0)^{\alpha_n}) \diag(\widetilde{d_1}(\xi),\ldots,\widetilde{d_n}(\xi)) \mathsf{F}(e^{-i\xi})\\ = & E_{\xi_0}(\xi) \diag((\xi-\xi_0)^{\alpha_1},\ldots,(\xi-\xi_0)^{\alpha_n}) F_{\xi_0}(\xi), \end{align*} where $E_{\xi_0}(\xi):=\mathsf{E}(e^{-i\xi})$ and $ F_{\xi_0}(\xi):= \diag(\widetilde{d_1}(\xi),\ldots, \widetilde{d_n}(\xi))\mathsf{F}(e^{-i\xi})$. By definition, $E_{\xi_0}(\xi)$ and $F_{\xi_0}(\xi)$ are both analytic matrices, and $\det(E_{\xi_0}(\xi_0))\neq 0$, $\det(F_{\xi_0}(\xi_0))\neq 0 $. Hence, the matrices $E_{\xi_0}(\xi), F_{\xi_0}(\xi)$ and the sequence $\{\alpha_j\}_{j=1}^n$ satisfy all the conditions of Lemma \ref{lemma:PartialMultiplicity} in the Appendix. So the partial multiplicities of $\pA(e^{-i\xi})$ at $\xi_0$ are $\{\alpha_j\}_{j=1}^n$.
Since $\pA(e^{-i\xi})$ is an analytic Hermite matrix for $ \xi \in \R $, by \cite[Theorem~S6.3]{GohLanRod:1982Matrix}, it can also be factorized as \begin{equation} \label{eq:AnalyticEVD} \pA(e^{-i\xi}) = W(\xi) \diag(\lambda_1(\xi), \ldots, \lambda_n(\xi)) (W(\xi))^\star, \end{equation} where $W(\xi)$ is a unitary analytic matrix and the eigenvalues $\lambda_1(\xi), \ldots, \lambda_n(\xi)$ are analytic functions of $\xi\in\R$. Without loss of generality, we can assume that $\beta_1\leqslant \cdots \leqslant \beta_n$. From $ \beta_j = \mz(\lambda_j(\xi), \xi_0) $, we can write $\lambda_j(\xi) = (\xi - \xi_0)^{\beta_j} f_j(\xi) $ with $f_j(\xi_0)\neq 0 $ for all $j=1,\ldots, n$. The factorization \eqref{eq:AnalyticEVD} becomes \begin{align*} \pA(e^{-i\xi}) =& W(\xi) \diag((\xi-\xi_0)^{\beta_1},\ldots,(\xi-\xi_0)^{\beta_n}) \diag(f_1(\xi),\ldots,f_n(\xi)) (W(\xi))^\star\\ =&\widetilde{E}_{\xi_0}(\xi) \diag((\xi-\xi_0)^{\beta_1},\ldots,(\xi-\xi_0)^{\beta_n}) \widetilde{F}_{\xi_0}(\xi), \end{align*} where $\widetilde{E}_{\xi_0}(\xi):= W(\xi) $ and $ \widetilde{F}_{\xi_0}(\xi):= \diag(f_1(\xi),\ldots, f_n(\xi))(W(\xi))^\star$. From the definition, $\widetilde{E}_{\xi_0}(\xi)$ and $\widetilde{F}_{\xi_0}(\xi)$ are both analytic matrices, and $\det(\widetilde{E}_{\xi_0}(\xi_0))\neq 0$, $\det(\widetilde{F}_{\xi_0}(\xi_0))\neq 0 $. Hence, the matrices $\widetilde{E}_{\xi_0}(\xi), \widetilde{F}_{\xi_0}(\xi)$ and the sequence $\{\beta_j\}_{j=1}^n$ satisfy all the conditions in Lemma~\ref{lemma:PartialMultiplicity} in the Appendix. By Lemma~\ref{lemma:PartialMultiplicity}, we must have $ \{\beta_j\}_{j=1}^n = \{\alpha_j\}_{j=1}^n $. This completes the proof. \end{proof} We now prove Theorem~\ref{thm:extract2} using Theorem~\ref{thm:partialMultiplicity}. \begin{proof}[Proof of Theorem \ref{thm:extract2}] Denote the invariant polynomials of the matrix $ \pA(z) $ by $ \pd_1(z),\ldots, \pd_n(z) $. 
Define the sequence $ \{\alpha_j\}_{j=1}^n $ by $ \alpha_j := \mz(\pd_j(z), z_0)$, $ j=1,\ldots, n $. By the condition in item (2), all $\alpha_j\leqslant 1$. Also, by $\pd_j(z)\mid\pd_{j+1}(z)$ for all $j=1,\ldots,n-1$, we have $\alpha_1\leqslant\cdots\leqslant\alpha_n $. Thus $$\{\alpha_j\}_{j=1}^n = \{0,\ldots,0,1,\ldots,1\}.$$ Taking $z=e^{-i\xi}$, we get a matrix $ \pA(e^{-i\xi}) $ that is analytic in $ \xi \in\R $. By \cite[Theorem~S6.3]{GohLanRod:1982Matrix}, the analytic Hermite matrix $\pA(e^{-i\xi})$ can also be factorized as \begin{equation} \label{eq:AnalyticEVDnew} \pA(e^{-i\xi}) = W(\xi) \diag(\lambda_1(\xi),\ldots,\lambda_n(\xi)) (W(\xi))^\star, \end{equation} where $W(\xi)$ is a unitary analytic matrix and $\lambda_1(\xi), \ldots, \lambda_n(\xi)$ are analytic functions of $\xi\in\R$. Since $z_0\in \T$, we can find some $\xi_0\in \left[ -\pi,\pi \right)$ such that $z_0 = e^{-i\xi_0}$. Define the sequence $ \{\beta_j\}_{j=1}^n $ by $\beta_j:= \mz(\lambda_j(\xi), \xi_0)$ for all $j=1,\ldots, n$. Without loss of generality, we can choose the factorization such that $\beta_1\leqslant \cdots \leqslant \beta_n$. According to Theorem~\ref{thm:partialMultiplicity}, we must have \[ \{\beta_j\}_{j=1}^n = \{\alpha_j\}_{j=1}^n = \{0,\ldots,0,1,\ldots,1\}. \] Let $K$ be the number of times $1$ appears in $ \{\beta_j\}_{j=1}^n$ or $\{\alpha_j\}_{j=1}^n$. Recall from the definition of $ \{\alpha_j\}_{j=1}^n$ that each $1$ corresponds to an elementary divisor $(z-z_0)$. So $K>0$ is the number of times that the elementary divisor $(z-z_0)$ appears. Let us see how the signs of the eigenvalues $\lambda_j(\xi)$ change from the left to the right side of $\xi_0$. For $j=1,\ldots, n-K$, we have $\beta_j=0$, so $\lambda_j(\xi_0) \neq 0$. Since the eigenvalue $\lambda_j(\xi)$ is a continuous function of $\xi\in\R$, it does not change its sign between the two sides of $\xi_0$, i.e., $\operatorname{sign}(\lambda_j(\xi_0-)) = \operatorname{sign}(\lambda_j(\xi_0+))$.
For $j=n-K+1, \ldots, n$, we have $\beta_j=1$. In this case, $ \lambda_j(\xi_0) = 0 $ and $ \lambda_j'(\xi_0)\neq 0 $. The eigenvalues of a Hermite matrix are all real, so $ \lambda_j(\xi)$ and $\lambda_j'(\xi) $ are both real-valued functions of $ \xi\in\R $. Hence, $ \lambda_j'(\xi_0) $ is a nonzero real number, and we have the following two possible situations. \begin{enumerate} \item If $\lambda_j'(\xi_0) > 0$, then $\lambda_j(\xi)$ is increasing near $\xi_0$. So $\lambda_j(\xi_0-) <0$ and $ \lambda_j(\xi_0+)> 0$. \item If $\lambda_j'(\xi_0) < 0$, then $\lambda_j(\xi)$ is decreasing near $\xi_0$. So $\lambda_j(\xi_0-) >0 $ and $ \lambda_j(\xi_0+)< 0$. \end{enumerate} Since the signature of $\pA(z)$ is constant for all $ z\in\T\setminus\sigma(\pA) $, the number of positive eigenvalues and the number of negative eigenvalues of $ \pA(e^{-i\xi}) $ remain unchanged between the two sides of $ \xi_0 $. So the above two cases must occur exactly the same number of times; in particular, $K$ has to be an even integer. There are exactly $K/2$ eigenvalues $\lambda_j(\xi)$ such that $ \lambda_j(\xi_0) = 0 $ and $\lambda_j'(\xi_0)>0$, and exactly $K/2$ eigenvalues $\lambda_j(\xi)$ such that $ \lambda_j(\xi_0)=0 $ and $\lambda_j'(\xi_0)<0$. The sign of $\lambda_j'(\xi_0)$ here is called the \emph{sign characteristic}, which was first studied in \cite{GohLanRod:1980Spectral} for matrices of polynomials. Since $ K > 0 $, there exist some $j_1, j_2 \geqslant n-K+1$ such that $$\lambda_{j_1}(\xi) = \gamma_1^2(\xi - \xi_0) + \bo((\xi-\xi_0)^2),\qquad \lambda_{j_2}(\xi) = -\gamma_2^2(\xi - \xi_0) + \bo((\xi-\xi_0)^2),\qquad \mbox{as}~ \xi \rightarrow \xi_0,$$ for some real $ \gamma_1, \gamma_2 \neq 0 $. In the eigenvalue decomposition \eqref{eq:AnalyticEVDnew}, $W(\xi)$ being a unitary and analytic matrix of $\xi\in\R$ implies that $W^{-1}(\xi_0)W(\xi) = \mathbf{I}_n + \bo((\xi-\xi_0))$ as $ \xi \rightarrow \xi_0 $.
So, there exists an $n \times n$ analytic matrix $G(\xi)$ such that $$ W^{-1}(\xi_0)W(\xi) = \mathbf{I}_n + (\xi-\xi_0)G(\xi), \qquad \left( W^{-1}(\xi_0)W(\xi) \right)^\star = \mathbf{I}_n + (\xi-\xi_0)(G(\xi))^\star. $$ Multiplying \eqref{eq:AnalyticEVDnew} by the constant matrices $W^{-1}(\xi_0) $ and $W^{-\star}(\xi_0)$ on the left and the right, respectively, we define $\mathring{\pA}(e^{-i\xi}) $ as \begin{align} \label{eq:diagfirstder} \mathring{\pA}(e^{-i\xi}):= & W^{-1}(\xi_0) \pA(e^{-i\xi})(W(\xi_0))^{-\star} = W^{-1}(\xi_0)W(\xi) \diag(\lambda_1(\xi),\ldots,\lambda_n(\xi)) (W^{-1}(\xi_0)W(\xi))^\star \notag \\ =& (\mathbf{I}_n + (\xi-\xi_0)G(\xi)) \diag(\lambda_1(\xi),\ldots,\lambda_n(\xi)) (\mathbf{I}_n + (\xi-\xi_0)(G(\xi))^\star) \notag \\ =& \Lambda(\xi) + (\xi - \xi_0) G(\xi) \Lambda(\xi) + (\xi - \xi_0)\Lambda(\xi) (G(\xi))^\star + (\xi - \xi_0)^2 G(\xi) \Lambda(\xi) (G(\xi))^\star, \end{align} where $\Lambda(\xi) := \diag(\lambda_1(\xi), \ldots, \lambda_n(\xi))$. Plugging in $ \xi = \xi_0 $, we directly get \begin{equation} \label{eq:EVDFirstOrder} \mathring{\pA}(e^{-i\xi_0}) = \Lambda(\xi_0) = \diag(\lambda_1(\xi_0),\ldots,\lambda_{n-K}(\xi_0), \mathbf{0}_{K\times K}). \end{equation} Since we picked $j_1, j_2 \geqslant n-K+1$, the $j_1$-th and the $j_2$-th rows, as well as the $j_1$-th and the $j_2$-th columns of $\mathring{\pA}(e^{-i\xi})$ are equal to $\bo((\xi-\xi_0))$ as $\xi \rightarrow \xi_0$. Next, we examine the lower right $K \times K$ submatrix of $ \mathring{\pA}(e^{-i\xi})$ from \eqref{eq:diagfirstder}. Since $ \lambda_{n-K+1}(\xi),\ldots,\lambda_n(\xi) $ are equal to $ \bo((\xi-\xi_0)) $ as $ \xi \rightarrow \xi_0 $, the lower right $K \times K$ submatrices of the second and the third term on the right hand side of \eqref{eq:diagfirstder} are both $ \bo((\xi-\xi_0)^2) $ as $ \xi \rightarrow \xi_0 $.
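The first-order Taylor data of $\pA(e^{-i\xi})$ entering these expansions can be computed directly from $\pA$: by the chain rule, $\pA(e^{-i\xi}) = \pA(z_0) - i z_0 \pA'(z_0)(\xi-\xi_0) + \bo((\xi-\xi_0)^2)$ with $z_0 = e^{-i\xi_0}$. A quick finite-difference sanity check on an illustrative (assumed) Hermite matrix of Laurent polynomials:

```python
import numpy as np

xi0 = 0.0
z0 = np.exp(-1j*xi0)

def A(z):    # illustrative Hermite matrix of Laurent polynomials
    return np.array([[2 - z - 1/z, 2*(z - 1)],
                     [2*(1/z - 1), 0.0]])

def dA(z):   # entrywise derivative of A with respect to z
    return np.array([[-1 + z**-2, 2.0],
                     [-2*z**-2, 0.0]])

C1 = -1j * z0 * dA(z0)          # predicted first-order coefficient in xi
h = 1e-6                         # central difference of xi -> A(e^{-i xi}) at xi0
fd = (A(np.exp(-1j*(xi0 + h))) - A(np.exp(-1j*(xi0 - h)))) / (2*h)
# fd agrees with C1 up to O(h^2)
```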
Hence, the summation of the four terms on the right hand side of \eqref{eq:diagfirstder} yields: \begin{align} \label{eq:EVDSecondOrder} \mathring{\pA}_{(n-K+1):n, (n-K+1):n}(e^{-i\xi}) = & \diag(\lambda_{n-K+1}(\xi),\ldots,\lambda_n(\xi)) + \bo((\xi-\xi_0)^2) + \bo((\xi-\xi_0)^2) + \bo((\xi-\xi_0)^2) \notag \\ = & \diag(\lambda_{n-K+1}(\xi),\ldots,\gamma_1^2(\xi - \xi_0),\ldots,-\gamma_2^2(\xi - \xi_0),\ldots,\lambda_n(\xi)) + \bo((\xi-\xi_0)^2), \end{align} as $\xi \rightarrow \xi_0$. The $ \gamma_1^2(\xi - \xi_0) $ and $-\gamma_2^2(\xi - \xi_0) $ terms appear at the $j_1$-th and the $j_2$-th diagonal positions respectively. Now, we can use the following matrix $ V $ to cancel the first order term at the $(j_1, j_1)$ position of $ \mathring{\pA}(e^{-i\xi}) $. Define the $n\times n$ matrix $V$ as \begin{equation} \label{eq:V} V:= \begin{bmatrix} 1 & & & & & & \\ & \ddots & & & & & \\ & & \gamma_1^{-1} & & \gamma_2^{-1} & & \\ & & & \ddots & & & \\ & & & & 1 & & \\ & & & & & \ddots & \\ & & & & & & 1 \end{bmatrix}, \end{equation} where $ V\mathring{\pA} $ corresponds to dividing the $j_1$-th row of $ \mathring{\pA} $ by $\gamma_1$, then adding $\gamma_2^{-1}$ times the $j_2$-th row to the $j_1$-th row of $ \mathring{\pA} $. Taking symmetric operations on both rows and columns of $\mathring{\pA}(e^{-i\xi})$, we define $ \breve{\pA}(e^{-i\xi}) := V \mathring{\pA}(e^{-i\xi}) V^\star$. Then the lower-right $K \times K$ submatrix of $ \breve{\pA}(e^{-i\xi})$ becomes: \begin{align*} &\breve{\pA}_{(n-K+1):n, (n-K+1):n}(e^{-i\xi}) = \begin{bmatrix} \lambda_{n-K+1}(\xi) & & & & & & \\ & \ddots & & & & & \\ & & 0 & & -\gamma_2(\xi - \xi_0) & & \\ & & & \ddots & & & \\ & & -\gamma_2(\xi - \xi_0) & & -\gamma_2^2(\xi - \xi_0) & & \\ & & & & & \ddots & \\ & & & & & & \lambda_n(\xi) \end{bmatrix} + \bo((\xi-\xi_0)^2). \end{align*} Thus, the $(j_1, j_1)$-diagonal entry of $\breve{\pA}(e^{-i\xi}) $ is $\bo((\xi-\xi_0)^2)$ as $\xi\rightarrow \xi_0$. 
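The cancellation achieved by $V$ can be checked directly on the first-order coefficients: with $\Lambda_1 := \diag(\gamma_1^2, -\gamma_2^2)$ the first-order block at the $(j_1, j_2)$ positions, conjugating by the $2\times 2$ core $\begin{bmatrix}\gamma_1^{-1} & \gamma_2^{-1}\\ 0 & 1\end{bmatrix}$ of $V$ produces a zero $(1,1)$ entry together with the $-\gamma_2$ and $-\gamma_2^2$ entries displayed above (the values of $\gamma_1, \gamma_2$ below are arbitrary illustrative choices):

```python
import numpy as np

g1, g2 = 1.7, 0.4              # arbitrary nonzero reals gamma_1, gamma_2
L1 = np.diag([g1**2, -g2**2])  # first-order coefficients of lambda_{j1}, lambda_{j2}
V = np.array([[1/g1, 1/g2],
              [0.0,  1.0]])    # the 2x2 core of the matrix V in (eq:V)
B = V @ L1 @ V.T               # conjugation V Lambda_1 V^* (V is real here)
# B == [[0, -g2], [-g2, -g2**2]]: the (j1, j1) first-order term is cancelled
```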
From the definition of $ \breve{\pA} $, we see that, similar to $\mathring{\pA}(e^{-i\xi})$, the $j_1$-th and the $j_2$-th rows, as well as the $j_1$-th and the $j_2$-th columns of $\breve{\pA}(e^{-i\xi})$ are still $\bo((\xi-\xi_0))$ as $\xi \rightarrow \xi_0$. Finally, we return to Laurent polynomials. The matrix $ \breve{\pA}(z) $ of Laurent polynomials is written as $$\breve{\pA}(z) = V \mathring{\pA}(z) V^\star = V W^{-1}(\xi_0) \pA(z)W^{-\star}(\xi_0) V^\star. $$ Since the $j_1$-th row and the $j_1$-th column of $ \breve{\pA}(e^{-i\xi}) $ are $ \bo((\xi-\xi_0)) $, we know that $(z-z_0)$ divides both the $j_1$-th row and the $j_1$-th column of $ \breve{\pA}(z) $. Also, the fact that the $(j_1, j_1)$ entry of $\breve{\pA}(e^{-i\xi}) $ is $\bo((\xi-\xi_0)^2)$ implies that $(z-z_0)^2$ divides the $(j_1, j_1)$ entry of $\breve{\pA}(z)$. So we can factor out $(z-z_0)$ from the $j_1$-th row and $(z-z_0)^\star$ from the $j_1$-th column simultaneously to get $$ \breve{\pA}(z) = \pD_{j_1, 1}(z)\widetilde{\pA}(z)\pD_{j_1, 1}^\star(z), $$ for some $n \times n$ Hermite matrix $\widetilde{\pA}(z)$ of Laurent polynomials, where $\pD_{j_1, 1}(z)$ is defined in \eqref{eq:Diagk}. Thus, we have \begin{align*} \pA(z) =& W(\xi_0)V^{-1}\breve{\pA}(z)V^{-\star}W^\star(\xi_0) = W(\xi_0)V^{-1} \pD_{j_1, 1}(z)\widetilde{\pA}(z)\pD_{j_1, 1}^\star(z) V^{-\star}W^\star(\xi_0) = \pU(z) \widetilde{\pA}(z) \pU^\star(z), \end{align*} where $\pU(z):= W(\xi_0)V^{-1}\pD_{j_1, 1}(z) $. Observe that \begin{align*} \len(\det(\widetilde{\pA}(z))) = \len(\det(\pA(z))) - \len(\det(\pU(z))) - \len(\det(\pU^\star(z))) = \len(\det(\pA(z))) - 2. \end{align*} So $ \pU(z) $ and $ \widetilde{\pA}(z) $ satisfy all the requirements. This proves Theorem~\ref{thm:extract2}. \end{proof} Now we are ready to prove Theorem~\ref{thm:const-sig}.
\begin{proof}[Proof of Theorem \ref{thm:const-sig}] If $\len(\det(\pA(z)))>0$, then $\pA(z)$ has some elementary divisor $(z-z_0)^\alpha$ with $z_0\neq 0$ and $ \alpha \in \N $. Let $\pA_0(z):=\pA(z)$. For $j\geqslant 0$, if $\pA_j(z)$ has some elementary divisor $(z-z_0)^\alpha$ with $z_0\in \C\setminus\T\setminus\{0\}$ or $\alpha>1$, apply Theorem~\ref{thm:extract1} to get a factorization of $ \pA_j(z) $ as $\pA_j(z)=\pU_{j+1}(z)\pA_{j+1}(z)\pU^\star_{j+1}(z) $, for some $n\times n$ matrices $\pU_{j+1}(z)$ and $\pA_{j+1}(z)$ of Laurent polynomials satisfying $\pA_{j+1}^\star(z) = \pA_{j+1}(z)$ and $ \len(\det(\pA_{j+1}(z))) < \len(\det(\pA_{j}(z))) $. If all the elementary divisors $(z-z_0)^\alpha$ of $\pA_j(z)$ have degree $\alpha =1$ and $z_0\in\T$, we can apply Theorem~\ref{thm:extract2} to still get the factorization $\pA_j(z)=\pU_{j+1}(z)\pA_{j+1}(z)\pU^\star_{j+1}(z) $ with $\pA_{j+1}^\star(z) = \pA_{j+1}(z)$ and $ \len(\det(\pA_{j+1}(z))) < \len(\det(\pA_{j}(z))) $. Replace $ j$ by $j+1 $ and repeat the above steps, until $ \len(\det(\pA_j(z)))=0 $. This iteration stops after finitely many steps, since $\len(\det(\pA(z)))$ is finite and $\len(\det(\pA_j(z)))$ is strictly decreasing after each step. Hence, we get a factorization $$ \pA(z) = \pU_{1}(z)\cdots\pU_{k}(z) \pA_{k}(z)\pU^\star_{k}(z)\cdots\pU^\star_{1}(z),$$ where $ \pA_{k}(z)$ has no elementary divisors, i.e., $\len(\det(\pA_{k}(z)))=0$. In this case, Theorem \ref{thm:unimodular} shows that $\pA_k(z)$ can be factorized as $\pA_k(z)=\pU_{k+1}(z)\pD\pU^\star_{k+1}(z) $ for some $n\times n$ matrix $\pU_{k+1}(z)$ of Laurent polynomials, where $\pD=\diag(\mathbf{I}_{\nu_1}, -\mathbf{I}_{\nu_2})$ is an $n \times n$ constant diagonal matrix for some nonnegative integers $ \nu_1$ and $ \nu_2 $ satisfying $ \nu_1 + \nu_2 = n $. Define $\pU(z):= \prod_{j=1}^{k+1}\pU_j(z)$. Then $ \pA(z)=\pU(z)\pD\pU^\star(z) $ holds. Also, notice that for all $ z_0 \in \T\setminus\sigma(\pA) $, $ \pU(z_0) $ is a nonsingular matrix.
By Sylvester's law of inertia, $ \nu_1 = \nu_+(\pA(z_0)) = \nu_+ $ and $ \nu_2 = \nu_-(\pA(z_0)) = \nu_- $. This completes the proof. \end{proof} All the steps in the above proof of Theorem~\ref{thm:const-sig}, except finding $W(\xi)$ in \eqref{eq:AnalyticEVDnew}, are constructive. The existence of $W(\xi)$ in \eqref{eq:AnalyticEVDnew} is guaranteed by \cite[Theorem~S6.3]{GohLanRod:1982Matrix}, which is not constructive and is very complicated. We now provide the following simple constructive algorithm to realize the generalized spectral factorization in Theorem~\ref{thm:const-sig}. Steps (S3) and (S5) simply follow the proof of Theorem~\ref{thm:extract1} and Algorithm~\ref{algo:unimodular}, respectively. We use step (S4) to find the factorization in Theorem~\ref{thm:extract2}. The idea of step (S4) is that for $ z_0 = e^{-i\xi_0}\in\T $, where all the elementary divisors have simple roots, we can easily calculate the first two coefficient matrices of the Taylor expansion $ \pA(e^{-i\xi}) = C_0 + C_1 (\xi - \xi_0) + \bo((\xi-\xi_0)^2) $ as $ C_0 = \pA(z_0) $ and $ C_1 = -i z_0 \pA'(z_0) $. Then, when restricted to the null space of $ C_0 $, the matrix $ C_1 $ must have half of its eigenvalues positive and the other half negative. Thus, we can find a nonsingular matrix $ V $ such that $ VC_1 V^\star $ has a zero at some diagonal position, and hence we can factor out $ (z - z_0) $ from the corresponding row and $ (z-z_0)^\star $ from the corresponding column of $ V\pA(z)V^\star $ simultaneously. See the Appendix for the proof of the following algorithm. \begin{algorithm} \label{algo:Const-Sig} Input an $n\times n$ Hermite matrix $\pA(z)$ of Laurent polynomials with constant signature on $z\in\T\bs\sigma(\pA)$ such that $\det(\pA(z))\not\equiv 0$. \begin{enumerate} \item[(S0)] Initialization. Set $\widetilde{\pA}(z):=\pA(z)$ and $\pU(z):= \mathbf{I}_n$.
\item[(S1)] Compute the Smith Normal Form $\pD(z) = \diag(\pd_1(z), \ldots, \pd_n(z))$ of $\widetilde{\pA}(z)$ and get a decomposition $\widetilde{\pA}(z) = \mathsf{E}(z)\pD(z)\mathsf{F}(z)$, where $\mathsf{E}(z)$ and $\mathsf{F}(z)$ are unimodular matrices of Laurent polynomials. \item[(S2)] If $\pD(z)$ is a constant matrix, then go to (S5). Otherwise, redefine \begin{equation} \label{eq:SNFchange} \widetilde{\pA}(z):=\mathsf{E}^{-1}(z)\widetilde{\pA}(z)\mathsf{E}^{-\star}(z) = \diag(\pd_1(z), \ldots, \pd_n(z))\mathsf{F}(z)\mathsf{E}^{-\star}(z), \end{equation} and replace $\pU(z)$ by $\pU(z)\mathsf{E}(z)$. \item[(S3)] {\bfseries For $j$ from $1$ to $n$:}\\ \hspace*{0.5 cm} Factorize $\pd_j(z)= \prod_{k=1}^{n_j}(z-z_{j,k})^{\alpha_{j,k}}. $\\ \hspace*{0.5 cm} {\bfseries If} there exists some factor $ (z-z_{j,k})^{\alpha_{j,k}} $ with $z_{j,k} \in (\C\setminus\{0\})\setminus \T$: \begin{enumerate} [leftmargin=0.7 in] \item redefine $\pU(z) $ by multiplying its $j$-th column by $(z-z_{j,k})^{\alpha_{j,k}}$; \item redefine $\widetilde{\pA}(z)$ by dividing its $j$-th row by $(z-z_{j,k})^{\alpha_{j,k}} $, and dividing its $j$-th column by $(z^{-1}-\overline{z_{j,k}})^{\alpha_{j,k}}$; \item {\bfseries break} the {\bfseries for} loop, and go back to (S1); \end{enumerate} {\bfseries else if} there exists some factor $ (z-z_{j,k})^{\alpha_{j,k}} $ with $z_{j,k} \in \T$ and $ \alpha_{j,k} \geqslant 2$: \begin{enumerate}[leftmargin=0.7 in] \item redefine $\pU(z) $ by multiplying its $j$-th column by $(z-z_{j,k})^{\lfloor\alpha_{j,k}/2\rfloor}$; \item redefine $\widetilde{\pA}(z)$ by dividing its $j$-th row by $(z-z_{j,k})^{\lfloor\alpha_{j,k}/2\rfloor} $, and dividing its $j$-th column by $(z^{-1}-\overline{z_{j,k}})^{\lfloor\alpha_{j,k}/2\rfloor}$; \item {\bfseries break} the {\bfseries for} loop, and go back to (S1); \end{enumerate} \hspace*{0.5 cm} {\bfseries end if;}\\ {\bfseries end for;} \item[(S4)] If the {\bfseries for} loop does not break at any of the conditions
in (S3), then all the elementary divisors have degree $1$ with roots on $\T$. Pick one of the elementary divisors $(z-z_0)$. Suppose that it is contained in the last $K$ invariant polynomials $\pd_{n-K+1}(z), \ldots, \pd_n(z)$: \begin{enumerate} \item From \eqref{eq:SNFchange}, we see that the last $K$ columns and the last $K$ rows of $\widetilde{\pA}(z_0)$ must be $0$. Consider the constant Hermite matrix $-iz_0\widetilde{\pA}'(z_0)$. Take its lower right $K \times K$ submatrix, denoted as $\pA_K$, and find its eigenvalue decomposition $\pA_K:=\pU_1\Gamma\pU^\star_1$, for some unitary matrix $\pU_1$ and $\Gamma=\diag(\gamma_1^2, -\gamma_2^2,\ldots, \gamma_K)$. Then the eigenvalues in $\Gamma$ must all be nonzero, with $K/2$ of them positive and $K/2$ of them negative. Arrange them such that the first one is positive and the second one is negative. Redefine $\widetilde{\pA}(z):=\diag(\mathbf{I}_{n-K}, \pU_1^{-1})\widetilde{\pA}(z)\diag(\mathbf{I}_{n-K}, \pU_1^{-\star})$ and $ \pU(z):=\pU(z)\diag(\mathbf{I}_{n-K}, \pU_1)$. \item Take $\pU_2:=\diag\left(\mathbf{I}_{n-K}, \begin{bmatrix} \gamma_1^{-1} & \gamma_2^{-1} \\ 0 & 1 \end{bmatrix}, \mathbf{I}_{K-2} \right)$. Redefine $\widetilde{\pA}(z):=\pU_2\widetilde{\pA}(z)\pU^\star_2$ and $\pU(z):=\pU(z)\pU_2^{-1}$. \item Redefine $\widetilde{\pA}(z)$ by dividing its $(n-K+1)$-th row by $(z-z_0)$ and dividing its $(n-K+1)$-th column by $(z^{-1}-\overline{z_0})$. Redefine $\pU(z) $ by multiplying its $(n-K+1)$-th column by $(z-z_0)$. \end{enumerate} Go back to (S1). \item[(S5)] Finalize: Since $\widetilde{\pA}(z)$ has no elementary divisor, apply Algorithm \ref{algo:unimodular} to get the factorization $\widetilde{\pA}(z)= \widetilde{\pU}(z)\pD\widetilde{\pU}^\star(z)$. Redefine $\pU(z):=\pU(z)\widetilde{\pU}(z)$. Output $\pU(z)$ and $\pD$. Then $\pA(z)=\pU(z)\pD\pU^\star(z)$ must hold.
\end{enumerate} \end{algorithm} Let us make some remarks and record some consequences of Theorem~\ref{thm:partialMultiplicity}. For a Hermite matrix $ \pA(z) $ of Laurent polynomials, although we know from Theorem~\ref{thm:partialMultiplicity} that the analytic eigenvalues $ \lambda_1(\xi),\ldots, \lambda_n(\xi) $ of $ \pA(e^{-i\xi}) $ have some relationship to the invariant polynomials of $ \pA(z) $, we cannot expect $ \lambda_1(\xi),\ldots, \lambda_n(\xi) $ to be Laurent polynomials in general. In fact, the following example shows that the analytic functions $ \lambda_1(\xi),\ldots, \lambda_n(\xi) $ might not even be $ 2\pi$-periodic functions of $ \xi\in\R $. \begin{example} \label{ex:AnalyticalEig} {\rm Consider the same matrix $ \pA(z) $ as in Example~\ref{ex:SingleRoot}. Solving $ \det(\pA(e^{-i\xi}) - \lambda \mathbf{I}_{2})=0 $, we can find two analytic functions that are eigenvalues of $ \pA(e^{-i\xi}) $, namely $ \lambda_1(\xi) = -\lambda_2(\xi) = 4\sin(\xi/2) $. They are both $ 4\pi $-periodic functions of $ \xi\in \R $, and we cannot find two eigenvalues of $ \pA(e^{-i\xi}) $ that are both analytic and $ 2\pi $-periodic functions of $ \xi\in\R $. Also, as calculated in Example~\ref{ex:SingleRoot}, the two invariant polynomials of $ \pA(z) $ are $ \pd_1(z) = \pd_2(z) = z-1 $. Taking $ \xi_0=0 $ and $ z_0 = e^{-i\xi_0}= 1 $, we can calculate $ \alpha_j := \mz(\pd_j(z), 1) = 1 $ and $ \beta_j := \mz(\lambda_j(\xi), 0) = 1 $ for $ j = 1,2 $. } \end{example} Since the sequence $ \{\beta_j\}_{j=1}^n $ in Theorem~\ref{thm:partialMultiplicity} is related to the sign changes of the eigenvalues $ \lambda_j(\xi) $, we have the following corollary for positive semidefinite matrices $ \pA(z) $ of Laurent polynomials. \begin{cor} \label{cor:SPDEvenEleDiv} Suppose that $\pA(z)$ is a Hermite matrix of Laurent polynomials such that $ \pA(z)\geqslant 0 $ for all $z\in \T$.
Then all its elementary divisors $(z-z_0)^{\alpha}$ with $z_0\in \T$ must have even degree, i.e., $\alpha\in 2\Z$. \end{cor} \begin{proof} Since $ z_0\in \T $, we can find some $ \xi_0\in\R $ such that $ z_0 = e^{-i\xi_0} $. Suppose that $ \lambda_1(\xi),\ldots,\lambda_n(\xi) $ are the eigenvalues of $ \pA(e^{-i\xi}) $ which are also analytic functions of $ \xi\in\R $. Define the sequences $ \{\alpha_j\}_{j=1}^n $ and $ \{\beta_j\}_{j=1}^n $ as in Theorem~\ref{thm:partialMultiplicity}. By Theorem~\ref{thm:partialMultiplicity}, we must have $ \{\beta_j\}_{j=1}^n = \{\alpha_j\}_{j=1}^n $. Since $ \pA(e^{-i\xi}) $ is positive semidefinite for all $ \xi\in\R $, no $ \lambda_j(\xi) $ changes sign across $ \xi_0 $, and we conclude that \[ \beta_j = \mz(\lambda_j(\xi), \xi_0) \in 2\Z, \qquad \forall~ j = 1,\ldots, n. \] So $ \alpha_j \in 2\Z $ for all $ j = 1, \ldots, n $. From the definition of $ \alpha_j $, we know that $ \{\alpha_j\}_{j=1}^n $ are just the degrees of the elementary divisors $ (z-z_0)^\alpha $ in each invariant polynomial. So all such $ \alpha $ satisfy $ \alpha \in 2\Z $. \end{proof} \section{Proof of Theorem~\ref{thm:nonconst-sig} on Generalized Matrix Spectral Factorization} In this section, we prove Theorem~\ref{thm:nonconst-sig}. To prove the necessity part of Theorem~\ref{thm:nonconst-sig}, we need the following result. \begin{lemma} \label{thm:SylvesterInertia} Suppose that an $n \times n$ Hermite matrix $A$ can be decomposed as \begin{equation} \label{eq:SylvesterConst} A = U \begin{bmatrix} \mathbf{I}_{m_+} & \\ & -\mathbf{I}_{m_-} \end{bmatrix} U^\star, \end{equation} where $U$ is an $n \times m$ matrix and $\mathbf{I}_{m_+}$, $\mathbf{I}_{m_-}$ are the identity matrices of size $m_+$ and $m_-$, respectively, such that $m_++m_- = m$. Then $$m_+ \geqslant \nu_+(A), \qquad m_- \geqslant \nu_-(A).$$ \end{lemma} \begin{proof} First, we consider the case that $A$ is nonsingular.
In this case, the decomposition \eqref{eq:SylvesterConst} forces all three matrices on the right hand side of \eqref{eq:SylvesterConst} to have rank at least $ n $. So $m\geqslant n$ and $U$ must have full row rank. If $m=n$, then $U$ is a nonsingular square matrix. By Sylvester's law of inertia, $$m_+ = \nu_+(A),\qquad m_- = \nu_-(A).$$ If $m > n$, since $U$ has full row rank, we can append $ m-n $ rows $V$ to $U$ so that $\widetilde{U} := \begin{bmatrix} U \\ V \end{bmatrix}$ is an $m \times m$ nonsingular square matrix. Then the $m \times m$ matrix $\widetilde{A} := \widetilde{U} \begin{bmatrix} \mathbf{I}_{m_+} & \\ & -\mathbf{I}_{m_-} \end{bmatrix} \widetilde{U}^\star$ has $A$ at its top left corner: \begin{equation} \label{eq:Sylvestertilde} \widetilde{A} := \widetilde{U}\begin{bmatrix} \mathbf{I}_{m_+} & \\ & -\mathbf{I}_{m_-} \end{bmatrix} \widetilde{U}^\star = \begin{bmatrix} U \\ V \end{bmatrix} \begin{bmatrix} \mathbf{I}_{m_+} & \\ & -\mathbf{I}_{m_-} \end{bmatrix} \begin{bmatrix} U^\star & V^\star \end{bmatrix} =\begin{bmatrix} A & B^\star \\ B & C \end{bmatrix} \end{equation} for some $(m-n)\times n$ matrix $B$ and some $(m-n)\times (m-n)$ matrix $C$. Define the nonsingular $m\times m$ matrix $W := \begin{bmatrix} \mathbf{I}_n & \mathbf{0} \\ -BA^{-1} & \mathbf{I}_{m-n} \end{bmatrix}$, and let $ \mathring{A} := W\widetilde{A}W^\star $. Plugging \eqref{eq:Sylvestertilde} into $ \mathring{A} $, we can directly calculate that \begin{equation} \label{eq:Sylvesterring} \mathring{A} := W\widetilde{A}W^\star = \begin{bmatrix} \mathbf{I}_n & \mathbf{0} \\ -BA^{-1} & \mathbf{I}_{m-n} \end{bmatrix} \begin{bmatrix} A & B^\star \\ B & C \end{bmatrix} \begin{bmatrix} \mathbf{I}_n & -A^{-\star}B^\star \\ \mathbf{0} & \mathbf{I}_{m-n} \end{bmatrix} = \begin{bmatrix} A & \mathbf{0} \\ \mathbf{0} & D \end{bmatrix}, \end{equation} where the $(m-n)\times (m-n)$ matrix $D := C-BA^{-1}B^\star$.
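The conclusion of Lemma~\ref{thm:SylvesterInertia}, namely $m_+\geqslant\nu_+(A)$ and $m_-\geqslant\nu_-(A)$, can be sanity-checked numerically. The sketch below uses random illustrative data with assumed sizes $n=2$, $m_+=2$, $m_-=1$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m_plus, m_minus = 2, 2, 1
m = m_plus + m_minus

# random n x m complex matrix U and signature matrix diag(I_{m_+}, -I_{m_-})
U = rng.standard_normal((n, m)) + 1j*rng.standard_normal((n, m))
S = np.diag([1.0]*m_plus + [-1.0]*m_minus)
A = U @ S @ U.conj().T                   # Hermite by construction

ev = np.linalg.eigvalsh(A)
nu_plus, nu_minus = int((ev > 1e-9).sum()), int((ev < -1e-9).sum())
# lemma: m_plus >= nu_plus and m_minus >= nu_minus
```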
From \eqref{eq:Sylvesterring}, we see that the eigenvalues of $\mathring{A}$ are just the eigenvalues of $A$ combined with the eigenvalues of $D$. So \begin{equation} \label{eq:sigAring} \nu_+(\mathring{A}) \geqslant \nu_+(A),\qquad \nu_-(\mathring{A}) \geqslant \nu_-(A). \end{equation} Also, from the definition of $ \widetilde{A} $ and $ \mathring{A} $ in \eqref{eq:Sylvestertilde} and \eqref{eq:Sylvesterring}, we deduce that \begin{align} \label{eq:Sylvester} \mathring{A} = W \widetilde{A} W^\star = W\widetilde{U} \begin{bmatrix} \mathbf{I}_{m_+} & \\ & -\mathbf{I}_{m_-} \end{bmatrix} \widetilde{U}^\star W^\star = W\widetilde{U} \begin{bmatrix} \mathbf{I}_{m_+} & \\ & -\mathbf{I}_{m_-} \end{bmatrix} (W\widetilde{U})^\star. \end{align} Since $W\widetilde{U}$ is an $ m \times m $ nonsingular matrix, by Sylvester's law of inertia again, \eqref{eq:Sylvester} implies that \begin{equation} \label{eq:sigAring2} \nu_+(\mathring{A}) = m_+ ,\qquad \nu_-(\mathring{A}) = m_-. \end{equation} Combining \eqref{eq:sigAring} and \eqref{eq:sigAring2}, we get $m_+ \geqslant \nu_+(A) $ and $ m_- \geqslant \nu_-(A)$. This proves the lemma for the case that $A$ is nonsingular. For the case that $A$ is singular, we can find its eigenvalue decomposition first: $$PAP^\star = \begin{bmatrix} \Lambda & \\ & \mathbf{0} \end{bmatrix},$$ where $\Lambda$ is a $k\times k$ nonsingular diagonal matrix containing all the nonzero eigenvalues of $A$ and $P$ is an $n \times n$ unitary matrix. Plugging \eqref{eq:SylvesterConst} into the above decomposition: $$ \begin{bmatrix} \Lambda & \\ & \mathbf{0} \end{bmatrix} =PAP^\star =PU \begin{bmatrix} \mathbf{I}_{m_+} & \\ & -\mathbf{I}_{m_-} \end{bmatrix} U^\star P^\star = Q\begin{bmatrix} \mathbf{I}_{m_+} & \\ & -\mathbf{I}_{m_-} \end{bmatrix} Q^\star, $$ where $Q:=PU$. We define $\widetilde{Q}$ by removing the last $n-k$ rows of $Q$. 
Then the above equation implies: $$ \Lambda = \widetilde{Q} \begin{bmatrix} \mathbf{I}_{m_+} & \\ & -\mathbf{I}_{m_-} \end{bmatrix} \widetilde{Q}^\star. $$ Since $\Lambda$ is nonsingular, we know from the previously proved case that $$m_+ \geqslant \nu_+(\Lambda) = \nu_+(A),\qquad m_- \geqslant \nu_-(\Lambda) = \nu_-(A).$$ This proves the lemma for the case that $A$ is a singular matrix. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:nonconst-sig}] Necessity. $ \pA^\star(z) = \pA(z) $ implies that for all $ z_0\in \T $, $ \pA(z_0) $ is a Hermite matrix and $ \pU^\star(z_0) = (\pU(z_0))^\star $ holds. Hence, we know from Lemma~\ref{thm:SylvesterInertia} that the decomposition $ \pA(z) = \pU(z)\diag(\mathbf{I}_{m_1}, -\mathbf{I}_{m_2})\pU^\star(z) $ yields $ m_1 \geqslant \nu_+(\pA(z_0)) $ and $ m_2 \geqslant \nu_-(\pA(z_0)) $. Considering all $ z_0\in \T $, we see that \eqref{eq:LargeSig} holds. This proves the necessity part of Theorem~\ref{thm:nonconst-sig}. To prove the sufficiency part, we first consider the case that $ \det(\pA(z))\not\equiv 0 $, where $ \sigma(\pA) $ is a finite subset of $ \C\bs\{0\} $. The degenerate case is proved later using the Smith Normal Form of $ \pA(z) $. Suppose that the claim holds for $$ m_1 = \max_{z\in\T}\nu_+(\pA(z)), \qquad m_2 = \max_{z\in\T}\nu_-(\pA(z)). $$ Then $ \pA(z) = \widetilde{\pU}(z)\widetilde{\pD}\widetilde{\pU}^\star(z) $ is obviously true with $ \widetilde{\pU}(z) := [\mathbf{0}_{n\times s_1}, \pU(z), \mathbf{0}_{n\times s_2}] $ and $ \widetilde{\pD}:= \diag(\mathbf{I}_{s_1+m_1},-\mathbf{I}_{s_2+m_2}) $, for any integers $ s_1, s_2 \geqslant 0 $. Therefore, we only need to prove the claim for $m_1$ and $m_2$ equal to the lower bounds in \eqref{eq:LargeSig}. Define \begin{equation} \label{eq:n+n-} n_+:= \max_{z\in\T}\nu_+(\pA(z)),\qquad n_-:= \max_{z\in\T}\nu_-(\pA(z)). 
\end{equation} If the signature of $ \pA(z) $ is constant on $\T\bs\sigma(\pA)$, that is, $ \nu_+(\pA(z)) $ and $ \nu_-(\pA(z)) $ are both constant on $ z\in \T\bs\sigma(\pA)$, then by Lemma~\ref{lem:maxminT}, for all $ z_0\in \T\bs\sigma(\pA) $, \[ \nu_+(\pA(z_0)) = \max_{z\in \T \bs\sigma(\pA)}\nu_+(\pA(z)) = n_+, \qquad \nu_-(\pA(z_0)) = \max_{z\in \T \bs\sigma(\pA)}\nu_-(\pA(z)) = n_-. \] Hence, $ n_+ + n_- = n $ and the result is proved by Theorem \ref{thm:const-sig}. If $\operatorname{sig}(\pA(z))$ is not constant on $\T\bs\sigma(\pA)$, we have $m_0:= n_+ + n_- > n$. In the following, we will construct $(m_0 - n)$ Laurent polynomials $\mu_1(z),\ldots, \mu_{m_0 - n}(z)$ such that the Hermite matrix \begin{equation} \label{eq:extend} \widetilde{\pA}(z):= \diag(\pA(z),\mu_1(z),\ldots,\mu_{m_0 - n}(z)) \end{equation} has constant signature on $\T\setminus\sigma(\widetilde{\pA})$. Since $ \det(\pA(z)) $ is a Laurent polynomial that is not identically zero, $\{z_1,\ldots, z_K\}:=\sigma(\pA) \cap \T $ contains only finitely many points of $\T$. So $ \{z_1,\ldots, z_K\} $ cuts $\T$, the unit circle in the complex plane, into $K$ connected open segments $\Gamma_1, \ldots, \Gamma_K $, such that \begin{enumerate} \item $\big(\bigcup_{j=1}^K \Gamma_j\big) \cup \{z_l\}_{l=1}^K = \T $; \item Pairwise disjoint: $\Gamma_j \cap \{z_l\}_{l=1}^K $ is empty, and $\Gamma_j \cap \Gamma_k$ is empty for all $j,k=1,\ldots,K$ with $j\neq k$; \item Both endpoints of $\Gamma_j $ are contained in $\{z_l\}_{l=1}^K$; denote them by $z_{j,1}$ and $z_{j,2}$, for all $j=1,2,\ldots,K$. \end{enumerate} We can choose all the eigenvalues $\lambda_1(\xi), \ldots, \lambda_n(\xi)$ of $\pA(e^{-i\xi})$ to be analytic functions of $\xi\in\R$. On each $ \Gamma_j $, since $ \det(\pA(e^{-i\xi})) = \prod_{k=1}^{n}\lambda_k(\xi) \neq 0 $, none of the $ \lambda_k(\xi) $ vanishes.
Being nonzero continuous functions on open arcs, the $ \lambda_k(\xi) $ cannot change sign within each $ \Gamma_j $. Thus $\nu_+(\pA(z))$ and $\nu_-(\pA(z))$ remain constant on each $\Gamma_j$. For each $\Gamma_j$, define a function $$ \eta_j(z):= (z_{j,1}z_{j,2})^{-\frac{1}{2}}z^{-1}(z-z_{j,1})(z-z_{j,2}), \qquad j = 1,\ldots, K. $$ The square root of $ z_{j,1}z_{j,2} $ is taken in the complex plane, where the two possible choices differ only by a sign. For both choices, we can directly verify that $ \eta_j^\star(z) = \eta_j(z) $. So $ \eta_j(z) $ is real for all $ z\in \T $. Since the signature of $ \pA(z) $ is not constant for all $ z\in\T\bs \sigma(\pA) $, $ \T $ contains more than one open segment $ \Gamma_j $. So $ z_{j,1}\neq z_{j,2} $ and both $ z_{j,1} $ and $ z_{j,2} $ are simple roots of $ \eta_j(z) $. Hence $ \eta_j(z) $ has different signs on the two sides of $ z_{j,1} $ and $ z_{j,2} $ on $ \T $. Therefore, when calculating the square root of $ z_{j,1}z_{j,2} $, we can choose the branch such that $ \eta_j(z)>0 $ for all $ z\in \Gamma_j $ and $ \eta_j(z) < 0 $ for all $ z\in \T\setminus \Gamma_j\setminus \{z_{j,1}, z_{j,2}\} $. In summary, $ \eta_j(z) $ satisfies \begin{enumerate} \item $\eta_j(z)$ is real for all $z\in \T$; \item $\eta_j(z)>0 $ for all $z\in \Gamma_j$ and $\eta_j(z)<0 $ for all $z\in \Gamma_k $, $ k\neq j $. \end{enumerate} Let us construct the functions $\mu_k(z)$ recursively for $k=1,\ldots, m_0-n $, such that \eqref{eq:extend} has constant signature on $ z\in\T\setminus\sigma(\widetilde{\pA}) $. Start with $\pA_0(z):=\pA(z)$ and $k=1$.
For the following construction to work, we only need to verify two conditions at the start of each new iteration: \begin{enumerate}[label=(\roman*)] \item $ \pA_{k-1}(z) $ is a Hermite matrix of Laurent polynomials satisfying $ \max_{z\in\T}\nu_+(\pA_{k-1}(z)) = n_+$ and $\max_{z\in\T}\nu_-(\pA_{k-1}(z)) = n_- $, where $n_+$ and $n_-$ are defined in \eqref{eq:n+n-}. \item $ k \leqslant m_0 - n $. \end{enumerate} Both conditions obviously hold for $ k=1 $. Define an index set $J:=\{j\setsp \nu_-(\pA_{k-1}(z))=n_- ~\mbox{for all}~ z\in \Gamma_j \}$. Now, take $$ \mu_k(z):= (-1)^{|J|+1}\prod_{j\in J}\eta_j(z), \qquad \pA_k(z):=\begin{bmatrix} \pA_{k-1}(z) & \\ & \mu_k(z) \end{bmatrix}. $$ Since all $ \eta_j(z) $ are real functions on $ z\in\T $, $\mu_k^\star(z)=\mu_k(z)$ is also real on $\T$. From $ \pA_{k-1}^\star(z) = \pA_{k-1}(z) $ in item (i), the matrix $\pA_k(z)$ is also a Hermite matrix of Laurent polynomials. By the definition of $\mu_k(z)$, we can directly verify from the signs of $ \eta_j(z) $ that $\mu_k(z)> 0$ for all $z\in \cup_{j\in J} \Gamma_j$, and $\mu_k(z)<0$ for all $z\in \cup_{j\notin J}\Gamma_j$. For $z\in\T$, the eigenvalues of $\pA_k(z)$ are just all the eigenvalues of $\pA_{k-1}(z)$, combined with $\mu_k(z)$. Now, let us calculate $\nu_+(\pA_k(z))$ and $\nu_-(\pA_k(z))$ on each $\Gamma_j$. \begin{itemize} \item For $z\in \bigcup_{j\in J} \Gamma_j$, since $\mu_k(z)>0$, we have $\nu_-(\pA_k(z)) = \nu_-(\pA_{k-1}(z)) = n_-$. By item (ii), we know that $k\leqslant m_0-n = n_++n_--n$, and hence \begin{align*} \nu_+(\pA_{k}(z)) &= (n+k)-\nu_-(\pA_{k}(z)) = (n+k)-n_- \leqslant n+ (n_++n_--n)-n_- = n_+ . \end{align*} \item For $z\in\bigcup_{j\notin J}\Gamma_j$, since $\mu_k(z)<0$ and $\nu_-(\pA_{k-1}(z))<n_- $, we have $ \nu_-(\pA_k(z)) = \nu_-(\pA_{k-1}(z))+1\leqslant n_-. $ Meanwhile, $ \nu_+(\pA_{k}(z)) = \nu_+(\pA_{k-1}(z))\leqslant n_+ $.
\end{itemize} Combining the two cases, we have shown that $$ \max_{z\in\T\bs\sigma(\pA)}\nu_+(\pA_{k}(z)) \leqslant n_+ \qquad \mbox{and}\quad \max_{z\in\T\bs\sigma(\pA)}\nu_-(\pA_{k}(z)) \leqslant n_-. $$ The inequalities in the other direction are obvious, since $$ \max_{z\in\T\bs\sigma(\pA)}\nu_+(\pA_{k}(z)) \geqslant \max_{z\in\T\bs\sigma(\pA)}\nu_+(\pA_{k-1}(z)) = n_+, \qquad \max_{z\in\T\bs\sigma(\pA)}\nu_-(\pA_{k}(z)) \geqslant \max_{z\in\T\bs\sigma(\pA)}\nu_-(\pA_{k-1}(z)) = n_-. $$ So, $ \max_{z\in\T\bs\sigma(\pA)}\nu_+(\pA_{k}(z)) = n_+$ and $\max_{z\in\T\bs\sigma(\pA)}\nu_-(\pA_{k}(z)) = n_- $. According to Lemma~\ref{lem:maxminT}, we have \begin{equation} \label{eq:Sig+-} \max_{z\in\T}\nu_+(\pA_{k}(z)) = n_+,\qquad \max_{z\in\T}\nu_-(\pA_{k}(z)) = n_-. \end{equation} Now we can take $k:=k+1$ and repeat the above procedure recursively to construct all the Laurent polynomials $\mu_1(z),\ldots,\mu_{m_0 - n}(z)$. The equalities in \eqref{eq:Sig+-} guarantee that item (i) always holds in the next iteration. We repeat the construction until item (ii) is violated. Take $\widetilde{\pA}(z):=\pA_{m_0-n}(z)$ to be the last matrix constructed. It is an $m_0 \times m_0$ Hermite matrix of Laurent polynomials still satisfying $$ \max_{z\in\T\setminus\sigma(\widetilde{\pA})}\nu_+(\widetilde{\pA}(z)) = n_+, \qquad \max_{z\in\T\setminus\sigma(\widetilde{\pA})}\nu_-(\widetilde{\pA}(z)) = n_-. $$ Since $n_+ + n_- = m_0$, both $ \nu_+(\widetilde{\pA}(z)) $ and $\nu_-(\widetilde{\pA}(z))$ must be constant for all $ z\in\T\setminus\sigma(\widetilde{\pA})$. Hence, $\operatorname{sig}(\widetilde{\pA}(z))$ is constant on $\T\setminus\sigma(\widetilde{\pA})$.
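As a concrete scalar illustration ($n=1$) of this extension step, take $\pA(z) = z + z^{-1}$ (an assumed toy example): it equals $2\cos\xi$ on $\T$, changes sign at $z=\pm i$, so $n_+=n_-=1$ and $m_0-n=1$. Following the construction, one admissible choice is $\mu_1(z) = -(z+z^{-1})$, and $\diag(\pA(z),\mu_1(z))$ then has constant signature $(1,1)$ on $\T$ off its spectrum:

```python
import numpy as np

p  = lambda z: z + 1/z          # equals 2 cos(xi) on the unit circle
mu = lambda z: -(z + 1/z)       # the extension mu_1 produced by the construction

inertias = set()
for xi in np.linspace(0.0, 2*np.pi, 400, endpoint=False):
    z = np.exp(-1j*xi)
    ev = np.array([p(z).real, mu(z).real])   # eigenvalues of diag(p, mu_1)
    if np.min(np.abs(ev)) > 1e-6:            # stay away from the spectrum z = +-i
        inertias.add((int((ev > 0).sum()), int((ev < 0).sum())))
# inertias == {(1, 1)}: constant signature off the spectrum
```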
By Theorem \ref{thm:const-sig}, there exists an $m_0 \times m_0$ matrix $\widetilde{\pU}(z)$ of Laurent polynomials such that \begin{equation*} \widetilde{\pA}(z) = \widetilde{\pU}(z)\pD\widetilde{\pU}^\star(z) \end{equation*} holds with $\pD = \diag(\mathbf{I}_{n_+}, -\mathbf{I}_{n_-})$ being the $m_0\times m_0$ constant diagonal matrix. From the structure of $\widetilde{\pA}(z)$ in \eqref{eq:extend}, we see that $\pA(z)$ is recovered by deleting the last $(m_0-n)$ rows and the last $ (m_0 - n) $ columns of $\widetilde{\pA}(z)$. So, defining $\pU(z)$ to be the $ n \times m_0$ matrix of Laurent polynomials obtained by deleting the last $(m_0 - n)$ rows of $\widetilde{\pU}(z)$, we get the desired factorization $ \pA(z)=\pU(z)\pD\pU^\star(z)$. This proves the sufficiency part of Theorem~\ref{thm:nonconst-sig} for the case $ \det(\pA(z))\not\equiv 0 $. Now we consider the degenerate case that $ \det(\pA(z))\equiv 0 $. For a matrix $ \pA(z) $ of Laurent polynomials with invariant polynomials $ \pd_1(z), \ldots, \pd_n(z) $, we call the number of $ \pd_j(z) $ that are not identically zero the \emph{general rank} of $ \pA(z) $. Let us write $ \pA(z) $ in its Smith Normal Form: \[ \pA(z) = \mathsf{E}(z) \diag(\pd_1(z),\ldots,\pd_r(z),\mathbf{0}_{(n-r)\times (n-r)}) \mathsf{F}(z), \] where $ r $ is the general rank of $ \pA(z) $, $ \pd_1(z),\ldots,\pd_r(z) $ are the first $ r $ invariant polynomials of $ \pA(z) $ that are not identically zero and $ \mathsf{E}(z), \mathsf{F}(z) $ are unimodular matrices of Laurent polynomials. Define \[ \mathring{\pA}(z):=\mathsf{E}^{-1}(z)\pA(z)\mathsf{E}^{-\star}(z) = \diag(\pd_1(z),\ldots,\pd_r(z),\mathbf{0}_{(n-r)\times (n-r)}) \mathsf{F}(z)\mathsf{E}^{-\star}(z). \] Then $ \mathring{\pA}(z) $ is Hermite and its last $ (n-r) $ rows are zero. This implies that its last $ (n-r) $ columns must also be zero.
Hence, $ \mathring{\pA}(z) = \diag(\widetilde{\pA}(z), \mathbf{0}) $, where $\widetilde{\pA}(z) $ is an $ r \times r $ Hermite matrix of Laurent polynomials. Since the invariant polynomials of $ \mathring{\pA}(z) $ are the same as those of $ \pA(z) $, which are $ \pd_1(z), \ldots, \pd_r(z), 0, \ldots, 0 $, the invariant polynomials of $ \widetilde{\pA}(z) $ must be $ \pd_1(z), \ldots, \pd_r(z) $. So $ \det(\widetilde{\pA}(z)) $ is not identically zero. Also, for all $ z\in\T $, since $ \mathsf{E}^{-1}(z) $ is nonsingular, we get \[ \nu_+(\widetilde{\pA}(z)) = \nu_+(\mathring{\pA}(z)) = \nu_+(\pA(z)), \qquad \nu_-(\widetilde{\pA}(z)) = \nu_-(\mathring{\pA}(z)) = \nu_-(\pA(z)).\] Using the previously proved non-degenerate case, we know that for every $$ m_1 \geqslant \max_{z\in\T}\nu_+(\widetilde{\pA}(z)) = \max_{z\in\T}\nu_+(\pA(z)),\qquad m_2 \geqslant \max_{z\in\T}\nu_-(\widetilde{\pA}(z)) = \max_{z\in\T}\nu_-(\pA(z)),$$ there exists an $r \times (m_1 + m_2)$ matrix of Laurent polynomials $\widetilde{\pU}(z)$ and a constant diagonal matrix $\pD=\diag(\mathbf{I}_{m_1}, -\mathbf{I}_{m_2})$, such that $$ \widetilde{\pA}(z)=\widetilde{\pU}(z)\pD\widetilde{\pU}^\star(z) .$$ Adding $ (n-r) $ rows of zeros to $ \widetilde{\pU}(z) $ yields the $ n \times (m_1 + m_2) $ matrix $ \pV(z) :=\begin{bmatrix} \widetilde{\pU}(z) \\ \mathbf{0}_{(n-r)\times (m_1+m_2)} \end{bmatrix} $. We can directly verify that $\mathring{\pA}(z) = \pV(z)\pD\pV^\star(z) $. Defining $ \pU(z) :=\mathsf{E}(z)\pV(z) $, we see that $ \pA(z)=\pU(z)\pD\pU^\star(z) $ holds. This proves the sufficiency part of Theorem~\ref{thm:nonconst-sig} for the case that $ \det(\pA(z))\equiv 0 $.
\end{proof} \section{Quasi-tight Framelets with a General Dilation Factor} Since the proof of Theorem~\ref{thm:qtf} on quasi-tight framelets is built on Theorems~\ref{thm:const-sig} and \ref{thm:nonconst-sig} for the generalized matrix spectral factorization of Hermite matrices of Laurent polynomials, we can easily generalize Theorem~\ref{thm:qtf} to quasi-tight framelets with an arbitrary dilation factor. Let $ \dm $ be an integer such that $ \dm \geqslant 2 $. Suppose that $ \Theta, a, b_1, \ldots, b_s \in \lp{0} $ and $ \epsilon_1,\ldots, \epsilon_s \in \{-1, 1\} $. We say that $ \{a; b_1, \ldots, b_s\}_{\Theta, (\epsilon_1,\ldots,\epsilon_s)} $ is a \emph{quasi-tight $\dm$-framelet filter bank} if \be \label{Mqtffb} \left[\begin{matrix} \pb_1(\omega^0 z) &\cdots &\pb_s(\omega^0 z)\\ \pb_1(\omega^1 z) &\cdots &\pb_s(\omega^1 z)\\ \vdots & \ddots & \vdots \\ \pb_1(\omega^{\dm-1} z) &\cdots &\pb_s(\omega^{\dm-1} z) \end{matrix}\right] \left[\begin{matrix} \eps_1 & &\\ &\ddots &\\ & &\eps_s\end{matrix}\right] \left[\begin{matrix} \pb_1(\omega^0 z) &\cdots &\pb_s(\omega^0 z)\\ \pb_1(\omega^1 z) &\cdots &\pb_s(\omega^1 z)\\ \vdots & \ddots & \vdots \\ \pb_1(\omega^{\dm-1} z) &\cdots &\pb_s(\omega^{\dm-1} z) \end{matrix}\right]^\star =\cM_{\pa,\pTh}(z), \ee where $ \omega := e^{-i 2\pi/\dm} $ and the $ \dm \times \dm $ matrix $\cM_{\pa,\pTh}$ is defined to be \begin{equation} \label{McM} \cM_{\pa,\pTh}(z):= \left[\begin{matrix} \pTh(\omega^0 z) & & \\ & \ddots & \\ & & \pTh(\omega^{\dm-1} z) \end{matrix}\right] - \pTh(z^{\dm}) \left[ \begin{matrix} \pa(\omega^0 z) \\ \vdots \\ \pa(\omega^{\dm-1}z) \end{matrix} \right] \left[ \begin{matrix} \pa(\omega^0 z) \\ \vdots \\ \pa(\omega^{\dm-1}z) \end{matrix} \right]^\star. \end{equation} Assume that $\pa(1)=1$ and $\phi\in \Lp{2}$ with $\phi$ being defined by \be \label{Mphi} \wh{\phi}(\xi):=\prod_{j=1}^\infty \pa(e^{-i \dm^{-j}\xi}),\qquad \xi\in \R. 
\ee Write $\pTh(z)=\tilde{\pth}(z)\pth^\star(z)$ for some $\tilde{\theta}, \theta\in \lp{0}$. Define $\eta, \tilde{\eta}, \psi^1,\ldots,\psi^s$ by \be \label{Meta:psi} \eta:=\sum_{k\in \Z} \theta(k) \phi(\cdot-k), \; \tilde{\eta}:=\sum_{k\in \Z} \tilde{\theta}(k) \phi(\cdot-k),\quad \psi^\ell:=\dm\sum_{k\in \Z} b_\ell(k)\phi(\dm\cdot-k),\qquad \ell=1,\ldots,s. \ee If in addition $\pTh(z)\ge 0$ for all $z\in \T$, then by the Fej\'er-Riesz lemma we can always choose $\tilde{\theta}=\theta$ so that $\tilde{\eta}=\eta$. If $\pTh(1)=1$ and $\pb_1(1)=\cdots=\pb_s(1)=0$, then $\{\eta,\tilde{\eta};\psi^1,\ldots,\psi^s\}_{(\epsilon_1, \ldots, \epsilon_s)}$ is \emph{a quasi-tight $\dm$-framelet} in $\Lp{2}$, that is, for every $ f\in \Lp{2} $, \be \label{Mqtf} f=\sum_{k\in \Z} \la f, \tilde{\eta}(\cdot-k)\ra \eta(\cdot-k)+ \sum_{j=0}^\infty \sum_{\ell=1}^s \sum_{k\in \Z} \eps_\ell \la f, \psi^\ell_{\dm^j;k}\ra \psi^\ell_{\dm^j;k} \ee with the series converging unconditionally in $\Lp{2}$ and the underlying system being a Bessel sequence in $\Lp{2}$. Moreover, if $\{a;b_1,\ldots,b_s\}_{\Theta,(\epsilon_1,\ldots, \epsilon_s)}$ is a quasi-tight $\dm$-framelet filter bank, then \be \label{qtf:vm:srM} \min(\vmo(b_1),\ldots,\vmo(b_s))\le \min(\sr(a, \dm), \tfrac{1}{2}\vmo(\pTh(z)-\pTh(z^\dm)\pa(z)\pa^\star(z))), \ee where $ \sr(a, \dm) $ is the largest integer $ n $ such that $ (1+z+\cdots+z^{\dm-1})^n \mid \pa(z) $. \begin{theorem}\label{thm:Mqtf} Let $a,\Theta\in \lp{0}\bs\{0\}$ be two finitely supported not-identically-zero filters such that $\pTh^\star=\pTh$. Let $n_b$ be any positive integer satisfying \be \label{Mnb} 1\le n_b\le \min(\sr(a, \dm), \tfrac{1}{2} \vmo(\pTh(z)-\pTh(z^\dm) \pa(z)\pa^\star(z))). \ee Let $\cM_{\pa,\pTh}(z)$ be defined in \eqref{McM} and the quantities $s_{a,\Theta}^+, s_{a,\Theta}^-, s_{a,\Theta}$ be defined in \eqref{saTheta}.
Then there exist $b_1,\ldots,b_s\in \lp{0}$ with $s=s_{a,\Theta}$ and $ \eps_1=\cdots= \eps_{s_{a,\Theta}^+} = 1 $, $\eps_{s_{a,\Theta}^+ +1}=\cdots=\eps_s=-1$ such that $\{a;b_1,\ldots,b_s\}_{\Theta, (\epsilon_1,\ldots, \epsilon_s)}$ is a quasi-tight $\dm$-framelet filter bank with $ \min\{\vmo(b_1),\ldots, \vmo(b_s)\}\geqslant n_b $. Also, for $1\le s<s_{a,\Theta}$, there does not exist a quasi-tight $\dm$-framelet filter bank $\{a;b_1,\ldots,b_s\}_{\Theta, (\epsilon_1, \ldots, \epsilon_s)}$ with $b_1,\ldots,b_s\in \lp{0}$ and $\eps_1,\ldots,\eps_s\in\{-1,1\}$. Moreover, if $\pa(1)=\pTh(1)=1$ and $\phi\in \Lp{2}$ with $\phi$ being defined in \eqref{Mphi}, then $\{\eta,\tilde{\eta};\psi^1,\ldots,\psi^s\}_{(\epsilon_1, \ldots, \epsilon_s)}$ is a quasi-tight $\dm$-framelet in $\Lp{2}$, where $\eta,\tilde{\eta},\psi^1,\ldots,\psi^s\in \Lp{2}$ are defined in \eqref{Meta:psi} with $\tilde{\pth}(z)\pth^\star(z)=\pTh(z)$. \end{theorem} \begin{proof} Since all high-pass filters must have at least $n_b$ vanishing moments, we can write \begin{equation} \label{eq:MbCoset} \pb_\ell(z) = (1-z^{-1})^{n_b}\mathring{\pb}_\ell(z), \qquad \ell = 1,\ldots, s \end{equation} for some Laurent polynomials $\mathring{\pb}_1,\ldots,\mathring{\pb}_s$.
Then $\{a;b_1,\ldots,b_s\}_{\Theta,(\eps_1,\ldots,\eps_s)}$ is a quasi-tight $\dm$-framelet filter bank satisfying \eqref{Mqtffb} and \eqref{Mnb} if and only if \begin{equation} \label{eq:Mqtfbnb} \left[\begin{matrix} \mathring{\pb_1}(\omega^0 z) &\cdots &\mathring{\pb_s}(\omega^0 z)\\ \mathring{\pb_1}(\omega^1 z) &\cdots &\mathring{\pb_s}(\omega^1 z)\\ \vdots & \ddots & \vdots \\ \mathring{\pb_1}(\omega^{\dm-1} z) &\cdots &\mathring{\pb_s}(\omega^{\dm-1} z) \end{matrix}\right] \left[\begin{matrix} \eps_1 & &\\ &\ddots &\\ & &\eps_s\end{matrix}\right] \left[\begin{matrix} \mathring{\pb_1}(\omega^0 z) &\cdots &\mathring{\pb_s}(\omega^0 z)\\ \mathring{\pb_1}(\omega^1 z) &\cdots &\mathring{\pb_s}(\omega^1 z)\\ \vdots & \ddots & \vdots \\ \mathring{\pb_1}(\omega^{\dm-1} z) &\cdots &\mathring{\pb_s}(\omega^{\dm-1} z) \end{matrix}\right]^\star = \cM_{\pa, \pTh|n_b}(z), \end{equation} where $ \cM_{\pa, \pTh|n_b}(z) $ is an $ \dm\times \dm $ matrix defined by \begin{align*} [\cM_{\pa, \pTh|n_b}]_{j,j}(z):=& \frac{\pTh(\omega^{j-1} z) - \pTh(z^\dm)\pa(\omega^{j-1} z)\pa^\star(\omega^{j-1} z)} {(1-\omega^{j-1} z)^{n_b}(1-(\omega^{j-1}z)^{-1})^{n_b}}, \qquad j = 1,\ldots, \dm, \\ [\cM_{\pa, \pTh|n_b}]_{j,k}(z):=& \frac{-\pTh(z^\dm)\pa(\omega^{j-1} z)\pa^\star(\omega^{k-1} z)} {(1 - \omega^{k-1}z)^{n_b}(1-(\omega^{j-1}z)^{-1})^{n_b}}, \qquad j,k = 1,\ldots, \dm, \quad j\neq k. \end{align*} Note that according to the upper bound of $ n_b $ in \eqref{Mnb}, $ \cM_{\pa, \pTh|n_b}(z) $ is a well-defined $ \dm\times\dm $ matrix of Laurent polynomials. For $ \gamma \in \{0, \dots, \dm-1\} $, define the \emph{$ \gamma$-coset sequence} of a filter $ u\in \lp{0} $ as $ u^{[\gamma]} := \{u(\gamma + \dm k)\}_{k\in \Z} $. 
Then we have \[ \begin{bmatrix} \pu(\omega^0 z) \\ \vdots \\ \pu(\omega^{\dm-1} z) \end{bmatrix} = \begin{bmatrix} 1 & 1 & \ldots & 1 \\ 1 & \omega & \ldots & \omega^{\dm-1} \\ \vdots & \vdots & \ddots & \vdots \\ 1 & \omega^{\dm-1} & \ldots & \omega^{(\dm-1)(\dm-1)} \end{bmatrix} \begin{bmatrix} 1 & & & \\ & z & & \\ & & \ddots & \\ & & & z^{\dm-1} \end{bmatrix} \begin{bmatrix} \pu^{[0]}(z^{\dm}) \\ \vdots \\ \pu^{[\dm-1]}(z^{\dm}) \end{bmatrix} = F(z) \begin{bmatrix} \pu^{[0]}(z^{\dm}) \\ \vdots \\ \pu^{[\dm-1]}(z^{\dm}) \end{bmatrix}, \] where $ F(z) $ is defined by $ F_{j,k}(z) = \omega^{(j-1)(k-1)}z^{k-1} $, $ j, k = 1,\dots, \dm $. Notice that $ F(z) $ satisfies $ F(z)F^\star(z) = \dm \mathbf{I}_\dm $; hence, $ F(z) $ is invertible: $ F^{-1}(z) = \frac{1}{\dm}F^\star(z) = \frac{1}{\dm} [\omega^{-(j-1)(k-1)}z^{1-j}]_{1\leqslant j \leqslant \dm, 1\leqslant k \leqslant \dm} $. Define an $ \dm\times\dm $ matrix of Laurent polynomials $ N(z):= F^{-1}(z)\cM_{\pa, \pTh|n_b}(z)F^{-\star}(z) $. That is, for $ j, k = 1,\dots, \dm $, \begin{align} \label{eq:Njk} N_{j,k}(z) =& \sum_{q = 1}^{\dm}\sum_{p = 1}^{\dm} [F^{-1}]_{j,p}(z) [\cM_{\pa, \pTh|n_b}]_{p,q}(z) [F^{-\star}]_{q,k}(z) \notag \\ =& \frac{1}{\dm^2}z^{k-j} \left( \sum_{p=0}^{\dm-1} \sum_{\substack{q=0 \\ q\neq p}}^{\dm-1} \omega^{(k-1)q-(j-1)p} \frac{-\pTh(z^\dm)\pa(\omega^{p} z)\pa^\star(\omega^{q} z)} {(1 - \omega^{q}z)^{n_b}(1-(\omega^{p}z)^{-1})^{n_b}} \right. \notag \\ & \qquad \qquad\qquad \qquad \left. + \sum_{\ell=0}^{\dm-1} \omega^{(k-j)\ell} \frac{\pTh(\omega^{\ell} z) - \pTh(z^\dm)\pa(\omega^{\ell} z)\pa^\star(\omega^{\ell} z)} {(1-\omega^{\ell} z)^{n_b}(1-(\omega^{\ell}z)^{-1})^{n_b}} \right). \end{align} It is easy to verify that $ N(\omega^{r}z) = N(z) $ for all $ r = 0, \ldots, \dm-1 $. Hence, $ N(z) $ only depends on $ z^\dm $, and we can write $ N(z) = \cN_{\pa, \pTh|n_b}(z^\dm) $, where $ \cN_{\pa, \pTh|n_b}(z) $ is an $ \dm \times \dm$ Hermite matrix of Laurent polynomials.
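The modulation-to-polyphase identity above, together with $F(z)F^\star(z)=\dm\mathbf{I}_\dm$, is straightforward to confirm numerically. A small sketch for $\dm=3$ (the filter coefficients below are arbitrary test values, not filters from this paper):

```python
import numpy as np

m = 3                                 # dilation factor
omega = np.exp(-2j * np.pi / m)
u = np.array([0.3, -1.0, 0.5, 2.0, 0.25, -0.7])  # arbitrary test filter u(0..5)

def pu(z):                            # u(z) = sum_k u(k) z^k
    return sum(c * z**k for k, c in enumerate(u))

def pu_coset(gamma, z):               # u^[gamma](z) = sum_k u(gamma + m k) z^k
    return sum(c * z**k for k, c in enumerate(u[gamma::m]))

def F(z):                             # F_{j,k}(z) = omega^{(j-1)(k-1)} z^{k-1}
    return np.array([[omega**(j * k) * z**k for k in range(m)] for j in range(m)])

z = np.exp(1j * 0.7)                  # a point on the unit circle
# F(z) F(z)^* = m I on T:
assert np.allclose(F(z) @ F(z).conj().T, m * np.eye(m))
# [u(omega^j z)]_j = F(z) [u^[gamma](z^m)]_gamma:
lhs = np.array([pu(omega**j * z) for j in range(m)])
rhs = F(z) @ np.array([pu_coset(g, z**m) for g in range(m)])
assert np.allclose(lhs, rhs)
print("modulation/polyphase identities verified")
```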
Multiplying $ F^{-1}(z) $ and $ F^{-\star}(z) $ on the left and right side of \eqref{eq:Mqtfbnb} respectively, we see that \eqref{eq:Mqtfbnb} is equivalent to \begin{equation} \label{eq:MPRpolyphase} \left[\begin{matrix} \mathring{\pb_1}^{[0]}(z) &\cdots &\mathring{\pb_s}^{[0]}(z)\\ \vdots & \ddots & \vdots \\ \mathring{\pb_1}^{[\dm-1]}(z) &\cdots &\mathring{\pb_s}^{[\dm-1]}(z) \end{matrix}\right] \left[\begin{matrix} \eps_1 & &\\ &\ddots &\\ & &\eps_s\end{matrix}\right] \left[\begin{matrix} \mathring{\pb_1}^{[0]}(z) &\cdots &\mathring{\pb_s}^{[0]}(z)\\ \vdots & \ddots & \vdots \\ \mathring{\pb_1}^{[\dm-1]}(z) &\cdots &\mathring{\pb_s}^{[\dm-1]}(z) \end{matrix}\right]^\star =\cN_{\pa, \pTh|n_b}(z). \end{equation} Hence, the existence of a quasi-tight $\dm$-framelet filter bank $ \{a; b_1, \ldots, b_s\}_{\Theta, (\epsilon_1, \ldots, \epsilon_s)} $ with vanishing moments of order $ n_b $ necessarily implies a generalized spectral factorization \eqref{eq:MPRpolyphase} for the Hermite matrix $ \cN_{\pa, \pTh|n_b}(z) $ of Laurent polynomials. According to Theorem~\ref{thm:nonconst-sig}, the existence of the generalized spectral factorization \eqref{eq:MPRpolyphase} implies that the number $ s_+ $ of times that ``$ +1 $" appears in $ \{\epsilon_1,\ldots,\epsilon_s \} $ and the number $ s_- $ of times that ``$ -1 $" appears in $ \{\epsilon_1,\ldots,\epsilon_s \} $ satisfy \begin{equation} \label{eq:Ms+s-} s_+ \geqslant \max_{z\in \T}\nu_+(\cN_{\pa, \pTh|n_b}(z)) , \qquad \mbox{and} \quad s_- \geqslant \max_{z\in \T}\nu_-(\cN_{\pa, \pTh|n_b}(z)). \end{equation} By \eqref{eq:MbCoset}, \eqref{eq:Mqtfbnb} and \eqref{eq:Njk}, we deduce that \begin{align*} \cM_{\pa, \pTh}(z)= \pP(z) \cN_{\pa, \pTh|n_b}(z^{\dm}) \pP^\star(z), \end{align*} where $ \pP(z) := \diag((1-(\omega^0 z)^{-1})^{n_b}, \ldots,(1-(\omega^{\dm - 1} z)^{-1})^{n_b}) $. Hence, $ \sigma(\pP) \subseteq \{1, \omega, \ldots, \omega^{\dm-1}\} $, which is a finite set.
Similar to the proof of Theorem~\ref{thm:qtf}, we conclude that \begin{equation*} s_{a, \Theta}^+ = \max_{z\in \T} \nu_+(\cN_{\pa, \pTh|n_b}(z)), \qquad \mbox{and}\quad s_{a, \Theta}^- = \max_{z\in \T} \nu_-(\cN_{\pa, \pTh|n_b}(z)). \end{equation*} Therefore, from \eqref{eq:Ms+s-} we see that the generalized spectral factorization \eqref{eq:MPRpolyphase} implies \begin{equation*} s_+ \geqslant s_{a, \Theta}^+ , \qquad s_- \geqslant s_{a, \Theta}^-, \qquad \mbox{and} \quad s = s_+ + s_- \geqslant s_{a, \Theta}^+ + s_{a, \Theta}^- = s_{a, \Theta} . \end{equation*} Hence, for $1\le s<s_{a,\Theta}$, there does not exist a quasi-tight $\dm$-framelet filter bank $\{a;b_1,\ldots,b_s\}_{\Theta, (\epsilon_1, \ldots, \epsilon_s)}$ with $b_1,\ldots,b_s\in \lp{0}$ and $\eps_1,\ldots,\eps_s\in\{-1,1\}$. On the other hand, given filters $ a, \Theta \in \lp{0}\bs\{0\} $, $ \Theta^\star = \Theta $, and a positive integer $ n_b $ satisfying \eqref{Mnb}, we can calculate the matrix $ \cN_{\pa, \pTh|n_b}(z) $ of Laurent polynomials from $ \cN_{\pa, \pTh|n_b}(z^\dm):= N(z) $, where $ N(z) $ is defined as in \eqref{eq:Njk}. By $ \Theta^\star(z) = \Theta(z) $, we see from \eqref{eq:Njk} that $ N(z) $ and $ \cN_{\pa, \pTh|n_b}(z) $ are both Hermite matrices of Laurent polynomials. Take $ s := s_{a, \Theta} = s_{a, \Theta}^+ + s_{a, \Theta}^- $. According to Theorem~\ref{thm:nonconst-sig}, we can choose $ \eps_1=\cdots= \eps_{s_{a,\Theta}^+} = 1 $, and $\eps_{s_{a,\Theta}^+ +1}=\cdots=\eps_{s}=-1$, and find a generalized spectral factorization of $ \cN_{\pa, \pTh|n_b}(z) $ as $ \cN_{\pa, \pTh|n_b}(z) = \pU(z) \diag(\eps_1,\ldots,\eps_s) \pU^\star(z)$, where $ \pU(z) $ is an $ \dm\times s $ matrix of Laurent polynomials.
Then we can define Laurent polynomials $ \mathring{\pb}_1(z), \ldots, \mathring{\pb}_s(z) $ as \[ \left[\begin{matrix} \mathring{\pb_1}^{[0]}(z) &\cdots &\mathring{\pb_s}^{[0]}(z)\\ \vdots & \ddots & \vdots \\ \mathring{\pb_1}^{[\dm-1]}(z) &\cdots &\mathring{\pb_s}^{[\dm-1]}(z) \end{matrix}\right] := \pU(z). \] Thus, \eqref{eq:MPRpolyphase} holds. Multiplying $ F(z) $ and $ F^\star(z) $ on the left and right sides of $ \cN_{\pa, \pTh|n_b}(z^\dm) $ respectively, we see that \eqref{eq:MPRpolyphase} is equivalent to \eqref{eq:Mqtfbnb}. Defining the Laurent polynomials $ \pb_1(z),\ldots, \pb_s(z) $ by \eqref{eq:MbCoset}, we conclude from \eqref{eq:Mqtfbnb} that $ \{a; b_1, \ldots, b_s\}_{\Theta, (\epsilon_1, \ldots, \epsilon_s)} $ is a quasi-tight $\dm$-framelet filter bank with $ \min\{\vmo(b_1),\ldots, \vmo(b_s)\}\geqslant n_b $. This proves the existence of a quasi-tight $\dm$-framelet filter bank with the minimum number of high-pass filters and high vanishing moments. \end{proof} \begin{example} \label{ex:3Band} { \rm Let $ \dm = 3 $ be a dilation factor. Consider $ \pTh(z) = 1 $ and the low-pass filter \[ \pa(z) = -\tfrac{1}{27}z^{-3}(1+z+z^2)^2 (2z^2 - 7z + 2). \] By the definition of $ \cM_{\pa, 1}(z) $ in \eqref{McM}, the three eigenvalues of $ \cM_{\pa, 1}(z) $ are $ 1, 1 $ and $ \det(\cM_{\pa, 1}(z)) $. We see from Figure~\ref{fig:3Band} that $ \det(\cM_{\pa, 1}(z)) \leqslant 0 $ on $ \T $. Hence $ s_{a, \Theta}^+ = 2 $ and $ s_{a, \Theta}^- = 1 $. Note that $ \sr(\pa, 3) = 2 $ and $ \vmo(1-\pa\pa^\star) = 4 $. Therefore, the maximum order of vanishing moments is two. Taking $ n_b = 2 $, we obtain a quasi-tight $3$-framelet filter bank $ \{a; b_1, b_2, b_3\}_{\Theta, (1, 1, -1)} $ as follows: \begin{align*} \pb_1(z) = \tfrac{\sqrt{6}}{6}(z-1)^2(z+1),\qquad \pb_2(z) = \tfrac{\sqrt{6}}{18}(z-1)^3, \qquad \pb_3(z) = \tfrac{1}{27}z^{-3}(z-1)^4(2z^2 + 5z + 2), \end{align*} with $ \vmo(b_1) = 2 $, $ \vmo(b_2) = 3 $ and $ \vmo(b_3) = 4 $.
Since $ \sm(a, 3)\approx 0.6599 $ (see \cite[(7.2.2)]{hanbook} for its definition), the refinable function $ \phi $ defined in \eqref{Mphi} belongs to $ \Lp{2} $. Therefore, $ \{\phi,\phi; \psi^1, \psi^2, \psi^3\}_{(1, 1, -1)} $ is a quasi-tight $3$-framelet in $ \Lp{2} $ and $ \{\psi^1, \psi^2, \psi^3\}_{(1, 1, -1)} $ is a homogeneous quasi-tight $3$-framelet in $ \Lp{2} $, where $ \psi^1, \psi^2 $ and $ \psi^3 $ are defined in \eqref{Meta:psi} and have at least two vanishing moments. } \end{example} \begin{figure}[h!] \centering \begin{subfigure}[]{0.18\textwidth} \includegraphics[width=\textwidth, height=0.8\textwidth]{pics/3BandPhi.eps} \caption{$\phi$} \end{subfigure} \begin{subfigure}[]{0.18\textwidth} \includegraphics[width=\textwidth, height=0.8\textwidth]{pics/3BandPsi1.eps} \caption{$\psi^1$} \end{subfigure} \begin{subfigure}[]{0.18\textwidth} \includegraphics[width=\textwidth, height=0.8\textwidth]{pics/3BandPsi2.eps} \caption{$\psi^2$} \end{subfigure} \begin{subfigure}[]{0.18\textwidth} \includegraphics[width=\textwidth, height=0.8\textwidth]{pics/3BandPsi3.eps} \caption{$\psi^3 $ } \end{subfigure} \begin{subfigure}[]{0.18\textwidth} \includegraphics[width=\textwidth, height=0.8\textwidth]{pics/3Banddet.eps} \caption{$ \det(\cM_{\pa, 1})$} \end{subfigure} \caption{ The quasi-tight $3$-framelet $\{\phi,\phi; \psi^1,\psi^2,\psi^3\}_{(1,1,-1)}$ in $\Lp{2}$ and the homogeneous quasi-tight $3$-framelet $\{\psi^1,\psi^2,\psi^3\}_{(1,1,-1)}$ in $\Lp{2}$ obtained in Example~\ref{ex:3Band}. (A) is the refinable function $\phi\in \Lp{2}$. (B), (C) and (D) are the framelet functions $\psi^1$, $ \psi^2 $ and $ \psi^3$. (E) is $\det(\cM_{\pa, 1}(e^{-i\xi}))$ for $ \xi\in [-\pi, \pi] $. } \label{fig:3Band} \end{figure}
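The filter bank of Example~\ref{ex:3Band} can also be checked numerically: sampling $z$ on the unit circle, the matrix identity \eqref{Mqtffb} with $\pTh=1$, $\dm=3$ and $(\epsilon_1,\epsilon_2,\epsilon_3)=(1,1,-1)$ must hold up to round-off. A sketch of such a verification:

```python
import numpy as np

omega = np.exp(-2j * np.pi / 3)

def pa(z):   # low-pass filter of Example ex:3Band
    return -(1 / 27) * z**-3 * (1 + z + z**2)**2 * (2 * z**2 - 7 * z + 2)

def pb(z):   # high-pass filters (b1, b2, b3)
    return np.array([np.sqrt(6) / 6 * (z - 1)**2 * (z + 1),
                     np.sqrt(6) / 18 * (z - 1)**3,
                     (1 / 27) * z**-3 * (z - 1)**4 * (2 * z**2 + 5 * z + 2)])

eps = np.diag([1.0, 1.0, -1.0])       # signature (1, 1, -1)

def max_residual(num_samples=64):
    """Largest entrywise deviation from identity (Mqtffb) over sampled z in T."""
    err = 0.0
    for theta in np.linspace(0.0, 2.0 * np.pi, num_samples, endpoint=False):
        z = np.exp(1j * theta)
        B = np.array([pb(omega**j * z) for j in range(3)])  # row j: b_l(omega^j z)
        v = np.array([pa(omega**j * z) for j in range(3)])
        M = np.eye(3) - np.outer(v, v.conj())               # cM_{a,1}(z) with Theta = 1
        err = max(err, np.abs(B @ eps @ B.conj().T - M).max())
    return err

print(max_residual())   # residual of (Mqtffb); vanishes up to round-off
```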
\section{Introduction} We consider in this paper numerical approximation of the following magneto-hydrodynamic (MHD) equations \cite{sermange1983some}: \begin{subequations}\label{e_MHD_model} \begin{align} \frac{\partial \textbf{u} }{\partial t} + ( \textbf{u} \cdot \nabla ) \textbf{u} - \nu \Delta \textbf{u} + \nabla p - \alpha ( \nabla \times \textbf{b} ) \times \textbf{b} = 0 \quad &\ \rm{in} \ \Omega\times J, \label{e_MHD_modelA} \\ \frac{\partial \textbf{b} }{\partial t} + \eta \nabla \times ( \nabla \times \textbf{b} ) + \nabla \times ( \textbf{b} \times \textbf{u} ) = 0 \quad &\ \rm{in} \ \Omega\times J, \label{e_MHD_modelB} \\ \nabla\cdot\textbf{u}=0 , \ \nabla\cdot\textbf{b}=0 \quad &\ \rm{in} \ \Omega\times J, \label{e_MHD_modelC} \end{align} \end{subequations} with boundary and initial conditions \begin{align*} & \textbf{u}=\textbf{0}, \ \ \textbf{b} \cdot \textbf{n} =0, \ \ \textbf{n} \times ( \nabla \times \textbf{b} ) =0 \quad \rm{on} \ \partial\Omega\times J, \\ & \textbf{u} ( \textbf{x}, 0 ) = \textbf{u}^0 ( \textbf{x} ), \quad \textbf{b} ( \textbf{x}, 0 ) = \textbf{b}^0 ( \textbf{x} ) \quad \rm{in} \ \Omega, \end{align*} where $\Omega$ is an open bounded domain in $\mathbb{R}^d \;(d=2,3)$ with a sufficiently smooth boundary $\partial \Omega$, $\textbf{n}$ is the unit outward normal of the domain $\Omega$, $ J=(0,T] $, and $(\textbf{u},p, \textbf{b} )$ represent respectively the unknown velocity, pressure and magnetic field. The parameters $ \nu $ and $ \eta $ are the kinematic viscosity and magnetic diffusivity, respectively, and $ \alpha = 1/ (4 \pi \mu \rho ) $ with $ \mu $ as the magnetic permeability and $ \rho $ as the fluid density. The MHD system is used to describe the interaction between a viscous, incompressible, electrically conducting fluid and an external magnetic field.
When a conducting fluid is placed in an existing magnetic field, the fluid motion produces electric currents which in turn create forces on the fluid and change the magnetic field itself. It has been widely used in many science and engineering applications, such as liquid metal cooling for nuclear reactors, sustained plasma confinement for controlled thermonuclear fusion, etc \cite{goldstein2002classical,davidson2002introduction}. The mathematical theory of MHD equations can be found in \cite{sermange1983some}. Numerical approximation of the MHD equations is challenging, as it involves delicate nonlinear coupling between the velocity and magnetic field in addition to the difficulties associated with the Navier-Stokes equations and Maxwell equations. There exists a large literature devoted to constructing compatible spatial discretizations for the MHD equations, see \cite{yee1966numerical,babuvska1971error,nedelec1980mixed,Gerbeau2006Mathematical,brezzi2012mixed} and related references. In this paper, we are only concerned with time discretization, which can be coupled with any well-developed compatible spatial discretization. The MHD equations \eqref{e_MHD_model} are energy dissipative. More precisely, taking the inner products of \eqref{e_MHD_modelA} and \eqref{e_MHD_modelB} with $\textbf{u}$ and $\alpha\textbf{b}$, respectively, and summing up the results, we find that the nonlinear terms do not contribute to the energy and that the following energy dissipation law holds: \begin{equation}\label{engdiss} \frac d{dt}E(\textbf{u},\textbf{b})=-\nu\|\nabla \textbf{u}\|^2-\alpha\eta\|\nabla\times \textbf{b}\|^2 \quad\text{with }\; E(\textbf{u},\textbf{b})=\frac 12\|\textbf{u}\|^2+\frac{\alpha}2\|\textbf{b}\|^2. \end{equation} It is thus desirable to construct numerical schemes which satisfy a discrete energy dissipation law.
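The claim that the nonlinear terms do not contribute to the energy rests on two integration-by-parts identities, valid for smooth fields satisfying the stated boundary conditions and the divergence constraints:

```latex
% The advection term vanishes because u is divergence-free and u = 0 on the boundary:
(\textbf{u}\cdot\nabla\textbf{u},\textbf{u})
   = \frac12\int_\Omega \textbf{u}\cdot\nabla|\textbf{u}|^2\,d\textbf{x}
   = -\frac12\int_\Omega (\nabla\cdot\textbf{u})\,|\textbf{u}|^2\,d\textbf{x} = 0.
% The two coupling terms cancel each other: integrating by parts (the boundary
% contribution vanishes since u = 0 on the boundary) and using the scalar
% triple product identity,
\alpha\big(\nabla\times(\textbf{b}\times\textbf{u}),\textbf{b}\big)
   = \alpha\big(\textbf{b}\times\textbf{u},\nabla\times\textbf{b}\big)
   = \alpha\big((\nabla\times\textbf{b})\times\textbf{b},\textbf{u}\big),
% which exactly offsets the tested Lorentz-force term
% -\alpha((\nabla\times\textbf{b})\times\textbf{b},\textbf{u}) from (1.1a).
```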
Most existing works use fully implicit or semi-implicit treatments for the nonlinear terms so that the effects of the nonlinear coupling cancel each other and a discrete energy dissipation law can be derived. However, one needs to solve a nonlinear system or a coupled linear system with time-dependent coefficients at each time step. For example, Armero and Simo developed in \cite{armero1996long} energy dissipative schemes for an abstract evolution equation with applications to the incompressible MHD equations; Tone \cite{tone2009long} considered an implicit Euler scheme for the 2D MHD equations and established uniform $H^2$ stability; Layton et al. constructed in \cite{layton2014numerical} two partitioned methods for uncoupling evolutionary MHD flows; Hiptmair et al. \cite{hiptmair2018fully} developed a fully divergence-free finite element method for MHD equations with a semi-implicit treatment of the nonlinear terms; Zhang et al. \cite{zhang2018second} proposed a second order linear BDF scheme with an extrapolated treatment for the nonlinear terms and proved its unconditional stability and convergence, cf. also \cite{zhang2015unconditional}; most recently, Li et al. \cite{li2020convergent} proposed a fully discrete linearized $H^1$-conforming Lagrange finite element method, and derived the convergence based on the regularity of the initial conditions and source terms without extra assumptions on the regularity of the solution. To alleviate the cost of solving fully coupled systems at each time step, Badia et al. \cite{badia2013unconditionally} developed an operator splitting algorithm by a stabilized finite element formulation based on projections; Choi and Shen \cite{choi2016efficient} constructed several efficient splitting schemes based on the standard and rotational pressure-correction schemes with a semi-implicit treatment of the nonlinear terms for the MHD equations.
From a computational point of view, it is desirable for a numerical scheme to treat the nonlinear terms explicitly while still being energy dissipative, so that one only needs to solve simple linear equations with constant coefficients at each time step. However, with a direct explicit treatment of the nonlinear terms, their energy contribution no longer vanishes, so it becomes very difficult to derive a uniform bound for the numerical solution. Liu and Pego \cite{liu2010stable} constructed a first-order scheme with fully explicit treatment of the nonlinear terms and showed that its numerical solution is bounded when the time step is sufficiently small, but their scheme is not shown to be energy dissipative. The recently proposed scalar auxiliary variable (SAV) approach \cite{shen2018scalar,shen2018convergence,shen2019new} provides a general approach to construct linear, decoupled, unconditionally energy stable schemes for gradient flows. The approach has been extended to the Navier-Stokes equations in \cite{lin2019numerical}. However, the scheme in \cite{lin2019numerical} requires solving a nonlinear algebraic equation whose well-posedness is not guaranteed. We introduced in \cite{li2020new} a different SAV approach which leads to purely linear and unconditionally stable schemes for the Navier-Stokes equations, and proved corresponding error estimates. The aim of this work is to extend the approach proposed in \cite{li2020new} to the MHD equations, which are much more complicated due to the nonlinear couplings between the velocity and magnetic fields. Our main contributions are two-fold: \begin{itemize} \item We construct first- and second-order IMEX SAV schemes for the MHD equations and show that they are unconditionally energy stable. These schemes only require solving a sequence of differential equations with constant coefficients at each time step, so they are very efficient and easy to implement.
\item We establish rigorous error estimates for the first-order scheme in the two-dimensional case without any condition on the time step. \end{itemize} Compared to the Navier-Stokes equations or Maxwell's equations, the error analysis for the MHD equations is much more involved due to the nonlinear coupling terms. Our error analysis relies essentially on the unconditional bounds of the numerical solution that we derive for our SAV schemes. To the best of our knowledge, these are the first linear, unconditionally energy stable, and convergent schemes with fully explicit treatment of the nonlinear terms for the MHD equations. The paper is organized as follows. In Section 2, we construct our IMEX SAV schemes and prove their stability. In Section 3, we carry out a rigorous error analysis for the first-order IMEX SAV scheme in the two-dimensional case. We present some numerical experiments to validate our schemes in Section 4, and conclude with a few remarks in Section 5. \section{The SAV schemes and their energy stability} In this section, we construct first- and second-order IMEX schemes based on the SAV approach for the MHD equations, and show that they are unconditionally energy stable.
We introduce a scalar auxiliary variable (SAV): \begin{equation}\label{e_definition of q} \aligned q(t)=\exp(-\frac{t}{T}), \endaligned \end{equation} and expand the system \eqref{e_MHD_model} as follows: \begin{numcases}{} \frac{\partial \textbf{u}}{\partial t} -\nu\Delta\textbf{u}+\nabla p + \exp( \frac{t}{T} ) q(t) ( \textbf{u}\cdot \nabla \textbf{u}- \alpha ( \nabla \times \textbf{b} ) \times \textbf{b}) = 0, \label{e_MHD_model_SAV1} \\ \frac{\partial \textbf{b} }{\partial t} + \eta \nabla \times ( \nabla \times \textbf{b} ) + \exp( \frac{t}{T} ) q(t) \nabla \times ( \textbf{b} \times \textbf{u} ) = 0, \label{e_MHD_model_SAV2} \\ \nabla\cdot\textbf{u}=0 , \ \nabla\cdot\textbf{b}=0, \label{e_MHD_model_SAV3} \\ \frac{\rm{d} q}{\rm{d} t}=-\frac{1}{T}q + \exp( \frac{t}{T} ) \big( (\textbf{u}\cdot \nabla \textbf{u}, \textbf{u} ) - \alpha \left( ( \nabla \times \textbf{b} ) \times \textbf{b}, \textbf{u} \right) + \alpha \left( \nabla \times ( \textbf{b} \times \textbf{u} ), \textbf{b} \right) \big). \label{e_MHD_model_SAV4} \end{numcases} Since the sum of the nonlinear terms in \eqref{e_MHD_model_SAV4} is zero, \eqref{e_MHD_model_SAV4} is equivalent to the time derivative of \eqref{e_definition of q}. Hence, with $q(0)=1$, the exact solution of \eqref{e_MHD_model_SAV4} is given by \eqref{e_definition of q}, so that \eqref{e_MHD_model_SAV1}-\eqref{e_MHD_model_SAV3} is exactly the same as \eqref{e_MHD_model}. Therefore, the above system is equivalent to the original system. Note that we have, in addition to the original energy law \eqref{engdiss}, an additional energy law \begin{equation}\label{engdiss2} \frac 12\frac d{dt}(\|\textbf{u}\|^2+{\alpha}\|\textbf{b}\|^2+|q|^2) =-\nu\|\nabla \textbf{u}\|^2-\alpha\eta\|\nabla\times \textbf{b}\|^2 -\frac 1T|q|^2.
\end{equation} Note that, unlike in the original SAV approach, where the SAV is related to the nonlinear part of the free energy, here the SAV $q(t)$ is purely artificial; it nevertheless allows us to construct schemes, with fully explicit treatment of the nonlinear terms, that are unconditionally energy stable with respect to the energy in \eqref{engdiss2}. \subsection{The IMEX SAV schemes} We set $$\Delta t=T/N,\ t^n=n\Delta t, \ d_t g^{n+1}=\frac{g^{n+1}-g^n}{\Delta t}, \ {\rm for} \ n\leq N.$$ \textbf{Scheme \uppercase\expandafter{\romannumeral 1} (first-order):} Find ($ \textbf{u}^{n+1}, p^{n+1}, q^{n+1},\textbf{b}^{n+1} $) by solving \begin{eqnarray} && d_t \textbf{u}^{n+1} - \nu \Delta \textbf{u}^{n+1} + \nabla p^{n+1} = \exp( \frac{t^{n+1} }{T} ) q^{n+1} (\alpha (\nabla \times \textbf{b}^n) \times \textbf{b}^n - \textbf{u}^{n}\cdot \nabla \textbf{u}^{n}), \label{e_SAV_scheme_first_u} \\ && d_t \textbf{b}^{n+1} + \eta \nabla \times ( \nabla \times \textbf{b}^{n+1} ) + \exp( \frac{t^{n+1} }{T} ) q^{n+1} \nabla \times ( \textbf{b}^n \times \textbf{u}^n ) = 0, \label{e_SAV_scheme_first_b}\\ && \nabla\cdot\textbf{u}^{n+1} =0, \ \ \ \nabla\cdot\textbf{b}^{n+1} =0, \label{e_SAV_scheme_first_div} \\ && \textbf{u}^{n+1} |_{\partial \Omega} =\textbf{0}, \ \ \textbf{b}^{n+1} \cdot \textbf{n} |_{\partial \Omega}=0, \ \ \textbf{n} \times ( \nabla \times \textbf{b}^{n+1} ) |_{\partial \Omega} =0, \label{e_SAV_scheme_first_boundary} \\ && d_t q^{n+1} = -\frac{1}{T}q^{n+1} + \exp( \frac{t^{n+1} }{T} ) \nonumber\\ && \big( (\textbf{u}^n\cdot\nabla \textbf{u}^n, \textbf{u}^{n+1}) - \alpha (( \nabla \times \textbf{b}^n) \times \textbf{b}^n, \textbf{u}^{n+1}) + \alpha( \nabla \times (\textbf{b}^n \times \textbf{u}^n) , \textbf{b}^{n+1} )\big).\label{e_SAV_scheme_first_q} \end{eqnarray} We now describe how to solve the semi-discrete-in-time scheme \eqref{e_SAV_scheme_first_u}-\eqref{e_SAV_scheme_first_boundary} efficiently.
We denote $S^{n+1}= \exp ( \frac{t^{n+1}}{T}) q^{n+1} $ and set \begin{numcases}{} \textbf{b}^{n+1}= \textbf{b}^{n+1}_1 +S^{n+1}\textbf{b}^{n+1}_2,\label{e_split_b} \\ \textbf{u}^{n+1}=\textbf{u}_1^{n+1}+S^{n+1}\textbf{u}_2^{n+1},\label{e_split_u} \\ p^{n+1}=p_1^{n+1}+S^{n+1}p_2^{n+1}. \label{e_split_p} \end{numcases} Plugging \eqref{e_split_b}-\eqref{e_split_p} in the scheme \eqref{e_SAV_scheme_first_u}-\eqref{e_SAV_scheme_first_boundary}, we find that $\textbf{u}_i^{n+1}, p_i^{n+1} $ $(i=1,2)$ satisfy \begin{numcases}{} \frac{ \textbf{u}_1^{n+1}-\textbf{u}^{n}}{\Delta t}= \nu\Delta \textbf{u}_1^{n+1}-\nabla p^{n+1}_1 , \label{e_implementation_u1p1} \\ \frac{ \textbf{u}_2^{n+1} }{ \Delta t } + \textbf{u}^{n}\cdot \nabla \textbf{u}^{n}= \nu\Delta \textbf{u}_2^{n+1} - \nabla p^{n+1}_2 + \alpha ( \nabla \times \textbf{b}^n ) \times \textbf{b}^n, \label{e_implementation_u2p2} \\ \nabla \cdot \textbf{u}^{n+1}_i =0,\ \ \textbf{u}^{n+1}_i |_{\partial \Omega} =\textbf{0},\quad i=1,2. \label{e_implementation_divu} \end{numcases} Next we determine $\textbf{b}_{i}^{n+1}$ $(i=1,2)$ from \begin{numcases}{} \frac{ \textbf{b}_1^{n+1}-\textbf{b}^{n}}{\Delta t} + \eta \nabla \times ( \nabla \times \textbf{b}^{n+1}_1 ) = 0 , \label{e_implementation_b1} \\ \frac{ \textbf{b}_2^{n+1} }{ \Delta t } + \eta \nabla \times ( \nabla \times \textbf{b}^{n+1}_2 ) + \nabla \times ( \textbf{b}^n \times \textbf{u}^n ) = 0, \label{e_implementation_b2} \\ \nabla\cdot\textbf{b}^{n+1}_i =0, \ \ \textbf{b}^{n+1}_i \cdot \textbf{n} |_{\partial \Omega}=0, \ \ \textbf{n} \times ( \nabla \times \textbf{b}^{n+1}_i ) |_{\partial \Omega} =0,\quad i=1,2. 
\label{e_implementation_divb} \end{numcases} Once $\textbf{u}_{i}^{n+1}$, $p_i^{n+1} $, $\textbf{b}_i^{n+1}$ $(i=1,2)$ are known, we can determine explicitly $S^{n+1}$ from \eqref{e_SAV_scheme_first_q} as follows: \begin{equation} \label{e_S_solve} \aligned \left( \frac{T+\Delta t}{T \Delta t }- \exp( \frac{2 t^{n+1} }{T} ) A_2 \right) \exp( -\frac{t^{n+1} }{T} )S^{n+1} =\exp( \frac{t^{n+1} }{T} )A_1 +\frac{1}{\Delta t} q^n, \endaligned \end{equation} where \begin{equation*} \aligned & A_i= (\textbf{u}^n\cdot\nabla \textbf{u}^n,\textbf{u}_i^{n+1}) - \alpha \left( ( \nabla \times \textbf{b}^n ) \times \textbf{b}^n, \textbf{u}_i^{n+1} \right) + \alpha \left( \nabla \times ( \textbf{b}^n \times \textbf{u}^n ) , \textbf{b}^{n+1}_i \right), \ i = 1, 2. \endaligned \end{equation*} Finally, we can obtain $\textbf{u}^{n+1}$, $p^{n+1}$ and $\textbf{b}^{n+1}$ from \eqref{e_split_b}-\eqref{e_split_p}. In summary, at each time step, we only need to solve the two generalized Stokes equations in \eqref{e_implementation_u1p1}-\eqref{e_implementation_divu} and the two elliptic equations \eqref{e_implementation_b1}-\eqref{e_implementation_divb}, all with constant coefficients, plus the linear algebraic equation \eqref{e_S_solve}. Hence, the scheme is very efficient.
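The algebra of the splitting \eqref{e_split_b}-\eqref{e_S_solve} can be illustrated without committing to a spatial discretization by a toy matrix analogue: SPD matrices stand in for the discretized Stokes and curl-curl solves, fixed vectors stand in for the frozen explicit nonlinear terms, and the pressure/divergence constraints are omitted. All matrices and vectors below are hypothetical stand-ins; the sketch only demonstrates that the splitting reproduces the coupled SAV step:

```python
import numpy as np

rng = np.random.default_rng(0)
nu_, eta_, alpha_, T, dt, t1 = 0.1, 0.2, 1.0, 1.0, 0.01, 0.01
n_u, n_b = 6, 5                        # toy "velocity" / "magnetic" dimensions

def spd(n):                            # random SPD matrix standing in for an operator
    M = rng.standard_normal((n, n))
    return M @ M.T + n * np.eye(n)

A, B = spd(n_u), spd(n_b)              # stand-ins for the u- and b-operators
un, bn, qn = rng.standard_normal(n_u), rng.standard_normal(n_b), 1.0
f = rng.standard_normal(n_u)           # stand-in for alpha (curl b^n) x b^n - u^n . grad u^n
h = rng.standard_normal(n_b)           # stand-in for curl(b^n x u^n)
gu, gb = -f, alpha_ * h                # pairings appearing in the q-equation

# Step 1: the constant-coefficient solves for (u1, u2) and (b1, b2).
Lu = np.eye(n_u) / dt + nu_ * A
Lb = np.eye(n_b) / dt + eta_ * B
u1, u2 = np.linalg.solve(Lu, un / dt), np.linalg.solve(Lu, f)
b1, b2 = np.linalg.solve(Lb, bn / dt), np.linalg.solve(Lb, -h)

# Step 2: the scalar equation for S^{n+1} (cf. (e_S_solve)).
e = np.exp(t1 / T)
A1, A2 = gu @ u1 + gb @ b1, gu @ u2 + gb @ b2
S = (e * A1 + qn / dt) / (((T + dt) / (T * dt) - e**2 * A2) / e)

# Step 3: recombine and verify that the coupled scheme is satisfied.
u, b, q = u1 + S * u2, b1 + S * b2, S / e
assert np.allclose((u - un) / dt + nu_ * A @ u, S * f)
assert np.allclose((b - bn) / dt + eta_ * B @ b + S * h, 0.0)
assert np.isclose((q - qn) / dt, -q / T + e * (gu @ u + gb @ b))
print("splitting reproduces the coupled SAV step")
```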
\medskip \textbf{Scheme \uppercase\expandafter{\romannumeral 2} (second-order):} Find ($ \textbf{u}^{n+1}, p^{n+1}, q^{n+1},\textbf{b}^{n+1} $) by solving \begin{eqnarray} && \frac{ 3 \textbf{u}^{n+1}-4\textbf{u}^{n}+\textbf{u}^{n-1} }{ 2 \Delta t } - \nu \Delta \textbf{u}^{n+1} + \nabla p^{n+1} \nonumber\\ &&= \exp( \frac{t^{n+1} }{T} ) q^{n+1}\big( \alpha(\nabla \times \bar{ \textbf{b} }^{n+1} ) \times \bar{ \textbf{b} }^{n+1}- \bar{ \textbf{u} }^{n+1}\cdot \nabla \bar{ \textbf{u} }^{n+1}\big), \label{e_SAV_scheme_second_u} \\ && \frac{ 3 \textbf{b}^{n+1}-4\textbf{b}^{n}+\textbf{b}^{n-1} }{ 2 \Delta t } + \eta \nabla \times ( \nabla \times \textbf{b}^{n+1} ) + \exp( \frac{t^{n+1} }{T} ) q^{n+1} \nabla \times ( \bar{ \textbf{b} }^{n+1} \times \bar{ \textbf{u} }^{n+1} ) = 0, \label{e_SAV_scheme_second_b} \\ && \nabla\cdot\textbf{u}^{n+1} =0 , \ \ \ \nabla\cdot\textbf{b}^{n+1} =0, \label{e_SAV_scheme_second_div} \\ && \textbf{u}^{n+1} |_{\partial \Omega} =\textbf{0}, \ \ \textbf{b}^{n+1} \cdot \textbf{n} |_{\partial \Omega}=0, \ \ \textbf{n} \times ( \nabla \times \textbf{b}^{n+1} ) |_{\partial \Omega} =0, \label{e_SAV_scheme_second_boundary}\\ && \frac{ 3q^{n+1}- 4q^n + q^{n-1} }{ 2\Delta t } = -\frac{1}{T}q^{n+1}+ \exp( \frac{t^{n+1} }{T} ) \nonumber \\ && \left[ \alpha ( \nabla \times ( \bar{ \textbf{b} }^{n+1} \times \bar{ \textbf{u} }^{n+1} ) , \textbf{b}^{n+1}) - \alpha( (\nabla \times \bar{ \textbf{b} }^{n+1}) \times \bar{ \textbf{b} }^{n+1}, \textbf{u}^{n+1} ) + ( \bar{ \textbf{u} }^{n+1} \cdot \nabla \bar{ \textbf{u} }^{n+1},\textbf{u}^{n+1}) \right], \label{e_SAV_scheme_second_q} \end{eqnarray} where $\bar{\textbf{v}}^{n+1}=2\textbf{v}^{n}-\textbf{v}^{n-1}$ for any function $\textbf{v}$. For $n = 0$, we can compute ($\textbf{u}^{1}$, $p^{1}$, $q^{1}$, $\textbf{b}^{1}$) by the first-order scheme described above.
The second-order scheme \eqref{e_SAV_scheme_second_u}-\eqref{e_SAV_scheme_second_q} can be implemented in the same way as the first-order scheme \eqref{e_SAV_scheme_first_u}-\eqref{e_SAV_scheme_first_q}. \subsection{Energy Stability} We show below that the first- and second-order SAV schemes \eqref{e_SAV_scheme_first_u}-\eqref{e_SAV_scheme_first_q} and \eqref{e_SAV_scheme_second_u}-\eqref{e_SAV_scheme_second_q} are unconditionally energy stable. We shall use $\|\cdot\|$ and $(\cdot, \cdot)$ to denote the norm and inner product in $L^2(\Omega)$, and $<\cdot, \cdot>$ to denote the inner product in $L^2(\partial\Omega)$. \medskip \begin{theorem}\label{thm_energy stability_first order} The scheme \eqref{e_SAV_scheme_first_u}-\eqref{e_SAV_scheme_first_q} is unconditionally stable in the sense that \begin{equation}\label{discteng} \aligned E^{n+1}-E^{n} \leq - \nu\Delta t \| \nabla \textbf{u}^{n+1} \|^2 - \eta \alpha \Delta t \| \nabla \textbf{b}^{n+1} \|^2 -\frac{1}{T}\Delta t|q^{n+1}|^2, \ \ \forall \Delta t,\; n\geq 0, \endaligned \end{equation} where \begin{equation*} E^{n+1}=\frac 12 \|\textbf{u}^{n+1}\|^2 + \frac \alpha 2 \|\textbf{b}^{n+1}\|^2 +\frac 12|q^{n+1}|^2 . \end{equation*} \end{theorem} \begin{proof} Taking the inner product of \eqref{e_SAV_scheme_first_u} with $\Delta t \textbf{u}^{n+1}$ and using the identity \begin{equation}\label{e_identity_Euler} \aligned (a-b,a)=\frac{1}{2}(|a|^2-|b|^2+|a-b|^2), \endaligned \end{equation} we have \begin{equation}\label{e_stability_first_u} \aligned &\frac{\| \textbf{u} ^{n+1}\|^2-\| \textbf{u}^{n} \|^2}{2}+\frac{\| \textbf{u}^{n+1}-\textbf{u}^{n} \|^2}{2} +\nu\Delta t \| \nabla \textbf{u}^{n+1} \|^2 + \Delta t(\nabla p^{n+1}, \textbf{u}^{n+1}) \\ & = \Delta t \exp( \frac{t^{n+1} }{T} ) q^{n+1} \left( \alpha ( ( \nabla \times \textbf{b}^n ) \times \textbf{b}^n, \textbf{u}^{n+1} )-(\textbf{u}^{n}\cdot \nabla \textbf{u}^{n},\textbf{u}^{n+1} )\right) .
\endaligned \end{equation} Taking the inner product of \eqref{e_SAV_scheme_first_b} with $ \alpha \Delta t \textbf{b}^{n+1}$ and using the identity \begin{equation} \label{e_curl(curl b)} \aligned & \nabla \times ( \nabla \times \textbf{b}^{n+1} ) = -\Delta \textbf{b}^{n+1} + \nabla ( \nabla \cdot \textbf{b}^{n+1} ), \endaligned \end{equation} we have \begin{equation}\label{e_stability_first_b} \aligned & \alpha \frac{\| \textbf{b} ^{n+1}\|^2-\| \textbf{b}^{n} \|^2}{2}+ \alpha \frac{\| \textbf{b}^{n+1}-\textbf{b}^{n} \|^2 }{2} + \eta \alpha \Delta t \| \nabla \textbf{b}^{n+1} \|^2 \\ & \ \ \ \ \ + \alpha \Delta t \exp( \frac{t^{n+1}}{T} ) q^{n+1} \left( \nabla \times ( \textbf{b}^n \times \textbf{u}^n ) , \textbf{b}^{n+1} \right) =0. \endaligned \end{equation} Multiplying \eqref{e_SAV_scheme_first_q} by $q^{n+1}\Delta t$ leads to \begin{equation}\label{e_stability_first_q} \aligned & \frac{ |q^{n+1}|^2 - |q^n|^2 } { 2 } + \frac{1}{2} | q^{n+1}- q^n |^2 + \frac{1}{T} \Delta t |q^{n+1}|^2 \\ &= \Delta t q^{n+1} \exp( \frac{t^{n+1} }{T} )\big( (\textbf{u}^n\cdot\nabla \textbf{u}^n, \textbf{u}^{n+1}) - \alpha ( (\nabla \times \textbf{b}^n ) \times \textbf{b}^n, \textbf{u}^{n+1}) + \alpha ( \nabla \times (\textbf{b}^n \times \textbf{u}^n) , \textbf{b}^{n+1}) \big) . \endaligned \end{equation} Then summing up \eqref{e_stability_first_u} with \eqref{e_stability_first_b}-\eqref{e_stability_first_q} results in \begin{equation*}\label{e_stability_first_final} \aligned & \|\textbf{u}^{n+1}\|^2-\|\textbf{u}^{n}\|^2 + \alpha \|\textbf{b}^{n+1}\|^2- \alpha \|\textbf{b}^{n}\|^2 +|q^{n+1}|^2-|q^n|^2 \\ & +|q^{n+1}-q^n|^2+\| \textbf{u} ^{n+1}-\textbf{u}^{n}\|^2 + \alpha \| \textbf{b}^{n+1}-\textbf{b}^{n}\|^2 \\ & \leq -2 \nu\Delta t \| \nabla \textbf{u}^{n+1} \|^2 - 2 \eta \alpha \Delta t \| \nabla \textbf{b}^{n+1} \|^2-\frac{2}{T}\Delta t|q^{n+1}|^2, \endaligned \end{equation*} which implies the desired result.
\end{proof} We observe that the discrete energy dissipation law \eqref{discteng} is an approximation of the continuous energy dissipation law \eqref{engdiss2}. \medskip \begin{theorem}\label{thm_energy stability_second order} The scheme \eqref{e_SAV_scheme_second_u}-\eqref{e_SAV_scheme_second_q} is unconditionally stable in the sense that \begin{equation}\label{e_energy decay_second} \aligned E^{n+1}-E^{n}\leq -\Delta t (\nu\| \nabla \textbf{u}^{n+1} \|^2 + \eta \alpha \| \nabla \textbf{b}^{n+1} \|^2+\frac 1T|q^{n+1}|^2), \ \ \forall \Delta t,\; n\geq 0, \endaligned \end{equation} where \begin{equation}\label{e_definition of E} \aligned E^{n+1}= & \frac 14 (\| \textbf{u}^{n+1}\|^2 + \alpha \| \textbf{b}^{n+1}\|^2+|q^{n+1}|^2)\\ &+\frac 14 (\| 2\textbf{u}^{n+1}-\textbf{u}^{n} \|^2 + \alpha \| 2\textbf{b}^{n+1}-\textbf{b}^{n} \|^2 +|2q^{n+1}-q^n|^2) . \endaligned \end{equation} \end{theorem} \begin{proof} Taking the inner product of \eqref{e_SAV_scheme_second_u} with $ 4 \Delta t \textbf{u}^{n+1}$ and using the identity \begin{equation}\label{e_identity_BDF} \aligned 2(3a-4b+c,a)=|a|^2+|2a-b|^2-|b|^2-|2b-c|^2+|a-2b+c|^2, \endaligned \end{equation} we have \begin{equation}\label{e_stability_second_u} \aligned & \| \textbf{u} ^{n+1} \|^2 + \| 2 \textbf{u} ^{n+1}- \textbf{u} ^{n} \|^2 - \| \textbf{u} ^{n} \|^2 - \| 2 \textbf{u} ^{n}- \textbf{u} ^{n-1} \|^2 + \| \textbf{u} ^{n+1}-2\textbf{u} ^n+ \textbf{u} ^{n-1} \|^2 \\ &\ \ \ \ \ \ + 4 \nu\Delta t \| \nabla \textbf{u}^{n+1} \|^2 + 4 \Delta t(\nabla p^{n+1}, \textbf{u}^{n+1}) \\ & = 4 \Delta t\exp( \frac{t^{n+1} }{T} ) q^{n+1} \left( \alpha ( (\nabla \times \bar{ \textbf{b} }^{n+1} ) \times \bar{ \textbf{b} }^{n+1}, \textbf{u}^{n+1} ) -( \bar{ \textbf{u} }^{n+1} \cdot \nabla \bar{ \textbf{u} }^{n+1},\textbf{u}^{n+1} )\right) . 
\endaligned \end{equation} Taking the inner product of \eqref{e_SAV_scheme_second_b} with $ 4 \alpha \Delta t \textbf{b}^{n+1}$ leads to \begin{equation}\label{e_stability_second_b} \aligned & \alpha ( \| \textbf{b} ^{n+1} \|^2 + \| 2 \textbf{b} ^{n+1}- \textbf{b} ^{n} \|^2 - \| \textbf{b} ^{n} \|^2 - \| 2 \textbf{b} ^{n}- \textbf{b} ^{n-1} \|^2 + \| \textbf{b} ^{n+1}-2\textbf{b} ^n+ \textbf{b} ^{n-1} \|^2 ) \\ & + 4 \eta \alpha \Delta t \| \nabla \textbf{b}^{n+1} \|^2 + 4 \alpha \Delta t \exp( \frac{t^{n+1}}{T} ) q^{n+1} \left( \nabla \times ( \bar{ \textbf{b} }^{n+1} \times \bar{ \textbf{u} }^{n+1} ) , \textbf{b}^{n+1} \right) =0. \endaligned \end{equation} Multiplying \eqref{e_SAV_scheme_second_q} by $4 \Delta t q^{n+1}$ leads to \begin{equation}\label{e_stability_second_q} \aligned & |q^{n+1}|^2+|2q^{n+1}-q^n|^2-|q^n|^2-|2q^{n}-q^{n-1}|^2+|q^{n+1}-2q^n+q^{n-1}|^2 \\ = & - \frac{4\Delta t}{T} |q^{n+1}|^2 + 4 \Delta t q^{n+1} \exp( \frac{t^{n+1} }{T} )( ( \bar{ \textbf{u} }^{n+1} \cdot \nabla ) \bar{ \textbf{u} }^{n+1}, \textbf{u}^{n+1} ) \\ & - 4 \alpha \Delta t q^{n+1} \exp( \frac{t^{n+1} }{T} ) \left( ( (\nabla \times \bar{ \textbf{b} }^{n+1}) \times \bar{ \textbf{b} }^{n+1}, \textbf{u}^{n+1} ) - (\nabla \times (\bar{ \textbf{b} }^{n+1} \times \bar{ \textbf{u} }^{n+1} ) , \textbf{b}^{n+1}) \right) . 
\endaligned \end{equation} Then summing up \eqref{e_stability_second_u} with \eqref{e_stability_second_b}-\eqref{e_stability_second_q} results in \begin{equation*}\label{e_stability_second_final} \aligned & \| \textbf{u}^{n+1}\|^2+\| 2\textbf{u}^{n+1}-\textbf{u}^{n} \|^2 + \alpha \| \textbf{b}^{n+1}\|^2+ \alpha \| 2\textbf{b}^{n+1}-\textbf{b}^{n} \|^2 \\ &+ |q^{n+1}|^2+|2q^{n+1}-q^n|^2 + \| \textbf{u} ^{n+1}-2\textbf{u} ^n+ \textbf{u} ^{n-1} \|^2 + \alpha \| \textbf{b} ^{n+1}-2\textbf{b} ^n+ \textbf{b} ^{n-1} \|^2 \\ & + |q^{n+1}-2q^n+q^{n-1}|^2 + \frac{4\Delta t}{T} |q^{n+1}|^2 + 4 \nu\Delta t \| \nabla \textbf{u}^{n+1} \|^2 + 4 \eta \alpha \Delta t \| \nabla \textbf{b}^{n+1} \|^2 \\ \leq & \| \textbf{u}^{n}\|^2+\| 2\textbf{u}^{n}-\textbf{u}^{n-1} \|^2 + \alpha \| \textbf{b}^{n}\|^2+ \alpha \| 2\textbf{b}^{n}-\textbf{b}^{n-1} \|^2 + |q^{n}|^2+|2q^{n}-q^{n-1} |^2, \endaligned \end{equation*} which implies the desired result. \end{proof} Note that the discrete energy defined in \eqref{e_definition of E} is a second-order approximation of the continuous energy defined in \eqref{engdiss2}, and \eqref{e_energy decay_second} is an approximation of the continuous energy dissipation law \eqref{engdiss2}. \section{Error Analysis} In this section, we carry out a rigorous error analysis for Scheme I \eqref{e_SAV_scheme_first_u}-\eqref{e_SAV_scheme_first_q} in the two-dimensional case. A similar analysis can be carried out for Scheme II, but the process is much more tedious, so we only consider Scheme I here. We emphasize that while both schemes can be used in the three-dimensional case, the error analysis cannot be easily extended to three dimensions due to some technical issues. Hence, we set $d=2$ in this section. \subsection{Preliminaries} We describe below some notations and results which will be frequently used in the analysis. We use $C$, with or without subscript, to denote a positive constant, which could have different values at different places.
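Before introducing notation, we remark that the scalar identity \eqref{e_identity_BDF} underlying the BDF2 energy estimates above is elementary and can be checked numerically; a minimal Python sketch (the sampled values are illustrative):

```python
import random

def bdf2_identity_residual(a, b, c):
    """Residual of the identity
    2(3a - 4b + c) a = a^2 + (2a-b)^2 - b^2 - (2b-c)^2 + (a-2b+c)^2,
    which should vanish up to rounding error."""
    lhs = 2.0 * (3.0 * a - 4.0 * b + c) * a
    rhs = (a**2 + (2.0 * a - b)**2 - b**2
           - (2.0 * b - c)**2 + (a - 2.0 * b + c)**2)
    return lhs - rhs

random.seed(0)
residuals = [bdf2_identity_residual(random.uniform(-5, 5),
                                    random.uniform(-5, 5),
                                    random.uniform(-5, 5)) for _ in range(100)]
```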
We use the standard notations $L^2(\Omega)$, $H^k(\Omega)$ and $H^k_0(\Omega)$ to denote the usual Sobolev spaces. The norm corresponding to $H^k(\Omega)$ will be denoted simply by $\|\cdot\|_k$. The vector functions and vector spaces will be indicated by boldface type. We define \begin{flalign*} \begin{array}{l} \displaystyle L^2_0( \Omega) = \{ q \in L^2 ( \Omega ) : \int_{\Omega} q \, dx=0 \} , \\ \displaystyle \textbf{H}^k( \Omega ) = ( H^k( \Omega) )^d,\ \ \textbf{H}^1_0( \Omega ) = \{ \textbf{v} \in \textbf{H}^1( \Omega ) : \textbf{v} |_{\partial \Omega }=0 \}, \\ \displaystyle \textbf{H}^1_n( \Omega ) = \{ \textbf{v} \in \textbf{H}^1( \Omega ) : \textbf{v} \cdot \textbf{n}| _{\partial \Omega }= 0 \} , \\ \displaystyle \textbf{V} = \{ \textbf{v} \in \textbf{H}_0^1( \Omega ) : \nabla\cdot \textbf{v} =0 \} , \\ \displaystyle \textbf{H} = \{ \textbf{v} \in ( L^2 ( \Omega ) )^2 : \nabla\cdot \textbf{v} =0, \ \textbf{v} \cdot \textbf{n}| _{\partial \Omega }= 0 \} . \end{array} \end{flalign*} The following formulae are essential and useful for our analysis: \begin{equation}\label{e_cross_product1} \aligned ( \nabla \times \textbf{v} ) \times \textbf{v} = ( \textbf{v} \cdot \nabla ) \textbf{v} - \frac{1}{2} \nabla | \textbf{v}|^2, \endaligned \end{equation} \begin{equation}\label{e_cross_product2} \aligned \textbf{v} \times ( \textbf{w} \times \textbf{z} ) = ( \textbf{v} \cdot \textbf{z} ) \textbf{w} - ( \textbf{v} \cdot \textbf{w} ) \textbf{z}, \endaligned \end{equation} \begin{equation}\label{e_cross_product2_plus} \aligned \nabla \times ( \textbf{v} \times \textbf{w} ) = ( \textbf{w} \cdot \nabla ) \textbf{v} - ( \textbf{v} \cdot \nabla ) \textbf{w} + ( \nabla \cdot \textbf{w} ) \textbf{v} - ( \nabla \cdot \textbf{v} ) \textbf{w}, \endaligned \end{equation} \begin{equation}\label{e_cross_product3} \aligned ( \textbf{v} \times \textbf{w} ) \times \textbf{z} \cdot \textbf{q} = ( \textbf{v} \times \textbf{w} ) \cdot ( \textbf{z} \times \textbf{q} ) = - ( \textbf{v}
\times \textbf{w} ) \cdot ( \textbf{q} \times \textbf{z} ) , \endaligned \end{equation} \begin{equation}\label{e_integration by parts1} \aligned \int_{ \Omega } ( \nabla \times \textbf{v} ) \cdot \textbf{w} d \textbf{x} = \int_{ \Omega } \textbf{v} \cdot ( \nabla \times \textbf{w} ) d \textbf{x} + \int_{ \partial \Omega } ( \textbf{n} \times \textbf{v} ) \cdot \textbf{w} ds. \endaligned \end{equation} Define the Stokes operator $$ A\textbf{u}=-P \Delta\textbf{u},\ \ \forall \ \textbf{u}\in D(A)=\textbf{H}^2(\Omega)\cap\textbf{V},$$ where $P $ is the orthogonal projector in $\textbf{L}^2(\Omega)$ onto $\textbf{H}$, and the Stokes operator $A$ is an unbounded positive self-adjoint closed operator in $\textbf{H}$ with domain $D(A)$. We then derive from the above and Poincar\'e inequality that \cite{temam2001navier,heywood1982finite} \begin{equation}\label{e_norm H2} \aligned \|\nabla\textbf{v}\|\leq c_1\|A^{\frac{1}{2}}\textbf{v}\|,\ \ \|\Delta\textbf{v}\|\leq c_1\|A\textbf{v}\|, \ \ \forall \ \textbf{v}\in D(A)=\textbf{H}^2(\Omega)\cap\textbf{V}, \endaligned \end{equation} and \begin{equation}\label{e_norm H1} \aligned \|\textbf{v}\|\leq c_1\|\nabla\textbf{v}\|, \ \forall \ \textbf{v}\in \textbf{H}^1_0(\Omega),\ \ \|\nabla\textbf{v}\|\leq c_1\|A\textbf{v}\|, \ \ \forall \ \textbf{v}\in D(A) . 
\endaligned \end{equation} We recall the following inequalities, which will be used in the sequel \cite{Gerbeau2006Mathematical,Jinjin2019A}: \begin{equation}\label{e_norm curl} \aligned \|\nabla \times \textbf{v}\|_0 \leq c_1 \| \nabla \textbf{v} \|_0 , \ \ \|\nabla \cdot \textbf{v}\|_0 \leq c_1 \| \nabla \textbf{v} \|_0, \ \forall \ \textbf{v} \in \textbf{H}^1 (\Omega), \endaligned \end{equation} \begin{equation}\label{e_norm curl_div} \aligned \|\nabla \times \textbf{v}\|_0^2 + \| \nabla \cdot \textbf{v}\|_0 ^2 \geq c_1 \| \textbf{v} \|_1^2, \ \forall \ \textbf{v} \in \textbf{H}^1_n (\Omega), \endaligned \end{equation} and the following well-known inequalities which are valid for $d=2$ \cite{liu2010stable}: \begin{equation}\label{e_norm L4} \aligned \|\textbf{v}\|_{L^4} \leq c_1 \| \textbf{v}\|^{1/2}_0 \| \textbf{v}\|^{1/2}_1, \ \forall \ \textbf{v} \in \textbf{H}^1 (\Omega), \endaligned \end{equation} \begin{equation}\label{e_norm L_infty} \aligned \|\textbf{v}\|_{L^{\infty}} \leq c_1 \| \textbf{v}\|^{1/2}_1 \| \textbf{v}\|^{1/2}_2, \ \forall \ \textbf{v} \in \textbf{H}^2 (\Omega), \endaligned \end{equation} where $c_1$ is a positive constant depending only on $\Omega$. Next, we define the trilinear form $b(\cdot,\cdot,\cdot)$ by \begin{equation*} \aligned b(\textbf{u},\textbf{v},\textbf{w})=\int_{\Omega}(\textbf{u}\cdot\nabla)\textbf{v}\cdot \textbf{w}d\textbf{x}. \endaligned \end{equation*} We can easily obtain that the trilinear form $b(\cdot,\cdot,\cdot)$ is skew-symmetric with respect to its last two arguments, i.e., \begin{equation}\label{e_skew-symmetric1} \aligned b(\textbf{u},\textbf{v},\textbf{w})=-b(\textbf{u},\textbf{w},\textbf{v}),\ \ \forall \ \textbf{u}\in \textbf{H}, \ \ \textbf{v}, \textbf{w}\in \textbf{H}^1(\Omega), \endaligned \end{equation} and \begin{equation}\label{e_skew-symmetric2} \aligned b(\textbf{u},\textbf{v},\textbf{v})=0,\ \ \forall \ \textbf{u}\in \textbf{H}, \ \ \textbf{v}\in \textbf{H}^1 (\Omega).
\endaligned \end{equation} By using a combination of integration by parts, H\"{o}lder's inequality, and Sobolev inequalities \cite{Temam1995Navier,Shen1992On,he2013euler}, we have that for $d \leq 4$, \begin{flalign}\label{e_estimate for trilinear form} b(\textbf{u},\textbf{v},\textbf{w})\leq \left\{ \begin{array}{l} c_2\|\textbf{u}\|_1\|\textbf{v}\|_1\|\textbf{w}\|_1,\\ c_2\|\textbf{u}\|_2\|\textbf{v}\|\|\textbf{w}\|_1,\\ c_2\|\textbf{u}\|_2\|\textbf{v}\|_1\|\textbf{w}\|,\\ c_2\|\textbf{u}\|_1\|\textbf{v}\|_2\|\textbf{w}\|,\\ c_2\|\textbf{u}\|\|\textbf{v}\|_2\|\textbf{w}\|_1, \end{array} \right. \end{flalign} and that for $d=2$, \begin{flalign}\label{e_estimate for trilinear form1} b(\textbf{u},\textbf{v},\textbf{w})\leq \left\{ \begin{array}{l} c_2\|\textbf{u}\|_1^{1/2}\|\textbf{u}\|^{1/2}\|\textbf{v}\|_1^{1/2}\|\textbf{v}\|^{1/2}\|\textbf{w}\|_1, \\ c_2\|\textbf{u}\|_1^{1/2}\|\textbf{u}\|^{1/2}\|A\textbf{v}\|^{1/2}\|\textbf{v}\|^{1/2}\|\textbf{w}\|, \\ c_2\|A\textbf{u}\|^{1/2}\|\textbf{u}\|^{1/2}\|\textbf{v}\|_1\|\textbf{w}\|, \end{array} \right. \end{flalign} where $c_2$ is a positive constant depending only on $\Omega$. We will frequently use the following discrete version of the Gronwall lemma: \medskip \begin{lemma} \label{lem: gronwall2} Let $a_k$, $b_k$, $c_k$, $d_k$, $\gamma_k$, $\Delta t_k$ be nonnegative real numbers such that \begin{equation}\label{e_Gronwall3} \aligned a_{k+1}-a_k+b_{k+1}\Delta t_{k+1}+c_{k+1}\Delta t_{k+1}-c_k\Delta t_k\leq a_kd_k\Delta t_k+\gamma_{k+1}\Delta t_{k+1} \endaligned \end{equation} for all $0\leq k\leq m$. Then \begin{equation}\label{e_Gronwall4} \aligned a_{m+1}+\sum_{k=0}^{m+1}b_k\Delta t_k \leq \exp \left(\sum_{k=0}^md_k\Delta t_k \right)\{a_0+(b_0+c_0)\Delta t_0+\sum_{k=1}^{m+1}\gamma_k\Delta t_k \}. \endaligned \end{equation} \end{lemma} Finally, we may drop the dependence on ${\bm x}$ if no confusion can arise.
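Lemma \ref{lem: gronwall2} can be exercised on concrete sequences: the following Python sketch builds $a_k$ satisfying \eqref{e_Gronwall3} with equality for constant data (the chosen constants are illustrative) and evaluates both sides of the conclusion \eqref{e_Gronwall4}.

```python
import math

def gronwall_check(a0, b, c, d, gamma, dt, m):
    """Build a_k satisfying the discrete Gronwall recursion with equality,
    using constant b_k = b, c_k = c, d_k = d, gamma_k = gamma, dt_k = dt,
    and return (lhs, rhs) of the lemma's conclusion."""
    a = [a0]
    for k in range(m + 1):
        # a_{k+1} - a_k + b*dt + c*dt - c*dt = a_k*d*dt + gamma*dt
        a.append(a[k] + a[k] * d * dt + gamma * dt - b * dt)
    lhs = a[m + 1] + sum(b * dt for _ in range(m + 2))   # a_{m+1} + sum_{k=0}^{m+1} b_k dt_k
    rhs = math.exp(sum(d * dt for _ in range(m + 1))) * (
        a0 + (b + c) * dt + sum(gamma * dt for _ in range(1, m + 2)))
    return lhs, rhs
```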
In particular, we set \begin{numcases}{} \displaystyle e_{\textbf{b}}^{n+1}=\textbf{b}^{n+1}-\textbf{b}(t^{n+1}),\ \ \displaystyle e_{\textbf{u}}^{n+1}=\textbf{u}^{n+1}-\textbf{u}(t^{n+1}), \notag\\ \displaystyle e_{p}^{n+1}=p^{n+1}-p(t^{n+1}),\ \ \ \displaystyle e_{q}^{n+1}=q^{n+1}-q(t^{n+1}).\notag \end{numcases} \subsection{Error estimates for the velocity and magnetic field} In this subsection, we derive the following error estimates for the velocity $\textbf{u}$ and magnetic field $\textbf{b}$. \begin{theorem}\label{thm: error_estimate_ubq} Assume that $\textbf{u}\in H^2(0,T;\textbf{H}^{-1}(\Omega))\bigcap H^1(0,T;\textbf{H}^2(\Omega))\bigcap L^{\infty}(0,T; \textbf{H}^2(\Omega) )$ and $\textbf{b}\in H^2(0,T;\textbf{H}^{-1}(\Omega))\bigcap H^1(0,T;\textbf{H}^2(\Omega))\bigcap L^{\infty}(0,T; \textbf{H}^2(\Omega) )$. Then for the scheme \eqref{e_SAV_scheme_first_u}-\eqref{e_SAV_scheme_first_q}, we have \begin{equation*} \aligned & \| e_{\textbf{u}}^{m +1}\|^2 + \| e_{\textbf{b}}^{m+1} \|^2 + |e_q^{m+1}|^2 + \nu \Delta t \sum\limits_{n=0}^{m} \| \nabla e_{\textbf{u}}^{n+1}\|^2 \\ & + \eta \Delta t \sum\limits_{n=0}^{m} \| \nabla e_{\textbf{b}}^{n+1}\|^2 + \Delta t \sum\limits_{n=0}^{m} | e_{q}^{n+1} |^2 + \sum\limits_{n=0}^{m} \| e_{\textbf{u}}^{n+1}-e_{\textbf{u}}^n\|^2 \\ & + \sum\limits_{n=0}^{m} \| e_{\textbf{b}}^{n+1}-e_{\textbf{b}}^n\|^2 + \sum\limits_{n=0}^{m} | e_q^{n+1}-e_q^n |^2 \leq C (\Delta t)^2 , \ \ \ \forall \ 0\leq m \leq N-1, \endaligned \end{equation*} where $C$ is a positive constant independent of $\Delta t$. \end{theorem} The proof of the above theorem will be carried out with a sequence of lemmas below. We start first with the following uniform bounds which are a direct consequence of the energy stability in Theorem \ref{thm_energy stability_first order}.
\begin{lemma}\label{lem_L2H1_boundedness} Let ($\textbf{u}^{n+1}$, $p^{n+1}$, $q^{n+1}$, $\textbf{b}^{n+1}$) be the solution of \eqref{e_SAV_scheme_first_u}-\eqref{e_SAV_scheme_first_q}, then we have \begin{equation}\label{e_ubq_boundedness_L2 } \aligned \| \textbf {u}^{m+1} \|^2 + \| \textbf {b}^{m+1} \|^2 + | q^{m+1} |^2 \leq k_1, \ \ \forall \ 0\leq m\leq N-1, \endaligned \end{equation} and \begin{equation}\label{e_ub_boundedness_H1 } \aligned \Delta t\sum_{n=0}^{m} \| \textbf {u}^{n+1} \|_1^2 + \Delta t\sum_{n=0}^{m} \| \textbf {b}^{n+1} \|_1^2 \leq k_2, \ \ \forall \ 0\leq m\leq N-1, \endaligned \end{equation} where the constants $k_i$ $(i=1,2)$ are independent of $\Delta t$. \end{lemma} Next, we derive a first bound for the velocity errors. \begin{lemma}\label{lem: error_estimate_u} Under the assumptions of Theorem \ref{thm: error_estimate_ubq}, we have \begin{equation}\label{lem3.4} \aligned \frac{\| e_{\textbf{u}}^{n+1}\|^2-\|e_{\textbf{u}}^n\|^2}{2\Delta t}&+\frac{\| e_{\textbf{u}}^{n+1}-e_{\textbf{u}}^n\|^2}{2\Delta t}+ \frac{ \nu } {2} \| \nabla e_{\textbf{u}}^{n+1}\|^2 \\ \leq & \exp( \frac{t^{n+1}}{T} ) e_q^{n+1} \left(\alpha( (\nabla \times \textbf{b}^{n} ) \times \textbf{b}^{n}, e_{\textbf{u}}^{n+1}) - (\textbf{u}^n \cdot \nabla\textbf{u}^n, e_{\textbf{u}}^{n+1})\right) \\ & + C ( \| \textbf{u}(t^n)\|_2^2 + \| \textbf{u}( t^{n+1} )\|_2^2 +\|e_{\textbf{u}}^n\|^2_1) \|e_{\textbf{u}}^n\|^2 + C( \| e_{ \textbf{b} }^{n} \|_1^2 + \| \textbf{b}( t^{n+1} ) \|_2^2 ) \| e_{ \textbf{b} }^{n} \|^2 \\ &+ C \Delta t \int_{t^n}^{t^{n+1}} ( \|\textbf{u}_t \|_2^2 + \|\textbf{u}_{tt}\|_{-1}^2 + \|\textbf{b}_t \|^2_2 ) dt, \ \ \ \forall \ 0\leq n \leq N-1, \endaligned \end{equation} where $C$ is a positive constant independent of $\Delta t$. 
\end{lemma} \begin{proof} Let $\textbf{R}_{\textbf{u}}^{n+1}$ be the truncation error defined by \begin{equation}\label{e_error_Ru} \aligned \textbf{R}_{\textbf{u}}^{n+1}=\frac{\partial \textbf{u}(t^{n+1})}{\partial t}- \frac{\textbf{u}(t^{n+1})-\textbf{u}(t^{n})}{\Delta t}=\frac{1}{\Delta t}\int_{t^n}^{t^{n+1}}(t-t^n)\frac{\partial^2 \textbf{u}}{\partial t^2}dt. \endaligned \end{equation} Subtracting \eqref{e_MHD_model_SAV1} at $t^{n+1}$ from \eqref{e_SAV_scheme_first_u}, we obtain \begin{equation}\label{e_error_u} \aligned d_t e_{\textbf{u} }^{n+1} &- \nu \Delta e_{\textbf{u} }^{n+1} + \nabla e_p^{n+1} = \textbf{R}_{\textbf{u}}^{n+1} \\ &+ \exp( \frac{t^{n+1}}{T} ) q(t^{n+1}) (\textbf{u}(t^{n+1})\cdot \nabla\textbf{u}(t^{n+1}) - \textbf{u}^{n}\cdot \nabla \textbf{u}^{n})\\ &+ \alpha \exp( \frac{t^{n+1}}{T} ) q^{n+1} ( (\nabla \times \textbf{b}^{n}) \times \textbf{b}^{n} - (\nabla \times \textbf{b}(t^{n+1})) \times \textbf{b}(t^{n+1} )). \endaligned \end{equation} Taking the inner product of \eqref{e_error_u} with $e_{\textbf{u}}^{n+1}$, we obtain \begin{equation}\label{e_error_u_inner_product} \aligned &\frac{\| e_{\textbf{u}}^{n+1}\|^2-\|e_{\textbf{u}}^n\|^2}{2\Delta t}+\frac{\| e_{\textbf{u}}^{n+1}-e_{\textbf{u}}^n\|^2}{2\Delta t}+\nu \| \nabla e_{\textbf{u}}^{n+1}\|^2 + ( \nabla e_p^{n+1}, e_{\textbf{u}}^{n+1} ) =(\textbf{R}_{\textbf{u}}^{n+1}, e_{\textbf{u}}^{n+1})\\ & + \exp( \frac{t^{n+1}}{T} ) \left( q(t^{n+1}) \textbf{u}(t^{n+1})\cdot \nabla\textbf{u}(t^{n+1})-q^{n+1} \textbf{u}^{n}\cdot \nabla \textbf{u}^{n}, e_{\textbf{u}}^{n+1} \right) \\ & + \alpha \exp( \frac{t^{n+1}}{T} ) \left( q^{n+1} ( \nabla \times \textbf{b}^{n} ) \times \textbf{b}^{n} - q(t^{n+1}) ( \nabla \times \textbf{b}(t^{n+1}) ) \times \textbf{b}(t^{n+1} ) , e_{\textbf{u}}^{n+1} \right) .
\endaligned \end{equation} For the first term on the right hand side of \eqref{e_error_u_inner_product}, we have \begin{equation}\label{e_error_inner_Ru} \aligned &(\textbf{R}_{\textbf{u}}^{n+1}, e_{\textbf{u}}^{n+1})\leq \frac{\nu}{16} \|\nabla e_{\textbf{u}}^{n+1} \|^2+C \Delta t\int_{t^n}^{t^{n+1}}\|\textbf{u}_{tt}\|_{-1}^2dt. \endaligned \end{equation} For the second term on the right hand side of \eqref{e_error_u_inner_product}, we have \begin{equation}\label{e_error_u_nonlinear_convective} \aligned \exp( \frac{t^{n+1}}{T} ) &\left( q(t^{n+1}) \textbf{u}(t^{n+1})\cdot \nabla\textbf{u}(t^{n+1})-q^{n+1} \textbf{u}^{n}\cdot \nabla\textbf{u}^{n}, e_{\textbf{u}}^{n+1} \right) \\ =& \left( ( \textbf{u}(t^{n+1})-\textbf{u}^{n} )\cdot \nabla \textbf{u}(t^{n+1}), e_{\textbf{u}}^{n+1}\right) + \left( \textbf{u}^{n}\cdot \nabla (\textbf{u}(t^{n+1})-\textbf{u}^{n} ), e_{\textbf{u}}^{n+1}\right)\\ &- \exp( \frac{t^{n+1}}{T} ) e_q^{n+1} \left(\textbf{u}^n \cdot \nabla\textbf{u}^n, e_{\textbf{u}}^{n+1}\right). \endaligned \end{equation} Using Cauchy-Schwarz inequality and recalling Lemma \ref{lem_L2H1_boundedness} and \eqref{e_estimate for trilinear form}, the first term on the right hand side of \eqref{e_error_u_nonlinear_convective} can be bounded by \begin{equation}\label{e_error_u_nonlinear_convective1} \aligned & \left( ( \textbf{u}(t^{n+1})-\textbf{u}^{n} )\cdot \nabla \textbf{u}(t^{n+1}), e_{\textbf{u}}^{n+1}\right)\\ & \ \ \ \ \ \leq c_2(1+c_1) \| \textbf{u}(t^{n+1})-\textbf{u}^{n} \| \| \textbf{u}(t^{n+1})\|_2 \|\nabla e_{\textbf{u}}^{n+1} \| \\ & \ \ \ \ \ \leq \frac{\nu}{16} \|\nabla e_{\textbf{u}}^{n+1} \|^2+C \| \textbf{u}(t^{n+1})\|_2^2\|e_{\textbf{u}}^n\|^2+C \| \textbf{u}(t^{n+1})\|_2^2\Delta t\int_{t^n}^{t^{n+1}}\|\textbf{u}_t\|^2dt. 
\endaligned \end{equation} The second term on the right hand side of \eqref{e_error_u_nonlinear_convective} can be estimated as follows, by using a procedure similar to that in \cite{li2020new}: \begin{equation}\label{e_error_u_nonlinear_convective2} \aligned ( \textbf{u}^{n}\cdot \nabla & (\textbf{u}(t^{n+1})-\textbf{u}^{n} ), e_{\textbf{u}}^{n+1})\\ = & \left( \textbf{u}^{n}\cdot \nabla (\textbf{u}(t^{n+1})-\textbf{u}(t^{n}) ), e_{\textbf{u}}^{n+1}\right)- \left( e_{\textbf{u}}^{n}\cdot \nabla e_{\textbf{u}}^n, e_{\textbf{u}}^{n+1}\right)- \left( \textbf{u}(t^n)\cdot \nabla e_{\textbf{u}}^n, e_{\textbf{u}}^{n+1}\right)\\ \leq &c_2(1+c_1) \|\nabla e_{\textbf{u}}^{n+1} \| (\|\textbf{u}^{n}\| \| \int_{t^n}^{t^{n+1}}\textbf{u}_tdt \|_2+\|e_{\textbf{u}}^n\| \| \textbf{u}(t^n)\|_2)\\ &+c_2(1+c_1) \|e_{\textbf{u}}^n\|^{1/2} \|e_{\textbf{u}}^n\|^{1/2}_1\|e_{\textbf{u}}^n\|^{1/2} \|e_{\textbf{u}}^n\|^{1/2}_1\| \nabla e_{\textbf{u}}^{n+1} \| \\ \leq & \frac{\nu}{16} \|\nabla e_{\textbf{u}}^{n+1} \|^2+C (\| \textbf{u}(t^n)\|_2^2+\|e_{\textbf{u}}^n\|^2_1) \|e_{\textbf{u}}^n\|^2+C \Delta t \int_{t^n}^{t^{n+1}}\|\textbf{u}_t \|_2^2dt. \endaligned \end{equation} For the last term on the right hand side of \eqref{e_error_u_inner_product}, we have \begin{equation}\label{e_error_u_Lorentz_force} \aligned \exp( \frac{t^{n+1}}{T} )& \left( q^{n+1} ( \nabla \times \textbf{b}^{n} ) \times \textbf{b}^{n} - q(t^{n+1}) ( \nabla \times \textbf{b}(t^{n+1}) ) \times \textbf{b}(t^{n+1} ) , e_{\textbf{u}}^{n+1} \right) \\ =& \exp( \frac{t^{n+1}}{T} ) e_q^{n+1} \left( ( \nabla \times \textbf{b}^{n} ) \times \textbf{b}^{n}, e_{\textbf{u}}^{n+1} \right) + \left( (\nabla \times (\textbf{b}^{n}- \textbf{b}(t^{n+1}) )) \times \textbf{b}^{n}, e_{\textbf{u}}^{n+1} \right) \\ & + \left( ( \nabla \times \textbf{b} (t^{n+1}) ) \times ( \textbf{b}^{n} - \textbf{b}(t^{n+1}) ), e_{\textbf{u}}^{n+1} \right) .
\endaligned \end{equation} The second term on the right hand side of \eqref{e_error_u_Lorentz_force} can be transformed into \begin{equation}\label{e_error_u_Lorentz_force1} \aligned ( (\nabla \times (\textbf{b}^{n}- \textbf{b}(t^{n+1}) )) &\times \textbf{b}^{n}, e_{\textbf{u}}^{n+1} ) \\ =& \left( ( \nabla \times e_{ \textbf{b} }^{n}) \times e_ { \textbf{b} }^{n}, e_{\textbf{u}}^{n+1} \right) + \left( ( \nabla \times e_{ \textbf{b} }^{n} ) \times \textbf{b} (t^n), e_{\textbf{u}}^{n+1} \right) \\ & + \left( (\nabla \times ( \textbf{b}(t^n)- \textbf{b}(t^{n+1}) ) ) \times \textbf{b}^{n}, e_{\textbf{u}}^{n+1} \right). \endaligned \end{equation} Using the identity \eqref{e_cross_product1}, the first term on the right hand side of \eqref{e_error_u_Lorentz_force1} can be bounded by \begin{equation}\label{e_error_u_Lorentz_force2} \aligned & \left( (\nabla \times e_{ \textbf{b} }^{n}) \times e_ { \textbf{b} }^{n}, e_{\textbf{u}}^{n+1} \right) = \left( ( e_{ \textbf{b} }^{n} \cdot \nabla ) e_{ \textbf{b} }^{n}, e_{\textbf{u}}^{n+1} \right) - \frac{1}{2} \left( \nabla |e_{ \textbf{b} }^{n}|^2 , e_{\textbf{u}}^{n+1} \right) \\ & \ \ \ \ \ \leq C \| e_{ \textbf{b} }^{n} \|^{1/2} \| e_{ \textbf{b} }^{n} \|_1^{1/2} \| e_{ \textbf{b} }^{n} \|^{1/2} \| e_{ \textbf{b} }^{n} \|_1^{1/2} \| \nabla e_{\textbf{u}}^{n+1} \| \\ & \ \ \ \ \ \leq \frac{ \nu }{16} \| \nabla e_{\textbf{u}}^{n+1} \|^2 + C \| e_{ \textbf{b} }^{n} \|_1^2 \| e_{ \textbf{b} }^{n} \|^2 . 
\endaligned \end{equation} Using \eqref{e_cross_product2}, \eqref{e_cross_product3} and integration by parts \eqref{e_integration by parts1}, the second term on the right hand side of \eqref{e_error_u_Lorentz_force1} can be controlled by \begin{equation}\label{e_error_u_Lorentz_force3} \aligned \left( (\nabla \times e_{ \textbf{b} }^{n} )\times \textbf{b} (t^n), e_{\textbf{u}}^{n+1} \right) =& - \left( e_{\textbf{u}}^{n+1} \times \textbf{b} (t^n), \nabla \times e_{ \textbf{b} }^{n} \right) \\ =& - \left( \nabla \times (e_{\textbf{u}}^{n+1} \times \textbf{b} (t^n) ), e_{ \textbf{b} }^{n} \right) - < \textbf{n} \times (e_{\textbf{u}}^{n+1} \times \textbf{b} (t^n) ), e_{ \textbf{b} }^{n} > \\ =& \left( ( e_{\textbf{u}}^{n+1} \cdot \nabla ) \textbf{b} (t^n) , e_{ \textbf{b} }^{n} \right) - \left( ( \textbf{b} (t^n) \cdot \nabla ) e_{\textbf{u}}^{n+1}, e_{ \textbf{b} }^{n} \right) \\ \leq & \frac{ \nu }{16} \| \nabla e_{\textbf{u}}^{n+1} \|^2 + C \| \textbf{b}(t^n) \|_2^2 \| e_{ \textbf{b} }^{n} \| ^2, \endaligned \end{equation} where we use the identity \begin{equation*} \aligned \nabla \times ( \textbf{v} \times \textbf{w} ) = ( \textbf{w} \cdot \nabla ) \textbf{v} - ( \textbf{v} \cdot \nabla ) \textbf{w}, \ \forall \ \textbf{v}, \textbf{w} \in \textbf{H}. \endaligned \end{equation*} By Lemma \ref{lem_L2H1_boundedness} and \eqref{e_estimate for trilinear form}, the last term on the right hand side of \eqref{e_error_u_Lorentz_force1} can be estimated by \begin{equation}\label{e_error_u_Lorentz_force4} \aligned & \left( (\nabla \times ( \textbf{b}(t^n)- \textbf{b}(t^{n+1}) )) \times \textbf{b}^{n}, e_{\textbf{u}}^{n+1} \right) \\ & \ \ \ \ \ \ \leq \frac{ \nu }{16} \| \nabla e_{\textbf{u}}^{n+1} \|^2 + C \| \textbf{b}^{n} \|^2 \Delta t \int_{t^n}^{t^{n+1}} \|\textbf{b}_t \|^2_2 dt .
\endaligned \end{equation} For the last term on the right hand side of \eqref{e_error_u_Lorentz_force}, we have \begin{equation}\label{e_error_u_Lorentz_force5} \aligned & \left( ( \nabla \times \textbf{b} (t^{n+1}) ) \times ( \textbf{b}^{n} - \textbf{b}(t^{n+1}) ), e_{\textbf{u}}^{n+1} \right) \\ & \ \ \ \ \ \ \leq \frac{ \nu }{16} \| \nabla e_{ \textbf{u} }^{n+1} \|^2 + C \| \textbf{b}( t^{n+1} ) \|_2^2 \| e_{ \textbf{b} } ^n \|^2 + C \Delta t \int_{t^n}^{t^{n+1}} \| \textbf{b}_t \|^2 dt . \endaligned \end{equation} Finally, combining \eqref{e_error_u_inner_product} with \eqref{e_error_u_nonlinear_convective}-\eqref{e_error_u_Lorentz_force5} leads to the desired result. \begin{comment} we obtain \begin{equation*}\label{e_error_u_combine} \aligned \frac{\| e_{\textbf{u}}^{n+1}\|^2-\|e_{\textbf{u}}^n\|^2}{2\Delta t}&+\frac{\| e_{\textbf{u}}^{n+1}-e_{\textbf{u}}^n\|^2}{2\Delta t}+\nu \| \nabla e_{\textbf{u}}^{n+1}\|^2 \\ \leq & \alpha \exp( \frac{t^{n+1}}{T} ) e_q^{n+1} \left( \nabla \times \textbf{b}^{n} \times \textbf{b}^{n}, e_{\textbf{u}}^{n+1} \right) - \exp( \frac{t^{n+1}}{T} ) e_q^{n+1} \left((\textbf{u}^n \cdot \nabla)\textbf{u}^n, e_{\textbf{u}}^{n+1}\right) \\ & + \frac{\nu}{2} \|\nabla e_{\textbf{u}}^{n+1} \|^2 + C ( \| \textbf{u}(t^n)\|_2^2 + \| \textbf{u}( t^{n+1} )\|_2^2 +\|e_{\textbf{u}}^n\|^2_1) \|e_{\textbf{u}}^n\|^2 \\ & + C( \| e_{ \textbf{b} }^{n} \|_1^2 + \| \textbf{b}(t^n) \|_2^2 + \| \textbf{b}( t^{n+1} ) \|_2^2 ) \| e_{ \textbf{b} }^{n} \|^2 \\ &+ C \Delta t \int_{t^n}^{t^{n+1}} ( \|\textbf{u}_t \|_2^2 + \|\textbf{u}_{tt}\|_{-1}^2 + \|\textbf{b}_t \|^2_2 ) dt, \endaligned \end{equation*} which implies the desired result. \end{comment} \end{proof} \medskip We derive below a bound for the errors of the magnetic field. 
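The truncation error $\textbf{R}_{\textbf{u}}^{n+1}$ defined in \eqref{e_error_Ru} is first order in $\Delta t$; the following Python sketch checks this rate on a smooth scalar function (the test function is illustrative).

```python
import math

def truncation_error(u, u_t, t_next, dt):
    """R^{n+1} = u_t(t^{n+1}) - (u(t^{n+1}) - u(t^n)) / dt for a scalar u."""
    return u_t(t_next) - (u(t_next) - u(t_next - dt)) / dt

# Halving dt should roughly halve the error (first order),
# since the leading term is (dt/2) u''(t^{n+1}).
r1 = abs(truncation_error(math.sin, math.cos, 1.0, 1e-2))
r2 = abs(truncation_error(math.sin, math.cos, 1.0, 5e-3))
ratio = r1 / r2  # expected to be close to 2
```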
\begin{lemma}\label{lem: error_estimate_b} Under the assumptions of Theorem \ref{thm: error_estimate_ubq}, we have \begin{equation}\label{lem3.5} \aligned \frac{\| e_{\textbf{b}}^{n+1}\|^2-\|e_{\textbf{b}}^n\|^2}{2\Delta t}&+\frac{\| e_{\textbf{b}}^{n+1}-e_{\textbf{b}}^n\|^2}{2\Delta t} + \frac{\eta}{ 2 } \| \nabla e_{\textbf{b}}^{n+1}\|^2 \\ \leq & - \exp( \frac{t^{n+1}}{T} ) e_q^{n+1} \left( \nabla \times ( \textbf{b}^n \times \textbf{u}^n ), e_{\textbf{b}}^{n+1} \right) + C ( \| \textbf{u}(t^{n+1}) \|_2^2 + \| e_{ \textbf{b} }^n \|_1^2 ) \| e_{ \textbf{b} } ^n \|^2 \\ & + C ( \| e_{ \textbf{u} }^n \|_1^2+ \| \textbf{b}(t^{n+1}) \|^2_2 ) \| e_{ \textbf{u} }^n \|^2 +C \Delta t \int_{t^n}^{t^{n+1}} \|\textbf{u}_t \|^2_2 dt \\ & + C \Delta t \int_{t^n}^{t^{n+1}} ( \|\textbf{b}_t \|^2 + \|\textbf{b}_{tt}\|_{-1}^2 )dt , \ \ \ \forall \ 0\leq n \leq N-1, \endaligned \end{equation} where $C$ is a positive constant independent of $\Delta t$. \end{lemma} \begin{proof} Let $\textbf{R}_{\textbf{b}}^{n+1}$ be the truncation error defined by \begin{equation}\label{e_error_Rb} \aligned \textbf{R}_{\textbf{b}}^{n+1}=\frac{\partial \textbf{b}(t^{n+1})}{\partial t}- \frac{\textbf{b}(t^{n+1})-\textbf{b}(t^{n})}{\Delta t}=\frac{1}{\Delta t}\int_{t^n}^{t^{n+1}}(t-t^n)\frac{\partial^2 \textbf{b}}{\partial t^2}dt. \endaligned \end{equation} Subtracting \eqref{e_MHD_model_SAV2} at $t^{n+1}$ from \eqref{e_SAV_scheme_first_b} and using \eqref{e_curl(curl b)}, we obtain \begin{equation}\label{e_error_b} \aligned d_t e_{\textbf{b} }^{n+1} - \eta \Delta e_{\textbf{b} }^{n+1} = & \exp( \frac{t^{n+1}}{T} ) q(t^{n+1}) \nabla \times ( \textbf{b}(t^{n+1}) \times \textbf{u}(t^{n+1}) ) \\ & - \exp( \frac{t^{n+1} }{T} ) q^{n+1} \nabla \times ( \textbf{b}^n \times \textbf{u}^n ) +\textbf{R}_{\textbf{b}}^{n+1} .
\endaligned \end{equation} Taking the inner product of \eqref{e_error_b} with $e_{\textbf{b}}^{n+1}$, we obtain \begin{equation}\label{e_error_b_inner_product} \aligned \frac{\| e_{\textbf{b}}^{n+1}\|^2-\|e_{\textbf{b}}^n\|^2}{2\Delta t}&+\frac{\| e_{\textbf{b}}^{n+1}-e_{\textbf{b}}^n\|^2}{2\Delta t} + \eta \| \nabla e_{\textbf{b}}^{n+1}\|^2 \\ =& \exp( \frac{t^{n+1}}{T} ) q(t^{n+1}) \left( \nabla \times ( \textbf{b}(t^{n+1}) \times \textbf{u}(t^{n+1}) ) , e_{\textbf{b}}^{n+1} \right) \\ & - \exp( \frac{t^{n+1}}{T} ) q^{n+1} \left( \nabla \times ( \textbf{b}^n \times \textbf{u}^n ) , e_{\textbf{b}}^{n+1} \right) +(\textbf{R}_{\textbf{b}}^{n+1}, e_{\textbf{b}}^{n+1}) .\endaligned \end{equation} The first two terms on the right hand side of \eqref{e_error_b_inner_product} can be recast as \begin{equation}\label{e_error_b_nonlinear_convective} \aligned \exp( \frac{t^{n+1}}{T} )& \left( q(t^{n+1}) \nabla \times ( \textbf{b}(t^{n+1}) \times \textbf{u}(t^{n+1}) ) - q^{n+1} \nabla \times ( \textbf{b}^n \times \textbf{u}^n ), e_{\textbf{b}}^{n+1} \right) \\ = & \left( \nabla \times [ ( \textbf{b}(t^{n+1}) - \textbf{b}^n ) \times \textbf{u}(t^{n+1}) ] , e_{\textbf{b}}^{n+1} \right) + \left( \nabla \times [ \textbf{b}^n \times ( \textbf{u}(t^{n+1}) - \textbf{u}^n ) ] , e_{\textbf{b}}^{n+1} \right) \\ & - \exp( \frac{t^{n+1}}{T} ) e_q^{n+1} \left( \nabla \times ( \textbf{b}^n \times \textbf{u}^n ), e_{\textbf{b}}^{n+1} \right) . 
\endaligned \end{equation} By using \eqref{e_norm L_infty}, \eqref{e_norm curl} and the integration by parts formula \eqref{e_integration by parts1}, we have \begin{equation}\label{e_error_b_nonlinear_convective1} \aligned \big( \nabla \times &[ ( \textbf{b}(t^{n+1}) - \textbf{b}^n ) \times \textbf{u}(t^{n+1}) ] , e_{\textbf{b}}^{n+1} \big) = \left( ( \textbf{b}(t^{n+1}) - \textbf{b}^n ) \times \textbf{u}(t^{n+1}) , \nabla \times e_{\textbf{b}}^{n+1} \right) \\ \leq & \frac{\eta}{ 6 } \| \nabla e_{\textbf{b}}^{n+1} \|^2 + C \| \textbf{u}(t^{n+1}) \|_2^2 \| e_{ \textbf{b} } ^n \|^2 + C \| \textbf{u}(t^{n+1}) \|_2^2 \Delta t \int_{t^n}^{t^{n+1}} \|\textbf{b}_t \|^2 dt . \endaligned \end{equation} Thanks to \eqref{e_norm L4} and \eqref{e_norm curl}, we have \begin{equation}\label{e_error_b_nonlinear_convective2} \aligned \big( \nabla \times &[ \textbf{b}^n \times ( \textbf{u}(t^{n+1}) - \textbf{u}^n ) ] , e_{\textbf{b}}^{n+1} \big) \\ = & \left( \textbf{b}^n \times ( \textbf{u}(t^{n+1}) - \textbf{u}^n ) , \nabla \times e_{\textbf{b}}^{n+1} \right) \\ =& \left( e_{ \textbf{b} }^n \times ( \textbf{u}(t^{n+1}) - \textbf{u}(t^n) ) , \nabla \times e_{\textbf{b}}^{n+1} \right) - \left( e_{ \textbf{b} }^n \times e_{ \textbf{u} }^n , \nabla \times e_{\textbf{b}}^{n+1} \right) \\ & + \left( \textbf{b}(t^{n+1}) \times ( \textbf{u}(t^{n+1}) - \textbf{u}^n ) , \nabla \times e_{\textbf{b}}^{n+1} \right) \\ \leq & \frac{\eta}{ 6 } \| \nabla e_{\textbf{b}}^{n+1} \|^2 + C \| e_{ \textbf{b} }^n \|^2_{L^4} \| e_{ \textbf{u} }^n \|^2_{L^4} + C \| \textbf{b}(t^{n+1}) \|^2_2 \| e_{ \textbf{u} }^n \|^2 \\ & + C \| e_{ \textbf{b} }^n \|^2 \Delta t \int_{t^n}^{t^{n+1}} \|\textbf{u}_t \|^2_2 dt + C \| \textbf{b}(t^{n+1}) \|^2_2 \Delta t \int_{t^n}^{t^{n+1}} \|\textbf{u}_t \|^2 dt \\ \leq & \frac{\eta}{ 6 } \| \nabla e_{\textbf{b}}^{n+1} \|^2 + C \| e_{ \textbf{b} }^n \|_1^2 \| e_{ \textbf{b} }^n \|^2 + C ( \| e_{ \textbf{u} }^n \|_1^2+ \| \textbf{b}(t^{n+1}) \|^2_2 ) \| e_{ \textbf{u} }^n \|^2 \\ & + C \|
e_{ \textbf{b} }^n \|^2 \Delta t \int_{t^n}^{t^{n+1}} \|\textbf{u}_t \|^2_2 dt + C \| \textbf{b}(t^{n+1}) \|^2_2 \Delta t \int_{t^n}^{t^{n+1}} \|\textbf{u}_t \|^2 dt . \endaligned \end{equation} For the last term on the right hand side of \eqref{e_error_b_inner_product}, we have \begin{equation}\label{e_error_inner_Rb} \aligned &(\textbf{R}_{\textbf{b}}^{n+1}, e_{\textbf{b}}^{n+1})\leq \frac{ \eta }{6} \|\nabla e_{\textbf{b}}^{n+1} \|^2+C \Delta t\int_{t^n}^{t^{n+1}}\|\textbf{b}_{tt}\|_{-1}^2dt. \endaligned \end{equation} Combining \eqref{e_error_b_inner_product} with \eqref{e_error_b_nonlinear_convective}-\eqref{e_error_inner_Rb} leads to the desired result. \begin{comment} \begin{equation}\label{e_error_b_combine} \aligned \frac{\| e_{\textbf{b}}^{n+1}\|^2-\|e_{\textbf{b}}^n\|^2}{2\Delta t}&+\frac{\| e_{\textbf{b}}^{n+1}-e_{\textbf{b}}^n\|^2}{2\Delta t} + \eta \| \nabla e_{\textbf{b}}^{n+1}\|^2 \\ =& \frac{\eta}{ 2 } \| \nabla e_{\textbf{b}}^{n+1} \|^2 - \exp( \frac{t^{n+1}}{T} ) e_q^{n+1} \left( \nabla \times ( \textbf{b}^n \times \textbf{u}^n ), e_{\textbf{b}}^{n+1} \right) \\ & + C ( \| \textbf{u}(t^{n+1}) \|_2^2 + \| e_{ \textbf{b} }^n \|_1^2 ) \| e_{ \textbf{b} } ^n \|^2 + C ( \| e_{ \textbf{u} }^n \|_1^2+ \| \textbf{b}(t^{n+1}) \|^2_2 ) \| e_{ \textbf{u} }^n \|^2 \\ & + C \| \textbf{u}(t^{n+1}) \|_2^2 \Delta t \int_{t^n}^{t^{n+1}} \|\textbf{b}_t \|^2 dt + C \| e_{ \textbf{b} }^n \|^2 \Delta t \int_{t^n}^{t^{n+1}} \|\textbf{u}_t \|^2_2 dt \\ & +C \| \textbf{b}(t^{n+1}) \|^2_2 \Delta t \int_{t^n}^{t^{n+1}} \|\textbf{u}_t \|^2 dt +C \Delta t\int_{t^n}^{t^{n+1}}\|\textbf{b}_{tt}\|_{-1}^2dt , \endaligned \end{equation} which implies the desired result. \end{comment} \end{proof} \medskip In the next lemma, we derive a bound for the errors with respect to $q$. 
\begin{lemma}\label{lem: error_estimate_q} Under the assumptions of Theorem \ref{thm: error_estimate_ubq}, we have \begin{equation}\label{lem3.6} \aligned \frac{|e_q^{n+1}|^2-|e_q^n|^2}{2\Delta t}&+\frac{|e_q^{n+1}-e_q^n|^2}{2\Delta t}+\frac{1}{2T}|e_q^{n+1}|^2 \\ \leq & \exp( \frac{t^{n+1}}{T} ) e_q^{n+1} \left(\textbf{u}^n \cdot \nabla\textbf{u}^n, e_{\textbf{u}}^{n+1}\right) - \alpha \exp( \frac{t^{n+1}}{T} ) e_q^{n+1} \left( ( \nabla \times \textbf{b}^{n} ) \times \textbf{b}^{n}, e_{\textbf{u}}^{n+1} \right) \\ & + \alpha \exp( \frac{t^{n+1}}{T} ) e_q^{n+1} \left( \nabla \times ( \textbf{b}^n \times \textbf{u}^n ), e_{\textbf{b}}^{n+1} \right) + C \|\textbf{u}^n \|_1^2 \| e_{\textbf{u}}^n \|^2 \\ & + C ( \| e_{\textbf{b}}^n \|_1^2 + \| \textbf{u} ^n \|_1^2 + \| \textbf{b}( t^{n+1} ) \|_1^2 ) \| e_{\textbf{b}}^n \|^2 + C \Delta t\int_{t^n}^{t^{n+1} } \|q_{tt} \|^2dt \\ & + C \Delta t \int_{t^n}^{t^{n+1} } ( \|\textbf{u}_t \|_0^2 + \|\textbf{b}_t \|^2_1 )dt , \ \ \ \forall \ 0\leq n \leq N-1, \endaligned \end{equation} where $C$ is a positive constant independent of $\Delta t$. 
\end{lemma} \begin{proof} Subtracting \eqref{e_MHD_model_SAV3} from \eqref{e_SAV_scheme_first_q} leads to \begin{equation}\label{e_error_q} \aligned \frac{e_q^{n+1}-e_q^n}{\Delta t}&+\frac{1}{T}e_q^{n+1} = \textbf{R}_{q}^{n+1} \\ & + \exp( \frac{t^{n+1}}{T} ) ( (\textbf{u}^n\cdot\nabla \textbf{u}^n, \textbf{u}^{n+1} ) - (\textbf{u}(t^{n+1})\cdot \nabla \textbf{u}(t^{n+1}),\textbf{u}(t^{n+1}) )) \\ & - \alpha \exp( \frac{ t^{n+1} }{T} ) \left( ( (\nabla \times \textbf{b}^n) \times \textbf{b}^n, \textbf{u}^{n+1} ) - ( (\nabla \times \textbf{b}(t^{n+1})) \times \textbf{b}(t^{n+1}), \textbf{u}(t^{n+1}) ) \right) \\ & + \alpha \exp( \frac{t^{n+1}}{T} ) \left( ( \nabla \times ( \textbf{b}^n \times \textbf{u}^n ) , \textbf{b}^{n+1} ) - ( \nabla \times ( \textbf{b}(t^{n+1}) \times \textbf{u}(t^{n+1}) ) , \textbf{b}(t^{n+1}) ) \right), \endaligned \end{equation} where \begin{equation}\label{e_error_Rq} \aligned \textbf{R}_{q}^{n+1}=\frac{ \rm{d} q(t^{n+1})}{ \rm{d} t}- \frac{q(t^{n+1})-q(t^{n})}{\Delta t}=\frac{1}{\Delta t}\int_{t^n}^{t^{n+1}}(t^n-t)\frac{\partial^2 q}{\partial t^2}dt. 
\endaligned \end{equation} Multiplying both sides of \eqref{e_error_q} by $e_q^{n+1}$ yields \begin{equation}\label{e_error_q_inner} \aligned &\frac{|e_q^{n+1}|^2-|e_q^n|^2}{2\Delta t}+\frac{|e_q^{n+1}-e_q^n|^2}{2\Delta t}+\frac{1}{T}|e_q^{n+1}|^2 = \textbf{R}_{q}^{n+1} e_q^{n+1}\\ & \ \ \ \ \ + \exp( \frac{t^{n+1}}{T} ) e_q^{n+1} ( ( \textbf{u}^n\cdot\nabla \textbf{u}^n, \textbf{u}^{n+1} ) - (\textbf{u}(t^{n+1})\cdot \nabla \textbf{u}(t^{n+1}),\textbf{u}(t^{n+1}) )) \\ & \ \ \ \ \ - \alpha \exp( \frac{ t^{n+1} }{T} ) e_q^{n+1} \left( ( (\nabla \times \textbf{b}^n ) \times \textbf{b}^n, \textbf{u}^{n+1} ) - ( (\nabla \times \textbf{b}(t^{n+1}) )\times \textbf{b}(t^{n+1}), \textbf{u}(t^{n+1}) ) \right) \\ & \ \ \ \ \ + \alpha \exp( \frac{t^{n+1}}{T} ) e_q^{n+1} \left( ( \nabla \times ( \textbf{b}^n \times \textbf{u}^n ) , \textbf{b}^{n+1} ) - ( \nabla \times ( \textbf{b}(t^{n+1}) \times \textbf{u}(t^{n+1}) ) , \textbf{b}(t^{n+1}) ) \right) . \endaligned \end{equation} We bound the right hand side of the above as follows: \begin{equation}\label{e_error_q_nonlinear4} \aligned \textbf{R}_{q}^{n+1}e_q^{n+1} \leq \frac{1}{12 T} |e_q^{n+1}|^2+C \Delta t\int_{t^n}^{t^{n+1} } \|q_{tt} \|^2dt . \endaligned \end{equation} The second term on the right hand side of \eqref{e_error_q_inner} can be estimated as \begin{equation}\label{e_error_q_nonlinear1} \aligned & \exp( \frac{t^{n+1}}{T} ) e_q^{n+1} \left( ( \textbf{u}^n\cdot\nabla \textbf{u}^n, \textbf{u}^{n+1} ) - ( \textbf{u}(t^{n+1})\cdot \nabla \textbf{u}(t^{n+1}),\textbf{u}(t^{n+1}) ) \right) \\ = & \exp( \frac{t^{n+1}}{T} ) e_q^{n+1} \left(\textbf{u}^n \cdot \nabla\textbf{u}^n, e_{\textbf{u}}^{n+1}\right) + \exp( \frac{t^{n+1}}{T} ) e_q^{n+1} \left( \textbf{u}^n\cdot\nabla ( \textbf{u}^n - \textbf{u}(t^{n+1}) ), \textbf{u}(t^{n+1}) \right) \\ & + \exp( \frac{t^{n+1}}{T} ) e_q^{n+1} \left( ( \textbf{u}^n - \textbf{u}(t^{n+1})) \cdot \nabla \textbf{u}(t^{n+1}), \textbf{u}(t^{n+1}) \right) . 
\endaligned \end{equation} Thanks to \eqref{e_estimate for trilinear form} and Lemma \ref{lem_L2H1_boundedness}, we bound the second term on the right hand side of \eqref{e_error_q_nonlinear1} by \begin{equation}\label{e_error_q_nonlinear2} \aligned & \exp( \frac{t^{n+1}}{T} ) e_q^{n+1} \left( \textbf{u}^n\cdot\nabla ( \textbf{u}^n - \textbf{u}(t^{n+1}) ), \textbf{u}(t^{n+1}) \right) \\ & \ \ \ \ \ \leq C \|\textbf{u}^n \|_1 \| \textbf{u}(t^{n+1})- \textbf{u}(t^{n}) - e_{ \textbf{u} }^n \|_0 \| \textbf{u}(t^{n+1}) \|_{2} |e_q^{n+1}| \\ & \ \ \ \ \ \leq \frac{1}{12T} |e_q^{n+1}|^2 + C \|\textbf{u}^n \|_1^2 \| e_{\textbf{u}}^n \|^2 +C \| \textbf{u}(t^{n+1}) \|_{2} ^2 \Delta t \int_{t^n}^{t^{n+1} } \|\textbf{u}_t \|_0^2dt . \endaligned \end{equation} The third term on the right hand side of \eqref{e_error_q_nonlinear1} can be bounded by \begin{equation}\label{e_error_q_nonlinear3} \aligned & \exp( \frac{t^{n+1}}{T} ) e_q^{n+1} \left( ( \textbf{u}^n - \textbf{u}(t^{n+1}) )\cdot \nabla \textbf{u}(t^{n+1}), \textbf{u}(t^{n+1}) \right) \\ & \ \ \ \ \ \leq C \| \textbf{u}(t^{n+1})-\textbf{u}^n \| \|\textbf{u}(t^{n+1}) \|_1 \|\textbf{u}(t^{n+1}) \|_2 |e_q^{n+1}| \\ & \ \ \ \ \ \leq \frac{1}{12T} |e_q^{n+1}|^2 + C\|e_{\textbf{u}}^{n}\|^2 +C\Delta t\int_{t^n}^{t^{n+1} } \|\textbf{u}_t \|^2dt . 
\endaligned \end{equation} The second to last term on the right hand side of \eqref{e_error_q_inner} can be recast as \begin{equation}\label{e_error_q_nonlinear5} \aligned - \alpha \exp( \frac{ t^{n+1} }{T} )& e_q^{n+1} \left( ( (\nabla \times \textbf{b}^n) \times \textbf{b}^n, \textbf{u}^{n+1} ) - ( (\nabla \times \textbf{b}(t^{n+1})) \times \textbf{b}(t^{n+1}), \textbf{u}(t^{n+1}) ) \right) \\ = & \alpha \exp( \frac{t^{n+1}}{T} ) e_q^{n+1} \left( ( \nabla \times ( \textbf{b}(t^{n+1}) - \textbf{b}^{n} ) ) \times \textbf{b}^{n}, \textbf{u}(t^{n+1}) \right) \\ & + \alpha \exp( \frac{t^{n+1}}{T} ) e_q^{n+1} \left( ( \nabla \times \textbf{b}(t^{n+1} ) ) \times ( \textbf{b}(t^{n+1}) - \textbf{b}^{n} ) , \textbf{u}(t^{n+1}) \right) \\ & - \alpha \exp( \frac{t^{n+1}}{T} ) e_q^{n+1} \left( ( \nabla \times \textbf{b}^{n} ) \times \textbf{b}^{n}, e_{\textbf{u}}^{n+1} \right) . \endaligned \end{equation} Thanks to \eqref{e_estimate for trilinear form}, \eqref{e_estimate for trilinear form1} and using a similar procedure as in \eqref{e_error_u_Lorentz_force3}, the first term on the right hand side of \eqref{e_error_q_nonlinear5} can be estimated by \begin{equation}\label{e_error_q_nonlinear6} \aligned \alpha \exp( \frac{t^{n+1}}{T} )& e_q^{n+1} \left( ( \nabla \times ( \textbf{b}(t^{n+1}) - \textbf{b}^{n} ) ) \times \textbf{b}^{n}, \textbf{u}(t^{n+1}) \right) \\ = & - \alpha \exp( \frac{t^{n+1}}{T} ) e_q^{n+1} \left( \nabla \times ( \textbf{u}( t^{n+1} ) \times \textbf{b}^{n} ), \textbf{b}(t^{n+1}) - \textbf{b}^{n} \right) \\ = & \alpha \exp( \frac{t^{n+1}}{T} ) e_q^{n+1} \left( ( \textbf{u}( t^{n+1} ) \cdot \nabla ) \textbf{b}^{n} , \textbf{b}(t^{n+1}) - \textbf{b}^{n} \right) \\ & - \alpha \exp( \frac{t^{n+1}}{T} ) e_q^{n+1} \left( ( \textbf{b}^{n} \cdot \nabla ) \textbf{u}( t^{n+1} ) , \textbf{b}(t^{n+1}) - \textbf{b}^{n} \right) \\ \leq & \frac{1}{12 T} |e_q^{n+1}|^2 + C \| e_{\textbf{b}}^n \|_1^2 \| e_{\textbf{b}}^n \|^2 + C \| \textbf{u}( t^{n+1} ) \|_2^2 \|
\textbf{b}^{n} \| ^2 \Delta t \int_{t^n}^{t^{n+1}} \|\textbf{b}_t \|^2_1 dt . \endaligned \end{equation} For the second term on the right hand side of \eqref{e_error_q_nonlinear5}, we have \begin{equation}\label{e_error_q_nonlinear7} \aligned & \alpha \exp( \frac{t^{n+1}}{T} ) e_q^{n+1} \left( ( \nabla \times \textbf{b}(t^{n+1} ) ) \times ( \textbf{b}(t^{n+1}) - \textbf{b}^{n} ) , \textbf{u}(t^{n+1}) \right) \\ \leq & \frac{1}{12 T} |e_q^{n+1}|^2 + C \| \textbf{b}( t^{n+1} ) \|_1^2 \| e_{\textbf{b}}^n \|^2 + C \| \textbf{u}( t^{n+1} ) \|_2^2 \Delta t \int_{t^n}^{t^{n+1}} \|\textbf{b}_t \|^2 dt . \endaligned \end{equation} Using \eqref{e_norm L4} and \eqref{e_norm curl} and the integration by parts formula \eqref{e_integration by parts1}, the last term on the right hand side of \eqref{e_error_q_inner} can be bounded by \begin{equation}\label{e_error_q_nonlinear8} \aligned \alpha \exp( \frac{t^{n+1}}{T} ) & e_q^{n+1} \left( ( \nabla \times ( \textbf{b}^n \times \textbf{u}^n ) , \textbf{b}^{n+1} ) - ( \nabla \times ( \textbf{b}(t^{n+1}) \times \textbf{u}(t^{n+1}) ) , \textbf{b}(t^{n+1}) ) \right) \\ \leq & \alpha \exp( \frac{t^{n+1}}{T} ) e_q^{n+1} \left( \nabla \times ( ( \textbf{b}^n - \textbf{b}(t^{n+1}) ) \times \textbf{u}^n ), \textbf{b}(t^{n+1}) \right) \\ & + \alpha \exp( \frac{t^{n+1}}{T} ) e_q^{n+1} \left( \nabla \times ( \textbf{b}(t^{n+1}) \times ( \textbf{u}^n - \textbf{u}(t^{n+1}) ) ), \textbf{b}(t^{n+1}) \right) \\ & + \alpha \exp( \frac{t^{n+1}}{T} ) e_q^{n+1} \left( \nabla \times ( \textbf{b}^n \times \textbf{u}^n ), e_{\textbf{b}}^{n+1} \right) \\ \leq & \alpha \exp( \frac{t^{n+1}}{T} ) e_q^{n+1} \left( \nabla \times ( \textbf{b}^n \times \textbf{u}^n ), e_{\textbf{b}}^{n+1} \right) + \frac{1}{12 T} |e_q^{n+1}|^2 \\ & + C \| \textbf{u} ^n \|_1^2 \| e_{ \textbf{b} } ^n\|^2 + C \| e_{ \textbf{u} } ^n\|^2 + C \| \textbf{b}(t^{n+1} )\|^2_2 \Delta t \int_{t^n}^{t^{n+1}} ( \|\textbf{b}_t \|^2_1 + \|\textbf{u}_t \|^2 ) dt .
\endaligned \end{equation} Finally, combining \eqref{e_error_q_inner} with \eqref{e_error_q_nonlinear1}-\eqref{e_error_q_nonlinear8} leads to the desired result. \begin{comment} results in \begin{equation}\label{e_error_q_combine} \aligned \frac{|e_q^{n+1}|^2-|e_q^n|^2}{2\Delta t}&+\frac{|e_q^{n+1}-e_q^n|^2}{2\Delta t}+\frac{1}{2T}|e_q^{n+1}|^2 \\ \leq & \exp( \frac{t^{n+1}}{T} ) e_q^{n+1} \left((\textbf{u}^n \cdot \nabla)\textbf{u}^n, e_{\textbf{u}}^{n+1}\right) - \alpha \exp( \frac{t^{n+1}}{T} ) e_q^{n+1} \left( ( \nabla \times \textbf{b}^{n} ) \times \textbf{b}^{n}, e_{\textbf{u}}^{n+1} \right) \\ & + \alpha \exp( \frac{t^{n+1}}{T} ) e_q^{n+1} \left( \nabla \times ( \textbf{b}^n \times \textbf{u}^n ), e_{\textbf{b}}^{n+1} \right) + C \|\textbf{u}^n \|_1^2 \| e_{\textbf{u}}^n \|^2 \\ & + C ( \| e_{\textbf{b}}^n \|_1^2 + \| \textbf{u} ^n \|_1^2 + \| \textbf{b}( t^{n+1} ) \|_1^2 ) \| e_{\textbf{b}}^n \|^2 + C \Delta t\int_{t^n}^{t^{n+1} } \|q_{tt} \|^2dt \\ & + C ( \| \textbf{u}(t^{n+1}) \|_{2} ^2 + \| \textbf{b}(t^{n+1} ) \|^2_2 )\Delta t \int_{t^n}^{t^{n+1} } \|\textbf{u}_t \|_0^2dt \\ & + C ( \| \textbf{u}( t^{n+1} ) \|_2^2 + \| \textbf{b}(t^{n+1} ) \|^2_2 ) \Delta t \int_{t^n}^{t^{n+1}} \|\textbf{b}_t \|^2_1 dt , \endaligned \end{equation} which implies the desired result. \end{comment} \end{proof} Now we are in a position to prove Theorem \ref{thm: error_estimate_ubq} by using Lemmas \ref{lem: error_estimate_u}-\ref{lem: error_estimate_q}.
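Before turning to the proof, it is worth making explicit the cancellation on which it relies: when \eqref{lem3.5} is multiplied by $\alpha$ and added to \eqref{lem3.4} and \eqref{lem3.6}, the trilinear terms involving $e_q^{n+1}$ occur in pairs with opposite signs, so that

```latex
\begin{equation*}
\aligned
& \Big[ \alpha \exp( \tfrac{t^{n+1}}{T} ) e_q^{n+1} \left( ( \nabla \times \textbf{b}^{n} ) \times \textbf{b}^{n}, e_{\textbf{u}}^{n+1} \right)
- \exp( \tfrac{t^{n+1}}{T} ) e_q^{n+1} \left( \textbf{u}^n \cdot \nabla \textbf{u}^n, e_{\textbf{u}}^{n+1} \right) \Big] \\
& - \alpha \exp( \tfrac{t^{n+1}}{T} ) e_q^{n+1} \left( \nabla \times ( \textbf{b}^n \times \textbf{u}^n ), e_{\textbf{b}}^{n+1} \right) \\
& + \Big[ \exp( \tfrac{t^{n+1}}{T} ) e_q^{n+1} \left( \textbf{u}^n \cdot \nabla \textbf{u}^n, e_{\textbf{u}}^{n+1} \right)
- \alpha \exp( \tfrac{t^{n+1}}{T} ) e_q^{n+1} \left( ( \nabla \times \textbf{b}^{n} ) \times \textbf{b}^{n}, e_{\textbf{u}}^{n+1} \right) \\
& \qquad + \alpha \exp( \tfrac{t^{n+1}}{T} ) e_q^{n+1} \left( \nabla \times ( \textbf{b}^n \times \textbf{u}^n ), e_{\textbf{b}}^{n+1} \right) \Big] = 0 ,
\end{aligned}
\end{equation*}
```

where the first bracket collects the $e_q^{n+1}$-terms from \eqref{lem3.4}, the middle line is the contribution of $\alpha$ times \eqref{lem3.5}, and the second bracket those from \eqref{lem3.6}. Only the quadratic error terms and the truncation terms therefore survive the summation.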
{\it Proof of Theorem \ref{thm: error_estimate_ubq}.} Multiplying both sides of \eqref{lem3.5} by $\alpha$, summing up the resulting inequality with \eqref{lem3.4} and \eqref{lem3.6}, and noting that the trilinear terms involving $e_q^{n+1}$ on the right hand sides cancel exactly, we obtain \begin{equation}\label{e_error_ubq_final1} \aligned &\frac{\| e_{\textbf{u}}^{n+1}\|^2-\|e_{\textbf{u}}^n\|^2}{2\Delta t}+\frac{\| e_{\textbf{u}}^{n+1}-e_{\textbf{u}}^n\|^2}{2\Delta t}+\frac{ \nu }{2} \| \nabla e_{\textbf{u}}^{n+1}\|^2 + \alpha \frac{\| e_{\textbf{b}}^{n+1}\|^2-\|e_{\textbf{b}}^n\|^2}{2\Delta t} \\ & + \alpha \frac{\| e_{\textbf{b}}^{n+1}-e_{\textbf{b}}^n\|^2}{2\Delta t} + \frac{\alpha \eta }{ 2 } \| \nabla e_{\textbf{b}}^{n+1}\|^2 + \frac{|e_q^{n+1}|^2-|e_q^n|^2}{2\Delta t}+\frac{|e_q^{n+1}-e_q^n|^2}{2\Delta t}+\frac{1}{2T}|e_q^{n+1}|^2 \\ \leq & C ( \| \textbf{b}(t^{n+1}) \|^2_2 +\|e_{\textbf{u}}^n\|^2_1) \|e_{\textbf{u}}^n\|^2 + C( \| e_{ \textbf{b} }^{n} \|_1^2 + \| \textbf{u} ^n \|_1^2 ) \| e_{ \textbf{b} }^{n} \|^2 \\ &+ C \Delta t \int_{t^n}^{t^{n+1}} ( \|\textbf{u}_t \|_2^2 + \|\textbf{u}_{tt}\|_{-1}^2 + \|q_{tt} \|^2 ) dt \\ &+ C \Delta t \int_{t^n}^{t^{n+1}} ( \|\textbf{b}_t \|^2_2 + \|\textbf{b}_{tt}\|_{-1}^2 ) dt .
\endaligned \end{equation} Multiplying \eqref{e_error_ubq_final1} by $2\Delta t$ and summing over $n$, $n=0,1,\ldots,m$, and applying the discrete Gronwall lemma \ref{lem: gronwall2}, we have \begin{equation}\label{e_error_ubq_final2} \aligned & \| e_{\textbf{u}}^{m+1}\|^2 + \| e_{\textbf{b}}^{m+1} \|^2 + |e_q^{m+1}|^2 + \nu \Delta t \sum\limits_{n=0}^{m} \| \nabla e_{\textbf{u}}^{n+1}\|^2 \\ & + \eta \Delta t \sum\limits_{n=0}^{m} \| \nabla e_{\textbf{b}}^{n+1}\|^2 + \Delta t \sum\limits_{n=0}^{m} | e_{q}^{n+1} |^2 + \sum\limits_{n=0}^{m} \| e_{\textbf{u}}^{n+1}-e_{\textbf{u}}^n\|^2 \\ & + \sum\limits_{n=0}^{m} \| e_{\textbf{b}}^{n+1}-e_{\textbf{b}}^n\|^2 + \sum\limits_{n=0}^{m} | e_q^{n+1}-e_q^n |^2 \\ \leq & C ( \|\textbf{u}\|_{H^1(0,T;H^2( {\Omega}) )}^2 + \|\textbf{u}\|_{H^2(0,T;H^{-1} ( {\Omega}) )}^2 + \| \textbf{u} \|_{L^{\infty}(0,T; \textbf{H}^2(\Omega) )} ^2 ) (\Delta t)^2 \\ & + C ( \|\textbf{b}\|_{H^1(0,T;H^2( {\Omega}) )}^2 + \|\textbf{b}\|_{H^2(0,T;H^{-1}( {\Omega}) )}^2 ) (\Delta t)^2 \\ & + C ( \| \textbf{b} \|_{L^{\infty}(0,T; \textbf{H}^2(\Omega) )} ^2 + \|q\|_{H^2(0,T)}^2 ) (\Delta t)^2, \endaligned \end{equation} which concludes the proof of Theorem \ref{thm: error_estimate_ubq}. \subsection{Error estimates for the pressure} The main result in this section is the following error estimate for the pressure.
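As a brief orientation (this is the standard mechanism behind pressure estimates of this type): the pressure error is recovered from the velocity error equation through the inf-sup (LBB) condition, which reads, with inf-sup constant $\beta>0$,

```latex
\begin{equation*}
\beta \, \| e_{p}^{n+1} \|_{L^2(\Omega)/R}
\leq \sup_{ \textbf{0} \neq \textbf{v} \in \textbf{H}^1_0(\Omega) }
\frac{ ( \nabla \cdot \textbf{v} , e_{p}^{n+1} ) }{ \| \nabla \textbf{v} \| } ,
\end{equation*}
```

so that, once $( \nabla \cdot \textbf{v} , e_{p}^{n+1} )$ is expressed through the velocity error equation, the pressure bound follows from estimates on $\| d_t e_{\textbf{u}}^{n+1} \|$ and on the nonlinear terms; this motivates the structure of the proof below.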
\begin{theorem}\label{thm: error_estimate_p} Assume that $\textbf{u}\in H^2(0,T;\textbf{L}^{2}(\Omega))\bigcap H^1(0,T;\textbf{H}^2(\Omega))\bigcap L^{\infty}(0,T; \textbf{H}^2(\Omega) )$, $\textbf{b}\in H^2(0,T;\textbf{L}^{2}(\Omega))\bigcap H^1(0,T;\textbf{H}^2(\Omega))\bigcap L^{\infty}(0,T; \textbf{H}^2(\Omega) )$ and $p\in L^2(0,T; L^2_0(\Omega) )$. Then for the first-order scheme \eqref{e_SAV_scheme_first_u}-\eqref{e_SAV_scheme_first_q}, we have \begin{equation}\label{e_error_p_L2} \aligned &\Delta t\sum\limits_{n=0}^m\|e_{p}^{n+1}\|^2_{L^2(\Omega)/R} \leq C(\Delta t)^2, \ \ \ \forall \ 0\leq m\leq N-1, \endaligned \end{equation} where $C$ is a positive constant independent of $\Delta t$. \end{theorem} \begin{proof} To prove the above result, we first need to establish an estimate on $ \| d_te_{\textbf{u}}^{n+1} \|$. Thanks to Theorem \ref{thm: error_estimate_ubq}, we have \begin{equation}\label{e_error_ubH1_boundedness1} \aligned & \| e_{\textbf{u}}^{m+1}\|^2 + \| e_{\textbf{b}}^{m+1}\|^2 + \Delta t\sum\limits_{n=0}^m ( \|\nabla e_{\textbf{u}}^{n+1}\|^2 + \|\nabla e_{\textbf{b}}^{n+1}\|^2 ) \leq C(\Delta t)^2, \endaligned \end{equation} which implies that \begin{equation}\label{e_error_ubH1_boundedness2} \aligned &\| \textbf{u}^{n+1} \|_{1} \leq C \left( (\Delta t)^{1/2} + \| \textbf{u}(t^{n+1}) \|_1 \right), \ \ \| \textbf{b}^{n+1} \|_{1} \leq C \left( (\Delta t)^{1/2} + \| \textbf{b}(t^{n+1}) \|_1 \right).
\endaligned \end{equation} Taking the inner product of \eqref{e_error_u} with $ Ae_{\textbf{u}}^{n+1}+d_t e_{\textbf{u}}^{n+1} $, we obtain \begin{equation}\label{e_error_Au} \aligned & (1+\nu ) \frac{ \| \nabla e_{\textbf{u}}^{n+1} \|^2 - \| \nabla e_{\textbf{u}}^{n} \|^2 }{ 2\Delta t } + \| d_t e_{\textbf{u}}^{n+1} \|^2 + \nu \| A e_{\textbf{u}}^{n+1} \|^2 \\ = & \exp( \frac{t^{n+1}}{T} ) \left( q(t^{n+1}) \textbf{u}(t^{n+1})\cdot \nabla\textbf{u}(t^{n+1}) - q^{n+1} \textbf{u}^{n}\cdot \nabla \textbf{u}^{n} , Ae_{\textbf{u}}^{n+1}+d_t e_{\textbf{u}}^{n+1} \right) \\ &+ \alpha \exp( \frac{t^{n+1}}{T} ) \left( q^{n+1} ( \nabla \times \textbf{b}^{n} ) \times \textbf{b}^{n} - q(t^{n+1}) ( \nabla \times \textbf{b}(t^{n+1}) ) \times \textbf{b}(t^{n+1} ) , Ae_{\textbf{u}}^{n+1}+d_t e_{\textbf{u}}^{n+1} \right) \\ & + ( \textbf{R}_{\textbf{u}}^{n+1} , Ae_{\textbf{u}}^{n+1}+d_t e_{\textbf{u}}^{n+1} ) . \endaligned \end{equation} For the first term on the right hand side of \eqref{e_error_Au}, we have \begin{equation}\label{e_error_Au_nonlinear1} \aligned \exp( \frac{t^{n+1}}{T} ) & \left( q(t^{n+1}) \textbf{u}(t^{n+1})\cdot \nabla\textbf{u}(t^{n+1}) - q^{n+1} \textbf{u}^{n}\cdot \nabla \textbf{u}^{n} , Ae_{\textbf{u}}^{n+1}+d_t e_{\textbf{u}}^{n+1} \right) \\ = & - \exp( \frac{t^{n+1}}{T} ) e_q^{n+1} \left( ( \textbf{u}^{n}\cdot \nabla ) \textbf{u}^{n} , Ae_{\textbf{u}}^{n+1}+d_t e_{\textbf{u}}^{n+1} \right) \\ & + \left( ( \textbf{u}(t^{n+1}) - \textbf{u}^{n} ) \cdot \nabla \textbf{u}(t^{n+1}) , Ae_{\textbf{u}}^{n+1}+d_t e_{\textbf{u}}^{n+1} \right) \\ & + \left( \textbf{u}^{n} \cdot \nabla ( \textbf{u}(t^{n+1}) - \textbf{u}^{n} ) , Ae_{\textbf{u}}^{n+1}+d_t e_{\textbf{u}}^{n+1} \right) . 
\endaligned \end{equation} Thanks to \eqref{e_estimate for trilinear form1} and \eqref{e_error_ubH1_boundedness2}, the first term on the right hand side of \eqref{e_error_Au_nonlinear1} can be bounded by \begin{equation}\label{e_error_Au_nonlinear2} \aligned - \exp( \frac{t^{n+1}}{T} )& e_q^{n+1} \left( \textbf{u}^{n}\cdot \nabla \textbf{u}^{n} , Ae_{\textbf{u}}^{n+1}+d_t e_{\textbf{u}}^{n+1} \right) \\ = & - \exp( \frac{t^{n+1}}{T} ) e_q^{n+1} \left( \textbf{u}^{n}\cdot \nabla e_{ \textbf{u} }^{n} , Ae_{\textbf{u}}^{n+1}+d_t e_{\textbf{u}}^{n+1} \right) \\ & - \exp( \frac{t^{n+1}}{T} ) e_q^{n+1} \left( \textbf{u}^{n}\cdot \nabla \textbf{u}(t^n) , Ae_{\textbf{u}}^{n+1}+d_t e_{\textbf{u}}^{n+1} \right) \\ \leq & C | e_q^{n+1}| \| \textbf{u}^{n} \|^{1/2} \| \nabla \textbf{u}^{n} \|^{1/2} \| e_{\textbf{u}}^{n} \|^{1/2} \| A e_{ \textbf{u} }^{n} \|^{1/2} \| Ae_{\textbf{u}}^{n+1}+d_t e_{\textbf{u}}^{n+1} \| \\ & + C | e_q^{n+1}| \| \textbf{u}^{n} \|_1 \| \textbf{u}(t^n) \|_2 \| Ae_{\textbf{u}}^{n+1}+d_t e_{\textbf{u}}^{n+1} \| \\ \leq & \frac{1}{12} \| d_t e_{\textbf{u}}^{n+1} \|^2 + \frac{ \nu } {24} \| A e_{\textbf{u}}^{n+1} \|^2 + \frac{ \nu } {8} \| A e_{\textbf{u}}^{n} \|^2 \\ & + C ( \Delta t + \| \textbf{u}(t^{n}) \|_1^2 ) \| e_{\textbf{u} }^n \|^2 + C ( \Delta t + \| \textbf{u}(t^{n}) \|_1^2 ) |e_q^{n+1}|^2 .
\endaligned \end{equation} The second term on the right hand side of \eqref{e_error_Au_nonlinear1} can be estimated by \begin{equation}\label{e_error_Au_nonlinear3} \aligned \big( ( \textbf{u}(t^{n+1}) - \textbf{u}^{n} ) &\cdot \nabla \textbf{u}(t^{n+1}) , Ae_{\textbf{u}}^{n+1}+d_t e_{\textbf{u}}^{n+1} \big) \\ \leq & C \| \textbf{u}(t^{n+1}) - \textbf{u}^{n} \|_1 \| \textbf{u}(t^{n+1}) \|_2 \| Ae_{\textbf{u}}^{n+1}+d_t e_{\textbf{u}}^{n+1} \| \\ \leq & \frac{1}{12} \| d_t e_{\textbf{u}}^{n+1} \|^2 + \frac{ \nu } {24} \| A e_{\textbf{u}}^{n+1} \|^2 + C \| e_{ \textbf{u} }^n \|_1^2 \\ & + C \| \textbf{u}(t^{n+1}) \|_2 ^2 \Delta t \int_{t^n}^{t^{n+1}} \|\textbf{u}_t \|_1^2 dt . \endaligned \end{equation} Using \eqref{e_estimate for trilinear form1} and \eqref{e_error_ubH1_boundedness2}, the last term on the right hand side of \eqref{e_error_Au_nonlinear1} can be controlled by \begin{equation}\label{e_error_Au_nonlinear4} \aligned \big( \textbf{u}^{n} \cdot \nabla ( \textbf{u}(t^{n+1}) - \textbf{u}^{n} ) & , Ae_{\textbf{u}}^{n+1}+d_t e_{\textbf{u}}^{n+1} \big) \\ = & \left( \textbf{u}^{n} \cdot \nabla ( \textbf{u}(t^{n+1}) - \textbf{u}( t^{n} ) ) , Ae_{\textbf{u}}^{n+1}+d_t e_{\textbf{u}}^{n+1} \right) \\ & - \left( \textbf{u}^{n} \cdot \nabla e_ { \textbf{u} }^n , Ae_{\textbf{u}}^{n+1}+d_t e_{\textbf{u}}^{n+1} \right) \\ \leq & C \| \textbf{u}^{n} \|_1 \| \textbf{u}(t^{n+1}) - \textbf{u}( t^{n} ) \|_2 \| Ae_{\textbf{u}}^{n+1}+d_t e_{\textbf{u}}^{n+1} \| \\ & + C \| \textbf{u}^{n} \|_1^{1/2} \| \textbf{u}^{n} \|_0^{1/2} \| A e_{ \textbf{u} }^n \|^{1/2} \| e_{ \textbf{u} }^n \|^{1/2} \| Ae_{\textbf{u}}^{n+1}+d_t e_{\textbf{u}}^{n+1} \| \\ \leq & \frac{1}{12} \| d_t e_{\textbf{u}}^{n+1} \|^2 + \frac{ \nu } {24} \| A e_{\textbf{u}}^{n+1} \|^2 + C ( \Delta t + \| \textbf{u}(t^{n+1}) \|_1^2 ) \| e_{\textbf{u} }^n \|^2 \\ & + \frac{ \nu } {8} \| A e_{\textbf{u}}^{n} \|^2 + C ( \Delta t + \| \textbf{u}(t^{n}) \|_1^2 ) \Delta t \int_{t^n}^{t^{n+1}} \|\textbf{u}_t \|_2^2 dt . 
\endaligned \end{equation} For the second term on the right hand side of \eqref{e_error_Au}, we have \begin{equation}\label{e_error_Au_nonlinear5} \aligned \alpha \exp( \frac{t^{n+1}}{T} ) & \left( q^{n+1} ( \nabla \times \textbf{b}^{n} ) \times \textbf{b}^{n} - q(t^{n+1}) ( \nabla \times \textbf{b}(t^{n+1}) ) \times \textbf{b}(t^{n+1} ) , Ae_{\textbf{u}}^{n+1}+d_t e_{\textbf{u}}^{n+1} \right) \\ = & \alpha \exp( \frac{t^{n+1}}{T} ) e_q^{n+1} \left( ( \nabla \times \textbf{b}^{n} ) \times \textbf{b}^{n} , Ae_{\textbf{u}}^{n+1}+d_t e_{\textbf{u}}^{n+1} \right) \\ & + \alpha \left( ( \nabla \times ( \textbf{b}^{n} - \textbf{b}(t^{n+1}) ) ) \times \textbf{b}^{n} , Ae_{\textbf{u}}^{n+1}+d_t e_{\textbf{u}}^{n+1} \right) \\ & + \alpha \left( ( \nabla \times \textbf{b}(t^{n+1}) ) \times ( \textbf{b}^{n} - \textbf{b}(t^{n+1}) ) , Ae_{\textbf{u}}^{n+1}+d_t e_{\textbf{u}}^{n+1} \right) . \endaligned \end{equation} Thanks to \eqref{e_norm L_infty} and \eqref{e_error_ubH1_boundedness2}, the first term on the right hand side of \eqref{e_error_Au_nonlinear5} can be bounded by \begin{equation}\label{e_error_Au_nonlinear6} \aligned \alpha \exp( \frac{t^{n+1}}{T} )& e_q^{n+1} \left( ( \nabla \times \textbf{b}^{n} ) \times \textbf{b}^{n} , Ae_{\textbf{u}}^{n+1}+d_t e_{\textbf{u}}^{n+1} \right) \\ = & \alpha \exp( \frac{t^{n+1}}{T} ) e_q^{n+1} \left( ( \nabla \times \textbf{b}^{n} ) \times e_{ \textbf{b} } ^{n} , Ae_{\textbf{u}}^{n+1}+d_t e_{\textbf{u}}^{n+1} \right) \\ & + \alpha \exp( \frac{t^{n+1}}{T} ) e_q^{n+1} \left( ( \nabla \times \textbf{b}^{n} ) \times \textbf{b} ( t^{n} ) , Ae_{\textbf{u}}^{n+1}+d_t e_{\textbf{u}}^{n+1} \right) \\ \leq & C \| \nabla \times \textbf{b}^{n} \| \| e_{ \textbf{b} }^n \|_1^{1/2} \| e_{ \textbf{b} }^n \|_2^{1/2} \| Ae_{\textbf{u}}^{n+1}+d_t e_{\textbf{u}}^{n+1} \| \\ & + C | e_q^{n+1} | \| \nabla \times \textbf{b}^{n} \| \| \textbf{b} ( t^{n} ) \|_2 \| Ae_{\textbf{u}}^{n+1}+d_t e_{\textbf{u}}^{n+1} \| \\ \leq & \frac{1}{12} \| d_t
e_{\textbf{u}}^{n+1} \|^2 + \frac{ \nu } {24} \| A e_{\textbf{u}}^{n+1} \|^2 + \frac{ \eta } {8} \| \Delta e_{\textbf{b}}^{n} \|^2 \\ & + C ( \Delta t + \| \textbf{b}(t^{n}) \|_1^2 ) \| e_{\textbf{b}}^{n} \|_1^2 + C ( \Delta t + \| \textbf{b}(t^{n}) \|_1^2 ) | e_q^{n+1} |^2 . \endaligned \end{equation} The last two terms on the right hand side of \eqref{e_error_Au_nonlinear5} can be estimated by \begin{equation}\label{e_error_Au_nonlinear7} \aligned \alpha \big( ( \nabla \times & ( \textbf{b}^{n} - \textbf{b}(t^{n+1}) ) ) \times \textbf{b}^{n} , Ae_{\textbf{u}}^{n+1}+d_t e_{\textbf{u}}^{n+1} \big) \\ & + \alpha \left( ( \nabla \times \textbf{b}(t^{n+1}) ) \times ( \textbf{b}^{n} - \textbf{b}(t^{n+1}) ) , Ae_{\textbf{u}}^{n+1}+d_t e_{\textbf{u}}^{n+1} \right) \\ \leq & C \| e_{ \textbf{b} }^n + \textbf{b}(t^{n} ) - \textbf{b}(t^{n+1}) \| _1 \| e_{ \textbf{b} }^n \|_1^{1/2} \| e_{ \textbf{b} }^n \|_2^{1/2} \| Ae_{\textbf{u}}^{n+1}+d_t e_{\textbf{u}}^{n+1} \| \\ & + C \| e_{ \textbf{b} }^n + \textbf{b}(t^{n} ) - \textbf{b}(t^{n+1}) \| _1 \| \textbf{b}(t^{n} ) \|_2 \| Ae_{\textbf{u}}^{n+1}+d_t e_{\textbf{u}}^{n+1} \| \\ & + C \| \nabla \times \textbf{b}(t^{n+1}) \|_{L^4} \| \textbf{b}^{n} - \textbf{b}(t^{n+1}) \| _{L^4} \| Ae_{\textbf{u}}^{n+1}+d_t e_{\textbf{u}}^{n+1} \| \\ \leq & \frac{1}{12} \| d_t e_{\textbf{u}}^{n+1} \|^2 + \frac{ \nu } {24} \| A e_{\textbf{u}}^{n+1} \|^2 + \frac{ \eta } {8} \| \Delta e_{\textbf{b}}^{n} \|^2 \\ & + C \| e_{\textbf{b}}^{n} \|_1^2 + C \| \textbf{b}(t^{n+1}) \|_2 ^2 \Delta t \int_{t^n}^{t^{n+1}} \|\textbf{b}_t \|_1^2 dt . \endaligned \end{equation} For the last term on the right hand side of \eqref{e_error_Au}, we have \begin{equation}\label{e_error_Au_Ru} \aligned & ( \textbf{R}_{\textbf{u}}^{n+1} , Ae_{\textbf{u}}^{n+1}+d_t e_{\textbf{u}}^{n+1} ) \leq \frac{1}{12} \| d_t e_{\textbf{u}}^{n+1} \|^2 + \frac{ \nu } {24} \| A e_{\textbf{u}}^{n+1} \|^2 + C \Delta t\int_{t^n}^{t^{n+1}}\|\textbf{u}_{tt}\|^2dt . 
\endaligned \end{equation} Combining \eqref{e_error_Au} with \eqref{e_error_Au_nonlinear1}-\eqref{e_error_Au_Ru}, we have \begin{equation}\label{e_error_Au_combine} \aligned & (1+\nu ) \frac{ \| \nabla e_{\textbf{u}}^{n+1} \|^2 - \| \nabla e_{\textbf{u}}^{n} \|^2 }{ 2\Delta t } + \frac{1}{2} \| d_t e_{\textbf{u}}^{n+1} \|^2 + \frac{3 \nu} {4} \| A e_{\textbf{u}}^{n+1} \|^2 \\ & \ \ \ \ \ \ \leq \frac{ \eta } {4} \| \Delta e_{\textbf{b}}^{n} \|^2 + \frac{ \nu} {4} \| A e_{\textbf{u}}^{n} \|^2 +C ( \Delta t + \| \textbf{u}(t^{n}) \|_1^2 ) \| e_{\textbf{u} }^n \|^2_1 + C ( \Delta t + \| \textbf{b}(t^{n}) \|_1^2 ) \| e_{\textbf{b} }^n \|^2_1 \\ & \ \ \ \ \ \ +C ( \Delta t + \| \textbf{u}(t^{n}) \|_1^2 + \| \textbf{b}(t^{n}) \|_1^2 ) |e_q^{n+1}|^2 \\ & \ \ \ \ \ \ + C \Delta t \int_{t^n}^{t^{n+1}} ( \|\textbf{u}_t \|_2^2 + \|\textbf{u}_{tt}\|^2 + \|\textbf{b}_t \|_1^2 ) dt . \endaligned \end{equation} Next we shall balance the first term on the right hand side of \eqref{e_error_Au_combine} by using the error equation \eqref{e_error_b} for the magnetic field. We proceed as follows. Taking the inner product of \eqref{e_error_b} with $ -\Delta e_{\textbf{b}}^{n+1}+d_t e_{\textbf{b}}^{n+1} $, we obtain \begin{equation}\label{e_error_Ab} \aligned (1+\eta ) & \frac{ \| \nabla e_{\textbf{b}}^{n+1} \|^2 - \| \nabla e_{\textbf{b}}^{n} \|^2 }{ 2\Delta t } + \| d_t e_{\textbf{b}}^{n+1} \|^2 + \eta \| \Delta e_{\textbf{b}}^{n+1} \|^2 \\ =& \exp( \frac{t^{n+1}}{T} ) q(t^{n+1}) \left( \nabla \times ( \textbf{b}(t^{n+1}) \times \textbf{u}(t^{n+1}) ) , -\Delta e_{\textbf{b}}^{n+1}+d_t e_{\textbf{b}}^{n+1} \right) \\ & - \exp( \frac{t^{n+1}}{T} ) q^{n+1} \left( \nabla \times ( \textbf{b}^n \times \textbf{u}^n ) , -\Delta e_{\textbf{b}}^{n+1}+d_t e_{\textbf{b}}^{n+1} \right) \\ & + (\textbf{R}_{\textbf{b}}^{n+1}, -\Delta e_{\textbf{b}}^{n+1}+d_t e_{\textbf{b}}^{n+1} ) .
\endaligned \end{equation} The first two terms on the right hand side of \eqref{e_error_Ab} can be recast as \begin{equation}\label{e_error_Ab_nonlinear1} \aligned \exp( \frac{t^{n+1}}{T} ) &q(t^{n+1}) \left( \nabla \times ( \textbf{b}(t^{n+1}) \times \textbf{u}(t^{n+1}) ) , -\Delta e_{\textbf{b}}^{n+1}+d_t e_{\textbf{b}}^{n+1} \right) \\ & - \exp( \frac{t^{n+1}}{T} ) q^{n+1} \left( \nabla \times ( \textbf{b}^n \times \textbf{u}^n ) , -\Delta e_{\textbf{b}}^{n+1}+d_t e_{\textbf{b}}^{n+1} \right) \\ = & \left( \nabla \times [ ( \textbf{b}(t^{n+1}) - \textbf{b}^n ) \times \textbf{u}(t^{n+1}) ] , -\Delta e_{\textbf{b}}^{n+1}+d_t e_{\textbf{b}}^{n+1} \right) \\ & + \left( \nabla \times [ \textbf{b}^n \times ( \textbf{u}(t^{n+1}) - \textbf{u}^n ) ] , -\Delta e_{\textbf{b}}^{n+1}+d_t e_{\textbf{b}}^{n+1} \right) \\ & - \exp( \frac{t^{n+1}}{T} ) e_q^{n+1} \left( \nabla \times ( \textbf{b}^n \times \textbf{u}^n ), -\Delta e_{\textbf{b}}^{n+1}+d_t e_{\textbf{b}}^{n+1} \right) . \endaligned \end{equation} Noting \eqref{e_cross_product2_plus} and \eqref{e_estimate for trilinear form}, the first term on the right hand side of \eqref{e_error_Ab_nonlinear1} can be bounded by \begin{equation}\label{e_error_Ab_nonlinear2} \aligned \big( \nabla \times & [ ( \textbf{b}(t^{n+1}) - \textbf{b}^n ) \times \textbf{u}(t^{n+1}) ] , -\Delta e_{\textbf{b}}^{n+1}+d_t e_{\textbf{b}}^{n+1} \big) \\ \leq & C \| \textbf{b}(t^{n+1}) - \textbf{b}^n \|_1 \| \textbf{u}(t^{n+1}) \|_2 \| d_t e_{\textbf{b}}^{n+1} - \Delta e_{\textbf{b}}^{n+1} \| \\ \leq & \frac{1}{8} \| d_t e_{\textbf{b}}^{n+1} \|^2 + \frac{ \eta }{ 16} \| \Delta e_{\textbf{b}}^{n+1} \| ^2 + C \| e_{ \textbf{b} } ^n \|^2_1 \\ & + C \| \textbf{u}(t^{n+1}) \|_2^2 \Delta t \int_{t^n}^{t^{n+1}} \|\textbf{b}_t \|_1^2 dt . 
\endaligned \end{equation} For the second term on the right hand side of \eqref{e_error_Ab_nonlinear1}, we have \begin{equation}\label{e_error_Ab_nonlinear3} \aligned \big( \nabla \times &[ \textbf{b}^n \times ( \textbf{u}(t^{n+1}) - \textbf{u}^n ) ] , -\Delta e_{\textbf{b}}^{n+1}+d_t e_{\textbf{b}}^{n+1} \big) \\ = & \left( \nabla \times [ e_ { \textbf{b} }^n \times ( \textbf{u}(t^{n+1}) - \textbf{u}^n ) ] , -\Delta e_{\textbf{b}}^{n+1}+d_t e_{\textbf{b}}^{n+1} \right) \\ & + \left( \nabla \times [ \textbf{b} (t^n) \times ( \textbf{u}(t^{n+1}) - \textbf{u}^n ) ] , -\Delta e_{\textbf{b}}^{n+1}+d_t e_{\textbf{b}}^{n+1} \right) \\ \leq & C \| e_ { \textbf{b} }^n \|_{1}^{1/2} \| e_ { \textbf{b} }^n \|_{2}^{1/2} \| \textbf{u}(t^{n+1}) - \textbf{u}^n \|_1 \| d_t e_{\textbf{b}}^{n+1} - \Delta e_{\textbf{b}}^{n+1} \| \\ & + C \| \textbf{b} (t^n) \|_{2} \| \textbf{u}(t^{n+1}) - \textbf{u}^n \|_1 \| d_t e_{\textbf{b}}^{n+1} - \Delta e_{\textbf{b}}^{n+1} \| \\ \leq & \frac{1}{8} \| d_t e_{\textbf{b}}^{n+1} \|^2 + \frac{ \eta }{ 16} \| \Delta e_{\textbf{b}}^{n+1} \| ^2 + \frac{ \eta }{ 8} \| \Delta e_{\textbf{b}}^{n} \| ^2 + C \| e_{ \textbf{b} } ^n \|^2_1 \\ & + C \| \textbf{b}(t^{n}) \|_2^2 \Delta t \int_{t^n}^{t^{n+1}} \|\textbf{u}_t \|_1^2 dt . 
\endaligned \end{equation} Thanks to \eqref{e_cross_product2_plus} and \eqref{e_estimate for trilinear form}, the last term on the right hand side of \eqref{e_error_Ab_nonlinear1} can be estimated as \begin{equation}\label{e_error_Ab_nonlinear4} \aligned - \exp( \frac{t^{n+1}}{T} )& e_q^{n+1} \left( \nabla \times ( \textbf{b}^n \times \textbf{u}^n ), -\Delta e_{\textbf{b}}^{n+1}+d_t e_{\textbf{b}}^{n+1} \right) \\ = & - \exp( \frac{t^{n+1}}{T} ) e_q^{n+1} \left( \nabla \times ( e_{ \textbf{b} }^n \times \textbf{u}^n ), -\Delta e_{\textbf{b}}^{n+1}+d_t e_{\textbf{b}}^{n+1} \right) \\ & - \exp( \frac{t^{n+1}}{T} ) e_q^{n+1} \left( \nabla \times ( \textbf{b} (t^n) \times \textbf{u}^n ), -\Delta e_{\textbf{b}}^{n+1}+d_t e_{\textbf{b}}^{n+1} \right) \\ \leq & C |e_q^{n+1}| \| e_ { \textbf{b} }^n \|_{1}^{1/2} \| e_ { \textbf{b} }^n \|_{2}^{1/2} \| \textbf{u} ^n \|_{1} \| d_t e_{\textbf{b}}^{n+1} - \Delta e_{\textbf{b}}^{n+1} \| \\ & + C |e_q^{n+1}| \| \textbf{b} (t^n) \|_{2} \| \textbf{u} ^n \|_{1} \| d_t e_{\textbf{b}}^{n+1} - \Delta e_{\textbf{b}}^{n+1} \| \\ \leq & \frac{1}{8} \| d_t e_{\textbf{b}}^{n+1} \|^2 + \frac{ \eta }{ 16} \| \Delta e_{\textbf{b}}^{n+1} \| ^2 + \frac{ \eta }{ 8} \| \Delta e_{\textbf{b}}^{n} \| ^2 \\ & + C ( \Delta t + \| \textbf{u}(t^{n}) \|_1^2 ) \| e_{ \textbf{b} } ^n \|^2_1 + C ( \Delta t + \| \textbf{u}(t^{n}) \|_1^2 ) |e_q^{n+1}| ^2 . \endaligned \end{equation} For the last term on the right hand side of \eqref{e_error_Ab}, we have \begin{equation}\label{e_error_Ab_Rb} \aligned & (\textbf{R}_{\textbf{b}}^{n+1}, -\Delta e_{\textbf{b}}^{n+1}+d_t e_{\textbf{b}}^{n+1} ) \leq \frac{1}{8} \| d_t e_{\textbf{b}}^{n+1} \|^2 + \frac{ \eta } {16} \| \Delta e_{\textbf{b}}^{n+1} \|^2 + C \Delta t\int_{t^n}^{t^{n+1}}\|\textbf{b}_{tt}\|^2dt . 
\endaligned \end{equation} Combining \eqref{e_error_Ab} with \eqref{e_error_Ab_nonlinear1}-\eqref{e_error_Ab_Rb}, we obtain \begin{equation}\label{e_error_Ab_combine} \aligned (1+\eta ) & \frac{ \| \nabla e_{\textbf{b}}^{n+1} \|^2 - \| \nabla e_{\textbf{b}}^{n} \|^2 }{ 2\Delta t } + \frac{1}{2} \| d_t e_{\textbf{b}}^{n+1} \|^2 + \frac{3 \eta } {4} \| \Delta e_{\textbf{b}}^{n+1} \|^2 \\ \leq & \frac{ \eta }{ 4 } \| \Delta e_{\textbf{b}}^{n} \| ^2 + C ( \Delta t + \| \textbf{u}(t^{n}) \|_1^2 ) \| e_{ \textbf{b} } ^n \|^2_1 + C ( \Delta t + \| \textbf{u}(t^{n}) \|_1^2 ) |e_q^{n+1}| ^2 \\ & + C \Delta t \int_{t^n}^{t^{n+1}} ( \|\textbf{u}_t \|_1^2 + \|\textbf{b}_t \|_1^2 + \|\textbf{b}_{tt}\|^2 ) dt . \endaligned \end{equation} Summing up \eqref{e_error_Ab_combine} with \eqref{e_error_Au_combine} leads to \begin{equation}\label{e_error_AuAb_final1} \aligned & (1+\nu ) \frac{ \| \nabla e_{\textbf{u}}^{n+1} \|^2 - \| \nabla e_{\textbf{u}}^{n} \|^2 }{ 2\Delta t } + \frac{1}{2} \| d_t e_{\textbf{u}}^{n+1} \|^2 + \frac{3 \nu} {4} \| A e_{\textbf{u}}^{n+1} \|^2 \\ & + (1+\eta ) \frac{ \| \nabla e_{\textbf{b}}^{n+1} \|^2 - \| \nabla e_{\textbf{b}}^{n} \|^2 }{ 2\Delta t } + \frac{1}{2} \| d_t e_{\textbf{b}}^{n+1} \|^2 + \frac{3 \eta } {4} \| \Delta e_{\textbf{b}}^{n+1} \|^2 \\ \leq & \frac{ \eta } {2} \| \Delta e_{\textbf{b}}^{n} \|^2 + \frac{ \nu} {4} \| A e_{\textbf{u}}^{n} \|^2 + C ( \Delta t + \| \textbf{u}(t^{n}) \|_1^2 ) \| e_{\textbf{u} }^n \|^2_1 \\ & + C ( \Delta t + \| \textbf{u}(t^{n}) \|_1^2 + \| \textbf{b}(t^{n}) \|_1^2 )( \| e_{\textbf{b} }^n \|^2_1 + |e_q^{n+1}|^2 ) \\ & + C \Delta t \int_{t^n}^{t^{n+1}} ( \|\textbf{u}_t \|_2^2 + \|\textbf{u}_{tt}\|^2 + \|\textbf{b}_t \|_1^2 + \|\textbf{b}_{tt}\|^2 ) dt . 
\endaligned \end{equation} Multiplying \eqref{e_error_AuAb_final1} by $2\Delta t$ and summing over $n$, $n=0,1,\ldots,m$, and applying the discrete Gronwall lemma \ref{lem: gronwall2}, we obtain \begin{equation}\label{e_error_AuAb_final2} \aligned \| \nabla e_{\textbf{u}}^{m+1} \|^2& + \Delta t \sum\limits_{n=0}^{m} \| d_t e_{\textbf{u}}^{n+1} \|^2 + \nu \Delta t \sum\limits_{n=0}^{m} \| A e_{\textbf{u}}^{n+1} \|^2 \\ & + \| \nabla e_{\textbf{b}}^{m+1} \|^2 + \Delta t \sum\limits_{n=0}^{m} \| d_t e_{\textbf{b}}^{n+1} \|^2 + \eta \Delta t \sum\limits_{n=0}^{m} \| \Delta e_{\textbf{b}}^{n+1} \|^2 \\ \leq & C ( \Delta t + \| \textbf{u}(t^{n}) \|_1^2 + \| \textbf{b}(t^{n}) \|_1^2 ) \Delta t \sum\limits_{n=0}^{m} ( \| e_{\textbf{u} }^n \|^2_1 + \| e_{\textbf{b} }^n \|^2_1 ) \\ & + C \Delta t \sum\limits_{n=0}^{m} | e_q^{n+1} |^2 + C (\Delta t)^2 . \endaligned \end{equation} Combining the above estimate with Theorem \ref{thm: error_estimate_ubq}, we finally obtain \begin{equation}\label{e_error_AuAb_final3} \aligned \Delta t& \sum\limits_{n=0}^{m} \| d_t e_{\textbf{u}}^{n+1} \|^2 + \| \nabla e_{\textbf{u}}^{m+1} \|^2 + \nu \Delta t \sum\limits_{n=0}^{m} \| A e_{\textbf{u}}^{n+1} \|^2 + \| \nabla e_{\textbf{b}}^{m+1} \|^2 \\ & + \Delta t \sum\limits_{n=0}^{m} \| d_t e_{\textbf{b}}^{n+1} \|^2 + \eta \Delta t \sum\limits_{n=0}^{m} \| \Delta e_{\textbf{b}}^{n+1} \|^2 \leq C (\Delta t)^2 . \endaligned \end{equation} We are now in a position to prove the pressure estimate. 
Taking the inner product of \eqref{e_error_u} with $\textbf{v}\in \textbf{H}^1_0(\Omega)$, we obtain \begin{equation}\label{e_error_p1} \aligned (\nabla e_p^{n+1},\textbf{v})=&-( d_t e_{\textbf{u} }^{n+1} , \textbf{v})+\nu(\Delta e_{\textbf{u}}^{n+1},\textbf{v})+(\textbf{R}_{\textbf{u}}^{n+1},\textbf{v})\\ &+ \exp( \frac{t^{n+1}}{T} ) \left( q(t^{n+1}) ( \textbf{u}(t^{n+1})\cdot \nabla ) \textbf{u}(t^{n+1}) - q^{n+1} ( \textbf{u}^{n}\cdot \nabla ) \textbf{u}^{n}, \textbf{v}\right) \\ & + \alpha \exp( \frac{t^{n+1}}{T} ) \left( q^{n+1} ( \nabla \times \textbf{b}^{n} ) \times \textbf{b}^{n} - q(t^{n+1}) ( \nabla \times \textbf{b}(t^{n+1}) ) \times \textbf{b}(t^{n+1} ) , \textbf{v} \right) . \endaligned \end{equation} We derive from \begin{equation}\label{e_error_p2} \aligned &\| e_p^{n+1} \|_{L^2(\Omega)/\mathbb{R}} \leq \sup_{\textbf{v} \in \textbf{H}^1_0(\Omega)} \frac{ (\nabla e_p^{n+1},\textbf{v}) }{ \|\nabla \textbf{v} \| }, \endaligned \end{equation} and \eqref{e_error_u_nonlinear_convective}-\eqref{e_error_u_nonlinear_convective2} that, for all $\textbf{v}\in \textbf{H}^1_0(\Omega)$, \begin{equation}\label{e_error_p3} \aligned \exp( \frac{t^{n+1}}{T} )& \left( q(t^{n+1}) ( \textbf{u}(t^{n+1})\cdot \nabla ) \textbf{u}(t^{n+1}) - q^{n+1} ( \textbf{u}^{n}\cdot \nabla ) \textbf{u}^{n}, \textbf{v}\right) \\ =&\frac{q(t^{n+1}) }{ \exp( -\frac{t^{n+1}}{T} ) }\left( ( \textbf{u}(t^{n+1})-\textbf{u}^{n} )\cdot \nabla \textbf{u}(t^{n+1}), \textbf{v} \right) -\frac{e_q^{n+1}}{ \exp( -\frac{t^{n+1}}{T} ) }\left((\textbf{u}^n \cdot \nabla)\textbf{u}^n, \textbf{v} \right) \\ &+\frac{q(t^{n+1}) }{ \exp( -\frac{t^{n+1}}{T} ) }\left( \textbf{u}^{n}\cdot \nabla (\textbf{u}(t^{n+1})-\textbf{u}^{n} ), \textbf{v} \right) \\ \leq & C(\|e_{ \textbf{u} }^{n}\|_1 +\| \int_{t^n}^{t^{n+1}}\textbf{u}_tdt \|_1+ |e_q^{n+1}| ) \|\nabla \textbf{v} \| , \endaligned \end{equation} and for the last term on the right hand side of \eqref{e_error_p1}, by using 
\eqref{e_error_u_Lorentz_force}-\eqref{e_error_u_Lorentz_force5}, we have \begin{equation}\label{e_error_p4} \aligned \alpha \exp( \frac{t^{n+1}}{T} ) & \left( q^{n+1} ( \nabla \times \textbf{b}^{n} ) \times \textbf{b}^{n} - q(t^{n+1}) ( \nabla \times \textbf{b}(t^{n+1}) ) \times \textbf{b}(t^{n+1} ) , \textbf{v} \right) \\ = & \alpha \exp( \frac{t^{n+1}}{T} ) e_q^{n+1} \left( ( \nabla \times \textbf{b}^{n} ) \times \textbf{b}^{n}, \textbf{v} \right) + \alpha \left( ( \nabla \times (\textbf{b}^{n}- \textbf{b}(t^{n+1}) ) ) \times \textbf{b}^{n}, \textbf{v} \right) \\ & + \alpha \left( ( \nabla \times \textbf{b} (t^{n+1}) ) \times ( \textbf{b}^{n} - \textbf{b}(t^{n+1}) ), \textbf{v} \right) \\ \leq & C( \|e_{ \textbf{b} }^{n}\|_1 + \| \textbf{b}^n \| \| \int_{t^n}^{t^{n+1}}\textbf{b}_tdt \|_1+ |e_q^{n+1}| ) \|\nabla \textbf{v} \| . \endaligned \end{equation} Finally, thanks to Theorem \ref{thm: error_estimate_ubq} and \eqref{e_error_AuAb_final3}, we can derive from the above that \begin{equation*}\label{e_error_p5} \aligned &\Delta t \sum\limits_{n=0}^m \|e_p^{n+1}\|^2_{L^2(\Omega)/\mathbb{R}} \leq C\Delta t \sum\limits_{n=0}^m \left( \| d_te_{\textbf{u}}^{n+1}\|^2+ \| \nabla e_{\textbf{u}}^{n+1} \|^2 \right. \\ &\ \ \ \ \ \left. + \| e_{\textbf{u}}^{n} \|_1^2+\|e_{\textbf{b}}^n\|_1^2 + |e_q^{n+1}|^2 \right) +C (\Delta t)^2 \int_{t^0}^{t^{m+1}} \|\textbf{b}_t\|_1^2dt \\ &\ \ \ \ \ +C (\Delta t)^2 \int_{t^0}^{t^{m+1}} (\|\textbf{u}_t\|_1^2+\|\textbf{u}_{tt}\|_{-1}^2 ) dt \leq C(\Delta t)^2. \endaligned \end{equation*} The proof is complete. \end{proof} \section{Numerical experiments} In this section we provide some numerical experiments to validate the SAV schemes developed in the previous sections. Although only semi-discretization in time was discussed above, the IMEX SAV schemes can be coupled with any compatible spatial discretization. 
More precisely, let $ \textbf{X}_h \subset \textbf{H}^1_0( \Omega ) $, $ M_h \subset L^2_0( \Omega) $ and $ \textbf{W}_h \subset \textbf{H}^1_n( \Omega ) $ be a set of compatible approximation spaces for the velocity, pressure and magnetic field. A fully discrete first-order IMEX SAV scheme then reads: find ($ \textbf{u}_h^{n+1}, p_h^{n+1}, \textbf{b}_h^{n+1} $) in ($\textbf{X}_h, M_h, \textbf{W}_h$) and $q_h^{n+1} \in \mathbb{R}$ such that \begin{eqnarray} && ( d_t \textbf{u}_h^{n+1}, \textbf{v}_h ) + \nu (\nabla \textbf{u}_h^{n+1}, \nabla \textbf{v}_h ) - (p_h^{n+1}, \nabla\cdot \textbf{v}_h ) = \alpha \exp( \frac{t^{n+1} }{T} ) q^{n+1}_h \left( (\nabla \times \textbf{b}^n_h) \times \textbf{b}^n_h, \textbf{v}_h \right) \nonumber \\ && \ \ \ \ \ \ \ \ \ \ - \exp( \frac{t^{n+1} }{T} ) q^{n+1}_h \left( ( \textbf{u}^{n}_h \cdot \nabla ) \textbf{u}^{n}_h, \textbf{v}_h \right), \ \ \forall \textbf{v}_h \in \textbf{X}_h, \label{e_SAV_Fully discrete_first_u} \\ && ( \nabla \cdot \textbf{u}_h^{n+1}, \xi_h)=0, \ \ \forall \xi_h \in M_h, \label{e_SAV_Fully div_u} \\ && (d_t \textbf{b}_h^{n+1}, \textbf{w}_h ) + \eta ( \nabla \times \textbf{b}_h^{n+1} , \nabla \times \textbf{w}_h ) + \eta (\nabla \cdot \textbf{b}_h^{n+1}, \nabla \cdot \textbf{w}_h) \nonumber \\ && \ \ \ \ \ \ \ \ \ \ + \exp( \frac{t^{n+1} }{T} ) q^{n+1}_h \left( \nabla \times ( \textbf{b}^n_h \times \textbf{u}^n_h ), \textbf{w}_h \right) = 0, \ \ \forall \textbf{w}_h \in \textbf{W}_h, \label{e_SAV_Fully discrete_first_b} \\ && d_t q^{n+1}_h = -\frac{1}{T}q^{n+1}_h + \exp( \frac{t^{n+1} }{T} ) \nonumber\\ && \big( ( ( \textbf{u}^n_h \cdot\nabla ) \textbf{u}^n_h, \textbf{u}^{n+1}_h ) - \alpha ( ( \nabla \times \textbf{b}^n_h ) \times \textbf{b}^n_h , \textbf{u}^{n+1}_h ) + \alpha( \nabla \times (\textbf{b}^n_h \times \textbf{u}^n_h ) , \textbf{b}^{n+1}_h )\big).\label{e_SAV_Fully discrete_first_q} \end{eqnarray} A second-order fully discrete IMEX SAV scheme can be constructed similarly. 
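The stability proof sketched next rests, besides the explicit treatment of the nonlinear terms, on a standard Hilbert-space identity, recalled here for the reader's convenience (a reminder, not part of the original derivation):

```latex
% For any elements a, b of a real Hilbert space with inner product (.,.):
2\,(a-b,\,a) \;=\; \|a\|^2-\|b\|^2+\|a-b\|^2 .
% Applied with a = u_h^{n+1}, b = u_h^{n} (and likewise for b_h and q_h),
% it turns each term (d_t u_h^{n+1}, u_h^{n+1}) into telescoping energy
% differences, which is what produces the discrete energy inequality.
```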
Following the same procedure as in the proof of Theorem \ref{thm_energy stability_first order}, namely, setting $\textbf{v}_h=\textbf{u}_h^{n+1}$, $\xi_h=p_h^{n+1}$, $\textbf{w}_h= \alpha \textbf{b}_h^{n+1}$ in \eqref{e_SAV_Fully discrete_first_u}-\eqref{e_SAV_Fully discrete_first_b} respectively, and multiplying \eqref{e_SAV_Fully discrete_first_q} by $q_h^{n+1}$, we can obtain the following stability result: The scheme \eqref{e_SAV_Fully discrete_first_u}-\eqref{e_SAV_Fully discrete_first_q} is unconditionally stable in the sense that \begin{equation}\label{fully discrete_eng} \aligned E_h^{n+1}-E_h^{n} \leq & - \nu\Delta t \| \nabla \textbf{u}_h^{n+1} \|^2 - \eta \alpha \Delta t \| \nabla \textbf{b}_h^{n+1} \|^2 \\ & - \eta \alpha \Delta t \| \nabla \times \textbf{b}_h^{n+1} \|^2 -\frac{1}{T}\Delta t|q_h^{n+1}|^2, \ \ \forall \Delta t,\; n\geq 0, \endaligned \end{equation} where \begin{equation*} E_h^{n+1}=\frac 12 \|\textbf{u}_h^{n+1}\|^2 + \frac \alpha 2 \|\textbf{b}_h^{n+1}\|^2 +\frac 12|q_h^{n+1}|^2 . \end{equation*} In our simulation, we use $(P_2,P_1,P_2)$ finite elements to approximate the velocity, pressure and magnetic field, respectively. Note that the $(P_2,P_1)$ finite elements for the velocity and pressure satisfy the inf-sup condition, so one can easily show that the fully discrete scheme \eqref{e_SAV_Fully discrete_first_u}-\eqref{e_SAV_Fully discrete_first_q} coupled with $(P_2,P_1,P_2)$ finite elements is well posed and can be solved following the procedure described in Section 2. In this example, we set $\Omega=(0,1)\times(0,1)$, $\nu=0.01$, $\eta=0.01$, $\alpha=1$, $T=1$. 
The right hand side of the equations is computed according to the analytic solution given below: \begin{equation*} \aligned \begin{cases} u_1(x,y,t)=\pi k \sin^2(\pi x)\sin(\pi y)\cos(t),\\ u_2(x,y,t)=-\pi k \sin(\pi x)\sin^2(\pi y)\cos(t),\\ p(x,y,t)=k (x-1/2)(y-1/2)\cos(t)/10,\\ b_1(x,y,t)=k \sin(\pi x)\cos(\pi y)\cos(t),\\ b_2(x,y,t)=-k \cos(\pi x)\sin(\pi y)\cos(t), \end{cases} \endaligned \end{equation*} where $k=0.01$. To test the time accuracy, we choose $h=0.005$ so that the spatial discretization error is negligible compared to the time discretization error for the time steps used in this experiment. \begin{table}[htbp] \renewcommand{\arraystretch}{1.1} \small \centering \caption{Errors and convergence rates with the first-order scheme \eqref{e_SAV_scheme_first_u}-\eqref{e_SAV_scheme_first_q}}\label{table1} \begin{tabular}{p{1.2cm}p{1.8cm}p{1.3cm}p{1.8cm}p{1.3cm}p{1.8cm}p{1.3cm}}\hline $\Delta t$ &$\|\mathbf{u}_h-\mathbf{u}\|_{H^1}$ &Order &$\|\mathbf{u}_h-\mathbf{u}\|_{L^2}$&Order &$\|p_h-p\|_{L^2}$ &Order \\ \hline 1/2 & 8.26E-3 & --- &1.34E-3 &--- &2.66E-5 &--- \\ 1/4 & 3.96E-3 &1.06 &7.16E-4 &0.91 &1.16E-5 &1.12 \\ 1/8 & 1.93E-3 &1.04 &3.70E-4 &0.95 &5.41E-6 &1.10 \\ 1/16 & 9.52E-4 &1.04 &1.89E-4 &0.97 &2.61E-6 &1.05 \\ 1/32 &4.72E-4 &1.01 &9.51E-5 &0.99 &1.28E-6 &1.03 \\ 1/64 &2.35E-4 &1.01 &4.78E-5 &0.99 &6.33E-7 &1.01 \\ \hline \end{tabular} \end{table} \begin{table}[htbp] \renewcommand{\arraystretch}{1.1} \small \centering \caption{Errors and convergence rates with the first-order scheme \eqref{e_SAV_scheme_first_u}-\eqref{e_SAV_scheme_first_q} }\label{table2} \begin{tabular}{p{1.2cm}p{2.0cm}p{1.5cm}p{2.0cm}p{1.5cm}}\hline $\Delta t$ &$\|\mathbf{b}_h-\mathbf{b}\|_{H^1}$& Order &$\|\mathbf{b}_h-\mathbf{b}\|_{L^2}$&Order \\ \hline 1/2 &4.52E-3 & --- &1.22E-3 & --- \\ 1/4 &2.10E-3 &1.11 &6.39E-4 &0.94 \\ 1/8 &1.00E-3 &1.07 &3.27E-4 &0.97 \\ 1/16 &4.89E-4 &1.04 &1.65E-4 &0.98 \\ 1/32 &2.41E-4 &1.02 &8.31E-5 &0.99 \\ 1/64 &1.20E-4 &1.01 &4.17E-5
&1.00 \\ \hline \end{tabular} \end{table} \begin{table}[htbp] \renewcommand{\arraystretch}{1.1} \small \centering \caption{Errors and convergence rates with the second-order scheme \eqref{e_SAV_scheme_second_u}-\eqref{e_SAV_scheme_second_q} }\label{table3} \begin{tabular}{p{1.2cm}p{1.8cm}p{1.3cm}p{1.8cm}p{1.3cm}p{1.8cm}p{1.3cm}}\hline $\Delta t$ &$\|\mathbf{u}_h-\mathbf{u}\|_{H^1}$ &Order &$\|\mathbf{u}_h-\mathbf{u}\|_{L^2}$&Order &$\|p_h-p\|_{L^2}$ &Order \\ \hline 1/2 &6.43E-3 & --- &8.84E-4 &--- &1.94E-5 & --- \\ 1/4 &1.99E-3 &1.70 &2.32E-4 &1.93 &5.23E-6 &1.89 \\ 1/8 &5.49E-4 &1.85 &5.35E-5 &2.12 &1.38E-6 &1.92 \\ 1/16 &1.44E-4 &1.93 &1.26E-5 &2.09 &3.53E-7 &1.96 \\ 1/32 &3.70E-5 &1.96 &3.05E-6 &2.04 &8.92E-8 &1.99 \\ 1/64 &1.03E-5 &1.85 &7.52E-7 &2.02 &2.24E-8 &1.99 \\ \hline \end{tabular} \end{table} \begin{table}[htbp] \renewcommand{\arraystretch}{1.1} \small \centering \caption{Errors and convergence rates with the second-order scheme \eqref{e_SAV_scheme_second_u}-\eqref{e_SAV_scheme_second_q} }\label{table4} \begin{tabular}{p{1.2cm}p{2.0cm}p{1.5cm}p{2.0cm}p{1.5cm}}\hline $\Delta t$ &$\|\mathbf{b}_h-\mathbf{b}\|_{H^1}$& Order &$\|\mathbf{b}_h-\mathbf{b}\|_{L^2}$&Order \\ \hline 1/2 &3.54E-3 & --- &8.38E-4 & --- \\ 1/4 & 1.06E-3 &1.74 &2.30E-4 &1.87 \\ 1/8 & 2.90E-4 &1.88 &5.57E-5 &2.05 \\ 1/16 & 7.54E-5 &1.94 &1.35E-5 &2.04 \\ 1/32 & 1.92E-5 &1.97 &3.32E-6 &2.02 \\ 1/64 &4.88E-6 &1.98 &8.23E-7 &2.01 \\ \hline \end{tabular} \end{table} Numerical results for this example with first- and second-order schemes are presented in Tables \ref{table1}-\ref{table4}. We observe that the results for the first-order scheme \eqref{e_SAV_scheme_first_u}-\eqref{e_SAV_scheme_first_q} are consistent with the error estimates in Theorems \ref{thm: error_estimate_ubq} and \ref{thm: error_estimate_p}. 
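As a side check, the empirical orders reported in the tables can be recomputed from successive errors via $\mathrm{rate}=\log_2\big(e(2\Delta t)/e(\Delta t)\big)$. A minimal sketch in Python, using the $H^1$ velocity errors of Table \ref{table1}:

```python
import math

# H^1 velocity errors of Table 1 (first-order scheme), for dt = 1/2, 1/4, ..., 1/64
errors = [8.26e-3, 3.96e-3, 1.93e-3, 9.52e-4, 4.72e-4, 2.35e-4]

# observed order between consecutive step sizes: rate = log2(e(2*dt) / e(dt))
rates = [math.log2(e_coarse / e_fine) for e_coarse, e_fine in zip(errors, errors[1:])]
print([round(r, 2) for r in rates])  # all close to 1, i.e. first-order accuracy in time
```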
Second-order convergence rates for the velocity, pressure and magnetic field are observed for the second-order scheme \eqref{e_SAV_scheme_second_u}-\eqref{e_SAV_scheme_second_q}. \section{Concluding remarks} We constructed first- and second-order time discretization schemes based on the SAV approach for the MHD equations. The nonlinear terms are treated explicitly in our schemes, so at each time step they only require solving a sequence of linear equations with constant coefficients. Thus, the schemes are efficient and easy to implement. Despite the fully explicit treatment of the nonlinear terms, we proved that our schemes are unconditionally energy stable. This is made possible by introducing a purely artificial scalar auxiliary variable, $q(t)$, which enables the nonlinear contributions to the energy to cancel with each other as in the continuous case, leading to unconditional energy stability. Using the unconditional stability result, which yields a uniform bound on the numerical solution, we derived rigorous error estimates for the velocity, pressure and magnetic field of the first-order scheme in the two-dimensional case without any condition on the time step. To the best of our knowledge, this is the first linear, unconditionally energy stable and convergent scheme with fully explicit treatment of the nonlinear terms for the MHD equations. We believe that the error estimates can also be established for the second-order scheme in the two-dimensional case, although the process will surely be much more tedious. However, it appears that the error estimates cannot easily be extended to the three-dimensional case, as our proof relies essentially on some inequalities which are only valid in two dimensions. \bibliographystyle{siamplain}
\section{Introduction} In this letter we report the results of some Molecular Dynamics simulations of a one component model of plasma. In particular we estimate the diffusion coefficient $D_{\perp}$ transverse to the magnetic field, in the case of a weakly coupled plasma, for different values of temperature $T$ and of magnetic field strength $B$. Up to now, Molecular Dynamics simulations of the diffusion coefficient have been performed only for the case of strongly coupled plasmas, see for example the recent paper~\cite{OttBonitz}. The general conclusion of that paper can be summarized by saying that, in the strongly coupled case, the diffusion coefficient obeys the scaling law $D_{\perp}\propto T B^{-1}$ proposed long ago by Bohm (see Table~1, page 135003-2 of \cite{OttBonitz}). Our computations, while confirming the ones of paper~\cite{OttBonitz} for the strongly coupled case, indicate that some important differences arise in the weakly coupled regime. In particular there exists a threshold in the coupling parameter, such that \begin{itemize} \item concerning the dependence on $T$, at variance with Bohm's law, below the threshold (i.e., at sufficiently high temperature) the diffusion coefficient $D_{\perp}$ starts decreasing as temperature increases; \item concerning the dependence on $B$, one finds that $D_{\perp}$ decreases as $B^{-2}$ for small fields, but then apparently saturates to a constant value for larger values of $B$. \end{itemize} \begin{figure} \centering \includegraphics[width=0.5\textwidth]{graph_new} \caption{ Diffusion coefficient transverse to the magnetic field versus $\beta$, as computed by MD simulations. Full circles are the results of our computations, while the empty triangles are values taken from paper~\cite{OttBonitz}. The straight line corresponding to $\beta^{-2}$ is also shown (dashed line). 
}\label{fig:graph_new} \end{figure} The results of our computations are summarized in figure~\ref{fig:graph_new}, where, in logarithmic scale, the value of the coefficient $D_{\perp}$ is reported (full circles) as a function of the dimensionless parameter $\beta$, defined by $\beta = B/\sqrt{nmc^2}$, where $m$ is the electron mass, $n$ the electron density and $c$ the speed of light (we are working in the c.g.s. system). The different lines connect simulations performed at the same value of $\Gamma$, the dimensionless coupling parameter defined by $\Gamma = n^{1/3}e^2/(k_BT)$, where $k_B$ is the Boltzmann constant, and $e$ the electron charge. Such a parameter discriminates between the strongly coupled case, corresponding to $\Gamma>1$, and the weakly coupled case corresponding to $\Gamma \ll 1$. For comparison, the same figure also reports some values (empty triangles) taken from reference \cite{OttBonitz}, for $\Gamma$ equal to $1.24$ and $3.1$ (corresponding to the values $2$ and $5$ if one uses the definition of $\Gamma$ given in that paper). The agreement with our data seems to be good. The figure clearly shows that, while for large values of $\Gamma$, down to $0.1$, the coefficient $D_{\perp}$ decreases as a function of $\Gamma$ (i.e. is an \textbf{increasing} function of $T$), for the smaller value $\Gamma=0.01$ an inversion occurs, i.e. the values of $D_{\perp}$ are \textbf{smaller} than the corresponding values at $\Gamma=0.1$. So there must exist a threshold in $\Gamma$, below which $D_{\perp}$ becomes a \textbf{decreasing} function of $T$. A straight line corresponding to $\beta^{-2}$ is also shown. One can check that up to $\beta=1$, the data for $\Gamma=0.1$ and $\Gamma=0.01$ seem to lie parallel to such a line. This means that, at fixed density, the coefficient $D_{\perp}$ decreases as $B^{-2}$, as predicted by kinetic theory (see~\cite{Spitzer,Rosenbluth}). 
However, by further increasing the magnetic field above $\beta=1$, the diffusion coefficient appears to saturate to an apparently constant value independent of $B$. To our knowledge, this phenomenon was neither observed nor foreseen before. The only reported evidence of some kind of transition that should occur in a weakly coupled plasma, when passing from $\beta \lesssim 1$ (weakly magnetized) to $\beta\gtrsim 1$ (strongly magnetized) was given in~\cite{CBMZG}. In that paper, such a transition was ascribed to a transition from a fully chaotic regime (low magnetic field) to a partially ordered one (high magnetic field), as first proposed in paper \cite{CZMMMG}. We now illustrate how a change of the dynamical behavior might also explain the behavior of $D_{\perp}$ reported above. We recall (see for example the textbook~\cite{Wannier}) that the diffusion coefficient can be expressed in terms of the velocity autocorrelation $\langle \vec {v}_{\perp}(t) \cdot \vec{v}_{\perp} (0)\rangle$ as follows \begin{equation}\label{eq:diffusione} D_{\perp} = \frac 12 \int_0^{+\infty} \langle \vec {v}_{\perp}(t) \cdot \vec{v}_{\perp} (0)\rangle \,\mathrm{d} t \ , \end{equation} where $\vec{v}_{\perp}$ is the component of the velocity of a particle transverse to the magnetic field, and the brackets mean a suitable average over the particles. Now, in figure~\ref{fig:corr_vel} we report the velocity spectrum, i.e. the Fourier transform of the autocorrelation, calculated at $\Gamma=0.1$, for two values of $\beta$ (below and above $\beta=1$). One sees that in both cases, a strong peak occurs at the corresponding cyclotron frequency $\omega_c := eB/mc$, so that one can suppose that $$ \langle \vec {v}_{\perp}(t) \cdot \vec{v}_{\perp} (0)\rangle \simeq \langle \vec {v}_{\perp}(0) \cdot \vec{v}_{\perp} (0)\rangle \cos(\omega_c t) f(t) \ , $$ where $f(t)$ is a function which characterizes the decay to zero of the autocorrelation as $t\to +\infty$. 
In the fully chaotic regime one can take $f(t)=e^{-\gamma t}$, where $\gamma$ is the inverse of the decorrelation time. If such a time is larger than the cyclotron period, from eq. (\ref{eq:diffusione}) one gets \begin{equation}\label{eq:coeff1} D_{\perp} = \frac {k_BT}{m} \frac {\gamma}{\omega_c^2} \ . \end{equation} This expression shows that $D_{\perp}$ decreases as $B^{-2}$, in agreement with our numerical data for small $B$. \begin{figure} \centering \includegraphics[width=0.5\textwidth]{spectrum01} \caption{ Fourier transform of the velocity autocorrelation function $\langle \vec{v}_{\perp}(t)\cdot\vec{v}_{\perp}(0)\rangle$, obtained in two simulations, with the same value $\Gamma=0.1$, and two different values of $\beta$, namely, $\beta=0.5$ (black line) and $\beta=5$ (gray line). The frequencies are reported in units of $\omega_p$. Notice the peaks centered at the corresponding cyclotron frequencies. }\label{fig:corr_vel} \end{figure} Instead one can suppose that, in a partially ordered case, the decay of correlations is much slower, for example as an inverse power of time. Taking for example $f(t)= 1/\big(1+(\gamma^* t)^2\big)$, one gets the expression \begin{equation}\label{eq:coeff2} D_{\perp} = \frac {k_BT}{m} \frac{\pi}2 \frac {\exp(-\omega_c/\gamma^*)}{\gamma^*} \ . \end{equation} Now one needs to match the expression (\ref{eq:coeff1}) with (\ref{eq:coeff2}) at $\omega_c$ equal to the plasma frequency $\omega_p := \sqrt{ne^2/m}$, i.e., for $\beta=1$, which is precisely the value of the threshold predicted in reference~\cite{CZMMMG}. Given a value of $\gamma/\omega_p$, this matching determines two possible values of $\gamma^*/\omega_p$, one smaller than 1 and one bigger. If one chooses the larger one, the expression (\ref{eq:coeff2}) for $\omega_c\simeq\omega_p$ gives a curve with a slope much smaller than the curve (\ref{eq:coeff1}), thus reproducing the behavior found numerically. 
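Both closed forms follow from the elementary cosine integrals $\int_0^{\infty} e^{-\gamma t}\cos(\omega_c t)\,dt = \gamma/(\gamma^2+\omega_c^2)$ and $\int_0^{\infty} \cos(\omega_c t)/\big(1+(\gamma^* t)^2\big)\,dt = (\pi/2\gamma^*)\,e^{-\omega_c/\gamma^*}$, together with $\langle|\vec v_{\perp}|^2\rangle = 2k_BT/m$ in two transverse dimensions. A quick numerical sanity check of the two integrals (the parameter values below are arbitrary):

```python
import math

def cos_integral(f, omega, T=400.0, n=400000):
    """Trapezoidal approximation of int_0^T cos(omega*t) f(t) dt."""
    dt = T / n
    total = 0.5 * (f(0.0) + math.cos(omega * T) * f(T))
    for k in range(1, n):
        t = k * dt
        total += math.cos(omega * t) * f(t)
    return total * dt

omega_c, gamma, gamma_s = 2.0, 0.5, 1.5   # arbitrary test values

# exponential decay f(t) = exp(-gamma t): exact value gamma / (gamma^2 + omega_c^2)
i1 = cos_integral(lambda t: math.exp(-gamma * t), omega_c)
exact1 = gamma / (gamma**2 + omega_c**2)

# power-law decay f(t) = 1/(1+(gamma_s t)^2): exact value (pi/(2 gamma_s)) exp(-omega_c/gamma_s)
i2 = cos_integral(lambda t: 1.0 / (1.0 + (gamma_s * t)**2), omega_c)
exact2 = (math.pi / (2.0 * gamma_s)) * math.exp(-omega_c / gamma_s)

print(abs(i1 - exact1), abs(i2 - exact2))  # both differences are small
```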
At very large values of $\omega_c$, the diffusion coefficient $D_{\perp}$ should begin to decrease faster than any inverse power of $B$, but this range is actually outside our reach. At any rate, a decrease of $D_{\perp}$ faster than any inverse power of $B$ is reported in reference \cite{Ranganathan}, and could actually be ascribed to the phenomenon just described. We now give some details of our numerical computations. First of all, for the purpose of estimating $D_{\perp}$, it is better to use directly its definition, instead of making use of relation (\ref{eq:diffusione}). We recall that the transverse diffusion coefficient is defined by \begin{equation} D_{\perp} = \lim_{t\rightarrow\infty} \frac{\langle\dx\rangle}{4t} \label{diffusionCoefficientDef} \end{equation} where \begin{equation} \dx := |x(t) - x(t_0)|^2 + |y(t) - y(t_0)|^2 \ , \end{equation} i.e., as the mean square displacement of a particle in the plane orthogonal to the magnetic field $\vec B$ (which is taken directed as the $z$--axis), and the average should in principle be an average over all the plasma particles. The quantity in eq.~\eqref{diffusionCoefficientDef} can be computed from a numerical simulation by averaging over the particles which participate in the simulation, while the asymptotic value can be found from the plot of $\langle\dx\rangle$ versus $t$. A typical example of such a plot is shown in Fig.~\ref{fig:oscillations}; as one can see, it displays an oscillatory behavior superimposed on a linear growth, with a ballistic motion at the beginning. To remove the oscillations we replaced each value of $\langle\dx\rangle(t)$ by its mean value $\bar{X}(t)$ taken over a cyclotron period centered at $t$. The resulting data were analyzed with a linear regression of the law $\bar{X} = D t + C$, with two constants $C$ and $D$. 
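The procedure just described (averaging $\langle\Delta x^2\rangle$ over a cyclotron period, then a least-squares fit of $\bar X = Dt + C$ on the tail) can be sketched as follows. The input here is a synthetic signal with a known diffusion coefficient, standing in for the actual simulation output:

```python
import math

# synthetic stand-in for the measured mean square displacement:
# <dx^2>(t) = 4*D*t plus a bounded cyclotron oscillation (illustrative values)
D_true, omega_c, dt, nsteps = 0.05, 10.0, 0.01, 20000
t = [k * dt for k in range(nsteps)]
msd = [4.0 * D_true * s + 0.02 * (1.0 - math.cos(omega_c * s)) for s in t]

# remove the oscillations: mean of <dx^2> over one cyclotron period centered at t
half = int(round(math.pi / omega_c / dt))            # half a period, in steps
smoothed = [(t[k], sum(msd[k - half:k + half + 1]) / (2 * half + 1))
            for k in range(half, nsteps - half)]

# least-squares fit of xbar = slope*t + C, restricted to the tail of the graph
tail = smoothed[len(smoothed) // 2:]
n = len(tail)
sx = sum(s for s, _ in tail); sy = sum(x for _, x in tail)
sxx = sum(s * s for s, _ in tail); sxy = sum(s * x for s, x in tail)
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
D_est = slope / 4.0                                  # D_perp = lim <dx^2>/(4t)
print(D_est)  # close to D_true = 0.05
```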
Here one has to choose a temporal window to include only the tail of the graph; we progressively restricted such a window until the reduced error $\chi$ (the sum of the squared residuals divided by the number of points minus 2) became less than 1. The values thus found are reported in figure~\ref{fig:graph_new}. \begin{figure} \centering \includegraphics[width=0.5\textwidth]{graph2} \caption{ Graph of $\langle \dx \rangle$ vs time $t$ (in $\omega_c^{-1}$ units) in the case $\Gamma=0.01$ and $\beta=4$. }\label{fig:oscillations} \end{figure} As regards the model, we recall that the one component model of a plasma consists of a gas of electrons moving in a fixed uniform neutralizing background. So we consider a number $N$ of electrons in a cubic box of side $L$ with periodic boundary conditions, the electrons being subject to mutual Coulomb interactions, and to an external magnetic field $\vec{B} = B\vec{e}_z$. The density is then defined by $n=N/L^3$. If $t$ denotes time and $\vec{x}_i$ the position of the $i$-th electron (with $i=1,\dots,N$), with the rescaling \begin{equation} \vec{y}_i = n^{1/3}\vec{x}_i,\quad \tau = \omega_c t,\quad m=1 \ , \label{units} \end{equation} which in particular implies that the density takes the value 1, the equations of motion read \begin{equation} \label{newtonEquation} \frac{d^2 {\vec{y}}_i}{d\tau^2} = \vec{e}_z \times \frac{{d\vec{y}}_i}{d\tau} +\frac{1}{\beta^2} \sum_{j \neq i} \vec{E}_j(\vec{y}_i) \end{equation} where $\vec{E}_j$ is the electric field created by the $j$--th electron, evaluated at the position of the $i$-th one. 
The total electric field $\vec{E}=\sum_{j \neq i} \vec{E}_j$ acting on an electron, created by a periodic system of charges, can be computed via the Ewald formula (see~\cite{Gibbon}), as follows \begin{equation} \begin{split} &\vec{E}(\vec{y}_i) = \\ & \sum_{\vec{l}} \sum_{j=1}^N \frac{ \vec{y}_{ij\vec{l}} }{| \vec{y}_{ij\vec{l}} |^3} \left[ \erfc({\alpha} | \vec{y}_{ij\vec{l}} | ) + \frac{2 \alpha | \vec{y}_{ij\vec{l}} | }{ \sqrt{\pi} } \exp(-{\alpha}^2{ | \vec{y}_{ij\vec{l}} |}^2) \right] \\ & + \frac{4 \pi}{N} \sum_{\vec{k} \neq 0} \sum_{j=1}^N \frac{\vec{k}}{k^2} e^{-k^2\!/4 {\alpha}^2} \sin( \vec{k} \cdot \vec{y}_{ji}), \qquad \alpha = \frac{\sqrt{\pi}N^{1/6}}{L} \notag \label{ewaldField} \\ \end{split} \end{equation} Here $\vec{y}_{ij\vec{l}} = \vec{y}_i - \vec{y}_j + \vec{l}$, where $\vec{l}$ is a triplet of integers denoting the position of an image cell. One has to point out that only the parameter $\beta$ enters into the equations of motion. The second one, $\Gamma$, enters through the choice of the initial data: indeed, while the positions are extracted from a uniform distribution, the velocities are taken from a Maxwell distribution at temperature $T$. With this choice, at the beginning of each simulation the system is out of equilibrium: so there is a drift of the kinetic energy, and the system reaches a different, random, temperature. In order to fix the temperature to the desired value, we proceed as follows: after extracting the initial values, we let the system evolve until equilibrium is reached, i.e. until the kinetic energy appears to stabilize. We then generate new velocities again with a Maxwell distribution at temperature $T$, and repeat the process until the kinetic energy appears to be constant, close to the chosen value. Equations \eqref{newtonEquation} are integrated using a symplectic splitting algorithm. The interparticle forces are computed with the aid of parallel computers, using GPUs with up to 15 multiprocessors. 
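The letter does not spell out which splitting is used; a natural choice for eq. \eqref{newtonEquation} alternates the exact gyration generated by the magnetic term with "kicks" from the Coulomb term. A minimal single-particle sketch, in which the zero force is a placeholder for the Ewald field $\frac{1}{\beta^2}\sum_{j\neq i}\vec E_j$:

```python
import math

def gyration(y, v, h):
    """Exact flow over a step h of dv/dtau = e_z x v, dy/dtau = v
    (unit cyclotron frequency in the rescaled units of eq. (units))."""
    c, s = math.cos(h), math.sin(h)
    vx, vy, vz = v
    y_new = (y[0] + s * vx + (c - 1.0) * vy,
             y[1] + (1.0 - c) * vx + s * vy,
             y[2] + h * vz)
    return y_new, (c * vx - s * vy, s * vx + c * vy, vz)

def coulomb_force(y):
    # placeholder for the rescaled Ewald field (1/beta^2) * sum_j E_j(y)
    return (0.0, 0.0, 0.0)

def step(y, v, h):
    # Strang splitting: half Coulomb kick, exact gyration, half Coulomb kick
    fx, fy, fz = coulomb_force(y)
    v = (v[0] + 0.5 * h * fx, v[1] + 0.5 * h * fy, v[2] + 0.5 * h * fz)
    y, v = gyration(y, v, h)
    fx, fy, fz = coulomb_force(y)
    v = (v[0] + 0.5 * h * fx, v[1] + 0.5 * h * fy, v[2] + 0.5 * h * fz)
    return y, v

# pure-gyration check: after one full cyclotron period the orbit closes
y, v = (0.0, 0.0, 0.0), (1.0, 0.0, 0.0)
n = 6283
h = 2.0 * math.pi / n
for _ in range(n):
    y, v = step(y, v, h)
print(y, v)  # back at the origin with the initial velocity, up to roundoff
```

With the Ewald field in place of the zero force, each step remains a composition of maps that are individually symplectic, which is what keeps the long-time energy drift small.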
But even with such a device, we cannot afford to integrate the equations of motion for small values of $\Gamma$, and we have to stop at $\Gamma=0.01$. This is due to two problems which arise in the weakly coupled regime. The first problem concerns the integration step $h$. In fact, as the velocities are proportional to $\Gamma^{-1/2}$, to achieve a good energy conservation when short-distance collisions occur, one has to use a very small time step. A step $h=10^{-3}$ is sufficiently small for the strongly coupled cases, in which conservation of energy was always better than $0.05\%$. The step had to be reduced up to $h=2.5\times10^{-5}$ for $\Gamma=0.01$. Curiously enough, the value of $\beta$ also seems to influence the choice of $h$. For example, if $\beta=10$, the value $h=10^{-3}$ proved to be adequate even in the case $\Gamma=0.01$. The error on conservation of energy is still below $0.05\%$ for $\Gamma=0.1$ but increases up to $0.5\%$ for $\Gamma=0.01$. The second problem is that, working with periodic boundary conditions, one should have the fundamental cell with side larger than the Debye length $\lambda_D$, which in our units reads $\lambda_D = \sqrt{1/\Gamma}$. As in the rescaled variables the density has value 1, we have $L = N^{1/3}$, so that the requirement $L > \lambda_D$ in our units takes the form of the constraint $N > \Gamma^{-3/2}$, which is a very stringent condition in the weakly coupled regime $\Gamma \ll 1$. Indeed, as the Coulomb force is a long-range one, the computational cost increases as $N^2$, i.e., as $\Gamma^{-3}$. This means that the computational cost increases at least a thousandfold when $\Gamma$ is decreased by a factor of ten. Actually, as one must also decrease the value of the integration step, the computational cost increases even more. Now, although in the strongly coupled cases a single particle would satisfy the constraint, we actually used $480$ particles.
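The constraint $N > \Gamma^{-3/2}$ and the resulting cost growth are elementary arithmetic, which the following lines make explicit (in the rescaled units, where $\lambda_D=\sqrt{1/\Gamma}$ and $L=N^{1/3}$):

```python
def min_particles(Gamma):
    # smallest N for which L = N**(1/3) exceeds the Debye length sqrt(1/Gamma)
    return Gamma ** -1.5

def cost_growth(Gamma_from, Gamma_to):
    # growth factor of the O(N^2) pairwise-force cost when Gamma is lowered
    return (min_particles(Gamma_to) / min_particles(Gamma_from)) ** 2
```

Lowering $\Gamma$ from $0.1$ to $0.01$ raises the minimal $N$ from about $32$ to $1000$, and the pairwise cost a thousandfold.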
In the cases $\Gamma= 0.1$ and $0.01$ instead, for which the constraint gives $N>31$ and $N>1000$ respectively, we took $N=896$ and $N=1024$. This last figure is the maximum number of particles we can deal with. Computations with this number of particles take months to complete. Our data are affected by some fluctuations, due to various factors; mainly, we suppose, the very limited number of particles. In paper~\cite{OttBonitz}, in which a model essentially identical to ours was integrated, the authors report that a certain stability of the numerical results is obtained using a number of particles equal to $N=8192$, which is beyond our reach. So this work should be regarded, also numerically, as a preliminary one. Naturally, the main improvement would be the ability to simulate a two-component model, which is, at the moment, beyond our numerical capabilities.
\section{Introduction} \indent\indent In this paper, we are concerned with isentropic compressible Euler equations with a nonlinear term: \begin{equation}\label{a1} \left\{\begin{aligned} &\partial_{t}\rho+\partial_{x}(\rho u)=0,\\ &\partial_{t}(\rho u)+\partial_{x}(\rho u^{2}+p)=\beta\rho|u|^{\alpha}u, \end{aligned}\right. \quad(t,x)\in[0,+\infty)\times[0,L], \end{equation} where $\rho,u$ and $p$ are the density, velocity and pressure of the gas, respectively. The pressure $p(\rho)$ is governed by $p(\rho)=a\rho^{\gamma}$, where the adiabatic exponent $\gamma>1$ and the parameter $a$ is scaled to unity for mathematical convenience. The sound speed $c\geq0$ is defined by $c^{2}=\partial p/\partial\rho$. The term $\beta\rho|u|^{\alpha}u$ represents the friction, with $\alpha,\beta\in\mathbb{R}$.\\ \indent In this paper, we assume the initial data are \begin{align}\label{a2} (\rho,u)^{\top}|_{t=0}=(\rho_{0}(x),u_{0}(x))^{\top}. \end{align} The boundary conditions are \begin{align}\label{a3} (\rho,u)^{\top}|_{x=0}=(\rho_{l}(t),u_{l}(t))^{\top} \end{align} and $\rho_{l}(t),u_{l}(t)$ are periodic functions with a period $P>0$, i.e. $$\rho_{l}(t+P)=\rho_{l}(t),\quad u_{l}(t+P)=u_{l}(t). $$ In order to obtain the $C^{1}$ solution, the initial and boundary data should satisfy the following compatibility conditions at the point $(0,0)$ \begin{equation}\label{a16} \left\{ \begin{aligned} &\rho_{l}'(0)+\rho_{0}'(0)u_{0}(0)+\rho_{0}(0)u_{0}'(0)=0,\\ &\rho_{l}'(0)u_{l}(0)+\rho_{l}(0)u_{l}'(0)+\rho_{0}'(0)u_{0}^{2}(0) +2\rho_{0}(0)u_{0}(0)u_{0}'(0)\\ &\quad+p'_{0}(0)-\beta\rho_{0}(0)u_{0}^{\alpha+1}(0)=0,\\ &\rho_{0}(0)=\rho_{l}(0),\quad u_{0}(0) =u_{l}(0), \end{aligned}\right. \end{equation} where $$ p'_{0}(0)=\gamma\rho_{0}^{\gamma-1}(0)\rho'_{0}(0). $$\\ \indent Because of their widespread applications, the compressible Euler equations with several kinds of source terms have been studied extensively and there are fruitful results.
For example, we refer to \cite{Hsiao1,Hsiao, Pan, Xin} for research on the existence and stability of small smooth solutions, and to \cite{Chen, Ding, Huang,Hsiao2, Sui, Wang-Chen, Yin} for the singularity formation of smooth solutions and results on weak solutions. In this paper, we are interested in the time-periodic solution of problem \eqref{a1}-\eqref{a3}. As far as we know, there are many works on time-periodic solutions of partial differential equations, such as the viscous fluids equations~\cite{Cai,Jin,Luo,Ma,Matsumura} and the hyperbolic conservation laws~\cite{G,Ohnawa,Takeno,Temple,Naoki}. All of the studies mentioned above discuss time-periodic solutions driven by time-periodic external forces or by piston motion. But there are few works on time-periodic solutions of hyperbolic conservation laws driven by time-periodic boundary conditions. In~\cite{Yuan}, Yuan studied time-periodic supersonic solutions for the isentropic compressible Euler equations (i.e. $\beta=0$) triggered by periodic supersonic boundary conditions. For quasilinear hyperbolic systems with more general time-periodic boundary conditions, Qu showed the existence and stability of time-periodic solutions in a small neighborhood of $u\equiv0$ in~\cite{Qu}. Recently, Wei et al.~\cite{Wei} studied the global stability problem for supersonic flows in one-dimensional compressible Euler equations with a friction term $-\mu\rho|u|u,\mu>0$. \\ \indent In this paper, we would like to show global existence and uniqueness of time-periodic supersonic solutions of the initial-boundary value problem~\eqref{a1}-\eqref{a3} with the general friction term $\beta\rho|u|^\alpha u$ by perturbing some supersonic Fanno flow. Different from \cite{Wei}, we consider \eqref{a1}-\eqref{a3} in terms of the sound speed and the fluid speed. The Fanno flow is then considered for an upstream state of positive constants $(c_-, u_-)$ on the left side.
After analyzing the ODEs carefully, we obtain the maximal duct length $L_m$, beyond which the flow gets choked. Based on the supersonic Fanno flow, we prove the existence of time-periodic solutions by wave decomposition.\\ \indent The main results of this paper are:\\ \begin{theorem}\label{t3} For any fixed non-sonic upstream state $(\rho_-, u_-)$ satisfying $0<u_-\neq \sqrt{\gamma}\rho_-^{{\gamma-1}\over 2}$, there exists a maximal duct length $L_m$, which depends only on $\alpha, \beta, \gamma$ and $(\rho_-, u_-)^{\top}$, such that the steady solution $\tilde{V}=(\tilde{\rho}(x), \tilde{u}(x))^{\top}$ of problem \eqref{a1} exists in $[0, L_m]$ and keeps the upstream supersonic/subsonic state. \end{theorem} \begin{theorem}\label{t2} Suppose the duct length $L<L_m$ and the upstream state $(\rho_-, u_-)$ is supersonic, i.e. $u_-> \sqrt{\gamma}\rho_-^{{\gamma-1}\over 2}$. Then there exists an $\varepsilon_{0}>0$ such that for any fixed $\varepsilon$ with $0<\varepsilon\leq\varepsilon_{0}$, if \begin{align} \|(\rho_0(x)-\tilde{\rho}(x),~u_0(x)-\tilde{u}(x))\|_{C^{1}([0,L])}<\varepsilon,\label{a19}\\ \|(\rho_l(t)-\rho_-, ~u_l(t)-u_-)\|_{C^{1}([0,+\infty))}<\varepsilon,\label{a20} \end{align} then the mixed initial-boundary value problem~\eqref{a1}-\eqref{a3} has a unique $C^{1}$ solution $V=(\rho(t,x),u(t,x))^{\top}$ in the domain $E=\{(t,x)|t>0,x\in(0,L]\}$, satisfying \begin{align*} \|V-\tilde{V}\|_{C^{1}(E)}<C\varepsilon \end{align*} for some constant $C>0$ and \begin{align*} V(t+P,x)=V(t,x),\quad\forall t>T_{1},x\in[0,L], \end{align*} where $\tilde{V}=(\tilde{\rho}(x),\tilde{u}(x))^{\top}$ is the supersonic Fanno flow obtained in Theorem 1.1 and \begin{align}\label{a21} T_{1}=\max_{\substack{t\geq0,x\in[0,L]\\i=1,2}}\frac{L}{\lambda_{i}(V(t,x))}.
\end{align} \end{theorem} \begin{remark} For the supersonic flow, the flow at $x=L$ is completely determined by the initial data at $x\in[0,L]$ and the boundary conditions at $x=0$, so we only need to give the boundary condition at $x=0$. \end{remark} \indent The rest of the paper is organised as follows. In Section $2$, we construct the Fanno flow. In Section $3$, we present a reformulation of the problem by perturbing the solution around the supersonic Fanno flow and introduce a wave decomposition for the perturbed solution. In Section $4$, we prove global existence and uniqueness of solutions with the help of uniform a-priori estimates. In Section $5$, we prove time-periodicity of solutions by Gronwall's inequality. \section{Fanno Flow} \indent\indent Fanno flow refers to adiabatic flow through a constant-area duct where the effect of friction (i.e.\ $\beta<0$) is considered. The friction causes the flow properties to change along the duct. For the completeness of our results, we also consider the case $\beta>0$ in this section.\\ \indent We rewrite the initial-boundary value problem~\eqref{a1}-\eqref{a3} in terms of the sound speed $c=\sqrt{\gamma}\rho^{\frac{\gamma-1}{2}}$ and the fluid velocity $u$ as follows \begin{equation}\label{a4} \left\{\begin{aligned} &c_{t}+c_{x}u+\frac{\gamma-1}{2}cu_{x}=0,\\ &u_{t}+uu_{x}+\frac{2}{\gamma-1}cc_{x}=\beta|u|^{\alpha}u,\\ &(c,u)^{\top}|_{t=0}=(c_{0}(x),u_{0}(x))^{\top},\\ &(c,u)^{\top}|_{x=0}=(c_{l}(t),u_{l}(t))^{\top}, \end{aligned}\right.
\end{equation} where $c_{0}(x)=\sqrt{\gamma}\rho_{0}^{\frac{\gamma-1}{2}}(x), c_{l}(t)=\sqrt{\gamma}\rho_{l}^{\frac{\gamma-1}{2}}(t)$.\\ \indent Now, we consider the positive solution $(\tilde{c},\tilde{u})^{\top}$ of the steady flow of system \eqref{a4} which satisfies \begin{align}\label{a5} \left\{ \begin{aligned} &\tilde{c}'\tilde{u}+\frac{\gamma-1}{2}\tilde{c}\tilde{u}'=0,\\ &\tilde{u}\tilde{u}'+\frac{2}{\gamma-1}\tilde{c}\tilde{c}' =\beta\tilde{u}^{1+\alpha},\\ &(\tilde{c},\tilde{u})^{\top}|_{x=0}=(c_{-},u_{-})^{\top}, \end{aligned} \right. \end{align} where $u_{-}$ and $c_{-}$ are two positive constants.\\ \indent First, by $~\eqref{a5}_{1}$, we get \begin{align} \tilde{c}&=c_{-}u_{-}^{\frac{\gamma-1}{2}}\tilde{u}^{-\frac{\gamma-1}{2}}.\label{a6} \end{align} Substituting \eqref{a6} into $\eqref{a5}_{2}$, we have \begin{align} \tilde{u}^{-\alpha}\tilde{u}'-c_{-}^{2}u_{-}^{\gamma-1}\tilde{u}^{-\gamma-\alpha-1}\tilde{u}' =\beta.\label{a7} \end{align} We consider~\eqref{a7} by classifying $\alpha$ and $\beta$.\\\\ \textbf{Case 1:} $\alpha\neq1$ and $\alpha\neq-\gamma$.\\ \indent In this case, \eqref{a7} becomes \begin{align} \frac{1}{-\alpha+1}(\tilde{u}^{-\alpha+1})'+\frac{1}{\gamma+\alpha}c_{-}^{2}u_{-}^{\gamma-1} (\tilde{u}^{-\gamma-\alpha})'=\beta.\label{a8} \end{align} Integrating~\eqref{a8} from $0$ to $x$, we get \begin{align}\label{a9} \frac{1}{-\alpha+1}\tilde{u}^{-\alpha+1}+\frac{1}{\gamma+\alpha}c_{-}^{2}u_{-}^{\gamma-1} \tilde{u}^{-\gamma-\alpha}=\frac{1}{-\alpha+1}u_{-}^{-\alpha+1}+\frac{1}{\gamma+\alpha} c_{-}^{2}u_{-}^{-1-\alpha}+\beta x. \end{align} Denote the left-hand-side function of \eqref{a9} as $h(s)$, i.e. $$ h(s)=\frac{1}{-\alpha+1}s^{-\alpha+1}+\frac{1}{\gamma+\alpha}c_{-}^{2}u_{-}^{\gamma-1} s^{-\gamma-\alpha}, $$ then we deduce \begin{align*} &h'(s)<0,\quad {\rm for~ }0<s<s_{c};\\ &h'(s)>0,\quad {\rm for~ }s>s_{c}, \end{align*} where $s_{c}=c_{-}^{\frac{2}{\gamma+1}}u_{-}^{\frac{\gamma-1}{\gamma+1}}$. 
This means that $h(s)$ attains its minimum at $s=s_c$. On the other hand, from~\eqref{a6}, we have $\tilde{c}=c_{-}^{\frac{2}{\gamma+1}}u_{-}^{\frac{\gamma-1}{\gamma+1}}$ when $\tilde{u}=s_{c}=c_{-}^{\frac{2}{\gamma+1}}u_{-}^{\frac{\gamma-1}{\gamma+1}}$. That is, the flow speed equals the sound speed (i.e.\ $M=1$) at the choked point $(s_c, h(s_c))$. See Figure 1 below.\\ \begin{figure}[H] \centering \includegraphics[width=9cm]{2.png} \begin{center} Figure 1 \end{center} \end{figure} \indent If $\beta>0$ and the upstream is supersonic (i.e. $u_->c_-$), then by \eqref{a9} $\tilde{u}$ is monotonically increasing and $\tilde{u}> u_{-}$. By \eqref{a6}, $\tilde{c}$ is monotonically decreasing and $\tilde{c}< c_{-}$. Then, $\tilde{u}>\tilde{c}$. If $\beta>0$ and the upstream is subsonic (i.e. $u_-<c_-$), $\tilde{u}$ is monotonically decreasing and $\tilde{c}$ is monotonically increasing. Then $\tilde{u}<\tilde{c}$.\\ \indent When $\beta<0$, from \eqref{a9}, $h(s)$ decreases along the duct until it reaches its minimum. Therefore, we can get the maximal length of the duct $L_{m}$ for a supersonic or subsonic flow before it becomes choked, which is \begin{align}\label{L1} L_{m}=\frac{1}{\beta}\Big(\frac{1}{-\alpha+1}(s_{c}^{-\alpha+1}-u_{-}^{-\alpha+1})+\frac{1}{\gamma+\alpha}c_{-}^{2}(u_{-}^{\gamma-1}s_{c}^{-\gamma-\alpha}-u_{-}^{-1-\alpha})\Big). \end{align} \textbf{Case 2:} $\alpha=1$ or $\alpha=-\gamma$.\\ \indent Now, \eqref{a7} becomes \begin{align} (\ln\tilde{u})'+\frac{1}{\gamma+1}c_{-}^{2}u_{-}^{\gamma-1} (\tilde{u}^{-\gamma-1})'=\beta, \quad {\rm for} ~~\alpha=1,\label{a10} \end{align} and \begin{align} \frac{1}{\gamma+1}(\tilde{u}^{\gamma+1})'-c_{-}^{2}u_{-}^{\gamma-1} (\ln\tilde{u})'=\beta, \quad {\rm for} ~~\alpha=-\gamma.
\label{a12} \end{align} Integrating~\eqref{a10} and~\eqref{a12} from $0$ to $x$, we get \begin{align} \ln\tilde{u}+\frac{1}{\gamma+1}c_{-}^{2}u_{-}^{\gamma-1} \tilde{u}^{-\gamma-1}=\ln u_{-}+\frac{1}{\gamma+1}c_{-}^{2}u_{-}^{-2}+\beta x, \quad {\rm for} ~~\alpha=1, \label{a11} \end{align} and \begin{align} \frac{1}{\gamma+1}\tilde{u}^{\gamma+1}-c_{-}^{2}u_{-}^{\gamma-1}\ln\tilde{u} =\frac{1}{\gamma+1}u_{-}^{\gamma+1}-c_{-}^{2}u_{-}^{\gamma-1}\ln u_{-}+\beta x, \quad {\rm for}~~\alpha=-\gamma. \label{a13} \end{align} Define $$ f(s)=\ln s+\frac{1}{\gamma+1}c_{-}^{2}u_{-}^{\gamma-1} s^{-\gamma-1}, $$ and $$ g(s)=\frac{1}{\gamma+1}s^{\gamma+1}-c_{-}^{2}u_{-}^{\gamma-1}\ln s. $$ The functions $f(s)$ and $g(s)$ attain their minima at $s=s_c=c_{-}^{\frac{2}{\gamma+1}}u_{-}^{\frac{\gamma-1}{\gamma+1}}$. Furthermore, we get the maximal length of the duct $L_{m}$: \begin{align}\label{L2} L_{m}=\frac{1}{\beta}\Big(\frac{1}{\gamma+1}c_{-}^{2}(u_-^{\gamma-1}s_c^{-\gamma-1}-u_-^{-2})+ \ln\frac{s_c}{u_{-}}\Big), {~~\rm for~~} \alpha=1 \end{align} and \begin{align}\label{L3} L_{m}=\frac{1}{\beta}\Big(\frac{1}{\gamma+1}(s_c^{\gamma+1}-u_{-}^{\gamma+1})-c_{-}^{2} u_{-}^{\gamma-1}\ln\frac{s_c}{u_{-}}\Big), {~~\rm for~~} \alpha=-\gamma. \end{align} \indent Similar results to those of Case 1 hold; we omit the details here.\\ \indent From the above discussion, we have the following lemma.
\begin{lemma}\label{t1} If $u_->0, c_->0$ and the duct length $L< L_m$, where $L_m$ is a positive constant depending only on $\alpha, \beta, \gamma, c_-$ and $u_-$ (See~\eqref{L1},\eqref{L2},\eqref{L3}), then the Cauchy problem~\eqref{a5} admits a unique smooth positive solution $(\tilde{c}(x),\tilde{u}(x))^{\top}$ which satisfies the following properties: \begin{enumerate}[1)] \item $0<\tilde{u}(x)<u_-<c_-<\tilde{c}(x)$,~~~ if $\beta>0$ and $c_->u_-$; \item $0<\tilde{c}(x)<c_-<u_-<\tilde{u}(x)$,~~~ if $\beta>0$ and $c_-<u_-$; \item $0<u_-<\tilde{u}(x)<\tilde{c}(x)<c_-$,~~~ if $\beta<0$ and $c_->u_-$; \item $0<c_-<\tilde{c}(x)<\tilde{u}(x)<u_-$,~~~ if $\beta<0$ and $c_-<u_-$. \end{enumerate} \end{lemma} \indent This result means that a subsonic flow entering a duct with friction $(\beta<0)$ will have an increase in its Mach number until the flow is choked at $M=1$, i.e. $\tilde{u}=\tilde{c}$. Conversely, the Mach number of a supersonic flow will decrease until the flow is choked. However, if a flow enters a duct with acceleration $(\beta>0)$, the Mach number of a subsonic flow will decrease and the Mach number of a supersonic flow will increase (i.e. the initial subsonic or supersonic state is accelerated). It is worth pointing out that the theoretical calculations are consistent with experiments. Different from the calculations in \cite{Wei}, where the authors consider a differential equation $\frac{dM}{dx}$ that relates the change in Mach number to the length of the duct, we rewrite the governing equations in terms of the sound speed and the flow speed. Fortunately, the resulting equations can be decoupled easily. Therefore, we can determine the maximal duct length at which the flow chokes, whether the upstream state is supersonic or subsonic.
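For Case 1 the construction is explicit enough to evaluate numerically. The sketch below (illustrative only; all parameter values used with it are hypothetical) computes the sonic speed $s_c$, the choked length \eqref{L1}, and the supersonic profile $\tilde u(x)$ by solving \eqref{a9} on the branch $\tilde u\in(s_c,u_-)$ for $\beta<0$:

```python
import numpy as np
from scipy.optimize import brentq

def fanno_case1(u_minus, c_minus, gamma, alpha, beta):
    # Choked length L_m and profile u(x) for Case 1 (alpha != 1, alpha != -gamma),
    # supersonic upstream (u_- > c_-) with friction beta < 0.
    def h(s):   # left-hand side of eq. (a9)
        return (s**(1 - alpha)/(1 - alpha)
                + c_minus**2 * u_minus**(gamma - 1) * s**(-gamma - alpha)/(gamma + alpha))
    s_c = c_minus**(2/(gamma + 1)) * u_minus**((gamma - 1)/(gamma + 1))  # sonic speed
    L_m = (h(s_c) - h(u_minus)) / beta
    def u_of_x(x):   # solve h(u) = h(u_-) + beta*x on the supersonic branch
        return brentq(lambda s: h(s) - h(u_minus) - beta*x, s_c, u_minus)
    return s_c, L_m, u_of_x
```

On this branch the profile decreases monotonically from $u_-$ toward $s_c$, in agreement with the lemma.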
Thus by~\lemref{t1} and $\tilde{c}=\sqrt{\gamma}\tilde{\rho}^{\frac{\gamma-1}{2}}$, we can directly get~\theref{t3}.\\ \indent From the result, we observe that no matter what the constant $\alpha$ is, the Mach number of a supersonic (subsonic) flow will increase (decrease) when $\beta>0$. When $\beta<0$, the Mach number of a supersonic flow will decrease until the flow is choked; conversely, a subsonic flow will have an increase in its Mach number until the flow is choked.\\ \section{Reformulation of Problem and Wave Decomposition}\label{s2} \indent\indent For the supersonic flow, we should have $u>0$. Then, we can write the system~\eqref{a1} as \begin{align}\label{b1} \left\{ \begin{aligned} &\rho_{t}+\rho_{x}u+\rho u_{x}=0,\\ &u_{t}+uu_{x}+\gamma\rho^{\gamma-2}\rho_{x}=\beta u^{\alpha+1}. \end{aligned}\right. \end{align} \indent Let \begin{align}\label{b2} \rho(t,x)=\bar{\rho}(t,x)+\tilde{\rho}(x),\quad u(t,x)=\bar{u}(t,x) +\tilde{u}(x), \end{align} where $(\bar{\rho}(t,x),\bar{u}(t,x))^{\top}$ is the perturbation around the supersonic Fanno flow. Substituting~\eqref{b2} into~\eqref{b1}, we get \begin{align}\label{b3} \left\{ \begin{aligned} &\bar{\rho}_{t}+\bar{\rho}_{x}u+\rho \bar{u}_{x}+\tilde{\rho}'\bar{u}+\bar{\rho}\tilde{u}' +\tilde{\rho}'\tilde{u}+\tilde{\rho}\tilde{u}'=0,\\ &\bar{u}_{t}+u\bar{u}_{x}+\bar{u}\tilde{u}' +\tilde{u}\tilde{u}'+\gamma\rho^{\gamma-2}\bar{\rho}_{x} +\gamma\rho^{\gamma-2}\tilde{\rho}'=\beta(\bar{u}+\tilde{u})^{\alpha+1}. \end{aligned}\right. \end{align} Moreover, system~\eqref{b3} can be further written as \begin{align}\label{b4} \left\{ \begin{aligned} &\bar{\rho}_{t}+\bar{\rho}_{x}u+\rho \bar{u}_{x}=-\tilde{\rho}'\bar{u}-\bar{\rho}\tilde{u}',\\ &\bar{u}_{t}+u\bar{u}_{x}+\gamma\rho^{\gamma-2}\bar{\rho}_{x} =-F(\rho,\tilde{\rho})\bar{\rho}\tilde{\rho}'-\bar{u}\tilde{u}' -G(u,\tilde{u})\bar{u}, \end{aligned}\right.
\end{align} where $F(\rho,\tilde{\rho})\bar{\rho}=\gamma(\rho^{\gamma-2}-\tilde{\rho}^{\gamma-2})$, $G(u,\tilde{u})\bar{u}=-\beta[u^{\alpha+1}-\tilde{u}^{\alpha+1}]$ and $F(\rho,\tilde{\rho})$ and $G(u,\tilde{u})$ can be given the following expressions \begin{align} F(\rho,\tilde{\rho})=\gamma(\gamma-2)\int_{0}^{1}(\theta\bar{\rho}+\tilde{\rho})^{\gamma-3}d\theta,\quad G(u,\tilde{u})=-\beta(\alpha+1)\int_{0}^{1}(\theta\bar{u}+\tilde{u})^{\alpha}d\theta. \label{Exps1} \end{align} \indent We also consider the perturbations of the initial and boundary conditions. The initial data are reformulated as \begin{align}\label{a14} t=0:\left\{ \begin{aligned} \rho_{0}(x)=\bar{\rho}_{0}(x)+\tilde{\rho}(x),\quad x\in[0,L],\\ u_{0}(x)=\bar{u}_{0}(x)+\tilde{u}(x),\quad x\in[0,L], \end{aligned}\right. \end{align} where $L<L_{m}$, and the boundary condition is \begin{align}\label{a15} x=0:\left\{ \begin{aligned} &\rho_{l}(t)=\bar{\rho}_{l}(t)+\tilde{\rho}(0),\quad t\geq0,\\ &u_{l}(t)=\bar{u}_{l}(t)+\tilde{u}(0),\quad t\geq0, \end{aligned}\right. \end{align} where $\bar{\rho}_{0},\bar{u}_{0}, \bar{\rho}_{l},\bar{u}_{l}$ are $C^{1}$ functions.\\ \indent Let $\bar{V}=(\bar{\rho},\bar{u})^{\top}$; then system~\eqref{b4} can be rewritten in the following quasi-linear form \begin{equation}\label{b5} \bar{V}_{t}+A(V)\bar{V}_{x}+D(\tilde{V})\bar{V}=0 \end{equation} with the initial data \begin{align} \bar{V}|_{t=0}=\bar{V}_{0}=(\bar{\rho}_{0},\bar{u}_{0})^{\top},\label{1in1} \end{align} and the boundary condition \begin{align} V|_{x=0}=V_{l}&=(\rho_{l},u_{l})^{\top},\label{1in2} \end{align} where $V(t,x)=\bar{V}(t,x)+\tilde{V}(x)$, and $$ A(V)=\left( \begin{matrix} u & \rho\\ \gamma\rho^{\gamma-2} & u \end{matrix} \right),\quad D(\tilde{V})=\left( \begin{matrix} \tilde{u}' & \tilde{\rho}'\\ F(\rho,\tilde{\rho})\tilde{\rho}' & \tilde{u}'+G(u,\tilde{u}) \end{matrix} \right). $$ \indent We next introduce a wave decomposition of the solution $\bar{V}$ to the system~\eqref{b5}.
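As a numerical cross-check of the characteristic fields derived below, the matrix $A(V)$ can be assembled at an arbitrary state; the values of $\rho$, $u$ and $\gamma$ in such a check are illustrative only.

```python
import numpy as np

def char_fields(rho, u, gamma):
    # coefficient matrix A(V) and its eigen-structure for the quasi-linear system
    c = np.sqrt(gamma) * rho**((gamma - 1)/2)                # sound speed
    A = np.array([[u, rho], [gamma*rho**(gamma - 2), u]])
    lam = np.array([u - c, u + c])                           # lambda_1, lambda_2
    norm = np.sqrt(rho**2 + c**2)
    r = np.array([[rho, -c], [rho, c]]).T / norm             # columns r_1(V), r_2(V)
    l = (norm/2) * np.array([[1/rho, -1/c], [1/rho, 1/c]])   # rows l_1(V), l_2(V)
    return A, lam, r, l
```

One verifies that $A r_i=\lambda_i r_i$, $l_i r_j=\delta_{ij}$ and $r_i^{\top}r_i=1$, matching \eqref{b6}-\eqref{Leg1}.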
We can easily get the following two eigenvalues of the coefficient matrix $A(V)$ $$ \lambda_{1}(V)=u-c,\quad\lambda_{2}(V)=u+c, $$ where $c=\sqrt{\gamma}\rho^{\frac{\gamma-1}{2}}$. The corresponding two right eigenvectors $r_{i}, i=1,2$ are \begin{align}\label{b6} r_{1}(V)=\frac{1}{\sqrt{\rho^{2}+c^{2}}}(\rho,-c)^{\top},\quad r_{2}(V)=\frac{1}{\sqrt{\rho^{2}+c^{2}}}(\rho,c)^{\top}. \end{align} The left eigenvectors $l_{i}(V), i=1,2$ are determined by \begin{align}\label{b7} l_{i}(V)r_{j}(V)\equiv\delta_{ij},\quad r_{i}^{\top}(V)r_{i}(V)\equiv1,\quad i,j=1,2, \end{align} where $\delta_{ij}$ stands for the Kronecker symbol. Then, $l_{i}, i=1,2$ have the following expressions \begin{align} l_{1}(V)=\frac{\sqrt{\rho^{2}+c^{2}}}{2}(\rho^{-1},-c^{-1}),\quad l_{2}(V)=\frac{\sqrt{\rho^{2}+c^{2}}}{2}(\rho^{-1},c^{-1}), \label{Leg1} \end{align} which have the same regularity as $r_{i}(V)$.\\ \indent Let \begin{align}\label{b8} m_{i}=l_{i}(V)\bar{V},\quad n_{i}=l_{i}(V)\bar{V}_{x},\quad m=(m_{1},m_{2})^{\top},\quad n=(n_{1},n_{2})^{\top}, \end{align} then \begin{align} \bar{V}&=\sum_{k=1}^{2}m_{k}r_{k}(V),\quad \frac{\partial\bar{V}}{\partial x}=\sum_{k=1}^{2}n_{k}r_{k}(V),\label{Sdec1}\\ \frac{\partial\bar{V}}{\partial t}&=-D(\tilde{V})\bar{V}-\sum_{k=1}^{2}\lambda_{k}(V)n_{k}r_{k}(V). \label{Sdec2} \end{align} Thus, we have \begin{equation}\label{b9} \begin{split} \frac{d\bar{V}}{d_{i}t}&=\frac{\partial\bar{V}}{\partial t}+\lambda_{i}(V)\frac{\partial\bar{V}}{\partial x}\\ &=\sum_{k=1}^{2}(\lambda_{i}(V)-\lambda_{k}(V))n_{k}r_{k}(V) -D(\tilde{V})\bar{V}.
\end{split} \end{equation} \indent By~\eqref{b7}-\eqref{b9}, one has \begin{equation}\label{b10} \begin{split} \frac{d m_{i}}{d_{i}t}=&\frac{\partial m_{i}}{\partial t}+\lambda_{i}(V)\frac{\partial m_{i}}{\partial x}\\ =&\sum_{j,k=1}^{2}\Psi_{ijk}(V)n_{j}m_{k}+\sum_{j,k=1}^{2}\tilde{\Psi}_{ijk}(V) m_{j}m_{k}-\sum_{k=1}^{2}\tilde{\tilde{\Psi}}_{ik}(V)m_{k}, \end{split} \end{equation} where \begin{align} \Psi_{ijk}(V)&=(\lambda_{j}(V)-\lambda_{i}(V))l_{i}(V)r_{j}(V)\cdot\nabla_{V}r_{k}(V), \label{mep1}\\ \tilde{\Psi}_{ijk}(V)&=l_{i}(V)D(\tilde{V})r_{j}(V)\cdot\nabla_{V}r_{k}(V), \label{mep2}\\ \tilde{\tilde{\Psi}}_{ik}(V)&=\lambda_{i}(V) l_{i}(V)\tilde{V}'\cdot\nabla_{V}r_{k}(V)+l_{i}(V)D(\tilde{V}) r_{k}(V). \label{mep3} \end{align} \indent Similarly, using~\eqref{b5} and~\eqref{b7}-\eqref{b9}, we also get \begin{equation}\label{b11} \begin{split} \frac{d n_{i}}{d_{i}t}=&\frac{\partial n_{i}}{\partial t}+\lambda_{i}(V)\frac{\partial n_{i}}{\partial x}\\ =&\sum_{j,k=1}^{2}\Phi_{ijk}(V)n_{j}n_{k}+\sum_{j,k=1}^{2}\tilde{\Phi}_{ijk}(V)n_{k}-\sum_{k=1}^{2}l_{i}(V)D_{x}(\tilde{V})r_{k}(V)m_{k}, \end{split} \end{equation} where the term $D_{x}(\tilde{V})$ makes sense if $\tilde{V}$ is a $C^{2}$ function, and \begin{align} \Phi_{ijk}(V)=&(\lambda_{j}(V)-\lambda_{k}(V))l_{i}(V)r_{j}(V)\cdot\nabla_{V}r_{k}(V)\notag\\ &-r_{j}(V)\cdot\nabla_{V}\lambda_{k}(V)\delta_{ik},\label{nep1}\\ \tilde{\Phi}_{ijk}(V)=&-\lambda_{k}(V)l_{i}(V)\tilde{V}'\cdot\nabla_{V}r_{k}(V) +l_{i}(V)D(\tilde{V})r_{j}(V)\cdot\nabla_{V}r_{k}(V)m_{j}(V)\notag\\ &-l_{i}(V)D(\tilde{V})r_{k}(V)-\tilde{V}'\cdot\nabla_{V}\lambda_{k}(V)\delta_{ik}. \label{nep2} \end{align} \indent For later use, we rewrite system~\eqref{b4} by exchanging the variables $t$ and $x$ as follows \begin{equation*} \bar{V}_{x}+A^{-1}(V)\bar{V}_{t}+A^{-1}(V)D(\tilde{V})\bar{V}=0.
\end{equation*} Denote by $\hat{\lambda}_{i}(V), i=1,2$, the eigenvalues of the matrix $A^{-1}(V)$, and by $\hat{l}_{i}(V)$ and $\hat{r}_{i}(V)$, $i=1,2$, the corresponding left and right eigenvectors, respectively. They can be determined in terms of $\lambda_{i}(V), r_{i}(V)$ and $l_{i}(V)$ as follows \begin{align} \hat{\lambda}_{i}(V)=\lambda_{i}(V)^{-1},\quad \hat{r}_{i}(V)=r_{i}(V),\quad \hat{l}_{i}(V)=l_{i}(V). \label{Evf1} \end{align} Therefore, $\hat{r}_{i}(V)$ and $\hat{l}_{i}(V)$ also satisfy~\eqref{b7}. \indent Let \begin{align} \hat{m}_{i}=\hat{l}_{i}(V)\bar{V},\quad\hat{n}_{i}=\hat{l}_{i}(V)\bar{V}_{t},\quad \hat{m}=(\hat{m}_{1},\hat{m}_{2})^{\top},\quad\hat{n}=(\hat{n}_{1},\hat{n}_{2})^{\top}. \label{hsdec1} \end{align} By applying similar arguments as in~\eqref{b10}-\eqref{nep2}, we can get \begin{equation}\label{b12} \begin{split} \frac{d \hat{m}_{i}}{d_{i}x}=&\frac{\partial\hat{m}_{i}}{\partial x}+\hat{\lambda}_{i}(V)\frac{\partial\hat{m}_{i}}{\partial t}\\ =&\sum_{j,k=1}^{2}\hat{\Psi}_{ijk}(V)\hat{n}_{j}\hat{m}_{k} +\sum_{j,k=1}^{2}\hat{\tilde{\Psi}}_{ijk}(V)\hat{m}_{j}\hat{m}_{k} -\sum_{k=1}^{2}\hat{\tilde{\tilde{\Psi}}}_{ik}(V)\hat{m}_{k} \end{split} \end{equation} with \begin{align} \hat{\Psi}_{ijk}(V)=&(\hat{\lambda}_{j}(V)-\hat{\lambda}_{i}(V)) \hat{l}_{i}(V)\hat{r}_{j}(V)\cdot\nabla_{V}\hat{r}_{k}(V),\label{hmep1}\\ \hat{\tilde{\Psi}}_{ijk}(V)=&\hat{\lambda}_{i}(V)\hat{l}_{i}(V) D(\tilde{V})\hat{r}_{j}(V)\cdot\nabla_{V}\hat{r}_{k}(V),\label{hmep2}\\ \hat{\tilde{\tilde{\Psi}}}_{ik}(V)=&\hat{l}_{i}(V)\tilde{V}'\cdot\nabla_{V} \hat{r}_{k}(V)+\hat{\lambda}_{i}(V)\hat{l}_{i}(V)D(\tilde{V})\hat{r}_{k}(V),\label{hmep3} \end{align} and \begin{equation}\label{b13} \begin{split} \frac{d \hat{n}_{i}}{d_{i}x}=&\frac{\partial\hat{n}_{i}}{\partial x}+\hat{\lambda}_{i}(V)\frac{\partial\hat{n}_{i}}{\partial t}\\ =&\sum_{j,k=1}^{2}\hat{\Phi}_{ijk}(V)\hat{n}_{j}\hat{n}_{k}
+\sum_{j,k=1}^{2}\hat{\tilde{\Phi}}_{ijk}(V)\hat{n}_{k}-\sum_{k=1}^{2}\hat{l}_{i}(V)(A^{-1}(V)D(\tilde{V}))_{t}\hat{r}_{k}(V)\hat{m}_{k}(V) \end{split} \end{equation} with \begin{align*} \hat{\Phi}_{ijk}(V)=&(\hat{\lambda}_{j}(V)-\hat{\lambda}_{k}(V))\hat{l}_{i}(V)\hat{r}_{j}(V)\cdot\nabla_{V} \hat{r}_{k}(V)-\hat{r}_{j}(V)\cdot\nabla_{V}\hat{\lambda}_{k}(V)\delta_{ik},\\ \hat{\tilde{\Phi}}_{ijk}(V)=&-\hat{l}_{i}(V)\tilde{V}'\cdot\nabla_{V}\hat{r}_{k}(V)+ \hat{\lambda}_{i}(V)\hat{l}_{i}(V)D(\tilde{V})\hat{r}_{j}(V)\cdot\nabla_{V}\hat{r}_{k}(V)\hat{m}_{j}(V)\\ &-\hat{\lambda}_{i}(V)\hat{l}_{i}(V)D(\tilde{V})\hat{r}_{k}(V). \end{align*} \indent We also provide the wave decomposition of the initial and boundary data as follows \begin{align}\label{ib8} m_{0}=(m_{10},m_{20})^{\top} ,\quad n_{0}=(n_{10},n_{20})^{\top} \end{align} with $$ m_{i0}=l_{i}(V_{0})\bar{V}_{0},\quad n_{i0}=l_{i}(V_{0})\bar{V}'_{0}, $$ and \begin{align} \hat{m}_{l}=(\hat{m}_{1l},\hat{m}_{2l})^{\top},\quad\hat{n}_{l}=(\hat{n}_{1l},\hat{n}_{2l})^{\top}. 
\label{ihsdec1} \end{align} with $$ \hat{m}_{il}=\hat{l}_{i}(V_{l})\bar{V}_{l},\quad\hat{n}_{il}=\hat{l}_{i}(V_{l})\bar{V}'_{l}, $$ where $\bar{V}_{0}$ and $\bar{V}_{l}$ are defined by~\eqref{1in1} and~\eqref{1in2} respectively, and \begin{align} V_{0}&=(\rho_{0},u_{0})^{\top},\quad \bar{V}'_{0}=(\bar{\rho}'_{0},\bar{u}'_{0})^{\top},\label{in1}\\ V_{l}&=(\rho_{l},u_{l})^{\top},\quad \bar{V}'_{l}=(\bar{\rho}'_{l},\bar{u}'_{l})^{\top}.\label{in2} \end{align} \section{Existence of Global Solutions}\label{s3} \indent\indent In this section, we will prove the existence of the global solution $\bar{V}=(\bar{\rho}(t,x),\bar{u}(t,x))^{\top}$ to the initial-boundary value problem~\eqref{b5} and~\eqref{1in1}-\eqref{1in2} in the domain $E=\{(t,x)|t>0,x\in(0,L]\}$.\\ \indent The local existence and uniqueness of the $C^{1}$ solution to the mixed initial-boundary value problem~\eqref{b5} and~\eqref{1in1}-\eqref{1in2} is guaranteed by the classical theory in~\cite{Yu}, which can be extended globally in terms of a uniform a-priori estimate of the global $C^{1}$ solutions (see~\cite{Wei, Li, Zhou, Wang,Rao,Jin}). \indent Next we will establish a uniform a-priori estimate of the classical solution to help us extend the local solution globally. Let us first give the following assumption \begin{align} |m_{i}(t,x)|,\, |n_{i}(t,x)|\leq C\varepsilon,\quad\forall i=1,2, \quad (t,x)\in E \label{c1} \end{align} for a suitably small positive constant $\varepsilon$, which will be determined later. From~\eqref{b6}, \eqref{Sdec1} and~\eqref{c1}, we have \begin{align} |\bar{V}(t,x)|,\,|\frac{\partial\bar{V}}{\partial x}(t,x)|\leq C\varepsilon,\quad\forall (t,x)\in E. \label{Casp2} \end{align} Combining~\lemref{t1} with~\eqref{Casp2}, we obtain the following results. The details of the proof are omitted here.
\begin{lemma}\label{Casp3} For sufficiently small $\varepsilon$, it holds that \begin{align} &|D(\tilde{V})(t,x)|, |\partial_{x}D(\tilde{V})(t,x)|, |\nabla_{V}r_{i}(V)(t,x)|, |\tilde{V}'|, T_{1}\leq C,\label{Casp4}\\ &C^{-1}\leq |\lambda_{i}(V)(t,x)|, |\nabla_{V}\hat{\lambda}_{i}(V)(t,x)|, |l_{i}(V)(t,x)|\leq C, \label{Casp5}\\ &|\frac{\partial\bar{V}}{\partial t}(t,x)|, |\partial_{t}A^{-1}(V)(t,x)|, |\partial_{t}D(\tilde{V})(t,x)|\leq C\varepsilon \label{Casp6} \end{align} for any $(t,x)\in E$, where the positive constant $C$ depends only on $c_{-}, u_{-}, \tilde{c}(L)$, $\tilde{u}(L), \gamma, \alpha$ and $\beta$. \end{lemma} We observe from~\eqref{Casp2} and~\eqref{Casp5} that it suffices to prove~\eqref{c1} for a uniform a-priori estimate of the global $C^{1}$ solution. \indent Write $x=x_{i}^{*}(t), i=1,2$, for the characteristic curves of $\lambda_{i}$ passing through the point $(0,0)$, which satisfy \begin{align*} \frac{d x_{i}^{*}(t)}{dt}=\lambda_{i}(V(t,x_{i}^{*}(t))),\quad x_{i}^{*}(0)=0. \end{align*} Note that $x=x_{2}^{*}(t)$ lies below $x=x_{1}^{*}(t)$ since $\lambda_{2}(V)>\lambda_{1}(V)$.\\ \indent We divide the region $E$ into three small regions and discuss the uniform a-priori estimate of the classical solutions in each small region separately.\\ \textbf{Region 1:} the region $E_{1}=\{(t,x)|0\leq t\leq T_{1}, 0\leq x\leq L, x\geq x_{2}^{*}(t)\}$.\\ \indent For any point $(t,x)\in E_{1}$, integrating the $i$-th equation in~\eqref{b10} along the $i$-th characteristic curve with respect to $\tau$ from $0$ to $t$, which intersects the $x$-axis at a point $(0,b_{i})$, we obtain from~\eqref{b10}, \eqref{mep1}-\eqref{mep3}, \eqref{c1}, \eqref{Casp4} and~\eqref{Casp5} that \begin{equation}\label{c2} \begin{aligned} |m_{i}(t,x(t))|\leq&|m_{i}(0,b_{i})|+\int_{0}^{t}\sum_{j,k=1}^{2}|\Psi_{ijk}(V)n_{j}m_{k}|d\tau\\ &+\int_{0}^{t}\sum_{j,k=1}^{2}|\tilde{\Psi}_{ijk}(V)m_{j}m_{k}|d\tau+\int_{0}^{t}\sum_{k=1}^{2}|\tilde{\tilde{\Psi}}_{ik}(V)m_{k}|d\tau\\
\leq&|m_{i0}(b_{i})|+C\int_{0}^{t}|m(\tau,x(\tau))|d\tau. \end{aligned} \end{equation} \indent Applying the same procedures as above for~\eqref{b11}, from~\eqref{nep1}, \eqref{nep2}, \eqref{c1}, \eqref{Casp4} and~\eqref{Casp5}, we have \begin{equation}\label{c3} \begin{split} |n_{i}(t,x(t))|\leq&|n_{i}(0,b_{i})|+\int_{0}^{t}\sum_{j,k=1}^{2}|\Phi_{ijk}(V) n_{j}n_{k}|d\tau\\ &+\int_{0}^{t}\sum_{j,k=1}^{2}|\tilde{\Phi}_{ijk}(V)n_{k}|d\tau+\int_{0}^{t}\sum_{k=1}^{2}|l_{i}(V)D_{x}(\tilde{V})r_{k}(V)m_{k}|d\tau\\ \leq&|n_{i0}(b_{i})|+C(\int_{0}^{t}|n(\tau,x(\tau))|d\tau+\int_{0}^{t}|m(\tau,x(\tau))|d\tau). \end{split} \end{equation} \indent Putting~\eqref{c2}-\eqref{c3} together, summing over $i=1,2$ and applying Gronwall's inequality, we have \begin{equation}\label{c4} |m(t,x)|+|n(t,x)|\leq (\|m_{0}\|_{C^{0}([0,L])}+\|n_{0}\|_{C^{0}([0,L])})(1+CT_{1}). \end{equation} Because of the arbitrariness of $(t,x)\in E_{1}$ and the boundedness of $T_{1}$ in~\eqref{Casp4}, we obtain from~\eqref{c4} that \begin{align} \max_{(t,x)\in E_{1}}|m(t,x)|+|n(t,x)|\leq C(\|m_{0}\|_{C^{0}([0,L])}+\|n_{0}\|_{C^{0}([0,L])}). \label{Apri1} \end{align} \textbf{Region 2:} the region $E_{2}=\{(t,x)|t\geq0, 0\leq x\leq L, 0\leq x\leq x_{1}^{*}(t)\}$.\\ \indent For any point $(t,x)\in E_{2}$, integrating in~\eqref{b12} with respect to $x$ along the $i$-th characteristic curve, which is assumed to intersect the $t$-axis at a point $(\tau_{i},0)$, we have from~\eqref{hmep1}-\eqref{hmep3}, \eqref{c1}, \eqref{Casp4} and~\eqref{Casp5} that \begin{align} |\hat{m}_{i}(t(x),x)|\leq&|\hat{m}_{il}(\tau_{i})|+C\int_{0}^{x}|\hat{m}(t(y),y)|dy. \label{hapri1} \end{align} \indent For~\eqref{b13}, applying the same procedures as above, we further use~\eqref{Casp6} to obtain \begin{align} |\hat{n}_{i}(t(x),x)|\leq&|\hat{n}_{il}(\tau_{i})|+C(\int_{0}^{x}|\hat{n}(t(y),y)|dy+ \int_{0}^{x}|\hat{m}(t(y),y)|dy).
\label{hapri2} \end{align} Summing~\eqref{hapri1} and~\eqref{hapri2} over $i=1,2$ and applying Gronwall's inequality, we have \begin{align}\label{c6} \max_{(t,x)\in E_{2}}|\hat{m}(t,x)|+|\hat{n}(t,x)|\leq C(\|\hat{m}_{l}\|_{C^{0}([0,+\infty))}+\|\hat{n}_{l}\|_{C^{0}([0,+\infty))}),\end{align} where we have used the arbitrariness of $(t,x)\in E_{2}$.\\ \textbf{Region 3:} the remaining region $$ E_{3}=\{(t,x)|0\leq t\leq T_{1}, 0\leq x\leq L, x_{1}^{*}(t)\leq x \leq x_{2}^{*}(t)\}. $$ \indent For any point $(t,x)\in E_{3}$, integrating the first equation in~\eqref{b10} and~\eqref{b11} along the first characteristic curve, which intersects the curve $x=x_{2}^{*}(t)$ at a point $(t_{1},x_{1})$, we get from~\eqref{mep1}-\eqref{mep3}, \eqref{nep1}, \eqref{nep2}, \eqref{c1}, \eqref{Casp4} and~\eqref{Casp5} that \begin{align} |m_{1}(t,x(t))|\leq&|m_{1}(t_{1},x_{1})|+C\int_{t_{1}}^{t}|m(\tau,x(\tau))|d\tau\notag\\ \leq&|m_{1}(t_{1},x_{1})|+C\int_{0}^{t}|m(\tau,x(\tau))|d\tau, \label{c7}\\ |n_{1}(t,x(t))|\leq&|n_{1}(t_{1},x_{1})|+C(\int_{0}^{t}|n(\tau,x(\tau))|d\tau +\int_{0}^{t}|m(\tau,x(\tau))|d\tau). \label{c8} \end{align} \indent Similarly, for any point $(t,x)\in E_{3}$, integrating the second equation in~\eqref{b10} and~\eqref{b11} along the second characteristic curve, which intersects the curve $x=x_{1}^{*}(t)$ at a point $(t_{2},x_{2})$, we have \begin{align} |m_{2}(t,x(t))|\leq&|m_{2}(t_{2},x_{2})|+C\int_{0}^{t}|m(\tau,x(\tau))|d\tau,\label{c9}\\ |n_{2}(t,x(t))|\leq&|n_{2}(t_{2},x_{2})|+C(\int_{0}^{t}|n(\tau,x(\tau))|d\tau+\int_{0}^{t}|m(\tau,x(\tau))|d\tau). 
\label{c10} \end{align} \indent By applying Gronwall's inequality, the combination of~\eqref{c7}-\eqref{c10} gives rise to \begin{equation}\label{c11} \begin{split} \max_{(t,x)\in E_{3}}(|m(t,x)|+|n(t,x)|)\leq& C(\|m_{0}\|_{C^{0}([0,L])}+\|n_{0}\|_{C^{0}([0,L])}\\ &+\|\hat{m}_{l}\|_{C^{0}([0,+\infty))}+\|\hat{n}_{l}\|_{C^{0}([0,+\infty))}), \end{split} \end{equation} where we have used~\eqref{Apri1}, \eqref{c6} and the arbitrariness of $(t,x)\in E_{3}$.\\ \indent We notice from~\eqref{Apri1}, \eqref{c6}, \eqref{c11}, \eqref{b8} and~\eqref{hsdec1} that, under the initial and boundary conditions~\eqref{a19}-\eqref{a20} for a sufficiently small $\varepsilon>0$ and the assumption~\eqref{Casp5}, the hypothesis~\eqref{c1} is valid for some constant $C>0$. Therefore, we obtain a uniform a-priori estimate for the global $C^{1}$ solution. The global existence of the solution to the initial-boundary value problem~\eqref{b5} and~\eqref{1in1}-\eqref{1in2} can then be established by the standard continuity method; the details are omitted here. \section{Periodic Solution}\label{s4} \indent\indent In this section, we will prove that the global solution $V=(\rho(t,x),u(t,x))^{\top}$ is a time-periodic function with period $P>0$.\\ \indent Using the Riemann invariants of system~\eqref{a1} \begin{align}\label{d1} r=\frac{1}{2}(u-\frac{2}{\gamma-1}c),~~~s=\frac{1}{2}(u+\frac{2}{\gamma-1}c), \end{align} system~\eqref{a1} can be converted into the following form \begin{align}\label{d2} \left\{ \begin{aligned} r_{t}+\lambda_{1}(r,s)r_{x}=\frac{\beta(r+s)^{\alpha+1}}{2},\\ s_{t}+\lambda_{2}(r,s)s_{x}=\frac{\beta(r+s)^{\alpha+1}}{2}, \end{aligned}\right. \end{align} where \begin{align*} \lambda_{1}=u-c=\frac{\gamma+1}{2}r-\frac{\gamma-3}{2}s,\quad \lambda_{2}=u+c=\frac{3-\gamma}{2}r+\frac{\gamma+1}{2}s. 
\end{align*} Correspondingly, the initial data and boundary conditions become \begin{align} &r(0,x)=r_{0}(x),~~s(0,x)=s_{0}(x),~~x\in[0,L]\label{d3},\\ &r(t,0)=r_{l}(t),~~~~s(t,0)=s_{l}(t),~~~\quad t\geq0\label{d4}, \end{align} where $r_{l}(t),s_{l}(t)$ are time-periodic with period $P>0$.\\ \indent For convenience in the later proof, we exchange the roles of $t$ and $x$; then problem~\eqref{d2} with~\eqref{d3}-\eqref{d4} becomes the following Cauchy problem in the domain $E$ \begin{equation}\label{d5} \left\{\begin{aligned} &r_{x}+\frac{1}{\lambda_{1}}r_{t}=\frac{\beta(r+s)^{\alpha+1}}{2\lambda_{1}},\\ &s_{x}+\frac{1}{\lambda_{2}}s_{t}=\frac{\beta(r+s)^{\alpha+1}}{2\lambda_{2}},\\ &r(t,0)=r_{l}(t),\\ &s(t,0)=s_{l}(t). \end{aligned}\right. \end{equation} Furthermore, setting $$ W=(r-\tilde{r},s-\tilde{s})^{\top},\quad \Lambda(t,x)=\left( \begin{array}{cc} \frac{1}{\lambda_{1}(r(t,x),s(t,x))} & 0\\ 0 & \frac{1}{\lambda_{2}(r(t,x),s(t,x))}\\ \end{array} \right), $$ we can rewrite~\eqref{d5} as \begin{align}\label{d6} W_{x}+\Lambda(t,x) W_{t}=\frac{\beta}{2}\Lambda(t,x)\left( \begin{aligned} (r+s)^{\alpha+1}\\ (r+s)^{\alpha+1}\\ \end{aligned} \right) -\frac{\beta}{2} \left( \begin{aligned} \frac{(\tilde{r}+\tilde{s})^{\alpha+1}}{\tilde{\lambda}_{1}}\\ \frac{(\tilde{r}+\tilde{s})^{\alpha+1}}{\tilde{\lambda}_{2}}\\ \end{aligned} \right), \end{align} where $$ \tilde{r}=\frac{1}{2}(\tilde{u}-\frac{2}{\gamma-1}\tilde{c}),\quad \tilde{s}=\frac{1}{2}(\tilde{u}+\frac{2}{\gamma-1}\tilde{c}), $$ \begin{align*} &\tilde{\lambda}_{1}=\lambda_{1}(\tilde{r},\tilde{s})=\frac{\gamma+1}{2}\tilde{r}-\frac{\gamma-3}{2}\tilde{s},\\ &\tilde{\lambda}_{2}=\lambda_{2}(\tilde{r},\tilde{s})=\frac{3-\gamma}{2}\tilde{r}+\frac{\gamma+1}{2}\tilde{s}. 
\end{align*} \indent By \begin{align*} \|\rho-\tilde{\rho}\|_{C^{1}(E)}+\|u-\tilde{u}\|_{C^{1}(E)}<C\varepsilon, \end{align*} and~\eqref{d1}, we obtain \begin{align}\label{d7} \|r(t,x)-\tilde{r}(x)\|_{C^{1}(E)}+\|s(t,x)-\tilde{s}(x)\|_{C^{1}(E)}<K_{1}\varepsilon \end{align} with $K_{1}>0$ a constant depending only on $\tilde{\rho}, \tilde{u}, \gamma$ and $L$.\\ \indent Next we will show that the following conclusion holds \begin{align}\label{d8} r(t+P,x)=r(t,x),~~s(t+P,x)=s(t,x),\quad\forall t>T_{1},x\in[0,L], \end{align} where $T_{1}$ is defined by~\eqref{a21}.\\ \indent Setting $$ U(t,x)=W(t+P,x)-W(t,x), $$ we obtain from~\eqref{d6} that \begin{align}\label{d9} \left\{\begin{aligned} &U_{x}+\Lambda(t,x)U_{t}=G(t,x),\\ &U(t,0)=0,\quad t>0, \end{aligned}\right. \end{align} where \begin{align*} G(t,x)=&\frac{\beta}{2}\Lambda(t+P,x)\left( \begin{aligned} (r(t+P,x)+s(t+P,x))^{\alpha+1}\\ (r(t+P,x)+s(t+P,x))^{\alpha+1}\\ \end{aligned} \right)\\ &-\frac{\beta}{2}\Lambda(t,x)\left( \begin{aligned} (r(t,x)+s(t,x))^{\alpha+1}\\ (r(t,x)+s(t,x))^{\alpha+1}\\ \end{aligned} \right)\\ &-[\Lambda(t+P,x)-\Lambda(t,x)]W_{t}(t+P,x). 
\end{align*} \indent Since $\lambda_{1}, \lambda_{2}$ are continuous functions of $(r,s)$, by~\eqref{d7} we obtain the following estimates \begin{align} &|W_{t}(t+P,x)|\leq K_{1}\varepsilon,\label{d10}\\ &|r(t+P,x)+s(t+P,x)|\leq K_{2},\label{d11}\\ &|\Lambda_{t}(r(t,x),s(t,x))|\leq K_{3}\varepsilon,\label{d12}\\ &|\Lambda(t+P,x)-\Lambda(t,x)|\leq K_{4}|U(t,x)|,\label{d13}\\ &|\Lambda(t,x)|\leq K_{5},\label{d14} \end{align} where the constants $K_{2}, K_{3}, K_{4}, K_{5}$ depend only on $\tilde{\rho}, \tilde{u}, \gamma$ and $L$.\\ \indent It follows from~\eqref{d10}-\eqref{d11}, \eqref{d13}-\eqref{d14} that \begin{equation}\label{d15} \begin{split} |G(t,x)|\leq&\frac{|\beta|}{2}|\Lambda(t,x)|\left( \begin{aligned} (\alpha+1)|\eta|^{\alpha}|U(t,x)|\\ (\alpha+1)|\eta|^{\alpha}|U(t,x)|\\ \end{aligned} \right)\\ &+\frac{|\beta|}{2}|\Lambda(t+P,x)-\Lambda(t,x)|\left( \begin{aligned} |r(t+P,x)+s(t+P,x)|^{\alpha+1}\\ |r(t+P,x)+s(t+P,x)|^{\alpha+1}\\ \end{aligned} \right)\\ &+|\Lambda(t+P,x)-\Lambda(t,x)||W_{t}(t+P,x)|\\ \leq& K_{6}|U(t,x)|, \end{split} \end{equation} where $\eta$ lies between $u(t,x)$ and $u(t+P,x)$, and the constant $K_{6}$ depends only on the same quantities as above.\\ \indent For a fixed point $(t_{0},x_{0})$ with $t_{0}>T_{1}, 0<x_{0}<L$, we can draw two characteristic curves $\Gamma_{1}:t=t_{1}^{*}(x)$ and $\Gamma_{2}:t=t_{2}^{*}(x)$, namely, \begin{align*} \frac{dt_{1}^{*}}{dx}=\frac{1}{\lambda_{1}(r(t_{1}^{*},x),s(t_{1}^{*},x))},\quad t_{1}^{*}(x_{0})=t_{0} \end{align*} and \begin{align*} \frac{dt_{2}^{*}}{dx}=\frac{1}{\lambda_{2}(r(t_{2}^{*},x),s(t_{2}^{*},x))},\quad t_{2}^{*}(x_{0})=t_{0} \end{align*} for $0<x<x_{0}$. 
It is easy to see that $\Gamma_{1}$ lies below $\Gamma_{2}$.\\ Setting \begin{align}\label{d16} I(x)=\frac{1}{2}\int_{t_{1}^{*}(x)}^{t_{2}^{*}(x)}|U(t,x)|^{2}dt, \end{align} where $0\leq x< x_{0}$.\\ \indent Since $t_{0}>T_{1}$, the definition of $T_{1}$ implies that $(t_{1}^{*}(0),t_{2}^{*}(0))\subset(0,+\infty)$; then by~\eqref{d9}, we have $U(t,0)\equiv0$ on this interval.\\ \indent Therefore, \begin{align*} I(0)=0. \end{align*} \indent Taking the derivative of $I(x)$ with respect to $x$, we get \begin{align*} I'(x)=& \int_{t_{1}^{*}(x)}^{t_{2}^{*}(x)}{U(t,x)^{T}U_{x}(t,x)}dt+\frac{1}{2}|{U(t_{2}^{*}(x),x)}|^{2} {\frac{1}{\lambda_{2}(r(t_{2}^{*}(x),x),s(t_{2}^{*}(x),x))}}\\ &-\frac{1}{2}|{U(t_{1}^{*}(x),x)}|^{2}{\frac{1}{\lambda_{1}(r(t_{1}^{*}(x),x),s(t_{1}^{*}(x),x))}}\\ \leq&-\int_{t_{1}^{*}(x)}^{t_{2}^{*}(x)}U(t,x)^{T}\Lambda(t,x)U_{t}(t,x)dt +\int_{t_{1}^{*}(x)}^{t_{2}^{*}(x)}U(t,x)^{T}G(t,x)dt\\ &+\frac{1}{2}U(t,x)^{T}\Lambda(t,x)U(t,x)|_{t=t_{1}^{*}(x)}^{t=t_{2}^{*}(x)}\\ =&-\frac{1}{2}\int_{t_{1}^{*}(x)}^{t_{2}^{*}(x)}(U(t,x)^{T}\Lambda(t,x)U(t,x))_{t}-U(t,x)^{T} \Lambda_{t}(t,x)U(t,x)dt\\ &+\int_{t_{1}^{*}(x)}^{t_{2}^{*}(x)}U(t,x)^{T}G(t,x)dt+\frac{1}{2}U(t,x)^{T}\Lambda(t,x)U(t,x) |_{t=t_{1}^{*}(x)}^{t=t_{2}^{*}(x)}\\ =&\frac{1}{2}\int_{t_{1}^{*}(x)}^{t_{2}^{*}(x)}U(t,x)^{T}\Lambda_{t}(t,x)U(t,x)dt +\int_{t_{1}^{*}(x)}^{t_{2}^{*}(x)}U(t,x)^{T}G(t,x)dt\\ \leq&(K_{3}\varepsilon+2K_{6})I(x). \end{align*} In the last inequality we have used~\eqref{d12} and~\eqref{d15}.\\ \indent Hence, by Gronwall's inequality, we get $I(x)\equiv0$ on $[0,x_{0})$. Furthermore, by the continuity of $I(x)$, we have $I(x_{0})=0$ and thus $U(t_{0},x_{0})=0$.\\ \indent Since $(t_{0},x_{0})$ is arbitrary, we have $$ U(t,x)\equiv0,\quad \forall t>T_{1},x\in[0,L], $$ which completes the proof of~\eqref{d8}. Then, using~\eqref{d1} and $c=\sqrt{\gamma}\rho^{\frac{\gamma-1}{2}}$, we conclude that $(\rho,u)^{\top}$ is also time-periodic with period $P>0$.
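\indent We remark that the Gronwall step above is explicit: since $I(0)=0$ and $I'(x)\leq(K_{3}\varepsilon+2K_{6})I(x)$ for $0\leq x<x_{0}$, Gronwall's inequality gives
\begin{align*}
0\leq I(x)\leq I(0)e^{(K_{3}\varepsilon+2K_{6})x}=0,\qquad 0\leq x<x_{0},
\end{align*}
so that $I(x)$ vanishes identically on $[0,x_{0})$, and hence $U\equiv0$ in the characteristic triangle bounded by $\Gamma_{1}$ and $\Gamma_{2}$.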
\section{Introduction} Throughout the paper, we focus only on commutative rings with a nonzero identity, and $R$ will always denote such a ring. We denote the set of all ideals of $R$ by $\mathcal{I}(R)$. A proper ideal $I$ of $R$ is an element $I\in\mathcal{I}(R)$ with $I\neq R$. Over the years, numerous types of ideals have been developed, such as prime, primary and maximal ideals. All of them play a significant role in characterizing a ring. The concept of prime ideals and its generalizations have a significant place in commutative algebra, since they are used in understanding the structure of rings. Recall that a proper ideal $I$ of $R$ is said to be a \textit{prime ideal} if whenever $xy\in I$ for some $x,y\in R$, then either $x\in I$ or $y\in I$ \cite{AtMac}. The importance of prime ideals led many researchers to work on prime ideals and their generalizations. See, for example, \cite{Ba2}, \cite{Be} and \cite{KoUlTe}. In \cite{AnSmi}, Anderson and Smith introduced the notion of weakly prime ideals, which generalizes that of prime ideals. A proper ideal $I$ of $R$ is called a \textit{weakly prime ideal} if $0\neq xy\in I$ for some elements $x,y\in R$ implies that $x\in I$ or $y\in I$. They gave many results concerning weakly prime ideals and used them to study factorization in commutative rings with zero divisors. Also, they gave necessary and sufficient conditions so that any proper ideal of $R$ can be written as a product of weakly prime ideals. It is clear that every prime ideal is weakly prime, but the converse is not true in general. Afterwards, Badawi, in his celebrated paper \cite{Ba1}, introduced the notion of 2-absorbing ideals and used them to characterize Dedekind domains. Recall from \cite{Ba1} that a nonzero proper ideal $I$ of $R$ is called a \textit{2-absorbing ideal} if $xyz\in I$ for some $x,y,z\in R$ implies either $xy\in I$ or $xz\in I$ or $yz\in I$. Note that every prime ideal is also a 2-absorbing ideal. 
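To illustrate the gap between the prime and weakly prime notions recalled above, consider the zero ideal of $\mathbb{Z}_{4}$: the condition $0\neq xy\in(\overline{0})$ is vacuous, so $(\overline{0})$ is trivially weakly prime, while
\begin{align*}
\overline{2}\cdot\overline{2}=\overline{0}\in(\overline{0}),\qquad \overline{2}\notin(\overline{0})
\end{align*}
shows that it is not prime. More generally, the zero ideal is weakly prime in every ring, but it is prime only when the ring is an integral domain.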
After this, over the past decades, 2-absorbing versions of ideals and many generalizations of 2-absorbing ideals attracted considerable attention from many researchers; see \cite{BaTeYe}, \cite{TeKoOrSh} and \cite{KoUrTe}. In \cite{Dar2}, in order to study unique factorization domains, Bhatwadekar and Sharma defined almost prime ideals, which generalize prime ideals. A proper ideal $I$ is called an \textit{almost prime ideal} if $xy\in I-I^{2}$ for some $x,y\in R$ implies that $x\in I$ or $y\in I$. They investigated the relations among the prime ideals, pseudo prime ideals and almost prime ideals of $R$. Badawi and Darani in \cite{BaDa} defined and studied weakly 2-absorbing ideals, which generalize weakly prime ideals. A proper ideal $I$ of $R$ is called a \textit{weakly 2-absorbing ideal} if for each $x,y,z\in R$ with $0\neq xyz\in I$, either $xy\in I$ or $xz\in I$ or $yz\in I$. In \cite{AnMa}, Anderson and Bataineh defined a new class of prime ideals. A proper ideal $I$ of $R$ is called a \textit{$\phi$-prime ideal} if whenever $xy\in I-\phi(I)$ for some $x,y\in R$, then either $x\in I$ or $y\in I$, where $\phi:\mathcal{I}(R)\rightarrow\mathcal{I}(R)\cup\{\emptyset\}$ is a function. They showed that prime ideals and $\phi$-prime ideals share several similar properties. Recently, in \cite{YasNik}, Yassine et al. introduced 1-absorbing prime ideals. This type of ideal is a generalization of prime ideals. A proper ideal $I$ of $R$ is called a \textit{1-absorbing prime ideal} if whenever $xyz\in I$ for some nonunits $x,y,z\in R$, then either $xy\in I$ or $z\in I$. Note that every prime ideal is 1-absorbing prime and every 1-absorbing prime ideal is a 2-absorbing ideal. The converses are not true. For instance, $P=6\mathbb{Z}$ is a 2-absorbing ideal of $\mathbb{Z}$ but not a 1-absorbing prime ideal, and $P=(\overline{0})$ is a 1-absorbing prime ideal of $\mathbb{Z}_{4}$ which is not prime. 
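These two examples can be verified directly. In $\mathbb{Z}$ we have
\begin{align*}
2\cdot2\cdot3=12\in6\mathbb{Z},\qquad 2\cdot2=4\notin6\mathbb{Z},\qquad 3\notin6\mathbb{Z},
\end{align*}
so $6\mathbb{Z}$ is not a 1-absorbing prime ideal. On the other hand, the only nonunits of $\mathbb{Z}_{4}$ are $\overline{0}$ and $\overline{2}$, so the product of any two nonunits equals $\overline{0}$; hence for all nonunits $x,y,z$ with $xyz\in(\overline{0})$ we already have $xy\in(\overline{0})$, which shows that $(\overline{0})$ is a 1-absorbing prime ideal of $\mathbb{Z}_{4}$.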
They characterized 1-absorbing prime ideals of some special rings such as valuation domains and principal ideal domains. They also gave a Prime Avoidance Theorem for 1-absorbing prime ideals. More recently, Ko\c{c} et al. defined weakly 1-absorbing prime ideals, which generalize 1-absorbing prime ideals \cite{w-1}. A proper ideal $I$ of $R$ is called a \textit{weakly 1-absorbing prime ideal} if $0\neq xyz\in I$ for some nonunits $x,y,z\in R$ implies that $xy\in I$ or $z\in I$. They gave many properties of this class of ideals and characterized the rings in which every proper ideal is weakly 1-absorbing prime. Moreover, they investigated weakly 1-absorbing prime ideals in $C(X)$, the ring of all real-valued continuous functions on a topological space $X$. In this paper, we define $\phi$-1-absorbing prime ideals as a new class of ideals which generalizes 1-absorbing prime ideals. A proper ideal $I$ of $R$ is called a \textit{$\phi$-1-absorbing prime ideal} if whenever $xyz\in I-\phi(I)$ for some nonunits $x,y,z\in R$, then $xy\in I$ or $z\in I$. Among other results in this paper, we give some relations between $\phi$-1-absorbing prime ideals and other classical ideals such as weakly prime ideals, $\phi$-prime ideals, 1-absorbing prime ideals and weakly 1-absorbing prime ideals (see Proposition \ref{pfirst}). In particular, we show that every $\phi$-prime ideal is also a $\phi$-1-absorbing prime ideal, but the converse is not true in general (see Example \ref{ex3}). However, we give a condition under which any $\phi$-1-absorbing prime ideal is $\phi$-prime (see Theorem \ref{tp1-abs}). Also, we give some characterizations of $\phi$-1-absorbing prime ideals in general rings, in factor rings, in localizations of rings and in Cartesian products of rings (see Theorem \ref{tmm}, Theorem \ref{tfac}, Theorem \ref{loc}, Theorem \ref{tgen}). Finally, we determine the rings over which every ideal is an almost 1-absorbing prime ideal (see Theorem \ref{tmain}). 
\section{Characterization of $\phi$-1-absorbing prime ideals} Let $R$ be a commutative ring. Define a function $\phi:\mathcal{I}(R)\rightarrow\mathcal{I}(R)\cup\{\emptyset\}$. This function maps an ideal of $R$ to an ideal of $R$ or to $\emptyset$. \begin{definition} Let $R$ be a ring and $I$ be a proper ideal of $R$. $I$ is called a $\phi$-1-absorbing prime ideal of $R$ if whenever $xyz\in I-\phi(I)$ for some nonunits $x,y,z\in R$, then $xy\in I$ or $z\in I$. \end{definition} The following notation will be used for the rest of the paper. \begin{example} Let $R$ be a commutative ring and $\phi_{\alpha}:\mathcal{I}(R)\rightarrow \mathcal{I}(R)\cup\{\emptyset\}$ be a function. The following gives the types of 1-absorbing prime ideals corresponding to $\phi_{\alpha}$. \newline $\phi_{\emptyset} \quad\quad\phi(I)=\emptyset\quad\quad\quad\quad \text{1-absorbing prime ideal}$ $\phi_{0} \quad\quad\phi(I)=0 \quad\quad\quad\quad\text{weakly 1-absorbing prime ideal}$ $\phi_{2} \quad\quad\phi(I)=I^{2} \quad\quad\quad\text{almost 1-absorbing prime ideal}$ $\phi_{n} \quad\quad\phi(I)=I^{n} \quad\quad\quad\text{$n$-almost 1-absorbing prime ideal}$ $\phi_{w} \quad\quad\phi(I)=\bigcap_{n=1}^{\infty}I^{n} \quad \text{$w$-1-absorbing prime ideal}$ $\phi_{1} \quad\quad\phi(I)=I \quad\quad\quad\quad\text{any ideal}$\newline Consider two functions $\phi,\psi:\mathcal{I}(R)\rightarrow\mathcal{I}(R)\cup\{\emptyset\}$. Then $\phi\leq\psi$ if $\phi(I)\subseteq\psi(I)$ for all ideals $I$ of $R$. Moreover, note that $\phi_{\emptyset}\leq\phi_{0}\leq \phi_{w}\leq\cdots\leq\phi_{n+1}\leq\phi_{n}\leq\cdots\leq\phi_{2}\leq\phi_{1}$. We will assume that $\phi(I)\subseteq I$ throughout the paper. \end{example} \begin{proposition} \label{pfirst}(i) Let $I$ be a proper ideal of $R$ and $\phi,\psi:\mathcal{I}(R)\rightarrow\mathcal{I}(R)\cup\{\emptyset\}$ be two functions with $\phi\leq\psi$. If $I$ is a $\phi$-1-absorbing prime ideal, then $I$ is a $\psi$-1-absorbing prime ideal. 
(ii) $I$ is a 1-absorbing prime ideal $\Rightarrow$ $I$ is a weakly 1-absorbing prime ideal $\Rightarrow$ $I$ is a $w$-1-absorbing prime ideal $\Rightarrow$ $I$ is an $n$-almost 1-absorbing prime ideal for each $n\geq2$ $\Rightarrow$ $I$ is an almost 1-absorbing prime ideal. (iii) $I$ is an $n$-almost 1-absorbing prime ideal for each $n\geq2$ if and only if $I$ is a $w$-1-absorbing prime ideal. (iv) Every $\phi$-prime ideal is a $\phi$-1-absorbing prime ideal. \end{proposition} \begin{proof} (i): Assume that $I$ is a $\phi$-1-absorbing prime ideal. Let $xyz\in I-\psi(I)$ for some nonunits $x,y,z\in R$. Then $xyz\in I-\phi(I)$ and, since $I$ is a $\phi$-1-absorbing prime ideal, $xy\in I$ or $z\in I$, which completes the proof. (ii): Follows from the fact that $\phi_{\emptyset}\leq\phi_{0}\leq\phi_{w}\leq\cdots\leq\phi_{n+1}\leq\phi_{n}\leq\phi_{2}$ and (i). (iii): By (ii), we know that if $I$ is a $w$-1-absorbing prime ideal, then $I$ is an $n$-almost 1-absorbing prime ideal for each $n\geq2$. Conversely, assume that $I$ is an $n$-almost 1-absorbing prime ideal for each $n\geq2$. Let $xyz\in I-\bigcap_{n=1}^{\infty}I^{n}$ for some nonunits $x,y,z\in R$. Then there exists $m\geq2$ such that $xyz\notin I^{m}$. Since $I$ is an $m$-almost 1-absorbing prime ideal of $R$ and $xyz\in I-I^{m}$, either $xy\in I$ or $z\in I$. (iv): It is clear. 
\end{proof} \begin{example} \label{ex1}\textbf{(weakly 1-absorbing prime ideal that is not 1-absorbing prime)} Let $p,q$ be distinct prime numbers and consider the ring $R=\mathbb{Z}_{pq^{2}}$. Then $I=(\overline{0})$ is a weakly 1-absorbing prime ideal of $R$. Since $\overline{p}\overline{q}\overline{q}\in I$ and $\overline{p}\overline{q},\overline{q}\notin I$, $I$ is not a 1-absorbing prime ideal of $R$. \end{example} \begin{example} \textbf{($w$-1-absorbing prime ideal that is not weakly 1-absorbing prime)} Let $I$ be an idempotent ideal of $R$, that is, $I=I^{2}$. Then $I$ is a $w$-1-absorbing prime ideal since $I^{n}=I$ for each $n\geq2$. But $I$ may not be a weakly 1-absorbing prime ideal of $R$. For instance, take $R=\mathbb{Z}_{2}^{4}$ and $I=\mathbb{Z}_{2}\times(0)\times(0)\times(0)$. Then $I$ is a $w$-1-absorbing prime ideal since it is idempotent. Now, take the nonunits $x=(1,1,1,0)$, $y=(1,1,0,1)$ and $z=(1,0,1,1)$ in $R$. Then $0\neq xyz\in I$ but $xy,z\notin I$. So it follows that $I$ is not a weakly 1-absorbing prime ideal of $R$. \end{example} \begin{example} \label{ex3}\textbf{($\phi$-1-absorbing prime ideal that is not $\phi$-prime)} Take $R$ as in Example \ref{ex1} and consider the ideal $I=(\overline{q^{2}})$ of $R$. Suppose that $\phi(I)=(\overline{0})$. Then $I$ is not $\phi$-prime since $\overline{q}\overline{q}\in I-\phi(I)$ and $\overline{q}\notin I$. Now, take nonunits $\overline{x},\overline{y},\overline{z}\in R$ such that $\overline{0}\neq\overline{x}\overline{y}\overline{z}\in I$. Then it is clear that $q^{2}|xyz$ and $pq^{2}\nmid xyz$. If $q^{2}|xy$, then we are done. So assume that $q^{2}\nmid xy$. On the other hand, since $q^{2}|xyz$, we have $q|z$. If $q^{2}|z$, again we are done. 
So we may assume that $q|z$ but $q^{2}\nmid z$. Since $q^{2}|xyz$, $q|z$ and $q^{2}\nmid z$, we have either $q|x$ or $q|y$. Without loss of generality, suppose that $q|x$ but $q\nmid y$. Since $\overline{y}$ is not a unit, we have $p|y$, and in this case $\overline{x}\overline{y}\overline{z}=\overline{0}$, which is a contradiction. Therefore, we have either $q^{2}|xy$ or $q^{2}|z$, namely, $xy\in I$ or $z\in I$. \end{example} \begin{theorem} \label{tmm}Let $R$ be a commutative ring and $I$ a proper ideal of $R$. The following statements are equivalent. (i) $I$ is a $\phi$-1-absorbing prime ideal of $R$. (ii) For all nonunits $x,y\in R$ with $xy\notin I$, we have $(I:xy)=I\cup(\phi(I):xy).$ (iii) For all nonunits $x,y\in R$ with $xy\notin I$, either $(I:xy)=I$ or $(I:xy)=(\phi(I):xy).$ (iv) For all nonunits $x,y\in R$ and every proper ideal $J$ of $R$ such that $xyJ\subseteq I$ but $xyJ\nsubseteq\phi(I)$, either $xy\in I$ or $J\subseteq I$. (v) For each nonunit $x\in R$ and all proper ideals $J,K$ of $R$ such that $xJK\subseteq I$ but $xJK\nsubseteq\phi(I)$, either $xJ\subseteq I$ or $K\subseteq I.$ (vi) For all proper ideals $J,K,L$ of $R$ such that $JKL\subseteq I$ but $JKL\nsubseteq\phi(I)$, either $JK\subseteq I$ or $L\subseteq I$. \end{theorem} \begin{proof} $(i)\Rightarrow(ii):$ Assume that $I$ is a $\phi$-1-absorbing prime ideal of $R$ and $xy\notin I$ for some nonunit elements $x,y\in R$. It is clear that $I\cup(\phi(I):xy)\subseteq(I:xy)$. On the other hand, choose $z\in(I:xy)$, so that $xyz\in I$; note that $z$ is nonunit, since otherwise $xy\in I$. If $xyz\notin\phi(I)$, then $z\in I$. Now suppose $xyz\in\phi(I)$. Then $z\in(\phi(I):xy)$. Therefore, $(I:xy)\subseteq I\cup(\phi(I):xy)$. $(ii)\Rightarrow(iii):$ Since $(I:xy)=I\cup(\phi(I):xy)$ and an ideal cannot be the union of two strictly smaller ideals, $(I:xy)$ must equal one of the components of the union. $(iii)\Rightarrow(iv):$ Assume that $xyJ\subseteq I$ but $xyJ\nsubseteq\phi(I)$. Let $xy\notin I$. Then either $(I:xy)=(\phi(I):xy)$ or $(I:xy)=I$ by $(iii)$. 
Suppose the former case holds. Since $xyJ\subseteq I$, we have $J\subseteq(I:xy)=(\phi(I):xy)$. This gives $xyJ\subseteq\phi(I)$, which is a contradiction. Now suppose the latter case holds. Then $J\subseteq(I:xy)=I$, showing $J\subseteq I$, as needed. $(iv)\Rightarrow(v):$ Let $xJK\subseteq I$ and $xJK\nsubseteq\phi(I)$. Suppose $xJ\nsubseteq I$ and $K\nsubseteq I$. Then there exists $a\in J$ such that $xa\notin I$. Also, since $xJK\nsubseteq\phi(I)$, there exists $b\in J$ such that $xbK\nsubseteq\phi(I)$. Now assume that $xaK\nsubseteq\phi(I)$. Since $x,a$ are nonunits and $xaK\subseteq I$, we have either $xa\in I$ or $K\subseteq I$, a contradiction. So we get $xaK\subseteq\phi(I)$. Also, we have $x(a+b)K\subseteq I-\phi(I)$, and this implies $x(a+b)\in I$. Since $xbK\subseteq I-\phi(I)$ and $K\nsubseteq I$, we get $xb\in I$. Thus, we obtain $xa\in I$, giving a contradiction. This proves $xJ\subseteq I$ or $K\subseteq I$. $(v)\Rightarrow(vi):$ Let $JKL\subseteq I$ but $JKL\nsubseteq\phi(I)$ for some proper ideals $J,K$ and $L$ of $R$. Assume that $JK\nsubseteq I$ and $L\nsubseteq I$. Then there exists $y\in J$ such that $yK\nsubseteq I$. Also, since $JKL\nsubseteq\phi(I)$, we have $xKL\nsubseteq\phi(I)$ for some $x\in J$. Then we get $xK\subseteq I$ since $xKL\subseteq I-\phi(I)$. Suppose $yKL\nsubseteq\phi(I)$. By $(v)$, this gives $yK\subseteq I$ or $L\subseteq I$, which is a contradiction. So $yKL\subseteq\phi(I)$. As $(x+y)KL\subseteq I-\phi(I)$, we have $(x+y)K\subseteq I$. This implies $yK\subseteq I$, a contradiction. $(vi)\Rightarrow(i):$ Let $xyz\in I-\phi(I)$ for some nonunits $x,y,z\in R$. Then $(x)(y)(z)\subseteq I$ and $(x)(y)(z)\nsubseteq\phi(I)$. Hence, $(x)(y)\subseteq I$ or $(z)\subseteq I$, showing that $xy\in I$ or $z\in I$, as desired. 
\end{proof} \begin{definition} Let $I$ be a $\phi$-1-absorbing prime ideal and $x,y,z$ be nonunit elements of $R$. If $xyz\in\phi(I)$, $xy\notin I$ and $z\notin I$, then we say that $(x,y,z)$ is a $\phi$-1-triple zero of $I$. \end{definition} \begin{remark} (i) Let $I$ be a $\phi$-1-absorbing prime ideal of $R$. Then $I$ has a $\phi$-1-triple zero if and only if there exist $z\notin I$ and a nonunit element $y\in R$ such that $(\phi(I):yz)\nsubseteq(I:y).$ (ii) Let $I$ be a proper ideal of $R$. Then $I$ is a 1-absorbing prime ideal if and only if the following two conditions hold: \qquad(a) $I$ is a $\phi$-1-absorbing prime ideal of $R$. \qquad(b) For each $z\notin I$ and each nonunit element $y\in R$, we have $(\phi(I):yz)\subseteq(I:y).$ \end{remark} \begin{theorem} \label{I3} Suppose that $I$ is a $\phi$-1-absorbing prime ideal of $R$ that is not 1-absorbing prime and $(x,y,z)$ is a $\phi$-1-triple zero of $I$. Then, (i) $xyI\subseteq\phi(I).$ (ii) If $xz,yz\notin I$, then $xzI, yzI, xI^{2}, yI^{2}, zI^{2}\subseteq\phi(I)$. In particular, $I^{3}\subseteq\phi(I).$ \end{theorem} \begin{proof} (i): Let $I$ be a $\phi$-1-absorbing prime ideal of $R$ that is not 1-absorbing prime and $(x,y,z)$ be a $\phi$-1-triple zero of $I$. Then we have $xyz\in\phi(I)$, $xy\notin I$ and $z\notin I$. Suppose $xyI\nsubseteq\phi(I)$. Then there exists $a\in I$ such that $xya\notin\phi(I)$. So $xy(z+a)\in I-\phi(I)$. If $z+a$ is a unit, then $xy\in I$, a contradiction. Now assume that $z+a$ is nonunit; then we get $xy\in I$ or $z+a\in I$, that is, $xy\in I$ or $z\in I$, again a contradiction. Thus, we have $xyI\subseteq\phi(I)$. (ii): Now assume that $xz,yz\notin I$. We first show that $xzI, yzI\subseteq\phi(I)$. Suppose that $xzI\nsubseteq\phi(I)$. Then there exists an element $a\in I$ such that $xza\notin\phi(I)$. This implies that $x(y+a)z\in I-\phi(I)$. If $y+a$ is a unit, then $xz\in I$, which is a contradiction. Thus $y+a$ is nonunit. 
Since $I$ is a $\phi$-1-absorbing prime ideal, we conclude either $x(y+a)\in I$ or $z\in I$, which implies $xy\in I$ or $z\in I$, again a contradiction. Thus $xzI\subseteq\phi(I)$. By a similar argument, we have $yzI\subseteq\phi(I)$. Now we show that $xI^{2}\subseteq\phi(I)$. Suppose to the contrary. Then there exist $a,b\in I$ such that $xab\notin\phi(I)$. This implies $x(y+a)(z+b)\in I-\phi(I)$. If $y+a$ is a unit, then $x(z+b)\in I$, which gives $xz\in I$, a contradiction. Similarly, $z+b$ is nonunit. Then either $x(y+a)\in I$ or $z+b\in I$, implying that $xy\in I$ or $z\in I$. Thus, we have $xI^{2}\subseteq\phi(I)$. Similarly, we get $yI^{2}\subseteq\phi(I)$ and $zI^{2}\subseteq\phi(I)$, and we are done. For the rest, if $I^{3}\nsubseteq\phi(I)$, there exist $a,b,c\in I$ such that $abc\notin\phi(I)$. Then $(x+a)(y+b)(z+c)\in I-\phi(I)$. If $x+a$ is a unit, then we obtain $(y+b)(z+c)=yz+yc+zb+bc\in I$ and so $yz\in I$, which is a contradiction. Similarly, we can show that $y+b$ and $z+c$ are nonunits. Then we get $(x+a)(y+b)\in I$ or $z+c\in I$. This gives $xy\in I$ or $z\in I$, again a contradiction. Hence, $I^{3}\subseteq\phi(I).$ \end{proof} \begin{theorem} Let $R$ be a ring and $a$ be a nonunit element of $R$. Suppose that $(0:a)\subseteq(a)$ (e.g., $a$ is regular). Then $(a)$ is a $\phi$-1-absorbing prime ideal with $\phi\leq\phi_{2}$ if and only if $(a)$ is a 1-absorbing prime ideal. \end{theorem} \begin{proof} If $(a)$ is a 1-absorbing prime ideal, then $(a)$ is a $\phi$-1-absorbing prime ideal. For the other direction, assume that $(a)$ is a $\phi$-1-absorbing prime ideal with $\phi\leq\phi_{2}$. Then it is also a $\phi_{2}$-1-absorbing prime ideal by Proposition \ref{pfirst}. Let $xyz\in(a)$ for some nonunits $x,y,z\in R$. If $xyz\notin(a)^{2}$, then $xy\in(a)$ or $z\in(a)$. So suppose $xyz\in(a)^{2}$. We have $xy(z+a)\in(a)$. If $z+a$ is a unit, we are done. Hence, we can assume $z+a$ is nonunit. 
Assume that $xy(z+a)\notin(a)^{2}$. Then we get either $xy\in(a)$ or $z+a\in(a)$, implying $xy\in(a)$ or $z\in(a)$. Now assume $xy(z+a)\in(a)^{2}$. Since $xyz\in(a)^{2}$, this gives $axy\in(a)^{2}$, and so there exists $t\in R$ such that $axy=a^{2}t$. Thus we have $xy-at\in(0:a)\subseteq(a)$. Therefore, $xy\in(a)+(0:a)\subseteq(a)$, as needed. \end{proof} Now, we give a condition for a $\phi$-1-absorbing prime ideal of $R$ to become a $\phi$-prime ideal of $R$. \begin{theorem} \label{tp1-abs}Let $I$ be a proper ideal of a non-quasi-local ring $R$. Suppose that $(\phi(I):x)$ is not a maximal ideal for each $x\in I$. The following statements are equivalent. (i) $I$ is a $\phi$-prime ideal of $R.$ (ii) $I$ is a $\phi$-1-absorbing prime ideal of $R.$ \end{theorem} \begin{proof} $(i)\Rightarrow(ii):$ Follows from Proposition \ref{pfirst}. $(ii)\Rightarrow(i):$ Let $I$ be a $\phi$-1-absorbing prime ideal of $R$. Choose $x,y\in R$ such that $xy\in I-\phi(I)$. If $x$ or $y$ is a unit, then $x\in I$ or $y\in I$, as needed. So suppose that $x,y$ are nonunits in $R$. Since $xy\notin\phi(I)$, $(\phi(I):xy)$ is proper. Choose a maximal ideal $\mathfrak{m}_{1}$ of $R$ with $(\phi(I):xy)\subseteq\mathfrak{m}_{1}$. Since $R$ is non-quasi-local, there exists a different maximal ideal $\mathfrak{m}_{2}$ of $R$. Now, take $z\in\mathfrak{m}_{2}-\mathfrak{m}_{1}$. Then $z\notin(\phi(I):xy)$, and so we have $(zx)y\in I-\phi(I)$. Since $I$ is a $\phi$-1-absorbing prime ideal of $R$, we get either $zx\in I$ or $y\in I$. If $y\in I$, then we are done. So assume that $zx\in I$. As $z\notin\mathfrak{m}_{1}$, there exists an $a\in R$ such that $1+az\in\mathfrak{m}_{1}$. Note that $1+az$ is nonunit. 
If $1+az\notin(\phi(I):xy)$, then we have $(1+az)xy\in I-\phi(I)$, implying $(1+az)x\in I$ and so $x\in I$, since $zx\in I$. So assume that $1+az\in(\phi(I):xy)$, that is, $xy(1+az)\in\phi(I)$. Now, choose an element $b\in\mathfrak{m}_{1}-(\phi(I):xy)$. Then we have $(1+az+b)xy\in I-\phi(I)$. On the other hand, since $1+az+b\in\mathfrak{m}_{1}$, $1+az+b$ is nonunit. This implies that $(1+az+b)x\in I$. Also, since $bxy\in I-\phi(I)$, we get $bx\in I$. Then we have $x=(1+az+b)x-a(zx)-bx\in I$. Therefore, $I$ is a $\phi$-prime ideal of $R.$ \end{proof} Now, for any ideal $J$ of $R$, define a function $\phi_{J}:\mathcal{I}(R/J)\rightarrow\mathcal{I}(R/J)\cup\{\emptyset\}$ by $\phi_{J}(I/J)=(\phi(I)+J)/J$, where $J\subseteq I$, and $\phi_{J}(I/J)=\emptyset$ if $\phi(I)=\emptyset$. Also, note that $\phi_{J}(I/J)\subseteq I/J$. \begin{theorem} \label{tfac}(i) Let $I$ be a $\phi$-1-absorbing prime ideal of $R$. Then $I/\phi(I)$ is a weakly 1-absorbing prime ideal of $R/\phi(I)$. (ii) Let $I/\phi(I)$ be a weakly 1-absorbing prime ideal of $R/\phi(I)$ and $u(R/\phi(I))=\{x+\phi(I):x\in u(R)\}.$ Then $I$ is a $\phi$-1-absorbing prime ideal of $R.$ (iii) Let $I,J$ be two ideals of $R$ with $J\subseteq I$ and $I$ be a $\phi$-1-absorbing prime ideal. Then $I/J$ is a $\phi_{J}$-1-absorbing prime ideal of $R/J$. \end{theorem} \begin{proof} (i): Let $\overline{0}\neq\overline{x}\overline{y}\overline{z}\in I/\phi(I)$ for some nonunits $\overline{x},\overline{y},\overline{z}\in R/\phi(I)$, where $\overline{x}=x+\phi(I)$, $\overline{y}=y+\phi(I)$ and $\overline{z}=z+\phi(I)$. Then $x,y,z$ are nonunits in $R$ and $xyz\in I-\phi(I).$ Since $I$ is a $\phi$-1-absorbing prime ideal of $R$, $xy\in I$ or $z\in I$. Then we get $\overline{x}\overline{y}\in I/\phi(I)$ or $\overline{z}\in I/\phi(I)$, which completes the proof. 
(ii): Let $I/\phi(I)$ be a weakly 1-absorbing prime ideal of $R/\phi(I)$ and $u(R/\phi(I))=\{x+\phi(I):x\in u(R)\}.$ Choose nonunits $x,y,z$ in $R$ such that $xyz\in I-\phi(I).$ Then we have $\overline{0}\neq\Bar{x}\Bar{y}\Bar{z}\in I/\phi(I).$ Since $u(R/\phi(I))=\{x+\phi(I):x\in u(R)\},$ $\Bar{x},\Bar{y}$ and $\Bar{z}$ are nonunits in $R/\phi(I).$ Since $I/\phi(I)$ is a weakly 1-absorbing prime ideal, we have either $\Bar{x}\Bar{y}\in I/\phi(I)$ or $\overline{z}\in I/\phi(I)$, which implies $xy\in I$ or $z\in I.$ Therefore, $I$ is a $\phi$-1-absorbing prime ideal of $R.$ (iii):\ Similar to (i). \end{proof} Let $R$ be a commutative ring and $S$ be a multiplicatively closed subset of $R$. Consider the function $\phi:\mathcal{I}(R)\rightarrow\mathcal{I}(R)\cup\{\emptyset\}$. Define $\phi_{S}:\mathcal{I}(S^{-1}R)\rightarrow\mathcal{I}(S^{-1}R)\cup\{\emptyset\}$ by $\phi_{S}(I)=S^{-1}\phi(I\cap R)$ and $\phi_{S}(I)=\emptyset$ if $\phi(I\cap R)=\emptyset$. Here, it is easy to see that $\phi_{S}(I)\subseteq I$. \begin{theorem} \label{loc}Let $R$ be a commutative ring, $\phi:\mathcal{I}(R)\rightarrow\mathcal{I}(R)\cup\{\emptyset\}$ be a function, $I$ be a $\phi$-1-absorbing prime ideal of $R$ and $S$ be a multiplicatively closed subset of $R$ with $I\cap S=\emptyset$ and $S^{-1}\phi(I)\subseteq\phi_{S}(S^{-1}I)$. Then, $S^{-1}I$ is a $\phi_{S}$-1-absorbing prime ideal of $S^{-1}R$. Furthermore, if $S^{-1}I\neq S^{-1}\phi(I),$ then $S^{-1}I\cap R=I$. \end{theorem} \begin{proof} Let $\frac{x}{s}\frac{y}{t}\frac{z}{u}\in S^{-1}I-\phi_{S}(S^{-1}I)$ for some nonunits $\frac{x}{s},\frac{y}{t},\frac{z}{u}\in S^{-1}R$. Then, there exists $s^{\prime}\in S$ such that $s^{\prime}xyz\in I$ but $s^{\ast}xyz\not\in\phi(S^{-1}I\cap R)$ for all $s^{\ast}\in S$. If $s^{\prime}xyz\in\phi(I),$ then we have $\frac{x}{s}\frac{y}{t}\frac{z}{u}\in S^{-1}\phi(I)\subseteq\phi_{S}(S^{-1}I)$, a contradiction. So we get $s^{\prime}xyz=(s^{\prime}x)yz\in I-\phi(I)$.
Since $s^{\prime}x,y,z$ are nonunits in $R$ and $I$ is a $\phi$-1-absorbing prime ideal, we get $s^{\prime}xy\in I$ or $z\in I$. This implies $\frac{x}{s}\frac{y}{t}=\frac{s^{\prime}xy}{s^{\prime}st}\in S^{-1}I$ or $\frac{z}{u}\in S^{-1}I$. Now we will show that $S^{-1}I\cap R=I$. Let $a\in S^{-1}I\cap R$. Then, there exists $s\in S$ such that $sa\in I$. If $s$ is a unit, we are done. If $a$ is a unit, it contradicts $I\cap S=\emptyset$. So we can assume $s$ and $a$ are nonunits in $R$. If $s^{2}a=ssa\not\in\phi(I)$, we get $s^{2}\in I$ or $a\in I$. Since the former case is not possible, we have $a\in I$. In the case $s^{2}a\in\phi(I)$, we have $a\in S^{-1}\phi(I)\cap R$. So we obtain $S^{-1}I\cap R\subseteq I\cup\left( S^{-1}\phi(I)\cap R\right) $. Thus, we conclude that either $S^{-1}I\cap R=I$ or $S^{-1}I\cap R=S^{-1}\phi(I)\cap R$. Since the latter case contradicts the assumption, we have $S^{-1}I\cap R=I$. \end{proof} Let $R_{1},R_{2}$ be commutative rings and $\phi_{1}:\mathcal{I}(R_{1})\rightarrow\mathcal{I}(R_{1})\cup\{\emptyset\}$, $\phi_{2}:\mathcal{I}(R_{2})\rightarrow\mathcal{I}(R_{2})\cup\{\emptyset\}$ be two functions. Suppose that $R=R_{1}\times R_{2}$ and $\phi:\mathcal{I}(R)\rightarrow\mathcal{I}(R)\cup\{\emptyset\}$ is a function defined by $\phi(I_{1}\times I_{2})=\phi_{1}(I_{1})\times\phi_{2}(I_{2})$ for each ideal $I_{k}$ of $R_{k}.$ Then $\phi$ is denoted by $\phi=\phi_{1}\times\phi_{2}.$ \begin{theorem} \label{tm1}Let $R_{1},R_{2}$ be commutative rings and $\phi_{1}:\mathcal{I}(R_{1})\rightarrow\mathcal{I}(R_{1})\cup\{\emptyset\}$, $\phi_{2}:\mathcal{I}(R_{2})\rightarrow\mathcal{I}(R_{2})\cup\{\emptyset\}$ be two functions. Suppose that $I=I_{1}\times I_{2},$ where $I_{i}$ is an ideal of $R_{i}$ for each $i=1,2,$ and $\phi=\phi_{1}\times\phi_{2}.$ If $I=I_{1}\times I_{2}$ is a $\phi$-1-absorbing prime ideal of $R,$ then one of the following three conditions must hold.
(i) $\phi(I)=I.$ (ii)\ $I=I_{1}\times R_{2}$ and $I_{1}$ is a $\phi_{1}$-prime ideal of $R_{1}$, which must be prime if $\phi_{2}(R_{2})$ is not the unique maximal ideal of $R_{2}$ (e.g. $R_{1},R_{2}$ are not quasi-local)$.$ (iii)\ $I=R_{1}\times I_{2}$ and $I_{2}$ is a $\phi_{2}$-prime ideal of $R_{2}$, which must be prime if $\phi_{1}(R_{1})$ is not the unique maximal ideal of $R_{1}$ (e.g. $R_{1},R_{2}$ are not quasi-local)$.$ \end{theorem} \begin{proof} Suppose that $I$ is a $\phi$-1-absorbing prime ideal of $R.$ First, we will show that $I_{1}$ is a $\phi_{1}$-prime ideal of $R_{1}.$ To see this, choose $x,y\in R_{1}$ such that $xy\in I_{1}-\phi_{1}(I_{1}).$ Then we have $(x,0)(1,0)(y,0)=(xy,0)\in I-\phi(I)$ for some nonunits $(x,0),(1,0),(y,0)\in R.$ Since $I$ is a $\phi$-1-absorbing prime ideal of $R,$ we get either $(x,0)(1,0)=(x,0)\in I$ or $(y,0)\in I$, implying that $x\in I_{1}$ or $y\in I_{1}.$ Therefore, $I_{1}$ is a $\phi_{1}$-prime ideal of $R_{1}.$ A similar argument shows that $I_{2}$ is a $\phi_{2}$-prime ideal of $R_{2}.$ Now assume that $\phi(I)\neq I.$ Then either $\phi_{1}(I_{1})\neq I_{1}$ or $\phi_{2}(I_{2})\neq I_{2}.$ Suppose that $\phi_{1}(I_{1})\neq I_{1}.$ Then there exists $x\in I_{1}-\phi_{1}(I_{1}).$ This implies that $(1,0)(1,0)(x,1)=(x,0)\in I-\phi(I).$ Then we have either $1\in I_{1}$ or $1\in I_{2},$ that is, $I_{1}=R_{1}$ or $I_{2}=R_{2}.$ Without loss of generality, we may assume that $I_{1}=R_{1}.$ Now, we will show that $I=R_{1}\times I_{2}$ and $I_{2}$ is prime in $R_{2}$ if $\phi_{1}(R_{1})$ is not the unique maximal ideal of $R_{1}.$ Let $ab\in I_{2}$ for some elements $a,b\in R_{2}.$ If $a$ or $b$ is a unit, we are done.
So assume that $a,b$ are nonunits in $R_{2}.$ Since $\phi_{1}(R_{1})$ is not the unique maximal ideal of $R_{1},$ there exists a nonunit element $x\in R_{1}-\phi_{1}(R_{1}).$ Then we have $(x,1)(1,a)(1,b)=(x,ab)\in I-\phi(I).$ Since $I$ is a $\phi$-1-absorbing prime ideal of $R,$ we have either $(x,1)(1,a)=(x,a)\in I$ or $(1,b)\in I$, implying $a\in I_{2}$ or $b\in I_{2}.$ Therefore, $I_{2}$ is a prime ideal of $R_{2}.$ \end{proof} Recall that a commutative ring $R$ is said to be \textit{quasi-local} if it has a unique maximal ideal \cite{Sharp}. Otherwise, we say $R$ is not quasi-local, or non-quasi-local. \begin{theorem} \label{tnloc}Let $R_{1},R_{2}$ be commutative rings such that $\phi_{i}(I_{i})$ is not the unique maximal ideal of $R_{i}$ (e.g. $R_{i}$ is not quasi-local) and $\phi_{i}:\mathcal{I}(R_{i})\rightarrow\mathcal{I}(R_{i})\cup\{\emptyset\}$ for each $i=1,2$. Suppose that $I=I_{1}\times I_{2}$ is a nonzero ideal$,$ where $I_{i}$ is an ideal of $R_{i}$ for each $i=1,2,$ $\phi=\phi_{1}\times\phi_{2}$ and $\phi(I)\neq I.$ Then the following statements are equivalent. (i)\ $I$ is a $\phi$-1-absorbing prime ideal of $R=R_{1}\times R_{2}.$ (ii)\ $I=I_{1}\times R_{2}$ for some prime ideal $I_{1}$ of $R_{1},$ or $I=R_{1}\times I_{2}$ for some prime ideal $I_{2}$ of $R_{2}.$ (iii)\ $I$ is a prime ideal of $R.$ (iv)\ $I$ is a weakly prime ideal of $R.$ (v)\ $I$ is a $1$-absorbing prime ideal of $R.$ \end{theorem} \begin{proof} $(i)\Rightarrow(ii):$ Follows from Theorem \ref{tm1}. $(ii)\Rightarrow(iii):$ Clear. $(iii)\Leftrightarrow(iv):$ Follows from \cite[Theorem 7]{AnSmi}. $(iii)\Rightarrow(v):$ Follows from \cite[Definition 2.1]{YasNik}. $(v)\Rightarrow(i):$ Follows from the fact that $\phi_{\emptyset}\leq\phi$ and Proposition \ref{pfirst}. \end{proof} \begin{theorem} \label{tgen}Let $R_{1},R_{2},\ldots,R_{n}$ be commutative rings such that $\phi_{i}(I_{i})$ is not the unique maximal ideal of $R_{i}$ (e.g.
$R_{i}$ is not quasi-local) and $\phi_{i}:\mathcal{I}(R_{i})\rightarrow\mathcal{I}(R_{i})\cup\{\emptyset\}$ for each $i=1,2,\ldots,n.$ Suppose that $I=I_{1}\times I_{2}\times\cdots\times I_{n}$ is a nonzero ideal$,$ where $I_{i}$ is an ideal of $R_{i}$ for each $i=1,2,\ldots,n,$ $\phi=\phi_{1}\times\phi_{2}\times\cdots\times\phi_{n}$ and $\phi(I)\neq I.$ Then the following statements are equivalent. (i)\ $I$ is a $\phi$-1-absorbing prime ideal of $R=R_{1}\times R_{2}\times\cdots\times R_{n}.$ (ii)\ $I=R_{1}\times R_{2}\times\cdots\times R_{t-1}\times I_{t}\times R_{t+1}\times\cdots\times R_{n}$ for some prime ideal $I_{t}$ of $R_{t}$ and $1\leq t\leq n.$ (iii)\ $I$ is a prime ideal of $R.$ (iv)\ $I$ is a weakly prime ideal of $R.$ (v)\ $I$ is a $1$-absorbing prime ideal of $R.$ \end{theorem} \begin{proof} We use induction on $n.$ If $n=1,$ the claim is clear. If $n=2,$ the claim follows from Theorem \ref{tnloc}. Now, assume that $(i)\Leftrightarrow(ii)\Leftrightarrow(iii)\Leftrightarrow(iv)\Leftrightarrow(v)$ is true for all $k<n.$ Let $I^{\prime}=I_{1}\times I_{2}\times\cdots\times I_{n-1},$ $R^{\prime}=R_{1}\times R_{2}\times\cdots\times R_{n-1}$ and $\phi^{\prime}=\phi_{1}\times\phi_{2}\times\cdots\times\phi_{n-1}.$ Then note that $I=I^{\prime}\times I_{n},$ $R=R^{\prime}\times R_{n}$ and $\phi=\phi^{\prime}\times\phi_{n}.$ The rest follows from the induction hypothesis and Theorem \ref{tnloc}. \end{proof} \begin{lemma} Let $(R,\mathfrak{m})$ be a quasi-local ring and $\mathfrak{m}^{3}\subseteq\phi(I)$ for every proper ideal $I$ of $R$. Then, every proper ideal of $R$ is a $\phi$-1-absorbing prime ideal. \end{lemma} \begin{proof} Let $I$ be a proper ideal of $R$. Assume that $I$ is not a $\phi$-1-absorbing prime ideal. Then, there exist nonunit elements $x,y,z\in R$ such that $xyz\in I-\phi(I)$ but $xy\not\in I$ and $z\not\in I$. Since $x,y,z$ are nonunits, they are elements of $\mathfrak{m}$. So, $xyz\in\mathfrak{m}^{3}\subseteq\phi(I)$, a contradiction.
\end{proof} A ring $R$ is said to be an \textit{indecomposable ring} if its only idempotents are $0$ and $1.$ Otherwise, we say $R$ is decomposable. It is well known that a ring $R$ is decomposable if and only if $R=R_{1}\times R_{2}$ for some commutative rings $R_{1}$ and $R_{2}.$ Recall that a commutative ring $R$ is said to be a von Neumann regular ring if each of its ideals is idempotent, or equivalently, for each $x\in R,$ there exists an idempotent element $e\in R$ such that $(x)=(e)$ \cite{von}. The concept of von Neumann regular rings and its generalizations have drawn considerable interest and have been widely studied by many authors. See, for example, \cite{AnChu}, \cite{JaTe} and \cite{JaTeKo}. Now, in the following, we characterize all rings over which every proper ideal is an almost 1-absorbing prime ideal. \begin{theorem} \label{tmain}Let $R$ be a ring. Then every proper ideal is almost $1$-absorbing prime if and only if either $(R,\mathfrak{m})$ is quasi-local with $\mathfrak{m}^{3}=(0)$ or $R$ is a von Neumann regular ring. \end{theorem} \begin{proof} $(\Leftarrow):$ Suppose that $(R,\mathfrak{m})$ is quasi-local with $\mathfrak{m}^{3}=(0).$ Then by the previous lemma, every ideal is almost $1$-absorbing prime. If $R$ is a von Neumann regular ring, then every ideal is idempotent so that every ideal is almost $1$-absorbing prime. $(\Rightarrow):$ Now, suppose that every proper ideal is almost $1$-absorbing prime. First, we will show that $(a^{3})=(a^{4})$ for each element $a\in R.$ If $a$ is a unit, then we are done. So assume that $a$ is not a unit. Take a maximal ideal $\mathfrak{m}$ of $R.$ If $a\notin\mathfrak{m},$ then $\frac{a}{1}$ is a unit in $R_{\mathfrak{m}}$ so that we have $(a^{3})_{\mathfrak{m}}=(a^{4})_{\mathfrak{m}}.$ So suppose that $a\in\mathfrak{m}.$ Then by Theorem \ref{loc}, every proper ideal of $R_{\mathfrak{m}}$ is almost 1-absorbing prime.
Since $\frac{a^{3}}{1}\in(\frac{a^{3}}{1})$ and $(\frac{a^{3}}{1})$ is almost 1-absorbing prime, we have either $\frac{a^{2}}{1}\in(\frac{a^{3}}{1})$ or $\frac{a^{3}}{1}\in(\frac{a^{3}}{1})^{2},$ which implies that $(a^{3})_{\mathfrak{m}}=(a^{4})_{\mathfrak{m}}.$ Since $(a^{3})_{\mathfrak{m}}=(a^{4})_{\mathfrak{m}}$ for each maximal ideal $\mathfrak{m}$ of $R,$ we have $(a^{3})=(a^{4})$ and thus $(a^{3})=(a^{3})^{2}.$ This implies that $(a^{3})=(e)$ for some idempotent $e\in R.$ If $R$ is an indecomposable ring, then for each nonunit $a\in R,$ $(a^{3})=(0),$ and this shows that $(R,\mathfrak{m})$ is quasi-local with $\mathfrak{m}^{3}=(0),$ where $\mathfrak{m}=\sqrt{0}.$ Now, suppose that $R=R_{1}\times R_{2}$ for some commutative rings $R_{1}$ and $R_{2}.$ If $R_{1}$ is not von Neumann regular, then there exists an ideal $I$ of $R_{1}$ such that $I^{2}\neq I.$ Now take the ideal $J=I\times0$ of $R.$ Since $J$ is almost 1-absorbing prime and $\phi(J)=J^{2}\neq J,$ Theorem \ref{tm1} gives $J=I\times0=I\times R_{2},$ which is a contradiction. Thus $R_{1}$ is a von Neumann regular ring. Similarly, $R_{2}$ is a von Neumann regular ring, and so is $R=R_{1}\times R_{2}.$ \end{proof}
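The lemma and Theorem \ref{tmain} above admit a quick computational sanity check in a small finite ring. The following Python sketch (an illustration, not part of the paper's argument) verifies by brute force that every proper ideal of $\mathbb{Z}/8\mathbb{Z}$ -- a quasi-local ring with $\mathfrak{m}=(2)$ and $\mathfrak{m}^{3}=(0)$ -- is a weakly 1-absorbing prime ideal, i.e. the case $\phi(I)=0$:

```python
from itertools import product

# R = Z/8Z: quasi-local with maximal ideal m = (2) and m^3 = (8) = (0),
# so the lemma (with phi(I) = 0, the "weakly" case) predicts that every
# proper ideal is a weakly 1-absorbing prime ideal.
n = 8
R = range(n)
units = {x for x in R if any((x * y) % n == 1 for y in R)}
nonunits = [x for x in R if x not in units]

def ideal(g):
    """The principal ideal (g) in Z/nZ."""
    return {(g * r) % n for r in R}

def is_weakly_one_absorbing_prime(I):
    # For all nonunits x, y, z with 0 != xyz in I, require xy in I or z in I.
    for x, y, z in product(nonunits, repeat=3):
        p = (x * y * z) % n
        if p != 0 and p in I and (x * y) % n not in I and z not in I:
            return False
    return True

for g in (0, 4, 2):  # the proper ideals (0), (4), (2) of Z/8Z
    assert is_weakly_one_absorbing_prime(ideal(g))
print("every proper ideal of Z/8Z is weakly 1-absorbing prime")
```

Here the check is vacuously satisfied, exactly as the proof of the lemma predicts: every product of three nonunits lies in $\mathfrak{m}^{3}=(0)$.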
\section{\label{sec:Intro} Introduction} Many well-motivated extensions to the standard model predict the existence of a new family of particles, the so-called Weakly Interacting Sub-eV Particles (WISPs). As the name suggests, they all share a very low rest mass below 1~eV and very feeble interactions with the standard model, making them difficult to detect experimentally. One popular member of the WISP family is the axion. Historically it emerged from a proposal by Peccei and Quinn in 1977, intended to solve a fine-tuning problem in the theoretical framework of Quantum Chromodynamics (QCD) \cite{PhysRevLett.38.1440, PhysRevLett.40.223, Kim19871, PhysRevLett.40.279}. Since then, several Axion-Like Particles (ALPs) with properties similar to the original axion have been proposed, arising from string theory \cite{1126-6708-2006-06-051}, or motivated as a possible explanation of dark energy in our universe \cite{Raffelt:1996wa}. Another prominent member of the WISP family is the Hidden Sector Photon (HSP). It can be described by extra U(1) gauge factors in the standard model, which are a necessary requirement for string theory \cite{Okun:1982xi,src:HSP_STRING,src:HSP_STRING_2,src:LowEnergyFrontier}. WISPs could explain several astrophysical phenomena \cite{Raffelt:1996wa}, and the axion would be an excellent candidate for cold dark matter if it exists in a certain mass range. Axions have also been the most widely accepted solution of the strong CP problem in QCD for over 30 years now. However, there is no experimental evidence from laboratory searches yet, and all efforts so far have only produced exclusion limits. \section{\label{sec:exp} Experimental detection principle} WISPs interact very weakly with standard model particles, and their weak coupling to photons provides the most promising way to indirectly observe them in a laboratory experiment.
ALPs can convert to photons and photons can convert to ALPs in a strong static magnetic field by the ``Primakoff effect'' \cite{PhysRevLett.51.1415}. The probability of this process happening is extraordinarily low. A conversion by the Primakoff effect happens without energy loss. This means that the entire energy of the photon converts into rest mass ($m_a$) and into kinetic energy of the ALP. The mass of the ALP is a fixed but not yet known parameter, which is only weakly constrained by cosmological observations to the range of $10^{-12}~\mathrm{eV} \leq m_a \leq 10^{3} ~\mathrm{eV}$ \cite{Raffelt:1996wa}. The ``Light Shining Through the Wall'' (LSW) detection scheme was first proposed in \cite{Okun:1982xi,PhysRevLett.59.759,src:hoogLaser}. These proposals were focused on the design of an experiment in the optical domain. A laser beam shining through a strong magnetic field forms the ``emitting region''. In this environment, photons can convert into ALPs, which would propagate parallel to the photon beam. An opaque wall is placed downstream of the magnet, blocking all photons. As the ALP beam does not interact with matter, it penetrates the wall and propagates towards the ``detection region''. A second magnet reconverts the ALPs to photons, which can be detected by a sensitive optical instrument. To improve the low conversion probability, mirrors can be placed at each end of the emitting and detection region, forming optical resonators and allowing the photons to pass several times through the magnetic field. This has been done -- for example -- in the ALPS-1 experiment \cite{src:alps1}. While the coupling between photons and ALPs originates from the Primakoff effect, the coupling between photons and HSPs arises from kinetic mixing, a process similar to neutrino oscillations \cite{src:HSP_STRING}.
HSPs can be probed with the same experimental setup as used for ALPs, but -- due to the different coupling mechanism -- a static magnetic field is not necessary for the HSP search. \begin{figure} \centerline{\psfig{file=ovr_2_shield_simple.pdf, width=1\linewidth}} \caption{Schematic of a microwave LSW experiment.} \label{fig:OverViewSimple} \end{figure} Adapting the LSW principle to microwaves was first proposed in \cite{src:hoogUW,src:JaCaRi}. We performed a microwave based LSW experiment searching for ALPs and HSPs. The schematic of our setup is shown in Fig.~\ref{fig:OverViewSimple}. Converting the setup from the optical to the microwave domain involves several steps: The laser is replaced by a microwave oscillator and power amplifier. The equivalent to optical resonators are microwave cavities. The two-dimensional ``wall'' becomes a three-dimensional electromagnetic shielding challenge, and the optical power detector transforms to a coherent microwave receiver. Observing WISPs would correspond to a microwave signal appearing within a well shielded detection volume, exciting the detecting cavity. The weak sinusoidal signal has the same -- and thus known -- frequency as the signal driving the emitting cavity, allowing us to exploit a lock-in scheme for the signal detection. Due to the small wavelengths and therefore stringent mechanical tolerances involved, the realization of low loss resonators is a substantial challenge in the optical regime. It is technologically less challenging to produce and operate low loss microwave cavities, making the experimental setup more rugged, cheaper and easier to handle. It also allows the separation between the cavities to be reduced to less than a wavelength, making the experiment sensitive -- not only to propagating WISPs (like in a laser based experiment) -- but also to ``near field'' WISPs \cite{src:hoogUW} surrounding the cavities. The energy required to produce photons decreases as the wavelength increases.
For a given input power, more photons are produced in microwave based experiments. Therefore they are more sensitive to WISPs than the optical equivalent. On the downside, the lower photon energy restricts the maximum detectable mass of the hidden particles. \section{\label{sec:sens}Detection sensitivity} The expected output power from the detecting cavity due to the ALP or HSP propagation has been derived in \cite{src:JaCaRi} and is given by Eq.~\ref{equ:powerAxions} and Eq.~\ref{equ:powerHSP}: \begin{eqnarray} \label{equ:powerAxions} P_{\mathrm{ALP}} &=& \left(\frac{g B}{f_{\mathrm{sys}}}\right)^4 |G_{\mathrm{ALP}}|^2 Q_{\mathrm{em}} Q_{\mathrm{det}} P_{\mathrm{em}}, \end{eqnarray} \begin{eqnarray} \label{equ:powerHSP} P_{\mathrm{HSP}} &=& \chi^4 \left(\frac{m_{\gamma'}}{f_{\mathrm{sys}}}\right)^8 |G_{\mathrm{HSP}}|^2 Q_{\mathrm{em}} Q_{\mathrm{det}} P_{\mathrm{em}}, \end{eqnarray} where $Q_{\mathrm{em}}$ and $Q_{\mathrm{det}}$ are the loaded quality factors of the emitting and detecting cavity, $f_{\mathrm{sys}}$ is the operation frequency of the experiment (to which the cavities are tuned), $P_{\mathrm{em}}$ is the emitting cavity driving power and $B$ is the strength of the static magnetic field. The unknown coupling constants of ALPs and HSPs to photons are given by $g$ and $\chi$, respectively. Details of the geometric form factors $|G_{\mathrm{ALP}}|$ and $|G_{\mathrm{HSP}}|$ are given in the subsequent section. Both equations are a function of the rest mass of the hidden particle, directly through $m_{\gamma'}$ (only in Eq.~\ref{equ:powerHSP}) and indirectly through the mass dependent geometric form factors (in both equations). It is convenient to express all quantities in the same unit system based on [eV]. If no ALPs or HSPs are observed in the experiment, an upper bound on the coupling parameter $g$ or $\chi$ can be derived from Eq.~\ref{equ:powerAxions} or Eq.~\ref{equ:powerHSP}. This allows comparing the experiment's sensitivity with other WISP searches.
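The photon-rate advantage of the microwave domain can be made concrete with a back-of-the-envelope comparison. In the following sketch the two operating points (a 3~GHz source and a 1064~nm laser, both at 50~W) are illustrative assumptions, not parameters of this experiment:

```python
# Rough photon-rate comparison between a microwave and an optical LSW
# experiment at equal drive power. The 3 GHz and 1064 nm operating points
# are illustrative assumptions, not parameters of this experiment.
h = 6.62607015e-34   # Planck constant [J s]
c = 299792458.0      # speed of light [m/s]

P = 50.0             # assumed drive power [W]
f_mw = 3.0e9         # assumed microwave frequency [Hz]
f_opt = c / 1064e-9  # optical frequency of a 1064 nm laser [Hz]

rate_mw = P / (h * f_mw)    # emitted photons per second (microwave)
rate_opt = P / (h * f_opt)  # emitted photons per second (optical)

print(f"microwave: {rate_mw:.2e} photons/s")
print(f"optical:   {rate_opt:.2e} photons/s")
print(f"ratio:     {rate_mw / rate_opt:.0f}")
```

At equal power the microwave source emits roughly five orders of magnitude more photons per second, which is the origin of the sensitivity gain discussed above.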
Note that in this case, the minimum detectable signal power of the RF receiver ($P_{\mathrm{sig}}$) is assigned to $P_{\mathrm{ALP}}$ or $P_{\mathrm{HSP}}$. Sensitivity to ALPs is largely dominated by the strength of the magnetic field $B$ and the operating frequency $f_{\mathrm{sys}}$. Sensitivity to HSPs is dominated by the Q factors of the cavities and the geometric form factor $|G_{\mathrm{HSP}}|$. \section{\label{sec:geom}Calculation of the geometric form factor} The geometric form factor $|G|$ is typically on the order of 1, and depends on the relative position and orientation of the cavities, the electric field configuration of the resonating mode, and the rest mass of the hidden particles. Furthermore, it depends on the relative direction of the static magnetic field for ALP detection \cite{src:JaCaRi, src:JaCaRi2, src:UWA_LSW, src:hoogUW}. The geometric form factor can be compared to the near-field antenna gain, as used in communication systems. It differs by taking the non-zero rest mass of the WISPs into account. $|G|$ is determined by a six-dimensional integration over the volumes of the emitting ($V$) and detecting ($V'$) cavity. The formulas are given for ALPs in Eq.~\ref{equ:GALP} and for HSPs in Eq.~\ref{equ:GHSP}: \begin{eqnarray} \label{equ:GALP} G_{\mathrm{ALP}} = \frac{k^{2}}{4 \pi} \int_{V'} \int_{V} \frac{e^{i k' \left| \mathbf{x} - \mathbf{y} \right|}} {\left| \mathbf{x} - \mathbf{y} \right|} E_{B}(\mathbf{x}) E'_{B'}(\mathbf{y}) ~ d^3 \mathbf{x} d^3 \mathbf{y}, \end{eqnarray} \begin{eqnarray} \label{equ:GHSP} G_{\mathrm{HSP}} = \frac{k^{2}}{4 \pi} \int_{V'} \int_{V} \frac{e^{i k' \left| \mathbf{x} - \mathbf{y} \right|}} {\left| \mathbf{x} - \mathbf{y} \right|} \mathbf{E(x)} \cdot \mathbf{E'(y)} ~ d^3 \mathbf{x} d^3 \mathbf{y}. \end{eqnarray} Each integration variable, $\mathbf{x}$ and $\mathbf{y}$, represents a three-dimensional vector, indexing a point within the emitting and receiving cavity in a common coordinate system.
The wavenumber of the photon is given by $k$. The wavenumber of the ALP or HSP is given by $k'$, which depends on the rest mass of the hidden particle ($m_{\mathrm{WISP}}$ in Eq.~\ref{equ:kkk}). This mass dependence has a significant influence on the shape of the excluded areas in Fig.~\ref{fig:exclPlotALP} and Fig.~\ref{fig:exclPlotHSP}. Both integrands in Eq.~\ref{equ:GALP} and Eq.~\ref{equ:GHSP} contain an attenuation factor inversely proportional to the distance ($|\mathbf{x} - \mathbf{y}|$) and a phase factor ($e^{i k' \left| \mathbf{x} - \mathbf{y} \right|}$), which becomes more significant at larger $k'$ (corresponding to WISPs with higher kinetic energy). Note that $k'$ becomes imaginary in the non-propagating WISP case (if $m_{\mathrm{WISP}} > h f_{\mathrm{sys}}$), leading to an exponential suppression of $|G|$. \begin{eqnarray} \label{equ:kkk} k = \frac{2 \pi f_{\mathrm{sys}}}{c} \quad\quad \frac{k'}{k} = \sqrt{1 - \left( \frac{m_{\mathrm{WISP}}}{h f_{\mathrm{sys}}} \right)^2}. \end{eqnarray} The $\mathbf{E}$ fields are normalized such that: \begin{eqnarray} \label{equ:normalize} \int_{V} \left| \mathbf{E}(\mathbf{x}) \right|^2 d^3\mathbf{x} = 1. \end{eqnarray} For HSPs, Eq.~\ref{equ:GHSP} suggests that the dot product between the two electric fields, $\mathbf{E(x)} \cdot \mathbf{E'(y)}$, is of importance. It is thus advantageous to use the same mode in both cavities. For ALPs, Eq.~\ref{equ:GALP} suggests that only the component of the electric field in the cavity aligned with the static magnetic field, $E_B$ and $E'_{B'}$, is of significance. Large geometric form factors can be expected if the electric field is parallel to the external magnetic field over a large volume in the cavity. \section{\label{sec:cav}Cavity design} Each of the two cylindrical microwave cavities is made out of two half shells, machined at CERN from brass. Figure~\ref{fig:cavSchem} shows the inner dimensions of the cavity. A photo of the emitting cavity is shown in Fig.~\ref{fig:magnParts}.
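The suppression of $|G|$ for heavy, non-propagating WISPs discussed above can be illustrated numerically. In the following sketch the operating frequency and the cavity separation are assumed values chosen for illustration, not the experiment's actual parameters; the code evaluates $k'$ from Eq.~\ref{equ:kkk} and the magnitude of the propagator phase factor over a fixed distance:

```python
import numpy as np

# WISP wavenumber k' (purely imaginary above the photon energy) and the
# magnitude of the propagator factor exp(i k' d) over a distance d.
# Operating frequency and separation are illustrative assumptions.
h_eV = 4.135667696e-15       # Planck constant [eV s]
c = 299792458.0              # speed of light [m/s]

f_sys = 3.0e9                # assumed operating frequency [Hz]
E_photon = h_eV * f_sys      # photon energy [eV] (~12.4 ueV at 3 GHz)
k = 2.0 * np.pi * f_sys / c  # photon wavenumber [1/m]

def k_prime(m_eV):
    """WISP wavenumber from the k'/k relation; imaginary for m > h*f_sys."""
    return k * np.sqrt(complex(1.0 - (m_eV / E_photon) ** 2))

d = 0.1  # assumed emitter-detector separation [m]
for m in (0.5 * E_photon, 2.0 * E_photon):
    suppression = abs(np.exp(1j * k_prime(m) * d))
    print(f"m/(h f_sys) = {m / E_photon:.1f}: |exp(i k' d)| = {suppression:.3e}")
```

Below the photon energy the factor has unit magnitude (pure phase); above it, the propagator decays exponentially with distance, which is the origin of the suppression of $|G|$ for heavy WISPs.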
To increase the surface conductivity, the base material was coated with a 10~$\mu$m thick layer of silver. On top of the silver layer, a thin ($\ll 1~\mu$m) flash of gold has been deposited, which serves as protection against oxidation. Due to the skin effect, $> 80\%$ of the RF currents flow in the low resistivity silver coating. \begin{figure} \centerline{\psfig{file=cavCombi2.pdf, width=1\linewidth}} \caption{ Inside dimensions of the cavity in [mm]. The coupling loop can be seen on top. The electric field configuration of two modes is shown on a transverse cutting plane. (a) TM$_{010}$ mode for ALP search, (b) TE$_{011}$ mode for HSP search. } \label{fig:cavSchem} \end{figure} In order to determine the most sensitive cavity mode for the ALP or HSP search, the product of Q factor and corresponding geometry factor needs to be maximized. For the ALP search, the best choice is the fundamental TM$_{010}$ mode, providing an E-field which can be aligned with a homogeneous external magnetic field over the largest possible volume, and thus providing a significantly larger $|G|_{\mathrm{ALP}}$ than any other mode. For the HSP search, the E-field does not need to be aligned with an external magnetic field and, as Table~\ref{tbl:g} points out, the TE$_{011}$ mode is preferable. Its displacement currents flow along the circumference of the cavity walls, and its surface currents flow entirely azimuthally, so neither crosses the boundary between the two half shells of the cavity \cite{src:cavTE011lowLoss}. Resistive losses due to contact springs are effectively avoided, and therefore its Q factor is higher compared to other modes.
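The claim that most of the RF current is confined to the silver coating follows from the skin depth. The following sketch checks this; the operating frequency is an assumed value of order GHz (the exact $f_{\mathrm{sys}}$ is given elsewhere in the paper), and room-temperature silver resistivity is used:

```python
import math

# Skin depth in the silver coating and the fraction of RF current carried
# within the 10 um layer. The operating frequency is an assumed value.
rho_ag = 1.59e-8        # resistivity of silver [Ohm m], room temperature
mu0 = 4.0e-7 * math.pi  # vacuum permeability [H/m]
f = 3.0e9               # assumed operating frequency [Hz]
t = 10e-6               # silver coating thickness [m]

delta = math.sqrt(rho_ag / (math.pi * f * mu0))  # skin depth [m]
# Current density decays as exp(-z/delta); fraction within depth t:
fraction = 1.0 - math.exp(-t / delta)

print(f"skin depth: {delta * 1e6:.2f} um")
print(f"current fraction in coating: {fraction:.4f}")
```

With a skin depth of roughly a micrometer, the 10~$\mu$m layer carries essentially all of the surface current, consistent with the $>80\%$ figure quoted above.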
\begin{table}[t] \centering \caption{Comparison of modes for HSP search} \label{tbl:g} \begin{tabular}{p{0.2\linewidth}p{0.2\linewidth}p{0.2\linewidth}p{0.2\linewidth}} \hline Mode & meas. $Q_L$ & $|G|_{\mathrm{HSP}}$ & $Q \cdot G$\\ \hline TM$_{010}$ & 11 392 & 0.77 & 8772\\ TE$_{011}$ & 23 210 & 0.52 & 12069\\ \hline \end{tabular} \end{table} In a cavity of cylindrical geometry, the TE$_{011}$ and TM$_{111}$ modes are degenerate and cannot be excited separately. To ensure well defined experimental conditions, the geometry has been modified. Chamfering the edges of the cavity breaks the degeneracy between the TE$_{011}$ and TM$_{111}$ mode, separating them in frequency and mitigating energy loss as a consequence of mode coupling \cite{src:cavDegPaper}. The nominal resonant frequency can be tuned in a range of $\pm 5$~MHz, using a fine threaded tuning screw which modifies the fields in the cavity. A measurement of the frequency dependence of the first seven modes as a function of the tuning screw position is shown in Fig.~\ref{fig:cavityTuningMeas}. The figure also indicates that there are no mode crossings within the nominal tuning range of 0--10~mm insertion depth. \begin{figure} \centerline{\psfig{file=cavityTuningSweep2.pdf, width=1\linewidth}} \caption{ Tuning range of several modes in the cavity, measured with a VNA in reflection. The depth of the tuning screw $d$ has been increased by 1 mm for each measurement. Black crosses indicate complementary simulation results. } \label{fig:cavityTuningMeas} \end{figure} A wire-loop antenna of $\approx 8$~mm diameter couples the electromagnetic field in the cavity to a $50~\Omega$ coaxial transmission line. The coupling strength ($\beta$) of this loop antenna can easily be optimized for critical coupling ($\beta=1$) by a slight rotation, i.e.\ by modifying its effective surface area exposed to the H-field in the cavity.
The loaded quality factors significantly influence the sensitivity of the experiment and therefore had to be determined with high accuracy. The frequency dependent reflection coefficient $\Gamma$ has been measured with a Vector Network Analyzer (VNA). The cavity parameters $Q_0$, $Q_L$, $\beta$ and their respective uncertainties have been extracted from the VNA data by means of a curve fitting algorithm, described in \cite{src:cavQfitting}. The resulting quality factors for the HSP and ALP measurement runs can be found in Table~\ref{tbl:paramHSP} and Table~\ref{tbl:paramALP}. A typical result, taken immediately before the ALPs run in June 2013, is shown in Fig.~\ref{fig:cavFit}. \begin{figure} \centerline{\psfig{file=tune_single_det_after_cr2o3.pdf, width=1\linewidth}} \caption{ Result of an $S_{11}$ measurement of the detecting cavity with a VNA, immediately before the ALPs run. The cavity parameters have been extracted by means of a curve fitting procedure, adapted from \cite{src:cavQfitting}. } \label{fig:cavFit} \end{figure} The coupling loops in both cavities are made from copper wire, hence a small amount of signal power is lost due to its finite conductivity. The coupling loss can be estimated from the reflection coefficient far off resonance, which would be equal to $|\Gamma| = 0$~dB (a short or open circuit) in the ideal case. We measured a coupling loss on the order of $|\Gamma| = -0.1$~dB. Due to its small value and due to the fact that the coupling losses are implicitly included in the loaded quality factor ($Q_L$) for the critically coupled cavity, we do not have to consider them separately for the detection sensitivity of the experiment. Note that both cavities provide $|\Gamma| < -30$~dB return loss at their resonant frequencies. Hence the reflected RF power is small and signal attenuation due to impedance mismatch at the cavity couplers can be neglected.
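The quoted reflection figures can be translated into power fractions to see why both effects are negligible. A minimal sketch, using only the $-0.1$~dB coupling loss and the $-30$~dB return loss stated above:

```python
import math

# Signal-power penalties implied by the quoted reflection coefficients:
# a 0.1 dB coupling loss and a 30 dB return loss on resonance.
def power_fraction_lost(loss_db):
    """Fraction of power lost for a given insertion loss in dB."""
    return 1.0 - 10.0 ** (-loss_db / 10.0)

def reflected_fraction(return_loss_db):
    """|Gamma|^2, the fraction of incident power reflected."""
    return 10.0 ** (-return_loss_db / 10.0)

coupling_loss = power_fraction_lost(0.1)      # fraction lost in the loop
refl = reflected_fraction(30.0)               # fraction reflected on resonance
mismatch_db = -10.0 * math.log10(1.0 - refl)  # equivalent mismatch loss in dB

print(f"coupling loss:  {coupling_loss * 100:.1f} %")
print(f"reflected:      {refl * 100:.2f} %")
print(f"mismatch loss:  {mismatch_db:.4f} dB")
```

A 30~dB return loss corresponds to only 0.1\% reflected power (a mismatch loss of a few thousandths of a dB), and the roughly 2\% coupling loss is absorbed into $Q_L$, as stated above.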
\section{\label{sec:resMon}Monitoring of resonant frequency} Maximum sensitivity to a WISP related signal can only be achieved if the resonant frequency ($f_{\mathrm{res}}$) of both cavities agrees with the system frequency ($f_{\mathrm{sys}}$), which is the frequency of the emitting cavity drive signal. The cavities are sensitive to temperature variations due to the thermal expansion and contraction of their wall materials. Sensitivity will degrade if a cavity drifts off-frequency during the measurement run. This could disguise a potential WISP signal and lead to an invalid exclusion result. Therefore it is critical to monitor the instantaneous resonant frequency of both cavities, take the maximum deviation into account and estimate a worst case degradation in detection sensitivity. The resonant frequencies have been monitored by recording three different observables during the measurement runs: \begin{description} \item[RF power] The emitting cavity has been monitored by logging the incident ($P_{\mathrm{inc}}$) and reflected RF power ($P_{\mathrm{refl}}$) at the cavity's coupling port. Reflected power is only at its minimum if $f_{\mathrm{sys}} = f_{\mathrm{res}}$. Detuning causes an increase in $P_{\mathrm{refl}}$. If completely off-tune, all of the incident RF power would be reflected \cite{src:CAS}. $P_{\mathrm{inc}}$ and $P_{\mathrm{refl}}$ are measured on a directional coupler, placed on the coaxial line between power amplifier and cavity. The coupled signals are converted to DC voltages proportional to RF power by detector diodes and recorded with a data logging device (Picolog ADC-20). A VNA was used to calibrate the setup, allowing absolute power levels to be recorded. \item[Noise power] For the detecting cavity, we evaluate the spectral noise power density $N_{0}$ around $f_{\mathrm{res}}$.
As the noise temperature of the cavity walls (298~K) is significantly higher than the noise temperature of the amplifier (43~K), a good estimate of the absolute resonant frequency can be determined from the maximum of the noise power spectrum. A dedicated spectral noise measurement with a span of 1~MHz has been carried out with the VSA before and after each experimental run. Note that the VSA is already connected to the receiving cavity for the purpose of recording experimental data and no changes to the hardware were necessary for this measurement. Furthermore, the data during the measurement run has been evaluated for its average noise power over time. The trace shows a maximum if the cavity is on resonance. \item[Physical temperature] The physical temperature of both cavities has been measured with high precision by two LM35 sensors. The change in resonant frequency is directly proportional to the change in temperature. We have measured the proportionality constant beforehand (see Table~\ref{tbl:tempConst}), which allows us to make a statement about the maximum deviation of the resonant frequency during the measurement run. \end{description} \begin{table} \centering \caption{Measured $\Delta f / \Delta T$ of two cavity modes.} \label{tbl:tempConst} \begin{tabular}{c c} \toprule TM$_{010}$:~-33.5~kHz/$^\circ$C \quad & \quad TE$_{011}$:~-57.2~kHz/$^\circ$C\\ \bottomrule \end{tabular} \end{table} \section{\label{sec:cavStab}Cavity operation} The emitting cavity dissipates up to 50~W of heat by forced air cooling without any external temperature stabilization. Before data taking, the cavity was heated by RF power, while the tuning screw was continuously adjusted to keep it on resonance. After approximately 1~h, the cavity reaches thermal equilibrium and no further tuning is necessary.
Once in this state, no major frequency drift occurs because of a feedback process: an increase in cavity temperature manifests itself as expansion -- hence a lower resonant frequency -- which in turn leads to less RF power being absorbed by the cavity; the temperature of the cavity decreases, resulting in stable feedback operation. This stability can only be achieved on the upper half of the resonance curve. To stay within that region, even with small fluctuations of ambient temperature, $f_{\mathrm{sys}}$ has been set slightly higher than $f_{\mathrm{res}}$, leading to $\approx 3$~W of constantly reflected RF power. The actual RF power absorbed in the cavity, taking reflection losses into account, is $P_{\mathrm{em}} = P_{\mathrm{inc}} - P_{\mathrm{refl}}$. The average of $P_{\mathrm{em}}$ during the measurement run has been utilized for the exclusion limit calculation. Figure~\ref{fig:tempAndRF}(a) shows a trace of the measured $P_{\mathrm{em}}$ for the ALPs run in June 2013. During this run, an unexpectedly large fluctuation of the ambient temperature resulted in a thermal runaway condition after the first 12~h of data taking. The emitting cavity drifted off-resonance, reflected all incident RF power and cooled down to ambient temperature within a few minutes.\\ Once this condition was noticed, it was possible to bring the cavity back to the nominal operating temperature and resonant frequency by adapting $f_{\mathrm{sys}}$ remotely. We were able to continue the experiment after a 3~h period, during which the recorded data had to be discarded. Despite the thermal runaway incident, this run still yields the highest sensitivity towards ALPs. \begin{figure} \centerline{\psfig{file=tune_detecting_cavity.pdf, width=1\linewidth}} \caption{ Noise power density $N_0$ at the coupling port of the detecting cavity, indicating its resonant frequency in relation to $f_{\mathrm{sys}}$.
The measurement has been done immediately before and after the 25~h ALPs run in June 2013.} \label{fig:detCavTune} \end{figure} \begin{figure} \centerline{\psfig{file=n0_vs_t.pdf, width=1\linewidth}} \caption{(a): Absorbed RF power of the emitting cavity (defined as $P_{\mathrm{em}} = P_{\mathrm{inc}} - P_{\mathrm{refl}}$), (b): physical temperature of the detecting cavity, (c): noise power density from the detecting cavity. Data was recorded during the ALPs run in June 2013.} \label{fig:tempAndRF} \end{figure} The detecting cavity was only tuned at the beginning of the measurement run. After closing its shielding enclosure, the tuning screw is not reachable and was left untouched. A good indication of its resonant frequency at the beginning ($t=0$~h) and end ($t=25$~h) of the measurement run is given by the maximum of its spectral noise power density ($N_0$) in Fig.~\ref{fig:detCavTune}. The cavity's $f_{\mathrm{res}}$ is $\approx 20$~kHz below $f_{\mathrm{sys}}$. The noise power at $f_{\mathrm{sys}}$ is $\approx 0.3$~dB below the maximum. We would expect the same amount of degradation for a hypothetical WISP signal. There is no visible difference between the state of the cavity at the beginning and end of the measurement run. However, the temperature measurement shown in Fig.~\ref{fig:tempAndRF}(b) indicates a significant change of $\Delta T = -1.25 ^\circ$C at $t=12$~h. The cause for this fluctuation was the unexpected cool-down of the emitting cavity, which was in close vicinity to the detecting cavity. According to Table~\ref{tbl:tempConst}, the change in temperature corresponds to a relative change in resonant frequency of $\Delta f \approx + 40$~kHz, which would place the cavity's resonance $\approx 20$~kHz above $f_{\mathrm{sys}}$ and cause a worst-case reduction of 0.3~dB in signal power.
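The detuning estimate above can be reproduced from the temperature coefficients of Table~\ref{tbl:tempConst}. The following sketch hard-codes the values quoted in the text and assumes that the TM$_{010}$ coefficient is the relevant one for this run:

```python
# Worst-case detuning estimate from the measured temperature excursion.
# Values are taken from the text; the TM010 coefficient of -33.5 kHz/degC
# is assumed to apply to the detecting cavity.

DF_DT = -33.5e3     # Hz per degC (Table: TM010 mode)
DELTA_T = -1.25     # degC, temperature change of the shield wall at t = 12 h
F_OFFSET_0 = -20e3  # Hz, detuning f_res - f_sys measured at t = 0 h

delta_f = DF_DT * DELTA_T            # ~ +41.9 kHz upward shift
f_offset_12h = F_OFFSET_0 + delta_f  # ~ +21.9 kHz, resonance above f_sys
```

The result reproduces the quoted shift of roughly $+40$~kHz and places the resonance about 20~kHz above $f_{\mathrm{sys}}$, consistent with the quoted worst-case signal reduction.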
Note that it was not practical to measure the actual temperature of the detecting cavity, as its EMI shielding enclosure would have been compromised by the copper wires of the temperature sensor. Instead, the sensor has been placed on the outside wall of the shielding enclosure, which is in good thermal contact with the cavity. The actual fluctuations of the cavity temperature are therefore smaller than those of the measured $T_{\mathrm{DET}}$. As a further cross-check, the average noise power in a bandwidth of 2~kHz around $f_{\mathrm{sys}}$ has been evaluated from the recorded experimental data. The result is shown in Fig.~\ref{fig:tempAndRF}(c). The noise power density at $t=0$~h is $N_0 = -173.1$~dBm/Hz, which is in good agreement with the blue trace in Fig.~\ref{fig:detCavTune}. At $t=12$~h, an excursion of +0.3~dB is visible. This indicates a shift of the resonance curve by $\Delta f \approx + 20$~kHz, centering it on $f_{\mathrm{sys}}$. At $t=18$~h, the cavity has warmed up again and reached its original, slightly detuned state, which is in good agreement with the green trace in Fig.~\ref{fig:detCavTune}.\\ In conclusion, we can make the following statements for the ALPs run in June 2013: \begin{itemize} \item The worst-case signal degradation of a hypothetical ALP signal due to detuning of the detecting cavity was $\leq 0.3$~dB~$= 7\%$. \item The average RF power absorbed by the emitting cavity was $P_{\mathrm{em}} = 47.9$~W. \end{itemize} The same monitoring principles have been applied during the HSP run in September 2013. During this run no thermal runaway condition occurred, and the emitting cavity was stable throughout the entire recording time, lasting $3 \times 29$~h. \section{\label{sec:emShield} Electromagnetic shielding} Shielding is critical around the detecting cavity and the microwave receiver to eliminate electromagnetic interference (EMI) from ambient sources like cellphones or wireless network transceivers.
Shielding is also necessary to avoid coupling between the two cavities by electromagnetic (EM) leakage, which has to be attenuated below the detection threshold of the microwave receiver. Leaking photons would generate false results, as this kind of signal could not be distinguished from a signal propagating by WISP conversion. From the expected EM field strengths in the emitting (180~kV/m) and detecting cavity (20~nV/m), we can get an estimate for the required amount of shielding. At only 20~mm separation between the two cavities, the fields need to be attenuated by a factor of $> 10^{13}$ ($\approx 260$~dB) to ensure meaningful results. Most microwave components used in the setup, like SMA connectors or semi-rigid coaxial cables, provide less than 120~dB of shielding, making an external shielding enclosure and strategic use of optical fibres for signal transmission necessary. \begin{figure} \centerline{\psfig{file=200320132312.jpg, width=1\linewidth}} \caption{ Photo of the emitting cavity (1) and the shielding enclosure (2) containing the identical detecting cavity. For the ALP search, both parts were placed in the bore of a solenoid magnet with the same arrangement as shown in the picture. } \label{fig:magnParts} \end{figure} The EM shielding has been split into two separate enclosures. One is placed in the magnet, housing the detecting cavity and the RF frontend, as shown on the right-hand side of Fig.~\ref{fig:magnParts}. The second shielding enclosure is placed outside of the magnet and contains the instrumentation needed to detect the weak microwave signals. Both enclosures have been lined with microwave-absorbing foam on their inside walls. This dampens resonances, which could lead to degraded shielding performance at certain frequencies \cite{src:EMIcavRes}. The RF signals between the two shielding boxes are transmitted over optical fibres utilizing analog transceivers. An optical Ethernet link is used for remotely controlling the signal analyser.
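The required attenuation follows directly from the two field strengths; since the electric field is an amplitude quantity, the conversion to decibels uses $20\log_{10}$. A quick sanity check of the arithmetic:

```python
import math

E_EMIT = 180e3   # V/m, expected field strength in the emitting cavity
E_DET = 20e-9    # V/m, expected WISP-induced field in the detecting cavity

ratio = E_EMIT / E_DET              # 9e12, i.e. close to 10**13
atten_db = 20 * math.log10(ratio)   # ~259 dB, quoted as approx. 260 dB
```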
Optical fibres have two distinct advantages in this application: \begin{itemize} \item Compared to coaxial transmission lines, they provide galvanic isolation and a practically unlimited shielding attenuation. Microwave interference does not influence the optical carrier and cannot couple into the shielded domain. \item Optical fibres are free of metals, making them well suited to the tubular waveguide-style feedthroughs \cite{src:EMVgrundlagen} used in both shielding enclosures. \end{itemize} A detailed schematic of the experimental setup is shown in Fig.~\ref{fig:OverView}. \begin{figure} \centerline{\psfig{file=ovr_2_shield.pdf, width=1\linewidth}} \caption{Detailed block diagram of the experimental setup.} \label{fig:OverView} \end{figure} To measure the shielding effectiveness, a microwave source has been placed within the enclosure under test and the electric field strength at several fixed points outside the enclosure has been compared with its lid open and closed \cite{src:EMVgrundlagen}. The field strength was measured with a calibrated electric field probe, which also makes it possible to quickly localize weak spots in the shielding. Each enclosure provides $\approx 90$~dB and each of the cavities provides an additional $\approx 110$~dB of shielding. Therefore the combined EM attenuation of the experimental setup is $\approx 310$~dB, making thermal noise the limiting factor for the minimum detectable signal. For diagnostic purposes, a sinusoidal signal of known frequency is emitted within the shielding enclosure during each measurement run. This ``test tone'' of relatively low and constant power ($\approx -100$~dBm) couples from a $\lambda / 4$ antenna to the detecting cavity and to the components of the receiver frontend. By identifying the signal in the recorded spectrum, we demonstrate that the entire signal processing chain was operational during the measurement.
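A minimal sketch of how such a test tone can serve as a shielding health check; the $-100$~dBm baseline is the value quoted above, while the 10~dB alarm margin is an assumption chosen purely for illustration:

```python
BASELINE_DBM = -100.0  # nominal test-tone power at the receiver (from text)
MARGIN_DB = 10.0       # assumed alarm margin, illustrative only

def shielding_ok(observed_dbm):
    """True while the observed test-tone power stays near its baseline.
    A jump of several orders of magnitude would indicate an EM-shielding
    fault, e.g. excessive leakage through a faulty RF connector."""
    return observed_dbm - BASELINE_DBM < MARGIN_DB
```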
The test tone also allows us to evaluate unwanted frequency offsets, frequency drifts, or phase noise by comparing the shape and position of the measured signal peak to the expected one. Furthermore, the observed power of the test tone is used as an indicator for a major degradation of the EM shielding. For example, a faulty RF connector on the detecting cavity can lead to excessive RF leakage. The observed test tone power would increase by several orders of magnitude, which is an immediate indication of a fault condition. The test tone was transmitted over an optical fibre into the shielding enclosure, using a reverse-biased photodiode (Hamamatsu G9801) to convert the optical to an electrical signal. The test tone frequency $f_{\mathrm{test}}$ has been offset by $\approx 400$~Hz from $f_{\mathrm{sys}}$, which avoids any interference with WISP detection. \section{\label{sec:front} RF frontend} All components of the RF frontend are mounted within the cavity shielding enclosure and need to be compatible with the 3~T magnetic field. The noise-like signal from the detecting cavity is amplified, filtered, modulated onto an optical carrier and transmitted over an optical fibre to the shielding box outside the magnet. \begin{figure} \centerline{\psfig{file=160420132399.jpg, width=1\linewidth}} \caption{The components of the RF frontend, unmounted from the cavity shielding box. From left to right: Analog optical link, low noise amplifier and bandpass filter.} \label{fig:noiseRed} \end{figure} The low noise amplifier (LNA) of type MITEQ AMF-3F-0200-400-06-10P has been tested and characterized in the 3~T magnet. This was necessary, as some components in the amplifier might be affected by a strong magnetic field \cite{src:LNA_Magnet}. The LNA provides a nominal gain (G) of 45~dB and a noise figure (NF) of 0.6~dB at $f_{\mathrm{sys}}$.
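The quoted equivalent noise temperature follows from the standard relation $T_e = T_0(10^{\mathrm{NF}/10}-1)$ with $T_0 = 290$~K. The sketch below also cascades the LNA with the optical link described below (NF 10~dB, sitting behind 45~dB of LNA gain) using the Friis formula; this is an illustrative calculation, not the actual calibration procedure:

```python
import math

T0 = 290.0  # K, IEEE reference temperature for noise figure

def nf_to_temp(nf_db):
    """Equivalent input noise temperature for a given noise figure (dB)."""
    return T0 * (10 ** (nf_db / 10.0) - 1.0)

def friis_nf_db(nf1_db, g1_db, nf2_db):
    """Cascaded noise figure (dB) of two stages via the Friis formula."""
    f1 = 10 ** (nf1_db / 10.0)
    g1 = 10 ** (g1_db / 10.0)
    f2 = 10 ** (nf2_db / 10.0)
    return 10 * math.log10(f1 + (f2 - 1.0) / g1)

t_lna = nf_to_temp(0.6)                  # ~43 K, the quoted T_LNA
nf_chain = friis_nf_db(0.6, 45.0, 10.0)  # ~0.6 dB: the LNA dominates
```

The 45~dB of LNA gain suppresses the contribution of the optical link to a negligible level, consistent with the measured noise figure of the full receiving chain.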
The LNA's equivalent noise temperature is $T_{\mathrm{LNA}} = 43$~K, making thermal noise from the detecting cavity the only significant noise source in the receiving chain. The preliminary HSP measurement runs until December 2012 were successfully completed using a commercial optical transmitter, type MITEQ LBT-50K4P5G-25-15-M14. However, the module was permanently damaged during first tests in the 3~T magnet in preparation for the run in June 2013. As no magnetically compatible replacement product was readily available from industry, a commercial satellite-TV low-noise block from the company INVACOM was adapted for our purpose. Ferromagnetic materials of significant mass, including all ferrite-cored inductors, were removed, and the internal DC/DC converter was replaced with a magnetically compatible power supply. The adapted optical link achieved a nominal noise figure of 10~dB and a gain of 19~dB in the frequency range of 0.5--4~GHz, comparable in performance to the MITEQ link. It proved to operate reliably in a magnetic field of up to 3~T. To obtain the absolute noise power at the coupling port of the detecting cavity, the measured power spectra must be calibrated with the so-called ``hardware transfer function'' (HTF). It corresponds to the cascaded gain and noise figure of all frontend components and cables between cavity and signal analyzer. To minimize errors due to thermal drifts and influences of the magnetic field, the HTF has been measured in the magnet, immediately before the WISP measurement run. The HTF was determined by the Y-factor method \cite{src:AgilentNoise}. For this purpose, the detecting cavity was disconnected from the LNA and a calibrated noise diode was connected through a 5~m-long cable. The cable was necessary to prevent interference of the noise diode due to the magnetic field. The exact attenuation of the cable has been determined beforehand and taken into account.
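The Y-factor computation itself is compact. A minimal sketch, where the hot/cold output powers are synthetic values chosen to correspond to a noise source with $T_{\mathrm{hot}} = 2300$~K and a device with $T_e = 50$~K, purely for illustration:

```python
import math

T0 = 290.0  # K, reference temperature

def y_factor_noise_temp(p_hot, p_cold, t_hot, t_cold):
    """Equivalent input noise temperature from hot/cold output powers
    (linear units), measured with the noise diode on and off."""
    y = p_hot / p_cold
    return (t_hot - y * t_cold) / (y - 1.0)

def noise_figure_db(t_e):
    """Noise figure corresponding to an equivalent noise temperature."""
    return 10 * math.log10(1.0 + t_e / T0)

# Synthetic example: output powers proportional to (T_source + T_e).
t_e = y_factor_noise_temp(p_hot=2350.0, p_cold=340.0,
                          t_hot=2300.0, t_cold=290.0)  # recovers 50 K
```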
The HTF of the receiving chain has been determined as NF = $0.7 \pm 0.2$~dB and G = $60.4 \pm 0.5$~dB at $f_{\mathrm{sys}}$. The measurement uncertainty has been estimated by a method described in \cite{src:AgiNFuncPaper}. During earlier HTF measurements, a saturation of the optical transmitter was observed, due to the output of the 15~dB ENR noise diode ($T_N = 2300$~K) being amplified by the LNA over a wide bandwidth of $>2$~GHz. A non-magnetic, adjustable bandpass filter was designed, built and inserted between the LNA and the optical transmitter. The filter substantially reduces the noise power by limiting the frontend bandwidth to 25~MHz, preventing this saturation and effectively increasing the dynamic range. The filter is based on an evanescent-mode design described in \cite{src:EvanescentFilter,src:WGFilters}. \section{\label{sec:proc} Data processing and evaluation} The signal from the RF frontend is recorded by an Agilent EXA N9010A signal analyzer. The center frequency was set to $f_{\mathrm{sys}} + 400~\mathrm{Hz}$ to avoid internal spurious signals appearing in the important parts of the spectrum \cite{src:moiIPAC12}. The instrument shifts the center frequency to baseband before digitizing and recording the complex quadrature signal with a bandwidth of 2~kHz. For offline data processing, the spectral power of the recorded noise-like signal is estimated with a Python script. For the ALPs run in June 2013, the time record had to be divided into two 10~h-long continuous segments, discarding a 3~h-long segment of data where the emitting cavity drifted off tune. The complex spectra of each segment are calculated by a discrete Fourier transform (DFT), efficiently implemented by the software library FFTW \cite{src:FFTW05}. The two subspectra are averaged, resulting in the final spectrum in Fig.~\ref{fig:resultSpect} (1).
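The processing steps just described (segment-wise DFT with zero padding, averaging, and min/max/mean decimation for the overview plot) can be sketched as follows. This is an illustrative NumPy reimplementation, not the actual analysis script:

```python
import numpy as np

def averaged_power_spectrum(segments, pad_factor=10):
    """Average the power spectra of equally long time-record segments.
    Zero padding by pad_factor samples the underlying continuous
    spectrum more finely and suppresses scalloping loss."""
    n = len(segments[0])
    spectra = [np.abs(np.fft.rfft(s, n=pad_factor * n)) ** 2
               for s in segments]
    return np.mean(spectra, axis=0)

def decimate_min_max_mean(spectrum, n_points=1500):
    """Reduce a long spectrum to per-group minima, maxima and means,
    preserving sharp single-bin peaks in an overview plot."""
    groups = np.array_split(spectrum, n_points)
    return (np.array([g.min() for g in groups]),
            np.array([g.max() for g in groups]),
            np.array([g.mean() for g in groups]))

# Demo on synthetic white noise standing in for the recorded data.
rng = np.random.default_rng(0)
segments = [rng.normal(size=4096) for _ in range(2)]
spectrum = averaged_power_spectrum(segments)
s_min, s_max, s_mean = decimate_min_max_mean(spectrum, n_points=100)
```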
The spectrum consists of $\approx 71 \cdot 10^6$ spectral bins, which have been decimated to 1500 points in the overview plot, showing minima, maxima (as a grey area) and average values (as a blue line) of each group of spectral bins. This substantially reduces the amount of data handling while preserving sharp peaks or sudden excursions, which are the expected signatures of a WISP signal. The DFT represents a matched filter for detecting sinusoidal signals in white background noise \cite{src:statPaper2} and is therefore the most efficient algorithm for this purpose. Each spectral bin resulting from the DFT operation can be seen as the integrated output power of a bandpass filter, tuned to a specific frequency. The bandwidth of each filter is determined by $\mathrm{BW_{res}} = 1/l$, where $l$ is the length of the recorded time trace. The displayed average background noise level in each spectral bin is given by $P_N = \mathrm{BW_{res}} N_0$, where $N_0$ is the noise power density of the input signal. A pure sinusoidal signal at fixed frequency (like the one we would expect from a WISP) has an infinitely narrow bandwidth and will always deposit its entire signal power ($P_{\mathrm{sig}}$) within a single spectral bin. Therefore the signal-to-noise ratio in this bin, defined as S/N = $P_{\mathrm{sig}}/P_N$, is proportional to the length of the recorded time trace. This is in contrast to averaging $n$ spectra, where S/N only improves by a factor of $\sqrt{n}$. No window function was used before calculating the DFT. This yields the narrowest resolution bandwidth $\mathrm{BW_{res}}$, the lowest $P_N$ and the largest possible S/N for detecting sinusoidal signals \cite{src:fftWind,src:FFTpaper}. Note that window functions are often used to diminish the effects of spectral leakage and provide a steep fall-off around signal peaks.
This is not required in our case, as we do not need to resolve signals in the spectrum which are tightly spaced in frequency or which have a high dynamic range. \begin{figure} \centerline{\psfig{file=scalloping2.pdf, width=1\linewidth}} \caption{Response of three spectral bins without zero padding. The amplitude of a signal falling between the bins can be attenuated by a factor of 0.64 ($-3.92$~dB).} \label{fig:scallop} \end{figure} The resulting spectral estimate suffers from certain artifacts, originating from the definition of the DFT. In our case, the most critical one is the so-called scalloping loss, which can be observed if a sinusoidal signal falls between two frequency bins in the spectrum. Its amplitude can be attenuated by up to 3.92~dB \cite{src:FFTpaper}. This is illustrated by Fig.~\ref{fig:scallop}. A signal, sampled in the time domain, corresponds to a continuous spectrum in the frequency domain. Scalloping loss occurs because the DFT samples this underlying continuous spectrum with only the minimum number of points required to avoid information loss (the Nyquist rate in the frequency domain). One way to avoid scalloping loss is to use a flat-top window. However, this trades resolution bandwidth for amplitude accuracy, which would reduce the detection sensitivity.\\ A better way is to calculate more than the minimum number of frequency bins. This can be achieved by zero padding the time domain signal before the DFT operation. The underlying continuous spectrum is sampled more frequently in the frequency domain, which leads to a more accurate representation of a signal peak if it falls between two frequency points. For the data analysis, zero padding has been applied with 10 times the number of measured samples, reducing scalloping losses to a negligible amount.
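The worst-case scalloping loss and its suppression by zero padding can be demonstrated numerically. In the sketch below a unit-amplitude sinusoid is placed exactly between two DFT bins; without padding its apparent peak amplitude drops to $2/\pi \approx 0.64$ ($-3.92$~dB), while tenfold zero padding recovers it almost exactly:

```python
import numpy as np

N = 1024
n = np.arange(N)
# Worst case: tone frequency exactly between two DFT bins (bin 100.5).
x = np.cos(2 * np.pi * (100.5 / N) * n)

# Peak amplitude estimate without zero padding: scalloping attenuates
# the unit-amplitude tone to ~2/pi (about -3.92 dB).
peak_plain = 2 * np.abs(np.fft.rfft(x)).max() / N

# With 10x zero padding the continuous spectrum is sampled finely
# enough that the peak amplitude is recovered.
peak_padded = 2 * np.abs(np.fft.rfft(x, n=10 * N)).max() / N
```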
For the two zoomed spectra in Fig.~\ref{fig:resultSpect} (2) and (3), the interpolated data is shown as a grey line, while the sampling points from a DFT operation without zero padding are shown as blue dots. As we effectively define a very narrowband filter around the WISP signal, the long-term frequency stability of the RF source, the signal analyzer and any other oscillator involved in the receiving chain can critically influence the detection sensitivity. It has been demonstrated in \cite{src:narrowband} that excessive frequency drifts would smear out the sharp peak we would expect from a sinusoidal signal in the spectrum. The power of a hypothetical WISP signal would spread over several frequency bins and the signal-to-noise ratio would degrade. To ensure the WISP signal stays within one frequency bin of width $\mathrm{BW_{res}}$, we require a fractional frequency accuracy of $\Delta f / f = \mathrm{BW_{res}} / f_{\mathrm{sys}} \approx 2 \cdot 10^{-14}$ during the whole measurement time. Note that this requirement only applies to the frequency accuracy of the oscillators relative to each other. Frequency drifts which apply to all oscillators in the same way will cancel out in the result. We have achieved the required stability by phase-locking all oscillators to a common 10~MHz frequency reference. Long-term phase-drift measurements showed that the synchronized RF sources are precise enough for the narrowband measurement and no broadening of the signal peak is expected. This has been confirmed by the power spectra of each experimental run, where the test tone signal is visible as a narrow peak with the minimum possible width dictated by the DFT. \begin{figure} \centerline{\psfig{file=montecarlo.pdf, width=1\linewidth}} \caption{Monte Carlo simulation of the fluctuation of one spectral bin governed by background noise (N) and another bin with an additional sinusoidal signal (S+N). The analytical PDFs show good agreement with the histogram of the Monte Carlo data.
Furthermore, an exemplary detection threshold $P_{\mathrm{th}}$ and the resulting error probabilities PR$_{\alpha}$ and PR$_{\beta}$ have been indicated. Note that the power levels and probabilities are arbitrary and do not correspond to a measurement run.} \label{fig:monte} \end{figure} \begin{figure*} \centerline{\psfig{file=second_3T_run_AllInOne.pdf, width=1\linewidth}} \caption{Resulting power spectrum from the ALP run in June 2013. (1) Overview over the full recorded span. (2) Zoom on $f_{\mathrm{test}}$, where the test tone signal is clearly visible. (3) Zoom on $f_{\mathrm{sys}}$, where no WISP signal is visible. The green shaded areas mark the frequency range where a signal would be expected. Blue dots indicate the spectral bins calculated by a direct DFT operation, the grey lines show the underlying continuous spectrum, approximated by zero padding the time domain data. All three plots share the same Y-axis.} \label{fig:resultSpect} \end{figure*} The resulting power spectrum from the ALPs run in June 2013 is shown in Fig.~\ref{fig:resultSpect}. The test tone signal is visible as a single peak, spanning only a single bin. Its absolute position on the frequency axis was offset by $\approx 34~\mu$Hz due to the finite frequency resolution of the RF oscillators. To accommodate these offsets, a window has been defined with a width of $15 \cdot \mathrm{BW_{res}} = 423~\mu$Hz around the frequency where an ALP signal would be expected. To decide the outcome of the experiment, we compare the power level of each frequency bin within this window to a predefined detection threshold $P_{\mathrm{th}}$. Defining the threshold is an exercise in statistical hypothesis testing. We specify: \begin{description} \item[$H_0$:] Null hypothesis. There is no WISP signal with a power of $\geq P_{\mathrm{sig}}$ and the frequency bins are governed exclusively by background noise. \item[$H_1$:] Alternative hypothesis.
There is a WISP signal added to the background noise with a power of $\geq P_{\mathrm{sig}}$. \end{description} The probability density functions (PDF) in both cases are known. Under $H_0$, the relevant frequency bin in the power spectrum will obey a central $\chi^2$ distribution. Under $H_1$, it will obey a non-central $\chi^2$ distribution, where the non-centrality parameter equals the WISP signal power $P_{\mathrm{sig}}$. Both distributions have two known parameters: the number of degrees of freedom is $k=4$, because the power spectrum is calculated from the magnitudes of independent real and imaginary parts and two subspectra have been averaged. Furthermore, the parameter $\sigma$, describing the variance of the distributions, is related to the average noise power by $\sigma = P_N / k$. As $P_N$ can be estimated from the frequency bins where no WISP signal is expected, this parameter is considered known. Details of the derivation of the PDFs are given in \cite{src:statPaper1, src:statPaper2, src:statPaper3}. The analytical PDFs have been cross-checked with a Monte Carlo simulation. We computed 10000 power spectra of synthetic input data and evaluated the histograms of two specific frequency bins -- one governed by noise and one with an additional signal. The corresponding analytical PDFs agree well with the histograms, as shown in Fig.~\ref{fig:monte}. We define the following probabilities for the hypothesis test: \begin{description} \item[PR$_\alpha' := 5\%$] is the probability for a false positive outcome of the experiment under the assumption of $H_0$ (we have discovered the WISP mistakenly). Note that we have to take the size of the WISP window into account. To obtain a false detection probability of 5~\% with respect to testing all 15 frequency bins, we need to set PR$_\alpha = $PR$_\alpha' / 15$. \item[PR$_\beta := 5 \%$] is the probability for a false negative outcome of the experiment under the assumption of $H_1$ (we have excluded the WISP mistakenly).
\end{description} The two probabilities correspond to areas under the two PDFs, above and below the detection threshold, as illustrated by Fig.~\ref{fig:monte}. With the above definitions and the known parameters of the PDFs, we can solve the system of equations numerically and obtain a value for the detection threshold $P_{\mathrm{th}}$ which corresponds to the two error probabilities. For the ALPs run in June 2013, the detection threshold is $P_{\mathrm{th}} = -212.6$~dBm. Figure~\ref{fig:resultSpect} shows that there is no peak exceeding this threshold within the WISP window. Therefore $H_1$ can be rejected and we can state, with a confidence level of 1-PR$_\alpha' = 95$~\%, that there is no signal with a power of $P_{\mathrm{sig}} \geq -210.1$~dBm in the measured data. This allows us to set an exclusion limit for ALPs. \section{\label{sec:exclRes}Achieved exclusion results} The most sensitive measurement run for HSPs has been carried out in September 2013 at CERN, recording three continuous time traces of 29~h length. For ALPs, the most sensitive run was carried out in June 2013 in cooperation with the Brain \& Behaviour Laboratory of Geneva University. We were able to operate the setup within the bore of a 3~T superconducting magnet, which is part of an MRI scanner. Over the course of one weekend, $2 \times 10$~h of experimental data were recorded. The technical parameters of these two runs are summarized in Table~\ref{tbl:paramHSP} and Table~\ref{tbl:paramALP}. Note that the experimental apparatus (apart from the superconducting magnet) and the method of data evaluation were identical for both measurement runs. As no WISPs were detected, the corresponding exclusion limits in comparison to other experiments are shown in Fig.~\ref{fig:exclPlotALP} and Fig.~\ref{fig:exclPlotHSP}.
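The numerical threshold determination described above can be cross-checked with a small Monte Carlo, mirroring the simulation of Fig.~\ref{fig:monte}. The sketch below uses illustrative parameters (not those of an actual run): it draws per-bin powers under $H_0$, places the threshold at the $(1-\mathrm{PR}_\alpha)$ quantile, and estimates the false-negative probability under $H_1$ for an assumed signal power:

```python
import numpy as np

rng = np.random.default_rng(1)

K = 4                 # degrees of freedom of the chi-squared distribution
N_MC = 200_000        # number of Monte Carlo samples
P_N = 1.0             # average noise power per bin (arbitrary units)
PR_ALPHA = 0.05 / 15  # per-bin false-positive probability (15-bin window)
P_SIG = 4.0 * P_N     # assumed WISP signal power, for illustration only

# H0: bin power is a sum of K squared zero-mean Gaussian components,
# scaled so that the mean power equals P_N.
noise = rng.normal(0.0, np.sqrt(P_N / K), size=(N_MC, K))
p_h0 = np.sum(noise ** 2, axis=1)

# Detection threshold: (1 - PR_ALPHA) quantile of the H0 distribution.
p_th = np.quantile(p_h0, 1.0 - PR_ALPHA)

# H1: a sinusoid adds a deterministic offset to the quadrature
# components (non-central chi-squared distribution of the bin power).
p_h1 = np.sum((noise + np.sqrt(P_SIG / K)) ** 2, axis=1)
pr_beta = float(np.mean(p_h1 < p_th))  # false-negative probability
```

In the experiment the same logic is applied analytically: PR$_\beta$ is fixed at 5~\% and the system of equations is solved for the threshold and the excludable signal power.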
\begin{table}[t] \centering \caption{Parameters of the HSP run in September 2013.} \label{tbl:paramHSP} \begin{tabular}{c} \toprule $f_{\mathrm{sys}} = 2.956610$~GHz \quad $Q_{\mathrm{det}} = 22739$ \quad $Q_{\mathrm{em}} = 23210$\\[0.1 cm] $P_{\mathrm{sig}} = 3.72 \cdot 10^{-25}$ W \quad $P_{\mathrm{em}} = 35.6$ W \quad $|G|_{\mathrm{max}} = 0.51$ \\ \bottomrule \end{tabular} \end{table} \begin{table}[t] \centering \caption{Parameters of the ALP run in June 2013.} \label{tbl:paramALP} \begin{tabular}{c} \toprule $f_{\mathrm{sys}} = 1.739990$ GHz \quad $Q = 11392, 12151$ \quad $B = 2.88$~T\\[0.1 cm] $P_{\mathrm{sig}} = 9.8 \cdot 10^{-25}$ W \quad $P_{\mathrm{em}} = 47.9$ W \quad $|G|_{\mathrm{max}} = 0.94$ \\ \bottomrule \end{tabular} \end{table} \begin{figure} \centerline{\psfig{file=ALP_excl_plot.pdf, width=1\linewidth}} \caption{CROWS: exclusion limits for ALPs for the measurement run in June 2013 in a 3~T magnet. Confidence level: 95~\%. ALPS-I: exclusion limits from the most sensitive optical LSW experiment to date \cite{src:alps1}. CAST: helioscope observing the sun to search for solar ALPs \cite{src:CAST_limits}. More details on the other experiments can be found in \cite{src:LowEnergyFrontier}. } \label{fig:exclPlotALP} \end{figure} \begin{figure} \centerline{\psfig{file=HSP_excl_plot.pdf, width=1\linewidth}} \caption{CROWS: exclusion limits for HSPs for the measurement run in September 2013. Confidence level: 95~\%. UWA LSW \cite{src:UWA_LSW} and ADMX LSW \cite{src:ADMX_HSP_LSW} are similar microwave-based LSW experiments. More details on the other experiments can be found in \cite{src:LowEnergyFrontier}.} \label{fig:exclPlotHSP} \end{figure} \section{\label{sec:conclusion}Conclusion} No HSPs or ALPs were observed in the most sensitive measurement runs of the CROWS experiment. For HSPs, the experiment was sufficiently sensitive to improve previous exclusion limits.
For ALPs, it was -- in a small mass range -- more sensitive than other purely laboratory-based experiments (namely first-generation laser LSW experiments such as ALPS-I \cite{src:alps1}), but less sensitive than helioscope experiments like the CERN Axion Solar Telescope CAST \cite{src:CAST_limits}. This is the first time ALPs have been probed by a microwave-based LSW experiment. Several technical challenges had to be solved, such as providing $> 300$~dB of EM shielding between the cavities, keeping the cavities frequency-matched during measurement runs of up to 29~h, and filtering the sinusoidal signal with a bandwidth of $\mathrm{BW_{res}} < 30~\mu$Hz to discriminate it from background noise. There is still significant potential for improvement, as the sensitivity of the experiment scales with $B$ and $1/{f_{\mathrm{sys}}}$ for ALPs. Therefore the setup might be upgraded with a stronger magnet or with larger cavities operating at a lower frequency. \section{\label{sec:Ack}Acknowledgements} The authors would like to thank K.~Baker, P.~Blanc, A.~Collar, A.~Malagon, J.~Troschka and K.~Zioutas for a large number of inspiring discussions. We are especially grateful to C.~Burrage for bringing the right people together at the right time, to M.~Wendt for reading the manuscript and to the BE-RF mechanical workshop at CERN for practical assistance. Exclusion plot comparison data were provided with friendly permission from J.~Jaeckel and J.~Redondo. We would like to express our gratitude for the support from R.~Jones, E.~Jensen and the BE department at CERN. \bibliographystyle{apsrev}
\section{Introduction}\label{sec:intro} We consider two-player zero-sum {\em graph games}; a fundamental model with applications, e.g., in multi-agent systems \cite{AHK02}. A graph game is played on a finite directed graph as follows. A token is placed on a vertex and the players move it throughout the graph to produce an infinite path, which determines the payoff of the game. Traditional graph games are {\em turn-based}: the players alternate turns in moving the token. {\em Bidding games} \cite{LLPU96,LLPSU99} are graph games in which an ``auction'' (bidding) determines which player moves the token in each turn. The concrete bidding mechanisms that we consider proceed as follows. In each turn, both players simultaneously submit bids, where a bid is legal if it does not exceed the available budget. The higher bidder ``wins'' the bidding and moves the token. The mechanisms differ in their payment schemes, which are classified according to two orthogonal properties. {\em Who pays:} in {\em first-price} bidding only the higher bidder pays the bid and in {\em all-pay} bidding both players pay their bids. {\em Who is the recipient:} in {\em Richman} bidding (named after David Richman) payments are made to the other player and in {\em poorman} bidding payments are made to the ``bank'', i.e., the bid is lost. As a rule of thumb, bidding games under all-pay and poorman bidding are respectively technically more challenging than first-price and Richman bidding. More on this later. In terms of applications, however, we argue below that poorman bidding is often the more appropriate bidding mechanism. \paragraph{Applications.} A central application of graph games is {\em reactive synthesis}~\cite{PR89}: given a specification, the goal is to construct a {\em controller} that ensures correct behavior in an adversarial environment. 
Synthesis is solved by constructing a turn-based graph game in which Player~$1$\xspace is associated with the controller and Player~$2$\xspace with the environment, and searching for a winning Player~$1$\xspace strategy. Bidding games extend the modeling capabilities of graph games. For example, they model ongoing and stateful auctions in which budgets do not contribute to the players' utilities. Advertising campaigns are one such setting: the goal is to maximize visibility using a pre-allocated advertising budget. By modeling this setting as a bidding game and solving for Player~$1$\xspace, we obtain a bidding strategy with guarantees against any opponent\footnote{A worst-case modelling assumes that the other bidders cooperate against Player~$1$\xspace.}. Maximizing visibility can be expressed as a mean-payoff objective (defined below). All-pay poorman bidding is particularly appealing since it constitutes a dynamic version of the well-known Colonel Blotto games \cite{Bor21}. Rather than thinking of the budgets as money, we think of them as resources at the disposal of the players, like time or energy. Then, deciding how much to bid represents the effort that a player invests in a competition, e.g., investing time to prepare for a job interview, where the player that invests more wins the competition. \paragraph{Prior work -- full-information bidding games.} The central quantity in bidding games is the {\em initial ratio} between the players' budgets. Formally, for $i \in \set{1,2}$, let $B_i$ be Player~$i$\xspace's initial budget. Then, Player~$1$\xspace's initial ratio is $B_1/(B_1 + B_2)$. A {\em random-turn game} \cite{PSSW09} with parameter $p \in [0,1]$ is similar to a bidding game only that instead of bidding, in each turn, we toss a coin with probability $p$ that determines which player moves the token. Formally, a random-turn game is a special case of a {\em stochastic game} \cite{Con92}. 
{\em Qualitative objectives.} In {\em reachability} games, each player is associated with a target vertex, the game ends once a target is reached, and the winner is the player whose target is reached. Reachability bidding games were studied in \cite{LLPU96,LLPSU99}. It was shown that, for first-price reachability games, a {\em threshold ratio} exists, which, informally, is a necessary and sufficient initial ratio for winning the game. Moreover, it was shown that first-price Richman-bidding games are equivalent to uniform random-turn games (and only under Richman bidding); namely, the threshold ratio in a bidding game corresponds to the value of a uniform random-turn game. All-pay reachability games are technically more challenging. Optimal strategies might be mixed and may require sampling from infinite-support probability distributions even in extremely simple games \cite{AIT20}. {\em Mean-payoff games.} Mean-payoff games are infinite-duration quantitative games. Technically, each vertex of the graph is assigned a weight, and the {\em payoff} of an infinite play is the long-run average of the weights along the path. The payoff is Player~$1$\xspace's reward and Player~$2$\xspace's cost; thus, we refer to them as \text{Max}\xspace and \text{Min}\xspace, respectively. For example, consider the ``bowtie'' game ${\cal G}_{\bowtie}$, depicted in Fig.~\ref{fig:bowtie}. The payoff in ${\cal G}_{\bowtie}$ corresponds to the fraction of biddings that \text{Max}\xspace wins. Informally, ${\cal G}_{\bowtie}$ models a setting in which, each day, a publisher sells an ad slot, and \text{Max}\xspace's objective is to maximize visibility: the number of days that his ad is displayed throughout the year. Unlike for reachability games, intricate equivalences between mean-payoff bidding games and random-turn games are known for all the mechanisms described above~\cite{AHC19,AHI18,AHZ21,AJZ21}.
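To make the payoff definition concrete, the following Python sketch computes the average weight along a finite play prefix; the weights ($1$ on \text{Max}\xspace's vertex, $0$ on \text{Min}\xspace's) are an illustrative choice under which the payoff is exactly the fraction of biddings that \text{Max}\xspace wins:

```python
# Sketch: the mean-payoff of a play is the long-run average weight it
# traverses. With illustrative weights 1 (Max's vertex) and 0 (Min's vertex)
# in the bowtie game, the average over a prefix is the fraction of biddings
# that Max has won so far.

def mean_payoff_prefix(weights):
    """Average weight of a finite play prefix v_1, ..., v_n."""
    return sum(weights) / len(weights)

# A play in which Max wins 3 of every 4 biddings:
play = [1, 1, 1, 0] * 25
print(mean_payoff_prefix(play))  # 0.75
```

The actual payoff is the $\liminf$ of these averages as the prefix length grows.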
\begin{example} We illustrate the equivalences between full-information bidding games and random-turn games. Consider the ``bowtie'' game ${\cal G}_{\bowtie}$ (see Fig.~\ref{fig:bowtie}). For $p \in [0,1]$, the random-turn game $\textsf{RT}({\cal G}_{\bowtie}, p)$ that uses a coin with bias $p$ is depicted in Fig.~\ref{fig:RTBowtie}. Its expected payoff is $p$. Suppose that the initial ratio is $r \in (0,1)$. Under first-price Richman bidding, the optimal payoff in ${\cal G}_{\bowtie}$ does not depend on the initial ratio: no matter what $r$ is, the optimal payoff that \text{Max}\xspace can guarantee is arbitrarily close to $0.5$, hence the equivalence with $\textsf{RT}({\cal G}_{\bowtie}, 0.5)$. Under first-price poorman bidding, the optimal payoff {\em does} depend on the initial ratio: roughly, the optimal payoff that \text{Max}\xspace can guarantee is $r$, hence the equivalence with $\textsf{RT}({\cal G}_{\bowtie}, r)$. For all-pay bidding, pure strategies are only ``useful'' under all-pay poorman bidding and only when $r > 0.5$, where \text{Max}\xspace can guarantee an optimal payoff of $\frac{2r -1}{r}$. The results extend to general strongly-connected games (see Thm.~\ref{thm:FI-MP}).\hfill$\triangle$ \end{example} \begin{figure} \begin{minipage}[b]{0.45\linewidth} \centering \includegraphics[height=1.3cm]{lollipop.pdf} \caption{The mean-payoff game ${\cal G}_{\bowtie}$ with the weights in the vertices.} \label{fig:bowtie} \end{minipage} \hspace{0.05\linewidth} \begin{minipage}[b]{0.45\linewidth} \centering \includegraphics[height=1.3cm]{RTlollipop.pdf} \caption{The simplified random-turn game $\textsf{RT}({\cal G}_{\bowtie}, p)$, for $p \in [0,1]$.} \label{fig:RTBowtie} \end{minipage} \end{figure} \paragraph{Our contributions -- partial-information bidding games.} In most auction domains, bidders are not precisely informed of their opponent's budget. Bidding games, however, have only been studied as full-information games. We initiate the study of bidding games in which the players are partially informed of the opponent's budget. Specifically, we study bidding games in which the two players' budgets are drawn from a known probability distribution, and the players' goal is to maximize their expected utility. We first show that the results on qualitative objectives as well as first-price Richman bidding transfer to the partial-information setting. We then turn to study mean-payoff poorman-bidding games, which are significantly more challenging.
We focus on {\em one-sided} partial-information games in which only Player~$2$\xspace's budget is drawn from a probability distribution. Thus, Player~$1$\xspace is {\em partially informed} and Player~$2$\xspace is {\em fully informed} of the opponent's budget. We argue that one-sided partial-information games are practically well-motivated. Indeed, one-sided partial information is a worst-case modelling: the utility that an optimal strategy for Player~$1$\xspace guarantees in the game is a lower bound on the utility that it will guarantee when deployed against the concrete environment. We illustrate our results in the following example. \begin{example} \label{ex:partially-informed} Consider the bowtie game ${\cal G}_{\bowtie}$ (Fig.~\ref{fig:bowtie}), where \text{Max}\xspace (the partially-informed player) starts with a budget of $B$ and \text{Min}\xspace (the fully-informed player) starts with a budget that is drawn uniformly at random from $\text{supp}(\gamma) = \set{C_1,C_2}$. We describe an optimal strategy for \text{Max}\xspace under first-price poorman bidding. \text{Max}\xspace carefully chooses an $x \in [B \cdot \frac{C_1}{C_2},B]$ and divides his budget into two ``wallets'': the first with budget $x$ and the second with budget $B-x$. He initially uses his first wallet to play an optimal full-information strategy assuming the initial budgets are $x$ and $C_1$, which guarantees a payoff of at least $p_1 = \frac{x}{C_1 + x}$. If \text{Min}\xspace spends more than $C_1$, i.e., her initial budget was in fact $C_2$, then \text{Max}\xspace proceeds to use his second wallet against \text{Min}\xspace's remaining budget, which guarantees a payoff of at least $p_2 = \frac{B-x}{B-x+C_2-C_1}$. Thus, the expected payoff is at least $0.5\cdot (p_1 + p_2)$, and \text{Max}\xspace simply chooses an $x$ that maximizes this expression.
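For the bowtie game, this maximization is easy to carry out numerically. The following Python sketch uses the illustrative budgets $B=1$, $C_1=1$, and $C_2=2$, for which the maximum is attained at $x=0.5$ with expected payoff $\frac{1}{3}$:

```python
# Sketch: Max's guaranteed expected payoff in the bowtie game as a function
# of the wallet split x, with illustrative budgets B = 1, C1 = 1, C2 = 2.
B, C1, C2 = 1.0, 1.0, 2.0

def expected_payoff(x):
    p1 = x / (C1 + x)                 # payoff when Min's budget is C1
    p2 = (B - x) / (B - x + C2 - C1)  # payoff when Min's budget is C2
    return 0.5 * (p1 + p2)

# Grid search over the legal range x in [B * C1 / C2, B].
lo = B * C1 / C2
xs = [lo + i * (B - lo) / 1000 for i in range(1001)]
best_x = max(xs, key=expected_payoff)
print(best_x, expected_payoff(best_x))  # x = 0.5, expected payoff 1/3
```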
Note that the constraint that $x \geq B \cdot \frac{C_1}{C_2}$ implies that $p_1 \geq p_2$, thus \text{Min}\xspace has an incentive to play so that \text{Max}\xspace proceeds to use his second wallet. We show that this strategy is optimal, and extend the technique to obtain optimal strategies in general strongly-connected games under first-price and all-pay poorman bidding. Finally, we show that the optimal payoff that \text{Min}\xspace can guarantee in ${\cal G}_{\bowtie}$ is obtained by a surprisingly simple strategy: when her initial budget is $C_i$, for $i \in \set{1,2}$, \text{Min}\xspace follows an optimal full-information strategy for ratio $B/(B+C_i)$. That is, she ``reveals'' her true budget in the first round and cannot gain utility by hiding this information. The technical challenge is to show that this strategy is optimal. \hfill$\triangle$ \end{example} Our results show that, contrary to turn-based games, stochastic games, and full-information bidding games, there is a gap between the optimal payoffs that the players can guarantee with pure strategies. Thus, the {\em value} does not necessarily exist in partial-information mean-payoff bidding games under pure strategies. \paragraph{Related work.} The seminal book \cite{AMS95} studies the mean-payoff game ${\cal G}_{\bowtie}$ under one-sided partial information with semantics different from the ones we study. Let $L$ and $R$ denote the two vertices of ${\cal G}_{\bowtie}$. \text{Min}\xspace has partial information of the weights of $L$ and $R$, which, before the game begins, are drawn from a known probability distribution. \text{Max}\xspace, the fully-informed player, knows the weights. In each turn, \text{Max}\xspace chooses $L$ or $R$, followed by \text{Min}\xspace who either ``accepts'' or ``rejects'' \text{Max}\xspace's choice, thus both players can affect the movement of the token. The value in the game is shown to exist.
Interestingly, and similar in spirit to our results, there are cases in which \text{Max}\xspace cannot use his knowledge advantage and his optimal strategy reveals which of the two vertices he prefers. One-sided partial information has also been considered in turn-based graph games, e.g., \cite{Reif84,RC+07,DDR06}. {\em Discrete bidding games} were studied in \cite{DP10}; namely, budgets are given in coins, and the minimal positive bid a player can make is a single coin. Tie-breaking is a significant factor in such games~\cite{AAH21}. Non-zero-sum bidding games were studied in \cite{MKT18}. See also the survey~\cite{AH20}. \section{Preliminaries} \label{sec:prelim} \paragraph{Strategies in bidding games.} A bidding game is played on a directed graph $\zug{V, E}$. A {\em strategy} in any graph game is a function from histories to actions. In bidding games, a history consists of the sequence of vertices that were visited and the bids made by the two players. We stress that the history does not contain the current state of the budgets. Rather, a player can compute his opponent's current budget from the history of bids, provided that he knows her initial budget. We formalize the {\em available budget} following a history. For $i \in \set{1,2}$, suppose the initial budget of Player~$i$\xspace is $B_i$. For a history $h$, we define the {\em investments} of Player~$i$\xspace throughout $h$, denoted $\text{Inv}_i(h)$. In all-pay bidding, $\text{Inv}_i(h)$ is the sum of the bids made by Player~$i$\xspace throughout $h$, and in first-price bidding, it is the sum only over the winning bids. We denote by $B_i(h)$ Player~$i$\xspace's available budget following $h$. Under Richman bidding, winning bids are paid to the opponent, thus $B_i(h) = B_i - \text{Inv}_i(h) + \text{Inv}_{3-i}(h)$. Under poorman bidding, winning bids are paid to the bank, thus $B_i(h) = B_i - \text{Inv}_i(h)$.
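The four combinations of payment schemes yield four budget-update rules, which the following Python sketch summarizes (the encoding of a history as a list of bid--bid--winner triples is an illustrative choice):

```python
# Sketch: Player i's available budget B_i(h) after a history h of biddings.
# A history is encoded as a list of (b1, b2, winner) triples, winner in {1, 2}.

def available_budget(i, initial, history, all_pay, poorman):
    def inv(j):
        # In all-pay bidding every bid is invested; in first-price bidding
        # only the winning bids are.
        return sum(b1 if j == 1 else b2
                   for (b1, b2, winner) in history
                   if all_pay or winner == j)
    if poorman:  # bids are paid to the bank
        return initial - inv(i)
    # Richman: winning bids are paid to the opponent
    return initial - inv(i) + inv(3 - i)

h = [(0.3, 0.2, 1), (0.1, 0.4, 2)]
print(available_budget(1, 1.0, h, all_pay=False, poorman=False))  # B_1 - Inv_1 + Inv_2
```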
Given a history, a strategy prescribes an action, which, in a bidding game, is a pair $\zug{b,u} \in \mathbb{R} \times V$, where $b$ is a bid and $u$ is the vertex to move to upon winning. We restrict the actions of the players following a history $h$ so that (1) the bid does not exceed the available budget, i.e., following a history $h$, a legal bid for Player~$i$\xspace is a bid in $[0, B_i(h)]$, and (2) a player must choose a neighbor of the vertex that the token is placed on. We restrict attention to strategies that choose legal actions for all histories. Note that we consider only {\em pure} strategies and disallow {\em mixed} strategies (strategies that allow a random choice of action). \begin{definition} For $i \in \set{1,2}$, we denote by $\S_i(B_i)$ the set of legal strategies for Player~$i$\xspace with an initial budget of $B_i$. Note that with a higher initial budget, there are more strategies to choose from, i.e., for $B'_i > B_i$, we have $\S_i(B_i) \subseteq \S_i(B'_i)$. \end{definition} The central quantity in bidding games is the initial ratio, defined as follows. \begin{definition} {\bf Budget ratio.} When Player~$i$\xspace's budget is $B_i$, for $i \in \set{1,2}$, we say that Player~$i$\xspace's ratio is $\frac{B_i}{B_1 + B_2}$. \end{definition} \paragraph{Plays.} Consider initial budgets $B_1$ and $B_2$ for the two players, two strategies $f \in \S_1(B_1)$ and $g \in \S_2(B_2)$, and an initial vertex $v$. The triple $f$, $g$, and $v$ gives rise to a unique {\em play}, denoted $\textsf{play}(v, f,g)$. The construction of $\textsf{play}(v, f, g)$ is inductive and is intuitively obtained by letting the players play according to $f$ and $g$. Initially, we place the token on $v$, thus the first history of the game is $h=v$. Suppose a history $h$ has been played. Then, the next actions that the players choose are respectively $\zug{u_1, b_1} = f(h)$ and $\zug{u_2, b_2} = g(h)$.
If $b_1 > b_2$, then Player~$1$\xspace wins the bidding and the token moves to $u_1$, and otherwise Player~$2$\xspace wins the bidding and the token moves to $u_2$. Note that ties are resolved in favor of Player~$2$\xspace; this choice is arbitrary. The play continues indefinitely. Since the players always choose neighboring vertices, each play corresponds to an infinite path in $\zug{V, E}$. For $n \in \mathbb{N}$, we use $\textsf{play}_n(v, f,g)$ to denote its finite prefix of length $n$. We sometimes omit the initial vertex from the play when it is clear from the context. \paragraph{Objectives.} We consider zero-sum games. An {\em objective} assigns a {\em payoff} to a play, which can be thought of as Player~$1$\xspace's reward and Player~$2$\xspace's penalty. We thus sometimes refer to Player~$1$\xspace as \text{Max}\xspace and Player~$2$\xspace as \text{Min}\xspace. We denote by $\textsf{payoff}(f, g, v)$ the payoff of the play $\textsf{play}(f, g, v)$. \paragraph{Qualitative objectives.} The payoff in games with qualitative objectives is in $\set{-1,1}$. We say that Player~$1$\xspace\ {\em wins} the play when the payoff is $1$. We consider two qualitative objectives. (1) {\em Reachability.} There is a distinguished target vertex $t$ and a play is winning for Player~$1$\xspace iff it visits $t$. (2) {\em Parity.} Each vertex is labeled by an index in $\set{1,\ldots,d}$ and a play is winning for Player~$1$\xspace iff the highest index that is visited infinitely often is odd. Parity objectives are important in practice; e.g., reactive synthesis~\cite{PR89} is reduced to the problem of solving (turn-based) parity games.
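For plays that are ultimately periodic (a finite prefix followed by a repeated cycle), the parity condition is easy to evaluate; a minimal Python sketch (the lasso encoding is an illustrative choice):

```python
# Sketch: in a lasso-shaped play, the indices visited infinitely often are
# exactly those on the repeated cycle, so Player 1 wins the parity objective
# iff the highest index on the cycle is odd. The prefix is irrelevant.

def player1_wins_parity(prefix_indices, cycle_indices):
    return max(cycle_indices) % 2 == 1

print(player1_wins_parity([2, 4], [1, 3]))  # True: highest recurring index is 3
print(player1_wins_parity([5], [2, 4]))     # False: highest recurring index is 4
```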
\paragraph{Mean-payoff games.} The quantitative objective that we consider is {\em mean-payoff}. Every vertex $v$ in a mean-payoff game has a weight $w(v)$ and the payoff of an infinite play is the long-run average weight that it traverses. Formally, the payoff of an infinite path $v_1, v_2, \ldots$ is $\liminf_{n \to \infty} \frac{1}{n} \sum_{1 \leq i < n} w(v_i)$. Note that the definition favors \text{Min}\xspace since it uses $\liminf$. \paragraph{Values in full-information bidding games.} We are interested in finding the optimal payoff that a player can {\em guarantee} with respect to an initial budget ratio. Let $c \in \mathbb{R}$ and let $B_1$ and $B_2$ be initial budgets. We say that Player~$1$\xspace can {\em guarantee} a payoff of $c$ if he can commit to a strategy $f \in \S_1(B_1)$ and reveal it, and no matter which strategy $g \in \S_2(B_2)$ Player~$2$\xspace responds with, we have $\textsf{payoff}(f, g) \geq c$. {\em Player~$1$\xspace's value} is the maximal $c$ that he can guarantee, and Player~$2$\xspace's value is defined dually. Note that there might be a gap between the two players' values. When Player~$1$\xspace's value coincides with Player~$2$\xspace's value, we say that the {\em value} exists in the game. \subsection{Partial-information bidding games} A partial-information bidding game is ${\cal G} = \zug{V, E, \alpha, \gamma_1, \gamma_2}$, where $\zug{V,E}$ is a directed graph, $\alpha$ is an objective, which we elaborate on below, and the {\em budget distribution} $\gamma_i$ is a probability distribution from which Player~$i$\xspace's initial budget is drawn, for $i \in \set{1,2}$. The {\em support} of a probability distribution $\gamma: \mathbb{Q} \rightarrow [0,1]$ is $\text{supp}(\gamma) = \set{x \in \mathbb{Q}: \gamma(x) > 0}$. We restrict attention to finite-support probability distributions. For $i \in \set{1,2}$, the probability that Player~$i$\xspace's initial budget is $B_i \in \text{supp}(\gamma_i)$ is $\gamma_i(B_i)$.
\begin{definition} {\bf One-sided partial information.} We say that a game has one-sided partial information when $|\text{supp}(\gamma_1)|=1$ and $|\text{supp}(\gamma_2)| > 1$. We then call Player~$1$\xspace the {\em partially-informed player} and Player~$2$\xspace the {\em fully-informed player}. \end{definition} We turn to define values in partial-information games. The intuition is similar to the full-information case, only that each player selects a collection of strategies, one for each possible initial budget, and we take the expectation over the payoffs that each pair of strategies achieves. The $\delta$ in the following definition allows us to avoid corner cases due to bidding ties, and the $\epsilon$ is crucial in order to apply the results on full-information mean-payoff bidding games. \begin{definition}{\bf (Values in partial-information bidding games).} \label{def:values} Consider a partial-information bidding game ${\cal G} =\zug{V, E, \alpha, \beta, \gamma}$, where $\beta$ and $\gamma$ denote the budget distributions of Player~$1$\xspace and Player~$2$\xspace, respectively. Suppose $\text{supp}(\beta) = \set{B_1, \ldots, B_m}$ and $\text{supp}(\gamma) = \set{C_1, \ldots, C_n}$. We define Player~$1$\xspace's value, denoted $\textsf{val}^\downarrow({\cal G}, \beta, \gamma)$, below; Player~$2$\xspace's value, denoted $\textsf{val}^\uparrow({\cal G}, \beta, \gamma)$, is defined symmetrically. We define that $\textsf{val}^\downarrow({\cal G}, \beta, \gamma) = c \in \mathbb{R}$ if for every $\delta,\epsilon > 0$, \begin{itemize} \item There is a collection $\big(f_B \in \S_1(B + \delta) \big)_{B \in \text{supp}(\beta)}$ of Player~$1$\xspace strategies, such that for every collection $\big(g_C \in \S_2(C) \big)_{C \in \text{supp}(\gamma)}$ of Player~$2$\xspace strategies, we have $\sum_{B,C} \beta(B) \cdot \gamma(C) \cdot \textsf{payoff}(f_B, g_C) \geq c - \epsilon$.
\item For every collection $\big(f_B \in \S_1(B) \big)_{B \in \text{supp}(\beta)}$ of Player~$1$\xspace strategies, there is a collection $\big(g_C \in \S_2(C + \delta) \big)_{C \in \text{supp}(\gamma)}$ of Player~$2$\xspace strategies such that $\sum_{B,C} \beta(B) \cdot \gamma(C) \cdot \textsf{payoff}(f_B, g_C) \leq c + \epsilon$. \end{itemize} Note that $\textsf{val}^\downarrow({\cal G}, \beta, \gamma) \leq \textsf{val}^\uparrow({\cal G}, \beta, \gamma)$ and when there is equality, we say that the value exists, and denote it by $\textsf{val}({\cal G}, \beta, \gamma)$. \end{definition} The value in mean-payoff games is often called the {\em mean-payoff value}. In mean-payoff games we use $\textsf{MP}^\downarrow, \textsf{MP}^\uparrow$, and $\textsf{MP}$ instead of $\textsf{val}^\downarrow, \textsf{val}^\uparrow$, and $\textsf{val}$, respectively. When ${\cal G}$ is full-information and the budget ratio is $r$, we use $\textsf{MP}({\cal G}, r)$ instead of writing the two budgets. \stam{ \begin{remark}{\bf (Revealing the initial budget).} We note that the definition of strategies in partial- and full-information bidding games is the same; namely, a strategy is a function from histories to actions. Recall that histories only contain the previous bids performed. Thus, while in full-information games this suffices to compute the current budget of the opponent, in partial-information games a player only knows a distribution of possible current budgets of the opponent. In other words, when Player~$1$\xspace plays according to $f$ and Player~$2$\xspace according to $g_1$ and $g_2$, then as long as $g_1$ and $g_2$ bid the same, the bids made by $f$ are the same and so is the generated history. On the other hand, the opponent {\em reveals} her true initial budget when following a history $h$, we have $g_1(h) \neq g_2(h)$. 
\end{remark} } \section{Partial-Information Qualitative First-Price Bidding Games} \label{sec:qual} In this section, we focus on first-price bidding and show that the value exists in partial-information bidding games with qualitative objectives. The proof adapts results from the full-information setting, which we survey first. \begin{definition} {\bf (Threshold ratios in full-information games).} Consider a full-information first-price bidding game with a qualitative objective. Suppose that the sum of initial budgets is $1$ and that the game starts at $v$. The threshold ratio in $v$, denoted $\texttt{Th}\xspace(v)$, is a value $t$ such that for every $\epsilon > 0$: \begin{itemize} \item Player~$1$\xspace wins when his ratio is greater than $\texttt{Th}\xspace(v)$; namely, when the initial budgets are $t+\epsilon$ and $1-t-\epsilon$. \item Player~$2$\xspace wins when Player~$1$\xspace's ratio is less than $\texttt{Th}\xspace(v)$; namely, when the initial budgets are $t-\epsilon$ and $1-t+\epsilon$. \end{itemize} \end{definition} Existence of threshold ratios for full-information reachability games was shown in \cite{LLPU96,LLPSU99} and later extended to full-information parity games in \cite{AHC19,AHI18}. \begin{theorem}\cite{LLPU96,LLPSU99,AHC19,AHI18} Threshold ratios exist in every vertex of a parity game. \end{theorem} The following theorem extends these results to the partial-information setting. \begin{theorem} \label{thm:partial-qual} Consider a partial-information parity first-price bidding game ${\cal G} = \zug{V, E, \alpha, \beta, \gamma}$ and a vertex $v \in V$. Let $W = \set{\zug{B,C}: B \in \text{supp}(\beta), \ C \in \text{supp}(\gamma), \text{ and } \texttt{Th}\xspace(v) < \frac{B}{B+C}}$. Then, the value of ${\cal G}$ in $v$ is $\sum_{\zug{B,C} \in W} \beta(B) \cdot \gamma(C)$. \end{theorem} \begin{proof} Consider the following collection of strategies for Player~$1$\xspace.
For every $B \in \text{supp}(\beta)$, let $C \in \text{supp}(\gamma)$ be the maximal initial budget such that Player~$1$\xspace wins with initial budgets $B$ and $C$ from $v$. That is, $C$ is the maximal element such that $\frac{B}{B+C} > \texttt{Th}\xspace(v)$. We fix Player~$1$\xspace's strategy for initial budget $B$ to be a winning strategy $f$ against $C$. It is not hard to show that $f$ wins against any Player~$2$\xspace strategy $g \in \S_2(C')$, for $C' \leq C$. To show that Player~$1$\xspace cannot guarantee a higher payoff, we consider the dual collection of strategies for Player~$2$\xspace: for every $C \in \text{supp}(\gamma)$, Player~$2$\xspace selects the maximal $B \in \text{supp}(\beta)$ such that $\frac{B}{B+C} \leq \texttt{Th}\xspace(v)$, and plays according to a winning strategy for these budgets. Recall that we let Player~$2$\xspace win bidding ties, thus she wins the game when $\frac{B}{B+C} = \texttt{Th}\xspace(v)$. Similar to the above, Player~$2$\xspace wins for initial budgets $C$ and $B' \leq B$. To conclude, for each pair $\zug{B, C} \in \text{supp}(\beta)\times \text{supp}(\gamma)$, if $\frac{B}{B+C} > \texttt{Th}\xspace(v)$, then Player~$1$\xspace wins, and if $\frac{B}{B+C} \leq \texttt{Th}\xspace(v)$, then Player~$2$\xspace wins. Both players' strategy collections are chosen independently of the opponent's realized initial budget, hence the theorem follows. \end{proof} \section{Partial-Information Mean-Payoff Bidding Games} \label{sec:MP} In this section we study mean-payoff bidding games. Throughout this section we focus on games played on {\em strongly-connected graphs}. We start by surveying results on full-information games. The most technically-challenging results concern one-sided partial-information poorman-bidding games. We first develop optimal strategies for the partially-informed player, and then show that the value does not necessarily exist under pure strategies.
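Before moving on, we note that the value in Thm.~\ref{thm:partial-qual} is directly computable once the threshold ratio $\texttt{Th}\xspace(v)$ is known. A Python sketch (the threshold and the budget distributions below are illustrative):

```python
# Sketch of the value in Thm. thm:partial-qual: sum the probability mass of
# the budget pairs (B, C) with Th(v) < B / (B + C). Recall that Player 2
# wins bidding ties, so pairs with B / (B + C) = Th(v) do not count.

def qualitative_value(threshold, beta, gamma):
    """beta, gamma: dicts mapping initial budgets to their probabilities."""
    return sum(pB * pC
               for B, pB in beta.items()
               for C, pC in gamma.items()
               if threshold < B / (B + C))

beta = {1.0: 1.0}             # Player 1's budget is fixed (one-sided setting)
gamma = {1.0: 0.5, 3.0: 0.5}  # Player 2's budget is 1 or 3, uniformly
print(qualitative_value(0.4, beta, gamma))  # 0.5: only the pair (1, 1) wins
```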
\subsection{Full-information mean-payoff bidding games}\label{sec:MP-FI} We show equivalences between bidding games and a class of stochastic games~\cite{Con92} called random-turn games, which are defined formally as follows. \begin{definition} {\bf (Random-turn games).} \label{def:RT} Consider a strongly-connected mean-payoff bidding game ${\cal G}$. For $p \in [0,1]$, the random-turn game that corresponds to ${\cal G}$ w.r.t. $p$, denoted $\textsf{RT}({\cal G}, p)$, is a game in which instead of bidding, in each turn, we toss a (biased) coin to determine which player moves the token: Player~$1$\xspace and Player~$2$\xspace are respectively chosen with probability $p$ and $1-p$. Formally, $\textsf{RT}({\cal G}, p)$ is constructed as follows. Every vertex $v$ in ${\cal G}$ is replaced by three vertices $v_N, v_1$, and $v_2$. The vertex $v_N$ simulates the coin toss: it has an outgoing edge to $v_1$ with probability $p$ and an outgoing edge to $v_2$ with probability $1-p$. For $i \in \set{1,2}$, vertex $v_i$ simulates Player~$i$\xspace winning the coin toss: it is controlled by Player~$i$\xspace and has an outgoing edge to $u_N$, for every neighbor $u$ of $v$. The weights of $v_N, v_1$, and $v_2$ coincide with the weight of $v$. The mean-payoff value of $\textsf{RT}({\cal G}, p)$, denoted $\textsf{MP}\big(\textsf{RT}({\cal G}, p)\big)$, is the optimal expected payoff that the two players can guarantee, and it is known to exist \cite{Put05}. Since ${\cal G}$ is strongly-connected, $\textsf{MP}\big(\textsf{RT}({\cal G}, p)\big)$ does not depend on the initial vertex. \end{definition} For a full-information game ${\cal G}$ and a ratio $r \in (0,1)$, recall that $\textsf{MP}({\cal G}, r)$ denotes the optimal payoff that \text{Max}\xspace can guarantee with initial ratio $r$. We state the equivalences between the two models. \begin{theorem} \label{thm:FI-MP} Let ${\cal G}$ be a strongly-connected full-information mean-payoff bidding game.
\begin{itemize} \item {\bf First-price Richman bidding \cite{AHC19}.} The optimal payoff that \text{Max}\xspace can guarantee with a pure strategy does not depend on the initial ratio: for every initial ratio $r$, we have $\textsf{MP}({\cal G}, r) = \textsf{MP}\big(\textsf{RT}({\cal G}, 0.5)\big)$. \item {\bf First-price poorman bidding \cite{AHI18}.} The optimal payoff that \text{Max}\xspace can guarantee with a pure strategy and ratio $r$ coincides with the value of a random-turn game with bias $r$: for every initial ratio $r$, we have $\textsf{MP}({\cal G}, r) = \textsf{MP}\big(\textsf{RT}({\cal G}, r)\big)$. \item {\bf All-pay poorman bidding \cite{AJZ21}.} The optimal payoff that \text{Max}\xspace can guarantee with a pure strategy and ratio $r > 0.5$ coincides with the value of a random-turn game with bias $(2r-1)/r$: for every initial ratio $r > 0.5$, we have $\textsf{MP}({\cal G}, r) = \textsf{MP}\big(\textsf{RT}({\cal G}, (2r-1)/r)\big)$. \end{itemize} \end{theorem} Since the optimal payoff under first-price Richman bidding depends only on the structure of the game and not on the initial ratios, the result easily generalizes to partial-information games. Consider two budget distributions $\beta$ and $\gamma$ for \text{Max}\xspace and \text{Min}\xspace, respectively. Indeed, when \text{Max}\xspace's initial budget is $B \in \text{supp}(\beta)$, playing optimally against any $C \in \text{supp}(\gamma)$ results in the same payoff, and similarly for \text{Min}\xspace. We thus conclude the following. \begin{theorem} Consider a strongly-connected first-price Richman mean-payoff bidding game ${\cal G}$. For any two budget distributions $\beta$ and $\gamma$ for the two players, we have $\textsf{MP}^\downarrow({\cal G}, \beta, \gamma) = \textsf{MP}^\uparrow({\cal G}, \beta, \gamma) = \textsf{MP}\big(\textsf{RT}({\cal G}, 0.5)\big)$.
\end{theorem} \begin{remark} {\bf (All-pay Richman bidding).} It was shown in \cite{AJZ21} that in all-pay Richman bidding games, pure strategies are ``useless'': no matter what the initial ratio is, \text{Max}\xspace cannot guarantee a positive payoff with a pure strategy. The study of mean-payoff all-pay Richman-bidding games is thus trivial in the partial-information setting as well. \end{remark} \subsection{The value of the partially-informed player} \label{sec:partially-informed} We turn to study partial-information mean-payoff bidding games under poorman bidding, where we focus on one-sided partial information. We arbitrarily set \text{Max}\xspace to be the partially-informed player and \text{Min}\xspace to be the fully-informed player. \subsubsection{First-price bidding.} Fix a strongly-connected mean-payoff game ${\cal G}$. Suppose that \text{Max}\xspace's budget is $B$ and \text{Min}\xspace's budget is chosen from a finite probability distribution $\gamma$ with $\text{supp}(\gamma) = \set{C_1, \ldots, C_n}$ and $C_i < C_{i+1}$, for $1 \leq i < n$. We generalize the technique that is illustrated in Example~\ref{ex:partially-informed}. \text{Max}\xspace carefully chooses increasing $x_1, \ldots, x_n$, where $x_n = B$. He maintains two ``accounts'': a spending account from which he bids and a savings account. Initially, the spending account has a budget of $x_1$ and the savings account a budget of $B-x_1$. \text{Max}\xspace plays ``optimistically'': he first plays with a budget of $x_1$, in the hope that \text{Min}\xspace's budget is $C_1$. If \text{Min}\xspace does not spend $C_1$, the payoff is as in full-information games, namely at least $p_1 = \textsf{MP}\big(\textsf{RT}({\cal G}, \frac{x_1}{x_1 + C_1})\big)$. Otherwise, \text{Min}\xspace spends at least $C_1$ and \text{Max}\xspace transfers budget from his savings account to his spending account so that the savings account has $B-x_2$ and the spending account has at least $x_2 - x_1$.
Note that if \text{Min}\xspace's initial budget was indeed $C_2$, at this point she is left with a budget of at most $C_2 - C_1$. If \text{Min}\xspace does not spend $C_2 - C_1$, by following a full-information optimal strategy, \text{Max}\xspace can guarantee a payoff of at least $p_2 = \textsf{MP}\big(\textsf{RT}({\cal G}, \frac{x_2-x_1}{x_2-x_1 + C_2 - C_1})\big)$. The definition of $p_3,\ldots, p_n$ is similar. \text{Max}\xspace chooses $x_1,\ldots, x_n$ so that $p_1 \geq \ldots \geq p_n$. Thus, when \text{Min}\xspace's initial budget is $C_i$, she has an incentive to play so that \text{Max}\xspace's spending account will reach $x_i$ and the payoff will be at least $p_i$. We call such a choice of $x_1,\ldots,x_n$ {\em admissible} and formally define it as follows. \begin{definition}\label{def:admissible}{\bf Admissible sequences.} Let ${\cal G}$ be a poorman mean-payoff bidding game. Let $B$ be a budget of \text{Max}\xspace and $\gamma$ be a finite budget distribution of \text{Min}\xspace with $\text{supp}(\gamma) =\{C_1,\dots,C_n\}$. A sequence $(x_i)_{1 \leq i \leq n}$ of budgets is called \emph{admissible} with respect to $B$ and $\gamma$ if $0 \leq x_1 \leq x_2 \leq \dots \leq x_n = B$ and $p_1 \geq p_2 \geq \ldots \geq p_n$,~where \begin{equation}\label{eq:xprelation} p_i = \textsf{MP}\Big(\textsf{RT}\Big({\cal G},\frac{x_i-x_{i-1}}{x_i-x_{i-1} + C_i - C_{i-1}}\Big)\Big) \end{equation} for each $1\leq i\leq n$, with $x_0=0$ and $C_0=0$. We denote by $\textsc{Adm}(B,\gamma)$ the set of all admissible sequences with respect to $B$ and $\gamma$. \end{definition} The main result of this section is stated in the following theorem. The upper bound is proven in Lemma~\ref{lem:partially-upper-bound} and the lower bound in Lemma~\ref{lem:partially-lower-bound}. \begin{theorem}[Mean-payoff value of the partially-informed player]\label{thm:maxmeanpayoff} Consider a strongly-connected first-price poorman mean-payoff bidding game ${\cal G}$. 
Let $B$ be the initial budget of \text{Max}\xspace and $\gamma$ be a finite budget distribution of \text{Min}\xspace with $\text{supp}(\gamma) =\{C_1,\dots,C_n\}$. Then \begin{equation}\label{eq:value} \textsf{MP}^\downarrow({\cal G}, B, \gamma) = \max_{(x_i)_{1 \leq i \leq n} \in \textsc{Adm}(B,\gamma)}\, \mathsf{Val}(x_1,\dots,x_n), \end{equation} where \begin{equation}\label{eq:sum} \mathsf{Val}(x_1,\dots,x_n) = \sum_{i=1}^n \gamma(C_i) \cdot \textsf{MP}\Big( \textsf{RT}\Big( {\cal G}, \frac{x_i-x_{i-1}}{x_i-x_{i-1} + C_i - C_{i-1}} \Big) \Big) \end{equation} with $x_0 = 0$ and $C_0 = 0$. \end{theorem} We point to some interesting properties of \text{Max}\xspace's value: \begin{remark} \label{rem:partially} Consider the bowtie game (Fig.~\ref{fig:bowtie}) and assume \text{Max}\xspace's budget is fixed to $B=1$ and \text{Min}\xspace's budget is drawn uniformly at random from $\set{C_1, C_2}$. \begin{itemize} \item When $C_1 = 1$ and $C_2 = 2$, the maximum is obtained at $x= 0.5$, thus \text{Max}\xspace's optimal expected payoff is $\frac{1}{3} = \frac{B}{B+C_2}$. We note that \text{Max}\xspace has a very simple optimal strategy in this case: ``assume the worst'' on \text{Min}\xspace's initial budget. That is, play according to an optimal strategy for initial budgets $B$ and $C_2$. \item When $C_1 = 1$ and $C_2 = 5$, the maximum is obtained at $x=0$. This is the dual of the case above. \text{Max}\xspace can ``assume the best'' on \text{Min}\xspace's initial budget and play according to an optimal strategy for budgets $B$ and $C_1$. When \text{Min}\xspace's budget is $C_1$, this strategy guarantees a payoff of $\frac{B}{B+C_1}$. But when \text{Min}\xspace's budget is $C_2$, the strategy cannot guarantee a payoff above $0$. Thus, the strategy guarantees an expected payoff of $\frac{1}{4} = \frac{1}{2}\cdot\frac{B}{B+C_1}$. \item There are cases in which \text{Max}\xspace's optimal strategy is not one of the trivial cases above.
When $C_1 = 1$ and $C_2 = 3$, \text{Max}\xspace's optimal payoff is $\frac{1}{8}(5-2\cdot\sqrt{2})\approx 0.271$, which is strictly larger than both $\frac{1}{4}=\frac{1}{2}\cdot\frac{B}{B+C_1}$ and $\frac{1}{4}=\frac{B}{B+C_2}$.\hfill$\triangle$ \end{itemize} \end{remark} \begin{definition} We denote the right-hand side of eq.~\eqref{eq:value} by $\mathsf{Val}$. \end{definition} \begin{lemma}[Upper bound] \label{lem:partially-upper-bound} Consider a strongly-connected first-price poorman mean-payoff bidding game ${\cal G}$. Let $B$ be the initial budget of \text{Max}\xspace and $\gamma$ be a finite budget distribution of \text{Min}\xspace with $\text{supp}(\gamma) =\{C_1,\dots,C_n\}$. Then, for every $\epsilon>0$, \text{Max}\xspace has a strategy that guarantees an expected mean-payoff of at least $\mathsf{Val}-\epsilon$. \end{lemma} \begin{proof} Fix $\epsilon>0$. For each $(x_i)_{1 \leq i \leq n} \in \textsc{Adm}(B,\gamma)$, we construct a \text{Max}\xspace strategy $f_{x_1,\dots,x_n}$ that guarantees a payoff of at least $\mathsf{Val}(x_1,\dots,x_n)-\epsilon$ as follows: \begin{itemize} \item \text{Max}\xspace uses portion $x_1$ of his budget to play an $\epsilon$-optimal strategy against \text{Min}\xspace with budget $C_1$. This is continued as long as \text{Min}\xspace spends at most $C_1$. \item For each $1\leq i\leq n-1$, once \text{Min}\xspace's investments exceed $C_i$, \text{Max}\xspace starts using portion $x_{i+1}-x_i$ of his budget and plays according to an $\epsilon$-optimal strategy against budget $C_{i+1}-C_i$ of $\text{Min}\xspace$. This is continued as long as \text{Min}\xspace's investments do not exceed $C_{i+1}$. \end{itemize} Lemma~\ref{lem:partially-upper-bound} follows from Claim~$1$ below, which generalizes the analysis of Example~\ref{ex:partially-informed} to any finite budget distribution of \text{Min}\xspace.
As in the example, it is crucial to select $x_i$ such that \text{Min}\xspace has an incentive to ``reveal'' (when she can) that her budget is larger than $C_i$. Formally, recall that $p_i = \textsf{MP}\big(\textsf{RT}\big({\cal G}, \frac{x_i-x_{i-1}}{x_i-x_{i-1} + C_i - C_{i-1}}\big)\big)$. Intuitively, $p_i$ can be thought of as the payoff when \text{Max}\xspace plays according to the strategy above and \text{Min}\xspace's budget is $C_i$. Then, we require that $p_1 \geq \dots \geq p_n$. \noindent{\bf Claim 1.} For each $(x_i)_{1 \leq i \leq n} \in \textsc{Adm}(B,\gamma)$, \text{Max}\xspace ensures a payoff of at least $\mathsf{Val}(x_1,\dots,x_n)-\epsilon$ by playing according to the strategy $f_{x_1,\dots,x_n}$. To prove Claim 1, fix a strategy $g$ of \text{Min}\xspace and consider $\textsf{play}(f_{x_1,\dots,x_n},g)$. Denote by $c$ the total budget that \text{Min}\xspace loses during the course of the play, and let $1\leq i\leq n$ be such that $C_{i-1}< c\leq C_i$. Then, by the construction of $f_{x_1,\dots,x_n}$, the payoff of the play is at least $p_i-\epsilon$. Since we assumed that $p_1\geq p_2\geq \dots \geq p_n$, and since \text{Min}\xspace can never lose more than her initial budget, it follows that the payoff of $\textsf{play}(f_{x_1,\dots,x_n},g)$ is at least $p_i-\epsilon$ whenever $\text{Min}\xspace$'s initial budget is $C_i$. Therefore, as the probability of \text{Min}\xspace's initial budget being $C_i$ is $\gamma(C_i)$, we conclude that the expected payoff of $\textsf{play}(f_{x_1,\dots,x_n},g)$ is at least $\mathsf{Val}(x_1,\dots,x_n)-\epsilon$. Since the strategy $g$ of $\text{Min}\xspace$ was arbitrary, Claim~1 follows. \end{proof} Recall that \text{Max}\xspace guarantees an expected payoff of $v \in \mathbb{R}$ if, intuitively, he can reveal the strategy that he plays according to and, no matter how \text{Min}\xspace responds, the expected payoff is at least $v$.
Thus, in order to show a lower bound on \text{Max}\xspace's value, we show that no matter which strategy \text{Max}\xspace chooses, \text{Min}\xspace can respond in a way that guarantees a payoff of at most $\mathsf{Val} + \epsilon$. Formally, we have the following. \begin{lemma}[Lower bound]\label{lem:partially-lower-bound} Given $\epsilon>0$ and a strategy $f$ of \text{Max}\xspace, there exist strategies $g_1\in \S(C_1),\dots,g_n\in \S(C_n)$ of \text{Min}\xspace such that $\sum_{i=1}^n \gamma(C_i)\cdot \textsf{payoff}(f,g_i) \leq \mathsf{Val} + \epsilon$. \end{lemma} \begin{proof} Let $\epsilon > 0$ and suppose that \text{Max}\xspace plays according to a strategy $f$. As a response, for each $1 \leq i \leq n$, when \text{Min}\xspace's initial budget is $C_i$, she selects an $\epsilon$-optimal response strategy $g_i \in \S_2(C_i)$ against $f$. We show that the choice of $g_1,\ldots, g_n$ satisfies the claim. Intuitively, we find an admissible sequence $x_1, \ldots, x_n$ and a corresponding ``wallet-based'' strategy $f_{x_1,\dots,x_n}$ as constructed in the proof of Lemma~\ref{lem:partially-upper-bound}, and show that $f_{x_1,\dots,x_n}$ achieves a payoff no worse than $f$ against $g_1,\ldots, g_n$. The proof follows since \[ \sum_{i=1}^n \gamma(C_i)\cdot \textsf{payoff}(f,g_i) \leq \mathsf{Val}(x_1,\dots, x_n)+\epsilon \leq \mathsf{Val} + \epsilon. \] To construct the admissible sequence $(x_i)_{1 \leq i \leq n} \in \textsc{Adm}(B,\gamma)$, we set $x_n=B$ and define the remaining $x_i$'s as follows. Let $p_i = \textsf{payoff}(f,g_i)$ for each $1\leq i\leq n$. Since $C_1 < \dots < C_n$, we have $p_1\geq \dots \geq p_n$. By Theorem~\ref{thm:FI-MP}, we have $\textsf{MP}(\textsf{RT}({\cal G}, 0)) \leq p_n \leq \textsf{MP}(\textsf{RT}({\cal G}, \frac{B}{B+C_n}))$. On the other hand, it is known~\cite{Cha12,Sol03} that the value $\textsf{MP}(\textsf{RT}({\cal G}, p))$ is a continuous function in $p$.
Hence, there exists $B\cdot\frac{C_{n-1}}{C_n} \leq x \leq B$ such that $p_n = \textsf{MP}(\textsf{RT}({\cal G},\frac{B-x}{B-x+C_n-C_{n-1}}))$. We set $x_{n-1}$ to be the largest such~$x$. We claim that, when the initial budget of \text{Min}\xspace is $C_{n-1}$, \text{Max}\xspace does not spend more than $x_{n-1}$ in $\textsf{play}(f,g_{n-1})$. Indeed, suppose towards contradiction that $\text{Max}\xspace$ spends $x'>x_{n-1}$ while playing against $g_{n-1}$. Then, if the initial budget of \text{Min}\xspace was $C_n$ and \text{Min}\xspace used portion $C_{n-1}$ of her budget to play according to $g_{n-1}$, \text{Max}\xspace would eventually be left with a budget of $B-x'<B-x_{n-1}$ and \text{Min}\xspace would be left with at least $C_n-C_{n-1}$. Thus, \text{Min}\xspace could play optimally for an initial budget of at least $C_n-C_{n-1}$ against a \text{Max}\xspace budget smaller than $B-x'$, to ensure a payoff of at most $\textsf{MP}(\textsf{RT}({\cal G},\frac{B-x'}{B-x'+C_n-C_{n-1}})) \leq \textsf{MP}(\textsf{RT}({\cal G},\frac{B-x_{n-1}}{B-x_{n-1}+C_n-C_{n-1}})) = p_n$. This would contradict either the optimality of $g_n$ in the case of strict inequality, or the maximality of $x_{n-1}$ in the case of equality. We thus conclude that \text{Max}\xspace spends at most $x_{n-1}$ in $\textsf{play}(f,g_{n-1})$. Next, we define $x_{n-2}$. Note that the fact that \text{Max}\xspace spends at most $x_{n-1}$ in $\textsf{play}(f,g_{n-1})$ also implies that $\textsf{MP}(\textsf{RT}({\cal G}, 0)) \leq p_{n-1}\leq \textsf{MP}(\textsf{RT}({\cal G},\frac{x_{n-1}}{x_{n-1}+C_{n-1}}))$. Thus, as $\textsf{MP}(\textsf{RT}({\cal G}, p))$ is continuous in $p$, there exists $x_{n-1}\cdot \frac{C_{n-2}}{C_{n-1}} \leq x\leq x_{n-1}$ with $p_{n-1} = \textsf{MP}(\textsf{RT}({\cal G},\frac{x_{n-1} - x}{x_{n-1}-x+C_{n-1}-C_{n-2}}))$. Set $x_{n-2}$ to be the largest such $x$. Then, the same argument as above shows that \text{Max}\xspace does not lose more than $x_{n-2}$ in $\textsf{play}(f,g_{n-2})$.
We may then inductively repeat this procedure in order to define $x_{n-3},\dots,x_1$. Note that this results in a sequence $0\leq x_1\leq x_2\leq \dots\leq x_n=B$ which by construction satisfies eq.~\eqref{eq:xprelation} for each $1\leq i\leq n$. Since we already showed that $p_1\geq \dots\geq p_n$, it follows that $(x_i)_{1 \leq i \leq n} \in \textsc{Adm}(B,\gamma)$. \end{proof} \subsubsection{All-pay poorman bidding.} We extend the technique in the previous section to all-pay poorman bidding. In order to state our results formally, we need to redefine the notion of admissible sequences, since the optimal payoff that \text{Max}\xspace can guarantee under all-pay bidding differs from the payoff that he can guarantee under first-price bidding. Analogously to Def.~\ref{def:admissible}, but now under all-pay bidding, a sequence $(x_i)_{1 \leq i \leq n}$ of budgets is called \emph{admissible} with respect to a budget $B$ of \text{Max}\xspace and a budget distribution $\gamma$ of \text{Min}\xspace if $0 \leq x_1 \leq x_2 \leq \dots \leq x_n = B$ and $p_1 \geq p_2 \geq \ldots \geq p_n$,~where now \begin{equation*} p_i = \textsf{MP}\Big(\textsf{RT}\Big({\cal G},\Big(1-\frac{C_i - C_{i-1}}{x_i-x_{i-1}}\Big)\cdot\mathbb{I}\Big(x_i-x_{i-1} > C_i - C_{i-1}\Big)\Big)\Big) \end{equation*} for each $1\leq i\leq n$, with $x_0=0$ and $C_0=0$. Here, $\mathbb{I}$ is the indicator function, which evaluates to $1$ if its argument is true and to $0$ otherwise. We are now ready to state our result on all-pay poorman mean-payoff bidding games. \begin{theorem}[Mean-payoff value of the partially-informed player]\label{thm:APmaxmeanpayoff} Consider a strongly-connected all-pay poorman mean-payoff bidding game ${\cal G}$. Let $B$ be the initial budget of \text{Max}\xspace and $\gamma$ be a finite budget distribution of \text{Min}\xspace with $\text{supp}(\gamma) =\{C_1,\dots,C_n\}$.
Then \begin{equation}\label{eq:APvalue} \textsf{MP}^\downarrow({\cal G}, B, \gamma) = \max_{(x_i)_{1 \leq i \leq n} \in \textsc{Adm}(B,\gamma)}\, \mathsf{Val}(x_1,\dots,x_n), \end{equation} where \begin{equation*} \mathsf{Val}(x_1,\dots,x_n) = \sum_{i=1}^n \gamma(C_i) \cdot \textsf{MP}\Big( \textsf{RT}\Big( {\cal G}, \Big(1-\frac{C_i - C_{i-1}}{x_i-x_{i-1}}\Big)\cdot\mathbb{I}\Big(x_i-x_{i-1} > C_i - C_{i-1}\Big) \Big) \Big) \end{equation*} with $x_0 = 0$ and $C_0 = 0$, and with $\mathbb{I}$ the indicator function defined above. \end{theorem} We introduce the following notation: \begin{definition} We denote the right-hand side of eq.~\eqref{eq:APvalue} by $\mathsf{Val}$. \end{definition} The proof of the upper bound is similar to the proof for first-price poorman bidding; we include it for completeness. \begin{lemma}[Upper bound] \label{lem:partially-upper-bound-allpay} Consider a strongly-connected all-pay poorman mean-payoff bidding game ${\cal G}$. Let $B$ be the initial budget of \text{Max}\xspace and $\gamma$ be a finite budget distribution of \text{Min}\xspace with $\text{supp}(\gamma) =\{C_1,\dots,C_n\}$. Then, for every $\epsilon>0$, \text{Max}\xspace has a strategy that guarantees an expected mean-payoff of at least $\mathsf{Val}-\epsilon$. \end{lemma} \begin{proof} Fix $\epsilon>0$. For each $(x_i)_{1 \leq i \leq n} \in \textsc{Adm}(B,\gamma)$, we construct a \text{Max}\xspace strategy $f_{x_1,\dots,x_n}$ that guarantees a payoff of at least $\mathsf{Val}(x_1,\dots,x_n)-\epsilon$ as follows: \begin{itemize} \item \text{Max}\xspace uses portion $x_1$ of his budget to play an $\epsilon$-optimal strategy against \text{Min}\xspace with budget $C_1$. This is continued as long as \text{Min}\xspace spends at most $C_1$.
\item For each $1\leq i\leq n-1$, once \text{Min}\xspace's investments exceed $C_i$, \text{Max}\xspace starts using portion $x_{i+1}-x_i$ of his budget and plays according to an $\epsilon$-optimal strategy against budget $C_{i+1}-C_i$ of $\text{Min}\xspace$. This is continued as long as \text{Min}\xspace's investments do not exceed $C_{i+1}$. \end{itemize} Lemma~\ref{lem:partially-upper-bound-allpay} follows immediately from Claim~$1$ below. Recall that, for all-pay poorman mean-payoff bidding games, we defined $p_i = \textsf{MP}(\textsf{RT}({\cal G},(1-\frac{C_i - C_{i-1}}{x_i-x_{i-1}})\cdot\mathbb{I}(x_i-x_{i-1} > C_i - C_{i-1})))$. \noindent{\bf Claim 1.} For each $(x_i)_{1 \leq i \leq n} \in \textsc{Adm}(B,\gamma)$, \text{Max}\xspace ensures a payoff of at least $\mathsf{Val}(x_1,\dots,x_n)-\epsilon$ by playing according to the strategy $f_{x_1,\dots,x_n}$. To prove Claim 1, fix a strategy $g$ of \text{Min}\xspace and consider $\textsf{play}(f_{x_1,\dots,x_n},g)$. Denote by $c$ the total budget that \text{Min}\xspace loses during the course of the play, and let $1\leq i\leq n$ be such that $C_{i-1}< c\leq C_i$. Then, by the construction of $f_{x_1,\dots,x_n}$, the payoff of the play is at least $p_i-\epsilon$. Since we assumed that $p_1\geq p_2\geq \dots \geq p_n$, and since \text{Min}\xspace can never lose more than her initial budget, it follows that the payoff of $\textsf{play}(f_{x_1,\dots,x_n},g)$ is at least $p_i-\epsilon$ whenever $\text{Min}\xspace$'s initial budget is $C_i$. Therefore, as the probability of \text{Min}\xspace's initial budget being $C_i$ is $\gamma(C_i)$, we conclude that the expected payoff of $\textsf{play}(f_{x_1,\dots,x_n},g)$ is at least $\mathsf{Val}(x_1,\dots,x_n)-\epsilon$. Since the strategy $g$ of $\text{Min}\xspace$ was arbitrary, Claim~1 follows. \end{proof} The proof of the lower bound is also similar to the proof for first-price poorman bidding, but it requires care.
In particular, if the initial budget of \text{Min}\xspace is $C_i>B$, then, according to Theorem~\ref{thm:FI-MP}, \text{Min}\xspace can guarantee an arbitrarily small payoff against any strategy of \text{Max}\xspace. We need to take this into account when constructing an admissible sequence $x_1, \ldots, x_n$ and a corresponding ``wallet-based'' strategy $f_{x_1,\dots,x_n}$. \begin{lemma}[Lower bound]\label{lem:partially-lower-bound-allpay} Given $\epsilon>0$ and a strategy $f$ of \text{Max}\xspace, there exist strategies $g_1\in \S(C_1),\dots,g_n\in \S(C_n)$ of \text{Min}\xspace such that $\sum_{i=1}^n \gamma(C_i)\cdot \textsf{payoff}(f,g_i) \leq \mathsf{Val} + \epsilon$. \end{lemma} \begin{proof} Let $\epsilon > 0$ and suppose that \text{Max}\xspace plays according to a strategy $f$. As a response, for each $1 \leq i \leq n$, when her initial budget is $C_i$, \text{Min}\xspace selects a strategy $g_i \in \S_2(C_i)$ that is $\epsilon$-optimal against $f$. We show that the choice of $g_1,\ldots, g_n$ satisfies the claim. First, if $B<C_i$ for each $1\leq i\leq n$, then from eq.~\eqref{eq:APvalue} and the definition of $\mathsf{Val}(x_1,\dots,x_n)$ we see that $\mathsf{Val}=0$. On the other hand, $\text{Max}\xspace$ cannot guarantee any payoff better than $0$ against any possible budget of \text{Min}\xspace; thus, for each $i$ and for each strategy of $\text{Max}\xspace$ there exists a response strategy of \text{Min}\xspace that ensures a payoff of at most $\epsilon$. Therefore, by our choice of $g_1,\dots,g_n$ we deduce that $\sum_{i=1}^n \gamma(C_i)\cdot \textsf{payoff}(f,g_i)\leq \sum_{i=1}^n \gamma(C_i)\cdot \epsilon = \epsilon = \mathsf{Val} + \epsilon$, as desired. Now, assume that there exists some $C_i<B$ and let $i^{\ast}$ be the largest such index.
To prove that our choice of $g_1,\ldots, g_n$ satisfies the claim, we find an admissible sequence $x_1, \ldots, x_n$ and a corresponding ``wallet-based'' strategy $f_{x_1,\dots,x_n}$ as constructed in the proof of Lemma~\ref{lem:partially-upper-bound-allpay}, and show that $f_{x_1,\dots,x_n}$ achieves a payoff no worse than $f$ against $g_1,\ldots, g_n$. The proof follows since \[ \sum_{i=1}^n \gamma(C_i)\cdot \textsf{payoff}(f,g_i) \leq \mathsf{Val}(x_1,\dots, x_n)+\epsilon \leq \mathsf{Val} + \epsilon. \] \end{proof} \subsection{The mean-payoff value of the fully-informed player under first-price poorman bidding} \label{sec:fully} In this section we identify the optimal expected payoff that the fully-informed player can guarantee in the bowtie game (Fig.~\ref{fig:bowtie}) under first-price bidding. Suppose that \text{Max}\xspace's initial budget is $B$ and \text{Min}\xspace's initial budget is drawn from a distribution $\gamma$. Consider the following collection of naive strategies for \text{Min}\xspace: when her initial budget is $C \in \text{supp}(\gamma)$, \text{Min}\xspace plays according to an optimal full-information strategy for the ratio $\frac{B}{B+C}$. We find it surprising that this collection of strategies is optimal for \text{Min}\xspace in the bowtie game. The technical challenge in this section is the lower bound. This result complements Thm.~\ref{thm:maxmeanpayoff}: we characterize both \text{Min}\xspace's and \text{Max}\xspace's values in the bowtie game when the players are restricted to use pure strategies. We show, somewhat unexpectedly, that the two values do not necessarily coincide. In order to state the result formally, we need the following definition. Intuitively, the {\em potential} of $\zug{B, \gamma}$ is the optimal expected payoff when \text{Min}\xspace plays according to the collection of naive strategies described above.
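When \text{Min}\xspace plays this naive collection and, as in the examples above, the mean-payoff value of the bowtie game is identified with \text{Max}\xspace's budget ratio, the resulting expectation is the $\gamma$-weighted average of the ratios $\frac{B}{B+C}$. A minimal numerical sketch (the helper name is ours, not from the paper):

```python
from fractions import Fraction

def potential(B, gamma):
    # Expected payoff of the naive collection: when Min's budget is C she
    # plays a full-information optimal strategy, yielding payoff B / (B + C);
    # gamma maps each budget C in the support to its probability.
    return sum(p * Fraction(B, B + C) for C, p in gamma.items())

# Bowtie game with B = 1 and Min's budget drawn uniformly from {1, 2}.
half = Fraction(1, 2)
print(potential(1, {1: half, 2: half}))  # 5/12
```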
\begin{definition} {\bf (Potential).} Given a budget $B \in \mathbb{R}$ of \text{Max}\xspace and a budget distribution $\gamma$ with support $\text{supp}(\gamma)=\{C_1,C_2,\dots,C_k\}$ of \text{Min}\xspace, we define $\Genpot{B}{\gamma} =\sum_{j=1}^k \gamma(C_j) \cdot \frac{B}{B+ C_j}$. \end{definition} \smallskip The main result in this section is given in the following theorem, whose proof follows from Lemmas~\ref{lemma:up} and~\ref{lemma:low}. \begin{theorem}[Mean-payoff value of the fully-informed player]\label{thm:fully-informed} Consider the bowtie game ${\cal G}_{\bowtie}$. Let $B$ be the initial budget of \text{Max}\xspace and $\gamma$ be a finite budget distribution of \text{Min}\xspace with $\text{supp}(\gamma) =\{C_1,C_2,\dots,C_k\}$. Then, \begin{equation}\label{eq:valueIMPERFECT} \textsf{MP}^\uparrow({\cal G}_{\bowtie}, B, \gamma) = \Genpot{B}{\gamma} = \sum_{j=1}^k \gamma(C_j) \cdot \frac{B}{B + C_j}. \end{equation} \end{theorem} Before proving the theorem, we note the following. \begin{remark} {\bf (Inexistence of a value).} Our result implies that the value in partial-information mean-payoff first-price poorman bidding games under pure strategies is not guaranteed to exist. Indeed, consider ${\cal G}_{\bowtie}$ with $B = 1$ and $\gamma$ that draws \text{Min}\xspace's budget uniformly at random from $\set{1,2}$. By Thm.~\ref{thm:maxmeanpayoff}, one can verify that the optimal choice of $x$ is $\frac{1}{2}$, thus $\textsf{MP}^\downarrow({\cal G}_{\bowtie}, B, \gamma) = \frac{1}{3}$. On the other hand, by Thm.~\ref{thm:fully-informed}, we have $\textsf{MP}^\uparrow({\cal G}_{\bowtie}, B, \gamma) = \frac{5}{12}$. \hfill$\triangle$ \end{remark} The upper bound is obtained when \text{Min}\xspace reveals her true budget immediately and plays according to the strategies described above. The following lemma follows from results on full-information games (Thm.~\ref{thm:FI-MP}).
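The two values in the remark above can also be checked numerically. The sketch below is ours, not part of the paper's formalism; it assumes, as in the surrounding examples on the bowtie game, that $\textsf{MP}\big(\textsf{RT}({\cal G}_{\bowtie},r)\big)=r$, and maximizes $\mathsf{Val}(x_1,x_2)$ over admissible sequences with $x_2=B$ on a grid:

```python
# Bowtie game with B = 1 and Min's budget uniform on {1, 2}; we identify
# MP(RT(G, r)) with the ratio r, as in the surrounding examples.
B, C1, C2 = 1.0, 1.0, 2.0

def val(x1):
    # Expected payoff Val(x1, x2) of the wallet strategy with x2 = B.
    p1 = x1 / (x1 + C1)                 # payoff when Min's budget is C1
    p2 = (B - x1) / (B - x1 + C2 - C1)  # payoff when Min's budget is C2
    if p1 < p2:                         # sequence not admissible
        return float("-inf")
    return 0.5 * p1 + 0.5 * p2

mp_down = max(val(i / 1000) for i in range(1001))  # maximum at x1 = 1/2
mp_up = 0.5 * B / (B + C1) + 0.5 * B / (B + C2)    # the potential
print(round(mp_down, 4), round(mp_up, 4))  # 0.3333 0.4167
```

The gap $\frac{1}{3} < \frac{5}{12}$ is exactly the inexistence of a value claimed in the remark.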
\begin{lemma}[Upper bound]\label{lemma:up} For every $\epsilon>0$, \text{Min}\xspace has a collection of strategies ensuring an expected payoff smaller than $\Genpot{B}{\gamma} + \epsilon$. \end{lemma} We proceed to the more challenging lower bound and show that there are no \text{Min}\xspace strategies that perform better than the naive strategy above. \begin{lemma}[Lower bound]\label{lemma:low} For every $\epsilon>0$ and for every collection $(g_j \in \S_\text{Min}\xspace(C_j))_{1 \leq j \leq k}$ of \text{Min}\xspace strategies, \text{Max}\xspace has a strategy ensuring an expected payoff greater than $\Genpot{B}{\gamma} - \epsilon$. \end{lemma} \begin{proof} Let $\epsilon > 0$, and let $(g_j \in \S_\text{Min}\xspace(C_j))_{1 \leq j \leq k}$ be a collection of \text{Min}\xspace strategies. We construct a counter strategy $f$ of \text{Max}\xspace ensuring an expected payoff greater than $\Genpot{B}{\gamma} - \epsilon$. The proof is by induction over the size $k$ of the support of $\gamma$. Obviously, if $k=1$, \text{Max}\xspace has perfect information and can follow a full-information optimal strategy to guarantee a payoff of $\Genpot{B}{\gamma} = \frac{B}{B+C_1}$ (Thm.~\ref{thm:FI-MP}). So suppose that $k>1$, and that the statement holds for every budget distribution of \text{Min}\xspace with a support strictly smaller than $k$.
\text{Max}\xspace carefully chooses a small part $x \leq B$ of his budget and a part $y \leq C_1$ of \text{Min}\xspace's budget. He plays according to a full-information strategy $f$ for initial budgets $x$ and $y$. This can result in three possible outcomes: {\bf (O$_1$)} \text{Min}\xspace never uses more than $y$: the payoff is $\frac{x}{x+y}$ as in full-information games; {\bf (O$_2$)} \text{Min}\xspace reveals her true initial budget, thus \text{Max}\xspace can distinguish between the case that \text{Min}\xspace's budget is $C_i$ and $C_j$, and by the induction hypothesis he can ensure an expected payoff of $\Genpot{B-x}{\gamma}$ using his remaining budget; {\bf (O$_3$)} \text{Min}\xspace does not reveal her true initial budget and spends more than $y$: \text{Max}\xspace's leftover budget is greater than $B-x$ and, for $1 \leq j \leq k$, when \text{Min}\xspace's budget is $C_j$, she has $C_j-y$, and \text{Max}\xspace re-starts the loop by selecting a new $x$. We show that \text{Max}\xspace can choose $x$ and $y$ in a way that guarantees that the payoffs obtained in the first two outcomes are greater than the desired payoff $\Genpot{B}{\gamma} - \epsilon$. Also, outcome O$_3$ can occur only finitely many times and the potential there does not decrease. Thus, O$_1$ or O$_2$ occur, ensuring a payoff of at least $\Genpot{B}{\gamma} -\epsilon$. Formally, we describe a sequence $(\pi_i,B_i,\gamma_i)_{0 \leq i \leq m}$ of configurations, each comprising a history $\pi_i$ consistent with every strategy $(g_j)_{1 \leq j \leq k}$, the budget $B_i$ of \text{Max}\xspace after $\pi_i$, and the budget distribution $\gamma_i$ of \text{Min}\xspace with $\text{supp}(\gamma_i) = \{C_1^i,C_2^i, \ldots, C_k^i\}$ following $\pi_i$. Tuple~$i$ represents the budget and budget distribution of the players following $i-1$ choices of outcome O$_3$. Let $\lambda = 1 - \frac{\epsilon}{2}$ and $\rho = \frac{1}{\Genpot{B}{\gamma}} -1$. We start with $(\pi_0,B_0,\gamma_0) = (v,B,\gamma)$ with $v$ an initial vertex, and we show recursively how Max can update this tuple while ensuring that the following four properties are satisfied: \begin{itemize}[noitemsep,topsep=0pt] \item[P$_1$:] The history $\pi_i$ is consistent with every $(g_j)_{1 \leq j \leq k}$; \item[P$_2$:] Max spends his budget sufficiently slowly: $B_i \geq \lambda^i B$; \item[P$_3$:] Min spends her budget sufficiently fast: $C_{j}^{i} \leq C_j - \rho \cdot (1-\lambda^i) B$ for every $1 \leq j \leq k$; \item[P$_4$:] The potential never decreases: $\Genpot{B_i}{\gamma_i} \geq \Genpot{B}{\gamma}$. \end{itemize} Note that for the initial tuple $(\pi_0,B_0,\gamma_0) = (v,B,\gamma)$, these are trivially satisfied.
Moreover, Property P$_3$ implies an upper bound on $i$, that is, outcome O$_3$ can happen only finitely many times: \begin{align*} \lim_{i \rightarrow \infty} C_{1}^{i} &\leq \lim_{i \rightarrow \infty} C_1 -\rho \cdot (1-\lambda^i) B = C_1 - \rho \cdot B = B+ C_1 - \frac{B}{\Genpot{B}{\gamma}}\\ &= \frac{1}{\frac{1}{B+C_1}} - \frac{1}{ \sum_{j=1}^k \gamma(C_j) \cdot \frac{1}{B + C_j}}, \end{align*} which is negative since $C_1<C_2<\ldots<C_k$, yet a negative $C^i_1$ would mean that \text{Min}\xspace illegally bids more than her available budget. We now define the choices $x_i$ and $y_i$ for each $i \in \mathbb{N}$, and show that they satisfy the properties described above. Let $x_i = \frac{\epsilon}{2} \cdot \lambda^{i} B$ and $y_i = \rho \cdot x_i$. For initial budgets $x_i$ and $y_i$, let $f_i$ be a full-information \text{Max}\xspace strategy whose payoff is greater than $\frac{x_i}{x_i + y_i} - \epsilon$. \text{Max}\xspace follows $f_i$ as long as \text{Min}\xspace spends at most $y_i$. Let $(\psi_j)_{1 \leq j \leq k}$ be plays such that for each $1 \leq j \leq k$: \begin{itemize}[noitemsep,topsep=0pt] \item the play $\pi_i\psi_j$ is consistent with the strategy $g_j$; \item Max plays according to $f_i$ along $\psi_j$; \item $\psi_j$ stops when Min uses more than $y_i$, and is infinite if she never does.
\end{itemize} We consider three possible cases, depending on whether the paths $\psi_j$ are finite or infinite, and whether they are distinct or identical. If they are all infinite, or there are at least two distinct ones, we show that Max immediately has a way to obtain the desired payoff. If they are all identical and finite, we show that, while Max cannot immediately get the desired payoff, he can go to the next step by setting $\phi_{i+1} = \phi_i \psi_1$, and restarting. \noindent {\bf 1. The play $\psi_j$ is infinite for every $1 \leq j \leq k$.} \noindent This situation happens if Min does not spend more than $y_i$. Since \text{Max}\xspace follows the strategy $f_i$ along each $\psi_j$, the resulting payoff is greater than $\frac{x_i}{x_i + y_i} - \epsilon$. Moreover, the definition of $y_i$ implies: \[ \frac{x_i}{x_i+y_i} = \frac{x_i}{x_i+\rho \cdot x_i} = \frac{x_i}{x_i+(\frac{1}{\Genpot{B}{\gamma}} - 1)x_i} = \Genpot{B}{\gamma}. \] \noindent {\bf 2. The plays $(\psi_j)_{1 \leq j \leq k}$ are not all identical.} \noindent Let $P_1,P_2,\ldots,P_m$ be the partition of $\{1,2,\ldots,k\}$ such that for every pair $1 \leq j,j' \leq k$, the plays $\psi_j$ and $\psi_{j'}$ are equal if and only if $j$ and $j'$ belong to the same $P_\ell$. Remark that $m \geq 2$ since by supposition the plays $(\psi_j)_{1 \leq j \leq k}$ are not all identical. We show that \text{Max}\xspace can follow some $\psi_j$ until he identifies precisely which $P_\ell$ corresponds to the initial budget of \text{Min}\xspace, which allows us to apply the induction hypothesis, and to show that \text{Max}\xspace can guarantee the desired payoff. For each $1 \leq \ell \leq m$, the plays $(\psi_j)_{j \in P_\ell}$ are equal by definition, and we denote this play by~$\chi_\ell$. 
We start by trimming the infinite plays into finite plays that still allow \text{Max}\xspace to determine the adequate~$P_\ell$: for every $1 \leq \ell \leq m$, let $\chi_\ell'$ be a finite prefix of $\chi_\ell$ that is only consistent with the strategies $g_j$ of \text{Min}\xspace satisfying $j \in P_\ell$ (note that if the play $\chi_\ell$ is already finite, we can set $\chi_\ell' = \chi_\ell$). Remark that the play $\chi_\ell'$ occurs with probability exactly $\sum_{j \in P_{\ell}} \gamma_i(C_j^i)$, which we denote by~$\gamma(P_\ell)$. After the play~$\pi_i \chi_\ell'$, the remaining budget of Max is greater than $B_i - x_i$. Moreover, since this play is only consistent with the strategies $g_{j}$ of \text{Min}\xspace satisfying $j \in P_\ell$, Max knows that the current distribution of budgets of \text{Min}\xspace is the function $\gamma_{i.\ell}$ defined~by \[ \gamma_{i.\ell}(C^i_j-y) = \frac{\gamma_i(C^i_j)}{\gamma(P_\ell)}, \] where $j \in P_{\ell}$ and $y$ denotes the budget spent by \text{Min}\xspace along $\chi_\ell'$. Since $|P_\ell|<k$, the induction hypothesis implies that from this point \text{Max}\xspace can guarantee an expected payoff greater than~$\Genpot{B_i - x_i}{\gamma_{i.\ell}}-\frac{\epsilon}{2}$.
This holds for every $1 \leq \ell \leq m$, therefore Max can globally guarantee an expected payoff greater than \begin{equation*} \begin{split} &\sum_{\ell = 1}^m \gamma(P_\ell) \cdot \Genpot{B_i-x_i}{\gamma_{i.\ell}} -\frac{\epsilon}{2} \\ &= \sum_{\ell = 1}^m \gamma(P_\ell) \cdot \sum_{j \in P_{\ell}} \gamma_{i.\ell}(C_j^i-y) \frac{B_i-x_i}{B_i-x_i + C_j^i-y} -\frac{\epsilon}{2}\\ &\geq \sum_{\ell = 1}^m \gamma(P_\ell) \cdot \sum_{j \in P_{\ell}} \frac{\gamma_{i}(C_j^i)}{\gamma(P_\ell)} \frac{B_i-x_i}{B_i-x_i + C_j^i}-\frac{\epsilon}{2} \\ &= \sum_{\ell = 1}^m \sum_{j \in P_{\ell}} \gamma_{i}(C_j^i) \frac{B_i-x_i}{B_i-x_i + C_j^i}-\frac{\epsilon}{2}\\ &= \Genpot{B_i-x_i}{\gamma_i}-\frac{\epsilon}{2}. \end{split} \end{equation*} To conclude, we show that $\Genpot{B_i-x_i}{\gamma_i} \geq \Genpot{B}{\gamma} - \frac{\epsilon}{2}$. For all $1 \leq j \leq k$, the definition of $x_i$ and Property P$_2$ imply \begin{align*} \frac{B_i}{B_i+C_j^i} - \frac{B_i-x_i}{B_i-x_i+C_j^i} = \frac{C_j^ix_i}{(B_i+C_j^i)(B_i+C_j^i-x_i)} \leq \frac{x_i}{B_i} \leq \frac{\epsilon}{2}. \end{align*} Therefore $\Genpot{B_i-x_i}{\gamma_i} \geq \Genpot{B_i}{\gamma_i} - \frac{\epsilon}{2}$, which translates to $\Genpot{B_i-x_i}{\gamma_i} \geq \Genpot{B}{\gamma} - \frac{\epsilon}{2}$ by Property P$_4$.\\ \noindent {\bf 3. The plays $(\psi_j)_{1 \leq j \leq k}$ are identical and finite.} \noindent If the $\psi_j$ are all equal to a finite play $\psi$, then we define $\pi_{i+1}$ as the concatenation of $\pi_i$ and $\psi$. The budget $B_{i+1}$ is obtained by subtracting from $B_i$ the budget spent by \text{Max}\xspace along $\psi$. Moreover, for every $1 \leq j \leq k$, the distribution $\gamma_{i+1}$ maps the budget $C_j^{i+1}$, obtained by subtracting from $C_j^{i}$ the budget spent by \text{Min}\xspace along $\psi$, to the probability $\gamma_{i}(C_j^{i}) = \gamma(C_j) \in [0,1]$. We show that the configuration $(\pi_{i+1},B_{i+1}, \gamma_{i+1})$ satisfies properties P$_1$-P$_4$.
First, Property P$_1$ holds as $\phi_i \psi$ is consistent with every $g_j$. Second, since Max follows the strategy $f_i$ along $\psi$, he does not spend more than $x_i$. Therefore, since his budget $B_i$ after $\phi_i$ satisfies Property P$_2$, so does his budget $B_{i+1}$ after $\phi_i \psi$: \[ B_{i+1} \geq B_i - x_i \geq \lambda^iB - \frac{\epsilon}{2} \lambda^iB = (1-\frac{\epsilon}{2})\lambda^i B = \lambda^{i+1}B. \] Moreover, since Min needs to use more than $y_i$ in order for $\psi$ to stop, we can also conclude P$_3$: \begin{align*} C_j^{i+1} &\leq C_j^{i} - y_i \leq C_j - \rho (1-\lambda^i)B - \rho \frac{\epsilon}{2} \lambda^iB = C_j - \rho (1-\lambda^{i+1})B. \end{align*} Finally, we obtain Property P$_4$ as a consequence of Properties P$_2$ and P$_3$. Let $x$ denote the overapproximation $(1-\lambda^{i+1})B$ of the budget spent by Max since the start of the game, and let $y$ denote the underapproximation $\rho \cdot (1-\lambda^{i+1})B$ of the budget spent by Min since the start of the game. Then \begin{equation*} \begin{split} &\Genpot{B_{i+1}}{\gamma_{i+1}} = \sum_{j=1}^k \gamma(C_j) \frac{B_{i+1}}{B_{i+1} + C_j^{i+1}} \geq \ \sum_{j=1}^k \gamma(C_j) \frac{B - x}{B -x + C_j - y} \\ &= \sum_{j=1}^k \gamma(C_j) \frac{B}{B+C_j} \cdot \frac{B - x}{B - \frac{B}{B+C_j} (x+y)} = \sum_{j=1}^k \gamma(C_j) f\Big(\frac{B}{B+C_j}\Big), \end{split} \end{equation*} where $f$ is the function mapping $\lambda \in \mathbb{R}$ to $\lambda \cdot \frac{B-x}{B-\lambda (x+y)}$. 
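Both facts used in the Jensen step can be sanity-checked numerically: $f$ is convex on $[0,1]$ (whenever $x+y<B$, so that its denominator stays positive), and $P\cdot(x+y)=x$ implies $f(P)=P$. The following sketch uses hypothetical budget values.

```python
# Sanity check of f(lam) = lam * (B - x) / (B - lam * (x + y)).
# B, x, y are hypothetical budget values with x + y < B.
B, x, y = 1.0, 0.2, 0.3

def f(lam):
    return lam * (B - x) / (B - lam * (x + y))

# Fixed point: P * (x + y) = x implies f(P) = P.
P = x / (x + y)
assert abs(f(P) - P) < 1e-12

# Midpoint convexity on a grid of [0, 1].
grid = [i / 50 for i in range(51)]
for u in grid:
    for v in grid:
        assert f((u + v) / 2) <= (f(u) + f(v)) / 2 + 1e-12
```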
As $f$ is convex, we may apply Jensen's inequality, and use the fact that $\Genpot{B}{\gamma} \cdot (x + y) = x$ to conclude that \begin{equation*} \begin{split} \Genpot{B_{i+1}}{\gamma_{i+1}} &\geq f\Big(\sum_{j=1}^k \gamma(C_j) \cdot \frac{B}{B+C_j}\Big) = f(\Genpot{B}{\gamma}) \\ &= \frac{\Genpot{B}{\gamma} \cdot(B-x)}{B-\Genpot{B}{\gamma} \cdot (x+y)} = \Genpot{B}{\gamma}. \end{split} \end{equation*} \end{proof} \section{Discussion and Future Work} \label{sec:disc} We initiate the study of partial-information bidding games, and specifically bidding games with partially-observed budgets. Our most technically challenging results are for one-sided partial-information mean-payoff poorman-bidding games. We show a complete picture in strongly-connected games for the partially-informed player, which is the more important case in practice. By identifying the value for the fully-informed player in the bowtie game, we show that the value in mean-payoff bidding games does not necessarily exist when restricting to pure strategies. We discuss open problems in this model. First, we focus on games played on strongly-connected graphs. Reasoning about such games is the crux of the solution to general full-information bidding games. We thus expect that our results will be key in the solution of partial-information bidding games on general graphs. This extension, however, is not as straightforward as in the full-information setting, and we leave it as an open question. Second, we identify the value of the fully-informed player in the bowtie game ${\cal G}_{\bowtie}$.
Reasoning about ${\cal G}_{\bowtie}$ was the crux of the solution to general strongly-connected full-information bidding games. In fact, the same technique was used to lift a solution for ${\cal G}_{\bowtie}$ to general strongly-connected games under all the previously-studied bidding mechanisms. In partial-information games, however, this technique breaks the intricate analysis in the proof of Thm.~\ref{thm:fully-informed}. Again, we expect a solution to the bowtie game to be a key ingredient in the solution to general strongly-connected games, and we leave the problem open. Finally, we showed that the value does not necessarily exist under pure strategies. We leave open the problem of developing optimal mixed strategies for the players. This work is part of a research effort that combines formal methods and AI, including multi-agent graph games~\cite{AHK02}, logics to reason about strategies~\cite{CHP10,MMPV14} and in particular, their application in auctions~\cite{MM+22}, enhancing {\em network-formation games} with concepts from formal methods (e.g.,~\cite{AKT16}), and many more. \stam{ \section{Reachability Richman-bidding games are equivalent to random-turn games} \label{app:equiv-reach} \begin{figure}[ht] \begin{minipage}[b]{0.4\linewidth} \centering \includegraphics[width=\linewidth]{reach.pdf} \end{minipage} \hspace{0.1\linewidth} \begin{minipage}[b]{0.4\linewidth} \centering \includegraphics[width=\linewidth]{RTreach.pdf} \end{minipage} \caption{{\bf Left:} A reachability bidding game in which the target for Player~$i$\xspace is $t_i$, for $i \in \set{1,2}$. The threshold ratios under Richman and poorman bidding are depicted under each vertex. {\bf Right:} The (simplified) uniform random-turn game that corresponds to the game on the left.} \label{fig:reach} \end{figure} Consider the reachability bidding game that is depicted in Fig.~\ref{fig:reach}.
We show that under Richman bidding, when the game starts at $v_0$, Player~$1$\xspace wins when his ratio is $2/3+\epsilon$. It is dual to show that Player~$2$\xspace wins when Player~$1$\xspace's ratio is $2/3-\epsilon$, for any $\epsilon > 0$. Thus, the threshold at $v_0$ is $2/3$. The construction for poorman bidding is somewhat similar, though technically more involved. Player~$1$\xspace starts by bidding $1/3$ at $v_0$. He necessarily wins the bidding since his bid is higher than Player~$2$\xspace's budget. He pays Player~$2$\xspace and moves the token to $v_1$, thus the budgets at $v_1$ are $1/3 + \epsilon$ and $2/3-\epsilon$. Player~$1$\xspace now bids all in. If he wins the bidding, he wins the game. Otherwise, Player~$2$\xspace pays him at least $1/3+\epsilon$, thus when the game returns to $v_0$, his budget increases by at least $\epsilon$ to at least $2/3+2\epsilon$. By repeatedly playing according to this strategy, if he does not win the game, his budget at $v_0$ will eventually exceed $3/4$, which suffices for winning two biddings in a row and forcing the game to $t_1$. Consider the random-turn game that is depicted in the right side of Fig.~\ref{fig:reach}. The equivalence between the models is as follows: the probability of reaching $t_1$ from $v_j$, for $j \in \set{0,1}$, coincides with the threshold ratio under Richman bidding for Player~$2$\xspace. Note that under poorman bidding, since threshold ratios are irrational, such an equivalence is unlikely to exist. } \small \bibliographystyle{plain}
\section{Section 1. Proof of Theorem 1} \label{TheoremProof} We present a proof for Theorem 1. Any $(n+1)$-partite qubit state $\ket{\psi} \in \mathcal{H}_2^{\otimes (n+1)}$ may be written as \begin{equation} \label{Peq111} \ket{\psi}=\ket{0} \ket{\psi_0} + \ket{1}\ket{\psi_1}. \end{equation} Such a form provides the canonical decomposition of the reduced density matrix $\rho =\ket{\psi_0}\bra{\psi_0} +\ket{\psi_1}\bra{\psi_1}$ over the non-normalized states $\ket{\psi_0}, \ket{\psi_1}\in \mathcal{H}_2^{\otimes n}$, obtained by tracing out the first qubit. Consider now an invertible operator $\mathcal{O} =\begin{psmallmatrix} a&b\\ c&d \end{psmallmatrix} \in \SLC$ acting on the first qubit. Under the action of this operator, the state $\ket{\psi}$ is transformed into \eq{ \ket{\psi'}:= \mathcal{O} \ket{\psi}=\ket{0} \Big(a\ket{\psi_0}+b\ket{\psi_1}\Big) + \ket{1} \Big(c\ket{\psi_0}+d\ket{\psi_1}\Big) = \ket{0}\ket{\psi'_0} + \ket{1}\ket{\psi'_1}\,, } where \ea{ \ket{\psi'_0} & := a\ket{\psi_0}+b\ket{\psi_1} \label{p0l}\,, \\ \ket{\psi'_1} & := c\ket{\psi_0}+d\ket{\psi_1} \label{p1l}\,. } Consider now any superposition of states $\ket{\psi'_0}$ and $\ket{\psi'_1}$. Observe that \begin{align*} \ket{\psi'_z} := z \ket{\psi'_0}+\ket{\psi'_1} &=z \Big(a\ket{\psi_0} +b \ket{\psi_1} \Big) + c\ket{\psi_0} +d \ket{\psi_1} \\ &=\left(a z+b\right) \ket{\psi_0} +\left( cz+d \right) \ket{\psi_1}\\ &\propto \dfrac{az+b}{cz+d}\ket{\psi_0 }+ \ket{\psi_1}, \end{align*} where the complex number $cz+d$ was factored out so that the transformation maps states in the extended plane representation to states in the same representation. In other words, we have \eq{ \ma{O}\ket{\psi_z} = \ket{\psi_{z'}}\,, \quad z' = \frac{az+b}{cz+d}\,, } i.e. the operator $\ma{O}$ transforms states in the extended plane representation by applying a Möbius transformation to the index $z$. Suppose now that $\zeta_i$ is a zero of a degree-$h$ polynomial function $E$, i.e.
$E (\zeta_i \ket{\psi_0}+ \ket{\psi_1})=0$. Acting on the first qubit with $\ma{O}$, the density matrix after tracing out the first qubit becomes $\ket{\psi'_0}\bra{\psi'_0} +\ket{\psi'_1}\bra{\psi'_1}$, so the entanglement measure $E$ will be zero for some new roots $\zeta'_i$, such that $E (\zeta'_i \ket{\psi'_0}+ \ket{\psi'_1})=0$. Using Eqs.~(\ref{p0l})-(\ref{p1l}), the latter equation can be transformed into \eq{ E\left((c\zeta_i'+d)\left(\dfrac{a\zeta_i'+b}{c\zeta_i'+d} \ket{\psi_0}+ \ket{\psi_1}\right) \right)=0 } where the factor $(c\zeta_i'+d)$ is irrelevant, since rescaling the argument of the polynomial function $E$ does not affect its zeros. Comparing with the equation for the roots before the action of $\ma{O}$, we reach the conclusion that the roots transform according to the inverse Möbius transformation as \eq{\label{Mzeta} \zeta'_i =\dfrac{d\zeta_i-b}{-c\zeta_i+a} \,, } under the action of the operator $\ma{O}$. As a consequence, the roots of the zero-polytope transform with respect to the inverse Möbius transformation associated to the operator $\mathcal{O} =\begin{psmallmatrix} a&b\\ c&d \end{psmallmatrix}$. Analyze now the case when $\mathcal{O}$ is a unitary operator $\mathcal{O}=\mathcal{U}$. Since any unitary operator $\mathcal{U}$ can be represented as a rotation $\mathcal{R} =\begin{psmallmatrix} \text{cos}\alpha & \text{sin}\alpha \;e^{i \phi}\\ -\text{sin}\alpha \;e^{-i \phi} & \text{cos}\alpha \end{psmallmatrix}$ (up to an irrelevant global phase), it will simply rotate the Bloch ball, together with the zero-polytope. Consider now multilocal operators $\mathcal{O}_{\vec{n}}= \mathcal{O}_1\otimes\ldots\otimes\mathcal{O}_n$ acting on the remaining qubits of the state $\ket{\psi}$ from \cref{Peq111}. The state $\ket{\psi}$ will transform accordingly as \eq{ \ket{\psi'}:= \mathcal{O}_{\vec{n}} \ket{\psi}=\ket{0} \underbrace{\mathcal{O}_{\vec{n}} \ket{\psi_0} }_{:=\ket{\psi_0'}}+ \ket{1} \underbrace{\mathcal{O}_{\vec{n}} \ket{\psi_1} }_{:=\ket{\psi_1'}}.
} After the action of $\mathcal{O}_{\vec{n}}$, a state in the extended plane representation will have a value of entanglement measure $E$ equal to \[ E \Big( z\ket{\psi_0'}+ \ket{\psi_1'} \Big)= E \Big( \mathcal{O}_{\vec{n}} \big(z\ket{\psi_0}+ \ket{\psi_1}\big) \Big)\,. \] However, since $E$ is $\SLC^{\otimes n}$ invariant, we have that $E (z \ket{\psi_0}+ \ket{\psi_1} )=0$ iff $E (z \ket{\psi_0'}+ \ket{\psi_1'} )=0$, and so the roots of both polynomial equations are the same. As a consequence, the roots of the zero-polytope will remain unchanged under the action of $\mathcal{O}_{\vec{n}}$. This concludes the proof of Theorem~1. \section{Section 2. Normal form} Consider the set of four symmetrically related points $\Phi=\{z,\frac{1}{z},-z,-\frac{1}{z} \}$. It is very convenient to associate with them the cuboid spanned by eight points: \[ \Phi\cup\bar{\Phi}= \Big\{z,\frac{1}{z},-z,-\frac{1}{z} , \bar{z},\frac{1}{\bar{z}},-\bar{z},-\frac{1}{\bar{z}} \Big\}, \] as presented in \cref{G24}. Observe that all six faces of the cuboid are parallel to one of the planes: $XZ$, $XY$, or $YZ$. In fact, this property is equivalent to the initial assumption that the set of points $\Phi$ is in normal form. Clearly, all rotations of the Bloch ball preserve the form of the cuboid. Nevertheless, only a special subgroup of all rotations preserves the property that the faces of the cuboid are parallel to $XZ$, $XY$, or $YZ$.
This special subgroup $\mathcal{G}_{24}$ contains $24$ elements spanned by three rotations of $\pi /2$ around the $X$, $Y$, and $Z$ axes, given by: \begin{align} \mathbf{R}_x ({\pi}/{2}) = & \begin{psmallmatrix}\text{cos} \;\pi /4 & -i\;\text{sin}\;\pi /4\\-i\;\text{sin}\;\pi /4 & \text{cos}\;\pi /4\end{psmallmatrix} =\frac{1}{\sqrt{2}} \begin{psmallmatrix}1 & -i\\-i &1 \end{psmallmatrix} ,\; \label{Indeed} \\ \mathbf{R}_y ({\pi}/{2}) = & \begin{psmallmatrix}\text{cos} \;\pi /4 & -\text{sin}\;\pi /4\\\text{sin}\;\pi /4 & \text{cos}\;\pi /4\end{psmallmatrix} =\frac{1}{\sqrt{2}} \begin{psmallmatrix}1 & -1\\1 &1 \end{psmallmatrix} ,\; \label{Indeed2} \\ \mathbf{R}_z ({\pi}/{2}) = & \begin{psmallmatrix}e^{-i \pi /4 } & 0\\0 & e^{i \pi /4 }\end{psmallmatrix} =\frac{1}{\sqrt{2}} \begin{psmallmatrix}1-i & 0\\0 & 1+i \end{psmallmatrix} \label{Indeed3} \end{align} In fact, this is the group of rotations preserving the regular cube (the group of orientable cube symmetries). Clearly, all rotations in the $\mathcal{G}_{24}$ group preserve the normal-form structure of $\Phi$. On the other hand, the normal form is uniquely determined up to the $24$ rotations in the $\mathcal{G}_{24}$ group. \begin{figure}[h!] \includegraphics[scale=1.6]{G24.pdf} \caption{A normal system of roots $z,1/z,-z,-1/z$ together with the conjugate points $\bar{z},1/\bar{z},-\bar{z},-1/\bar{z}$ spans the cuboid whose faces are parallel to the $XZ$, $XY$ and $YZ$ planes. There are 24 rotations of the Bloch sphere which preserve this property, constituting the elements of the group $\ma{G}_{24}$. Two of them, namely the rotations by an angle of $\pi /2$ around the $X$ and $Y$ axes, are presented. The system of roots transforms according to Eqs.~\ref{Indeed}-\ref{Indeed3}, giving $z\mapsto z':= \frac{z-i}{-iz+1}$ and $z\mapsto z'':= \frac{z-1}{z+1}$ for the two rotations.
} \label{G24} \end{figure} \begin{proposition} \label{24} Any four non-degenerate points $z_1, z_2, z_3, z_4$ on the Bloch sphere can be transformed onto the normal form $z,\frac{1}{z},-z,-\frac{1}{z} $ via a Möbius transformation $T$. The latter is uniquely defined up to the $24$ rotations in the group $\mathcal{G}_{24}$. \end{proposition} \begin{proof} For each complex number $\lambda$ there exists another complex number $z$, such that the cross-ratio of the four points is equal to $\lambda$, i.e. \begin{equation} \Big(z,\frac{1}{z};-z,-\frac{1}{z} \Big)= \lambda\,. \label{CRvia} \end{equation} Indeed, the cross-ratio on the left side equals ${4 z^2} /{(1+z^2)^2}$, and the equation ${4 z^2} /{(1+z^2)^2}=\lambda$ has exactly four solutions \begin{equation} \label{eq23} z_0 =\sqrt{\frac{2-\lambda + 2\sqrt{1-\lambda}}{\lambda}} ,\; \frac{1}{z_0},\;-z_0,\;-\frac{1}{z_0}, \end{equation} obtained by solving the quadratic equation $\lambda w^2 + (2\lambda - 4)w + \lambda = 0$ for $w = z^2$, whose two roots are $z_0^2$ and $1/z_0^2$. Therefore, for a given value $\lambda$ there exists a unique $\lambda$-normal system, such that the cross-ratio of its vertices is given by $(z_0,\frac{1}{z_0};-z_0,-\frac{1}{z_0} )=\lambda$. Replacing the vertex $z_0$ by any other vertex $z_0,{1}/{z_0},-z_0$, or $-{1}/{z_0}$ does not change the value of the cross-ratio $(z_0,\frac{1}{z_0};-z_0,-\frac{1}{z_0} )=\lambda$. Note that there exists a unique Möbius transformation $T$ which maps $z_1,z_2,z_3$ onto $z_0, {1}/{z_0},-z_0$, with the remaining $z_4$ mapped onto $-{1}/{z_0}$. Observe as well that the value of $z_0$ is unique up to its inverse, opposite and opposite inverse elements, according to \cref{eq23}, with the corresponding Möbius transformations associated to the matrices $T, \sigma_x T ,\sigma_y T$ and $\sigma_z T$. Each of those transformations maps the set of points $\{z_1,z_2,z_3,z_4\}$ onto the same set of points $\{z_0, {1}/{z_0},-z_0,-{1}/{z_0}\}$, although the exact bijection between those two sets of roots is different for each transformation.
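The cross-ratio computation above can be checked numerically; the sketch below uses the convention $(z_1,z_2;z_3,z_4)=\frac{(z_1-z_3)(z_2-z_4)}{(z_1-z_4)(z_2-z_3)}$ and an arbitrary non-degenerate test point.

```python
def cross_ratio(z1, z2, z3, z4):
    # (z1, z2; z3, z4) = (z1 - z3)(z2 - z4) / ((z1 - z4)(z2 - z3))
    return ((z1 - z3) * (z2 - z4)) / ((z1 - z4) * (z2 - z3))

z = 0.8 + 0.5j                        # arbitrary non-degenerate test point
lam = cross_ratio(z, 1 / z, -z, -1 / z)
assert abs(lam - 4 * z**2 / (1 + z**2)**2) < 1e-12

# Every member of the normal family {z, 1/z, -z, -1/z} yields the same lambda.
for w in (1 / z, -z, -1 / z):
    assert abs(cross_ratio(w, 1 / w, -w, -1 / w) - lam) < 1e-12
```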
Depending on the order of the four points $\{z_1,z_2,z_3,z_4\}$, the corresponding cross-ratio takes six values: $\lambda, \frac{1}{\lambda}, 1-\lambda ,\frac{1}{1-\lambda},\frac{\lambda-1}{\lambda}$ and $\frac{\lambda}{\lambda-1}$. For each of these, there is a corresponding set of solutions of the form $\{z_0, {1}/{z_0},-z_0,-{1}/{z_0}\}$ via \cref{eq23} with four related Möbius transformations. Therefore, there are in total 24 Möbius transformations that map any four non-degenerate points onto a normal system, each of them related by an element of the group $\mathcal{G}_{24}$ which has exactly $24$ elements. \end{proof} \section{Section 3. Proof of Proposition 1} \label{GabcdProof} Consider the state $\ket{G_{abcd}}$ and its decomposition with respect to the first subsystem $\ket{G_{abcd}} =\ket{0}\ket{\psi_0} +\ket{1}\ket{\psi_1}$, where \begin{align*} \ket{\psi_0} &= \frac{a+d}{2}\ket{000} +\frac{a-d}{2}\ket{011} +\frac{b+c}{2}\ket{101} +\frac{b-c}{2}\ket{110}\,, \\ \ket{\psi_1} &= \frac{a+d}{2}\ket{111} +\frac{a-d}{2}\ket{100} +\frac{b+c}{2}\ket{010} +\frac{b-c}{2}\ket{001}\,. \end{align*} Suppose that $\tau^{(3)} (\zeta \ket{\psi_0} +\ket{\psi_1} )=0$. Since $\tau^{(3)}$ is a $\SLC^{\otimes 3}$ invariant, for any local operators $\mathcal{O}_1$, $\mathcal{O}_2$, $\mathcal{O}_3$ we have \[ \tau^{(3)} \Big( (\mathcal{O}_1\otimes\mathcal{O}_2\otimes\mathcal{O}_3) \big(\zeta\ket{\psi_0} +\ket{\psi_1} \big)\Big)=0\,. \] Observe that \begin{align*} \ket{\psi_0}=&(\sigma_x \otimes \sigma_x \otimes \sigma_x) \ket{\psi_1}\,,\\ \ket{\psi_1}=&(\sigma_x \otimes \sigma_x \otimes \sigma_x) \ket{\psi_0}\,, \end{align*} where $\sigma_x,\sigma_y,\sigma_z$ are Pauli matrices.
Therefore, by taking all local operators $\mathcal{O}_1, \mathcal{O}_2, \mathcal{O}_3$ equal to $\sigma_x$, we may conclude that \begin{equation} 0=\tau^{(3)} \Big( (\sigma_x \otimes \sigma_x \otimes \sigma_x) \big(\zeta\ket{\psi_0} +\ket{\psi_1} \big)\Big)= \tau^{(3)} \Big( \zeta\ket{\psi_1} +\ket{\psi_0} \Big) \propto \tau^{(3)} \Big( \frac{1}{\zeta}\ket{\psi_0}+\ket{\psi_1}\Big), \end{equation} where the last step uses that $\tau^{(3)}$ is a homogeneous polynomial, so rescaling its argument only rescales its value; hence $1/\zeta$ is another root of $\tau^{(3)}$. Similarly, by considering $(\sigma_y \otimes \sigma_y \otimes \sigma_y)$ and $(\sigma_z \otimes \sigma_z \otimes \sigma_z)$, one may find another two roots $-\zeta, \;-1/\zeta$ of $\tau^{(3)}$. This shows that the roots of $\tau^{(3)}$ evaluated on any state from the $G_{abcd}$ family are symmetrical with respect to rotations around the $X,Y,Z$ axes by the angle $\pi$. Writing $\tau^{(3)} (z \ket{\psi_0} +\ket{\psi_1} )=0$ explicitly, we obtain the equation \begin{equation} \tau^{(3)} (z \ket{\psi_0} +\ket{\psi_1} ) = A z^4 - 2(2 B+A)z^2 + A = 0, \label{eqq} \end{equation} where $A = (b^2 - c^2) (a^2 - d^2)$ and $B = (c^2-d^2) (a^2 - b^2)$. The above equation is non-degenerate iff $A,B,A+2B\neq 0$, which happens iff $a^2, b^2, c^2, d^2$ are pairwise different. \begin{lemma} Any local operator $\mathcal{O}= \mathcal{O}_1 \otimes \mathcal{O}_2 \otimes \mathcal{O}_3 \otimes \mathcal{O}_4 \in \SLC^{\otimes 4}$ which transforms states $\ket{G_{a' b'c'd'}} \propto \mathcal{O} \ket{G_{a bcd}}$ with $a^2, b^2, c^2, d^2$ pairwise different, satisfies $\mathcal{O}_i \in \mathcal{G}_{24} $ for each $i$. \end{lemma} \begin{proof} A local operator $\mathcal{O}_1$ acting on the first qubit and transforming the state $ \ket{G_{abcd}}$ onto $\ket{G_{a'b'c'd'}}$, also transforms their systems of roots denoted as $\Lambda$ and $\Lambda '$, respectively, via the action of the corresponding Möbius transformation. Note that both systems $\Lambda$ and $\Lambda '$ are in the normal form, therefore, according to \cref{24}, we have that $\mathcal{O}_1 \in \mathcal{G}_{24} $.
A similar analysis with respect to all other qubits shows that $\mathcal{O}_2,\mathcal{O}_3,\mathcal{O}_4 \in \mathcal{G}_{24}$. \end{proof} This way, searching for SLOCC-equivalence between the states $\ket{G_{a bcd}}$ and $\ket{G_{a' b'c'd'}}$ becomes restricted to the search within the finite class of operators $\mathcal{O} \in \mathcal{G}_{24}^{\otimes 4}$. Since the group $\mathcal{G}_{24}$ has only 24 elements, one may numerically verify that there are exactly $8\times 24 =192$ states in the $G_{abcd}$ family which are SLOCC-equivalent to $\ket{G_{abcd}}$ by $\mathcal{O}\in \mathcal{G}_{24}^{\otimes 4}$. For example, the following operation \begin{align} \label{tuples1} \mathbf{R}_x \Big(\frac{\pi}{2}\Big) \otimes \mathbf{R}_x \Big(\frac{\pi}{2}\Big)\otimes \mathbf{R}_x \Big(\frac{\pi}{2}\Big) \otimes \mathbf{R}_x \Big(\frac{\pi}{2}\Big) & \end{align} transforms the state $\ket{G_{abcd}}$ into $\ket{G_{-b-acd}}$. This may be written simply as a transformation of a tuple of indices: the tuple $(a,b,c,d )$ is transformed into the tuple $(-b,-a,c,d)$.
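This transformation rule can be verified directly; the following NumPy sketch (with arbitrary test amplitudes) checks that $\mathbf{R}_x(\pi/2)^{\otimes 4}$ maps $\ket{G_{abcd}}$ onto $\ket{G_{-b,-a,c,d}}$ in the computational basis.

```python
import numpy as np

def G(a, b, c, d):
    # Unnormalized |G_abcd> in the computational basis |q1 q2 q3 q4>.
    v = np.zeros(16, dtype=complex)
    for (i, j), coef in {(0b0000, 0b1111): (a + d) / 2,
                         (0b0011, 0b1100): (a - d) / 2,
                         (0b0101, 0b1010): (b + c) / 2,
                         (0b0110, 0b1001): (b - c) / 2}.items():
        v[i], v[j] = v[i] + coef, v[j] + coef
    return v

Rx = np.array([[1, -1j], [-1j, 1]]) / np.sqrt(2)   # R_x(pi/2)
U = Rx
for _ in range(3):
    U = np.kron(U, Rx)                             # R_x(pi/2) on all four qubits

a, b, c, d = 0.3, 0.7 - 0.2j, 1.1, -0.4j           # arbitrary test amplitudes
assert np.allclose(U @ G(a, b, c, d), G(-b, -a, c, d))
```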
Similarly, the operators shown below on the left-hand sides provide the corresponding transformations of the tuple $(a,b,c,d)$, shown on the right-hand sides: \begin{align} \label{tuples2} \nonumber \mathbf{R}_y \Big(\frac{\pi}{2}\Big) \otimes \mathbf{R}_y \Big(\frac{\pi}{2}\Big)\otimes \mathbf{R}_y \Big(\frac{\pi}{2}\Big) \otimes \mathbf{R}_y \Big(\frac{\pi}{2}\Big) &\quad \longleftrightarrow \quad (a,\blue{d},c,\blue{b}), \\ \nonumber \mathbf{R}_z \Big(\frac{\pi}{2}\Big) \otimes \mathbf{R}_z \Big(\frac{\pi}{2}\Big)\otimes \mathbf{R}_z \Big(\frac{\pi}{2}\Big) \otimes \mathbf{R}_z \Big(\frac{\pi}{2}\Big) &\quad \longleftrightarrow \quad (\blue{-d},b,c,\blue{-a}), \\ \nonumber \mathbf{R}_y ({\pi}) \otimes \mathbf{R}_y ({\pi})\otimes \bs{1} \otimes \bs{1} &\quad \longleftrightarrow \quad (a,\blue{-b},\blue{-c},d), \\ \nonumber \mathbf{R}_x ({\pi}) \otimes \mathbf{R}_x ({\pi})\otimes \bs{1} \otimes \bs{1} &\quad \longleftrightarrow \quad (a,b,\blue{-c},\blue{-d}), \\ \nonumber \mathbf{R}_y ({\pi}) \otimes \bs{1} \otimes \mathbf{R}_y ({\pi})\otimes \bs{1} &\quad \longleftrightarrow \quad (\blue{d},\blue{c},\blue{b},\blue{a}), \\ \mathbf{R}_x ({\pi}) \otimes \bs{1} \otimes \mathbf{R}_x ({\pi})\otimes \bs{1} &\quad \longleftrightarrow \quad (\blue{b},\blue{a},\blue{d},\blue{c})\,. \nonumber \end{align} Additionally, the tuples $ (a,b,c,d)$ and $(-a,-b,-c,-d)$ represent the same state. Note that any composition of the above operations also provides SLOCC equivalences between $\ket{G_{abcd}}$ states. The eight aforementioned transformations of tuples generate all permutations of the $a,b,c,d$ indices, together with the change of sign of any two or all four indices. There are exactly $24$ permutations and for each permutation the signs can be matched in exactly $1+ {{4}\choose{2}} +1 =8$ ways. This gives in total $192$ tuples representing SLOCC equivalent states, which perfectly matches the numerical result.
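The counting argument above can also be verified mechanically: enumerating the closure, under composition, of the listed tuple operations (without identifying a tuple with its global negation) yields exactly $192$ signed tuples. A sketch:

```python
# Each operation is (perm, signs): it sends a tuple t to
# (signs[i] * t[perm[i]]) for i = 0..3.
gens = [
    ((1, 0, 2, 3), (-1, -1, 1, 1)),   # (a,b,c,d) -> (-b,-a,c,d)
    ((0, 3, 2, 1), (1, 1, 1, 1)),     # (a,b,c,d) -> (a,d,c,b)
    ((3, 1, 2, 0), (-1, 1, 1, -1)),   # (a,b,c,d) -> (-d,b,c,-a)
    ((0, 1, 2, 3), (1, -1, -1, 1)),   # (a,b,c,d) -> (a,-b,-c,d)
    ((0, 1, 2, 3), (1, 1, -1, -1)),   # (a,b,c,d) -> (a,b,-c,-d)
    ((3, 2, 1, 0), (1, 1, 1, 1)),     # (a,b,c,d) -> (d,c,b,a)
    ((1, 0, 3, 2), (1, 1, 1, 1)),     # (a,b,c,d) -> (b,a,d,c)
]

def apply(op, t):
    perm, signs = op
    return tuple(signs[i] * t[perm[i]] for i in range(4))

# Orbit of a generic tuple = number of distinct composed operations,
# since signed permutations act freely on a tuple with distinct entries.
seen, frontier = {(1, 2, 3, 4)}, [(1, 2, 3, 4)]
while frontier:
    t = frontier.pop()
    for g in gens:
        u = apply(g, t)
        if u not in seen:
            seen.add(u)
            frontier.append(u)
assert len(seen) == 192
```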
Finally, another trivial manipulation with the indices $a,b,c,d$ comes from multiplying by a global phase, which is an irrelevant operation due to the fact that quantum states are elements of a projective space. This operation transforms the indices as \[ (e^{i\theta} a,\;e^{i\theta} b,\; e^{i\theta} c,\; e^{i\theta} d ) \sim (a,b,c,d)\,, \] resulting in the same quantum state for any real number $\theta \in [0,2 \pi)$. In particular, for $\theta=\pi$, we observe that the system of opposite indices determines the same state as the initial one, i.e. $(-a,-b,-c,-d)\sim (a,b,c,d)$. \end{document}
\section{Scattering states} We linearize the problem and consider $+/-$ (where $+/-$ corresponds to the vicinity of $\pm k_F$, i.e. right- and left-movers) and use the ansatz \begin{equation} \psi_{1,2} = \begin{pmatrix} a^+_e e^{i k_F x} e^{i(k+q)x + i \varphi_{1,2}/2}\\ a^+_h e^{i k_F x} e^{i(k-q)x - i \varphi_{1,2}/2}\\ a^-_e e^{-i k_F x} e^{i(k+q)x + i \varphi_{1,2}/2}\\ a^-_h e^{-i k_F x} e^{i(k-q)x - i \varphi_{1,2}/2} \end{pmatrix} \end{equation} in superconducting lead 1 and similarly in lead 2. Here $k \equiv k_x$. Recall that $\varphi_1 = 0$ and $\varphi_2 = \varphi$. Notice that, normally, there should be a wavefunction normalization by the quasiparticle current in order for the scattering problem to be unitary; however, we will use the Andreev approximation, which makes this normalization unnecessary. We assume that $0< q v_F < \Delta$, and that the energies of the states will lie in the gap of the superconductor, $0<E< \Delta - q v_F$. The wavefunction for right-movers that decays as $x\rightarrow + \infty$ in $SC_2$ is: \begin{equation} \begin{pmatrix} a^+_e \\ a^+_h \\ 0\\ 0 \end{pmatrix}_{SC_2} = \frac{1}{\sqrt{2}} \begin{pmatrix} \left( \frac{E- v_F q}{\Delta} + i \sqrt{1-\left( \frac{E- v_F q}{\Delta}\right)^2}\right)\\ 1\\ 0\\ 0 \end{pmatrix} \end{equation} For this state, $v_F k =+ i \sqrt{\Delta^2 - (E- v_F q)^2}$. The left-moving state in $SC_1$ that decays as $x\rightarrow -\infty$ is: \begin{equation} \begin{pmatrix} 0 \\ 0 \\ a^-_e\\ a^-_h \end{pmatrix}_{SC_1} = \frac{1}{\sqrt{2}} \begin{pmatrix} 0\\ 0\\ \left( \frac{E+ v_F q}{\Delta} + i \sqrt{1-\left( \frac{E+ v_F q}{\Delta}\right)^2}\right)\\ 1 \end{pmatrix} \end{equation} For this state, $v_F k =- i \sqrt{\Delta^2 - (E+ v_F q)^2}$. It is known~\cite{beenakker1992three} that the result of the scattering formalism for short junctions will be independent of whether the junction is represented by a narrow weak link, a region of normal metal, or an insulating barrier.
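A quick numeric sanity check of the decaying solutions above (a sketch in units $\Delta = v_F = 1$, with hypothetical sample values of $E$ and $q$): for subgap energies $0 < E < \Delta - q v_F$ the electron-to-hole amplitude ratio lies on the unit circle, so Andreev reflection at a transparent interface is a pure phase.

```python
import math

def eh_ratio(E, q):
    # (E - v_F q)/Delta + i*sqrt(1 - ((E - v_F q)/Delta)^2), Delta = v_F = 1
    x = E - q
    return complex(x, math.sqrt(1 - x * x))

for q in (0.0, 0.3, 0.5):
    for E in (0.05, 0.2, 0.45):
        if 0 < E < 1 - q:            # subgap window 0 < E < Delta - q*v_F
            assert abs(abs(eh_ratio(E, q)) - 1) < 1e-12
```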
For simplicity of calculation, we consider normal states in the middle region. We will be solving the problem of Andreev scattering at two interfaces ($N/SC_2$ and $SC_1/N$). For an incoming electron, the scattering states that are relevant to the left (1) and the right (2) contacts are: \begin{equation} \psi_N^{(1)} = \begin{pmatrix} 0 \\ 0\\ e^{-i k_F x} e^{i k x}\\ r_A e^{-i k_F x} e^{i k x} \end{pmatrix}, \quad \psi_N ^{(2)} = \begin{pmatrix} e^{i k_F x} e^{i k x} \\ r_A e^{i k_F x} e^{i k x} \\ 0 \\ 0 \end{pmatrix} \end{equation} where $r_A$ is not necessarily the same constant for scattering on the left and right contacts. For the incoming holes, the problem is set up similarly. \section{The scattering matrix formalism} To obtain the amplitudes of Andreev reflection, we solve the condition $\psi_N^{(1,2)} = S_{interface \ 1,2} \psi_S^{(1,2)}$ at each interface ($x = 0$ and $x=d$). In the case of perfectly transparent contacts, the scattering matrix at the interfaces is the identity. We solve these equations (for both incoming electrons and holes at both interfaces) and obtain the matrix describing Andreev scattering at both interfaces: \begin{equation} \begin{split} \psi_{out} = \begin{pmatrix} \psi_{N,e}^-(0)\\ \psi_{N,e}^+(d)\\ \psi_{N,h}^+(0)\\ \psi_{N,h}^-(d) \end{pmatrix} = \begin{pmatrix} & & r_A^- & 0\\ & & 0 & r_A^+ e^{- i \varphi}\\ r_A^+ & 0 & & \\ 0 & r_A^- e^{ i \varphi} & & \end{pmatrix} \begin{pmatrix} \psi_{N,e}^+(0)\\ \psi_{N,e}^-(d)\\ \psi_{N,h}^-(0)\\ \psi_{N,h}^+(d) \end{pmatrix}\equiv s_A^{-1} \psi_{in} \end{split} \end{equation} where the unfilled spaces correspond to zero entries, and we used the notation \begin{align} r_A^{\pm} = \frac{E \mp v_F q}{\Delta } - i \sqrt{1 - \left (\frac{E\mp v_Fq}{\Delta} \right)^2}.
\end{align} In the absence of normal reflection, the scattering matrix of the normal region is: \begin{equation} \psi_{out} =\begin{pmatrix} \psi_{N,e}^-(0)\\ \psi_{N,e}^+(d)\\ \psi_{N,h}^+(0)\\ \psi_{N,h}^-(d) \end{pmatrix} = \begin{pmatrix} r& t' & & \\ t & r' & & \\ & & r^* & t'^* \\ & & t^* & r'^* \end{pmatrix} \begin{pmatrix} \psi_{N,e}^+(0)\\ \psi_{N,e}^-(d)\\ \psi_{N,h}^-(0)\\ \psi_{N,h}^+(d) \end{pmatrix} \equiv s_N \psi_{in} \end{equation} where in the short-junction limit ($\frac{\Delta d}{v_F} \approx \frac{d}{\xi}, \frac{E_z d}{v_F} \ll 1$) the transmission and reflection are energy-independent. In our case, for one channel, $r' = r$ and $t' = t$. The condition determining the spectrum of the bound states is $\det \left ( \bm 1 - s_N s_A \right) = 0 $, which translates into \begin{equation} \label{T} T \left ((r_A^+)^2 - e^{2 i q d + i \varphi} \right) \left ((r_A^-)^2 - e^{-2 i q d - i \varphi} \right) + (1-T) (1- r_A^- r_A^+)^2 =0. \end{equation} where $T = |t|^2$, $|t|^2 + |r|^2 = 1$. In the absence of normal reflection $t = e^{i qd}$, $T = 1$ and this simplifies to: \begin{equation} \label{main} \left ((r_A^+)^2 - e^{2 i q d + i \varphi} \right) \left ((r_A^-)^2 - e^{-2 i q d - i \varphi} \right) =0 \end{equation} from the main text, and the solutions to this equation produce the energies of the two bound states \eqref{eq:ABS_energy}. \begin{figure}[h] \includegraphics[width= 0.9\columnwidth]{e1.pdf} \caption{ Spectrum of the bound states in the junction at different values of the Cooper pair momentum $q$ in superconducting regions 1 and 2. } \label{Espectrum} \end{figure} \section{The current-phase relation} The free energy is \cite{beenakker1991universal,beenakker1992three} \begin{equation} F = -\frac{2}{\beta} \sum_{E>0} \ln \left [ 2 \cosh \left( \frac{\beta E}{2} \right )\right] + \int d^2 r \frac{|\Delta|^2}{|g|} + \Tr H_0 \end{equation} where $H_0$ is the particle block of the BdG Hamiltonian and $\beta$ is the inverse temperature.
We neglect the contribution from the spatial integral $ \int d^2 r \frac{|\Delta|^2}{|g|}$ to the Josephson current because we assume that $\Delta$ changes as a step-function at the contacts. Therefore, the free energy can be written as \begin{equation} F = - \frac{1}{\beta} \int_0^\infty dE \nu(E) \ln 2\cosh\frac{\beta E}{2} \end{equation} The density of states $\nu(E)$ in the junction can be evaluated as~\cite{beenakker1992three} \begin{equation} \nu(E)=-\frac{1}{\pi } \Im \frac{\partial}{\partial E} \ln \det\left ( 1- s_A s_N\right) + \text{const.} \end{equation} which describes both bound and continuum states. Here $s_A$ and $s_N$ are the scattering matrices transforming the wavefunctions due to Andreev reflection at the interfaces and due to the propagation/scattering in the weak link; the `const.' is the phase-independent part of the density of states. We can rewrite the density of states \begin{equation} \nu(E)=\frac{1}{\pi } \Im \frac{\partial}{\partial E} \ln \sin\left ( \arccos\frac{E + q v_F}{\Delta} + \frac{\widetilde \varphi}{2}\right)\sin\left ( \arccos\frac{E - q v_F}{\Delta} - \frac{\widetilde \varphi}{2}\right) + \text{const.} \end{equation} where, in order to work with energies of both bound and continuum states, we have to assume that $E \to E + i 0$, and perform proper analytic continuation where necessary. Lastly, we plug the expression for the density of states into the free energy and evaluate the current as $I = \frac{2 e}{\hbar} \frac{d F}{d \varphi}$. We extend the symmetric integration to $(-\infty, +\infty)$ and integrate by parts, using that the boundary terms vanish as $\propto 1/E$.
Thus, we obtain: \begin{equation} I(\varphi) = - \frac{e}{2 \pi \hbar} \int_{-\infty}^{\infty} dE \tanh \frac{\beta E}{2} \mathrm{Im} \frac{\partial}{\partial \varphi} \ln \sin\left ( \arccos\frac{E + q v_F}{\Delta} + \frac{\widetilde \varphi}{2}\right)\sin\left ( \arccos\frac{E - q v_F}{\Delta} - \frac{\widetilde \varphi}{2}\right) \end{equation} which equals \begin{equation} I(\varphi) = - \frac{e}{4 \pi \hbar} \int_{-\infty}^{\infty} dE \tanh \frac{\beta E}{2} \mathrm{Im} \left [ \cot\left ( \arccos\frac{E + q v_F}{\Delta} + \frac{\widetilde \varphi}{2}\right) - \cot\left ( \arccos\frac{E - q v_F}{\Delta} - \frac{\widetilde \varphi}{2}\right) \right ] \end{equation} We complete the contour in the upper half complex energy plane, picking up residues at each of the poles of the hyperbolic tangent at Matsubara frequencies $E = i\omega_n \equiv i(2n+1)\pi/\beta$ to obtain a summation: \begin{equation} I(\varphi) = \frac{e}{\hbar \beta} \sum_{n=0}^{\infty} \mathrm{Re} \left [ \cot\left ( \arccos\frac{i \omega_n + q v_F}{\Delta} + \frac{\widetilde \varphi}{2}\right) - \cot\left ( \arccos\frac{i \omega_n - q v_F}{\Delta} - \frac{\widetilde \varphi}{2}\right) \right ] \end{equation} This, finally, can be brought into the form: \begin{equation} I(\varphi) = \frac{2e}{\beta \hbar} \sum_{n=0}^{\infty} \textrm{Re}\cot \left( \arccos(\frac{i\omega_n-qv_F}{\Delta}) - \frac{\tilde \varphi}{2}\right) \label{eqn:sum} \end{equation} The zero temperature result, which we are interested in, is given by turning the sum $\sum_{n=0}^{\infty}$ into an integral $\frac{\beta}{2\pi}\int_0^\infty d\omega $. \subsubsection{The case of $|q| v_F < \Delta$} Here we derive the Josephson current at zero temperature using a slightly simpler approach than the one introduced in the section above. This approach will also allow us to distinguish the current contributions from bound states and continuum states.
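The zero-temperature limit of Eq.~\eqref{eqn:sum} admits a quick numeric sanity check. For $q=0$ (so that $\tilde\varphi=\varphi$) the integral $\frac{e}{\pi\hbar}\int_0^\infty d\omega\,\mathrm{Re}\cot\left(\arccos(i\omega/\Delta)-\varphi/2\right)$ can be evaluated in closed form, giving $I(\varphi)=\frac{e\Delta}{2\hbar}\sin(\varphi/2)$ for $|\varphi|<\pi$; the sketch below (units $e=\hbar=\Delta=1$) reproduces this by direct quadrature.

```python
import numpy as np

# Zero-T limit of the Matsubara sum with q = 0, units e = hbar = Delta = 1:
# I(phi) = (1/pi) * Int_0^inf Re cot(arccos(i*w) - phi/2) dw.
def I_of_phi(phi, w_max=400.0, n=400_001):
    w = np.linspace(1e-6, w_max, n)
    z = np.arccos(1j * w) - phi / 2           # principal-branch complex arccos
    integrand = (np.cos(z) / np.sin(z)).real  # Re cot(...)
    dw = w[1] - w[0]
    # trapezoidal rule (avoiding np.trapz for NumPy-version portability)
    return (integrand.sum() - 0.5 * (integrand[0] + integrand[-1])) * dw / np.pi

phi = np.pi / 2
assert abs(I_of_phi(phi) - 0.5 * np.sin(phi / 2)) < 1e-2
```

For $q \neq 0$ the same quadrature applies to the full integrand with $\tilde\varphi = \varphi + 2qd$.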
The Josephson current can be written as: \begin{equation} \begin{split} I = - \frac{2e}{\hbar}\sum_{E>0} \tanh \left( \frac{\beta E}{2 } \right ) \frac{dE}{d\varphi} - \frac{2e}{\hbar}\frac{2}{\beta} \int _{E\, \in \text{ cont.}}^\infty dE \ln \left [ 2 \cosh \left( \frac{\beta E}{2} \right )\right] \frac{d \nu (E)}{d \varphi} \end{split} \end{equation} where the first term only counts contributions from the bound states and the second term corresponds to the current from the continuum. We are interested in the case when the temperature is zero, and the expression simplifies: \begin{equation} \begin{split} I = - \frac{2e}{\hbar}\sum_{E>0} \frac{dE}{d\varphi} - \frac{2e}{\hbar} \int _{E\, \in \text{ cont.}}^\infty dE \, E \frac{d \nu (E)}{d \varphi} \end{split} \end{equation} The expression for the contribution from the bound states is given in the main text (eq.~\eqref{J_bound}). In what follows, we present the details of the derivation of the current arising from the continuum of states. $\nu(E)$ equals \begin{equation} \nu(E)=-\frac{1}{\pi } \Im \frac{\partial}{\partial E} \ln \left ( T \left ((r_A^+)^2 - e^{2 i q d + i \varphi} \right) \left ((r_A^-)^2 - e^{-2 i q d - i \varphi} \right) + (1-T) (1- r_A^- r_A^+)^2\right ) \end{equation} When $T = 1$, the expression is especially simple. The derivative over $\varphi$ of the density of states for the continuum of states at $q v_F = 0.5 \Delta$ and $\varphi = 0$ is shown in Fig.~\ref{fig:dos}.
To compute the current from the continuum of states analytically, we change the order of derivatives and separate the contributions from left- and right-movers: \begin{equation} I_{cont} = \frac{2e}{\hbar} \frac{1}{\pi} \Im \left [\int_{\Delta + |q|v_F}^\infty dE \, E \frac{\partial }{\partial E} \frac{d}{d\varphi} \ln \left ( (r_A^+)^2 - e^{ i \tilde \varphi} \right)+ \int_{\Delta - |q|v_F}^\infty dE \, E \frac{\partial }{\partial E} \frac{d}{d\varphi} \ln \left ( (r_A^-)^2 - e^{- i \tilde \varphi} \right) \right] \end{equation} Then we proceed to evaluate the derivative over $\varphi$: \begin{equation} I_{cont} = -\frac{2e}{\hbar} \frac{1}{\pi} \Im i\left [\int_{\Delta + |q|v_F}^\infty dE \, E \frac{\partial }{\partial E} \frac{1}{ e^{ -i \tilde \varphi} (r_A^+)^2 -1 }- \int_{\Delta - |q|v_F}^\infty dE \, E \frac{\partial }{\partial E}\frac{1}{ e^{ i \tilde \varphi} (r_A^-)^2 -1 } \right] \end{equation} Next, we integrate by parts in order to obtain \begin{equation} \begin{split} I_{cont} &= \frac{2e}{\hbar} \frac{1}{\pi} \Im i\left [(\Delta - |q|v_F) \frac{1}{ e^{ -i \tilde \varphi}-1} - (\Delta + |q|v_F) \frac{1}{ e^{ -i \tilde \varphi}-1 }\right] +\\ &+\frac{2e}{\hbar} \frac{1}{\pi} \Im i\left [\int_{\Delta + |q|v_F}^\infty dE \, \frac{1}{ e^{ -i \tilde \varphi} (r_A^+)^2 -1}- \int_{\Delta - |q|v_F}^\infty dE \,\frac{1}{ e^{ i \tilde \varphi} (r_A^-)^2 -1 } \right] \end{split} \end{equation} Note that $\Im \frac{i}{ e^{ -i \tilde \varphi}-1 }=\Im \frac{i}{ e^{ i \tilde \varphi}-1 }=\frac{1}{2}$. 
Thus, simplifying further: \begin{equation} \begin{split} I_{cont} &=-\frac{e}{\hbar} \frac{1}{\pi} \left [(\Delta - |q|v_F) - (\Delta + |q|v_F) \right] +\frac{2e\Delta }{\hbar} \frac{1}{\pi} \Im i\left [\int_{1}^\infty dx \, \frac{1}{ e^{ -i \tilde \varphi + 2 \text{arccosh} x} -1 }- \int_{1}^\infty dy \,\frac{1}{ e^{ i \tilde \varphi+2 \text{arccosh} y} -1 } \right] = \\ &=\frac{e}{\hbar} \frac{2 |q| v_F}{\pi} - \frac{2e\Delta }{\hbar} \frac{1}{\pi} \Im \left [\int_{1}^\infty dx \, \frac{2 e^{2 \text{arccosh} x} \sin \tilde \varphi }{ 1+ e^{4 \text{arccosh} x} -2 e^{2 \text{arccosh} x} \cos \tilde \varphi} \right] \end{split} \end{equation} (recall that for continuous states $(r^{\pm}_A)^2 = e^{2 \text{arccosh} \frac{E \mp q v_F}{\Delta}}$). We see that the imaginary part of the second term is identically zero and thus: \begin{equation} \begin{split} I_{cont} =\frac{e \Delta}{\hbar} \frac{2 q v_F}{\pi \Delta} \end{split} \end{equation} which yields the result in eq.~\eqref{eq:I_cont}. \begin{figure}[h] \includegraphics[width= 0.35\columnwidth]{dos.pdf} \caption{ The derivative over $\varphi$ of the density of states, for the continuum part of the spectrum only, at $q v_F = 0.5 \Delta$. The dashed gray lines correspond to $E = \Delta - q v_F$ and $E = \Delta + q v_F$. The tails of the quantity $d \nu (E)/ d\varphi$ in the continuum have the opposite sign and cancel exactly when $q = 0$. When $q \neq 0$, the energies corresponding to the continuum states for right- and left-movers acquire a Doppler shift $\pm q v_F$; on top of that, there is no perfect cancellation anymore, as we show in the calculation above.
} \label{fig:dos} \end{figure} \subsubsection{The case $|q|v_F > \Delta$} When $|q|v_F >\Delta$ and $0<\widetilde{\varphi}<2\pi$, we evaluate the current using eq.~\eqref{eqn:sum} at zero temperature: \begin{equation} I(\varphi) = \frac{e}{\pi \hbar} \int_{0}^{\infty} d\omega \textrm{Re}\cot \left( \arccos\left(\frac{i\omega -qv_F}{\Delta}\right) - \frac{\tilde \varphi}{2}\right) \end{equation} We obtain: \begin{eqnarray} I(\varphi) = -\frac{2e(qv_F - \sqrt{q^2v_F^2 - \Delta^2})}{\pi \hbar} + \frac{2e\Delta \sin(\frac{\widetilde{\varphi}}{2})}{2\pi \hbar} ( \pi - \textrm{arg}[\Delta + qv_F\cos(\frac{\widetilde{\varphi}}{2}) +i\sqrt{q^2v_F^2 - \Delta^2}\sin(\frac{\widetilde{\varphi}}{2})] \\ - \textrm{arg}[\Delta - qv_F\cos(\frac{\widetilde{\varphi}}{2}) + i\sqrt{q^2v_F^2 - \Delta^2}\sin(\frac{\widetilde{\varphi}}{2}) ]) \end{eqnarray} which can be simplified to \begin{equation} I(\varphi) = -\frac{2e(qv_F - \sqrt{q^2v_F^2 - \Delta^2})}{\pi \hbar} + \frac{2e\Delta \sin(\frac{\widetilde{\varphi}}{2})}{\pi \hbar} \arctan{ \frac{\Delta \sin{\frac{\widetilde{\varphi}}{2}}}{\sqrt{q^2v_F^2-\Delta^2}}} \end{equation} Here $\textrm{arg}(z)$ refers to the argument of the complex number $z$. The maximum negative current occurs at $\widetilde{\varphi} = 0$, $|I_{c-}| = \frac{2e}{\pi \hbar}(qv_F - \sqrt{q^2v_F^2 - \Delta^2})$. The maximum positive current occurs at $\widetilde{\varphi} = \pi$, $I_{c+} = \frac{2e}{\pi \hbar}(\Delta \sin^{-1}(\Delta/qv_F)-(qv_F - \sqrt{q^2v_F^2 - \Delta^2}))$. Thus, the diode efficiency can be expressed as $\frac{|I_{c-}| - I_{c+} }{|I_{c-}| + I_{c+}} = \frac{ 2(1-\sqrt{1-p^2})-p\sin^{-1}(p)}{p\sin^{-1}(p)}$, where $p \equiv \Delta/qv_F <1$. In the limit $qv_F \gg \Delta$, the diode efficiency behaves as $p^2/12 + 13p^4/360 + \ldots$ and thus vanishes as $\propto 1/q^2$. The expression for the supercurrent becomes symmetric in this limit: $I(\varphi)|_{q v_F/\Delta \rightarrow + \infty} \approx -\frac{e\Delta^2 \cos{\widetilde{\varphi}}}{\pi \hbar qv_F} $.
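As a quick numerical sketch (with $p = \Delta/qv_F$ dimensionless, so no unit conventions enter), the quoted small-$p$ expansion of the diode efficiency can be checked directly:

```python
import numpy as np

def eta(p):
    """Diode efficiency (|I_c-| - I_c+)/(|I_c-| + I_c+) for p = Delta/(q v_F) < 1."""
    return (2.0 * (1.0 - np.sqrt(1.0 - p**2)) - p * np.arcsin(p)) / (p * np.arcsin(p))

p = 0.1
series = p**2 / 12 + 13 * p**4 / 360  # quoted small-p expansion
print(eta(p), series)  # agree to better than 1e-6 at p = 0.1
```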
\begin{figure}[h] \includegraphics[width= 0.35\columnwidth]{fig3_SM.pdf} \caption{ The diode efficiency $\eta$ through the junction as a function of the Cooper pair momentum $q$. } \label{fig:currentSM} \end{figure} \section{Tight-binding calculations} We simulate the minimal model with a short Josephson junction by setting up a nearest-neighbor tight-binding chain with superconducting regions of the same length $L_S = N_S a$ and the thickness of the normal region $L_N = N_N a$, $N_N \ll N_S$. The hopping amplitude $t$ is the same in all regions, and the chemical potential is $\mu_S$ in both superconducting regions and $\mu_N$ in the normal region. The pairing potential at lattice site $n$ is $\Delta(n) = \Delta_{1,2} e^{2 i q n a}$, where $\Delta _1 = \Delta$ and $\Delta_2 = \Delta e^{i \varphi}$. When $t \gg \Delta$, this corresponds to the condition $\mu \gg \Delta$ used in analytical derivations. In all the calculations, we used $\mu_S = 0$, and thus, $v_F = 2 a t$. For the calculation in Fig.~\ref{fig:current}, we used $N_S = 350$, $N_N = 3$ (the total length of the system is 703$a$), $a = 1$, $t = 100$ and $\Delta = 2$. The solid line shows the result at negligible normal reflection, which is achieved at $\mu_N = 0$. The dotted line shows the result at small normal reflection, when a small potential barrier is introduced inside the junction by setting $\mu_N = 25$, which opens a small gap in the dispersion, see Fig.~\ref{fig:SM_TB_spectrum}(a-c). The current was found by evaluating the expression $I = \frac{2 e}{\hbar} \frac{d F}{d \varphi}$ numerically, where the free energy is found by summing over all the negative energy states. We plot it in Fig.~\ref{fig:SM_TB_spectrum}(d-f). We estimate that when the potential at the junction equals $\mu_N = 0.1t$ and $\mu_N = 0.4 t$, the junction transparency is $T = 0.998$ and $T = 0.975$, respectively.
We obtained this correspondence by comparing the energy spectrum obtained from the tight-binding calculation with the analytical expression $E = \Delta\sqrt{1- T \sin^2 \frac{\varphi}{2}}$. We estimate the junction transparency to be $T = 0.99$ for the tight-binding calculation at $\mu_N = 0.25 t$ shown as a dashed red line in Fig.~\ref{fig:current}(a). \begin{figure}[h] \includegraphics[width= 1\columnwidth]{fig_SM_TB.pdf} \caption{(a-c) Energy spectra of bound and extended states from tight-binding calculation at $q v_F = -0.5 \Delta$. The parameter $\mu_N$ is the potential at the junction that controls the amplitude of the normal reflection. The other parameters used are $N_S = 150$, $N_N = 3$, $a = 1$, $t = 20$, $\Delta = 2$. (d-f) Corresponding phase-current relations showing that the current nonreciprocity is decreased when the normal reflection becomes large. } \label{fig:SM_TB_spectrum} \end{figure} \section{Spectral flow} In the presence of normal reflection, the left- and right-moving states mix, and in the energy domain $\Delta - |q| v_F < |E|< \Delta + |q| v_F$ there are no true bound states anymore. As we see, the contributions from left-movers and right-movers at these energies (associated with $r_A^+$ and $r_A^-$, respectively) are now related. From tight-binding calculations, we see that these states are not connected to the rest of the continuum states, as shown in Fig.~\ref{Espectrum_TB}. Upon changing the phase, there is spectral flow of one bound state into these quasi-continuum states and back into another bound state.
This allows us to estimate the contribution of the continuum states to the Josephson current based on a spectral flow argument: \begin{equation} \label{cont2} I_{cont}= - \frac{2 e}{\hbar} \sum_{i \text{ in continuum}} \frac{d |E_i|}{d \varphi} = \frac{2e }{\hbar } \frac{\Delta E}{\Delta \varphi } = -\frac{2e }{\hbar } \frac{2 |q| v_F}{2 \pi} = -\frac{e \Delta }{\hbar } \frac{2 |q| v_F}{ \pi \Delta} \end{equation} which exactly matches the result in Eq.~\eqref{eq:I_cont}. \begin{figure}[h] \includegraphics[width= 0.6\columnwidth]{fig9.pdf} \caption{(a) Tight-binding calculation of the spectrum of a 1D nanowire at $q v_F = - 0.5 \Delta$. The parameters used are $N_S = 150$, $N_N = 3$, $a = 1$, $t = 20$, $\Delta = 2$, and $\mu_N = 0$. (b) Analytical expression for bound state levels at $T = 1$ and a schematic illustrating the spectral flow connecting the states. } \label{Espectrum_TB} \end{figure} \section{Further discussion of the contribution from the continuum of states} \subsubsection{Vanishing of the contribution from the continuum in conventional short junctions} Let us discuss one perspective on how and why the continuous spectrum at $\Delta - |q| v_F<E<\Delta +|q| v_F$ contributes to the Josephson current.
As we saw above, the current from continuous states is determined by \begin{equation} \label{Icont_gen} I_{cont} = - \frac{2e}{\hbar} \int _{\text{ cont.}} dE \, E \frac{d \nu (E)}{d \varphi}, \quad \nu(E)=-\frac{1}{\pi } \Im \frac{\partial}{\partial E} \ln \det\left ( 1- s_A s_N\right) + (\varphi \text{-independent const}) \end{equation} Consider \begin{equation} \label{det_DOS} \det\left ( 1- s_A s_N\right) = T \left ((r^-_A)^2 (r^+_A)^2 +1 - \cos \tilde \varphi \left [ (r^-_A)^2 + (r^+_A)^2\right] \right) + (1-T) (1- r_A^- r_A^+)^2 + i T \sin \tilde \varphi \left [ (r^+_A)^2 - (r^-_A)^2\right] \end{equation} In the cases considered in refs.~\cite{beenakker1992three,furusaki1991,beenakker1991universal}, the dispersion for left- and right-movers was symmetric, and for a short junction: \begin{equation} r^-_A = r^+_A = r_A \end{equation} which immediately sets the imaginary part of the determinant above to zero. Thus, for a short junction with L/R-symmetric dispersion, the density of states is independent of $\varphi$ and the current from the continuum \eqref{Icont_gen} vanishes. It is known that time-reversal symmetry breaking alone (for example, induced by a spin-splitting magnetic field, see \cite{yokoyama2013josephson,yokoyama2014anomalous}) does not lead to an asymmetry in the ABS dispersion in the case of short Josephson junctions. Therefore, the fact that the finite Cooper pair momentum not only breaks time reversal, but also provides a selected direction in space (inversion breaking), is crucial for the asymmetry and the JDE. \subsubsection{Screening current in an infinite superconductor with finite Cooper pair momentum $q$} For the rest of the discussion, assume that $q$ is negative, which is the case in Fig.~\ref{ill}. When the energy is in the range $\Delta - |q| v_F < |E|< \Delta + |q| v_F$, the left-moving states in the normal region correspond to a gapless energy range in both superconductors, as seen in Fig.~\ref{ill}.
\begin{figure}[h] \includegraphics[width= 1\columnwidth]{illustration2.pdf} \caption{ An illustration showing the types of contributions to the Josephson current at different energy ranges in a transparent junction. For simplicity, we only consider positive energy states because the spectrum is particle-hole symmetric. The left column shows the energy spectrum in an extended superconductor with finite-momentum pairing $\Delta (x) = \Delta e^{2 i q x}$. The second and third columns show the schematics of the states in the junction and the energy spectrum in the junction, respectively.\\ In the energy range $|E|<\Delta - |q|v_F $, there are truly bound Andreev states. In the presence of normal reflection, the bound state in the energy range $\Delta - |q|v_F < E < \Delta + |q| v_F$ in the middle panel becomes a quasi-bound state. At $|E| > \Delta + |q| v_F$, true continuum states exist; however, they are current-carrying, as we discuss in the text. } \label{ill} \end{figure} Let us compute the screening current that flows in an infinite superconducting slab with $\Delta(x) = \Delta e^{2 i q x}$. For the energy range $\Delta - |q| v_F < |E|< \Delta + |q| v_F$ the current comes from left-movers only: \begin{equation} J_{1} = -e v_F \int_{\Delta - |q| v_F }^ {\Delta + |q| v_F}\nu_L(E) d E = -\frac{e}{ \pi \hbar} \int_{\Delta - |q| v_F }^ {\Delta + |q| v_F} \frac{E + |q| v_F}{\sqrt{(E + |q| v_F)^2 - \Delta^2} } d E = -\frac{2 e}{ \pi \hbar} \sqrt{|q|v_F (\Delta + |q|v_F)} \end{equation} which, as we see, is zero when $q=0$.
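The closed form for $J_1$ can be verified numerically; a minimal sketch in assumed units $e = \hbar = v_F = \Delta = 1$ and an illustrative value $|q| v_F = 0.5$:

```python
import numpy as np
from scipy.integrate import quad

qv = 0.5  # |q| v_F in units of Delta (illustrative)
# left-mover integrand (E + |q| v_F) / sqrt((E + |q| v_F)^2 - Delta^2)
integrand = lambda E: (E + qv) / np.sqrt((E + qv)**2 - 1.0)
val, _ = quad(integrand, 1.0 - qv, 1.0 + qv, limit=200)  # integrable 1/sqrt edge
J1 = -val / np.pi
closed_form = -(2.0 / np.pi) * np.sqrt(qv * (1.0 + qv))
print(J1, closed_form)  # the two agree
```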
Analogously, the contribution from the true continuum states equals the difference between the left- and right-mover contributions: \begin{equation} J_{2} = - e v_F\left ( \int_{\Delta + |q| v_F }^ {\infty}\nu_L(E) d E - \int_{\Delta + |q| v_F }^ {\infty}\nu_R(E) d E \right )=\frac{2 e}{ \pi \hbar} \left ( \sqrt{|q|v_F (\Delta + |q|v_F)} - |q| v_F \right) \end{equation} Thus, the screening current is \begin{equation} \label{cont} J_{scr}=J_{1} + J_{2}= \frac{2 eq v_F}{ \pi \hbar } \end{equation} which, as we see, equals \eqref{eq:I_cont}. \end{document}
\section*{Abstract} We use Latent-Dynamic Conditional Random Fields to perform skeleton-based pointing gesture classification at each time instance of a video sequence, where we achieve a frame-wise pointing accuracy of roughly $83\%$. Subsequently, we determine continuous time sequences of arbitrary length that form individual pointing gestures and this way reliably detect pointing gestures at a false positive detection rate of $0.63\%$. \section{Introduction}\label{sec:introduction} Pointing gestures are a fundamental aspect of non-verbal human-human interaction, where they are often used to direct the conversation partner's attention towards objects and regions of interest -- an essential means of achieving a joint focus of attention. As a consequence, reliable detection and interpretation of pointing gestures is an important aspect of natural, intuitive \acf{HCI} and \acf{HRI}. In this paper, we use \acfp{LDCRF} to perform pointing gesture detection and frame-wise segmentation based on skeleton data such as, \emph{e.g.}, the joint data provided by a Kinect. Therefore, we use the \ac{LDCRF} to label each frame of a video sequence and subsequently determine continuous time sequences of arbitrary length that form individual pointing gestures. An important advantage of this approach is that we can detect pointing gestures while they are being performed and do not have to wait for a person to perform and complete the whole pointing action. This enables us to react to a pointing gesture as it is performed -- an important aspect considering that natural \ac{HRI} is our target application (see, \emph{e.g.}, \cite{schauerte2014look,schauerte2010focusing}). For example, this way our robot is able to direct its head toward the coarse target area, thus providing visual feedback for the pointing person, while the pointing person is still adjusting and/or fine-tuning the pointing direction.
We evaluate the performance of our pointing gesture detection method based on a novel dataset. We used a Microsoft Kinect to record a diverse set of pointing gestures performed by 18 subjects, which enables us to evaluate person-independent pointing gesture detection. \newlength{\exampleheightobama} \setlength{\exampleheightobama}{2.125cm} \begin{figure}[tb] \disablegraphics{ \centering \includegraphics[width=.975\linewidth]{fig1} } \caption{Pointing gestures are an essential aspect of human communication that is frequently used throughout various situations such as, e.g., during speeches.} \end{figure} \section{Related Work}\label{sec:relwork} One of the first systems to recognize pointing gestures was Bolt's ``Put that there'' system \cite{Bolt1980}, which enabled users to point at and interact with objects while giving voice commands. However, the system required the user to wear a Polhemus \acf{ROPAMS} device at his wrist. Systems that avoid specialized wearable devices nowadays mostly use the data provided by stereo or depth cameras as a basis for (pointing) gesture recognition. Here, stereo cameras or depth cameras (\emph{e.g.}, Microsoft Kinect or the Mesa SwissRanger) are used to acquire depth image sequences, which allows for simpler gesture recognition in situations with, for example, occlusions and/or difficult lighting conditions. Gesture recognition for \acl{HCI} has been an active research area for many years. Accordingly, there exist several surveys that provide a detailed overview of early and recent research developments \cite{Suarez2012,Mitra2007,Wachs2011,Gavrila1999,Pavlovic1997,Jaimes2007}.
In the following, we focus on depth-based sequential gesture recognition approaches and, for example, do not discuss non-sequential approaches (\emph{e.g.}, \cite{Jojic2000,Feris2005,VanDenBergh2011,Keskin2011,Biswas2011,Ramey2011}). \acfp{HMM} and stochastic grammars are widely used models for speech and gesture recognition. The Hidden Markov Model is a generative model that includes hidden state structure. Gesture recognition applications using \acp{HMM} can be found in Chen \emph{et al.} \cite{chen2003hand}, Yang \emph{et al.} \cite{yang2012gesture}, Zafrulla \emph{et al.} \cite{zafrulla2011american} and Vogler \emph{et al.} \cite{vogler1998asl}. The popular approach by Nickel and Stiefelhagen \cite{Nickel2003} uses a stereo camera to detect pointing gestures. A color-based detection of hands and face is combined with a clustering of the found regions based on depth information, enabling the tracking of 3D skin color clusters. Hidden Markov Models are trained on different states (begin, hold, end) of sample pointing gestures and used to detect the occurrence of pointing gestures. The additional head orientation feature is used to improve recall and precision of pointing gestures. The authors claim to achieve 90\% accuracy identifying targets using the head-hand line, but no accuracy for the pointing gesture detection has been reported. In Park \emph{et al.} \cite{park2008real}, face and hand tracking is performed using 3D particle filters after initially having detected those body parts based on skin color. To account for small pointing gestures (where the pointing arm is not fully extended), the hand positions are mapped onto an imaginary hemisphere centered around the shoulder of the pointing arm before estimating the pointing direction. This is done by a first-stage \ac{HMM} that is used to retrieve more accurate hand positions.
In a second stage, these hand positions are fed into three \acp{HMM} (for the three states Non-Gesture, Move-To and Point-To) in order to detect a pointing gesture. In case of a pointing gesture, the pointing direction is estimated. Droeschel \emph{et al.} \cite{droeschel2011learning} focus on pointing gestures where the person does not look into the target direction. Body features are extracted from the depth and amplitude images delivered by a Time-of-Flight camera. \acp{HMM} are used to detect a pointing gesture, which is segmented into the three phases \enquote{preparation}, \enquote{hold} and \enquote{retraction}. The \acp{HMM} are trained with features such as the distance from the head to the hand, the angle between the arm and the vertical body axis, and the velocity of the hand. To estimate pointing directions, a pointing direction model is trained using Gaussian Process Regression, which leads to a better accuracy than simple criteria like head-hand, shoulder-hand or elbow-hand lines (see \cite{Nickel2003}). Despite their popularity, \acp{HMM} have some considerable limitations. An \ac{HMM} is a generative model that assumes a joint probability over observation and label sequences. The model needs to enumerate all possible observation sequences, in which each observation is required to be an atomic entity. Thus, the representation of long-range dependencies between observations or interacting features is computationally not tractable. The need for a richer representation of observations (\emph{e.g.}, with overlapping features) has led to the development of \acfp{MEMM} \cite{mccallum2000maximum}. A \ac{MEMM} is a model for sequence labeling that combines features of \acp{HMM} and Maximum Entropy models. It is a discriminative model that extends a standard maximum entropy classifier by assuming that the values to be learned are connected in a Markov chain rather than being conditionally independent of each other.
It represents the probability of reaching a state given an observation and the previous state. Sung \emph{et al.} \cite{sung2012unstructured} implemented detection and recognition of unstructured human activity in unstructured environments using hierarchical \acp{MEMM}. Sung \emph{et al.} use different features describing body pose, hand position and motion extracted from a skeletal representation of the observed person. A major shortcoming of \acp{MEMM} and other discriminative Markov models based on directed graphical models is the \enquote{label bias problem}, which can lead to poorer performance compared to \acp{HMM}. This problem has been addressed by Lafferty \emph{et al.}'s \acfp{CRF} \cite{lafferty2001conditional}. A conditional model like a \ac{CRF} specifies the probabilities of label sequences for a given observation sequence. Features of the observation sequence do not need to be independent and they may represent attributes at different levels of granularity of the same observations. They could combine several properties of the same observation. Past and future observations may be considered to determine the probability of transitions between labels. Lafferty \emph{et al.} \cite{lafferty2001conditional} and Sminchisescu \emph{et al.} \cite{sminchisescu2006conditional} demonstrate how \acp{CRF} outperform both \acp{HMM} and \acp{MEMM}. While Lafferty \emph{et al.} \cite{lafferty2001conditional} use synthetic data and real \acf{POS} tagging data, Sminchisescu \emph{et al.} \cite{sminchisescu2006conditional} apply \acp{CRF} for the recognition of human motions. For this purpose, Sminchisescu \emph{et al.} use a combination of 2D features from image silhouettes and 3D features. They show that the \ac{CRF}'s performance based on 3D joint angle features is more accurate with long-range dependencies and that \acp{CRF} improve the recognition performance over \acp{MEMM} and \acp{HMM}.
\acp{CRF} can model the transitions between gestures (extrinsic dynamics) but are not able to represent internal sub-structure. This led to the development of Hidden Conditional Random Fields, which incorporate hidden state variables that model the sub-structure of a gesture sequence. Wang \emph{et al.} \cite{Wang2006} combine the two main advantages of current approaches to gesture recognition: the ability of \acp{CRF} to use long-range dependencies and the ability of \acp{HMM} to model latent structure. One single joint model is trained for all gestures to be classified and hidden states are shared between those gestures. According to Wang \emph{et al.} \cite{Wang2006}, the \acf{HCRF} model outperforms \acp{HMM} and \acp{CRF} in the classification of arm gestures. But since \acp{HCRF} are trained on pre-segmented sequences of data, they only capture the internal structure but not the dynamics between gesture labels. To overcome this limitation, Morency \emph{et al.} \cite{morency2007latent} introduced \acfp{LDCRF}, which combine the strengths of \acp{CRF} and \acp{HCRF}. \acp{LDCRF} are able to capture extrinsic dynamics as well as intrinsic substructure and can operate directly on unsegmented data. The performance of \acp{LDCRF} was tested on head and eye gesture data of three different datasets and the results were compared to state-of-the-art generative and discriminative modeling techniques like \ac{SVM}, \acp{HMM}, \acp{CRF} and \acp{HCRF}. The results show that \acp{LDCRF} perform better than all other methods. \section{Method}\label{sec:method} In the following, we use the Microsoft Kinect and OpenNI's NITE \cite{shotton2013real} module to obtain a skeleton of the person in front of the Kinect. We then use the joint data over time to detect the occurrence of pointing gestures.
Since the Kinect provides relatively stable joint tracks (\emph{e.g.}, it is quite robust against illumination changes), we focus on pointing gesture detection and do not address, \emph{e.g.}, noisy body part detections. However, our trained model can be applied to other sensing methods (\emph{e.g.}, stereo cameras), if comparable, stable joint tracks are provided. \subsection{CRF} \acfp{CRF} are a framework for building probabilistic models to segment and label sequence data \cite{lafferty2001conditional}. For this purpose, \acp{CRF} define an undirected graph $G=(V,E)$ with the random variable $X$ representing the input observation and $Y$ being a random variable over corresponding label sequences. All components $Y_v \in Y$ are from the label alphabet $\mathcal{Y}$ = \{\enquote{background}, \enquote{rise left}, \enquote{point left}, \enquote{fall left}, \enquote{rise right}, \enquote{point right}, \enquote{fall right}, \enquote{other}\}. $G$ contains a vertex $v \in V$ for each random variable representing an element $Y_v \in Y$. In contrast to \acp{HMM}, \acp{CRF} do not model the output label sequence $Y$ and input data sequence $X$ as a joint probability $P(X,Y)$, but the conditional probability of a label sequence $Y_j$ given a sequence of input data $X_i$ and the model parameters $\theta$ \begin{equation} \label{eq:crf_label_probabilities} P(Y_j|X_i; \theta) = \frac{1}{Z(X_i, \theta)} \exp \left( \sum_k \theta_k F_k(Y_j,X_i) \right), \end{equation} with the normalizing partition function \begin{equation} Z(X_i, \theta) = \sum\limits_j \exp \left( \sum_k \theta_k F_k(Y_j,X_i) \right) \end{equation} summing for each $X_i$ over all $Y_j$ corresponding with $X_i$.
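To make Eq.~\eqref{eq:crf_label_probabilities} concrete, the following toy sketch (illustrative dimensions and random weights, not the model trained in this work) computes the conditional probability of a label sequence for a linear-chain \ac{CRF}, obtaining the partition function via a forward recursion in log-space; brute-force enumeration confirms that the probabilities of all label sequences sum to one:

```python
import numpy as np
from itertools import product
from scipy.special import logsumexp

rng = np.random.default_rng(0)
n_labels, n_feats, T = 3, 4, 5            # toy sizes, not the paper's label set
W = rng.normal(size=(n_labels, n_feats))  # state-function weights (lambda_l)
A = rng.normal(size=(n_labels, n_labels)) # transition weights (mu_m), A[i, j]: i -> j
X = rng.normal(size=(T, n_feats))         # one observation sequence X_i
emit = X @ W.T                            # emit[f, y] = sum_l lambda_l s_l(y, X, f)

def score(Y):
    """Unnormalized log-score sum_k theta_k F_k(Y, X)."""
    return emit[np.arange(T), Y].sum() + A[Y[:-1], Y[1:]].sum()

def log_Z():
    """Forward recursion for log Z(X, theta)."""
    alpha = emit[0].copy()
    for f in range(1, T):
        alpha = emit[f] + logsumexp(alpha[:, None] + A, axis=0)
    return logsumexp(alpha)

# P(Y|X) over all label sequences sums to one by construction
lz = log_Z()
total = sum(np.exp(score(np.array(Y)) - lz)
            for Y in product(range(n_labels), repeat=T))
print(total)  # ~ 1.0
```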
The feature functions $F_k (Y_j , X_i )$ can be broken down into state functions $s_l(y_f,X_i,f)$ and transition functions $t_m(y_f,y_{f-1},X_i,f)$ \begin{equation} \label{eq:crf_feature_functions} F_k(Y_j,X_i) = \sum_f \left( \sum\limits_{l} \lambda_l s_l(y_f,X_i,f) + \sum_{m} \mu_m t_m(y_f,y_{f-1},X_i,f) \right). \end{equation} State functions $s_l$ model the conditioning of the labels $Y$ on the input features $X$, while the transition functions $t_m$ model the relations between the labels $Y$ with respect to the input features $X$. Due to computational constraints, a transition feature function can only have two label node values as input parameters. \begin{figure}[tb] \centering \disablegraphics{ \includegraphics[width=.5\linewidth]{fig2} } \caption{\ac{CRF} (left) and \ac{LDCRF} (right) structure illustration with labels $Y_v$, features $X$, and latent variables $H_{ij}$. The boxes represent transition functions (green: intrinsic; black: extrinsic).}\label{fig:random_fields}\label{fig:crf}\label{fig:ldcrf} \end{figure} For our approach, the unconstrained access to any input feature $x_f \in X_i$ of the sequence $X_i$, before or after a given frame, is one of the major advantages that \acp{CRF} provide in contrast to \acp{HMM} (see Sec.~\ref{sec:feature_history}). \subsection{LDCRF} \acfp{LDCRF}, as introduced by Morency \emph{et al.} \cite{morency2007latent} (please note that \acp{LDCRF} were first introduced as Frame-based Hidden-state Conditional Fields in Morency's PhD thesis \cite{morency2006context}), extend the structure of \acp{CRF} by hidden states to model intrinsic structure. Therefore, a set of latent variables $\mathcal{H}$ is introduced; here, we use three hidden states per label $y \in \mathcal{Y}$. The probability of each label in the graph is substituted by the chain of probabilities of its hidden states \begin{equation} \label{eq:ldcrf_model_probability_long} P(Y|X;\theta) = \sum_{H \in \mathcal{H}} P(Y|H,X;\theta) P(H|X;\theta).
\end{equation} The random field (see Eq.~\ref{eq:crf_label_probabilities}) is now built upon the hidden states $\mathcal{H}$ \begin{equation} P(H_n|X_i; \theta) = \frac{1}{Z(X_i, \theta)} \exp \left( \sum_k \theta_k F_k(H_n,X_i) \right). \end{equation} Similar to \acp{CRF}, transition functions model relations between two hidden states. As shown in Fig.~\ref{fig:random_fields}, these functions can now, in addition to the extrinsic structure (black boxes) relating observable class nodes, also model the intrinsic structure of a class (green boxes). Interestingly, \acp{HMM} model intrinsic structure with hidden states as well, but they need a model for each label class $y \in \mathcal{Y}$. Thus, they calculate a probability for each sequence from each trained \ac{HMM} model, but those probabilities are unrelated. In contrast, \acp{LDCRF} seamlessly combine those and output a meaningful probability for each $Y$ (\emph{i.e.}, $\sum_i P(Y_i) = 1$). \subsection{Features}\label{sec:features} Many features have been proposed and evaluated for gesture recognition (see, \emph{e.g.}, \cite{richarz2010feature}). In preliminary experiments, we evaluated several features and feature combinations (\emph{e.g.}, the features that Nickel and Stiefelhagen proposed \cite{Nickel2003}). We achieved the best results for pointing gesture recognition based on \acp{LDCRF} and NITE's \cite{shotton2013real} head, shoulder, elbow, and hand data with the following feature combination: the torso-relative positions of shoulder, elbow and hand, the angle between the vertical y-axis and the shoulder-hand line, the height difference of the two hands, and the distance of the hand to the corresponding shoulder representing the arm extension. To abstract from different body dimensions, we normalize each distance-based feature by the shoulder width.
Furthermore, to have a completely angular representation independent of body dimensions, we add each hand's polar coordinates as features. The position of each hand in polar coordinates is represented by the azimuth angle $\alpha$ (in the floor plane) and the elevation angle $\beta$ with respect to the corresponding shoulder. Here, we only use the angles and omit the radius to remain independent of body size. \subsection{History} \label{sec:feature_history} We use the \ac{CRF}'s ability to establish long-range relations between input frames, which is the result of each feature function's ability to access all $x_f \in X_i$ of the input sequence $X_i$ (see Eq.\,\eqref{eq:crf_feature_functions}). We establish state functions $s_l(y_f,X_i,f)$ that use the input features $x \in \mathrm{H_{istory}} = \{x_{f-1}, x_{f-3}, x_{f-5}, x_{f-7}, x_{f-13} \}$, where all of the features described in Sec.~\ref{sec:features} enter these state functions. This way we can represent dynamics in feature changes over an extended period. This is useful because, for example, at the beginning of a pointing sequence the arm rises quickly, while it slows down shortly before arriving at the pointing posture. Taking into account deltas over shorter as well as larger time intervals captures this dynamic. We use a non-equidistant history to maintain the same level of computational complexity while covering a larger dynamic range compared to an equidistant history (\emph{e.g.}, $x \in \mathrm{H_{istory}} = \{x_{f-2}, x_{f-1}, x_{f}, x_{f+1}, x_{f+2} \}$ \cite{morency2007latent}). Furthermore, we only make use of previous features (\emph{i.e.}, only $f+i$ with $i < 0$), since one of our system's design goals is to keep the latency as low as possible.
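The past-only, non-equidistant history above can be assembled as in the following sketch (our own illustration, not the system's actual code; the list-of-feature-vectors input and the pad-with-first-frame rule are assumptions):

```python
# Hypothetical sketch of the non-equidistant feature history described above.
# `frames` is a list of per-frame feature vectors (torso-relative joint
# positions, angles, etc.); the actual feature extraction is not reproduced.
LAGS = (1, 3, 5, 7, 13)  # only past frames, to keep latency low

def with_history(frames):
    """Concatenate each frame with its features at the given past lags.

    Missing history at the start of a sequence is padded with the first frame.
    """
    out = []
    for f in range(len(frames)):
        vec = list(frames[f])
        for lag in LAGS:
            vec.extend(frames[max(f - lag, 0)])
        out.append(vec)
    return out

seq = [[float(i)] for i in range(20)]  # toy 1-D feature sequence
aug = with_history(seq)
assert len(aug) == 20 and len(aug[0]) == 1 + len(LAGS)
assert aug[15] == [15.0, 14.0, 12.0, 10.0, 8.0, 2.0]
```

Since only lags with $i<0$ are used, each augmented frame is available as soon as the frame itself arrives, matching the low-latency design goal.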
Thus, in contrast to, \emph{e.g.}, Morency \cite{morency2007latent}, we predict the current label based only on previous observations, without waiting for additional future input frames. \section{Evaluation}\label{sec:evaluation} \subsection{Dataset}\label{sec:evaluation:dataset} \begin{figure} \disablegraphics{ \centering \includegraphics[width=.75\linewidth]{fig3} } \caption{Example of a person pointing at target 3. Left (top to bottom): The camera image, the depth image, and the skeleton. Right: The skeleton and target object locations. The different lines represent different methods to calculate the pointing gesture's direction, such as, \emph{e.g.}, the frequently used head-hand line.}\label{fig:skeleton_room} \end{figure} We recorded a novel evaluation dataset that consists of 990 pointing gestures: 18 persons (age range 23 to 63; six female and twelve male) performed 55 pointing gestures toward 22 targets. The subjects were positioned approximately $4$m away from the Kinect, which is at the upper end of Kinect's depth resolution sweet spot and allows us to record a full skeleton even of the tallest subjects. For each camera frame, we recorded the 11-bit depth image, the RGB image, and a 15-joint skeleton as provided by the NITE framework; see Fig.~\ref{fig:skeleton_room}. Apart from the pointing gestures, all subjects performed 10 other gestures (\emph{e.g.}, \enquote{hand waving}, \enquote{shoulder shrug}, or \enquote{come closer}) that we use as other/negative samples for training and testing. Additionally, we determined each person's dominant eye. We manually annotated every video frame in the dataset with one of 8 labels: \enquote{background} describes idle phases between gestures (\emph{e.g.}, standing upright with hanging arms).
\enquote{left rise}, \enquote{left point}, and \enquote{left fall} (analogously, \enquote{right rise}, \enquote{right point}, and \enquote{right fall}) describe the typical pointing behavior of first raising the arm, the actual pointing and fine-tuning of the gesture, and finally lowering the arm again. \enquote{other} is used as a label for the various other gestures. \subsection{Results and Discussion}\label{sec:evaluation:results} \subsubsection{Frame-wise classification}\label{sec:evaluation:results:frame} \begin{table}[tb] \caption{Leave-one-subject-out evaluation results with LDCRF and CRF.\label{tab:results:frames}} % \centering \subfigure[LDCRF\label{tab:leave-out-subject-trained-with-18-subjects}]{ \centering \scaletables{ \begin{tabular}{|l| c | c | c | c | c | c | c | c | c |} \multicolumn{1}{l}{ } & \mcrot{1}{l}{30}{Background} & \mcrot{1}{l}{30}{Left Rise} & \mcrot{1}{l}{30}{Left Point} & \mcrot{1}{l}{30}{Left Fall} & \mcrot{1}{l}{30}{Right Rise} & \mcrot{1}{l}{30}{Right Point} & \mcrot{1}{l}{30}{Right Fall} & \mcrot{1}{l}{30}{Other}\\ \hline Background & \cellcolor[gray]{0.17} \color{white} 83.44&\cellcolor[gray]{0.97}2.67&\cellcolor[gray]{1.00}0.43&\cellcolor[gray]{0.97}3.29&\cellcolor[gray]{0.98}2.08&\cellcolor[gray]{1.00}0.26&\cellcolor[gray]{0.98}1.83&\cellcolor[gray]{0.94}5.99\\ \hline Left Rise & \cellcolor[gray]{0.94}6.34&\cellcolor[gray]{0.22} \color{white} 78.14&\cellcolor[gray]{0.90}9.60&\cellcolor[gray]{0.99}1.43&\cellcolor[gray]{1.00}0.48&\cellcolor[gray]{1.00}0.00&\cellcolor[gray]{1.00}0.03&\cellcolor[gray]{0.96}3.99\\ \hline Left Point & \cellcolor[gray]{1.00}0.03&\cellcolor[gray]{0.96}4.28&\cellcolor[gray]{0.13} \color{white} \textbf{\underline{87.22}}&\cellcolor[gray]{0.93}7.09&\cellcolor[gray]{1.00}0.36&\cellcolor[gray]{1.00}0.12&\cellcolor[gray]{1.00}0.15&\cellcolor[gray]{0.99}0.76\\ \hline Left Fall & \cellcolor[gray]{0.94}5.66&\cellcolor[gray]{0.99}0.52&\cellcolor[gray]{0.95}5.34&\cellcolor[gray]{0.15} \color{white}
85.15&\cellcolor[gray]{1.00}0.07&\cellcolor[gray]{1.00}0.11&\cellcolor[gray]{1.00}0.05&\cellcolor[gray]{0.97}3.11\\ \hline Right Rise & \cellcolor[gray]{0.97}2.98&\cellcolor[gray]{0.98}1.83&\cellcolor[gray]{1.00}0.07&\cellcolor[gray]{1.00}0.23&\cellcolor[gray]{0.21} \color{white} 79.02&\cellcolor[gray]{0.90}10.03&\cellcolor[gray]{0.99}0.60&\cellcolor[gray]{0.95}5.24\\ \hline Right Point & \cellcolor[gray]{1.00}0.00&\cellcolor[gray]{0.99}1.12&\cellcolor[gray]{0.99}0.74&\cellcolor[gray]{0.99}0.53&\cellcolor[gray]{0.91}8.79&\cellcolor[gray]{0.20} \color{white} \textbf{\underline{79.55}}&\cellcolor[gray]{0.95}5.08&\cellcolor[gray]{0.96}4.18\\ \hline Right Fall & \cellcolor[gray]{0.94}6.36&\cellcolor[gray]{1.00}0.30&\cellcolor[gray]{1.00}0.08&\cellcolor[gray]{1.00}0.07&\cellcolor[gray]{0.99}0.94&\cellcolor[gray]{0.95}5.49&\cellcolor[gray]{0.20} \color{white} 80.20&\cellcolor[gray]{0.93}6.55\\ \hline Other & \cellcolor[gray]{0.95}5.22&\cellcolor[gray]{0.96}3.96&\cellcolor[gray]{0.98}1.53&\cellcolor[gray]{0.97}3.12&\cellcolor[gray]{0.98}2.27&\cellcolor[gray]{0.97}3.16&\cellcolor[gray]{0.98}2.23&\cellcolor[gray]{0.21} \color{white} 78.51\\ \hline \end{tabular} } } \subfigure[CRF\label{tab:one-hidden-state-simulating-crf}]{ \centering \scaletables{ \begin{tabular}{|l| c | c | c | c | c | c | c | c | c |} \multicolumn{1}{l}{ } & \mcrot{1}{l}{30}{Background} & \mcrot{1}{l}{30}{Left Rise} & \mcrot{1}{l}{30}{Left Point} & \mcrot{1}{l}{30}{Left Fall} & \mcrot{1}{l}{30}{Right Rise} & \mcrot{1}{l}{30}{Right Point} & \mcrot{1}{l}{30}{Right Fall} & \mcrot{1}{l}{30}{Other}\\ \hline Background & \cellcolor[gray]{0.25} \color{white} 74.92&\cellcolor[gray]{0.99}0.61&\cellcolor[gray]{0.93}6.91&\cellcolor[gray]{0.98}1.61&\cellcolor[gray]{0.99}0.86&\cellcolor[gray]{0.92}7.97&\cellcolor[gray]{0.99}1.26&\cellcolor[gray]{0.94}5.85\\ \hline Left Rise & \cellcolor[gray]{0.94}5.81&\cellcolor[gray]{0.18} \color{white} 
82.01&\cellcolor[gray]{0.93}7.38&\cellcolor[gray]{1.00}0.33&\cellcolor[gray]{1.00}0.30&\cellcolor[gray]{0.99}0.99&\cellcolor[gray]{1.00}0.50&\cellcolor[gray]{0.97}2.69\\ \hline Left Point & \cellcolor[gray]{0.90}10.10&\cellcolor[gray]{0.97}3.22&\cellcolor[gray]{0.28} \color{white} \textbf{\underline{71.85}}&\cellcolor[gray]{0.99}1.40&\cellcolor[gray]{1.00}0.15&\cellcolor[gray]{0.94}5.81&\cellcolor[gray]{1.00}0.13&\cellcolor[gray]{0.93}7.34\\ \hline Left Fall & \cellcolor[gray]{0.92}8.29&\cellcolor[gray]{1.00}0.00&\cellcolor[gray]{0.96}4.46&\cellcolor[gray]{0.15} \color{white} 85.23&\cellcolor[gray]{0.99}0.78&\cellcolor[gray]{0.99}0.56&\cellcolor[gray]{1.00}0.28&\cellcolor[gray]{1.00}0.40\\ \hline Right Rise & \cellcolor[gray]{0.97}3.06&\cellcolor[gray]{1.00}0.00&\cellcolor[gray]{1.00}0.27&\cellcolor[gray]{0.99}0.97&\cellcolor[gray]{0.11} \color{white} 89.16&\cellcolor[gray]{0.94}6.20&\cellcolor[gray]{1.00}0.00&\cellcolor[gray]{1.00}0.33\\ \hline Right Point & \cellcolor[gray]{0.84}15.52&\cellcolor[gray]{0.98}1.64&\cellcolor[gray]{0.95}5.19&\cellcolor[gray]{0.99}1.16&\cellcolor[gray]{0.97}2.69&\cellcolor[gray]{0.34} \color{white} \textbf{\underline{65.70}}&\cellcolor[gray]{0.99}0.86&\cellcolor[gray]{0.93}7.23\\ \hline Right Fall & \cellcolor[gray]{0.92}7.89&\cellcolor[gray]{0.99}1.05&\cellcolor[gray]{1.00}0.33&\cellcolor[gray]{1.00}0.03&\cellcolor[gray]{1.00}0.08&\cellcolor[gray]{0.96}3.62&\cellcolor[gray]{0.13} \color{white} 86.91&\cellcolor[gray]{1.00}0.10\\ \hline Other & \cellcolor[gray]{0.82}18.24&\cellcolor[gray]{0.92}8.17&\cellcolor[gray]{0.78}22.17&\cellcolor[gray]{0.91}8.62&\cellcolor[gray]{0.95}5.23&\cellcolor[gray]{0.85}14.58&\cellcolor[gray]{0.95}4.63&\cellcolor[gray]{0.82}18.35\\ \hline \end{tabular} } } \end{table} \begin{table}[tb] \caption{Leave-one-subject-out evaluation results with HMM \cite{Nickel2003}.
Please note that Nickel and Stiefelhagen's approach \cite{Nickel2003} is not multiclass and does not distinguish between left/right pointing. Accordingly, we do not record mistakes of, \emph{e.g.}, a detected \enquote{left rise} on a sequence in which the person points with his/her right arm.\label{tab:results:nickel}} \centering % \scaletables{ \begin{tabular}{|l| c | c | c | c | c | c | c | c |} \multicolumn{1}{l}{ } & \mcrot{1}{l}{30}{Background} & \mcrot{1}{l}{30}{Left Rise} & \mcrot{1}{l}{30}{Left Point} & \mcrot{1}{l}{30}{Left Fall} & \mcrot{1}{l}{30}{Right Rise} & \mcrot{1}{l}{30}{Right Point} & \mcrot{1}{l}{30}{Right Fall} & \mcrot{1}{l}{30}{Other}\\ \hline Background & \cellcolor[gray]{0.76}23.52&\cellcolor[gray]{0.84}15.65&\cellcolor[gray]{1.00}0.05&\cellcolor[gray]{0.95}4.82&\cellcolor[gray]{0.79}20.82&\cellcolor[gray]{1.00}0.00&\cellcolor[gray]{0.95}5.43&\cellcolor[gray]{0.70}29.71\\ \hline Left Rise & \cellcolor[gray]{0.99}0.78&\cellcolor[gray]{0.01} \color{white} 98.56&\cellcolor[gray]{0.99}0.66&\cellcolor[gray]{1.00}0.00& -- & -- & -- &\cellcolor[gray]{1.00}0.00\\ \hline Left Point & \cellcolor[gray]{1.00}0.24&\cellcolor[gray]{0.91}8.86&\cellcolor[gray]{0.20} \color{white} \textbf{\underline{80.33}}&\cellcolor[gray]{0.89}10.57& -- & -- & -- &\cellcolor[gray]{1.00}0.00\\ \hline Left Fall & \cellcolor[gray]{0.97}2.61&\cellcolor[gray]{1.00}0.38&\cellcolor[gray]{1.00}0.07&\cellcolor[gray]{0.03} \color{white} 96.94& -- & -- & -- &\cellcolor[gray]{1.00}0.00\\ \hline Right Rise & \cellcolor[gray]{0.97}2.61& -- & -- & -- &\cellcolor[gray]{0.05} \color{white} 95.40&\cellcolor[gray]{0.99}0.60&\cellcolor[gray]{1.00}0.49&\cellcolor[gray]{0.99}0.89\\ \hline Right Point & \cellcolor[gray]{0.99}0.56& -- & -- & -- &\cellcolor[gray]{0.91}9.08&\cellcolor[gray]{0.21} \color{white} \textbf{\underline{78.81}}&\cellcolor[gray]{0.89}11.49&\cellcolor[gray]{1.00}0.06\\ \hline Right Fall & \cellcolor[gray]{0.93}7.35& -- & -- & --
&\cellcolor[gray]{1.00}0.16&\cellcolor[gray]{1.00}0.20&\cellcolor[gray]{0.08} \color{white} 92.13&\cellcolor[gray]{1.00}0.16\\ \hline Other & \cellcolor[gray]{0.36} \color{white} 63.81&\cellcolor[gray]{0.92}8.11&\cellcolor[gray]{0.97}2.74&\cellcolor[gray]{0.95}5.27&\cellcolor[gray]{0.93}7.41&\cellcolor[gray]{0.97}3.08&\cellcolor[gray]{0.96}3.85&\cellcolor[gray]{0.94}5.73\\ \hline \end{tabular} } \end{table} The frame-wise classification results are depicted in confusion matrices in Tab.~\ref{tab:results:frames}. On average, the LDCRF correctly classifies roughly $83\%$ of the frames labeled as \enquote{left/right point} that mark the holding phase with pointing on target of each gesture, see Tab.~\ref{tab:leave-out-subject-trained-with-18-subjects}. It is important to note that the most common misclassifications are: First, \enquote{rise} and \enquote{fall} are misclassified as either \enquote{background} or \enquote{point}. Second, \enquote{point} is misclassified as either \enquote{rise} or \enquote{fall}. In our opinion, such mistakes are not critical, because during the transition phases from one state into the other (\emph{e.g.}, from \enquote{rise} to \enquote{point}) there is substantial label ambiguity over several frames between these classes, and these misclassifications almost exclusively occur during these transition phases. Furthermore, we can see that the LDCRF provides a substantially better performance than the CRF, compare Tab.~\ref{tab:leave-out-subject-trained-with-18-subjects} and Tab.~\ref{tab:one-hidden-state-simulating-crf}. Most importantly, we can see that the CRF often misclassifies frames from \enquote{other} gestures as being parts of pointing gestures, which in practice could lead to a significant amount of false positive pointing gesture detections.
This is an important aspect for our intended use in \acl{HRI}, because it is less disturbing for an interaction partner to repeat a pointing gesture than to have the robot react to a falsely detected pointing gesture. The fact that the LDCRF rarely makes such mistakes is most likely due to its ability to learn and model the intrinsic structure of pointing gesture phases. \subsubsection{Sequence detection and segmentation}\label{sec:evaluation:results:sequence} \begin{table}[tb] \caption{Sequence detection results (in \%).\label{tab:prediction_state_classes}} \centering {\small \begin{tabular}{|l|l|r|r|} \hline Prediction Class & Type & LDCRF & HMM \\ \hline \hline Exact (End) & TP & 20.32 & 0.21 \\ Detection longer & TP & 23.89 & 0.31 \\ Detection shorter & TP & 29.26 & 67.26 \\ No detection (control) & TN & 9.47 & 10.53 \\ \hline Phantom detection & FP & 0.63 & 0.83 \\ Overlapping detection & FN & 16.42 & 0.00 \\ Missed detection & FN & 0.00 & 20.75 \\ \hline \hline Total True & T* & 82.95 & 78.31\\ Total False & F* & 17.05 & 21.58\\ \hline \end{tabular} } \end{table} \begin{figure}[tb] \centering \disablegraphics{ \includegraphics[width=.5\linewidth]{fig4} } \caption{Sequence evaluation state machine.\label{fig:statemachine}} \end{figure} As we have seen in the frame-wise classification, most of the pointing misclassifications are non-critical confusions between \enquote{rise}, \enquote{point}, \enquote{fall}, and \enquote{background}. Accordingly, we can use a window to suppress such misclassifications and obtain a continuous pointing detection. For this purpose, we use a simple median window to filter the frame-wise detections and eliminate small discontinuities. To evaluate the resulting continuous blocks of pointing gesture detections, we use a special state machine that is able to distinguish between different types of detection behaviors, see Fig.~\ref{fig:statemachine} and Tab.~\ref{tab:prediction_state_classes}.
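The window-based suppression and run extraction can be sketched as follows (our own illustration under assumptions, not the authors' implementation; the integer label ids, the window width, and the id-to-label mapping are hypothetical):

```python
from statistics import median_low

# Smooth frame-wise label ids with a sliding median window, then extract
# contiguous runs of the hypothetical "point" label ids as detected segments.
def median_filter(labels, width=5):
    """Median-filter a sequence of integer label ids (width should be odd)."""
    half = width // 2
    out = []
    for i in range(len(labels)):
        window = labels[max(i - half, 0):i + half + 1]
        out.append(median_low(window))
    return out

def pointing_segments(labels, point_ids=(2, 5)):
    """Return (start, end) index pairs of contiguous pointing-label runs."""
    segments, start = [], None
    for i, lab in enumerate(list(labels) + [None]):  # sentinel flushes last run
        if lab in point_ids and start is None:
            start = i
        elif lab not in point_ids and start is not None:
            segments.append((start, i - 1))
            start = None
    return segments

raw = [0, 0, 2, 0, 2, 2, 2, 0, 2, 2, 0, 0]   # noisy frame-wise predictions
smooth = median_filter(raw, width=3)
print(pointing_segments(smooth))             # → [(3, 9)]
```

The isolated flips in `raw` are absorbed by the median window, so the two noisy bursts merge into one continuous detection, which is exactly the behavior the state-machine evaluation classifies.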
As can be seen in Tab.~\ref{tab:prediction_state_classes}, the resulting system exhibits a false positive rate of $0.63\%$. Furthermore, the system detects all pointing gestures, but unfortunately -- due to the simple filtering mechanism -- it classifies two immediately subsequent pointing gestures into a single pointing gesture detection in $16.42\%$ of the cases. However, we expect that we can easily improve on these numbers given a more elaborate frame grouping mechanism than median filtering of the predicted labels. In $20.32\%$ of the cases, the end frame of the predicted and annotated pointing sequence are exact matches. The predicted pointing segment is slightly longer than the annotation (i.e., an overlap of \enquote{rise} and/or \enquote{fall} with \enquote{background}) in $23.89\%$ of the detections and slightly shorter in $29.26\%$. This can be explained with the label ambiguity that we already addressed in Sec.~\ref{sec:evaluation:results:frame}. \subsubsection{Baseline}\label{sec:evaluation:results:baseline} To serve as a baseline, we implemented Nickel and Stiefelhagen's \cite{Nickel2003} \ac{HMM}-based pointing gesture detection system, which -- despite its age -- is still a popular method. Interestingly, the \ac{HMM} performs poorly with our feature set (see Sec.~\ref{sec:features}) and, analogously, the \ac{LDCRF} performs poorly with Nickel and Stiefelhagen's feature set. Consequently, we report the \ac{HMM} results based on Nickel and Stiefelhagen's \cite{Nickel2003} original set of features. As we can see in Tab.~\ref{tab:results:nickel}, the \ac{HMM} correctly detects roughly $95\%$ of frames that belong to rise or fall. However, this comes at the cost of a substantial amount of false rise and fall detections, especially for \enquote{background} frames but also for \enquote{other} and \enquote{point}.
This leads to the fact that the ability to detect \enquote{point} frames is substantially lower than for the \ac{LDCRF}, see Tab.~\ref{tab:leave-out-subject-trained-with-18-subjects}. If we consider sequence detection and segmentation based on the \ac{HMM}'s frame-wise detections, see Tab.~\ref{tab:prediction_state_classes}, we notice that the \ac{HMM} does miss entire pointing sequences (\emph{i.e.}, $20.75\%$ missed detections). Furthermore, we can see that the false positive rate of the \ac{LDCRF} is better than the \ac{HMM}'s, with $0.63\%$ and $0.83\%$, respectively. \section{Conclusion and Future Work}\label{sec:conclusion} We presented how we use \acp{LDCRF} based on depth-based skeleton joint data to perform person-independent pointing gesture detection. Here, we have shown that \acp{LDCRF} outperform traditional \acp{CRF} and also \acp{HMM} for pointing gesture detection and labeling of individual frames. Based on the labeled frames of a video sequence, we can use filtering over time to suppress false detections and determine the onset and end of actual pointing gestures. Thus, we can segment pointing gestures of arbitrary length in video sequences. This way, we were able to reliably and efficiently detect pointing gestures with a very low false positive rate. To evaluate our approach, we recorded a novel dataset. We leave two important aspects as future work: First, we intend to improve the pointing sequence extraction based on the frame-wise labeled video frames, specifically to avoid misclassification of two successive pointing gestures as one pointing gesture. Second, we want to determine the optimal point in time of a pointing gesture to determine the target object, because in preliminary experiments on our dataset we have shown that it has a drastic influence on the ability to correctly determine the pointed-at object (i.e., an improvement of the correct classification rate by up to roughly $10$\% on our dataset).
\bibliographystyle{model1-num-names}
\section{Introduction} \label{sec:Intro} Given a group~$G$ with finite generating set~$S$, the cogrowth problem considers elements in the free monoid, $S^*$, on $S$ whose images in $G$ are equal to the identity; this is a sublanguage of $S^*$. We denote the set of these elements by $\mathcal{L}(G;S)$, or simply $\mathcal{L}$ when the context is clear. For many classes of groups the word problem is decidable (i.e., there exists a decision procedure for determining if a word on a set of generators is equal to the identity). This includes free groups, one-relator groups, polycyclic groups and fundamental groups of closed orientable two-manifolds of genus greater than one. On the other hand, there exist groups with unsolvable word problem, with the first such example being given by Novikov~\cite{novikov}. Here we consider an enumerative version of this problem, in which we are interested in the number of words of a given length in $\mathcal{L}$. This is called the \emph{cogrowth function} and we denote the number of words of length $n$ by ${\rm CL}(n;G,S)$. The generating function of this counting sequence, $F_{G;S}(t)$, is called the corresponding cogrowth series and is defined by \begin{equation} F(t)= F_{G;S}(t) := \sum_{n\ge0} {\rm CL}(n;G,S)t^n.\end{equation} This enumeration problem gives rise to three different notions of complexity. First, there is the complexity of the language $\mathcal{L}(G;S)$, which can be understood using the Chomsky-Sch\"utzenberger hierarchy of grammars. Next there is the complexity of the associated generating series, where we regard the class of rational power series as being simplest, with algebraic power series, $D$-finite series, and differentially algebraic series being viewed as increasingly complex classes of power series. Finally, there is the complexity of the group itself, which is typically understood in terms of desirable group theoretic properties.
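To make the definitions concrete, here is a brute-force computation (our own illustration) of ${\rm CL}(n;G,S)$ in the simplest infinite example, $G=\mathbb{Z}$ with $S=\{x,x^{-1}\}$: a word equals the identity exactly when its letters, read as $\pm 1$, sum to zero, so ${\rm CL}(2n)=\binom{2n}{n}$, ${\rm CL}(2n+1)=0$, and $F(t)=(1-4t^2)^{-1/2}$ is algebraic (consistent with $\mathbb{Z}$ being free of rank one, hence virtually free).

```python
from itertools import product
from math import comb

# Brute-force cogrowth for G = Z with S = {x, x^{-1}}: encode x as +1 and
# x^{-1} as -1; a word maps to the identity iff its letters sum to 0.
def cogrowth_Z(n):
    return sum(1 for w in product((1, -1), repeat=n) if sum(w) == 0)

counts = [cogrowth_Z(n) for n in range(7)]
assert counts == [1, 0, 2, 0, 6, 0, 20]          # CL(2n) = C(2n, n)
assert all(cogrowth_Z(2 * n) == comb(2 * n, n) for n in range(5))
```

Equivalently, these are the closed walks at the origin of the Cayley graph of $\mathbb{Z}$, i.e., $\pm 1$ walks on the integer line returning to $0$.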
When one works at the very simplest level, the three notions of complexity coincide: the language $\mathcal{L}(G;S)$ is a regular language if and only if $G$ is finite~\cite{anisimov}, and this occurs if and only if $F_{G,S}(t)$ is rational. Beyond this there are other well known connections: the generating function of an unambiguous context-free language satisfies a polynomial equation and hence is an algebraic power series~\cite[Chapter III]{kuich}, and a result of Muller and Schupp \cite{muller} gives that this occurs precisely when $G$ has a finite-index free subgroup (i.e., $G$ is virtually free). Although virtually free groups are in some respects complex (e.g., they are non-amenable except in the cyclic-by-finite case), the word problem is relatively straightforward for this class of groups, and thus in the sense of complexity of the word problem the class of virtually free groups is ``simple''. While there is a strong overlap between the various notions of complexity (group theoretic, language theoretic, and complexity of power series), there are nevertheless some families of groups which are typically regarded as being structurally well behaved whose corresponding cogrowth series are complex according to our notion of complexity. For example, it is shown in~\cite{BeMi20} that a finitely generated amenable group that is not virtually nilpotent can never have a cogrowth series that satisfies a non-trivial homogeneous linear differential equation with rational function coefficients. In this work we focus on groups that are virtually free; specifically, these are groups that have a finite-index free subgroup. As remarked earlier, the cogrowth series are algebraic in this case, and in many cases it is useful to understand the polynomial equation that these generating functions satisfy.
\begin{figure} \begin{subfigure}{.33\textwidth} \centering \includegraphics[width=0.5\linewidth]{imgs/Z2Z3ultrathick.jpg} \caption{$m=2,n=3$} \end{subfigure}% \begin{subfigure}{.33\textwidth} \centering \includegraphics[width=0.7\linewidth]{imgs/Z3Z3ultrathick.jpg} \caption{$m=n=3$} \end{subfigure}% \begin{subfigure}{.33\textwidth} \centering \includegraphics[width=0.63\linewidth]{imgs/Z3Z4ultrathick.jpg} \caption{$m=3,n=4$} \end{subfigure} \vspace{10pt} \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=0.5\linewidth]{imgs/Z3Z5ultrathick.jpg} \caption{$m=3,n=5$} \end{subfigure}% \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=0.495\linewidth]{imgs/Z4Z5ultrathick.jpg} \caption{$m=4,n=5$} \end{subfigure} \caption{\small Cayley graphs $X(G;S)$ of some virtually free groups. Each group is a free product of two cyclic groups $G=\mathbb{Z}/m\mathbb{Z}\star\mathbb{Z}/n\mathbb{Z}=\gen{x,y|x^m=1,y^n=1}$ with generating set $S=\{x,\inv{x},y,\inv{y}\}$.} \label{fig_CG_2cylic} \label{fig:graphs} \end{figure} Using a combinatorial argument, we are able to determine a grammar to generate the language $\mathcal{L}(G;S)$ when $G$ is the free product of finitely many groups, each of which is either finite or infinite cyclic. From the grammar it is then straightforward to deduce a family of algebraic equations which yield the cogrowth series. We use this system to find some explicit expressions for these series. We also outline a second strategy using free probability to determine bounds~\cite{liu} on the degree and height of a bivariate polynomial $\Lambda(t,z)\in \mathbb{Z}[t,z]$ such that $\Lambda(t, F(t))=0$. Such bounds can be combined with strategic guessing techniques in order to deterministically compute $\Lambda$. The Cayley graphs of free products of finite groups provide a graph-theoretic interpretation of this language.
Recall that the Cayley graph of a group $G$ with given generating set $S$ is the directed graph, denoted $\mathcal{X}(G;S)$, with vertex set given by the elements of $G$ and edge set $\{(g, gs): s\in S\}$. If the generating set is closed under taking inverses, then we identify the edges $(g,gs)$ and $(gs,g)$ and the graph is considered to be undirected. The words in $\mathcal{L}(G;S)$ are in bijection with walks on $\mathcal{X}(G;S)$ that start and end at the group identity element. Figure~\ref{fig:graphs} illustrates subgraphs of Cayley graphs of some virtually free groups, specifically neighbourhoods of the identity. To imagine the full graph, recall that Cayley graphs are vertex transitive, and hence these graphs are all infinite, and fractal in nature. The precise statements of the results that we can obtain are as follows. We give explicit bounds on the degree and height of the minimal polynomial of the cogrowth series for a group that is the free product of a finite number of finite groups. This is proved in~\S\ref{sec:bounds}. \begin{theorem} \label{thm:mainbound} Let $G_1,\ldots ,G_r$ be finite groups with generating sets $S_1,\hdots, S_r$ respectively, let $G=G_1^{\star m_1}\star\cdots\star G_r^{\star m_r}$, and let $S:=\bigcup_{i=1}^r \bigcup_{j=1}^{m_i} S_i^{(j)}\subseteq G$, where for each $i$, $S_i^{(1)},\ldots ,S_i^{(m_i)}$ are copies of $S_i$ in the corresponding copies of $G_i$ used in the formation of $G$. Then the cogrowth series $F(t):=F_{G;S}(t)$ of $G$ with respect to $S$ is algebraic and satisfies $\Lambda(t,F(t))=0$, where $\Lambda(t,z)$ is a nonzero polynomial with rational coefficients with $${\rm deg}_t(\Lambda), {\rm deg}_z(\Lambda) \le \left(\prod_{i=1}^r \Delta_i\right) \left( 1 + \sum_{i=1}^r \frac{1}{\Delta_i} \right),$$ where $\Delta_i$ is the sum of the degrees of the irreducible representations of $G_i$ for $i=1,\ldots ,r$. In particular, the degrees do not depend on $m_1,\ldots ,m_r$ when we choose $S$ as above.
\end{theorem} In fact, we are able to compute the polynomials $\Lambda(t,z)$ in the statement of Theorem \ref{thm:mainbound} for several families of free products. The results in Theorem~\ref{thm:main2} are worked out in~\S\ref{sec:exam}. \begin{theorem} \label{thm:main2} We have the following results. \begin{enumerate}[itemsep=15pt,label=(\alph*)] \item Let $d,m \ge 2$. If $G=\langle x_1 ~|~x_1^d=1\rangle\star \cdots \star \langle x_m~|~x_m^d=1\rangle \cong \left(\mathbb{Z}/d\mathbb{Z}\right)^{\star m}$ and $S=\{x_1,\ldots ,x_m\}$, then $F(t)=F_{G,S}(t)$ is the unique solution to the equation $$m^d t^d F(t)^d = (F(t)-1)(F(t)+m-1)^{d-1}\qquad {\rm with}~F(0)=1,$$ and $F(t)$ has radius of convergence $(d-1)^{(d-1)/d}/(d(m-1)^{1/d})$.\label{example:a} \item Let $m, s\ge 0$ be integers with $m+2s\ge 2$. If $G=\langle x_1 ~|~x_1^2=1\rangle\star \cdots \star \langle x_m~|~x_m^2=1\rangle \star \langle y_1,\ldots, y_s \rangle \cong \left(\mathbb{Z}/2\mathbb{Z}\right)^{\star m} \star \mathbb{Z}^{\star s}$ and $S=\{x_1,\ldots ,x_m,y_1,y_1^{-1},\ldots ,y_s, y_s^{-1}\}$ then $F(t)=F_{G,S}(t)$ is the unique solution to the equation $$(m+2s)^2 t^2 F(t)^2 = (F(t)-1)(F(t)+m+2s-1)\qquad {\rm with}~F(0)=1,$$ and $F(t)$ has radius of convergence $1/(2\cdot (m+2s-1)^{1/2})$. \label{example:b} \item If $n\ge 2$ and $G=\langle x~|~x^2=1\rangle \star \langle y ~|~y^n=1\rangle \cong \mathbb{Z}/2\mathbb{Z}\star \mathbb{Z}/n\mathbb{Z}$ and $S=\{x,y\}$ then $$F_{G,S}(t)= (1-tD)/((1-tD)^2-t^2),$$ where $D$ is the unique power series solution to the equation $$t^{n-1}(1-tD)^{n-1} = (1-tD-t^2)^{n-1} D$$ whose expansion begins $t^{n-1}+{\rm higher~degree~terms}$. \label{example:d} \end{enumerate} \end{theorem} We compute the power series $D$ in Theorem \ref{thm:main2}\ref{example:d} for small values of $n$ in Example \ref{exam:3}. Additionally, as a consequence of Theorem~\ref{thm:main2}, we are able to prove a gap result.
\begin{theorem}\label{thm:main3} Let $G$ be a finitely generated group with finite symmetric generating set $S$ and let $\rho_{G,S}$ denote the radius of convergence of the cogrowth generating series of $G$ with respect to $S$. Then $\rho_{G,S}^{-1} \in \{1,2\}\cup [2\sqrt{2},\infty)$. \end{theorem} We suspect that all values in $[2\sqrt{2},\infty)$ can occur as the inverse of the radius of convergence of a cogrowth series, since the groups for which $2\sqrt{2}$ is realized as the inverse of a radius of convergence have uncountably many homomorphic images, although we have no evidence that all values in this interval can be realized; it is an interesting problem to determine the possible radii of convergence for a cogrowth generating series for a group with a symmetric generating set $S$. Finite free products of finite groups and cyclic groups are virtually free, and so there is a pushdown automaton that accepts the language of words on~$S$ equal to the identity. In theory, one can translate the automaton theoretic description to give a description in terms of grammars, and one can then use this to find a system of equations. Kuksov~\cite{kuksov} directly describes a recursive system, which he solves to find generating series for some of the cases of Theorem~\ref{thm:main2} under the condition that one does not allow ``doubling back'' on the Cayley graph; that is, one does not allow a symbol $x$ to appear immediately next to the symbol $x^{-1}$ in the words considered. Nevertheless, it appears our systems resolve more cases than were previously known. Kuksov~\cite{kuksov} also obtained the result of Theorem~\ref{thm:main2}\ref{example:a} in the case when $d=2,3$. These two counting sequences appear in the Online Encyclopedia of Integer Sequences~\cite{oeis} as sequences A183135 and A265434.
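As a numerical sanity check (our own illustration, not a proof), the equation of Theorem \ref{thm:main2}\ref{example:a} with $d=3$ and $m=2$, namely $8t^3F^3=(F-1)(F+1)^2$ for $G=\mathbb{Z}/3\mathbb{Z}\star\mathbb{Z}/3\mathbb{Z}$ with $S=\{x_1,x_2\}$, can be tested coefficientwise against brute-force word counts:

```python
from itertools import product

# Elements of Z/3 * Z/3 in normal form: alternating syllables (i, e) with
# generator index i in {1, 2} and exponent e in {1, 2}.
def reduce_word(word):
    stack = []
    for i in word:
        if stack and stack[-1][0] == i:
            e = (stack[-1][1] + 1) % 3
            stack.pop()
            if e:
                stack.append((i, e))
        else:
            stack.append((i, 1))
    return tuple(stack)

N = 10
F = [sum(1 for w in product((1, 2), repeat=n) if not reduce_word(w))
     for n in range(N + 1)]                     # F[n] = CL(n; G, S)
assert F[:7] == [1, 0, 0, 2, 0, 0, 8]

def mul(p, q):                                  # truncated series product
    r = [0] * (N + 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            if i + j <= N:
                r[i + j] += pi * qj
    return r

lhs = [0] * 3 + [8 * c for c in mul(mul(F, F), F)][:N - 2]   # 8 t^3 F^3
u = [F[0] - 1] + F[1:]                                       # u = F - 1
rhs = [a + 4 * b + 4 * c                                     # u^3 + 4u^2 + 4u
       for a, b, c in zip(mul(mul(u, u), u), mul(u, u), u)]
assert lhs == rhs
```

The equation $8t^3F^3=(F-1)(F+1)^2$ was rewritten as $u^3+4u^2+4u$ with $u=F-1$, and the check confirms agreement through degree $10$.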
Alkauskas~\cite{alkauskas} worked out Theorem~\ref{thm:main2}\ref{example:d} in the case when $n=3$, refining it to actually get the cubic equation for the cogrowth series (this is of special significance because this corresponds to the group ${\rm PSL}_2(\mathbb{Z})$). Theorem~\ref{thm:main3} was noticed as a curiosity coming from computations done while proving Theorem~\ref{thm:main2} and working out other examples. The outline of this paper is as follows: In~\S\ref{sec:grammar}, we give equations for computing the cogrowth of free products of finite groups and cyclic groups. We also prove that the equations have a unique set of solutions in power series with a given initial condition and that the solutions are algebraic, although, as noted earlier, the algebraicity follows from the Chomsky-Sch\"utzenberger Theorem~\cite{chomsky} and the work of Muller and Schupp~\cite{muller}. In \S\ref{sec:exam} we work out several examples, which are listed in the statement of Theorem~\ref{thm:main2}. The main result of \S\ref{sec:bounds} is a general bound on the degree of the minimal polynomial of the cogrowth series for free products of finite groups, which gives Theorem \ref{thm:mainbound} as a consequence. These results use ideas from free probability, which were suggested to two of the authors by one of the referees for the earlier paper \cite{BeMi20}. In \S\ref{sec:gap} we prove the gap result for radii of convergence given in Theorem~\ref{thm:main3}, using the results from the preceding sections. \section{A grammar construction and system of equations} \label{sec:grammar} Computing the cogrowth of free products of finite and cyclic groups has been done in a number of cases \cite{alkauskas}, \cite{kuksov}, \cite{kuksov2}. We note that Kuksov's work is the most general, but as we have already remarked, he computes cogrowth using an altered definition.
In particular, he only counts \emph{reduced words} in the generating set $S$ that are equal to $1$; that is, if $x, x^{-1}\in S$ then he does not allow $x$ to immediately follow $x^{-1}$ or $x^{-1}$ to immediately follow $x$ in the words he considers. We prove an analogue of his result that allows ``doubling back'' on the Cayley graph. We give an explicit algebraic system satisfied by the generating function. Alkauskas~\cite{alkauskas} does a computation for ${\rm PSL}_2(\mathbb{Z})$, which is a free product of a cyclic group of order $2$ and a cyclic group of order $3$, using a non-symmetric generating set of size $2$. We fix the following notation for use throughout this section. We let $m$ be a positive integer and we let $G_1,\ldots , G_m$ be groups; we let $S_i\subseteq G_i$ be a generating set for $G_i$ for $i=1,\ldots ,m$; and we let $S=\cup_i S_i\subseteq G_1\star \cdots \star G_m =: G$ be a generating set for the free product of $G_1,\ldots ,G_m$, where we identify $G_i$ with its image in the free product under the canonical inclusion when forming $S$. For each $g\in G$ and $X\subseteq G$, we let $\mathcal{L}_{g, X}(S)$ (or simply $\mathcal{L}_{g,X}$ if $S$ is understood) be the language of words $s_1\cdots s_n$ of length $n$ over the alphabet $S$ such that $s_1\cdots s_n = g$ and $s_1\cdots s_i\not\in X$ for $1\le i<n$. In this case, we say that all proper prefixes of $s_1\cdots s_n$ \emph{avoid} $X$. We let $F_{g,X}(t)$ be the ordinary generating function for the language $\mathcal{L}_{g,X}(S)$. In the following lemma, we take $+$ to be combinatorial sum, i.e.\ disjoint union, and $\cdot$ to be concatenation, and for a statement $\mathrm{P}$ we take $\chi(\mathrm{P})$ to be $1$ if $\mathrm{P}$ is true and $0$ if $\mathrm{P}$ is false; in the case that $Z$ is a set, we take $\chi(\mathrm{P})Z$ to be empty if $\mathrm{P}$ is false and the set $Z$ if $\mathrm{P}$ is true.
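As a concrete check of the prefix-avoidance definition, one can compute the coefficients of $F_{1,\emptyset}$ and $F_{1,\{1\}}$ for the infinite dihedral group $\mathbb{Z}/2\mathbb{Z}\star \mathbb{Z}/2\mathbb{Z}$ and verify the classical first-return factorization $F_{1,\emptyset}=1+F_{1,\emptyset}\left(F_{1,\{1\}}-1\right)$, an instance of the relations established in the lemma that follows. The Python sketch below is illustrative only and not part of the formal development.

```python
from collections import defaultdict

def step(states):
    """Multiply each tracked element by each generator of Z/2Z * Z/2Z;
    elements are reduced words (tuples over {0, 1} with no repeated letter)."""
    nxt = defaultdict(int)
    for w, c in states.items():
        for g in (0, 1):
            v = w[:-1] if (w and w[-1] == g) else w + (g,)
            nxt[v] += c
    return dict(nxt)

N = 8
# F[n]: words of length n equal to 1 (no restriction on prefixes).
# W[n]: words of length n equal to 1 with no proper prefix equal to 1.
F, W = [1], [1]
all_states, avoid_states = {(): 1}, {(): 1}
for n in range(1, N + 1):
    all_states = step(all_states)
    F.append(all_states.get((), 0))
    avoid_states = step(avoid_states)
    W.append(avoid_states.get((), 0))
    avoid_states.pop((), None)  # forbid the identity at proper prefixes

# First-return factorization F = 1 + F*(W - 1), checked coefficientwise.
for n in range(1, N + 1):
    assert F[n] == sum(F[k] * W[n - k] for k in range(n))
```

Here $F$ begins $1,0,2,0,6,0,20,\ldots$ (central binomial coefficients) and $W$ begins $1,0,2,0,2,0,4,\ldots$.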
\begin{lemma}\label{lem:eq} Adopt the notation above. Then for $i\in\{1,\ldots , m\}$, each $g\in G_i$, and $X\subseteq G_i$, we have the following combinatorial relations: \begin{align} \label{lem:eq:case_1inX_gnot1} \mathcal{L} _{g,X}&=\left(\chi\left(g\in S_i\cap X\right)\{g\} \right) + \bigcup_{s\in S_i\setminus X} \left(\{s\}\cdot \mathcal{L} _{s^{-1}g,s^{-1}X}\right)& 1\in X,\ g\neq 1;\\ \label{lem:eq:case_1notinX_gnot1} \mathcal{L} _{g,X}&=\mathcal{L} _{1,X}\cdot \mathcal{L} _{g,\unone{X}}&1\notin X,\ g\neq 1;\\ \label{lem:eq:case_1notinX_g1} \mathcal{L} _{1,X}&=\epsilon + \left(\mathcal{L} _{1,X}\cdot \left(\mathcal{L} _{1,\unone{X}}\setminus\epsilon\right)\right),&1\notin X;\\ \label{lem:eq:case_1inX_g1} \mathcal{L} _{1,X}&=\epsilon + \bigcup_{s\in S\setminus S_i}\left(\{s\}\cdot \mathcal{L} _{s^{-1},\{s^{-1}\}}\right)+\bigcup_{s\in S_i\setminus X}\left(\{s\}\cdot \mathcal{L} _{s^{-1},s^{-1}X}\right)&1\in X. \end{align} \end{lemma} \begin{proof} There are two main ideas in this proof. If $X$ does not contain $1$, then we can factor a walk based on its last passage through $1$. If it does contain $1$, then the fact that the $G_i$ are disjoint gives a restriction that allows us to decompose walks uniquely according to their first step. Suppose first that $g\in G_i\setminus \{1\}$ for some $i\in \{1,\ldots ,m\}$ and that $1\in X$. If $s_1\cdots s_n=g$ and $s_1\in G_j\setminus \{1\}$ with $j\neq i$, then the universal property for free products gives that some prefix of $s_1\cdots s_n$ must be equal to $1$. Hence every word of length $n$ starting with $s_1\not\in G_i$ that is equal to $g$ must pass through $X$. Thus if $s_1\cdots s_n=g$ and every proper prefix avoids $X$ then $s_1$ must be in $G_i$. If $n=1$, it may be that $g\in X$; however, if $n>1$ then $s_1\not\in X$, since otherwise the prefix condition would be violated. Then $s_2\cdots s_n= s_1^{-1}g$ and every proper prefix of $s_2\cdots s_n$ avoids $s_1^{-1}X$.
Moreover, if $s_2\cdots s_n= s_1^{-1}g$ and every proper prefix of $s_2\cdots s_n$ avoids $s_1^{-1}X$, then appending $s_1$ at the beginning gives a word of length $n$ that is equal to $g$ and such that every proper prefix avoids $X$. Hence we see that \[\mathcal{L}_{g,X}= \chi(g\in S_i\cap X)\{g\} + \bigcup_{s\in S_i\setminus X} \{s\}\cdot \mathcal{L}_{s^{-1}g,s^{-1}X}.\] Next, if $g\in G_i\setminus \{1\}$ for some $i\in \{1,\ldots ,m\}$ and $1\not\in X$ then we can factor the words in $\mathcal{L}_{g,X}$ uniquely by taking the largest prefix whose product is equal to $1$. In particular, if $s_1\cdots s_n\in \mathcal{L}_{g,X}$, we let $i<n$ denote the largest index such that $s_1\cdots s_i=1$. (We note that $i=0$ is possible.) In this case we have a decomposition of $s_1\cdots s_n$ into a product $ab$ with $a=s_1\cdots s_i$ being equal to $1$ and such that every prefix of $a$ avoids $X$ and the word $b=s_{i+1}\cdots s_n$, which is equal to $g$ and such that every proper prefix avoids $X$ and also $1$. Thus we get \[\mathcal{L}_{g,X} = \mathcal{L}_{1,X}\cdot \mathcal{L}_{g,X\cup \{1\}}.\] The cases when $g=1$ are similar, although we must account for the empty word, which, by convention, evaluates to $1$. More precisely, if $X\subseteq G_i$ and $1\in X$ then for $s\in S$, if we count words $s_1\cdots s_n$ that are equal to $1$ and such that $s_1=s\not\in X$ and such that every proper prefix avoids $X$, then for $n\ge 1$ there is a bijection between the collection of such words of length $n$ and words $s_2\cdots s_n$ of length $n-1$ that are equal to $s_1^{-1}$ and such that every proper prefix avoids $s_1^{-1}X$. Then \[\mathcal{L}_{1,X} = \epsilon+\bigcup_{j=1}^m \bigcup_{s\in S_j\setminus X} \{s\} \cdot \mathcal{L}_{s^{-1}, s^{-1}X}.\] \end{proof} There is a classical translation of these combinatorial equivalences to give a system of functional equations for the corresponding ordinary generating functions. \begin{corollary}\label{grammarCorrectness_series} Adopt the notation above.
Then we have the following relations: \begin{align} \label{case_1inX_gnot1_series} F_{g,X}(t)&=\chi(g\in S_i\cap X)t + \sum_{s\in S_i\setminus X}t F_{s^{-1}g,s^{-1}X}(t) &\text{ if }1\in X,\ g\neq 1;\\ \label{case_1notinX_gnot1_series} F_{g,X}(t)&=F_{1,X}(t)F_{g,\unone{X}}(t) &\text{ if }1\notin X,\ g\neq 1;\\ \label{case_1notinX_g1_series} F_{1,X}(t)&=1+F_{1,X}(t)(F_{1,\unone{X}}(t)-1)&\text{ if }1\notin X;\\ \label{case_1inX_g1_series} F_{1,X}(t)&=1+\sum_{s\in S\setminus S_i}tF_{s^{-1},\{s^{-1}\}}(t)+\sum_{s\in S_i\setminus X} tF_{s^{-1},s^{-1}X}(t)&\text{ if }1\in X. \end{align} \end{corollary} \begin{proof} This is done via a well-known translation, as in~\cite{flajolet}. \end{proof} \begin{remark} Although the system in Lemma~\ref{lem:eq} is not \emph{a priori} finite when the groups are not finite, one can easily adapt this construction to handle the case where some of the $G_i$ are infinite cyclic groups with $S_i=\{x,x^{-1}\}$, with $x$ a generator for $G_i$. The reason for this is that if a word $s_1\cdots s_n$ has some proper prefix equal to $x^i$ with $i>0$ then it has a proper prefix equal to $x^j$ for $0<j<i$; similarly, if it has a proper prefix equal to $x^i$ with $i<0$ then it has a proper prefix equal to $x^j$ for $j<0$ with $j>i$. Thus we only need to consider $X$ with at most three elements (potentially one positive exponent, one negative exponent, and the identity), since if $x^i$ and $x^j$ are in $X$ with $i>j>0$ then $F_{g,X}=F_{g,X\setminus \{x^i\}}$, and an analogous result holds for negative exponents. Also, if $i>j>0$ and $x^j\in X$, then $F_{x^i,X}=0$, and similarly in the negative case. Thus, using these facts and examining the dependency tree that arises from the equations of Lemma~\ref{lem:eq}, we see that in the case that $G_i$ is an infinite cyclic group with generator $x$, we only need to consider $F_{g,X}$ for $g\in G_i$ and $X\subseteq G$ with $g\in \{x^{-1},1,x\}$ and $X\subseteq \{x^{-1},1,x\}$.
See Example~\ref{exam:2} for a case where this is implemented. \label{rem:infinite} \end{remark} \subsection{Examples} \label{sec:exam} We now give some examples in which we use the systems above to compute cogrowth series for certain free products. For the following examples, we generated the system given by the equations and applied simplifications. The solvable examples generally possess an exploitable symmetry, so although in theory one might have to manipulate $\sum_{i=1}^m |G_i|2^{|G_i|}$ equations, in practice there are far fewer. We incrementally eliminated the variables\footnote{Specifically, we have used the {\tt eliminate} command of Maple 2018.} to determine the algebraic equation satisfied by $F_{1,\emptyset}$. When listed, the OEIS numbers refer to the Online Encyclopedia of Integer Sequences~\cite{oeis}. Table~\ref{tab:examples} in the Appendix summarizes the sequences. We start with the following infinite family of groups. \begin{example} \label{exam:1} Let $d, m\ge 2$ and let $G=\left(\mathbb{Z}/d\mathbb{Z}\right)^{\star m}=\langle x_1~ |~x_1^d=1\rangle \star \cdots \star \langle x_m ~|~x_m^d=1\rangle$ and let $S=\{x_1,\ldots ,x_m\}$. Then the generating series $Z(t)$ for ${\rm CL}(n;G,S)$ is the unique power series satisfying the equation \[m^d t^d Z^d = (Z-1)(Z+m-1)^{d-1}\] with initial condition $Z(0)=1$. \end{example} Below we prove this using the grammar; in Section~\ref{sec:AlgSys} we give a second proof as a consequence of a more general result obtained by methods from free probability. \begin{proof} It is straightforward to solve the system from Lemma~\ref{lem:eq} with one additional observation. We let \begin{equation} A:=F_{1,\{x_1\}}(t), \quad B:=F_{1,\{1,x_1\}}(t), \quad C:=F_{x_1^{-1},\{x_1^{-1}\}}(t).
\end{equation} Notice that a word $s_1\cdots s_n\in S^n$ that is equal to $x_1^{-1}$ and has no proper prefix equal to $x_1^{-1}$ can be decomposed uniquely in the form $w_1 x_1 w_2 x_1 \cdots w_{d-1} x_1$, where each $w_i$ is a word over $S$ that is equal to $1$ with no proper prefix equal to $x_1$. This then gives us the equation \begin{equation} \label{eq:A} t^{d-1} A^{d-1} = C. \end{equation} Equation~\eqref{lem:eq:case_1notinX_g1} of Lemma~\ref{lem:eq} gives that $A=1+ A(B-1)$; that is, $A=1/(2-B)$. Finally, by symmetry we can rewrite Equation~\eqref{lem:eq:case_1inX_g1} of Lemma~\ref{lem:eq} as \begin{equation} \label{eq:B} B=1+(m-1)t C.\end{equation} Using the fact that $A=1/(2-B)$ then gives that $A=(1-(m-1)tC)^{-1}$, and so Equation~\eqref{eq:A} yields \begin{equation} \label{eq:CC} t^{d-1} = C(1-(m-1)tC)^{d-1}.\end{equation} Let $W=F_{1,\{1\}}(t)$ and $Z=F_{1,\emptyset}(t)$. Then $Z$ is the generating series for the cogrowth. Equation~\eqref{lem:eq:case_1notinX_g1} of Lemma~\ref{lem:eq} gives \begin{equation} \label{eq:Z}Z=1+Z(W-1)\end{equation} and Equation~\eqref{lem:eq:case_1inX_g1} of Lemma~\ref{lem:eq}, again using symmetry, gives \begin{equation}\label{eq:W} W=1+mt C. \end{equation} Using Equations~\eqref{eq:Z} and~\eqref{eq:W}, we see $Z=(1-mtC)^{-1}$, so substituting $C=(Z-1)/(mtZ)$ into Equation~\eqref{eq:CC} gives $$mt^{d}Z = (Z-1)(1-(m-1)(Z-1)/(mZ))^{d-1},$$ or equivalently $$m^d t^d Z^d = (Z-1)(Z+m-1)^{d-1},$$ as claimed. To see uniqueness of the solution once we impose the initial condition $Z(0)=1$, note that if there is a unique polynomial solution of degree $n-1$ to this equation mod $(t^n)$ for $n\ge 1$ then we get a unique polynomial solution of degree $n$ to this equation mod $(t^{n+1})$ by Hensel's lemma, and so by induction there is a unique power series solution with this initial condition.
\end{proof} We discovered the equation in Example \ref{exam:1} by solving the system in Maple for each $d\le 9$ and symbolic $m$, and we were able to use the Maple package {\tt gfun}~\cite{gfun} to guess the general form of the algebraic equation satisfied by the cogrowth. This then suggested a method for proving this fact. We are also able to determine the dominant singularity of the cogrowth generating functions appearing in Example \ref{exam:1}. \begin{lemma} Let $\beta>0$ and let \[P(z)=m^d \beta^d z^d - (z-1)(z+m-1)^{d-1}.\] Then $P(z)$ has a repeated root if and only if $\beta= (d-1)^{(d-1)/d}/(d(m-1)^{1/d})$. \end{lemma} \begin{proof} Suppose that $P(z)$ has a repeated root. Then the system $P(z)=P'(z)=0$ has at least one solution. Notice that $$P'(z)=0 \iff d m^d \beta^d z^{d-1} = (z+m-1)^{d-1} + (d-1)(z-1) (z+m-1)^{d-2}.$$ It is easy to see that if $P(z)=0$ then $z\neq 0,1,-m+1$. If $P(z)=0$ then we have $m^d \beta^d z^d=(z-1)(z+m-1)^{d-1}$, so dividing the equation $$d m^d \beta^d z^{d-1} = (z+m-1)^{d-1} + (d-1)(z-1) (z+m-1)^{d-2}$$ on the left by $m^d\beta^d z^d$ and on the right by $(z-1)(z+m-1)^{d-1}$, we see that if $P(z)$ and $P'(z)$ are both equal to zero, then we must have $$d/z = 1/(z-1) + (d-1)/(z+m-1).$$ Multiplying by $z(z-1)(z+m-1)$ then gives the equation $$d(z-1)(z+m-1) = z(z+m-1) + (d-1) z(z-1),$$ which has the unique solution $z=z_0:=(m-1)d/(dm-d-m)$.
Thus if $P(z)=P'(z)=0$ then we must have $z=z_0$, and so we substitute $z=z_0$ into $P(z)$ and use the fact that $P(z_0)$ must be zero to get $$\frac{m^d \beta^d d^d(m-1)^d}{(dm-d-m)^d} - \frac{m^d(m-1)^{d-1}(d-1)^{d-1}}{(dm-d-m)^d}=0,$$ which gives $$\beta^d =(d-1)^{d-1}/((m-1)d^d),$$ and hence the unique positive solution $$\beta= (d-1)^{(d-1)/d}/(d(m-1)^{1/d}).$$ Thus we have shown that $P(z)$ has a double root only when $\beta= (d-1)^{(d-1)/d}/(d(m-1)^{1/d})$, and this double root occurs at $$z=(m-1)d/(dm-d-m).$$ We also get that $P'(z_0)=P(z_0)=0$ for this value of $\beta$, and so the result follows. \end{proof} \begin{corollary}\label{cor:main3} Let $m,d\ge 2$. The radius of convergence of the cogrowth generating function for $$G=\langle x_1~|~x_1^d=1\rangle \star \cdots \star \langle x_m~|~x_m^d=1\rangle\simeq (\mathbb{Z}/d\mathbb{Z})^{\star m}$$ with respect to $S=\{x_1,\ldots ,x_m\}$ is $$\displaystyle{{(d-1)^{\frac{d-1}{d}}\over d(m-1)^{\frac{1}{d}}}}.$$ \end{corollary} \begin{proof} The singularities of an algebraic power series $F(t)$ satisfying a polynomial equation\\ $\Lambda(t,F(t))=0$ for some polynomial $\Lambda(t,z)\in \mathbb{C}[t,z]$ are in the set $T$, where $T$ is the set of zeros of the leading coefficient of $\Lambda(t,z)$ as a polynomial in $z$ and the zeros of the discriminant of $\Lambda(t,z)$ with respect to $z$ (see Flajolet and Sedgewick \cite[\S7.36]{flajolet}). In the case that $F(t)$ is the cogrowth generating function of $G$ with respect to $S$, we have that $F(t)$ is a root of $\Lambda(t,z)$, where \[\Lambda(t,z)=m^dt^{d}z^d -(z-1)(z+m-1)^{d-1},\] which has leading coefficient $m^d t^d - 1$. We showed that $\Lambda(t,z)$ can only have repeated roots for $t\ge 0$ at $t= (d-1)^{(d-1)/d}/(d(m-1)^{1/d})$. Thus the only positive singularities of $F(t)$ are in $T\cap (0,\infty)= \{(d-1)^{(d-1)/d}/(d(m-1)^{1/d}),1/m\}$.
Since $F(t)$ has nonnegative coefficients, it has a singularity at $t=\rho$, where $\rho>0$ is the radius of convergence. If $d,m\ge 2$ and $(d,m)\neq (2,2)$, then $G$ is nonamenable, and a strengthening of Kesten's criterion due to Gray and Kambites~\cite[Corollary 6.6]{Kam} gives that the radius of convergence is strictly greater than $1/|S|=1/m$. Thus the radius of convergence is $(d-1)^{(d-1)/d}/(d(m-1)^{1/d})$ for $(d,m)\neq (2,2)$. When $d=m=2$, $G$ is amenable and $(d-1)^{(d-1)/d}/(d(m-1)^{1/d})=1/m$, so the result follows. \end{proof} \begin{remark} Kuksov~\cite{kuksov} did this computation when $d=2$ and $d=3$, but did not do the general case. The case $d=2$ is classical, as it can be interpreted in terms of rooted closed walks of length $2n$ on the infinite rooted $m$-ary tree. The cases $d=2$ and $d=3$ appear in the OEIS as entries A126869 and A265434, respectively. \end{remark} \begin{example} \label{exam:2} Let $G = \langle x~|~x^2=1\rangle \star \langle y\rangle$ and let $S=\{x,y,y^{-1}\}$. Then the cogrowth series for $G$ with respect to $S$ is equal to $$Z=\frac{1}{2}\cdot \left(3\sqrt{1-8t^2}-1\right)/(1-9t^2).$$ \end{example} \begin{proof} Using the fact that $F_{y,Y}=F_{y^{-1},Y^{-1}}$, which follows from the obvious symmetry, and using the equations in Lemma~\ref{lem:eq} along with Remark \ref{rem:infinite} about how to apply them in the infinite cyclic case, we get the equations: \begin{enumerate} \item $F_{x,\{1,x\}}(t)=t$; \item $F_{x,\{x\}}(t)= t F_{1,\{x\}}(t)$; \item $F_{1,\{x\}}(t) = 1 + F_{1,\{x\}}(t) (F_{1,\{1,x\}}(t) -1)$; \item $F_{1,\{1,x\}}(t) = 1+ 2tF_{y,\{y\}}(t)$; \item $F_{1,\{y\}}(t) = 1+F_{1,\{y\}}(t)(F_{1,\{1,y\}}(t)-1)$; \item $F_{y,\{y\}}(t)=F_{1,\{y\}}(t) F_{y,\{1,y\}}(t)$; \item $F_{y,\{1,y\}}(t) = t$; \item $F_{1,\{1,y\}}(t) = 1 + tF_{x,\{x\}}(t) + t F_{y, \{y\}}(t)$; \item $F_{1,\{1\}}(t)=1+tF_{x,\{x\}}(t)+2tF_{y,\{y\}}(t)$; \item $F_{1,\emptyset}(t)=1 + F_{1,\emptyset}(t)(F_{1,\{1\}}(t)-1)$.
\end{enumerate} Solving this system with Maple, we find that $Z=F_{1,\emptyset}(t)$ satisfies the polynomial equation $$(3t-1)(3t+1)(t-1)(t+1)Z^3+(-10t^2+2)Z^2+(2t^2-1)Z-2=0,$$ which factors as $$((9t^2-1) Z^2 -Z+2)((t^2-1)Z-1)=0.$$ Now $Z$ is a power series whose initial terms are $1+3t^2+\cdots$, and so we see that it is a root of the first factor, which we can solve: \[Z=\frac{1}{2}\cdot \left(3\sqrt{1-8t^2}-1\right)/(1-9t^2).\] The dominant singularity comes from the branch cut, and the radius of convergence is $1/(2\sqrt{2})$. \end{proof} \begin{remark}\label{remark:cogrowth_equiv} We note that the cogrowth series for $\mathbb{Z}/2\mathbb{Z}\star \mathbb{Z}$ given above is the same as the series for $d=2$, $m=3$ in Example \ref{exam:1}; that is, for the free product of three copies of the cyclic group of order $2$. The reason for this can be seen as follows: if $u,v$ are elements of order two that generate an infinite dihedral group and $x,x^{-1}$ generate an infinite cyclic group, then letting $f_i(u)=x$, $f_i(v)=x^{-1}$ for $i$ odd and $f_i(u)=x^{-1}$, $f_i(v)=x$ for $i$ even, we have a map $g:\{u,v\}^*\to \{x,x^{-1}\}^*$ given by $g(a_1\cdots a_n)=f_1(a_1)f_2(a_2)\cdots f_n(a_n)$ for $a_1,\ldots ,a_n\in\{u,v\}$, and a straightforward induction shows that this map gives that these two groups have the same cogrowth. We note that this bijective argument holds more generally: the group $\mathbb{Z}\star H$ and the group $(\mathbb{Z}/2\mathbb{Z})^{\star 2} \star H$ have the same cogrowth for any finitely generated $H$. Suppose $\{x,x^{-1}\}$ are the generators for $\mathbb{Z}$, $u$ and $v$ are the order-two generators of the two copies of $\mathbb{Z}/2\mathbb{Z}$, and $T$ generates $H$, and consider a word on $\{u,v\} \cup T$ with $1= \mu_1 \tau_1 \mu_2 \tau_2 \dots \mu_r \tau_r$, where each $\tau_i$ is a word on $T$ and each $\mu_i$ is a word in $u$, $v$.
We can define $f_i$ as above and apply it to the associated word $\mu_1\mu_2\cdots \mu_r$ to get a word $\mu_1' \mu_2'\cdots \mu_r'$ in $x$ and $x^{-1}$. We then interlace this with the $\tau_i$ to create $\mu'_1 \tau_1 \mu'_2 \tau_2 \dots \mu'_r \tau_r$, the image of the word in $(\mathbb{Z}/2\mathbb{Z})^{\star 2} \star H$. \end{remark} \begin{example} \label{exam:3} Let $G=\mathbb{Z}/2\mathbb{Z}\star \mathbb{Z}/n\mathbb{Z}=\langle x ~|~x^2=1\rangle \star \langle y ~|~y^n=1\rangle$, let $S=\{x,y\}$, and let $F(t)$ denote the cogrowth generating series for $G$ with respect to $S$. Then $$F(t)= (1-tD)/((1-tD)^2-t^2),$$ where $D$ is the unique power series solution to the equation $$t^{n-1}(1-tD)^{n-1} = (1-tD-t^2)^{n-1} D$$ whose expansion begins $t^{n-1}+{\rm higher~degree~terms}$. \end{example} \begin{proof} We let \begin{equation} A_1:=F_{1,\{x\}}(t), \quad B_1:=F_{1,\{1,x\}}(t), \quad C:=F_{x^{-1},\{x^{-1}\}}(t) \end{equation} and \begin{equation} A_2:=F_{1,\{y\}}(t), \quad B_2:=F_{1,\{1,y\}}(t), \quad D:=F_{y^{-1},\{y^{-1}\}}(t). \end{equation} Notice that a word $s_1\cdots s_n\in S^n$ that is equal to $x^{-1}$ and has no proper prefix equal to $x^{-1}$ can be decomposed uniquely in the form $w_1 x$, where $w_1$ is a word over $S$ that is equal to $1$ with no proper prefix equal to $x$. This then gives us the equation \begin{equation} \label{eq:AA} t A_1 = C. \end{equation} Similarly, we have \begin{equation} \label{eq:AAA} t^{n-1} A_2^{n-1} = D. \end{equation} We now make use of Lemma \ref{lem:eq} \eqref{lem:eq:case_1inX_gnot1}--\eqref{lem:eq:case_1inX_g1}. Equation~\eqref{lem:eq:case_1notinX_g1} gives us that $A_1=1/(2-B_1)$ and $A_2=1/(2-B_2)$. Equation~\eqref{lem:eq:case_1inX_g1} gives \begin{equation} \label{eq:BBB} B_1=1+t D\qquad {\rm and}\qquad B_2=1+tC.\end{equation} Combining these equations then gives that $t =(1-tD) C$ and $t^{n-1} = (1-tC)^{n-1} D$.
Thus we see that $D$ is a solution to the equation $$t^{n-1}(1-tD)^{n-1} = (1-tD-t^2)^{n-1} D.$$ Finally, Equation~\eqref{lem:eq:case_1inX_g1} gives $F_{1,\{1\}}(t) = 1+ t (C+D)$ and Equation~\eqref{lem:eq:case_1notinX_g1} gives $F(t)=F_{1,\emptyset}(t)=1/(1-tC-tD)$, and so if we use the fact that $C=t/(1-tD)$, we see that $$F(t)=(1-tD)/((1-tD)^2-t^2) = \frac{1}{2}\left(1/(1-t-tD) + 1/(1+t-tD)\right).$$ Uniqueness of $D$ after imposing the initial conditions follows from a standard application of Hensel's lemma. For $n=3,4,5$ we get the following expressions for the minimal polynomial of $F(t)$ using Maple. (Note that the case $n=2$ was done as the case $m=d=2$ in Example \ref{exam:1}.) \begin{description} \item[$n=3$] \[((t-1)^3+t^3)((t+1)^3-t^3)Z^3+(t^5-t^4+t^3+2t^2-1)Z^2+(t^3-t^2+1)Z+1\] \item[$n=4$] \[(t^4-(t-1)^4)((t+1)^4-t^4)Z^4+2(4t^6-2t^4+3t^2-1)Z^3+t^4(t^2+3)Z^2+(t^4-2t^2+2)Z+1\] \item[$n=5$] \[((t-1)^5+t^5)((t+1)^5-t^5)\,Z^5 +A\,Z^4 +B\,Z^3+C\,Z^2 +D\,Z+1.\] \end{description} Here \[A=3(t^9-t^8+6t^7+4t^6+t^5-6t^4+4t^2-1)\qquad B=2(4t^7+t^6+3t^5-3t^4+3t^2-1)\] \[C=(t^7+4t^5+2t^4-4t^2+2) \qquad D=(t^5-3t^2+3).\] We computed the minimal polynomials for some higher $n$ too, but the expressions became increasingly unwieldy and we could not discern any obvious patterns governing the coefficients of the annihilating polynomials. One exception is the leading term, which is predicted to be \begin{equation} -\left((t+1)^n-t^n\right)\left((1-t)^n-t^n\right) Z^n. \end{equation} \end{proof} The case when $n=3$ was previously worked out by Alkauskas~\cite{alkauskas}, and our formula appears in his Theorem 1. It corresponds to the cogrowth of ${\rm PSL}_2(\mathbb{Z})$ as a semigroup generated by two elements, one of order $2$ and another of order $3$. Again, we apply standard techniques to this algebraic equation to deduce that the singularities of $F$ are contained in the set of zeros of the leading coefficient and the discriminant.
For the radius of convergence, we are only interested in real singularities in the range $[1/2,\infty)$. In the case of zeros of the leading coefficient in the interval $[1/2,\infty)$, we need $((t-1)^3+t^3)=0$; that is, $t=1/2$. In the case of zeros of the discriminant, we compute the discriminant and find that it is {\small \[ \left( t^{13}-8t^{12}-4t^{11}+164t^{10}-392t^{9}+ 404{t}^{8}-752{t}^{7}+260{t}^{6}-512{t}^{5}-128{t}^{4}-160t^{3}-64{t}^{2}+64 \right) {t}^{3},\]} which has a unique solution in $[1/2,\infty)$ that we numerically estimate to be~$.5072330945\ldots$. The radius of convergence is one of these two values: the cogrowth function is bounded above by $2^n$, so the radius of convergence is at least $1/2$, and $F(t)$ has a singularity at its radius of convergence by Pringsheim's theorem. We again invoke~\cite[Corollary 6.6]{Kam} to get that the radius of convergence is strictly greater than $1/2$, since ${\rm PSL}_2(\mathbb{Z})$ is non-amenable, and so the radius of convergence is $0.50723\cdots$. For similar reasons, for general $n$ we predict the radius of convergence will be a zero of the discriminant, and not of the leading coefficient. We can now prove Theorem \ref{thm:main2}. \begin{proof}[Proof of Theorem \ref{thm:main2}] Part (a) follows from Example \ref{exam:1} and Corollary \ref{cor:main3}. For part (b), observe that Remark \ref{remark:cogrowth_equiv} shows that $\left(\mathbb{Z}/2\mathbb{Z}\right)^{\star m} \star \left(\mathbb{Z}\right)^{\star s}$ has the same cogrowth as $\left(\mathbb{Z}/2\mathbb{Z}\right)^{\star(m+2s)}$, and so the result follows from (a). Part (c) follows from Example \ref{exam:3}. \end{proof} Motivated by Remark \ref{remark:cogrowth_equiv}, it would be interesting to know whether one can find all pairs of non-isomorphic groups with symmetric generating sets whose cogrowth series are the same.
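Both the closed form in Example~\ref{exam:2} and the coincidence of cogrowth in Remark~\ref{remark:cogrowth_equiv} can be confirmed by brute force. The following Python sketch (illustrative only; the syllable-normal-form encoding of free products of cyclic groups is ours) compares the cogrowth sequences of $\mathbb{Z}/2\mathbb{Z}\star \mathbb{Z}$ and $(\mathbb{Z}/2\mathbb{Z})^{\star 3}$.

```python
from collections import defaultdict

def cogrowth(orders, gens, N):
    """Count words over `gens` equal to 1 in a free product of cyclic
    groups.  orders[i] is the order of the i-th factor (0 = infinite);
    each generator is a pair (factor index, exponent step).  Elements
    are stored in syllable normal form as tuples of (factor, exponent)."""
    counts, states = [], {(): 1}
    for n in range(N + 1):
        counts.append(states.get((), 0))
        nxt = defaultdict(int)
        for w, c in states.items():
            for i, d in gens:
                if w and w[-1][0] == i:
                    e = w[-1][1] + d
                    if orders[i]:
                        e %= orders[i]
                    v = w[:-1] + ((i, e),) if e else w[:-1]
                else:
                    v = w + ((i, d),)
                nxt[v] += c
        states = dict(nxt)
    return counts

# Z/2Z * Z with S = {x, y, y^(-1)}  versus  (Z/2Z)*(Z/2Z)*(Z/2Z).
a = cogrowth([2, 0], [(0, 1), (1, 1), (1, -1)], 8)
b = cogrowth([2, 2, 2], [(0, 1), (1, 1), (2, 1)], 8)
assert a == b  # the two groups have the same cogrowth sequence
```

Both calls return $1,0,3,0,15,0,87,0,543$, in agreement with the expansion of the root of $(9t^2-1)Z^2-Z+2$ with constant term $1$.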
\section{Bounding the degree of the minimal polynomial} \label{sec:bounds} In this section, we seek upper bounds for the degree of the minimal polynomials of various cogrowth generating functions. With knowledge of such upper bounds, along with sufficiently many terms in the series, we can apply Pad\'e approximant techniques~\cite{aecf} and undetermined coefficient methods to obtain the precise minimal polynomial. \subsection{A bound for free products of finite groups} \label{sec_gen_finite_gps} Finite groups have rational cogrowth generating functions, for which the degrees of the numerator and denominator can be effectively bounded in terms of the degrees of their irreducible representations. Since the free product of groups is the coproduct in the category of groups, we will use $\coprod_i G_i$ to denote the free product of the groups $G_i$. We will also use $G^{\star m}$ to denote $\coprod_{i=1}^m G$. We first set up some notation. Let $G_1,\ldots, G_r$ be finite groups with generating sets $S_1,\ldots ,S_r$ respectively, and let $m_1,\ldots ,m_r$ be positive integers. Consider the free product $$G:=\coprod_{i=1}^r G_i^{\star m_i}.$$ For $i=1,\ldots,r$, let $S_i^{(1)},\ldots , S_i^{(m_i)}$ denote copies of $S_i$ in the respective copies of $G_i$ used in the formation of $G$. We take \begin{equation} S:=\bigcup_{i=1}^r \bigcup_{j=1}^{m_i} S_i^{(j)}\end{equation} to be our generating set for $G$. We now adopt some notation from free probability. For each $i$, let $\alpha_i = \sum_{s\in S_i} s \in \mathbb{C}[G_i]$. For the group $G$ defined above, we let $\phi: \mathbb{C}[G]\to \mathbb{C}$ be the \emph{expectation operator}: the map that extracts the coefficient of $1_G$ from an element of the group algebra.
Furthermore, for each $\beta\in \mathbb{C}[G]$, we consider its Cauchy Transform $$\mathrm{C}_{\beta}(t):=\sum_{n\ge 0}\phi(\beta^n)t^{-n-1}\in \mathbb{C}[[t^{-1}]].$$ Observe that each element of $\mathbb{C}[G_i]$ can be canonically identified with an element of $\mathbb{C}[G]$ using any one of the $m_i$ copies of $G_i$ in $G$. Moreover, the value of the operator $\phi$ applied to an element of the group algebra $\mathbb{C}[G_i]$ is independent of the choice of embedding into $G$. Hence, we may define $\mathrm{C}_{\rho}(t)$ for $\rho\in \mathbb{C}[G_i]$ without any confusion. We let $\mathrm{K}_{\beta}(t)$ denote the compositional inverse of $\mathrm{C}_{\beta}(t)$. Since we are working with power series in $t^{-1}$, this says \begin{equation}\label{eq:inv} \mathrm{C}_{\beta}\circ \mathrm{K}_{\beta}(t) = \mathrm{K}_{\beta}\circ \mathrm{C}_{\beta}(t) = t^{-1}. \end{equation} Finally, let $$\alpha = \sum_{i=1}^r\sum_{j=1}^{m_i} \sum_{s\in S_i^{(j)}} s\in \mathbb{C}[G].$$ Notice that by our choice of $\alpha$, \begin{equation}\label{eq:11} \mathrm{C}_{\alpha}(t)=\inv{t}F_{G;S}(\inv{t}). \end{equation} Recall that each $F_{G_i;S_i}(t)$ is a rational function with integer coefficients. We can deduce further results regarding the degrees of the numerator and the denominator. \begin{lemma}\label{lem:bound} Let $H$ be a finite group with $d$ (inequivalent) irreducible representations with degrees given by $n_1,\ldots ,n_d$ respectively, let $T$ be a generating set for $H$, and let $F(t):=F_{H;T}(t)=\sum_{n\ge 0} \phi(\alpha^n) t^n$, where $\alpha=\sum_{s\in T} s\in \mathbb{C}[H]$, and $\phi$ is the expectation operator. Then $F(t)$ is the power series expansion of a rational function $P(t)/Q(t)$ where $P,Q\in \mathbb{Z}[t]$ are polynomials with $Q(0)=1$ and the degrees of $P$ and $Q$ are bounded respectively by $n_1+\cdots + n_d-1$ and $n_1+\cdots + n_d$.
In particular, the degrees of $P$ and $Q$ are at most $|H|-1$ and $|H|$ respectively, and equality can only occur if $H$ is abelian. \end{proof} \end{lemma} \begin{proof} By the Artin-Wedderburn theorem~\cite{dummit} and Maschke's theorem, we have a $\overline{\mathbb{Q}}$-algebra isomorphism $$\Psi: \overline{\mathbb{Q}}[H] \to \mathbb{M}_{n_1}( \overline{\mathbb{Q}})\times \cdots \times \mathbb{M}_{n_d}( \overline{\mathbb{Q}}).$$ Then under this isomorphism $\Psi$, the element $\alpha$ is sent to a $d$-tuple of matrices $(Y_1,\ldots ,Y_d)$. Observe that $\Psi$ induces a $\overline{\mathbb{Q}}$-algebra isomorphism between the power series rings $ \overline{\mathbb{Q}}[H][[t]]$ and $$\left( \mathbb{M}_{n_1}( \overline{\mathbb{Q}})\times \cdots \times \mathbb{M}_{n_d}( \overline{\mathbb{Q}})\right)[[t]]$$ and under this isomorphism $\sum \alpha^n t^{n}$ is sent to the series $$\sum_{n\ge 0} (Y_1^n,\ldots ,Y_d^n) t^{n}.$$ This series satisfies a linear recurrence of length $n_1+\cdots +n_d$ by the Cayley-Hamilton theorem, and thus the series is the power series expansion in $t$ of a rational function of the form $P(t)/Q(t)$ with $P$ and $Q$ polynomials with coefficients in $\overline{\mathbb{Q}}$, $Q(0)\neq 0$, $\gcd(P,Q)=1$, ${\rm deg}(Q)\le \sum n_i$, and ${\rm deg}(P) \le \sum n_i -1$. By rescaling, we may assume that $Q(0)=1$. Since $F(t)$ has integer coefficients, $P/Q$ must be invariant under the action of ${\rm Gal}(\overline{\mathbb{Q}}/\mathbb{Q})$, and so, since $Q(0)=1$, we see that $P$ and $Q$ have rational coefficients. Now notice that $Q(t) F(t) = P(t)$. One can show that the roots of $Q(t^{-1})$ must be algebraic integers, and so $Q(t)$ is an integer polynomial; then $P$ is an integer polynomial as well, since $P=QF$ and both $Q$ and $F$ have integer coefficients. Finally, if $\deg P+1=|H|$ or $\deg Q=|H|$, then we must have $\sum n_i=|H|$. Since $\sum n_i^2=|H|$, this can only occur when each $n_i=1$, and so $H$ must be abelian.
\end{proof} Using free probability, we can build equations that are simpler than those that arise from the direct combinatorial interpretation. The following equation relates the inverse Cauchy Transforms of ${\alpha}$ and the ${\alpha_i}$ (see \cite[Theorem 12.7]{FPcombs}): \begin{equation}\label{eq:free} \mathrm{K}_{\alpha}(t) = \left(\sum_{i=1}^{r} m_i \mathrm{K}_{\alpha_i}(t)\right) - \left(m_1+\cdots +m_r-1\right) \inv{t}. \end{equation} For $i=1,\ldots ,r$, write the cogrowth series of $G_i$ as $F_i(t):=P_i(t)/Q_i(t)$ as in Lemma~\ref{lem:bound}, and let $\Delta_i = \max\{\deg (P_i)+1, \deg(Q_i)\}$; by Lemma~\ref{lem:bound}, each $\Delta_i$ is at most the sum of the degrees of the irreducible representations of $G_i$. By definition, $\mathrm{K}_{\alpha_i}(\inv{t})$ and $tF_i(t)$ are compositional inverses, so $$\inv{t}=\mathrm{K}_{\alpha_i}(t)\cdot P_i(\mathrm{K}_{\alpha_i}(t))/Q_i(\mathrm{K}_{\alpha_i}(t)).$$ In particular, $\mathrm{K}_{\alpha_i}(t)$ is a root of the polynomial \begin{equation}\label{eq:H} \lambda_i(t,z):=Q_i(z) - tz P_i(z)\in \overline{\mathbb{Q}}[t,z]. \end{equation} It follows that $\deg_z(\lambda_i)= \Delta_i$ and $\deg_t \lambda_i=1$, so $\mathrm{K}_{\alpha_i}(t)$ is algebraic. Hence $\mathrm{K}_{\alpha_i}(t)$ lies in a field extension $E_i$ of $\overline{\mathbb{Q}}(t)$ of degree at most $\Delta_i$. Consequently, $\mathrm{K}_{\alpha}(t)$ lies in the compositum, $E$, of $E_1,\ldots ,E_r$, which is an extension of degree at most \begin{equation} \Delta:=\Delta_1\cdots \Delta_r. \end{equation} We now bound the height\footnote{If the cogrowth series, $F(t)$, has a minimal polynomial $\eta(t,z)$, we refer to $\deg_t\eta$ and $\deg_z\eta$ as the height and degree of $F(t)$ respectively.} and degree of $F(t):=F_{G;S}(t)$. Consider the $\overline{\mathbb{Q}}(t)$-vector space $V$ with basis $e_{j_1,\ldots ,j_r}$ with $0\le j_i<\deg_z\lambda_i= \Delta_i$ for $i=1,\ldots ,r$.
Then we have a surjective $\overline{\mathbb{Q}}(t)$-linear map $\Phi$ from $V$ to $E$, given by $$\Phi(e_{j_1,\ldots ,j_r}) = \prod_{i=1}^r \mathrm{K}_{\alpha_i}(t)^{j_i}$$ for $0\le j_i< \Delta_i$. Notice that by Equation (\ref{eq:H}) we have \begin{equation} \label{eq:D'} \sum_{\ell=0}^{\Delta_i} c_{i,\ell}(t) \mathrm{K}_{\alpha_i}(t)^{\ell} = 0, \end{equation} where each $c_{i,\ell}(t)=[z^{\ell}]\lambda_i$ has degree at most $1$, $c_{i, \Delta_i}\neq 0$, and $c_{i,0}(t)=1$ since $Q_i(0)=1$. Notice that multiplication by $\mathrm{K}_{\alpha_i}(t)$ induces a $\overline{\mathbb{Q}}(t)$-linear endomorphism $L_i$ of $E$, which using Equation (\ref{eq:D'}) we can lift to an endomorphism $\bar{L}_i$ of $V$ via the rule \begin{equation}\label{eq:endLiofV} \bar{L}_i(e_{j_1,\ldots ,j_r}) = \left\{ \begin{array}{ll} e_{j_1,\ldots ,j_i+1,\ldots ,j_r} & {\rm if}~ j_i < \Delta_i-1, \\ \sum_{\ell=0}^{\Delta_i-1} -c_{i,\ell}(t)c_{i,\Delta_i}(t)^{-1} e_{j_1,\ldots ,j_{i-1}, \ell, j_{i+1},\ldots ,j_r} & {\rm if}~j_i=\Delta_i-1. \end{array} \right. \end{equation} Then by construction, we have $$\Phi\circ \bar{L_i}= L_i\circ \Phi$$ for $i=1,\ldots ,r$. Now let $L$ denote the linear endomorphism of $E$ induced by multiplication by $\mathrm{K}_{\alpha}(t)$. Then $L= \sum_{i=1}^{r} m_i L_i - (m_1+\cdots +m_r-1)\inv{U}$, where $U:= (h \in E \mapsto t\cdot h\in E)$ denotes multiplication by $t$. Notice we can lift $U$ to an endomorphism $\bar{U}$ of $V$ by the rule $\bar{U}(e_{j_1,\ldots ,j_r}) = t\cdot e_{j_1,\ldots ,j_r}$ for all $j_1,\ldots ,j_r$, and we obtain $\Phi\circ \bar{U}^n =U^n\circ \Phi$ for each $n\in \mathbb{Z}$. Let $\bar{L}:= \sum_{i=1}^{r} m_i \bar{L}_i - (m_1+\cdots +m_r-1) \inv{\bar{U}}$; then an induction argument, using linearity of the maps involved, gives $\Phi\circ C(\bar{L}) = C(L)\circ \Phi$ for every polynomial $C$. In particular, the minimal polynomial of $L$ divides the characteristic polynomial of $\bar{L}$, since $\Phi$ is surjective.
Let $Z$ denote the matrix of $\bar{L}$ with respect to the basis $\{e_{j_1,\ldots ,j_r}\}$. For a fixed column indexed by $e_{j_1,\ldots ,j_r}$, let $\Gamma\subseteq \{1,\ldots ,r\}$ denote the set of $i$ for which $j_i=\Delta_i-1$. Equation~\eqref{eq:endLiofV} implies that the matrix $Y:=Z+(m_1+\cdots +m_r-1) \inv{t} {\bf I}$ has entries of the form $a(t)/\prod_{i\in \Gamma} c_{i,\Delta_i}(t)$, where ${\rm deg}(a(t)) \le |\Gamma|$. Thus, the characteristic polynomial of $Y$ is a monic polynomial in $x$ of degree $\Delta=\prod_i \Delta_i$ in which the coefficient of $x^k$ is a sum of terms that are $(\Delta-k)$-fold products of rational functions of the form $a(t)/\prod_{i\in \Gamma} c_{i,\Delta_i}(t)$, where ${\rm deg}(a(t))\le |\Gamma|$. In particular, for a fixed $i$, since the number of standard basis elements $e_{j_1,\ldots ,j_r}$ for which $j_i=\Delta_i-1$ is $\Delta/\Delta_i$, we see that the coefficients of $\det(x{\bf I}-Y)$ are of the form $b(t)/\prod_{i=1}^r c_{i,\Delta_i}(t)^{\Delta/\Delta_i}$, with ${\rm deg}(b) \le \sum_i \Delta/\Delta_i$. Considering $$M(t):=\prod_{i=1}^r c_{i,\Delta_i}(t)^{\Delta/\Delta_i},$$ we deduce that the characteristic polynomial of $Y$ is expressible as $$x^{\Delta} + \sum_{\ell=0}^{\Delta-1} {b_{\ell}(t)\over M(t)} x^{\ell},$$ where each $b_{\ell}(t)$ has degree at most $\sum_i \Delta/\Delta_i$. Thus, the characteristic polynomial of $Z$ is of the form $$(x+(m_1+\cdots +m_r-1) \inv{t})^{\Delta} + \sum_{\ell=0}^{\Delta-1} {b_{\ell}(t)\over M(t)} (x+(m_1+\cdots +m_r-1)\inv{t})^{\ell}.$$ It follows that $R(t,\mathrm{K}_{\alpha}(t))=0$, where $$R(t,z):=M(t) (tz+(m_1+\cdots +m_r-1))^{\Delta} + \sum_{\ell=0}^{\Delta-1} t^{\Delta-\ell} b_{\ell}(t) (tz+(m_1+\cdots +m_r-1))^{\ell}.$$ Notice that $\deg_t R\le \Delta+\Delta'$, where $\Delta':=\sum_i \Delta/\Delta_i$; and $\deg_z R = \Delta$.
By the change of variables $\inv{t}=uF(u)$, we deduce that $$R(uF(u),\inv{u})=0.$$ Thus $F(t)$ is a root of the polynomial $$R_0(t,z):= z^{\Delta}R(tz, z^{-1}).$$ It follows from standard polynomial manipulation that $\deg_t R_0, \deg_z R_0\le \Delta+\Delta' = \Delta+ \sum_{i=1}^r \Delta/\Delta_i$. In particular, we have the following theorem. \begin{theorem} \label{thm:generalbound} Let $G_1,\ldots ,G_r$ be finite groups with generating sets $S_1,\hdots, S_r$ respectively, and cogrowth series $F_{G_i;S_i}(t)=P_i(t)/Q_i(t)$, where $P_i,Q_i\in \mathbb{Z}[t]$ are polynomials with constant term $1$. For each $i=1,2,\hdots, r$, let $\Delta_i:=\max\{1+\deg P_i, \deg Q_i\}$. Then the cogrowth series $F(t):=F_{G;S}(t)$ of $G=\coprod G_i^{\star m_i}$ is algebraic and satisfies $\Lambda(t,F(t))=0$, where $\Lambda(t,z)\in \mathbb{Z}[t,z]$ with ${\rm deg}_t(\Lambda)$ and ${\rm deg}_z(\Lambda)$ both at most $$\left(\prod_{i=1}^r \Delta_i\right) \left( 1 + \sum_{i=1}^r \frac{1}{\Delta_i} \right).$$ In particular, the degrees do not depend on $m_1,\ldots ,m_r$ when we choose $S$ as above. \end{theorem} Observe that each $\Delta_i$ is at most the degree sum of the irreducible representations of $G_i$. We therefore immediately obtain Theorem~\ref{thm:mainbound}. \begin{proof}[Proof of Theorem~\ref{thm:mainbound}] The real-valued function on $(0,\infty)^r$ given by $$(y_1,\hdots,y_r)\mapsto (y_1\hdots y_r) (1+\inv{y_1}+\hdots+\inv{y_r})$$ is increasing in each of the $y_i$. Hence, the result follows directly from Lemma~\ref{lem:bound} and Theorem~\ref{thm:generalbound}. \end{proof} The following remark shows how one can directly apply Theorem~\ref{thm:mainbound} to determine explicit algebraic equations satisfied by the cogrowth series.
\begin{remark} If $A(t)\in \mathbb{C}[[t]]$ is a power series that is a solution to $\eta(t,A(t))=0$, where $\eta(t,z)$ is an irreducible polynomial whose degrees in $t$ and $z$ are bounded by $\Delta_t$ and $\Delta_z$ respectively, then Bostan et al.~\cite{bostandeaf} show that $A(t)$ is $D$-finite and that it is annihilated by a differential operator $$\sum_{i=0}^k p_i(t) \partial_t^i$$ with $k\le \Delta_z$ and ${\rm deg}(p_i) \le ((2k-1)\Delta_z +2k^2 -4k+3)\Delta_t -k(k-1)/2$. By the irreducibility of $\eta$ and Theorem~\ref{thm:mainbound}, we can take $$\Delta_t=\Delta_z=\prod_{i=1}^r \Delta_i \left( 1 + \sum_{i=1}^r 1/\Delta_i \right).$$ It follows that the above differential operator has coefficients in $\mathbb{C}[t]$, each of degree at most $$N:=((2{\Delta_z}-1){\Delta_z} + 2{\Delta_z}^2 -4{\Delta_z}+3){\Delta_z} - {\Delta_z}({\Delta_z}-1)/2 = (8{\Delta_z}^3 - 11{\Delta_z}^2+7{\Delta_z})/2.$$ Therefore, the coefficients $a_n$ of $A(t)$ satisfy a polynomial linear recurrence of the form $$\sum_{i=0}^{N+{\Delta_z}} q_i(n) a_{n-i} = 0$$ with $q_i(x)$ of degree at most ${\Delta_z}$, and so one can in principle ``guess-and-prove'' the recurrence (i.e. the $q_i$) from sufficiently many terms of $\{a_n\}$. \end{remark} \subsection{Determining the cogrowth via methods from free probability} \label{sec:AlgSys} In some cases it is straightforward to derive the minimal polynomial $\Lambda(t,z)$ for the cogrowth series of a virtually free group. The proof of Theorem~\ref{thm:generalbound} gives an outline of how to do this. In the case of cyclic factors, Liu~\cite{liu} gave a slight improvement to Theorem~\ref{thm:generalbound}. We now illustrate how to compute the polynomial equation satisfied by the cogrowth series for the group $G=(\mathbb{Z}/d\mathbb{Z})^{\star m}$. This is a different approach from that used in the proof of Corollary \ref{cor:main3}, although we obtain the same conclusion.
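As a quick numerical warm-up (an illustration only), take $d=m=2$, so that $G=\mathbb{Z}/2\mathbb{Z}\star\mathbb{Z}/2\mathbb{Z}=D_{\infty}$ and the Cayley graph with respect to the two involutive generators is the bi-infinite path. The nonzero cogrowth coefficients are then the central binomial numbers $b_n=\binom{2n}{n}$ (the series is $1/\sqrt{1-4t^2}$), and algebraicity forces the short P-recurrence $n\,b_n=(4n-2)\,b_{n-1}$, of the kind promised in the remark above. The sketch below counts the words directly and checks both facts:

```python
from math import comb

# words over {a, b} with a^2 = b^2 = 1: walk on the Cayley graph of
# D_inf, the bi-infinite path, keyed by position (= group element)
counts = {0: 1}
a = [1]                              # a[n] = number of length-n words equal to 1
for _ in range(20):
    nxt = {}
    for pos, c in counts.items():
        for q in (pos - 1, pos + 1):  # the two involutive generators
            nxt[q] = nxt.get(q, 0) + c
    counts = nxt
    a.append(counts.get(0, 0))

b = [a[2 * n] for n in range(10)]    # odd-length counts are zero
assert b == [comb(2 * n, n) for n in range(10)]
# P-recurrence implied by algebraicity of 1/sqrt(1 - 4 t^2):
assert all(n * b[n] == (4 * n - 2) * b[n - 1] for n in range(1, 10))
print(b)
```
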
We again let $x_1,\ldots ,x_m$ denote generators for the copies of $\mathbb{Z}/d\mathbb{Z}$ and we let $S=\{x_1,\ldots ,x_m\}$. Using the notation in \S3.1, we see by Equation (\ref{eq:free}) that $$\mathrm{K}_{\alpha}(t) = m \mathrm{K}_{\alpha_1}(t) - (m-1)t^{-1},$$ where $\alpha_1=x_1 \in \mathbb{C}[\mathbb{Z}/d\mathbb{Z}]$ and $x_1$ is a generator for $\mathbb{Z}/d\mathbb{Z}$. Since $\mathrm{C}_{\alpha_1}(t) = \inv{t}\cdot 1/(1-t^{-d})$ by Equation (\ref{eq:inv}), we see that $$t^{-1} = \mathrm{K}_{\alpha_1}(t)^{-1} \cdot 1/(1-\mathrm{K}_{\alpha_1}(t)^{-d}).$$ In particular, $z=\mathrm{K}_{\alpha_1}(t)$ is a solution to the equation $$(z^d-1) = z^{d-1}t,$$ and since $\mathrm{K}_{\alpha_1}(t)=m^{-1} \mathrm{K}_{\alpha}(t) + (m-1)m^{-1}t^{-1}$, we have that $$(m^{-1} \mathrm{K}_{\alpha}(t)+(m-1)m^{-1}t^{-1})^d - 1 = (m^{-1} \mathrm{K}_{\alpha}(t)+(m-1)m^{-1}t^{-1})^{d-1}t.$$ We now let $t=\mathrm{C}_{\alpha}(u)$ and we see $$(m^{-1} u^{-1} + (m-1)m^{-1}\mathrm{C}_{\alpha}(u)^{-1})^d -1 = (m^{-1} u^{-1} + (m-1)m^{-1}\mathrm{C}_{\alpha}(u)^{-1})^{d-1} \mathrm{C}_{\alpha}(u).$$ Letting $x=u^{-1}$ and using Equation (\ref{eq:11}), we see $$(m^{-1} x + (m-1)m^{-1}x^{-1} F_{G,S}(x)^{-1})^d - 1 = (m^{-1} x + (m-1) m^{-1} x^{-1} F_{G,S}(x)^{-1})^{d-1} x F_{G,S}(x).$$ In particular, after simplifying we see that $z=F_{G,S}(t)$ is a solution to $\Lambda(t,z)=0$, where $$\Lambda(t,z) = m^d t^d z^d - (z-1)(z+m-1)^{d-1},$$ which is consistent with the result obtained in Example~\ref{exam:1} via the use of combinatorial grammars. The language theoretic approach, however, has the added advantage of giving a mechanism for describing the language $\mathcal{L}(G,S)$. \section{A gap result for radii of convergence} \label{sec:gap} In this section we prove Theorem \ref{thm:main3}. We first prove an elementary estimate. \begin{lemma} Let $\phi: G\to H$ be a group homomorphism and let $S$ be a symmetric generating set for $G$.
If the restriction of $\phi$ to $S$ is injective then ${\rm CL}(n;G,S)\le {\rm CL}(n;H,\phi(S))$. \end{lemma} \begin{proof} Observe that if $s_1,\ldots, s_n\in S$ and $s_1\cdots s_n=1$ then $\phi(s_1)\cdots \phi(s_n)=1$ and so the inequality is immediate. \end{proof} \begin{proof}[Proof of Theorem \ref{thm:main3}] Suppose that $H$ is a group with symmetric generating set $S$ and let $s_1,\ldots ,s_p$ be the elements of order $2$ in $S$ and let $u_1^{\pm 1},\ldots ,u_q^{\pm 1}$ be the remaining elements of $S$. Then $p+2q=|S|$. We claim that if $p\ge 3$, $q\ge 2$, or $p,q\ge 1$, then $\rho_{H,S}^{-1}\ge 2\sqrt{2}$. To do this, we deal with a few cases. \emph{Case I:} $p\ge 3$. Let $G$ be the free product of $3$ copies of $\mathbb{Z}/2\mathbb{Z}$ with generators $x_1,x_2,x_3$. Then we have a group homomorphism $\phi:G\to H$ sending $x_i\mapsto s_i$ for $i=1,2,3$, and this is injective on $T:=\{x_1,x_2,x_3\}$. Thus ${\rm CL}(n;G,T)\le {\rm CL}(n;H,S)$ and hence $1/\rho_{G,T} \le 1/\rho_{H,S}$. By Theorem \ref{thm:main2} (a), with $d=2,m=3$, we have that the cogrowth generating function for $G$ with respect to $T$ is $$4/\left(1+3\sqrt{1-8t^2}\right),$$ which has radius of convergence ${\left(2\sqrt{2}\right)}^{-1}$, and so $\rho_{H,S}^{-1}\ge 2\sqrt{2}$ in this case. \emph{Case II:} $q\ge 2$. In this case, we let $G$ be the free product of two copies of $\mathbb{Z}$ with generating set $T=\{y_1,y_1^{-1}, y_2,y_2^{-1}\}$. Then we have a homomorphism $\phi: G\to H$ sending $y_i\mapsto u_i$ for $i=1,2$. Then we again have ${\rm CL}(n;G,T)\le {\rm CL}(n;H,S)$ and hence $1/\rho_{G,T} \le 1/\rho_{H,S}$. Taking $m=0$ and $s=2$ in Theorem \ref{thm:main2} (b), we have that the cogrowth series of $G$ with respect to $T$ is given by the series $3/(1+2\sqrt{1-12x^2})$ using the work of Chomsky and Sch\"utzenberger~\cite{chomsky} (see also OEIS A035610). This series has radius of convergence $1/\sqrt{12}$, so $1/\rho_{H,S}\ge 1/\rho_{G,T} =\sqrt{12}>2\sqrt{2}$, and we get the result in this case.
\vskip 2mm \emph{Case III}: $p,q\ge 1$. In this case, we let $G$ be the free product of $\mathbb{Z}/2\mathbb{Z}$ (with generator $x$) with $\mathbb{Z}$ (with generating set $\{y,y^{-1}\}$). We let $T$ be the symmetric generating set $\{x,y,y^{-1}\}$ and we have a homomorphism from $G\to H$ sending $x\mapsto s_1$ and $y\mapsto u_1$. Then this is injective on $T$ and sends $T$ into $S$, so ${\rm CL}(n;H,S)\ge {\rm CL}(n;G,T)$, and Theorem \ref{thm:main2} (b) gives that the cogrowth generating series for $G$ has radius of convergence $1/(2\sqrt{2})$, so we get the result in this case. We see that it suffices to consider the case when $p\le 2$, $q\le 1$, and $pq=0$, and hence $(p,q)\in \{(2,0), (1,0), (0,1)\}$. In this case, we see that $H$ is a homomorphic image of either $D_{\infty}$ or $\mathbb{Z}$, and hence it is amenable, and so by Kesten's criterion $\rho_{H,S}^{-1}=|S|\in \{1,2\}$. The result follows. \end{proof} We pose the following question. \begin{question} Does there exist $\alpha\in [2\sqrt{2},\infty)$ that cannot be realized as $1/\rho_{G,S}$ for some finitely generated group $G$ and finite symmetric generating set $S$? \end{question} \bibliographystyle{plain}
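The equation $\Lambda(t,z)=m^dt^dz^d-(z-1)(z+m-1)^{d-1}$ obtained in \S\ref{sec:AlgSys} can also be double-checked numerically. The pure-Python sketch below (an illustration, with the sample values $d=3$ and $m=2$) counts words over $S$ equal to the identity in $(\mathbb{Z}/d\mathbb{Z})^{\star m}$ by dynamic programming on reduced normal forms, and verifies that $\Lambda(t,F(t))$ vanishes to the truncation order:

```python
d, m, N = 3, 2, 14

# dynamic programming over reduced words in (Z/dZ)^{*m}: a state is a
# tuple of (generator, exponent) syllables with adjacent generators
# distinct; appending a generator merges into, or cancels, the last syllable
counts = {(): 1}
f = [1]                # f[n] = number of words of length n over S equal to 1
for _ in range(N):
    nxt = {}
    for w, c in counts.items():
        for g in range(m):
            if w and w[-1][0] == g:
                e = (w[-1][1] + 1) % d
                w2 = w[:-1] if e == 0 else w[:-1] + ((g, e),)
            else:
                w2 = w + ((g, 1),)
            nxt[w2] = nxt.get(w2, 0) + c
    counts = nxt
    f.append(counts.get((), 0))

def mul(a, b):         # truncated product of integer power series
    c = [0] * (N + 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j <= N:
                c[i + j] += ai * bj
    return c

def power(a, k):
    r = [1] + [0] * N
    for _ in range(k):
        r = mul(r, a)
    return r

F = f[:]                                        # F(t) to order N
lhs = [m ** d * c for c in power(F, d)]
lhs = [0] * d + lhs[: N + 1 - d]                # multiply by t^d
Fm1 = F[:]; Fm1[0] -= 1                         # F - 1
Fpm = F[:]; Fpm[0] += m - 1                     # F + m - 1
rhs = mul(Fm1, power(Fpm, d - 1))
assert lhs == rhs                               # Lambda(t, F(t)) = O(t^{N+1})
print(f[:10])
```

For these values the first nonzero coefficients are $f_3=2$ (the words $x_1^3$ and $x_2^3$) and $f_6=8$, consistent with expanding the root of $\Lambda$.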
\section{\label{introduction} Introduction} The bright Saturn Nebula NGC\,7009 is known for its rich and prominent optical recombination lines (ORLs) of heavy element ions, especially those of O~{\sc ii}, ever since the spectrophotographic observations of Wyse \cite{wyse1942}, who published and analyzed deep spectra of the Orion Nebula and nine planetary nebulae (PNe), including NGC\,7009. He identified and measured several dozen O~{\sc ii} permitted lines in NGC\,7009 in the wavelength range 3700 -- 6750\,{\AA}, although accurate measurements of many of those O~{\sc ii} lines were hampered by line blending. At the end of that paper Wyse \cite{wyse1942} expressed the desire for more accurate measurements of the O~{\sc ii} permitted lines. Aller \& Kaler \cite{ak1964} identified more than 100 O~{\sc ii} permitted lines in the spectrum of NGC\,7009. Large numbers of permitted lines of other ionic species, such as C~{\sc ii}, N~{\sc ii}, N~{\sc iii}, O~{\sc iii}, Ne~{\sc ii}, were also detected. The majority of these permitted lines are mainly excited by recombination. Other possible excitation mechanisms, such as dielectronic recombination, radiative charge transfer, and resonance fluorescence by starlight or by some other prominent nebular emission lines, are all by their nature selective, which means that they tend to excite lines from specific spectral terms of certain parity and multiplicity only (e.g., Grandi \citealt{grandi1976}; Liu \& Danziger \citealt{ld1993a}; Liu, Danziger \& Murdin \citealt{ldm93}). With high signal-to-noise ratio, high spectral resolution and wide wavelength-coverage spectra of PNe now available, more and more ORLs of fainter intensities from heavy element ions that arise from many different multiplets are observed and provide an opportunity to study the radiative and dielectronic recombination processes and test the accuracy of the recombination theories for non-hydrogenic ions.
The first systematic study of the ORLs in NGC\,7009 was carried out by Liu et al. (\citealt{liu1995}, hereafter LSBC), who analyzed dozens of O~{\sc ii} ORLs, using effective recombination coefficients calculated in the intermediate coupling scheme for transitions from the 3d\,--\,3p and 4f\,--\,3d arrays, and coefficients calculated in the {\it LS}\,coupling scheme for transitions from the 3p -- 3s array. LSBC found clear deviations from the {\it LS}\,coupling in the 3d\,--\,3p and 4f\,--\,3d transitions. Luo, Liu \& Barlow (\citealt{luo2001}, hereafter LLB01) presented high-quality observations of several dozen Ne~{\sc ii} ORLs in NGC\,7009, and derived the Ne$^{2+}$/H$^+$ abundance ratios from them. Along with the advance of observational techniques that have enabled the detections of many faint ORLs of heavy element ions in photoionized gaseous nebulae, the recombination theories of heavy element ions, such as C~{\sc ii}, N~{\sc ii}, O~{\sc ii}, and Ne~{\sc ii}, have seen steady improvements since the early 1980s (e.g. Storey \citealt{storey1981}; Nussbaumer \& Storey \citealt{ns1983}, \citealt{ns1984}, \citealt{ns1986}, \citealt{ns1987}; Escalante \& Victor \citealt{ev1990}; P\'{e}quignot, Petitjean \& Boisson \citealt{ppb1991}; Storey \citealt{storey1994}; LSBC; Kisielius et al. \citealt{kisielius1998}; Davey, Storey \& Kisielius \citealt{davey2000}; Kisielius \& Storey \citealt{ks1999}, \citealt{ks2002}; Fang, Storey \& Liu \citealt{fsl2011}). The high-quality atomic data have been widely used to reveal the physical conditions (electron temperatures and densities) under which the ORLs of heavy element ions arise, and to determine ionic and elemental abundances from them (e.g. Liu et al. \citealt{liu2000}). In nebular astrophysics there has been a long-standing dichotomy whereby the ionic and elemental abundances of C, N, O and Ne relative to hydrogen determined from ORLs (e.g.
C~{\sc ii} M6 $\lambda$4267, N~{\sc ii} M39b $\lambda$4041, O~{\sc ii} M1 $\lambda$4649 and M48a $\lambda$4089, Ne~{\sc ii} M55e $\lambda$4392) are systematically higher than those derived from the much brighter collisionally excited lines (CELs, often referred to as forbidden lines). With high-quality optical spectra now available, detailed studies of this problem have been carried out for several archetypal PNe (LSBC and LLB01 for NGC\,7009; Liu et al. \citealt{liu2000} for NGC\,6153; Liu et al. \citealt{liu2001b} for M\,1-42 and M\,2-36; Liu et al. \citealt{lbzbs06} for Hf\,2-2; Garnett \& Dinerstein \citealt{gd2001} for NGC\,6720). Several deep optical spectroscopic surveys of PNe, which allow for the analyses of nebulae based on ORLs, have been carried out during the past decade (Tsamis et al. \citealt{tsamis2003}, \citealt{tsamis2004}; Liu et al. \citealt{liu2004a},\,b; Robertson-Tessi \& Garnett \citealt{rtg2005}; Wesson, Liu \& Barlow \citealt{wlb2005}; Wang \& Liu \citealt{wl2007}). The abundance discrepancy factors (ADFs), defined as the ratio of the abundance derived from ORLs to that deduced from CELs, typically lie in the range 1\,--\,3. But for a significant number of PNe, ADF values exceeding 5, or even 10, are seen. The highest ADF value ($\sim$70) of all PNe is found in Hf\,2-2 (Liu et al. \citealt{lbzbs06}). Another dichotomy that is closely related to the problem of abundance discrepancy is that nebular electron temperatures derived from the traditional diagnostic [O~{\sc iii}] nebular-to-auroral line ratio are generally higher than those derived from the Balmer jump (BJ) of hydrogen recombination spectrum (e.g. Peimbert \citealt{peimbert1971}; Liu \& Danziger \citealt{ld1993b}). A number of postulations have been raised to explain these problems (e.g. Peimbert \citealt{peimbert1967}; Rubin \citealt{rubin1989}; Viegas \& Clegg \citealt{vc1994}), but all failed to provide a consistent interpretation of all the available observations. 
Recently, Nicholls, Dopita \& Sutherland \cite{nicholls12} explored the possibility that electrons in H~{\sc ii} regions and PNe depart from a Maxwell-Boltzmann equilibrium energy distribution and suggested that a ``$\kappa$-distribution'' for the electron energies, which is widely found in solar system plasmas, can explain the temperature and abundance discrepancies in H~{\sc ii} regions and PNe. The bi-abundance nebular model proposed by Liu et al. \cite{liu2000}, who postulated that PNe (probably also H~{\sc ii} regions) contain H-deficient inclusions, provides a better and natural explanation of the dichotomy. In this model, the faint ORLs of heavy element ions originate mainly from the ``cold'', H-deficient inclusions, while the stronger CELs are emitted from the warmer ambient plasma with `normal' chemical composition. Deep spectroscopic surveys and recombination line analyses of individual nebulae in the past decade have yielded strong evidence for the existence of such a ``cold'' component (see recent reviews by Liu \citealt{liu2003}, \citealt{liu2006a}, \citealt{liu2011}). This is the second of the two papers devoted to very deep spectroscopy of NGC\,7009. In the previous paper (Fang \& Liu \citealt{fl2011}, hereafter Paper~I), we presented high-quality spectra of NGC\,7009 and tabulation of all detected lines, including their observed and dereddened intensities, many of which were obtained via careful deblending using the technique of multi-Gaussian profile fitting. We also carried out plasma diagnostics using the CEL ratios, the H~{\sc i} recombination spectrum (including the Balmer and Paschen decrements of the line spectrum, and the Balmer and Paschen jumps of the continuum spectrum), and the He~{\sc i} and He~{\sc ii} recombination spectrum (including the He~{\sc i} recombination line ratios, and discontinuities of the He~{\sc i} and He~{\sc ii} recombination continua).
The average electron temperature yielded by CELs, $T_\mathrm{e}$(CELs), is higher than that from the H~{\sc i} Balmer jump, $T_\mathrm{e}$(H~{\sc i}~BJ), which in turn is higher than the temperature derived from the He~{\sc i} recombination line ratios, $T_\mathrm{e}$(He~{\sc i}). The current paper focuses on analyses of the optical recombination spectra of heavy element ions detected in the spectrum of NGC\,7009. New effective recombination coefficients, including those for the N~{\sc ii} and O~{\sc ii} recombination spectrum that were calculated in the intermediate coupling scheme, are now available and are utilized in the analyses. Plasma diagnostics based on the ORLs of heavy element ions are carried out in Section\,2, and the electron temperatures derived from the N~{\sc ii} and O~{\sc ii} ORL ratios agree with each other and are both close to 1200~K. Thus the general pattern of electron temperatures, $T_\mathrm{e}$(CELs) $\gtrsim$ $T_\mathrm{e}$(H~{\sc i}~BJ) $\gtrsim$ $T_\mathrm{e}$(He~{\sc i}) $\gtrsim$ $T_\mathrm{e}$(N~{\sc ii},\,O~{\sc ii}~ORLs), which was predicted by the bi-abundance nebular model (Liu \citealt{liu2003}) and has been seen in many PNe, is confirmed in the current analysis of NGC\,7009. A comprehensive analysis of individual multiplets of the C~{\sc ii}, N~{\sc ii}, O~{\sc ii}, and Ne~{\sc ii} recombination spectra is presented in Section\,3. The lines are critically examined for potential blending effects. Comparison is made for the observed and predicted relative intensities of the best observed transitions, using the latest effective recombination coefficients. Ionic and elemental abundances are derived in Section\,4, where ADFs for the C, N, O, and Ne ionic abundances are calculated. The results are discussed in Section\,5, followed by a summary in Section\,6.
\section{\label{diagnose} Plasma diagnostics based on the ORLs of heavy element ions} \subsection{\label{diagnose:data} Effective recombination coefficients} Reliable atomic data, most importantly the effective recombination coefficients of abundant heavy element ions such as C~{\sc ii}, N~{\sc ii}, O~{\sc ii}, and Ne~{\sc ii}, are key to the spectroscopic analysis of photoionized gaseous nebulae. Most of the {\it ab~initio} calculations of heavy element ions aimed at astrophysical applications have hitherto been carried out in the {\it LS}\,coupling scheme. This approximation tacitly assumes a statistical distribution in the population of the fine-structure levels of the recombining ions (i.e., 1\,:\,2 for the N$^{2+}$ $^{2}$P$^{\rm o}_{1/2}$ and $^{2}$P$^{\rm o}_{3/2}$ levels in the case of N~{\sc ii}; 1\,:\,3\,:\,5 for the O$^{2+}$ $^{3}$P$_{0}$, $^{3}$P$_{1}$ and $^{3}$P$_{2}$ levels in the case of O~{\sc ii}). The assumption of {\it LS}\,coupling may give satisfactory results for some of the low-lying transitions such as those belonging to the 3p\,--\,3s configuration, but not for many of the transitions from the higher 3d\,--\,3p or 4f\,--\,3d configurations. In low-density objects such as H~{\sc ii} regions and evolved PNe, the relative populations of the ground-term fine-structure levels of the recombining ion actually depend on density and deviate from the statistical distribution, and so do the relative emissivities of the resultant recombination lines. A better treatment of the recombination and the subsequent cascading in a proper coupling scheme is vital for probing the physical conditions in gaseous nebulae. New {\it ab initio} calculation of the effective recombination coefficients for the N~{\sc ii} recombination spectrum was presented by Fang, Storey \& Liu (\citealt{fsl2011}, hereafter FSL11)\footnote{Dr.
Daniel P\'{e}quignot found some anomalies in the published data of FSL11, which were due to mislabeling of five bound-state energy levels of N~{\sc ii}. The labeling has recently been corrected and the effective recombination coefficients for the N~{\sc ii} lines were re-calculated. A corrigendum is in preparation. Figs.\,\ref{nii:v3}, \ref{nii:v19} and \ref{nii:v39} in the current paper are based on the revised effective recombination coefficients of N~{\sc ii}.}, who took into account the density dependence of effective recombination coefficients arising from the density dependence of relative populations of the ground fine-structure levels of the recombining ion (i.e. N$^{2+}$ $^2$P$^{\rm o}_{1/2}$ and $^2$P$^{\rm o}_{3/2}$), an elaboration that has not been attempted before for this ion. The availability of such data opens up the possibility of electron density determination via recombination line analysis. Fig.\,\ref{niii:frac} (also Fig.\,$3$ in FSL11) shows the relative populations of the N$^{2+}$ $^{2}$P$^{\rm o}_{1/2}$ and $^{2}$P$^{\rm o}_{3/2}$ fine-structure levels as a function of electron density under typical nebular conditions. Photoionization cross-sections, bound state energies, and oscillator strengths of N~{\sc ii} with $n\leq{11}$ and $l\leq{4}$ were obtained using the close-coupling R-matrix method in the intermediate coupling scheme. Photoionization data were computed using an energy mesh which accurately maps out the near-threshold resonances, and were used to derive recombination coefficients, including radiative and dielectronic recombination. Also new is the inclusion in the calculations of the effects of dielectronic recombination via high-$n$ resonances lying between the $^2$P$^{\rm o}_{1/2}$ and $^2$P$^{\rm o}_{3/2}$ thresholds. The calculated coefficients are valid for temperatures down to an unprecedentedly low level ($\sim$100~K).
Figs.\,\ref{nii:v3}, \ref{nii:v19} and \ref{nii:v39} (also Figs.\,$5$, $6$ and $7$ in FSL11) show the theoretical relative intensities of the fine-structure components of the M3 2p3p\,$^3$D\,--\,2p3s\,$^3$P$^{\rm o}$, M19 2p3d\,$^{3}$F$^{\rm o}$\,--\,2p3p\,$^{3}$D and M39 2p4f\,G[7/2,9/2]\,--\,2p3d\,$^{3}$F$^{\rm o}$ multiplets of N~{\sc ii}, respectively, as a function of electron density. So far, most calculations of the O~{\sc ii} effective recombination coefficients have been carried out under the {\it LS}\,coupling assumption. The first comprehensive treatment of the O~{\sc ii} recombination at nebular temperatures and densities was by Storey \cite{storey1994}, who adopted the bound-bound and bound-free radiative data of O~{\sc ii} from the Opacity Project data base (Cunto et al. \citealt{cunto1993}) and took into account cascading as well as the effects of collisions. LSBC presented a partial treatment of intermediate coupling effects in transitions between the ($^3$P)4f, ($^3$P)3d, and ($^3$P)3p electron configurations. The most recent calculations of effective recombination coefficients for the O~{\sc ii} recombination spectrum were carried out by P.~J. Storey (private communication, hereafter PJS) in the intermediate coupling scheme. Density dependence of the relative populations of the ground-term fine-structure levels of the recombining ion was considered in the level population calculations. Fig.\,\ref{oiii:frac} shows the fractional populations of the recombining ion O$^{2+}$ $^{3}$P$_{0}$, $^{3}$P$_{1}$ and $^{3}$P$_{2}$ fine-structure levels as a function of electron density. The new O~{\sc ii} recombination coefficients were calculated down to a temperature of 400~K.
Figs.\,\ref{oii:v1}, \ref{oii:v10} and \ref{oii:v48} show the theoretical relative intensities of the fine-structure components of the O~{\sc ii} M1 2p$^{2}$3p\,$^{4}$D$^{\rm o}$\,--\,2p$^{2}$3s\,$^{4}$P, M10 2p$^{2}$3d\,$^{4}$F\,--\,2p$^{2}$3p\,$^{4}$D$^{\rm o}$ and M48 4f\,G[5,4,3]$^{\rm o}$\,--\,3d\,$^4$F multiplets, respectively, as a function of electron density. The new effective recombination coefficients for the N~{\sc ii} and O~{\sc ii} recombination spectra provide an opportunity to construct nebular plasma diagnostics based on the ORLs of heavy element ions. With those new atomic data, we have determined electron temperatures and densities for over 100 Galactic PNe and 40 Galactic and extragalactic H~{\sc ii} regions (McNabb et al. \citealt{mfls2011}). By comparing our results of plasma diagnostics based on the N~{\sc ii} and O~{\sc ii} ORLs with the electron temperatures given in the literature ($T_\mathrm{e}$'s derived from CELs, H~{\sc i} Balmer jump and the He~{\sc i} recombination lines), we find a temperature sequence for about 50 PNe, $T_\mathrm{e}$([O~{\sc iii}]) $\gtrsim$ $T_\mathrm{e}$(H~{\sc i} BJ) $\gtrsim$ $T_\mathrm{e}$(He~{\sc i}) $\gtrsim$ $T_\mathrm{e}$(N~{\sc ii}~\&~ O~{\sc ii}~ORLs), which is consistent with predictions from the bi-abundance nebular model postulated by Liu et al. \cite{liu2000}. Kisielius et al. \cite{kisielius1998} published the Ne~{\sc ii} effective recombination coefficients that were calculated in the {\it LS}\,coupling scheme. Only transitions between states with $l\leq{2}$ were presented. Preliminary effective recombination coefficients for a few selected lines from the 4f\,--\,3d configuration are available (P.~J. Storey, private communication), but only for a single temperature and density case.
All the previous calculations of the Ne~{\sc ii} recombination spectrum assumed that the three ground-term fine-structure levels of the recombining ion Ne$^{2+}$, $^{3}$P$_{2}$, $^{3}$P$_{1}$ and $^{3}$P$_{0}$, are thermalized, i.e. they are populated according to the statistical weights. However, the $^{3}$P$_{1}$ and $^{3}$P$_{0}$ levels have relatively large critical densities: 2.0$\times$10$^5$~cm$^{-3}$ for $^{3}$P$_{1}$ and 2.9$\times$10$^4$~cm$^{-3}$ for $^{3}$P$_{0}$ at 10\,000~K, and these values drop to about half when the electron temperature decreases to 1000~K. At densities below the critical values, the $^{3}$P$_{1}$ and $^{3}$P$_{0}$ levels are underpopulated compared to the values under thermal equilibrium. Fig.\,\ref{neiii:frac} shows the fractional populations of the three Ne$^{2+}$ levels as a function of electron density. The effects of the non-equilibrium level populations of Ne$^{2+}$ on the effective recombination coefficients for the 4f -- 3d transitions are not clear and may vary from line to line. For the strongest 4f\,--\,3d lines that form exclusively from recombination from the $^{3}$P$_{2}$ level plus cascades, their effective recombination coefficients will be underestimated if a thermal equilibrium of the Ne$^{2+}$ ground levels is assumed, and that will cause a corresponding overestimation of the derived Ne$^{2+}$/H$^+$. Many Ne~{\sc ii} recombination lines from different multiplets have been observed in deep spectra of PNe and H~{\sc ii} regions and ionic abundances derived (e.g. LLB01). However, a proper analysis of those data requires new calculations in an appropriate coupling scheme for the strongest Ne~{\sc ii} recombination lines, especially those belonging to the 3d\,--\,3p and 4f\,--\,3d transition arrays.
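The underpopulation described above can be quantified with a two-level estimate (a simplification that ignores the third level and cascading): the collisional-radiative balance gives a population ratio suppressed relative to its Boltzmann value by a factor $1/(1+n_\mathrm{crit}/n_\mathrm{e})$, where $n_\mathrm{crit}=A_{ul}/q_{ul}$ is the critical density. Using the critical densities quoted above for the Ne$^{2+}$ $^{3}$P$_{1}$ and $^{3}$P$_{0}$ levels at 10\,000~K:

```python
def boltzmann_departure(n_e, n_crit):
    """Two-level estimate of n_u/n_l relative to its Boltzmann (LTE)
    value: the collisional-radiative balance gives a suppression factor
    1 / (1 + n_crit / n_e), with n_crit = A_ul / q_ul."""
    return 1.0 / (1.0 + n_crit / n_e)

N_CRIT_3P1 = 2.0e5   # cm^-3, Ne^2+ 3P1 at 10^4 K (quoted in the text)
N_CRIT_3P0 = 2.9e4   # cm^-3, Ne^2+ 3P0 at 10^4 K (quoted in the text)

for n_e in (1e2, 1e3, 1e4, 1e5, 1e6):
    print(n_e,
          round(boltzmann_departure(n_e, N_CRIT_3P1), 4),
          round(boltzmann_departure(n_e, N_CRIT_3P0), 4))
```

At a typical nebular density of $10^4$~cm$^{-3}$ the $^{3}$P$_{1}$ population is only $\sim$5 per cent of its LTE value in this approximation, in line with the statement that both levels are strongly underpopulated well below their critical densities.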
\begin{figure*} \begin{center} \includegraphics[width=9.5cm,angle=-90]{fig01.pdf} \caption{Fractional populations of the N$^{2+}$ $^{2}$P$^{\rm o}_{1/2}$ and $^{2}$P$^{\rm o}_{3/2}$ fine-structure levels. Four temperature cases, 200, 1000, 5000 and 10\,000~K, are shown. This figure is obtained by solving level population equations for a five-level atomic model.} \label{niii:frac} \end{center} \end{figure*} \begin{figure*} \begin{center} \includegraphics[width=10cm,angle=-90]{fig02.pdf} \caption{Fractional intensities of the N~{\sc ii} M3 2p3p\,$^3$D\,--\,2p3s\,$^3$P$^{\rm o}$ $\lambda$5679 multiplet as a function of electron density. The numbers in the brackets ($J_2\,-\,J_1$) following the wavelength labels are the total angular momentum quantum numbers of the upper and lower levels, respectively. Transitions from the upper levels with the same angular momentum quantum number $J_2$ are represented by curves of the same color and line type. Four temperature cases, $\log{T_\mathrm{e}}$~[K] = 2.5, 3.0, 3.5, and 4.0, are presented. The calculations were based on the effective recombination coefficients of FSL11.} \label{nii:v3} \end{center} \end{figure*} \begin{figure*} \begin{center} \includegraphics[width=10cm,angle=-90]{fig03.pdf} \caption{Same as Fig.\,\ref{nii:v3} but for the fractional intensities of the N~{\sc ii} M19 2p3d\,$^{3}$F$^{\rm o}$\,--\,2p3p\,$^{3}$D $\lambda$5004 multiplet.} \label{nii:v19} \end{center} \end{figure*} \begin{figure*} \begin{center} \includegraphics[width=10cm,angle=-90]{fig04.pdf} \caption{Same as Fig.\,\ref{nii:v3} but for the fractional intensities of the N~{\sc ii} M39 2p4f\,G[7/2,9/2]\,--\,2p3d\,$^{3}$F$^{\rm o}$ $\lambda$4041 multiplet.} \label{nii:v39} \end{center} \end{figure*} \begin{figure*} \begin{center} \includegraphics[width=9.5cm,angle=-90]{fig05.pdf} \caption{Fractional populations of the O$^{2+}$ $^{3}$P$_{0}$, $^{3}$P$_{1}$ and $^{3}$P$_{2}$ fine-structure levels. 
Four temperature cases, 200, 1000, 5000 and 10\,000~K, are shown. This figure is obtained by solving the level population equations for a five-level atomic model.} \label{oiii:frac} \end{center} \end{figure*} \begin{figure*} \begin{center} \includegraphics[width=10cm,angle=-90]{fig06.pdf} \caption{Same as Fig.\,\ref{nii:v3} but for the fractional intensities of the O~{\sc ii} M1 2p$^{2}$3p\,$^{4}$D$^{\rm o}$\,--\,2p$^{2}$3s\,$^{4}$P $\lambda$4652 multiplet. The calculations were based on the unpublished effective recombination coefficients of PJS.} \label{oii:v1} \end{center} \end{figure*} \begin{figure*} \begin{center} \includegraphics[width=10cm,angle=-90]{fig07.pdf} \caption{Same as Fig.\,\ref{oii:v1} but for the fractional intensities of the O~{\sc ii} M10 2p$^{2}$3d\,$^{4}$F\,--\,2p$^{2}$3p\,$^{4}$D$^{\rm o}$ $\lambda$4075 multiplet.} \label{oii:v10} \end{center} \end{figure*} \begin{figure*} \begin{center} \includegraphics[width=10cm,angle=-90]{fig08.pdf} \caption{Same as Fig.\,\ref{oii:v1} but for the fractional intensities of the O~{\sc ii} M48 4f\,G[5,4,3]$^{\rm o}$\,--\,3d\,$^4$F $\lambda$4089 multiplet.} \label{oii:v48} \end{center} \end{figure*} \begin{figure} \begin{center} \includegraphics[width=10cm,angle=-90]{fig09.pdf} \caption{Fractional populations of the Ne$^{2+}$ $^{3}$P$_{2}$, $^{3}$P$_{1}$ and $^{3}$P$_{0}$ fine-structure levels. Two temperature cases, 1000 and 10\,000~K, are shown. This figure is obtained by solving the level population equations for a five-level atomic model.} \label{neiii:frac} \end{center} \end{figure} \subsection{\label{diagnose:cii} Electron temperature from the C~{\sc ii} recombination lines} Most C~{\sc ii} lines detected in the spectrum of NGC\,7009 are mainly excited by radiative recombination, except for a few for which dielectronic recombination dominates. 
Examples of the latter include the C~{\sc ii} M28.01 3d$^{\prime}$\,$^2$F$^{\rm o}$\,--\,3p$^{\prime}$\,$^2$D $\lambda$8797 multiplet, which originates from dielectronic capture of an electron to the 2s2p($^{3}$P$^{\rm o}_{J}$)3d\,$^{2}$F$^{\rm o}$ autoionizing state that lies 0.41~eV (Moore \citealt{moore1993}) above the first ionization threshold 2s$^{2}$\,$^1$S$_{0}$, and the subsequent decay to the 2s2p($^{3}$P$^{\rm o}_{J}$)3p\,$^{2}$D bound state that lies about 1.00~eV below the ionization threshold. Fig.\,\ref{ciii_diele} is a schematic diagram that shows the dielectronic and radiative recombination of C~{\sc ii}. An electron in an autoionizing state either decays to another autoionizing or bound state with the emission of radiation, or autoionizes to a true continuum state, leaving an ion and a free electron with no emission of radiation. The latter process usually dominates, so the populations of autoionizing states are close to those given by the Saha and Boltzmann equations, as in local thermodynamic equilibrium (LTE). The emissivity of a dielectronic recombination line is therefore sensitive to electron temperature through the Boltzmann factor $\exp{(-E/kT_\mathrm{e})}$, where $E$ is the excitation energy of the upper state relative to the ionization threshold. By comparing the strength of a dielectronic recombination line to that of an ordinary (i.e. radiative-recombination-dominated) recombination line, whose emissivity has a relatively weak power-law dependence on electron temperature ($\sim\,T_\mathrm{e}^{-\alpha}$, where $\alpha\sim$1), one can determine the electron temperature. The C~{\sc ii} dielectronic lines have been used to determine electron temperatures in the stellar winds of PNe (e.g. De~Marco et al. \citealt{dsb1998}). The strongest C~{\sc ii} recombination line detected in the spectra of nebulae is the M6 4f\,$^{2}$F$^{\rm o}$\,--\,3d\,$^{2}$D $\lambda$4267 line, which is excited by radiative recombination only.
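The two temperature dependences contrasted above can be illustrated with a minimal numerical sketch. The $T_\mathrm{e}^{-3/2}$ LTE factor, the power-law index and the overall normalizations below are schematic assumptions for illustration only, not the actual C~{\sc ii} recombination coefficients; only the 0.41~eV excitation energy is taken from the text:

```python
import math

K_BOLTZ_EV = 8.617333e-5  # Boltzmann constant in eV/K

def dielectronic_emissivity(T_e, E_exc=0.41):
    """Relative emissivity of a dielectronic line whose autoionizing
    parent state lies E_exc (eV) above the ionization threshold; a
    Saha-Boltzmann (LTE) population gives ~ T_e^-1.5 * exp(-E/kT_e)."""
    return T_e ** -1.5 * math.exp(-E_exc / (K_BOLTZ_EV * T_e))

def radiative_emissivity(T_e, alpha=1.0):
    """Relative emissivity of an ordinary recombination line,
    approximated by the weak power law ~ T_e^-alpha."""
    return T_e ** -alpha

def line_ratio(T_e):
    """Dielectronic-to-radiative line ratio (arbitrary units); the
    Boltzmann factor makes it rise steeply with temperature."""
    return dielectronic_emissivity(T_e) / radiative_emissivity(T_e)

for T in (3000.0, 5000.0, 10000.0):
    print(T, line_ratio(T))
```

Because the ratio is monotonic in $T_\mathrm{e}$ over the relevant range, an observed value maps to a unique temperature, which is the basis of the $\lambda$8794/$\lambda$4267 diagnostic.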
The upper state of the $\lambda$4267 line lies about 3.4~eV below the ionization threshold 2s$^{2}$\,$^{1}$S$_{0}$ (see Fig.\,\ref{ciii_diele}); its population is far from LTE and thus has a very different temperature dependence from that of the upper state of the M28.01 $\lambda$8797 transition (i.e. 3d$^{\prime}$\,$^2$F$^{\rm o}$). We use the intensity ratio of the $\lambda$8793.80 (3d$^{\prime}$\,$^2$F$^{\rm o}_{7/2}$\,--\,3p$^{\prime}$\,$^2$D$_{5/2}$) line, the stronger fine-structure component of the C~{\sc ii} M28.01 multiplet, and the $\lambda$4267 line to determine electron temperature. In NGC\,7009, this line ratio yields a temperature of $\sim$3000~K, as shown in Fig.\,\ref{cii_te}. The atomic data used here are the effective dielectronic and radiative recombination coefficients of Nussbaumer \& Storey \cite{ns1984} and P\'{e}quignot, Petitjean \& Boisson \cite{ppb1991}, respectively. Measurements of the C~{\sc ii} M28.01 lines are presented in Section\,\ref{orls:cii:v28.01}. \begin{figure} \begin{center} \includegraphics[width=7.0cm,angle=-90]{fig10.pdf} \caption{Schematic figure showing the dielectronic recombination of C~{\sc ii} through the autoionizing state between the ionization thresholds 2s$^{2}$\,$^{1}$S$_{0}$ and 2s2p\,$^{3}$P$^{\rm o}_{J}$ of C~{\sc iii}. The electrons captured to the 2s2p($^{3}$P$^{\rm o}_{J}$)3d\,$^{2}$F$^{\rm o}$ autoionizing state either go back to a true continuum state 2s$^{2}$($^{1}$S$_{0}$)$\kappa_{0}l_{0}$ through autoionization, or decay to the 2s2p($^{3}$P$^{\rm o}_{J}$)3p\,$^{2}$D bound state through the C~{\sc ii} M28.01 $\lambda$8797 transition. Also shown is the C~{\sc ii} M6 $\lambda$4267 radiative recombination transition between the 4f\,$^{2}$F$^{\rm o}$ and the 3d\,$^{2}$D bound states.} \label{ciii_diele} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=7.0cm,angle=-90]{fig11.pdf} \caption{The C~{\sc ii} $\lambda$8794/$\lambda$4267 ratio as a function of electron temperature.
The plot is based on the effective dielectronic and radiative recombination coefficients calculated by Nussbaumer \& Storey \cite{ns1984} and P\'{e}quignot, Petitjean \& Boisson \cite{ppb1991}, respectively. The observed C~{\sc ii} $\lambda$8794/$\lambda$4267 ratio in NGC\,7009 yields an electron temperature of $\sim$3000$\pm$250~K. The error bar is calculated from measurement uncertainties of the two lines.} \label{cii_te} \end{center} \end{figure} \subsection{\label{diagnose:niioii} Electron temperatures and densities from the N~{\sc ii} and O~{\sc ii} recombination lines} Under the low-density conditions in nebulae, the relative populations of the ground-term fine-structure levels of a recombining ion (e.g., N$^{2+}$ $^{2}$P$^{\rm o}_{1/2}$ and $^{2}$P$^{\rm o}_{3/2}$ in the case of N~{\sc ii}) vary with electron density, and this is reflected in the density dependence of the resultant emissivities (i.e. the effective recombination coefficients) of different recombination lines within a multiplet of the recombined ion. Thus, by comparing the intensities of two ORLs belonging to the same multiplet but formed from different parent levels, one can determine the electron density. Under typical nebular conditions, emissivities of heavy-element recombination lines have only a weak, power-law dependence on electron temperature, $\epsilon\propto{T_\mathrm{e}^{-\alpha}}$ ($\alpha\sim{1}$), and in general, the line ratios depend very little on temperature. However, the temperature sensitivity still differs for recombination lines decaying from levels of different orbital angular momentum quantum number $l$, and this difference becomes more pronounced if two lines of very different $l$ are compared. Thus the intensity ratio of two lines from multiplets of different $l$ can be used to determine electron temperature, provided that the measurements are precise enough (e.g. Liu \citealt{liu2003}; FSL11).
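The density dependence of the parent-level populations described above follows from the statistical balance between collisional excitation and radiative decay. A minimal two-level sketch is given below; the level energy, statistical weights, transition probability and collisional rate coefficient are placeholder values chosen for illustration, not the actual N$^{2+}$ or O$^{2+}$ atomic data:

```python
import math

K_BOLTZ_EV = 8.617333e-5  # Boltzmann constant in eV/K

def upper_fraction(n_e, T_e, E=0.006, g_l=2.0, g_u=4.0,
                   A_ul=5.0e-5, q_ul=1.0e-7):
    """Fraction of ions in the upper of two ground-term fine-structure
    levels, from the two-level statistical balance
        n_u * (A_ul + n_e * q_ul) = n_l * n_e * q_lu,
    with the detailed-balance relation
        q_lu = q_ul * (g_u / g_l) * exp(-E / kT_e).
    E is in eV, A_ul in s^-1, q_ul in cm^3 s^-1 (placeholder values)."""
    q_lu = q_ul * (g_u / g_l) * math.exp(-E / (K_BOLTZ_EV * T_e))
    ratio = n_e * q_lu / (A_ul + n_e * q_ul)  # n_u / n_l
    return ratio / (1.0 + ratio)

# At low density radiative decay keeps the upper level underpopulated;
# at high density collisions drive the ratio to its Boltzmann limit.
for n_e in (1.0e2, 1.0e4, 1.0e6):
    print(n_e, upper_fraction(n_e, T_e=10000.0))
```

Because the recombination lines of a multiplet form from specific parent levels, this density-dependent population ratio translates directly into density-dependent line ratios of the kind shown in Figs.\,\ref{nii:v3} and \ref{oii:v1}.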
Figs.\,\ref{nii_te:v3v39} and \ref{nii_ne:v3} show that the N~{\sc ii} line ratio $\lambda$5679/$\lambda$4041 observed in NGC\,7009 yields an electron temperature of $\sim$1200$\pm$200~K, whereas the $\lambda$5679/$\lambda$5666 ratio yields a density of 2000\,--\,3000~cm$^{-3}$. The $\lambda$5679.56 line is the strongest fine-structure component of the N~{\sc ii} M3 3p\,$^{3}$D\,--\,3s\,$^{3}$P$^{\rm o}$ $\lambda$5679 multiplet, and forms exclusively from the $^2$P$^{\rm o}_{3/2}$ core capturing an electron plus cascades from higher levels, while the second strongest line $\lambda$5666.63 can form, in addition, from recombination of the $^2$P$^{\rm o}_{1/2}$ core. For the target N$^{2+}$, the population of the fine-structure level $^2$P$^{\rm o}_{3/2}$ relative to $^2$P$^{\rm o}_{1/2}$ increases with electron density due to collisional excitation, and this results in an increase of the $\lambda$5679.56 intensity relative to the $\lambda$5666.63 line with density, as shown in Fig.\,\ref{nii:v3}. Thus the $\lambda$5679/$\lambda$5666 ratio can be used as a density diagnostic. The $\lambda$4041.31 line belongs to the N~{\sc ii} M39b 4f\,G[9/2]\,--\,3d\,$^3$F$^{\rm o}$ multiplet and is the strongest among the N~{\sc ii} 4f\,--\,3d array. It forms from recombination of the $^2$P$^{\rm o}_{3/2}$ core plus cascades from higher levels. The intensity ratio of the $\lambda$5679.56 and $\lambda$4041.31 lines has a relatively strong temperature dependence, and thus can serve as a temperature diagnostic. In the spectrum of NGC\,7009, the $\lambda$5666.63 line is free of blending and is amongst the best observed N~{\sc ii} ORLs, while the $\lambda$5679.56 and $\lambda$4041.31 lines are affected by line blending. Accurate measurements of the latter two lines were obtained using multi-Gaussian profile fits (see Section\,\ref{nii_orls}). The M19 2p3d\,$^3$F$^{\rm o}$\,--\,2p3p\,$^3$D multiplet is the strongest of the 3d\,--\,3p configuration of N~{\sc ii}. 
The density dependence of the relative emissivities of the two strongest fine-structure components of M19 is noticeable (Fig.\,\ref{nii:v19}). The intensity ratio of those components, $\lambda$5005.15/($\lambda$5001.14\,+\,$\lambda$5001.48), may serve as another density diagnostic. Similarly, the intensity ratio of the $\lambda$5005.15 and M39b $\lambda$4041.31 lines may be used to determine electron temperature. Figs.\,\ref{nii_te:v19v39} and \ref{nii_ne:v19} show the $\lambda$5005/$\lambda$4041 and $\lambda$5005/$\lambda$5001 ratios of N~{\sc ii} as a function of electron temperature and density, respectively. However, accurate measurements of the N~{\sc ii} M19 lines are essentially impossible due to the presence of the extraordinarily strong [O~{\sc iii}] $\lambda$5007 line, which is often strongly saturated in deep spectra. Some N~{\sc ii} states of parentage other than $^{2}$P$^{\rm o}$ have energies even higher than that of the 2p($^{2}$P$^{\rm o}$)4f\,G[9/2] spectral term, which is the upper state of the M39b $\lambda$4041.31 line. The intensity ratio of an N~{\sc ii} recombination line that originates from one of those high-energy states to the M3 $\lambda$5679.56 line can also be used as a temperature diagnostic. Possible candidates in the optical waveband for such an application are, e.g., the M63 3p$^{\prime}$\,$^{5}$D$^{\rm o}$\,--\,3s$^{\prime}$\,$^{5}$P, M66 3d$^{\prime}$\,$^{5}$F\,--\,3p$^{\prime}$\,$^{5}$D$^{\rm o}$ and M72 4f$^{\prime}$\,$^{5}$G$^{\rm o}$\,--\,3d$^{\prime}$\,$^{5}$F multiplets. According to the experimental data given by NIST\footnote{The NIST Atomic Spectra Database,\\ \url{http://physics.nist.gov/PhysRefData/ASD/levels_form.html}}, the upper state of the M63 multiplet is about 1.85~eV below the ionization threshold N~{\sc iii} $^{2}$P$^{\rm o}_{1/2}$, while the upper states of the M66 and M72 multiplets are 0.53 and 3.67~eV, respectively, above this threshold.
The R-matrix calculation of the bound-state energy levels of N~{\sc ii} in FSL11 only extends to about 0.45~eV (corresponding to $n\,=\,11$ in the principal series of N~{\sc ii}) below the ionization threshold. Thus only the energy levels of the 2s2p$^{2}$($^{4}$P)\,3s and 2s2p$^{2}$($^{4}$P)\,3p configurations (i.e., the levels of the $^{5}$P, $^{3}$P, $^{3}$S$^{\rm o}$, $^{5}$D$^{\rm o}$, $^{5}$P$^{\rm o}$, $^{3}$D$^{\rm o}$, $^{5}$S$^{\rm o}$ and $^{3}$P$^{\rm o}$ spectral terms, in the energy order given by NIST) are included in the R-matrix calculation, and the N~{\sc ii} recombination lines that originate from those levels are precisely calculated. In principle, the intensity ratio of the $\lambda$5679.56 and the $\lambda$5535.36 lines, the strongest fine-structure components of the M3 3p\,$^{3}$D\,--\,3s\,$^{3}$P$^{\rm o}$ and the M63 3p$^{\prime}$\,$^{5}$D$^{\rm o}$\,--\,3s$^{\prime}$\,$^{5}$P multiplets of N~{\sc ii}, respectively, can be used to determine electron temperature. Fig.\,\ref{nii_te:extra} shows the $\lambda$5679/$\lambda$5535 ratio as a function of electron temperature, and this relation is quite insensitive to electron density on a logarithmic scale. However, accurate measurement of the $\lambda$5535.36 line is difficult owing to its weakness (about 10$^{4}$ times weaker than H$\beta$). We have not detected any N~{\sc ii} lines of parentage other than $^{2}$P$^{\rm o}$ in the deep spectrum of NGC\,7009. The $\lambda$4649.13 line is the strongest of the O~{\sc ii} M1 3p\,$^{4}$D$^{\rm o}$\,--\,3s\,$^{4}$P multiplet, and forms only from recombination of the $^3$P$_{2}$ core plus cascades from higher energy levels, while another O~{\sc ii} M1 line $\lambda$4661.63 can form, in addition, from recombination of the $^3$P$_{0}$ and $^3$P$_{1}$ cores.
For the recombining ion O$^{2+}$, the population of the fine-structure level $^3$P$_{2}$ relative to $^3$P$_{0}$ and $^3$P$_{1}$ increases with electron density due to collisional excitation, and so does the resultant emissivity of the $\lambda$4649.13 line relative to the $\lambda$4661.63 line, as is shown in Fig.\,\ref{oii:v1}. Thus the intensity ratio $\lambda$4649/$\lambda$4662 can serve as a density diagnostic. The $\lambda$4089.29 line (M48a 4f\,G[5]$^{\rm o}_{11/2}$\,--\,3d\,$^{4}$F$_{9/2}$) is the strongest amongst the O~{\sc ii} 4f\,--\,3d array, and forms from recombination of the $^3$P$_{2}$ core. The intensity ratio of the $\lambda$4089.29 and the $\lambda$4649.13 lines has a strong temperature dependence, and can be used to determine electron temperature. Figs.\,\ref{oii_te:v1v48} and \ref{oii_ne:v1} show that the observed O~{\sc ii} line ratios $\lambda$4649/$\lambda$4089 and $\lambda$4649/$\lambda4662$ in NGC\,7009 yield an electron temperature of $\sim$1400$\pm$300~K and a density of 2500\,--\,4000~cm$^{-3}$, respectively. Although the $\lambda$4661.63 line is the third strongest in the O~{\sc ii} M1 multiplet, it is free from line blending and thus best observed, while the $\lambda$4649.13 and $\lambda$4089.29 lines both suffer from line blending: the $\lambda$4649.13 line is blended with another O~{\sc ii} M1 line $\lambda$4650.84 and the three C~{\sc iii} M1 lines $\lambda\lambda$4647.42, 4650.25 and 4651.47; the $\lambda$4089.29 line is contaminated by the Si~{\sc iv} M1 $\lambda$4088.86 (4p\,$^{2}$P$^{\rm o}_{3/2}$\,--\,4s\,$^{2}$S$_{1/2}$) line. Multi-Gaussian fitting was carried out to derive the intensities of the two O~{\sc ii} ORLs, and both intensities are accurate to within 20 per cent. Details of spectral fits are given in Section\,\ref{oii_orls}. The M10 3d\,$^4$F\,--\,3p\,$^4$D$^{\rm o}$ $\lambda$4075 multiplet is the strongest transition of the 3d\,--\,3p configuration of O~{\sc ii}. 
The fractional intensities of the $\lambda$4075.86 and $\lambda$4069.89,62 lines, the three fine-structure components of M10, show opposite trends as a function of electron density (Fig.\,\ref{oii:v10}), so the intensity ratio of the two may serve as another density diagnostic. Here the $\lambda$4075.86 line is the strongest component of the M10 multiplet. The intensity ratio of the $\lambda$4075.86 line and the $\lambda$4089.29 line, the strongest fine-structure component of the M48a 4f\,G[5]$^{\rm o}$\,--\,3d\,$^{4}$F multiplet of O~{\sc ii}, can be used as another temperature diagnostic. Figs.\,\ref{oii_te:v10v48} and \ref{oii_ne:v10} show the $\lambda$4076/$\lambda$4089 and $\lambda$4076/$\lambda$4070 ratios as a function of electron temperature and density, respectively. Here the intensity of the $\lambda$4070 line is a sum of the $\lambda$4069.89 (M10 3d\,$^{4}$F$_{5/2}$\,--\,3p\,$^{4}$D$^{\rm o}_{3/2}$) and $\lambda$4069.62 (M10 3d\,$^{4}$F$_{3/2}$\,--\,3p\,$^{4}$D$^{\rm o}_{1/2}$) lines. If we assume a density of about 4300~cm$^{-3}$, as derived from CEL ratios (Paper~I), the electron temperature deduced from the O~{\sc ii} $\lambda$4076/$\lambda$4089 ratio is 1150$\pm$300~K for NGC\,7009. The electron density derived from the O~{\sc ii} $\lambda$4076/$\lambda$4070 ratio carries a large uncertainty, due to the relatively large measurement uncertainties of the two lines. The $\lambda$4075.86 line is blended with the [S~{\sc ii}] $\lambda$4076.35 (3p$^{3}$\,$^{2}$P$^{\rm o}_{1/2}$\,--\,$^{4}$S$^{\rm o}_{3/2}$) line, while the $\lambda$4069.89,62 line is blended with the [S~{\sc ii}] $\lambda$4068.60 (3p$^{3}$\,$^{2}$P$^{\rm o}_{3/2}$\,--\,$^{4}$S$^{\rm o}_{3/2}$) line and the three C~{\sc iii} M16 5g\,$^{3}$G\,--\,4f\,$^{3}$F$^{\rm o}$ lines $\lambda\lambda$4067.94, 4068.92 and 4070.31. Multi-Gaussian profile fitting was carried out to obtain line fluxes (cf. Section\,\ref{oii_orls:v10}).
Several O~{\sc ii} recombination lines with parentage other than $^{3}$P have been detected in the spectrum of NGC\,7009. These lines belong to the M15 3p$^{\prime}$\,$^{2}$F$^{\rm o}$\,--\,3s$^{\prime}$\,$^{2}$D, M36 3d$^{\prime}$\,$^{2}$G\,--\,3p$^{\prime}$\,$^{2}$F$^{\rm o}$, M101 4f$^{\prime}$\,H[5]$^{\rm o}$\,--\,3d$^{\prime}$\,$^{2}$G, and M105 4f$^{\prime}$\,P[1]$^{\rm o}$\,--\,3d$^{\prime}$\,$^{2}$S multiplets of O~{\sc ii}. According to the experimental data from NIST, the upper states of the M15 and M36 multiplets are 6.76 and 3.80~eV, respectively, below the ionization threshold O~{\sc iii} $^{3}$P$_{0}$, while the upper states of the M101 and M105 multiplets are about 0.89 and 0.87~eV, respectively, below this threshold. In the most recent calculation of PJS for the O~{\sc ii} effective recombination coefficients, only the transitions between the levels with principal quantum number $n\leq$6 (i.e. corresponding to 1.50~eV below the ionization threshold $^{3}$P$_{0}$) were presented. Thus only the effective recombination coefficients of the M15 and M36 lines are available. The strongest fine-structure components of the M15 and M36 multiplets are $\lambda$4590.97 (3p$^{\prime}$\,$^{2}$F$^{\rm o}_{7/2}$\,--\,3s$^{\prime}$\,$^{2}$D$_{5/2}$) and $\lambda$4189.79 (3d$^{\prime}$\,$^{2}$G$_{9/2}$\,--\,3p$^{\prime}$\,$^{2}$F$^{\rm o}_{7/2}$), respectively, and both lines are detected in the deep spectrum of NGC\,7009, as shown in Figs.\,\ref{4555-4625} and \ref{4176-4260}. Although the effective recombination coefficients of the M101 and M105 multiplets are not available, the strongest fine-structure component of the M101 multiplet, the $\lambda$4253.90 (4f$^{\prime}$\,H[5]$^{\rm o}_{11/2}$\,--\,3d$^{\prime}$\,$^{2}$G$_{9/2}$) line, is also detected (Fig.\,\ref{4176-4260}). Fig.\,\ref{oii_te:extra} shows the O~{\sc ii} line ratios $\lambda$4649.13/$\lambda$4590.97 and $\lambda$4649.13/$\lambda$4189.79 as a function of electron temperature. 
The figure also shows that the two line ratio--temperature relations are insensitive to electron density, indicating that they are good temperature diagnostics. Both line ratios detected in the spectrum of NGC\,7009 yield electron temperatures close to 3600~K. \begin{figure} \begin{center} \includegraphics[width=7.5cm,angle=-90]{fig12.pdf} \caption{The N~{\sc ii} $\lambda$5679/$\lambda$4041 ratio as a function of electron temperature. Different curves represent different density cases, and are based on the calculation of FSL11. The observed N~{\sc ii} $\lambda$5679/$\lambda$4041 ratio in NGC\,7009 yields an electron temperature of about 1200$\pm$200~K. The error bar was calculated from measurement uncertainties of the two lines.} \label{nii_te:v3v39} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=7.5cm,angle=-90]{fig13.pdf} \caption{The N~{\sc ii} $\lambda$5679/$\lambda$5666 ratio as a function of electron density. Different curves represent different temperature cases, and are based on the calculation of FSL11. The observed N~{\sc ii} $\lambda$5679/$\lambda$5666 ratio in NGC\,7009 yields an electron density of 2000\,--\,3000~cm$^{-3}$. The error bar was calculated from measurement uncertainties of the two lines.} \label{nii_ne:v3} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=7.5cm,angle=-90]{fig14.pdf} \caption{The N~{\sc ii} $\lambda$5005/$\lambda$4041 ratio as a function of electron temperature. Different curves represent different density cases, and are based on the calculation of FSL11.} \label{nii_te:v19v39} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=7.5cm,angle=-90]{fig15.pdf} \caption{The N~{\sc ii} $\lambda$5005/$\lambda$5001 ratio as a function of electron density. 
Here the intensity of the $\lambda$5001 line is a sum of the $\lambda$5001.48 (M19 3d\,$^{3}$F$^{\rm o}_{3}$\,--\,3p\,$^{3}$D$_{2}$) and $\lambda$5001.14 (M19 3d\,$^{3}$F$^{\rm o}_{2}$\,--\,3p\,$^{3}$D$_{1}$) lines. Different curves represent different temperature cases, and are based on the calculation of FSL11.} \label{nii_ne:v19} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=7.5cm,angle=-90]{fig16.pdf} \caption{The N~{\sc ii} $\lambda$5679/$\lambda$5535 ratio as a function of electron temperature. Here the intensity of the $\lambda$5535 line is a sum of the $\lambda$5535.36 (M63 3p$^{\prime}$\,$^{5}$D$^{\rm o}_{4}$\,--\,3s$^{\prime}$\,$^{5}$P$_{3}$) and $\lambda$5535.36 (M63 3p$^{\prime}$\,$^{5}$D$^{\rm o}_{1}$\,--\,3s$^{\prime}$\,$^{5}$P$_{3}$) lines. Different curves represent different density cases, and are based on the calculation of FSL11.} \label{nii_te:extra} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=7.5cm,angle=-90]{fig17.pdf} \caption{The O~{\sc ii} $\lambda$4649/$\lambda$4089 ratio as a function of electron temperature. Different curves represent different density cases, and are based on the unpublished calculation of PJS. The observed O~{\sc ii} $\lambda$4649/$\lambda$4089 ratio in NGC\,7009 yields an electron temperature of about 1400$\pm$300~K. The error bar was calculated from measurement uncertainties of the two lines.} \label{oii_te:v1v48} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=7.5cm,angle=-90]{fig18.pdf} \caption{The O~{\sc ii} $\lambda$4649/$\lambda$4662 ratio as a function of electron density. Different curves represent different temperature cases, and are based on the unpublished calculation of PJS. The observed O~{\sc ii} $\lambda$4649/$\lambda$4662 ratio in NGC\,7009 yields an electron density of 2500\,--\,4000~cm$^{-3}$.
The error bar was calculated from measurement uncertainties of the two lines.} \label{oii_ne:v1} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=7.5cm,angle=-90]{fig19.pdf} \caption{Same as Fig.\,\ref{oii_te:v1v48} but for the O~{\sc ii} $\lambda$4076/$\lambda$4089 ratio as a function of electron temperature.} \label{oii_te:v10v48} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=7.5cm,angle=-90]{fig20.pdf} \caption{Same as Fig.\,\ref{oii_ne:v1} but for the O~{\sc ii} $\lambda$4076/$\lambda$4070 ratio as a function of electron density. Here the intensity of the $\lambda$4070 line is a sum of the $\lambda$4069.89 (M10 3d\,$^{4}$F$_{5/2}$\,--\,3p\,$^{4}$D$^{\rm o}_{3/2}$) and $\lambda$4069.62 (M10 3d\,$^{4}$F$_{3/2}$\,--\,3p\,$^{4}$D$^{\rm o}_{1/2}$) lines.} \label{oii_ne:v10} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=7.5cm,angle=0]{fig21.pdf} \caption{The O~{\sc ii} $\lambda$4649/$\lambda$4591 ($upper$) and $\lambda$4649/$\lambda$4189 ($lower$) ratios as a function of electron temperature. Here the intensity of the $\lambda$4189 line is a sum of the $\lambda$4189.79 (M36 3d$^{\prime}$\,$^{2}$G$_{9/2}$\,--\,3p$^{\prime}$\,$^{2}$F$^{\rm o}_{7/2}$) and $\lambda$4189.59 (M36 3d$^{\prime}$\,$^{2}$G$_{7/2}$\,--\,3p$^{\prime}$\,$^{2}$F$^{\rm o}_{7/2}$) lines. Different curves represent different density cases, and are based on the unpublished calculation of PJS. Both O~{\sc ii} line ratios observed in the spectrum of NGC\,7009 yield electron temperatures close to 3600$\pm$500~K. 
The error bars were calculated from measurement uncertainties of the lines.} \label{oii_te:extra} \end{center} \end{figure} \subsection{\label{diagnose:neii} The Ne~{\sc ii} recombination lines as potential plasma diagnostics} So far no attempts have been made at plasma diagnostics based on the Ne~{\sc ii} recombination spectrum, partly due to the lack of suitable effective recombination coefficients. Since all Ne~{\sc ii} effective recombination coefficients were calculated under the {\it LS}\,coupling scheme, and relative populations of the $^{3}$P$_{2}$, $^{3}$P$_{1}$ and $^{3}$P$_{0}$ parent levels were assumed to be proportional to their statistical weights, no density diagnostic is possible with the currently available atomic data. However, the Ne~{\sc ii} recombination line ratios may still serve as temperature diagnostics using the effective recombination coefficients of Kisielius et al. \cite{kisielius1998}. The M2 3p~$^{4}$D$^{\rm o}$\,--\,3s~$^{4}$P $\lambda$3337 multiplet is the strongest transition of the 3p\,--\,3s configuration, and M13 3d~$^{4}$F\,--\,3p~$^{4}$D$^{\rm o}$ $\lambda$3220 is the strongest multiplet of the 3d\,--\,3p configuration of Ne~{\sc ii}. The intensity ratio of the strongest fine-structure components of those two multiplets, $\lambda$3334.84/$\lambda$3218.19, may serve as a temperature diagnostic, as shown in the upper panel of Fig.\,\ref{neii_te}. In order to obtain a reliable electron temperature, the measurement uncertainty of the $\lambda$3334/$\lambda$3218 ratio needs to be less than 10 per cent, which is very demanding to achieve. Recombination of the Ne$^{2+}$ $^1$D core plus cascades gives rise to another series of Ne~{\sc ii} recombination lines, and the strongest multiplet of this series is M9 3p$^{\prime}$\,$^2$F$^{\rm o}$\,--\,3s$^{\prime}$\,$^2$D $\lambda$3571.
The intensity ratio of the $\lambda$3568.50 line, the strongest fine-structure component of the M9 multiplet, and the M2 $\lambda$3334.84 line may be used as another temperature diagnostic, as shown in the lower panel of Fig.\,\ref{neii_te}. The calculation of Kisielius et al. \cite{kisielius1998} shows that the $\lambda$3568/$\lambda$3334 ratio is only marginally sensitive to electron temperature. In order to derive a reliable temperature, the line ratio (especially the $\lambda$3568 line) needs to be measured to very high accuracy. Although the temperature range considered in the calculation of Kisielius et al. \cite{kisielius1998} is from 1000 to 20\,000~K, the current analytic fit to the effective recombination coefficient for the $\lambda$3568 line is only valid for 2000\,--\,20\,000~K. As a consequence, use of the diagnostic curve of the $\lambda$3568/$\lambda$3334 ratio in Fig.\,\ref{neii_te} outside this temperature range is not recommended. The Ne~{\sc ii} $\lambda$3334/$\lambda$3218 and $\lambda$3568/$\lambda$3334 ratios observed in NGC\,7009 are 1.86 and 0.40, respectively, both falling outside the diagnostic ranges of Fig.\,\ref{neii_te}. \begin{figure} \begin{center} \includegraphics[width=7.5cm,angle=0]{fig22.pdf} \caption{The Ne~{\sc ii} recombination line ratios as a function of electron temperature. $Upper$: The $\lambda$3334/$\lambda$3218 ratio. $Lower$: The $\lambda$3568/$\lambda$3334 ratio. The plot is based on the effective recombination coefficients of Kisielius et al. \cite{kisielius1998}. Only the density case of 10\,000~cm$^{-3}$ is presented.} \label{neii_te} \end{center} \end{figure} \subsection{\label{diagnose:ciii} The C~{\sc iii}, N~{\sc iii} and O~{\sc iii} recombination lines as potential temperature diagnostics} In this Section, we discuss the possibility of using the C~{\sc iii}, N~{\sc iii} and O~{\sc iii} optical recombination line ratios to determine electron temperatures.
Although some of those lines are detected in the spectrum of NGC\,7009, they are not used for plasma diagnostics in the current paper, due to the lack of adequate atomic data. Unless otherwise specified, the effective recombination coefficients of Nussbaumer \& Storey \cite{ns1984} and P\'{e}quignot, Petitjean \& Boisson \cite{ppb1991} are used to create the diagnostic curves. The C~{\sc iii} lines are excited by recombination only, and the ratios of the best observed lines can be used as temperature probes. The intensity ratio of the M1 $\lambda$4649 (3p\,$^{3}$P$^{\rm o}$\,--\,3s\,$^{3}$S) and M16 $\lambda$4069 (5g\,$^{3}$G\,--\,4f\,$^{3}$F$^{\rm o}$) multiplets of C~{\sc iii} is sensitive to electron temperature, as shown in the upper panel of Fig.\,\ref{ciii_te}. The intensity ratio of the C~{\sc iii} triplet M1 $\lambda$4649 and singlet M18 $\lambda$4187 (5g\,$^{1}$G\,--\,4f\,$^{1}$F$^{\rm o}$) can also be used to determine electron temperature, as shown in the lower panel of Fig.\,\ref{ciii_te}. However, those C~{\sc iii} lines all suffer from line blending. The $\lambda$4187 line is blended with the O~{\sc ii} M36 $\lambda$4185.45 (3d$^{\prime}$\,$^{2}$G$_{7/2}$\,--\,3p$^{\prime}$\,$^{2}$F$^{\rm o}_{5/2}$) line, but its intensity can be measured to a high accuracy using multi-Gaussian profile fitting (Fig.\,\ref{4176-4260}). The C~{\sc iii} M1 $\lambda$4649 triplets are blended with the O~{\sc ii} M1 3p\,$^{4}$D$^{\rm o}$\,--\,3s\,$^{4}$P lines $\lambda\lambda$4649.13 and 4650.84 (Fig.\,\ref{4625-4680}), and the C~{\sc iii} M16 $\lambda$4069 triplets are blended with three O~{\sc ii} M10 3d\,$^{4}$F\,--\,3p\,$^{4}$D$^{\rm o}$ lines and the [S~{\sc ii}] $\lambda$4068.60 line (Fig.\,\ref{4060-4115}). Intensities of the C~{\sc iii} M1 and M16 multiplets are obtained from multi-Gaussian profile fitting. 
The intensity ratio of the fine-structure components of each multiplet was assumed to be as in {\it LS}\,coupling (Sections\,\ref{oii_orls:v1} and \ref{oii_orls:v10}). In NGC\,7009, the C~{\sc iii} $I$(M1~$\lambda$4649)/$I$(M16~$\lambda$4069) and $I$(M1~$\lambda$4649)/$I$(M18~$\lambda$4187) ratios are 1.05 and 3.41, respectively. Both ratios are beyond the diagnostic ranges of Fig.\,\ref{ciii_te}. Fig.\,\ref{ciii_te} shows that the C~{\sc iii} $I$(M1~$\lambda$4649)/$I$(M16~$\lambda$4069) and $I$(M1~$\lambda$4649)/$I$(M18~$\lambda$4187) ratios increase with electron temperature below 10\,000~K, but both decrease when the temperature goes beyond $\sim$12\,600~K ($\log{T_\mathrm{e}}\sim$\,4.1). To explain these trends, the effective radiative ($\alpha_{\rm eff}^{\rm R}$) and dielectronic ($\alpha_{\rm eff}^{\rm D}$) recombination coefficients as well as the total effective recombination coefficients ($\alpha_{\rm eff}^{\rm Tot}$) of the C~{\sc iii} M1 and M16 multiplets are shown in Fig.\,\ref{ciii_coeff} as a function of electron temperature. Below 10\,000~K, the C~{\sc iii} M16 multiplet is dominated by radiative recombination. In this temperature regime, the total effective recombination coefficient of the C~{\sc iii} M16 multiplet decreases much faster than that of the M1 multiplet as temperature increases. When the temperature goes above 10\,000~K, the rate of decrease for the M16 multiplet slows because its monotonically increasing dielectronic recombination coefficient becomes relatively significant, while that of the M1 multiplet changes little. The most prominent permitted transitions of N~{\sc iii} in the optical, the M1 $\lambda$4100 (3p\,$^{2}$P$^{\rm o}$\,--\,3s\,$^{2}$S) and M2 $\lambda$4641 (3d\,$^{2}$D\,--\,3p\,$^{2}$P$^{\rm o}$) multiplets, are affected by the Bowen fluorescence mechanism (e.g. Bowen \citealt{bowen1934}, \citealt{bowen1935}).
The N~{\sc iii} M18 $\lambda$4379.11 (5g\,$^{2}$G\,--\,4f\,$^{2}$F$^{\rm o}$) line is amongst the best observed N~{\sc iii} lines in the spectrum of NGC\,7009 that are not affected by fluorescence processes. The intensity ratio of the $\lambda$4379.11 line and the $\lambda$4195.76 line, which is the second strongest fine-structure component of the N~{\sc iii} M6 3p$^{\prime}$\,$^{2}$D\,--\,3s$^{\prime}$\,$^{2}$P$^{\rm o}$ multiplet, can be used as a temperature diagnostic. Fig.\,\ref{niii_te} shows the N~{\sc iii} $\lambda$4379/$\lambda$4196 ratio as a function of electron temperature. The dominant excitation mechanism of the N~{\sc iii} M18 multiplet is radiative recombination, while the M6 multiplet is mainly excited by dielectronic recombination. The other two fine-structure components of the N~{\sc iii} M6 multiplet, $\lambda\lambda$4200.10 and 4215.77, cannot be used: the former is blended with the He~{\sc ii} $\lambda$4199.83 (11g\,$^{2}$G\,--\,4f\,$^{2}$F$^{\rm o}$) line, which is more than three times stronger, and the latter cannot be accurately measured due to its weakness ($<$10$^{-4}$ of the H$\beta$ intensity). Another N~{\sc iii} multiplet, M17 5f\,$^{2}$F$^{\rm o}$\,--\,4d\,$^{2}$D $\lambda$4003, when paired with the N~{\sc iii} M6 $\lambda$4195.76 line, may also be a temperature diagnostic, but its radiative recombination coefficients are unknown. The N~{\sc iii} M17 lines are detected in the spectrum of NGC\,7009 (Fig.\,\ref{3940-4006}). The majority of the O~{\sc iii} triplets of the 3d\,--\,3p and 3p\,--\,3s configurations detected in the spectrum of NGC\,7009 are mainly excited by the fluorescence or charge-transfer mechanism (e.g. Liu \& Danziger \citealt{ld1993a}; Liu, Danziger \& Murdin \citealt{ldm93}). Thus those lines are not suitable for plasma diagnostics or abundance determinations. However, the O~{\sc iii} M8 3d\,$^{3}$F$^{\rm o}$\,--\,3p\,$^{3}$D multiplet is unaffected by such mechanisms.
The intensity of the strongest component of the O~{\sc iii} M8 multiplet, $\lambda$3265.32 (3d\,$^{3}$F$^{\rm o}_{4}$\,--\,3p\,$^{3}$D$_{3}$), relative to the best observed O~{\sc iii} 5g\,--\,4f line, can in principle be used as a temperature diagnostic. The O~{\sc iii} M8 multiplet is mainly excited by radiative recombination at temperatures below 5000~K, and the contribution of dielectronic recombination to the total recombination rate catches up with that of radiative recombination at about 16\,000~K (Nussbaumer \& Storey \citealt{ns1984}; P\'{e}quignot, Petitjean \& Boisson \citealt{ppb1991}). The O~{\sc iii} 5g\,--\,4f lines are predominantly excited by radiative recombination. The upper panel of Fig.\,\ref{oiii_te} shows the intensity ratio of the M46b $\lambda$4435 (5g\,H[11/2]$^{\rm o}$\,--\,4f\,G[9/2]) multiplet, the strongest transition of the 5g\,--\,4f configuration, and the M8 $\lambda$3265.32 line as a function of electron temperature. P\'{e}quignot, Petitjean \& Boisson \cite{ppb1991} present the radiative recombination coefficients of both the M8 and M46b multiplets of O~{\sc iii}, while Nussbaumer \& Storey \cite{ns1984} only give the dielectronic recombination coefficients of M8. The lower panel of Fig.\,\ref{oiii_te} shows the intensity ratio of the $\lambda$4434.60 (5g\,H[11/2]$^{\rm o}_{6}$\,--\,4f\,G[9/2]$_{5}$) line, the strongest fine-structure component of the M46b multiplet, and the M8 $\lambda$3265.32 line as a function of electron temperature. The effective recombination coefficients of the M46b $\lambda$4434.60 line are adopted from Kisielius \& Storey \cite{ks1999}, whose calculations for the O~{\sc iii} 5g\,--\,4f recombination spectrum were carried out in the intermediate coupling scheme and are valid from 5000 to 20\,000~K. Measurement of the $\lambda$4434.60 line carries a large uncertainty owing to line blending. The other O~{\sc iii} 5g\,--\,4f lines are not detected in the spectrum of NGC\,7009.
\begin{figure} \begin{center} \includegraphics[width=7.5cm,angle=0]{fig23.pdf} \caption{The C~{\sc iii} recombination line ratios as a function of electron temperature. $Upper$: The $I$(M1~$\lambda$4649)/$I$(M16~$\lambda$4069) ratio. $Lower$: The $I$(M1~$\lambda$4649)/$I$(M18~$\lambda$4187) ratio. The figure is based on the effective dielectronic and radiative recombination coefficients calculated by Nussbaumer \& Storey \cite{ns1984} and P\'{e}quignot, Petitjean \& Boisson \cite{ppb1991}, respectively.} \label{ciii_te} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=8.0cm,angle=0]{fig24.pdf} \caption{The effective recombination coefficients of the C~{\sc iii} M1 $\lambda$4649 (black curves) and M16 $\lambda$4069 (red curves) multiplets as a function of electron temperature. Different curves of the same color represent different recombination coefficients: radiative recombination coefficient $\alpha_{\rm eff}^{\rm R}$ (dotted line), dielectronic recombination coefficient $\alpha_{\rm eff}^{\rm D}$ (dashed line), and total effective recombination coefficient $\alpha_{\rm eff}^{\rm Tot}$ (solid line). Data source of the plot is the same as Fig.\,\ref{ciii_te}.} \label{ciii_coeff} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=7.0cm,angle=-90]{fig25.pdf} \caption{The N~{\sc iii} $I$(M18~$\lambda$4379)/$I$(M6~$\lambda$4196) ratio as a function of electron temperature. Data source of the figure is the same as Fig.\,\ref{ciii_te}.} \label{niii_te} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=7.5cm,angle=0]{fig26.pdf} \caption{$Upper$: The O~{\sc iii} $\lambda$4435/$\lambda$3265 ratio as a function of electron temperature. Here the $\lambda$4435 line is the M46b 5g\,H[11/2]$^{\rm o}$\,--\,4f\,G[9/2] multiplet of O~{\sc iii}.
The figure is based on the dielectronic and radiative recombination coefficients of Nussbaumer \& Storey \cite{ns1984} and P\'{e}quignot, Petitjean \& Boisson \cite{ppb1991}, respectively. $Lower$: The O~{\sc iii} $\lambda$4434.60/$\lambda$3265 ratio as a function of electron temperature. Here the $\lambda$4434.60 line is the strongest fine-structure component of the O~{\sc iii} M46b multiplet, and the effective recombination coefficients of this line are from Kisielius \& Storey \cite{ks1999}, whose calculations were carried out in the intermediate coupling scheme and are valid from 5000 to 20\,000~K.} \label{oiii_te} \end{center} \end{figure} \subsection{\label{diagnose:summary} Summary of the ORL diagnostics} We have discussed the possibility of using various recombination line ratios of heavy element ions to determine electron temperatures and densities. The line ratios are illustrated as a function of temperature or density. In some cases, adequate effective recombination coefficients are still lacking, or the expected lines are not detected in the spectrum of NGC\,7009 owing to weakness and/or line blending. The O~{\sc iii} lines of the 5g\,--\,4f configuration are not detected. Fig.\,\ref{oiii_te} may become applicable once deeper spectra with higher resolution are available. The effective recombination coefficients for the C~{\sc iii}, N~{\sc iii} and O~{\sc iii} lines quoted from Nussbaumer \& Storey \cite{ns1984} and P\'{e}quignot, Petitjean \& Boisson \cite{ppb1991} are probably inadequate, as can be inferred from the fact that the observed line ratios all fall outside the diagnostic ranges of Figs.\,\ref{ciii_te} and \ref{niii_te}. The applicability of Figs.\,\ref{nii_te:v19v39} and \ref{nii_ne:v19} is quite limited, given that the N~{\sc ii} M19 lines are close to the O~{\sc iii} $\lambda$5007 line.
The N~{\sc ii} and O~{\sc ii} recombination lines of parentage other than the ground states of the recombining ions are good temperature diagnostics. As shown in Figs.\,\ref{nii_te:extra} and \ref{oii_te:extra}, those line ratios are insensitive to electron density. The electron temperatures derived from the two O~{\sc ii} line ratios ($\lambda$4649/$\lambda$4591 and $\lambda$4649/$\lambda$4189) in Fig.\,\ref{oii_te:extra} are consistent with each other, and are close to the temperature yielded by the C~{\sc ii} $\lambda$8794/$\lambda$4267 ratio as shown in Fig.\,\ref{cii_te}. Although no N~{\sc ii} lines of parentage other than $^{2}$P$^{\rm o}$ are detected in the spectrum of NGC\,7009 owing to their weakness, they are promising diagnostic tools. The most reliable temperatures derived from the ORLs of heavy element ions are those yielded by the N~{\sc ii} and O~{\sc ii} lines, as shown in Figs.\,\ref{nii_te:v3v39} and \ref{oii_te:v1v48}. Currently only the N~{\sc ii} and O~{\sc ii} recombination lines can be used to determine electron density, since the density dependence of the population distributions of the energetically lowest fine-structure levels of the recombining ions has been taken into account in the recombination calculations for those two ions (this effect was also considered by Kisielius \& Storey \citealt{ks1999} for the calculation of the 5g\,--\,4f recombination lines of O~{\sc iii}). Figs.\,\ref{nii_ne:v3}, \ref{oii_ne:v1} and \ref{oii_ne:v10} all show large scatter in electron density for a given line ratio observed in NGC\,7009. This is expected because the ORL ratios have only a very weak density dependence. A new treatment of the Ne~{\sc ii} recombination in the intermediate coupling scheme is needed.
\section{\label{orls} The optical recombination spectrum of heavy elements} In this section, we present a comprehensive analysis of the most significant permitted transitions of C~{\sc ii}, N~{\sc ii}, O~{\sc ii}, and Ne~{\sc ii} as well as C~{\sc iii}, N~{\sc iii} and O~{\sc iii} detected in the spectrum of NGC\,7009 reported in Paper~I. The lines are critically examined for potential blending effects and compared to theoretical predictions using the latest atomic data. Unless specified otherwise, all intensities quoted throughout the paper are corrected for interstellar extinction\footnote{In Paper~I, we derived a mean value of 0.174 for the logarithmic extinction at H$\beta$, $c$(H$\beta$), using the observed H~{\sc i} Balmer line ratios H$\alpha$/H$\beta$ and H$\gamma$/H$\beta$. The predicted H~{\sc i} line ratios in the Case~B assumption were adopted from Storey \& Hummer \cite{sh1995}, with $T_\mathrm{e}$ = 10\,000~K and $N_\mathrm{e}$ = 10\,000~cm$^{-3}$.} and in units of $I$(H$\beta$) = 100, and the theoretical intensities/ratios are predicted assuming an electron temperature of 1000~K as given by the N~{\sc ii} and O~{\sc ii} ORL diagnostics (Figs.\,\ref{nii_te:v3v39} and \ref{oii_te:v1v48}). The wavelengths of atomic transitions are adopted from the compilation of laboratory and theoretical values of Hirata \& Horaguchi \cite{hh1995}. The extinction-corrected flux of H$\beta$, $I$(H$\beta$), of NGC\,7009 is derived using $\log{I({\rm H}\beta)}$\,=\,$\log{F({\rm H}\beta)}$\,+\,$c$(H$\beta$), where $F$(H$\beta$) is the observed H$\beta$ flux ($-9.80$ in logarithm), which is adopted from Cahn, Kaler \& Stanghellini \cite{cks92}, and $c$(H$\beta$) is the logarithmic extinction at H$\beta$, which was derived from the H~{\sc i} Balmer line ratios H$\alpha$/H$\beta$ and H$\gamma$/H$\beta$ (Paper~I). 
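As a quick check of the dereddening arithmetic, using the observed flux quoted above ($\log{F({\rm H}\beta)} = -9.80$) and the adopted $c$(H$\beta$) = 0.174:

```python
# Dereddening of the H-beta flux: log I = log F + c(Hbeta), values from the text
log_f_hbeta = -9.80   # observed log H-beta flux (erg cm^-2 s^-1)
c_hbeta = 0.174       # logarithmic extinction at H-beta (Paper I)

log_i_hbeta = log_f_hbeta + c_hbeta
print(round(log_i_hbeta, 2))   # -9.63, i.e. I(Hbeta) = 10^-9.63 erg cm^-2 s^-1

# The same correction applies line by line: a line at wavelength lambda is
# corrected by c(Hbeta) * f(lambda), where f is the reddening curve
# normalized to zero at H-beta (f(Hbeta) = 0).
```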
The value of $c$(H$\beta$) we derived for NGC\,7009 is 0.174, which agrees with the value (0.17) given by Cahn, Kaler \& Stanghellini \cite{cks92}, who used the radio/H$\beta$ flux ratio. Thus in NGC\,7009 we have $I$(H$\beta$) = 10$^{-9.63}$~erg\,cm$^{-2}$\,s$^{-1}$. \subsection{\label{orls:cii} The C~{\sc ii} optical recombination spectrum} Several dozen emission lines were identified as permitted transitions of C~{\sc ii}, with 41 being solid identifications (Paper~I). The strongest transitions are presented in the current paper. As examples, we describe the profile fitting for the multiplets M6 and M28.01 in this section. The other multiplets are presented in Appendix\,\ref{appendix:a}. The effective recombination coefficients of Davey, Storey \& Kisielius \cite{davey2000} are used for the ORL analysis. \subsubsection{\label{orls:cii:v6} Multiplet 6, 4f $^2$F$^{\rm o}$ -- 3d $^2$D} C~{\sc ii} M6 $\lambda$4267 is the strongest C~{\sc ii} multiplet observed in NGC\,7009 (see Fig.\,\ref{4260-4310}). The three fine-structure components of this multiplet have close wavelengths: 4267.00, 4267.26 and 4267.26\,{\AA}. A single-Gaussian fit to the emission feature gives an intensity of 0.880 (normalized to a scale where H$\beta$ = 100), with an uncertainty of less than 5 per cent. Here the contributions from the O~{\sc ii} M53c 4f\,D[1]$^{\rm o}_{3/2}$\,--\,3d\,$^4$P$_{5/2}$ $\lambda$4263.27 and Ne~{\sc ii} M57c 4f~1[3]$^{\rm o}_{7/2}$\,--\,3d~$^4$F$_{9/2}$ $\lambda$4267.38 lines are negligible. This intensity agrees with the value of 0.838 measured by LSBC. The calculation of Bastin (\citealt{bastin2006}, hereafter B06) shows that the Case~A effective recombination coefficient for the C~{\sc ii} M6 $\lambda4267$ line differs from that in Case~B by 1.5 per cent, and a similar difference is given by Davey, Storey \& Kisielius \cite{davey2000}, indicating that this transition is case insensitive.
\subsubsection{\label{orls:cii:v28.01} Multiplet 28.01, 3d$^{\prime}$ $^2$F$^{\rm o}$ -- 3p$^{\prime}$ $^2$D} This multiplet is a dielectronic transition, resulting from a cascade from the autoionization state 2s2p($^3$P$^{\rm o}$)\,3d\,$^2$F$^{\rm o}$, which lies about 0.41~eV above the first ionization threshold, to the 2s2p($^3$P$^{\rm o}$)\,3p\,$^2$D state, which lies 1.00~eV below the ionization threshold (Moore \citealt{moore1993}). The features of the two fine-structure components, $\lambda$8793.80 (3d$^{\prime}$\,$^2$F$^{\rm o}_{7/2}$\,--\,3p$^{\prime}$\,$^2$D$_{5/2}$) and $\lambda$8799.90 (3d$^{\prime}$\,$^2$F$^{\rm o}_{5/2}$\,--\,3p$^{\prime}$\,$^2$D$_{3/2}$), are very broad (Fig.\,\ref{8740-8840}). The wings of the two lines clearly affect the weaker emission features nearby. Detailed analysis of the complex indicates that at least two more emission lines are blended with the two C~{\sc ii} lines: one is the He~{\sc ii} 23p\,$^2$P$^{\rm o}$\,--\,6s\,$^2$S $\lambda$8799.0 line, while the other is unknown. The results of fitting to the features are shown in Fig.\,\ref{8740-8840}. For each of the two C~{\sc ii} M28.01 lines, we used a Lorentzian profile with an intrinsic width of 6.86\,{\AA} convolved with a Gaussian instrumental profile with a full width at half-maximum (FWHM) of 3.00\,{\AA} to fit the observed feature. The convolution of the Lorentzian and the Gaussian gives a Voigt profile with a width of 8.50\,{\AA}, which fits the observed features quite well (Fig.\,\ref{8740-8840}). The intensity contribution of the blended He~{\sc ii} $\lambda$8799 line was estimated from the He~{\sc ii} 4f\,$^2$F$^{\rm o}$\,--\,3d\,$^2$D $\lambda$4686 line and the hydrogenic theory of Storey \& Hummer \cite{sh1995}; an electron temperature of 10\,000~K and a density of 10\,000~cm$^{-3}$ were assumed. After correcting for the contribution from the He~{\sc ii} line, the intensity of the $\lambda$8799.90 line is 0.022$\pm$0.003.
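The Voigt construction used here is easily reproduced numerically. The sketch below (plain {\sc numpy}; not the fitting code actually used for the spectrum) convolves a Lorentzian of 6.86\,{\AA} FWHM with a Gaussian of 3.00\,{\AA} FWHM and measures the FWHM of the result:

```python
import numpy as np

# Grid fine enough to resolve the cores, wide enough for the Lorentzian wings.
dx = 0.05
x = np.arange(-150.0, 150.0, dx)          # Angstrom, centred on the line

fwhm_g, fwhm_l = 3.00, 6.86               # instrumental Gaussian, intrinsic Lorentzian
sigma = fwhm_g / (2.0 * np.sqrt(2.0 * np.log(2.0)))
gamma = fwhm_l / 2.0                      # Lorentzian half width at half maximum

gauss = np.exp(-0.5 * (x / sigma) ** 2)
lorentz = gamma ** 2 / (x ** 2 + gamma ** 2)

voigt = np.convolve(gauss, lorentz, mode="same")   # Voigt = Gaussian (*) Lorentzian

# Measure the FWHM of the convolved profile from the half-maximum crossings.
half = voigt.max() / 2.0
above = np.where(voigt >= half)[0]
fwhm_v = x[above[-1]] - x[above[0]]
```

For these inputs the measured FWHM comes out near 8\,{\AA}, consistent with the widely used Olivero \& Longbothum approximation $f_V \approx 0.5346\,f_L + \sqrt{0.2166\,f_L^2 + f_G^2}$; the 8.50\,{\AA} quoted above presumably follows a slightly different width convention.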
With the assumption that the relative intensity of the two C~{\sc ii} M28.01 lines is as in {\it LS}\,coupling, i.e. 1.0\,:\,0.7, the intensity of the $\lambda$8793.80 line is 0.032, which is lower than the total intensity of the broad feature at $\lambda$8793 (see Fig.\,\ref{8740-8840}), indicating a probable unknown blend. Repeated trial fits to the profile show that an additional emission feature at an observed wavelength of 8793.20\,{\AA} best reproduces the feature, and its intensity is $<$0.01. The {\sc emili}\footnote{{\sc emili} is developed by Dr. B. Sharpee et al. and is designed to aid in the identification of weak emission lines, particularly the weak recombination lines seen in high dispersion and high signal-to-noise (S/N) spectra. URL: \texttt{http://www.pa.msu.edu/astro/software/emili}} code (Sharpee et al. \citealt{sharpee03}) tentatively identifies this blended weak line as a [Cr~{\sc ii}] line with a laboratory wavelength of 8795.17\,{\AA}. More effort is needed to verify this identification. The intensity ratio of the C~{\sc ii} $\lambda$8793.80 line and the C~{\sc ii} M6 $\lambda$4267 multiplet is 0.036, which yields an electron temperature of $\sim\,3000$~K (Section\,\ref{diagnose:cii} and Fig.\,\ref{cii_te}). As discussed in Section\,\ref{diagnose:cii}, this temperature is questionable, owing to the different excitation mechanisms of the C~{\sc ii} M28.01 $\lambda$8797 and M6 $\lambda$4267 multiplets. Measurements of the C~{\sc ii} M28.01 lines are inaccurate unless detailed modeling of the autoionization levels of C~{\sc ii} is carried out. In addition, the numerous sky lines in the red part of the spectrum of NGC\,7009 and the relatively poor sky subtraction in this wavelength region make accurate measurements of the C~{\sc ii} lines difficult (cf. Section\,2.1 in Paper~I).
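The two scalings used above are simple arithmetic; the sketch below reruns them with the rounded intensities quoted in the text (small differences against the quoted 0.032 reflect rounding of the measured 0.022$\pm$0.003):

```python
# LS-coupling split of the C II M28.01 doublet: I(8793.80) : I(8799.90) = 1.0 : 0.7
i_8799 = 0.022                 # measured, after removing the He II 8799 contribution
i_8793 = i_8799 * (1.0 / 0.7)  # about 0.031, quoted as 0.032 in the text

# Diagnostic ratio against the C II M6 multiplet, I(4267) = 0.880,
# which is then read off the C II temperature-diagnostic curve.
i_4267 = 0.880
ratio = 0.032 / i_4267         # about 0.036, as quoted
```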
\begin{figure} \begin{center} \includegraphics[width=7.5cm,angle=-90]{fig27.pdf} \caption{Spectrum of NGC\,7009 from 8740 to 8840\,{\AA} showing the two C~{\sc ii} M28.01 autoionization lines $\lambda\lambda$8793.80 and 8799.90. The dashed curve is the sum of Voigt profile fits to the two C~{\sc ii} lines, which probably blend with two weaker features, He~{\sc ii} $\lambda$8799.00 and an unknown component, whose profiles are assumed to be Gaussian and are represented by the dotted curve. The solid continuous curve is the sum of all fits. Here a Voigt profile is the convolution of a Gaussian ($\sim$3.0\,{\AA} FWHM) and a Lorentzian profile ($\sim$6.86\,{\AA} FWHM). Continuum has been subtracted and the spectrum has been normalized such that H$\beta$ has an integrated flux of 100. Extinction has not been corrected for.} \label{8740-8840} \end{center} \end{figure} \subsubsection{\label{orls:cii:summary} Comments on the C~{\sc ii} recombination spectrum} The effective recombination coefficients used for the analysis of the C~{\sc ii} lines are mainly from Davey, Storey \& Kisielius \cite{davey2000} and Bastin \cite{bastin2006}. The former calculation is tailored to the low-temperature case ($T_\mathrm{e}\,<$\,5000~K), while the latter mainly covers higher temperatures (5000\,--\,50\,000~K) and includes the effects of high-temperature dielectronic recombination, which rapidly becomes important above an electron temperature of $\sim$15\,000~K. Both calculations were carried out in the {\it LS}\,coupling assumption and only for transitions of $^{1}$S parentage. In the current analysis, we adopt the calculation of Davey, Storey \& Kisielius \cite{davey2000}, assuming a temperature of 1000~K. The best observed (i.e. most accurately measured) multiplets of C~{\sc ii} in the spectrum of NGC\,7009 are M6 (4f\,$^2$F$^{\rm o}$\,--\,3d\,$^2$D) and some transitions that belong to the $n$g\,--\,4f ($n\,\geq$\,5) array.
For the transitions with parentage other than $^{1}$S, only the M28.01 (3d$^{\prime}$\,$^{2}$F$^{\rm o}$\,--\,3p$^{\prime}$\,$^{2}$D) multiplet is detected, but measurements of this multiplet could be unreliable, as mentioned in Section\,\ref{orls:cii:v28.01}. The effective dielectronic recombination coefficients of Nussbaumer \& Storey \cite{ns1984} and the radiative recombination coefficients of P\'{e}quignot, Petitjean \& Boisson \cite{ppb1991} were used for the analysis of the M28.01 multiplet. A full treatment of the C~{\sc ii} recombination in an appropriate coupling scheme (i.e. intermediate coupling), with transitions between the autoionization levels taken into account, is needed in the future. \subsection{\label{nii_orls} The N~{\sc ii} optical recombination spectrum} In this section, we present intensities of the N~{\sc ii} ORLs detected in the spectrum of NGC\,7009, and analyze these lines using the N~{\sc ii} effective recombination coefficients of FSL11. Unless otherwise specified, the theoretical relative intensities of the N~{\sc ii} lines quoted in this section are all based on that calculation. The observed relative intensities of the N~{\sc ii} lines with accurate measurements are compared with the predictions to assess the new atomic data. An electron temperature of 1000~K is assumed throughout the analysis. In this section, spectral fits and discussion of results are given only for the M3 3p\,$^{3}$D\,--\,3s\,$^{3}$P$^{\rm o}$ and M28 3d\,$^{3}$D$^{\rm o}$\,--\,3p\,$^{3}$P multiplets and the strongest multiplets M39a,b of the 4f\,--\,3d transition array. Discussion of the other multiplets of N~{\sc ii} is given in Appendix\,\ref{appendix:b}. \subsubsection{\label{nii_orls:v3} Multiplet 3, 3p $^3$D -- 3s $^3$P$^{\rm o}$} This multiplet is the strongest of N~{\sc ii} in the optical. The intensities of the N~{\sc ii} M3 lines are presented in column\,5 of Table\,\ref{relative:nii_v3} in units of $I(\lambda5679.56)$ = 1.0.
Also presented are the theoretical relative intensities in the {\it LS}\,coupling assumption (column 3) and in intermediate coupling (column 4). The theoretical predictions in intermediate coupling are calculated from the N~{\sc ii} effective recombination coefficients of FSL11. Comparisons of the observed and predicted relative intensities are in columns 6 and 7. Results of multi-Gaussian profile fitting to the wavelength range 5650\,--\,5760\,{\AA} are also presented in Fig.\,\ref{5650-5760}. The strongest M3 line, $\lambda$5679.56, is blended with $\lambda$5676.02 of the same multiplet (Fig.\,\ref{5650-5760}). Two Gaussian profiles with the same width were used to fit them. The intensities of $\lambda\lambda$5679.56 and 5676.02 are 0.135$\pm$0.007 and 0.035$\pm$0.004, respectively. Thus the $\lambda$5676.02/$\lambda$5679.56 ratio is 0.273, which agrees with both theoretical ratios within the errors (Table\,\ref{relative:nii_v3}). Two other lines, $\lambda\lambda$5666.63 and 5710.77, are free of line blending. The intensity of the $\lambda$5710.77 line is 0.020$\pm$0.003, which agrees better with intermediate coupling (Table\,\ref{relative:nii_v3}). The intensity of the $\lambda$5666.63 line is 0.065$\pm$0.007, which also agrees better with intermediate coupling (Table\,\ref{relative:nii_v3}). Another M3 line, $\lambda$5686.21, is partially blended with a weaker feature, which was identified as {Mn~{\sc v}} $\lambda$5692.00 (Fig.\,\ref{5650-5760}). The fitted intensity of $\lambda$5686.21 is 0.024$\pm$0.005, which seems to agree better with {\it LS}\,coupling (Table\,\ref{relative:nii_v3}). However, the intensity of this line is uncertain owing to its weakness. The other M3 line, $\lambda$5730.65, is not observed in our spectrum. \begin{table} \centering \caption{Comparison of the observed and predicted relative intensities of the N~{\sc ii} M3 lines detected in the spectrum of NGC\,7009.
$I_{\rm IC}$ is the theoretical intensity deduced from the effective recombination coefficients of FSL11, and $I_{\rm LS}$ is the value in the {\it LS}\,coupling assumption. The above two symbols have the same meaning in other tables of the current paper. An electron temperature of 1000~K is assumed for the theoretical predictions $I_{\rm IC}$.} \label{relative:nii_v3} \begin{tabular}{lcccccc} \hline Line & $J_2-J_1$ & $I_{\rm LS}$ & $I_{\rm IC}$ & $I_{\rm obs}$ & $\frac{I_{\rm obs}}{I_{\rm LS}}$ & $\frac{I_{\rm obs}}{I_{\rm IC}}$\\ \hline $\lambda$5666.63 & 2 -- 1 & 0.536 & 0.466 & 0.481 & 0.897 & 1.032\\ $\lambda$5676.02 & 1 -- 0 & 0.238 & 0.215 & 0.273 & 1.147 & 1.271\\ $\lambda$5679.56 & 3 -- 2 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000\\ $\lambda$5686.21 & 1 -- 1 & 0.179 & 0.128 & 0.187 & 1.047 & 1.464\\ $\lambda$5710.77 & 2 -- 2 & 0.179 & 0.167 & 0.151 & 0.844 & 0.902\\ \hline \end{tabular} \end{table} \begin{figure} \begin{center} \includegraphics[width=7.5cm,angle=-90]{fig28.pdf} \caption{Spectrum of NGC\,7009 from 5650 to 5760\,{\AA} showing the N~{\sc ii} M3 lines and other emission features. The continuous curve is the sum of Gaussian profile fits. Continuum has been subtracted and the spectrum has been normalized such that H$\beta$ has an integrated flux of 100. Extinction has not been corrected for.} \label{5650-5760} \end{center} \end{figure} \subsubsection{\label{nii_orls:v28} Multiplet 28, 3d $^3$D$^{\rm o}$ -- 3p $^3$P} Gaussian profile fitting to M28 $\lambda$5941.65 (3d\,$^3$D$^{\rm o}_{3}$\,--\,3p\,$^3$P$_{2}$) yields an intensity of 0.030$\pm$0.004. The intensity contribution from the M28 $\lambda$5940.24 (3d\,$^3$D$^{\rm o}_{1}$\,--\,3p\,$^3$P$_{1}$) line is negligible. The intensity ratio of $\lambda$5941.65 and the N~{\sc ii} M3 $\lambda$5679.56 line is 0.228, which is 42 per cent lower than the theoretical ratio. 
Another M28 line $\lambda$5927.81 (3d\,$^3$D$^{\rm o}_{1}$\,--\,3p\,$^3$P$_{0}$) is blended with M28 $\lambda$5931.78 (3d\,$^3$D$^{\rm o}_{2}$\,--\,3p\,$^3$P$_{1}$), which coincides in wavelength with the He~{\sc ii} 25h\,$^2$H$^{\rm o}$\,--\,5g\,$^2$G $\lambda$5931.83 line. Two Gaussian profiles were used to fit the complex, and the intensity of the $\lambda$5927.81 line is 0.007, with a large uncertainty. The intensity ratio of $\lambda$5927.81 to the $\lambda$5941.65 line is 0.250, which agrees with the predicted ratio 0.235. Assuming that the He~{\sc ii} $\lambda$5931.83 line contributes 55 per cent to the total intensity of the blend at $\lambda$5932, as estimated from the hydrogenic theory of Storey \& Hummer \cite{sh1995}, we obtained an intensity of the $\lambda$5931.78 line which is much higher than the theoretical prediction. Another M28 line $\lambda$5952.39 (3d\,$^3$D$^{\rm o}_{2}$\,--\,3p\,$^3$P$_{2}$) is blended with the He~{\sc ii} 24h\,$^2$H$^{\rm o}$\,--\,5g\,$^2$G $\lambda$5952.93 line. The intensity of the $\lambda$5952.39 line is much higher than the predicted value. The other M28 line $\lambda$5960.90 (3d\,$^3$D$^{\rm o}_{1}$\,--\,3p\,$^3$P$_{2}$) is not observed. 
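Fits like these, several Gaussians that are sometimes constrained to share a width (as for the M3 $\lambda\lambda$5676.02, 5679.56 pair above), can be sketched with {\sc scipy}. The snippet below fits synthetic data only; the line centres are the M3 values, while the ``true'' amplitudes and width are illustrative:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(42)

C1, C2 = 5676.02, 5679.56            # fixed line centres (Angstrom)

def two_gauss(x, a1, a2, sigma):
    """Two Gaussians sharing a single width, with centres held fixed."""
    return (a1 * np.exp(-0.5 * ((x - C1) / sigma) ** 2)
            + a2 * np.exp(-0.5 * ((x - C2) / sigma) ** 2))

# Synthetic "observed" spectrum: illustrative amplitudes plus noise.
x = np.arange(5668.0, 5688.0, 0.1)
true_params = (0.035, 0.135, 0.8)
y = two_gauss(x, *true_params) + rng.normal(0.0, 0.002, x.size)

popt, pcov = curve_fit(two_gauss, x, y, p0=(0.02, 0.1, 1.0))
a1, a2, sigma = popt

# With a shared width, integrated intensities scale as amplitude * sigma,
# so the intensity ratio reduces to the amplitude ratio.
ratio = a1 / a2
```

Holding the width common between blended components is what makes the decomposition of partially resolved pairs such as $\lambda\lambda$5676/5680 well conditioned.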
\begin{table} \centering \caption{Same as Table\,\ref{relative:nii_v3} but for a comparison of the observed and predicted relative intensities of the N~{\sc ii} M28 lines detected in the spectrum of NGC\,7009.} \label{relative:nii_v28} \begin{tabular}{lcccccc} \hline Line & $J_2-J_1$ & $I_{\rm LS}$ & $I_{\rm IC}$ & $I_{\rm obs}$ & $\frac{I_{\rm obs}}{I_{\rm LS}}$ & $\frac{I_{\rm obs}}{I_{\rm IC}}$\\ \hline $\lambda$5941.65$^a$ & 3 -- 2 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000\\ $\lambda$5952.39$^b$ & 2 -- 2 & 0.152 & 0.122 & 0.255 & 1.686 & 2.103\\ $\lambda$5931.78$^c$ & 2 -- 1 & 0.455 & 0.412 & 0.471 & 1.037 & 1.143\\ $\lambda$5960.90$^d$ & 1 -- 2 & 0.010 & 0.009 & -- & 0.000 & 0.000\\ $\lambda$5927.81 & 1 -- 0 & 0.202 & 0.219 & 0.317 & 1.570 & 1.448\\ \hline \end{tabular} \begin{description} \item [$^a$] Including the $\lambda$5940.24 (3d\,$^3$D$^{\rm o}_{1}$\,--\,3p\,$^3$P$_{1}$) line. \item [$^b$] Corrected for the contribution from the He~{\sc ii} $\lambda$5952.93 (24h\,$^{2}$H$^{\rm o}$\,--\,5g\,$^{2}$G) line (74 per cent). \item [$^c$] Corrected for the contribution from the He~{\sc ii} $\lambda$5931.83 (25h\,$^{2}$H$^{\rm o}$\,--\,5g\,$^{2}$G) line (57 per cent). \item [$^d$] Not detected. \end{description} \end{table} \begin{figure} \begin{center} \includegraphics[width=7.5cm,angle=-90]{fig29.pdf} \caption{Spectrum of NGC\,7009 from 5850 to 6000\,{\AA} showing the N~{\sc ii} M28 lines and other emission features. The continuous curve is the sum of Gaussian profile fits. Continuum has been subtracted and the spectrum has been normalized such that H$\beta$ has an integrated flux of 100. Extinction has not been corrected for.} \label{5850-6000} \end{center} \end{figure} \subsubsection{\label{nii_orls:4f-3d} 4f -- 3d transitions} The 4f\,--\,3d transitions of N~{\sc ii} lie on the blue side of our spectrum ($<$4500\,{\AA}) and suffer from line blending. Accurate measurements of most of them are difficult.
Table\,\ref{relative:nii_4f-3d} presents the observed and predicted relative intensities of the 4f\,--\,3d lines with the most reliable measurements. Figs.\,\ref{4005-4058} and \ref{4176-4260} show some of the N~{\sc ii} lines of the 4f\,--\,3d transition array detected in the spectrum of NGC\,7009. The results of multi-Gaussian profile fitting are overplotted. Only the M39a and M39b multiplets are discussed here. Discussion of the other 4f\,--\,3d transitions of N~{\sc ii} is presented in Appendix\,\ref{appendix:nii:4f-3d}. \paragraph{\label{nii:4f-3d:v39a} Multiplet 39a, 4f G[7/2] -- 3d $^3$F$^{\rm o}$:} The $\lambda$4035.08 (4f\,G[7/2]$_{3}$\,--\,3d\,$^3$F$^{\rm o}_{2}$) and $\lambda$4043.53 (4f\,G[7/2]$_{4}$\,--\,3d\,$^3$F$^{\rm o}_{3}$) lines are shown in Fig.\,\ref{4005-4058}. The $\lambda$4035.08 line is blended with the O~{\sc ii} M50b 4f\,F[3]$^{\rm o}_{5/2}$\,--\,3d\,$^4$F$_{5/2}$ $\lambda$4035.07 line, which contributes 15 per cent to the total intensity, and the O~{\sc ii} M50b 4f\,F[3]$^{\rm o}_{7/2}$\,--\,3d\,$^4$F$_{5/2}$ $\lambda$4035.46 line, which is negligible. The $\lambda$4035 line has an intensity of 0.037$\pm$0.006, where the contribution from the O~{\sc ii} $\lambda$4035.07 line has been corrected for. This intensity agrees with the predicted relative intensity. The $\lambda$4043.53 line is partially blended with the N~{\sc ii} M39b 4f\,G[9/2]$_{5}$\,--\,3d\,$^3$F$^{\rm o}_{4}$ $\lambda$4041.31 line, which is more than two times stronger. The intensity of the $\lambda$4043.53 line also agrees well with the predicted value. Reliable measurements of the $\lambda$4044.78 (4f\,G[7/2]$_{3}$\,--\,3d\,$^3$F$^{\rm o}_{3}$) line are difficult. The other two lines, $\lambda$4056.90 (4f\,G[7/2]$_{4}$\,--\,3d\,$^3$F$^{\rm o}_{4}$) and $\lambda$4058.16 (4f\,G[7/2]$_{3}$\,--\,3d\,$^3$F$^{\rm o}_{4}$), are too weak to measure.
\paragraph{\label{nii:4f-3d:v39b} Multiplet 39b, 4f G[9/2] -- 3d $^3$F$^{\rm o}$:} The $\lambda$4041.31 (4f\,G[9/2]$_{5}$\,--\,3d\,$^3$F$^{\rm o}_{4}$) line is blended with the O~{\sc ii} M50c 4f\,F[2]$^{\rm o}_{5/2}$\,--\,3d~$^4$F$_{5/2}$ $\lambda$4041.28 and O~{\sc ii} M50c 4f\,F[2]$^{\rm o}_{3/2}$\,--\,3d\,$^4$F$_{5/2}$ $\lambda$4041.95 lines. The two O~{\sc ii} lines contribute only $\sim$7 per cent to the total intensity of the blend at $\lambda$4041. The intensity of the $\lambda$4041.31 line is 0.082$\pm$0.008. The intensity ratio of $\lambda$4041.31 to the N~{\sc ii} M3 $\lambda$5679.56 line is 0.604, which agrees quite well with the predicted ratio 0.598. Another M39b line $\lambda$4026.08 (4f\,G[9/2]$_{4}$\,--\,3d\,$^3$F$^{\rm o}_{3}$) is blended with the He~{\sc i} M18 5d\,$^3$D\,--\,2p\,$^3$P$^{\rm o}$ $\lambda$4026.20 line. The other M39b line $\lambda$4039.35 (4f\,G[9/2]$_{4}$\,--\,3d\,$^3$F$^{\rm o}_{4}$) is not observed. \begin{table} \centering \caption{Same as Table\,\ref{relative:nii_v3} but for a comparison of the observed and predicted relative intensities of the N~{\sc ii} 4f\,--\,3d lines detected in the spectrum of NGC\,7009.} \label{relative:nii_4f-3d} \begin{tabular}{lcllr} \hline Line & $J_2-J_1$ & $I_{\rm IC}$ & $I_{\rm obs}$ & $\frac{I_{\rm obs}}{I_{\rm IC}}$\\ \hline M39a 4f G[7/2] -- 3d $^3$F$^{\rm o}$ & & & &\\ $\lambda$4035.08$^a$ & 3--2 & 0.477 & 0.540 & 1.132\\ $\lambda$4043.53 & 4--3 & 0.436 & 0.432 & 0.992\\ M39b 4f G[9/2] -- 3d $^3$F$^{\rm o}$ & & & &\\ $\lambda$4041.31$^b$ & 5--4 & 1.000 & 1.000 & 1.000\\ M43a 4f F[5/2] -- 3d $^1$D$^{\rm o}$ & & & &\\ $\lambda$4176.16$^c$ & 3--2 & 0.293 & 0.343 & 1.171\\ M43b 4f F[7/2] -- 3d $^1$D$^{\rm o}$ & & & &\\ $\lambda$4171.61 & 3--2 & 0.296 & 0.281 & 0.950\\ M48a 4f F[5/2] -- 3d $^3$D$^{\rm o}$ & & & &\\ $\lambda$4236.91$^d$ & 2--1 & 0.539 & 0.669 & 1.241\\ M48b 4f F[7/2] -- 3d $^3$D$^{\rm o}$ & & & &\\ $\lambda$4241.78$^e$ & 4--3 & 0.984 & 0.996 & 1.012\\ M55a 4f D[5/2] -- 3d 
$^3$P$^{\rm o}$ & & & &\\ $\lambda$4442.02$^f$ & 2--1 & 0.130 & 0.242 & 1.859\\ M58a 4f G[7/2] -- 3d $^1$F$^{\rm o}$ & & & &\\ $\lambda$4552.53$^g$ & 4--3 & 0.201 & 0.395 & 1.965\\ M58b 4f G[9/2] -- 3d $^1$F$^{\rm o}$ & & & &\\ $\lambda$4530.41$^h$ & 4--3 & 0.474 & 0.593 & 1.251\\ M61a 4f D[5/2] -- 3d $^1$P$^{\rm o}$ & & & &\\ $\lambda$4694.64$^i$ & 2--1 & 0.147 & 0.222 & 1.510\\ \hline \end{tabular} \begin{description} \item [$^a$] Including the contribution from the O~{\sc ii} M50b 4f\,F[3]$^{\rm o}_{5/2}$\,--\,3d\,$^4$F$_{5/2}$ $\lambda$4035.07 line. Neglecting the O~{\sc ii} M50b 4f\,F[3]$^{\rm o}_{7/2}$\,--\,3d\,$^4$F$_{5/2}$ $\lambda$4035.46 line. \item [$^b$] Neglecting the contributions from the O~{\sc ii} M50c 4f\,F[2]$^{\rm o}_{5/2}$\,--\,3d\,$^4$F$_{5/2}$ $\lambda$4041.28 and O~{\sc ii} M50c 4f\,F[2]$^{\rm o}_{3/2}$\,--\,3d\,$^4$F$_{5/2}$ $\lambda$4041.95 lines. \item [$^c$] Including N~{\sc ii} M43a 4f\,F[5/2]$_{2}$\,--\,3d\,$^1$D$^{\rm o}_{2}$ $\lambda$4175.66. \item [$^d$] Including N~{\sc ii} M48b 4f\,F[7/2]$_{3}$\,--\,3d\,$^3$D$^{\rm o}_{2}$ $\lambda$4237.05. \item [$^e$] Including N~{\sc ii} M48a 4f\,F[5/2]$_{3}$\,--\,3d\,$^3$D$^{\rm o}_{2}$ $\lambda$4241.78. Neglecting N~{\sc ii} M48a 4f\,F[5/2]$_{2}$\,--\,3d\,$^3$D$^{\rm o}_{2}$ $\lambda$4241.24 and Ne~{\sc ii} M52c 4f\,2[1]$^{\rm o}_{3/2}$\,--\,3d\,$^4$D$_{1/2}$ $\lambda$4242.04. \item [$^f$] Including Ne~{\sc ii} M60b 4f\,1[4]$^{\rm o}_{7/2}$\,--\,3d\,$^2$F$_{5/2}$ $\lambda$4442.69. Neglecting O~{\sc iii} M49b 5g\,F[3]$^{\rm o}_{2,\,3}$\,--\,4f\,D[3]$_{2}$ $\lambda$4442.02. \item [$^g$] Including Ne~{\sc ii} M55d 4f\,2[2]$^{\rm o}_{5/2}$\,--\,3d\,$^4$F$_{3/2}$ $\lambda$4553.17 and Si~{\sc iii} M2 4p\,$^3$P$^{\rm o}_{2}$\,--\,4s\,$^3$S$_{1}$ $\lambda$4552.62. \item [$^h$] Including N~{\sc iii} M3 3p$^{\prime}$\,$^4$D$_{1/2}$\,--\,3s$^{\prime}$\,$^4$P$^{\rm o}_{3/2}$ $\lambda$4530.86. \item [$^i$] Overestimated.
\end{description} \end{table} \begin{figure} \begin{center} \includegraphics[width=7.5cm,angle=-90]{fig30.pdf} \caption{Spectrum of NGC\,7009 from 4005 to 4058\,{\AA} showing the N~{\sc ii} ORLs of the M39a 4f\,G[7/2]\,--\,3d\,$^3$F$^{\rm o}$ and M39b 4f\,G[9/2]\,--\,3d\,$^3$F$^{\rm o}$ multiplets. The strongest N~{\sc ii} line of the 4f\,--\,3d array, $\lambda$4041.31 (M39b 4f\,G[9/2]$_{5}$\,--\,3d\,$^3$F$^{\rm o}_{4}$), is observed. The continuous curve is the sum of Gaussian profile fits. Continuum has been subtracted and the spectrum has been normalized such that H$\beta$ has an integrated flux of 100. Extinction has not been corrected for.} \label{4005-4058} \end{center} \end{figure} \begin{figure*} \begin{minipage}{128mm} \begin{center} \includegraphics[width=10cm,angle=-90]{fig31.pdf} \caption{Spectrum of NGC\,7009 from 4176 to 4260\,{\AA} showing the N~{\sc ii} ORLs of the M48a 4f\,F[5/2]\,--\,3d\,$^3$D$^{\rm o}$, M48b 4f\,F[7/2]\,--\,3d\,$^3$D$^{\rm o}$, M49a 4f\,G[7/2]\,--\,3d\,$^3$D$^{\rm o}$, and M49b 4f\,G[9/2]\,--\,3d\,$^3$D$^{\rm o}$ multiplets. The continuous curve is the sum of Gaussian profile fits. Continuum has been subtracted and the spectrum has been normalized such that H$\beta$ has an integrated flux of 100. Extinction has not been corrected for.} \label{4176-4260} \end{center} \end{minipage} \end{figure*} \subsubsection{\label{nii_orls:summary} Comments on the N~{\sc ii} recombination spectrum} The effective recombination coefficients used for the analysis of the N~{\sc ii} recombination spectrum are from FSL11, which is dedicated to low temperatures ($T_\mathrm{e}\,<$\,10\,000~K) and is an improvement over all previous calculations for this ion, as described in Section\,\ref{diagnose:data}. The best observed N~{\sc ii} lines in our spectrum are M3 (3p\,$^{3}$D\,--\,3s\,$^{3}$P$^{\rm o}$), M12 (3p\,$^{1}$D\,--\,3s\,$^{1}$P$^{\rm o}$), and the strongest lines of the 4f\,--\,3d array, e.g.
$\lambda$4041.31 (M39b 4f\,G[9/2]$_{5}$\,--\,3d\,$^{3}$F$^{\rm o}_{4}$) and $\lambda$4043.53 (M39a 4f\,G[7/2]$_{4}$\,--\,3d\,$^{3}$F$^{\rm o}_{4}$). Those N~{\sc ii} lines have been used for plasma diagnostics (Section\,\ref{diagnose:niioii}). The fine-structure components of the N~{\sc ii} multiplets M5 (3p\,$^{3}$P\,--\,3s\,$^{3}$P$^{\rm o}$), M20 (3d\,$^{3}$D$^{\rm o}$\,--\,3p\,$^{3}$D), M28 (3d\,$^{3}$D$^{\rm o}$\,--\,3p\,$^{3}$P), and M29 (3d\,$^{3}$P$^{\rm o}$\,--\,3p\,$^{3}$P) observed in our spectrum are incomplete due to line blending. Those that are detected are also blended with other lines. Although multi-Gaussian profile fitting has been carried out and effective recombination coefficients used to correct for the blended lines of other ionic species, the derived intensities of those N~{\sc ii} lines could still be questionable. Grandi \cite{grandi1976} showed that the strongest components of the M3, M5 and M28 multiplets are affected by the fluorescence mechanism in the Orion nebula. However, such effects are probably insignificant in NGC\,7009 (C. Morisset, private communication). The 4f\,--\,3d transitions are almost certainly free of fluorescence enhancement because they require exciting photons of such high energies that the central star of NGC\,7009 cannot supply them. \subsection{\label{oii_orls} The O~{\sc ii} optical recombination spectrum} LSBC observed eight multiplets of the 3\,--\,3 transition arrays and a few dozen 4f\,--\,3d lines of O~{\sc ii}. The effective radiative recombination coefficients for the O~{\sc ii} 3d\,--\,3p and 4f\,--\,3d transitions were calculated under the intermediate coupling scheme, and were used for spectral analysis. For the 3p\,--\,3s transitions, the effective recombination coefficients of Storey \cite{storey1994}, whose calculations were carried out in the {\it LS}\,coupling scheme, were utilized. LSBC confirmed the breakdown of {\it LS}\,coupling in the O~{\sc ii} transitions, especially those of the 4f\,--\,3d configuration. 
In Paper~I, we presented a very deep spectrum of NGC\,7009. The data quality is higher than that in LSBC. In the current paper, we present the intensities of the O~{\sc ii} ORLs, and analyze the O~{\sc ii} recombination spectrum using the new O~{\sc ii} effective recombination coefficients of PJS. Unless otherwise specified, the theoretical relative intensities of the O~{\sc ii} lines quoted in this section are all based on that calculation. Comparison of the predicted relative intensities with those of the accurately measured O~{\sc ii} lines is made to assess the new atomic data. An electron temperature of 1000~K is assumed throughout the analysis. In this section, spectral fits and discussion of the results are only given for the M1 3p\,$^{4}$D$^{\rm o}$\,--\,3s\,$^{4}$P and M10 3d\,$^{4}$F\,--\,3p\,$^{4}$D$^{\rm o}$ multiplets and the strongest multiplet of the 4f\,--\,3d transition array, M48a. Discussion of other multiplets of O~{\sc ii} is given in Appendix\,\ref{appendix:c}. \subsubsection{\label{oii_orls:v1} Multiplet 1, 3p $^4$D$^{\rm o}$ -- 3s $^4$P} The M1 multiplet is the strongest amongst all the O~{\sc ii} permitted transitions, and is one of the best observed multiplets. Comparisons of the observed and predicted relative intensities of the M1 fine-structure components are presented in Table\,\ref{relative:oii_v1}. The results of multi-Gaussian profile fitting are shown in Fig.\,\ref{4625-4680}. The strongest M1 line $\lambda$4649.13 is blended with $\lambda$4650.84 of the same multiplet; also blended are the three lines of C~{\sc iii} M1 3p~$^3$P$^{\rm o}$ -- 3s~$^3$S: $\lambda\lambda$4647.42, 4650.25 and 4651.47. Five Gaussian profiles of the same FWHM were used to fit the complex, with the laboratory wavelength differences utilized. The relative intensities of the three C~{\sc iii} M1 lines were assumed to be as in {\it LS}\,coupling, i.e. 5\,:\,3\,:\,1, but the relative intensities of the two O~{\sc ii} lines were not constrained. 
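The constrained deblending described above (one FWHM shared by all components, centroid separations fixed to the laboratory wavelength differences, and the C~{\sc iii} triplet tied to its {\it LS}-coupling 5\,:\,3\,:\,1 ratio) can be sketched in code. This is an illustrative reconstruction, not the software used in the paper; scipy is assumed, and the demo amplitudes are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

# Laboratory wavelengths (Angstrom) of the five components blended near 4650 A:
# O II M1 4649.13, 4650.84 and the C III M1 triplet 4647.42, 4650.25, 4651.47.
WAVES_OII = np.array([4649.13, 4650.84])
WAVES_CIII = np.array([4647.42, 4650.25, 4651.47])
CIII_WEIGHTS = np.array([5.0, 3.0, 1.0]) / 9.0  # LS-coupling 5:3:1, normalized

def blend_model(x, a_oii1, a_oii2, a_ciii, sigma, shift):
    """Five Gaussians with one shared width ``sigma``; centroid separations
    are fixed to the laboratory values (a single common shift is allowed);
    the C III amplitudes are tied to 5:3:1 while the O II ones stay free."""
    amps = np.concatenate(([a_oii1, a_oii2], a_ciii * CIII_WEIGHTS))
    cens = np.concatenate((WAVES_OII, WAVES_CIII)) + shift
    return sum(a * np.exp(-0.5 * ((x - c) / sigma) ** 2)
               for a, c in zip(amps, cens))

# Demo on synthetic data; all amplitudes here are made up for illustration.
rng = np.random.default_rng(0)
x = np.linspace(4640.0, 4660.0, 500)
y = blend_model(x, 1.0, 0.25, 0.6, 0.45, 0.05)
y += rng.normal(0.0, 0.004, x.size)

popt, pcov = curve_fit(blend_model, x, y, p0=[0.8, 0.2, 0.5, 0.4, 0.0])
a_oii_4649, a_oii_4650 = popt[0], popt[1]
```

Sharing the width and the offsets reduces the five-line complex to five free parameters, which is what makes the heavily overlapping $\lambda$4649/$\lambda$4650 pair separable at all.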
The intensity of the $\lambda$4649.13 line is 0.667$\pm$0.030, and that of the $\lambda$4650.84 line is 0.169$\pm$0.008. The intensity ratio of the $\lambda$4650.84 and $\lambda$4649.13 lines agrees with the theoretical ratio predicted in the intermediate coupling, but is slightly higher than the {\it LS}\,coupling value (Table\,\ref{relative:oii_v1}). Another three M1 lines, $\lambda\lambda$4661.63, 4673.73 and 4676.24, are free of line blending. The fitted intensities of the three lines agree with the predicted values, except for $\lambda$4673.73, whose measurement is obviously higher than both predicted values (Table\,\ref{relative:oii_v1}). The measurement uncertainties of the three lines are all less than 10 per cent. Such a large difference between the observed and predicted intensity of $\lambda$4673.73 cannot be readily explained. $\lambda$4673.73 coincides in wavelength with C~{\sc iii} M5 3p$^{\prime}$~$^3$P$_{1}$ -- 3s$^{\prime}$~$^3$P$^{\rm o}_{2}$ $\lambda$4673.95. However, as discussed in LSBC, a significant contribution from the C~{\sc iii} $\lambda$4673.95 line is unlikely because another C~{\sc iii} M5 line, $\lambda$4665.86, which is expected to be much stronger than $\lambda$4673.95, is not observed. $\lambda$4676.24 is blended with O~{\sc ii} M91 4f~G[4]$^{\rm o}_{7/2}$ -- 3d~$^2$D$_{5/2}$ $\lambda$4677.07 and N~{\sc ii} M61b 4f~D[3/2]$_{2}$ -- 3d~$^1$P$^{\rm o}_{1}$ $\lambda$4678.14, but the contributions of these two lines are probably insignificant, as estimated from the new effective recombination coefficients. Another two M1 lines, $\lambda\lambda$4638.86 and 4641.81, are both blended with two N~{\sc iii} M2 lines, $\lambda\lambda$4640.64 and 4641.85, which are excited by the Bowen fluorescence mechanism. Also blended with this feature is N~{\sc ii} M5 3p~$^3$P$_{1}$ -- 3s~$^3$P$^{\rm o}_{2}$ $\lambda$4643.09. 
Taking into account another N~{\sc iii} M2 line $\lambda$4634.14 and N~{\sc ii} M5 3p~$^3$P$_{2}$ -- 3s~$^3$P$^{\rm o}_{2}$ $\lambda$4630.54, we used six Gaussian profiles (the O~{\sc ii} $\lambda$4641.81 line coincides in wavelength with N~{\sc iii} $\lambda$4641.85, thus they were treated as a single component) to fit the complex, assuming that all the six components had the same FWHM. The results of the fitting are plotted in Fig.\,\ref{4625-4680}. Here the contribution of N~{\sc iii} M2 $\lambda$4641.85 to the total intensity of the blend at $\lambda$4642 was estimated from N~{\sc iii} M2 $\lambda$4634.14, which is free of line blending. The intensity ratio of the two N~{\sc iii} lines was assumed to be as in pure {\it LS}\,coupling, i.e. 1\,:\,5, considering the fact that the two lines decay from the same upper level; thus their intensity ratio depends only on the coupling scheme, not on the excitation mechanism. The relative intensities of the two O~{\sc ii} M1 lines were not constrained. The intensity of $\lambda$4641.81 thus obtained is 0.437$\pm$0.085. This measurement agrees well with the predicted value in the intermediate coupling (Table\,\ref{relative:oii_v1}). The resultant intensity of the $\lambda$4638.86 line is higher than the theoretical ratios (Table\,\ref{relative:oii_v1}), but its intensity could be unreliable due to the strength of the N~{\sc iii} M2 $\lambda$4640.64 line, which is more than 10 times stronger. The C~{\sc ii} M12.01 6d\,$^2$D -- 4p\,$^2$P$^{\rm o}$ lines, $\lambda\lambda$4637.63, 4638.91 and 4639.07, may also be blended in the $\lambda$4638 feature, but taking them into account makes the task of line deblending more difficult. The other M1 line, $\lambda$4696.35, which is expected to be the faintest of O~{\sc ii} M1, is observed (the inset in Fig.\,\ref{4625-4680}). 
It coincides in wavelength with O~{\sc ii} M89a 4f~D[3]$^{\rm o}_{5/2}$ -- 3d~$^2$D$_{3/2}$ $\lambda$4696.35, and is partially blended with another weak feature which was identified as N~{\sc ii} M61a 4f~D[5/2]$_{2}$ -- 3d~$^1$P$^{\rm o}_{1}$ $\lambda$4694.64. Accurate measurement of $\lambda$4696.35 is difficult due to its weakness. Assuming that the O~{\sc ii} M89a $\lambda$4696.35 line contributes 38 per cent to the total flux of the blend at $\lambda$4696, as estimated from the new O~{\sc ii} effective recombination coefficients, we obtained an intensity of 0.015 for the M1 $\lambda$4696.35 line, which agrees well with the newly predicted value (Table\,\ref{relative:oii_v1}). \begin{table} \centering \caption{Same as Table\,\ref{relative:nii_v3} but for a comparison of the observed and predicted relative intensities of O~{\sc ii} M1 lines detected in the spectrum of NGC\,7009.} \label{relative:oii_v1} \begin{tabular}{lcccccc} \hline Line & $J_2-J_1$ & $I_{\rm LS}$ & $I_{\rm IC}$ & $I_{\rm obs}$ & $\frac{I_{\rm obs}}{I_{\rm LS}}$ & $\frac{I_{\rm obs}}{I_{\rm IC}}$\\ \hline $\lambda$4638.86 & 3/2--1/2 & 0.208 & 0.283 & 0.502 & 2.414 & 1.774\\ $\lambda$4641.81 & 5/2--3/2 & 0.525 & 0.632 & 0.656 & 1.249 & 1.037\\ $\lambda$4649.13 & 7/2--5/2 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000\\ $\lambda$4650.84 & 1/2--1/2 & 0.208 & 0.290 & 0.263 & 1.264 & 0.907\\ $\lambda$4661.63 & 3/2--3/2 & 0.267 & 0.317 & 0.326 & 1.222 & 1.028\\ $\lambda$4673.73 & 1/2--3/2 & 0.042 & 0.051 & 0.078 & 1.857 & 1.535\\ $\lambda$4676.24 & 5/2--5/2 & 0.225 & 0.220 & 0.237 & 1.054 & 1.078\\ $\lambda$4696.35 & 3/2--5/2 & 0.025 & 0.024 & 0.023 & 0.920 & 0.962\\ \hline \end{tabular} \end{table} \begin{figure*} \begin{minipage}{128mm} \begin{center} \includegraphics[width=10cm,angle=-90]{fig32.pdf} \caption{Spectrum of NGC\,7009 from 4625 to 4680\,{\AA} showing the O~{\sc ii} M1 lines and other emission features. The solid continuous curve is the sum of Gaussian profile fits. 
The dashed curve is the sum of the Gaussian profiles of the three C~{\sc iii} M1 (3p\,$^{3}$P$^{\rm o}$\,--\,3s\,$^{3}$S) lines $\lambda\lambda$4647.42, 4650.25 and 4651.47, whose intensity ratio is fixed to be as in {\it LS}\,coupling, i.e. 5\,:\,3\,:\,1. The inset shows the very weak O~{\sc ii} M1 $\lambda$4696.35 (3p\,$^{4}$D$^{\rm o}_{3/2}$\,--\,3s\,$^{4}$P$_{5/2}$) line, which is located in a different wavelength region. Continuum has been subtracted and the spectrum has been normalized such that H$\beta$ has an integrated flux of 100. Extinction has not been corrected for.} \label{4625-4680} \end{center} \end{minipage} \end{figure*} \subsubsection{\label{oii_orls:v10} Multiplet 10, 3d $^4$F -- 3p $^4$D$^{\rm o}$} The observed and predicted relative intensities of O~{\sc ii} M10 lines detected in NGC\,7009 are presented in Table\,\ref{relative:oii_v10}. The emission features of the M10 lines and the results of Gaussian profile fitting are shown in Fig.\,\ref{4060-4115}. The two M10 lines $\lambda\lambda$4069.62 and 4069.89 are blended with [S~{\sc ii}] 3p$^{3}$~$^{2}$P$^{\rm o}_{3/2}$ -- 3p$^{3}$~$^{4}$S$^{\rm o}_{3/2}$ $\lambda$4068.60 and the three C~{\sc iii} M16 5g~$^{3}$G -- 4f~$^{3}$F$^{\rm o}$ lines: $\lambda\lambda$4067.94, 4068.91 and 4070.26. The blend feature at $\lambda$4070 is also only partially resolved from another O~{\sc ii} M10 line, $\lambda$4072.16. Six Gaussian profiles of the same FWHM were used to fit the complex ($\lambda\lambda$4069.62 and 4069.89 were treated as a single component, given their close wavelengths), assuming that the differences of the observed wavelengths were the same as those of the laboratory ones. It was also assumed that the relative intensities of the three C~{\sc iii} M16 lines were as in pure {\it LS}\,coupling, i.e. 1.00~:~1.31~:~1.71. The $\lambda$4072.16 line blends with N~{\sc ii} M38b 4f~F[7/2]$_{3}$ -- 3d~$^3$F$^{\rm o}_{2}$ $\lambda$4073.05 and O~{\sc ii} M48a 4f~G[5]$^{\rm o}_{9/2}$ -- 3d~$^{4}$F$_{7/2}$ $\lambda$4071.23. 
The intensity of the N~{\sc ii} $\lambda$4073.05 line was assumed to be negligible, and the O~{\sc ii} $\lambda$4071.23 line contributes about 9 per cent to the total intensity of the blend at $\lambda$4072. Apart from the assumptions above, no further constraints were set on the relative intensities of the following four features in the Gaussian profile fitting: the C~{\sc iii} M16 multiplet, [S~{\sc ii}] $\lambda$4068.60, the two O~{\sc ii} M10 lines at $\lambda$4070, and O~{\sc ii} $\lambda$4072.16. The resultant total intensity of $\lambda$4069.62,\,89 is 0.635$\pm$0.060. The intensity of the $\lambda$4072.16 line derived from the fits is 0.549$\pm$0.029. These intensities agree well with the theoretical values predicted in the intermediate coupling (Table\,\ref{relative:oii_v10}). The total intensity of the C~{\sc iii} M16 $\lambda$4069 multiplet yielded by Gaussian profile fitting is 0.288$\pm$0.058. The intensity ratio of the C~{\sc iii} M16 multiplet to the C~{\sc iii} M1 $\lambda$4650 multiplet (the measurements of C~{\sc iii} M1 are described in Section\,\ref{oii_orls:v1}) agrees with the predicted ratio within errors. Here the predicted C~{\sc iii} M16/M1 ratio is derived based on the radiative and dielectronic recombination coefficients given by P\'{e}quignot, Petitjean \& Boisson \cite{ppb1991} and Nussbaumer \& Storey \cite{ns1984}, respectively. The C~{\sc iii} M1 $\lambda$4650 multiplet is mainly excited by dielectronic recombination, while the C~{\sc iii} M16 $\lambda$4069 multiplet is mainly excited by radiative recombination (LSBC). $\lambda$4075.86 is expected to be the strongest line in O~{\sc ii} M10 (Table\,\ref{relative:oii_v10}). It is blended with [S~{\sc ii}] 3p$^{3}$~$^{2}$P$^{\rm o}_{1/2}$ -- 3p$^{3}$~$^{4}$S$^{\rm o}_{3/2}$ $\lambda$4076.35. We used the same technique as LSBC to derive the intensities. The flux contribution of [S~{\sc ii}] $\lambda$4076.35 to the blend at $\lambda$4076 was estimated from the measured intensity of [S~{\sc ii}] $\lambda$4068.60. 
A five-level atomic model was constructed to calculate the level populations of S$^{+}$, with an appropriate electron temperature and density assumed. The calculated [S~{\sc ii}] $\lambda$4068.60/$\lambda$4076.35 intensity ratio was 3.04, the same as the ratio given by LSBC. The resultant intensity of the O~{\sc ii} $\lambda$4075.86 line is 0.688$\pm$0.070. The intensity ratio of the $\lambda$4075.86 line and the O~{\sc ii} M1 $\lambda$4661.63 line (the measurement of the M1 $\lambda$4661.63 line is given in Section\,\ref{oii_orls:v1}) is 3.164, in close agreement with the predicted ratio 3.004 based on the latest O~{\sc ii} effective recombination coefficients. Here an electron temperature of 1000~K was assumed, as throughout the analysis. N~{\sc ii} M38a 4f~F[5/2]$_{2}$ -- 3d~$^3$F$^{\rm o}_{2}$ $\lambda$4076.91 and N~{\sc ii} M38a 4f~F[5/2]$_{3}$ -- 3d~$^3$F$^{\rm o}_{2}$ $\lambda$4077.40 also blend in the $\lambda$4076 feature, but their flux contributions were assumed to be negligible. $\lambda$4078.84 is much weaker, and is close to the blend feature at $\lambda$4076 (Fig.\,\ref{4060-4115}). The intensity of the $\lambda$4078.84 line derived from Gaussian profile fitting is 0.089$\pm$0.009. The intensity ratio of $\lambda$4078.84 to the $\lambda$4075.86 line agrees well with the predicted ratio in the intermediate coupling (Table\,\ref{relative:oii_v10}). The actual measurement uncertainty of this line could be even larger due to its weakness. Another M10 line, $\lambda$4085.11, blends with O~{\sc ii} M48b 4f~G[4]$^{\rm o}_{7/2}$ -- 3d~$^4$F$_{5/2}$ $\lambda$4083.90 and N~{\sc ii} M38b 4f~G[7/2]$_{3}$ -- 3d~$^{3}$F$^{\rm o}_{3}$ $\lambda$4082.89. The O~{\sc ii} $\lambda$4083.90 line contributes about 47 per cent to the total intensity, and the contribution of the N~{\sc ii} line is probably negligible. After the correction for the blend, the intensity of $\lambda$4085.11 agrees well with the newly predicted value (Table\,\ref{relative:oii_v10}). 
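Once the level-population model fixes the [S~{\sc ii}] $\lambda$4068.60/$\lambda$4076.35 ratio, the correction applied above reduces to simple arithmetic. A minimal sketch, using hypothetical blend fluxes (only the 3.04 ratio is taken from the text):

```python
def deblend_oii_4076(f_blend, f_sii_4068, sii_ratio=3.04):
    """Remove the [S II] 4076.35 contribution from the blend at 4076 A.

    The [S II] 4068.60/4076.35 intensity ratio (3.04, from the five-level
    S+ model) converts the separately measured 4068.60 flux into the
    expected 4076.35 flux, which is then subtracted from the blend."""
    f_sii_4076 = f_sii_4068 / sii_ratio
    return f_blend - f_sii_4076

# Illustrative numbers, not the paper's measurements:
f_oii_4075 = deblend_oii_4076(f_blend=1.0, f_sii_4068=0.95)  # ~0.6875
```

The same pattern (measure an unblended line of the contaminant, scale it by a theoretical ratio, subtract) underlies most of the blend corrections quoted in this section.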
Measurements of the remaining M10 lines are difficult: $\lambda$4092.93 (3d~$^{4}$F$_{7/2}$ -- 3p~$^{4}$D$^{\rm o}_{7/2}$) is partially blended with N~{\sc iii} M1 3p~$^2$P$^{\rm o}_{3/2}$ -- 3s~$^2$S$_{1/2}$ $\lambda$4097.33, which is excited by the Bowen fluorescence mechanism; $\lambda$4094.14 (3d~$^{4}$F$_{3/2}$ -- 3p~$^{4}$D$^{\rm o}_{5/2}$) is blended into the N~{\sc iii} $\lambda$4097.33 line; and $\lambda$4106.02 (3d~$^{4}$F$_{5/2}$ -- 3p~$^{4}$D$^{\rm o}_{7/2}$) is embedded in the wing of H~{\sc i} $\lambda$4101. \begin{table} \centering \caption{Same as Table\,\ref{relative:oii_v1} but for a comparison of the observed and predicted relative intensities of O~{\sc ii} M10 lines detected in the spectrum of NGC\,7009. Only the components with the most reliable measurements are presented.} \label{relative:oii_v10} \begin{tabular}{lcccccc} \hline Line & $J_2-J_1$ & $I_{\rm LS}$ & $I_{\rm IC}$ & $I_{\rm obs}$ & $\frac{I_{\rm obs}}{I_{\rm LS}}$ & $\frac{I_{\rm obs}}{I_{\rm IC}}$\\ \hline $\lambda$4069.89$^a$ & 5/2--3/2 & 0.730 & 0.956 & 0.923 & 1.264 & 0.965\\ $\lambda$4072.16$^b$ & 7/2--5/2 & 0.686 & 0.807 & 0.798 & 1.163 & 0.989\\ $\lambda$4075.86 & 9/2--7/2 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000\\ $\lambda$4078.84 & 3/2--3/2 & 0.112 & 0.141 & 0.130 & 1.161 & 0.922\\ $\lambda$4085.11$^c$ & 5/2--5/2 & 0.146 & 0.162 & 0.165 & 1.129 & 1.018\\ \hline \end{tabular} \begin{description} \item [$^a$] Including the contribution from the O~{\sc ii} M10 3d\,$^4$F$_{3/2}$\,--\,3p\,$^4$D$^{\rm o}_{1/2}$ $\lambda$4069.62 line. \item [$^b$] Corrected for the contribution from the O~{\sc ii} M48a 4f\,G[5]$^{\rm o}_{9/2}$\,--\,3d\,$^{4}$F$_{7/2}$ $\lambda$4071.23 line, which is about 9 per cent. Neglecting the N~{\sc ii} M38b 4f\,G[7/2]$_{3}$\,--\,3d\,$^{3}$F$^{\rm o}_{2}$ $\lambda$4073.05 line (about 2 per cent). \item [$^c$] Neglecting N~{\sc ii} M38b 4f\,G[7/2]$_{3}$\,--\,3d\,$^{3}$F$^{\rm o}_{3}$ $\lambda$4082.89 (less than 2 per cent). 
\end{description} \end{table} \begin{figure*} \begin{minipage}{128mm} \begin{center} \includegraphics[width=10cm,angle=-90]{fig33.pdf} \caption{Spectrum of NGC\,7009 from 4060 to 4115\,{\AA} showing the O~{\sc ii} M10 lines and some O~{\sc ii} lines from the 4f\,--\,3d configuration. The strongest O~{\sc ii} 4f\,--\,3d line $\lambda$4089.29 (M48a 4f\,G[5]$^{\rm o}_{11/2}$\,--\,3d\,$^4$F$_{9/2}$) is observed. The solid continuous curve is the sum of Gaussian profile fits. The dashed curve is the sum of the Gaussian profiles of the three C~{\sc iii} M16 (5g\,$^{3}$G\,--\,4f\,$^{3}$F$^{\rm o}$) lines $\lambda\lambda$4067.94, 4068.92 and 4070.31, whose intensity ratio is fixed to be as in {\it LS}\,coupling, i.e. 1.00\,:\,1.31\,:\,1.71. Continuum has been subtracted and the spectrum has been normalized such that H$\beta$ has an integrated flux of 100. Extinction has not been corrected for.} \label{4060-4115} \end{center} \end{minipage} \end{figure*} \subsubsection{\label{oii_orls:4f-3d} 4f -- 3d transitions} Several dozen transitions of this group were identified and presented in the emission line list of NGC\,7009 (Paper~I). Table\,\ref{relative:oii_4f-3d} gives the observed and predicted relative intensities of the 4f\,--\,3d lines of O~{\sc ii} with the most reliable measurements. For most cases, the measured intensities agree with both calculations. Here we only present spectral fits and discussion of the M48a 4f\,G[5]$^{\rm o}$\,--\,3d\,$^{4}$F multiplet of O~{\sc ii}. Discussion of other multiplets of the O~{\sc ii} 4f\,--\,3d configuration is given in Appendix\,\ref{appendix:oii:4f-3d}. Figs.\,\ref{4260-4310} and \ref{4444-4504} show some of the detected O~{\sc ii} ORLs from the 4f\,--\,3d configuration. The strongest 4f\,--\,3d line of O~{\sc ii} observed in NGC\,7009, $\lambda$4089.29 (M48a 4f\,G[5]$^{\rm o}_{11/2}$\,--\,3d\,$^{4}$F$_{9/2}$), is shown in Fig.\,\ref{4060-4115}. 
A few O~{\sc ii} lines of the 4f\,--\,3d configuration, which are blended with the N~{\sc ii} lines of the same configuration, are shown in Fig.\,\ref{4005-4058}. The $\lambda$4089.29 (M48a 4f\,G[5]$^{\rm o}_{11/2}$\,--\,3d\,$^4$F$_{9/2}$) line is blended with the $\lambda$4088.27 (M48a 4f\,G[5]$^{\rm o}_{9/2}$\,--\,3d\,$^4$F$_{9/2}$) and the Si~{\sc iv} M1 4p\,$^2$P$^{\rm o}_{3/2}$\,--\,4s\,$^2$S$_{1/2}$ $\lambda$4088.86 lines (Fig.\,\ref{4060-4115}). The $\lambda$4088.27 line contributes less than 2 per cent to the total flux of the blend at $\lambda$4089. The contribution of the Si~{\sc iv} $\lambda$4088.86 line was estimated from the observed Si~{\sc iv} M1 4p\,$^2$P$^{\rm o}_{1/2}$\,--\,4s\,$^2$S$_{1/2}$ $\lambda$4116.10, assuming that the relative intensities of the two Si~{\sc iv} M1 lines are as in pure {\it LS}\,coupling, i.e. 2\,:\,1. The resultant intensity of the $\lambda$4089.29 line is 0.265$\pm$0.013. The intensity ratio of $\lambda$4089.29 and the O~{\sc ii} M1 $\lambda$4649.13 line is 0.398, which agrees with the new theoretical prediction (0.387) within measurement errors. The other M48a line, $\lambda$4071.23 (4f\,G[5]$^{\rm o}_{9/2}$\,--\,3d\,$^4$F$_{7/2}$), is blended with the O~{\sc ii} M10 3d\,$^{4}$F$_{7/2}$\,--\,3p\,$^{4}$D$^{\rm o}_{5/2}$ $\lambda$4072.16 line (see Section\,\ref{oii_orls:v10}), which is 10 times stronger. \begin{table*} \begin{minipage}{110mm} \caption{Comparison of the observed and predicted relative intensities of the O~{\sc ii} 4f\,--\,3d lines detected in the spectrum of NGC\,7009. {\it LSBC} denotes the predicted intensities based on the effective recombination coefficients of Liu et al. \cite{liu1995}, and {\it PJS} those based on the unpublished effective recombination coefficients of P.~J. Storey. 
An electron temperature of 1000~K is assumed for the theoretical predictions.} \label{relative:oii_4f-3d} \centering \begin{tabular}{lcllllr} \hline Line & $J_2-J_1$ & $I_{\rm pred}$ & $I_{\rm pred}$ & $I_{\rm obs}$ & $\frac{I_{\rm obs}}{I_{\rm pred}}$ & $\frac{I_{\rm obs}}{I_{\rm pred}}$\\ & & {\it LSBC} & {\it PJS} & & {\it LSBC} & {\it PJS}\\ \hline M48a 4f G[5]$^{\rm o}$ -- 3d $^4$F & & & & &\\ $\lambda$4089.29 & 11/2--9/2 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000\\ M48b 4f G[4]$^{\rm o}$ -- 3d $^4$F & & & & &\\ $\lambda$4083.90 & 7/2--5/2 & 0.285 & 0.316 & 0.326 & 1.141 & 1.032\\ M48c 4f G[3]$^{\rm o}$ -- 3d $^4$F & & & & &\\ $\lambda$4087.15 & 5/2--3/2 & 0.271 & 0.347 & 0.347 & 1.280 & 1.000\\ M50a 4f F[4]$^{\rm o}$ -- 3d $^4$F & & & & &\\ $\lambda$4062.94 & 9/2--9/2 & 0.125 & 0.126 & 0.137 & 1.096 & 1.087\\ M50b 4f F[3]$^{\rm o}$ -- 3d $^4$F & & & & &\\ $\lambda$4048.21 & 7/2--7/2 & 0.063 & 0.068 & 0.076 & 1.206 & 1.120\\ M53a 4f D[3]$^{\rm o}$ -- 3d $^4$P & & & & &\\ $\lambda$4303.83$^a$ & 7/2--5/2 & 0.413 & 0.522 & 0.534 & 1.293 & 1.022\\ M53b 4f D[2]$^{\rm o}$ -- 3d $^4$P & & & & &\\ $\lambda$4294.78$^b$ & 5/2--3/2 & 0.232 & 0.326 & 0.253 & 1.091 & 0.776\\ $\lambda$4307.23 & 3/2--1/2 & 0.105 & 0.118 & 0.108 & 1.031 & 0.919\\ M53c 4f D[1]$^{\rm o}$ -- 3d $^4$P & & & & &\\ $\lambda$4288.82$^c$ & 3/2--1/2 & 0.151 & 0.123 & 0.145 & 0.958 & 1.176\\ M55 4f G[3]$^{\rm o}$ -- 3d $^4$P & & & & &\\ $\lambda$4291.25$^d$ & 7/2--5/2 & 0.156 & 0.188 & 0.221 & 1.414 & 1.176\\ M63a 4f D[3]$^{\rm o}$ -- 3d $^4$D & & & & &\\ $\lambda$4357.25$^e$ & 7/2--5/2 & 0.057 & 0.088 & 0.094 & 1.651 & 1.067\\ M67c 4f F[2]$^{\rm o}$ -- 3d $^4$D & & & & &\\ $\lambda$4282.96$^f$ & 5/2--3/2 & 0.154 & 0.168 & 0.185 & 1.200 & 1.101\\ M76b 4f G[4]$^{\rm o}$ -- 3d $^2$F & & & & &\\ $\lambda$4371.62$^g$ & 9/2--7/2 & 0.097 & 0.109 & 0.127 & 1.303 & 1.159\\ M78a 4f F[4]$^{\rm o}$ -- 3d $^2$F & & & & &\\ $\lambda$4313.44$^h$ & 9/2--7/2 & 0.121 & 0.133 & 0.139 & 1.150 & 1.045\\ M78b 4f F[3]$^{\rm 
o}$ -- 3d $^2$F & & & & &\\ $\lambda$4285.69 & 7/2--5/2 & 0.189 & 0.264 & 0.229 & 1.208 & 0.869\\ M86a 4f D[3]$^{\rm o}$ -- 3d $^2$P & & & & &\\ $\lambda$4491.23 & 5/2--3/2 & 0.137 & 0.198 & 0.215 & 1.569 & 1.086\\ M86b 4f D[2]$^{\rm o}$ -- 3d $^2$P & & & & &\\ $\lambda$4489.49 & 3/2--1/2 & 0.065 & 0.082 & 0.083 & 1.271 & 1.004\\ M88 4f G[3]$^{\rm o}$ -- 3d $^2$P & & & & &\\ $\lambda$4477.90$^i$ & 5/2--3/2 & 0.086 & 0.109 & 0.113 & 1.316 & 1.108\\ M92a 4f F[4]$^{\rm o}$ -- 3d $^2$D & & & & &\\ $\lambda$4609.44 & 7/2--5/2 & 0.428 & 0.444 & 0.483 & 1.126 & 1.088\\ M92b 4f F[3]$^{\rm o}$ -- 3d $^2$D & & & & &\\ $\lambda$4602.13$^j$ & 5/2--3/2 & 0.171 & 0.194 & 0.195 & 1.139 & 1.003\\ \hline \end{tabular} \begin{description} \item [$^a$] Corrected for the contribution from O~{\sc ii} M65a 4f\,G[5]$^{\rm o}_{9/2}$\,--\,3d\,$^4$D$_{7/2}$ $\lambda$4303.61 (about 12 per cent). Neglecting O~{\sc ii} M53a 4f\,D[3]$^{\rm o}_{5/2}$\,--\,3d\,$^4$P$_{5/2}$ $\lambda$4304.08 ($\sim$3 per cent). \item [$^b$] Including O~{\sc ii} M53b 4f\,D[2]$^{\rm o}_{3/2}$\,--\,3d\,$^4$P$_{3/2}$ $\lambda$4294.92 ($\sim$12 per cent). \item [$^c$] Including O~{\sc ii} M53c 4f\,D[1]$^{\rm o}_{1/2}$\,--\,3d\,$^4$P$_{1/2}$ $\lambda$4288.82. \item [$^d$] Including O~{\sc ii} M78c 4f\,F[2]$^{\rm o}_{5/2}$\,--\,3d\,$^{2}$F$_{5/2}$ $\lambda$4292.21. Neglecting O~{\sc ii} M55 4f\,G[3]$^{\rm o}_{5/2}$\,--\,3d\,$^4$P$_{5/2}$ $\lambda$4291.86 and O~{\sc ii} M78c 4f\,F[2]$^{\rm o}_{3/2}$\,--\,3d\,$^{2}$F$_{5/2}$ $\lambda$4292.95. \item [$^e$] Including O~{\sc ii} M63a 4f\,D[3]$^{\rm o}_{5/2}$\,--\,3d\,$^4$D$_{3/2}$ $\lambda$4357.25. Neglecting O~{\sc ii} M63a 4f\,D[3]$^{\rm o}_{5/2}$\,--\,3d\,$^4$D$_{5/2}$ $\lambda$4357.52. \item [$^f$] Including O~{\sc ii} M67c 4f\,F[2]$^{\rm o}_{5/2}$\,--\,3d\,$^4$D$_{5/2}$ $\lambda$4283.25. Neglecting O~{\sc ii} M78a 4f\,F[4]$^{\rm o}_{7/2}$\,--\,3d\,$^2$F$_{5/2}$ $\lambda$4282.02. 
\item [$^g$] Neglecting O~{\sc ii} M76b 4f\,G[4]$^{\rm o}_{7/2}$\,--\,3d\,$^{2}$F$_{7/2}$ $\lambda$4371.24 (less than 2 per cent). \item [$^h$] Corrected for the contribution from the O~{\sc ii} M78a 4f\,F[4]$^{\rm o}_{7/2}$\,--\,3d\,$^{2}$F$_{7/2}$ $\lambda$4312.11 line ($\sim$33 per cent). \item [$^i$] Neglecting O~{\sc iii} M45a 5g\,H[11/2]$^{\rm o}_{5,\,6}$\,--\,4f\,G[9/2]$_{5}$ $\lambda$4477.91 (less than 2 per cent). \item [$^j$] Corrected for the contribution from N\,{\sc ii} M5 3p~$^{3}$P$_{2}$\,--\,3s\,$^{3}$P$^{\rm o}_{1}$ $\lambda$4601.48 (26 per cent). Neglecting Ne~{\sc ii} M64d 4f\,2[2]$^{\rm o}_{5/2}$\,--\,3d\,$^{4}$P$_{5/2}$ $\lambda$4600.16. \end{description} \end{minipage} \end{table*} \begin{figure} \begin{center} \includegraphics[width=7.5cm,angle=-90]{fig34.pdf} \caption{Spectrum of NGC\,7009 from 4260 to 4310\,{\AA} showing the O~{\sc ii} ORLs from the M53a 4f~D[3]$^{\rm o}$ -- 3d~$^4$P, M53b 4f~D[2]$^{\rm o}$ -- 3d~$^4$P, M53c 4f~D[1]$^{\rm o}$ -- 3d~$^4$P, M67a 4f~F[4]$^{\rm o}$ -- 3d~$^4$D, M67b 4f~F[3]$^{\rm o}$ -- 3d~$^4$D, and M67c 4f~F[2]$^{\rm o}$ -- 3d~$^4$D multiplets. The very broad emission feature at $\lambda$4275 is formed by more than 10 O~{\sc ii} ORLs from the 4f -- 3d configuration blended together, with positions and wavelengths of the components labeled. The continuous curve is the sum of Gaussian profile fits. Continuum has been subtracted and the spectrum has been normalized such that H$\beta$ has an integrated flux of 100. Extinction has not been corrected for.} \label{4260-4310} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=7.5cm,angle=-90]{fig35.pdf} \caption{Spectrum of NGC\,7009 from 4444 to 4504\,{\AA} showing the O~{\sc ii} and O~{\sc iii} ORLs. The only Mg~{\sc ii} line detected in NGC\,7009, Mg~{\sc ii} M4 4f~$^{2}$F$^{\rm o}$ -- 3d~$^{2}$D $\lambda$4481.20, is present. The continuous curve is the sum of Gaussian profile fits. 
Continuum has been subtracted and the spectrum has been normalized such that H$\beta$ has an integrated flux of 100. Extinction has not been corrected for.} \label{4444-4504} \end{center} \end{figure} \subsubsection{\label{oii_orls:other_parentage} Multiplets with parentage other than $^3$P} O~{\sc ii} ORLs from multiplets with parentage other than $^3$P are detected in NGC\,7009; they include M15, M36, M101 and M105, which were also observed by LSBC. However, the effective recombination coefficients are only available for two of those multiplets (cf. the discussion in Section\,\ref{diagnose:niioii}). Table\,\ref{oii_parent:1d} presents the line intensities. The intensities observed by LSBC are also listed. The O~{\sc ii} M15 3p$^{\prime}$\,$^{2}$F$^{\rm o}$\,--\,3s$^{\prime}$\,$^{2}$D lines $\lambda\lambda$4590.97 and 4596.18 are shown in Fig.\,\ref{4555-4625}. The O~{\sc ii} M36 3d$^{\prime}$\,$^2$G\,--\,3p$^{\prime}$\,$^2$F$^{\rm o}$ lines $\lambda\lambda$4185.45 and 4189.79 are shown in Fig.\,\ref{4176-4260}. \begin{table*} \begin{minipage}{110mm} \caption{The O~{\sc ii} optical recombination lines in NGC\,7009 with parentage other than $^{3}$P$_{J}$. Line intensities measured by LSBC are also presented. 
All intensities are normalized such that $I$(H$\beta$) = 100.} \label{oii_parent:1d} \centering \begin{tabular}{lcclll} \hline Line & Mult.& Transition & Current & \multicolumn{2}{c}{LSBC}\\ (\AA) & & & & PA=0$^{\rm o}$ & PA=45$^{\rm o}$\\ \hline $\lambda$4590.97 & M15 & ($^{1}$D)3p~$^{2}$F$^{\rm o}_{7/2}$ -- ($^{1}$D)3s~$^{2}$D$_{5/2}$ & 0.087 & 0.0907 & 0.0752\\ $\lambda$4596.18$^a$ & M15 & ($^{1}$D)3p~$^{2}$F$^{\rm o}_{5/2}$ -- ($^{1}$D)3s~$^{2}$D$_{3/2}$ & 0.062 & 0.0424 & 0.0476\\ $\lambda$4185.45 & M36 & ($^{1}$D)3d~$^{2}$G$_{7/2}$ -- ($^{1}$D)3p~$^{2}$F$^{\rm o}_{5/2}$ & 0.070 & 0.0572 & 0.0580\\ $\lambda$4189.79$^b$ & M36 & ($^{1}$D)3d~$^{2}$G$_{9/2}$ -- ($^{1}$D)3p~$^{2}$F$^{\rm o}_{7/2}$ & 0.083 & 0.0703 & 0.0747\\ $\lambda$4253.89$^c$ & M101 & ($^{1}$D)4f~H[5]$^{\rm o}_{11/2}$ -- ($^{1}$D)3d~$^{1}$G$_{9/2}$ & 0.058 & 0.0318: & 0.0249:\\ $\lambda$4843.37$^d$ & M105 & ($^{1}$D)4f~P[1]$^{\rm o}_{3/2}$ -- ($^{1}$D)3d~$^{2}$S$_{1/2}$ & 0.021 & & \\ \hline \end{tabular} \begin{description} \item [$^a$] Including O~{\sc ii} M15 ($^{1}$D)3p~$^{2}$F$^{\rm o}_{5/2}$ -- ($^{1}$D)3s~$^{2}$D$_{5/2}$ $\lambda$4595.96. \item [$^b$] Including O~{\sc ii} M36 ($^{1}$D)3d~$^{2}$G$_{7/2}$ -- ($^{1}$D)3p~$^{2}$F$^{\rm o}_{7/2}$ $\lambda$4189.59. \item [$^c$] Including O~{\sc ii} M101 ($^{1}$D)4f~H[5]$^{\rm o}_{9/2}$ -- ($^{1}$D)3d~$^{1}$G$_{9/2}$ $\lambda$4253.91 and O~{\sc ii} M101 ($^{1}$D)4f~H[5]$^{\rm o}_{9/2}$ -- ($^{1}$D)3d~$^{1}$G$_{7/2}$ $\lambda$4254.12. \item [$^d$] Including O~{\sc ii} $\lambda$4843.37 M105 ($^{1}$D)4f~P[1]$^{\rm o}_{1/2}$ -- ($^{1}$D)3d~$^{2}$S$_{1/2}$. \end{description} \end{minipage} \end{table*} \subsubsection{\label{oii_orls;summary} Comments on the O~{\sc ii} recombination spectrum} The unpublished effective recombination coefficients of P.~J. Storey used in the current analysis of the O~{\sc ii} recombination spectrum are accurate at low temperatures ($T_\mathrm{e}\,<$\,10\,000~K). 
Appropriate assumptions have been made in the calculation, as described in Section\,\ref{diagnose:data}. The O~{\sc ii} recombination spectrum is the richest amongst those of all the heavy-element ions observed in NGC\,7009. The best observed multiplets of the 3\,--\,3 transitions of O~{\sc ii} are M1 (3p\,$^{4}$D$^{\rm o}$\,--\,3s\,$^{4}$P), M2 (3p\,$^{4}$P$^{\rm o}$\,--\,3s\,$^{4}$P), M10 (3d\,$^{4}$F\,--\,3p\,$^{4}$D$^{\rm o}$), M19 (3d\,$^{4}$P\,--\,3p\,$^{4}$P$^{\rm o}$), M25 (3d\,$^{2}$F\,--\,3p\,$^{2}$D$^{\rm o}$) and M28 (3d\,$^{4}$P\,--\,3p\,$^{4}$S$^{\rm o}$). Although some fine-structure components of those multiplets are blended with other lines, multi-Gaussian profile fitting gives reliable intensities for most of them. The 4f\,--\,3d lines with the most reliable measurements are presented in Table\,\ref{relative:oii_4f-3d}. The O~{\sc ii} multiplets are not affected by any other excitation mechanisms (e.g. fluorescence, charge transfer), and the strongest lines have been used for plasma diagnostics (Section\,\ref{diagnose:niioii}). Several O~{\sc ii} lines with the parentage $^{1}$D are detected, as presented in Table\,\ref{oii_parent:1d}. Only the dielectronic recombination coefficients for the M15 and M36 multiplets are available, from Nussbaumer \& Storey \cite{ns1984}; the calculation of PJS does not extend to such high energies. \subsection{\label{neii_orls} The Ne~{\sc ii} optical recombination spectrum} Several dozen emission features in the spectrum of NGC\,7009 were identified as Ne~{\sc ii} permitted lines. In this section, we present spectral fits and discussion of the results only for the M2 3p\,$^4$D$^{\rm o}$\,--\,3s\,$^4$P, M13 3d\,$^{4}$F\,--\,3p\,$^{4}$D$^{\rm o}$, M9 3p$^{\prime}$\,$^2$F$^{\rm o}$\,--\,3s$^{\prime}$\,$^2$D, and M55e 4f\,2[5]$^{\rm o}$\,--\,3d\,$^4$F multiplets. Discussion of other multiplets of Ne~{\sc ii} is given in Appendix\,\ref{appendix:d}. Line intensities measured by LLB01 are used for comparison where available. 
Predicted intensities that are based on the {\it LS}\,coupling calculations of Kisielius et al. \cite{kisielius1998} are also used for the analysis. \subsubsection{\label{neii_orls:v2} Multiplet 2, 3p $^4$D$^{\rm o}$ -- 3s $^4$P} This is the strongest multiplet of Ne~{\sc ii}. Fig.\,\ref{3290-3348} shows that $\lambda$3334.84 (3p~$^4$D$^{\rm o}_{7/2}$ -- 3s~$^4$P$_{5/2}$) is affected by the O~{\sc iii} fluorescence line M3 3p~$^3$S$_{1}$ -- 3s~$^3$P$^{\rm o}_{2}$ $\lambda$3340.76, which is more than 10 times stronger. The fitted intensity of $\lambda$3334.84 is 0.428, which is higher than LLB01 (0.345). Given that our observational data are the same as LLB01, such a difference is probably due to the different reddening laws used. In addition, measurements of $\lambda$3334.84 could be overestimated due to the O~{\sc iii} line. Another line $\lambda$3355.02 (3p~$^4$D$^{\rm o}_{5/2}$ -- 3s~$^4$P$_{3/2}$) blends with He~{\sc i} M8 7p~$^1$P$^{\rm o}_{1}$ -- 2s~$^1$S$_{0}$ $\lambda$3354.55 (Fig.\,\ref{3345-3410}), which contributes 43 per cent to the total intensity, as estimated from the calculations of Benjamin, Skillman \& Smits \cite{bss99}. The resultant intensity of $\lambda$3355.02 is 0.223$\pm$0.045. The intensity ratio $\lambda$3355.02/$\lambda$3334.84 agrees well with the predicted value. The intensity of another line $\lambda$3360.60 (3p~$^4$D$^{\rm o}_{3/2}$ -- 3s~$^4$P$_{1/2}$) is 0.040, which is unreliable (Fig.\,\ref{3345-3410}). The other M2 lines are not observed. \subsubsection{\label{neii_orls:v13} Multiplet 13, 3d $^4$F -- 3p $^4$D$^{\rm o}$} The Ne~{\sc ii} M13 lines lie in the far blue and are difficult to observe due to their weakness as well as the relatively poor S/N. Only $\lambda$3218.19 (3d~$^4$F$_{9/2}$ -- 3p~$^4$D$^{\rm o}_{7/2}$) and $\lambda$3244.09 (3d~$^4$F$_{7/2}$ -- 3p~$^4$D$^{\rm o}_{5/2}$) are observed, and they are shown in Fig.\,\ref{3180-3250}. The fitted intensity of $\lambda$3218.19 is 0.234, which agrees well with LLB01 (0.229).
Here the blended Ne~{\sc ii} M16 3d~$^4$P$_{1/2}$ -- 3p~$^4$D$^{\rm o}_{3/2}$ $\lambda$3217.30 was assumed to be negligible ($\sim$3 per cent). The intensity ratio of $\lambda$3218.19 to the Ne~{\sc ii} M2 $\lambda$3334.84 line is 0.546, which agrees with the predicted ratio 0.594. The intensity of the $\lambda$3244.09 line is 0.074, higher than the measurement of LLB01 (0.0576). \begin{figure} \begin{center} \includegraphics[width=7.5cm,angle=-90]{fig36.pdf} \caption{Spectrum of NGC\,7009 from 3180 to 3250\,{\AA} showing the Ne~{\sc ii} M13 lines and other emission features. The inset shows the profiles of the two strong lines He~{\sc i} M3 4p\,$^3$P$^{\rm o}$\,--\,2s\,$^3$S $\lambda$3187.74 and He~{\sc ii} 5f\,$^2$F$^{\rm o}_{7/2}$\,--\,3d\,$^2$D$_{5/2}$ $\lambda$3203.17. The continuous curve is the sum of Gaussian profile fits. Continuum has been subtracted and the spectrum has been normalized such that H$\beta$ has an integrated flux of 100. Extinction has not been corrected for.} \label{3180-3250} \end{center} \end{figure} \subsubsection{\label{neii_orls:v9} Multiplet 9, 3p$^{\prime}$ $^2$F$^{\rm o}$ -- 3s$^{\prime}$ $^2$D} This is the only Ne~{\sc ii} multiplet of an excited-state parentage (2s$^{2}$2p$^{4}$\,$^{1}$D) detected in the spectrum of NGC\,7009, and shown in Fig.\,\ref{3540-3595}. The fitted intensity of $\lambda$3568.50 (3p$^{\prime}$\,$^2$F$^{\rm o}_{7/2}$\,--\,3s$^{\prime}$\,$^2$D$_{5/2}$) is 0.168$\pm$0.016. The measured total intensity of the other two lines $\lambda\lambda$3574.18,\,61 is 0.053, with an uncertainty of about 10 per cent. Thus the intensity ratio ($\lambda\lambda$3574.18\,+\,3574.61)/$\lambda$3568.50 is 0.32, which differs from the {\it LS}\,coupling value 0.75. No measurements of this multiplet are given in LLB01. \begin{figure} \begin{center} \includegraphics[width=7.5cm,angle=-90]{fig37.pdf} \caption{Spectrum of NGC\,7009 from 3540 to 3595\,{\AA} showing the Ne~{\sc ii} M9 and M34 lines as well as other emission features. 
The continuous curve is the sum of Gaussian profile fits. Continuum has been subtracted and the spectrum has been normalized such that H$\beta$ has an integrated flux of 100. Extinction has not been corrected for.} \label{3540-3595} \end{center} \end{figure} \subsubsection{\label{neii_orls:4f-3d} 4f -- 3d transitions} Several dozen Ne~{\sc ii} ORLs of the 4f -- 3d configuration are detected or estimated by deblending techniques (i.e. Gaussian profile fitting). The measured intensities are 10$^{-4}$ of H$\beta$ or even lower. Table\,\ref{relative:neii_4f-3d} presents the measured and predicted relative intensities of the 4f\,--\,3d transitions with the most reliable measurements. The predicted intensities are based on the preliminary effective recombination coefficients calculated by P.~J. Storey (private communication) for a few selected Ne~{\sc ii} 4f\,--\,3d quartet lines. In this section, we only present spectral fits and results for the M55e 4f\,2[5]$^{\rm o}$\,--\,3d\,$^4$F multiplet of Ne~{\sc ii}. Discussion of other multiplets of the 4f\,--\,3d configuration is given in Appendix\,\ref{appendix:neii:4f-3d}. Some of the Ne~{\sc ii} lines belonging to the M52a 4f\,2[4]$^{\rm o}$\,--\,3d\,$^4$D and M52b 4f\,2[3]$^{\rm o}$\,--\,3d\,$^4$D multiplets are shown in Fig.\,\ref{4176-4260}. Fig.\,\ref{4380-4445} also shows many Ne~{\sc ii} lines detected in the spectrum of NGC\,7009. The $\lambda$4391.99 (M55e 4f\,2[5]$^{\rm o}_{11/2}$\,--\,3d\,$^4$F$_{9/2}$) line observed in NGC\,7009 is shown in Fig.\,\ref{4380-4445}. Its intensity is 0.077$\pm$0.008, which agrees with LLB01: 0.0728 (ESO 1.52~m), 0.0779 (WHT 1996) and 0.0713 (WHT 1997). The contribution from the $\lambda$4392.00 (4f\,2[5]$^{\rm o}_{9/2}$\,--\,3d\,$^4$F$_{9/2}$) line of the same multiplet is negligible.
The other M55e line $\lambda$4409.30 (4f\,2[5]$^{\rm o}_{9/2}$\,--\,3d\,$^4$F$_{7/2}$) is blended with the Ne~{\sc ii} $\lambda$4409.78 (M55b 4f\,2[3]$^{\rm o}_{7/2}$\,--\,3d\,$^4$F$_{9/2}$) line, whose intensity is negligible, and the O~{\sc iii} $\lambda$4408.29 (M46a 5g\,H[5]$^{\rm o}_{4}$\,--\,4f\,G[4]$_{3}$) line, which contributes about 10 per cent to the total intensity, as estimated from the effective recombination coefficients of Kisielius \& Storey \cite{ks1999}. The resultant intensity of the $\lambda$4409.30 line is 0.060, which agrees with LLB01: 0.0631 (ESO 1.52~m), 0.0615 (WHT 1996) and 0.0624 (WHT 1997). The intensity ratio $\lambda$4409.30/$\lambda$4391.99 observed in NGC\,7009 is 20 per cent higher than the predicted value (Table\,\ref{relative:neii_4f-3d}). \begin{table} \centering \caption{Comparison of the observed and predicted relative intensities of the Ne~{\sc ii} 4f\,--\,3d lines detected in the spectrum of NGC\,7009. The predicted intensities $I_{\rm pred}$ are based on the preliminary effective recombination coefficients for some of the strongest Ne~{\sc ii} lines from the 4f\,--\,3d configuration (P.~J. 
Storey, private communication).} \label{relative:neii_4f-3d} \begin{tabular}{lcllr} \hline Line & $J_2-J_1$ & $I_{\rm pred}$ & $I_{\rm obs}$ & $\frac{I_{\rm obs}}{I_{\rm pred}}$\\ \hline M55e 4f 2[5]$^{\rm o}$ -- 3d $^4$F & & & &\\ $\lambda$4391.99$^a$ & 11/2--9/2 & 1.000 & 1.000 & 1.000\\ $\lambda$4409.30$^b$ & 9/2--7/2 & 0.665 & 0.812 & 1.221\\ M52a 4f 2[4]$^{\rm o}$ -- 3d $^4$D & & & &\\ $\lambda$4219.75$^c$ & 9/2--7/2 & 0.555 & 0.831 & 1.497\\ $\lambda$4233.85 & 7/2--5/2 & 0.139 & 0.227 & 1.631\\ M52b 4f 2[3]$^{\rm o}$ -- 3d $^4$D & & & &\\ $\lambda$4231.64$^d$ & 7/2--5/2 & 0.131 & 0.370 & 2.811\\ $\lambda$4250.65 & 5/2--3/2 & 0.088 & 0.216 & 2.461\\ M57b 4f 1[4]$^{\rm o}$ -- 3d $^4$F & & & &\\ $\lambda$4397.99 & 7/2--5/2 & 0.346 & 0.325 & 0.941\\ M60c 4f 1[3]$^{\rm o}$ -- 3d $^2$F & & & &\\ $\lambda$4428.64$^e$ & 7/2--5/2 & 0.437 & 0.591 & 1.353\\ M61a 4f 2[4]$^{\rm o}$ -- 3d $^2$D & & & &\\ $\lambda$4430.94$^f$ & 7/2--5/2 & 0.283 & 0.452 & 1.598\\ M61d 4f 2[2]$^{\rm o}$ -- 3d $^2$D & & & &\\ $\lambda$4457.05$^g$ & 5/2--3/2 & 0.098 & 0.361 & 3.662\\ M65 4f 0[3]$^{\rm o}$ -- 3d $^4$P & & & &\\ $\lambda$4413.22$^h$ & 7/2--5/2 & 0.239 & 0.376 & 1.574\\ M66c 4f 1[3]$^{\rm o}$ -- 3d $^4$P & & & &\\ $\lambda$4421.39 & 5/2--3/2 & 0.089 & 0.113 & 1.263\\ \hline \end{tabular} \begin{description} \item [$^a$] Neglecting Ne~{\sc ii} M55e 4f\,2[5]$^{\rm o}_{9/2}$\,--\,3d\,$^4$F$_{9/2}$ $\lambda$4392.00. \item [$^b$] Corrected for the contribution from the O~{\sc iii} M46a 5g\,H[5]$^{\rm o}_{4}$\,--\,4f\,G[4]$_{3}$ $\lambda$4408.29 line ($\sim$10 per cent). Neglecting Ne~{\sc ii} M55b 4f\,2[3]$^{\rm o}_{7/2}$\,--\,3d\,$^4$F$_{9/2}$ $\lambda$4409.78. \item [$^c$] Including Ne~{\sc ii} M52d 4f\,2[2]$^{\rm o}_{5/2}$\,--\,3d\,$^4$D$_{5/2}$ $\lambda$4220.89. Neglecting Ne~{\sc ii} M52a 4f\,2[4]$^{\rm o}_{7/2}$\,--\,3d\,$^4$D$_{7/2}$ $\lambda$4219.37. \item [$^d$] Including Ne~{\sc ii} M52b 4f\,2[3]$^{\rm o}_{5/2}$\,--\,3d\,$^4$D$_{5/2}$ $\lambda$4231.53. 
\item [$^e$] Including Ne~{\sc ii} M61b 4f\,2[3]$^{\rm o}_{7/2}$\,--\,3d\,$^2$D$_{5/2}$ $\lambda$4428.52. Neglecting Ne~{\sc ii} M60c 4f\,1[3]$^{\rm o}_{5/2}$\,--\,3d\,$^2$F$_{5/2}$ $\lambda$4428.52 and Ne~{\sc ii} M61b 4f\,2[3]$^{\rm o}_{5/2}$\,--\,3d\,$^2$D$_{5/2}$ $\lambda$4428.41. \item [$^f$] Including Ne~{\sc ii} M57a 4f\,1[2]$^{\rm o}_{5/2}$\,--\,3d\,$^4$F$_{3/2}$ $\lambda$4430.90 and Ne~{\sc ii} M55a 4f\,2[4]$^{\rm o}_{9/2}$\,--\,3d\,$^4$F$_{7/2}$ $\lambda$4430.06. Neglecting Ne~{\sc ii} M57a 4f\,1[2]$^{\rm o}_{3/2}$\,--\,3d\,$^4$F$_{3/2}$ $\lambda$4431.11. \item [$^g$] Neglecting Ne~{\sc ii} M61d 4f\,2[2]$^{\rm o}_{3/2}$\,--\,3d\,$^2$D$_{3/2}$ $\lambda$4457.24 and Ne~{\sc ii} M66c 4f\,1[3]$^{\rm o}_{5/2}$\,--\,3d\,$^4$P$_{5/2}$ $\lambda$4457.24. Including Ne~{\sc ii} M66c 4f\,1[3]$^{\rm o}_{7/2}$\,--\,3d\,$^4$P$_{5/2}$ $\lambda$4457.36. \item [$^h$] Neglecting Ne~{\sc ii} M65 4f\,0[3]$^{\rm o}_{5/2}$\,--\,3d\,$^4$P$_{5/2}$ $\lambda$4413.11. Including Ne~{\sc ii} M57c 4f\,1[3]$^{\rm o}_{5/2}$\,--\,3d\,$^4$F$_{3/2}$ $\lambda$4413.11. \end{description} \end{table} \subsubsection{\label{neii_orls:summary} Comments on the Ne~{\sc ii} recombination spectrum} The Ne~{\sc ii} effective recombination coefficients currently used are mainly from Kisielius et al. \cite{kisielius1998}. This calculation was carried out for transitions with $l\,\leq$\,2. Although some effective recombination coefficients for the Ne~{\sc ii} 4f\,--\,3d transitions are available (P.~J. Storey, unpublished), only a few selected Ne~{\sc ii} lines are calculated, and the results are quite preliminary. The best observed Ne~{\sc ii} multiplet of the 3\,--\,3 transitions is M2 (3p\,$^{4}$D$^{\rm o}$\,--\,3s\,$^{4}$P).
For the Ne~{\sc ii} multiplets, M12 (3d\,$^4$D\,--\,3p\,$^4$D$^{\rm o}$), M13 (3d\,$^4$F\,--\,3p\,$^4$D$^{\rm o}$), M20 (3d\,$^2$F\,--\,3p\,$^2$D$^{\rm o}$), M21 (3d\,$^2$D\,--\,3p\,$^2$D$^{\rm o}$), M28 (3d\,$^2$P\,--\,3p\,$^2$S$^{\rm o}$) and M34 (3d\,$^4$P\,--\,3p\,$^4$S$^{\rm o}$), only the strongest fine-structure components are observed, and the weaker components are either blended with other lines or not detected. Measurements of the M1 (3p\,$^{4}$P$^{\rm o}$\,--\,3s\,$^{4}$P), M5 (3p\,$^2$D$^{\rm o}$\,--\,3s\,$^2$P) and M6 (3p\,$^2$S$^{\rm o}$\,--\,3s\,$^2$P) lines have large uncertainties due to line blending. The M9 (3p$^{\prime}$\,$^2$F$^{\rm o}$\,--\,3s$^{\prime}$\,$^2$D) multiplet is the only Ne~{\sc ii} transition of parentage other than $^{3}$P observed in our spectrum. The effective recombination coefficients for this multiplet are available from Kisielius et al. \cite{kisielius1998}, but they are probably unreliable, considering that the calculation was made under the {\it LS}\,coupling assumption. The possibility of using the Ne~{\sc ii} lines to determine electron temperatures is discussed in Section\,\ref{diagnose:neii}. In NGC\,7009, the effects of the fluorescence mechanism on the Ne~{\sc ii} M1 and M2 multiplets are probably not important. Thus those lines could be safe for plasma diagnostics and abundance determinations. \subsection{\label{ciii_orls} The C~{\sc iii} permitted lines} More than 20 lines in NGC\,7009 were identified as C~{\sc iii} permitted transitions (Paper~I); some identifications could be questionable. In this section, we discuss only three multiplets: M1 3p\,$^3$P$^{\rm o}$\,--\,3s\,$^3$S, M16 5g\,$^3$G\,--\,4f\,$^3$F$^{\rm o}$ and M18 5g\,$^1$G\,--\,4f\,$^1$F$^{\rm o}$. The three C~{\sc iii} M1 lines, $\lambda\lambda$4647.42, 4650.25 and 4651.47, are blended with the O~{\sc ii} M1 (3p\,$^{4}$D$^{\rm o}$\,--\,3s\,$^{4}$P) lines $\lambda\lambda$4649.13 and 4650.84, as shown in Fig.\,\ref{4625-4680}.
Techniques used to obtain the total intensity of the C~{\sc iii} M1 multiplet are illustrated in Section\,\ref{oii_orls:v1}. The intensity ratio of the three lines was assumed to be as in the {\it LS}\,coupling, i.e. 1\,:\,3\,:\,5. The total intensity is 0.303, which is accurate to 20 per cent. This intensity agrees with the measurements of LSBC: 0.274 (PA = 45$^{\rm o}$) and 0.438 (PA = 0$^{\rm o}$). The C~{\sc iii} M16 lines $\lambda\lambda$4067.94, 4068.91 and 4070.26 are blended with the [S~{\sc ii}] $\lambda$4068 (3p$^{3}$\,$^{2}$P$^{\rm o}_{3/2}$\,--\,$^{4}$S$^{\rm o}_{3/2}$) and two O~{\sc ii} M10 (3d\,$^{4}$F\,--\,3p\,$^{4}$D$^{\rm o}$) lines $\lambda\lambda$4069.62 and 4069.89, as shown in Fig.\,\ref{4060-4115}. Details of deriving the total intensity of the C~{\sc iii} M16 multiplet are given in Section\,\ref{oii_orls:v10}. The intensity ratio of the three C~{\sc iii} M16 lines was also assumed to be as in the {\it LS}\,coupling, i.e. 1.00\,:\,1.31\,:\,1.71. The total intensity is 0.288, with a large uncertainty ($\sim$40 per cent). The intensity ratio of the C~{\sc iii} M16 $\lambda$4069 and the C~{\sc iii} M1 $\lambda$4650 multiplets is 0.952. The predicted ratio of the two C~{\sc iii} multiplets is 0.922, which is calculated from the radiative and dielectronic recombination coefficients of P\'{e}quignot, Petitjean \& Boisson \cite{ppb1991} and Nussbaumer \& Storey \cite{ns1984}, respectively. The C~{\sc iii} M18 $\lambda$4186.90 (5g\,$^1$G$_{4}$\,--\,4f\,$^1$F$^{\rm o}_{3}$) line is blended with the O~{\sc ii} M36 $\lambda$4185.44 (3d$^{\prime}$\,$^2$G$_{7/2}$\,--\,3p$^{\prime}$\,$^2$F$^{\rm o}_{5/2}$) line, as shown in Fig.\,\ref{4176-4260}. Multi-Gaussian fits yield an intensity of 0.089$\pm$0.018 for the $\lambda$4186.90 line. This intensity agrees with those given by LSBC: 0.0533 (PA = 45$^{\rm o}$) and 0.102 (PA = 0$^{\rm o}$). 
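The {\it LS}-ratio apportioning used above for the blended C~{\sc iii} multiplets amounts to a simple proportional split of the measured total; a minimal sketch in Python (the weights and totals are the values quoted in the text, and the helper function is illustrative, not part of the actual reduction):

```python
# Split a blended multiplet's total intensity among its fine-structure
# components in proportion to assumed LS-coupling relative intensities.
def apportion(total, weights):
    """Return component intensities summing to `total`, proportional to `weights`."""
    norm = sum(weights)
    return [total * w / norm for w in weights]

# C III M16 lambda-lambda 4067.94, 4068.91, 4070.26: assumed LS ratios
# 1.00 : 1.31 : 1.71, total intensity 0.288 on the I(H-beta) = 100 scale
components = apportion(0.288, [1.00, 1.31, 1.71])

# Ratio of the two C III multiplet totals, M16 lambda4069 / M1 lambda4650
ratio_m16_m1 = 0.288 / 0.303
```

By construction the components preserve the measured total, so the apportioning only redistributes the blended flux among fine-structure lines without changing it.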
The intensity ratio of the C~{\sc iii} M18 $\lambda$4187 and the C~{\sc iii} M1 $\lambda$4650 multiplets is 0.293, which agrees with the predicted ratio (0.331) within errors. \subsection{\label{niii_orls} The N~{\sc iii} permitted lines} More than 30 lines were identified as the N~{\sc iii} permitted transitions, including multiplets M1 and M2, which are mainly excited by the Bowen fluorescence mechanism. Transitions from the states with excited parentage (i.e. other than $^1$S) are detected. Most N~{\sc iii} lines suffer from line blending. In this section, we only present intensity measurements and discussion for the M3 3p$^{\prime}$\,$^{4}$D\,--\,3s$^{\prime}$\,$^{4}$P$^{\rm o}$ and M18 5g\,$^{2}$G\,--\,4f\,$^{2}$F$^{\rm o}$ multiplets. Discussion of other multiplets of N~{\sc iii} is given in Appendix\,\ref{appendix:e}. \subsubsection{\label{niii_orls:v3} Multiplet 3, 3p$^{\prime}$ $^4$D -- 3s$^{\prime}$ $^4$P$^{\rm o}$} $\lambda$4510.91 (3p$^{\prime}$~$^4$D$_{5/2}$ -- 3s$^{\prime}$~$^4$P$^{\rm o}_{3/2}$ and 3p$^{\prime}$~$^4$D$_{3/2}$ -- 3s$^{\prime}$~$^4$P$^{\rm o}_{1/2}$) blends with [K~{\sc iv}] 3p$^4$~$^1$S$_{0}$ -- 3p$^4$~$^1$D$_{2}$ $\lambda$4510.92, whose intensity contribution is unknown. Another M3 line $\lambda$4514.86 (3p$^{\prime}$~$^4$D$_{7/2}$ -- 3s$^{\prime}$~$^4$P$^{\rm o}_{5/2}$) is partially resolved from Ne~{\sc ii} M58a 4f~2[4]$^{\rm o}_{9/2}$ -- 3d~$^{2}$F$_{7/2}$ $\lambda$4517.83 (Fig.\,\ref{4505-4555}). Several Ne~{\sc ii} ORLs, M58b $\lambda$4514.88, M64d $\lambda$4516.66 and M58a $\lambda$4517.83, are also blended in the complex, which makes line measurements very difficult. Another line $\lambda$4518.15 (3p$^{\prime}$~$^4$D$_{1/2}$ -- 3s$^{\prime}$~$^4$P$^{\rm o}_{1/2}$) blends with the Ne~{\sc ii} M58a $\lambda$4517.83 line, which probably dominates the total intensity. 
Another two lines $\lambda\lambda$4523.58 (3p$^{\prime}$~$^4$D$_{3/2}$ -- 3s$^{\prime}$~$^4$P$^{\rm o}_{3/2}$) and 4547.30 (3p$^{\prime}$~$^4$D$_{3/2}$ -- 3s$^{\prime}$~$^4$P$^{\rm o}_{5/2}$) decay from the same upper level, thus their intensity ratio depends only on the coupling scheme. The measured line ratio $\lambda$4547.30/$\lambda$4523.58 is 0.119, which is slightly higher than the {\it LS}\,coupling value 0.094. Measurements of the $\lambda$4547.30 line could have a large error due to the weakness of the line (Fig.\,\ref{4505-4555}). Another M3 line $\lambda$4534.58 (3p$^{\prime}$~$^4$D$_{5/2}$ -- 3s$^{\prime}$~$^4$P$^{\rm o}_{5/2}$) blends with O~{\sc iii} M48 5g~G[4]$^{\rm o}_{3,\,4}$ -- 4f~D[3]$_{3}$ $\lambda$4534.31 and Ne~{\sc ii} M55b 4f~2[3]$^{\rm o}_{7/2}$ -- 3d~$^4$F$_{5/2}$ $\lambda$4534.64, as well as another three Ne~{\sc ii} lines, M55b $\lambda$4534.52, M55c $\lambda$4535.37 and M55c $\lambda$4535.57, whose intensity contributions could be negligible. The other line $\lambda$4530.86 (3p$^{\prime}$~$^4$D$_{1/2}$ -- 3s$^{\prime}$~$^4$P$^{\rm o}_{3/2}$) blends with N~{\sc ii} M58b 4f~2[5]$_{4}$ -- 3d~$^1$F$^{\rm o}_{3}$ $\lambda$4530.41, which is probably more than 3 times stronger. \begin{figure} \begin{center} \includegraphics[width=7.8cm,angle=-90]{fig38.pdf} \caption{Spectrum of NGC\,7009 from 4505 to 4555\,{\AA} showing the N~{\sc iii} M3 lines and other emission features. The continuous curve is the sum of Gaussian profile fits. Continuum has been subtracted and the spectrum has been normalized such that H$\beta$ has an integrated flux of 100. Extinction has not been corrected for.} \label{4505-4555} \end{center} \end{figure} \subsubsection{\label{niii_orls:v18} Multiplet 18, 5g $^2$G -- 4f $^2$F$^{\rm o}$} $\lambda$4379.11 (5g~$^2$G$_{9/2}$ -- 4f~$^2$F$^{\rm o}_{7/2}$ and 5g~$^2$G$_{7/2}$ -- 4f~$^2$F$^{\rm o}_{7/2}$) is detected in Fig.\,\ref{4310-4382}. Gaussian profile fitting gives an intensity of 0.367, with an uncertainty of less than 10 per cent.
Measurements for this line given by LSBC are 0.312 (PA = 45$^{\rm o}$) and 0.397 (PA = 0$^{\rm o}$). $\lambda$4379.11 blends with Ne~{\sc ii} M60b 4f~1[4]$^{\rm o}_{9/2}$ -- 3d~$^2$F$_{7/2}$ $\lambda$4379.55, which contributes about 10 per cent to the total intensity, and Ne~{\sc ii} M60b 4f~1[4]$^{\rm o}_{7/2}$ -- 3d~$^2$F$_{7/2}$ $\lambda$4379.40 and O~{\sc iii} M39c 5g~G[5]$^{\rm o}_{4,5}$ -- 4f~F[4]$_{4}$ $\lambda$4379.58, both of which were assumed to be negligible. This line is used to derive the N$^{3+}$/H$^+$ abundance ratio. \subsection{\label{oiii_orls} The O~{\sc iii} permitted lines} In Paper~I, about two dozen O~{\sc iii} permitted transitions from the 3\,--\,3 configuration were identified, including multiplets M2 3p\,$^{3}$D\,--\,3s\,$^{3}$P$^{\rm o}$, M3 3p\,$^{3}$S\,--\,3s\,$^{3}$P$^{\rm o}$, M4 3p\,$^{3}$P\,--\,3s\,$^{3}$P$^{\rm o}$, M12 3d\,$^{3}$P$^{\rm o}$\,--\,3p\,$^{3}$S and M15 3d\,$^{3}$P$^{\rm o}$\,--\,3p\,$^{3}$P, which are mainly excited by the Bowen fluorescence or charge transfer (Liu \& Danziger \citealt{ld1993a}). All the O~{\sc iii} 5g\,--\,4f lines are blended with other emission features. In this section, we only present emission line measurements of the M8 3d\,$^{3}$F$^{\rm o}$\,--\,3p\,$^3$D and M15 3d\,$^{3}$P$^{\rm o}$\,--\,3p\,$^{3}$P multiplets. Measurement results of other O~{\sc iii} multiplets are given in Appendix\,\ref{appendix:f}. Discussion of the O~{\sc iii} fluorescence lines is in Section\,\ref{oiii_orls:fluorescence}. \subsubsection{\label{oiii_orls:v8} Multiplet 8, 3d $^3$F$^{\rm o}$ -- 3p $^3$D} Only $\lambda$3260.85 (3d~$^3$F$^{\rm o}_{3}$ -- 3p~$^3$D$_{2}$) and $\lambda$3265.32 (3d~$^3$F$^{\rm o}_{4}$ -- 3p~$^3$D$_{3}$) are observed (Fig.\,\ref{3250-3305}). Measurements of the two lines could have relatively large errors due to the poor S/N in the far blue of the spectrum.
However, this multiplet cannot be excited by either the Bowen fluorescence or charge transfer, and the two lines are still used to determine the O$^{3+}$/H$^{+}$ abundance ratio. \begin{figure} \begin{center} \includegraphics[width=7.8cm,angle=-90]{fig39.pdf} \caption{Spectrum of NGC\,7009 from 3250 to 3305\,{\AA} showing the O~{\sc iii} M8 lines $\lambda\lambda$3260.85 and 3265.32. The continuous curve is the sum of Gaussian profile fits. Continuum has been subtracted and the spectrum has been normalized such that H$\beta$ has an integrated flux of 100. Extinction has not been corrected for.} \label{3250-3305} \end{center} \end{figure} \subsubsection{\label{oiii_orls:v15} Multiplet 15, 3d $^3$P$^{\rm o}$ -- 3p $^3$P} Fig.\,\ref{3395-3475} shows the O~{\sc iii} M15 lines detected in the spectrum of NGC\,7009. Single-Gaussian profile fitting yields an intensity of 11.575 for the $\lambda$3444.06 line, with an uncertainty of about 3 per cent. The intensity contribution from the blended He~{\sc i} M7 $\lambda$3447.59 (6p\,$^1$P$^{\rm o}_{1}$\,--\,2s\,$^1$S$_{0}$) line is only 2 per cent. Another M15 line $\lambda$3428.62 is blended with the $\lambda$3430.57 line of the same multiplet, which is marginally resolved in Fig.\,\ref{3395-3475}. Two Gaussian profiles with the same FWHM were used to fit the feature, and the resultant intensities of the $\lambda$3428.62 and the $\lambda$3430.57 lines are 1.409 and 0.319, respectively, both with uncertainties of less than 10 per cent. Another M15 line $\lambda$3415.26 is partially resolved from the Ne~{\sc ii} M21 $\lambda$3416.91 (3d\,$^2$D$_{5/2}$\,--\,3p\,$^2$D$^{\rm o}_{5/2}$) line; the other two M15 lines, $\lambda\lambda$3405.71 and 3408.12 are also detected in the spectrum. The three O~{\sc iv} M2 3d\,$^{2}$D\,--\,3p\,$^{2}$P$^{\rm o}$ lines, $\lambda\lambda$3403.52, 3411.69 and 3413.64, are blended among the above three O~{\sc iii} M15 lines, as is shown in Fig.\,\ref{3395-3475}. 
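The shared-FWHM two-Gaussian fit described above for the marginally resolved $\lambda$3428.62\,+\,$\lambda$3430.57 feature can be sketched with synthetic data (a minimal illustration with {\tt scipy}; the amplitudes, centres and width below are made up for the example, not the measured values):

```python
import numpy as np
from scipy.optimize import curve_fit

def two_gauss_shared_fwhm(x, a1, mu1, a2, mu2, sigma):
    """Two Gaussian components constrained to a common width (i.e. FWHM)."""
    g1 = a1 * np.exp(-0.5 * ((x - mu1) / sigma) ** 2)
    g2 = a2 * np.exp(-0.5 * ((x - mu2) / sigma) ** 2)
    return g1 + g2

# Synthetic, continuum-subtracted spectrum of the blend (illustrative values)
rng = np.random.default_rng(0)
x = np.linspace(3424.0, 3435.0, 400)
truth = (1.4, 3428.62, 0.32, 3430.57, 0.45)
y = two_gauss_shared_fwhm(x, *truth) + rng.normal(0.0, 0.01, x.size)

# Fit with a single shared width parameter
popt, pcov = curve_fit(two_gauss_shared_fwhm, x, y,
                       p0=(1.0, 3428.5, 0.3, 3430.5, 0.5))

# Integrated fluxes of the components (Gaussian area = amplitude * sigma * sqrt(2*pi))
flux1 = popt[0] * popt[4] * np.sqrt(2.0 * np.pi)
flux2 = popt[2] * popt[4] * np.sqrt(2.0 * np.pi)
```

Tying the two components to a single width stabilizes the fit of a marginally resolved blend and reflects the fact that lines from the same ion share the same instrumental and thermal broadening.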
Multi-Gaussian profile fitting was carried out for the complex. The intensity of the $\lambda$3415.26 line is 0.415$\pm$0.021. The intensities of the $\lambda\lambda$3405.71 and 3408.12 lines are 0.210 and 0.127, respectively, both with uncertainties of 10 to 15 per cent. Analysis of the measured intensities of the O~{\sc iv} M2 lines is presented in Section\,\ref{oiv_orls}. The observed intensity ratio $\lambda$3428.62/$\lambda$3444.06 is 0.122, lower than the theoretical prediction (0.336) given by Saraph \& Seaton \cite{ss1980}, who assumed the relative intensities within the O~{\sc iii} M15 multiplet to be as in {\it LS}\,coupling, but agrees with the intermediate-coupling calculations of Kastner et al. \cite{kastner1983}. The observed intensity ratio of the three O~{\sc iii} M15 lines $\lambda\lambda$3405.71, 3415.26 and 3430.57, which share a common upper level, is 1\,:\,2.102\,:\,1.615. This ratio differs from that in the pure {\it LS}\,coupling, i.e. 1\,:\,0.75\,:\,1.25. Discussion of the intensity ratios of the O~{\sc iii} M15 lines is presented in Section\,\ref{oiii_orls:fluorescence}. \begin{figure} \begin{center} \includegraphics[width=7.5cm,angle=-90]{fig40.pdf} \caption{Spectrum of NGC\,7009 from 3395 to 3475\,{\AA} showing the O~{\sc iii} M15 lines. The profile of the O~{\sc iii} M15 $\lambda$3444 line shows that there is a weak component blended on its right side, which could be the He~{\sc i} M7 6p\,$^1$P$^{\rm o}_{1}$\,--\,2s\,$^1$S$_{0}$ $\lambda$3447.59 line. Several weak He~{\sc i} lines from the $n$d\,--\,2p transition array are also detected. The solid continuous curve is the sum of Gaussian profile fits. The dashed curve is the sum of the Gaussian profiles of the three O~{\sc iv} M2 3d\,$^{2}$D\,--\,3p\,$^{2}$P$^{\rm o}$ lines $\lambda\lambda$3403.52, 3411.69 and 3413.64. Continuum has been subtracted and the spectrum has been normalized such that H$\beta$ has an integrated flux of 100. Extinction has not been corrected for.
The inset shows the profile of the O~{\sc iii} M15 $\lambda$3444 line.} \label{3395-3475} \end{center} \end{figure} \subsubsection{\label{oiii_orls:5g-4f} 5g -- 4f transitions} The O~{\sc iii} permitted transitions of 5g\,--\,4f configuration are in the wavelength range 4300--4600\,{\AA}, where numerous ORLs of the singly-ionized ions of C, N, O and Ne are located. The typical intensities of the 5g\,--\,4f lines of O~{\sc iii} are $\sim$10$^{-4}$--10$^{-5}$ of H$\beta$, and accurate measurements of these lines are difficult due to line blending. The strongest O~{\sc iii} line of the 5g\,--\,4f configuration, $\lambda$4434.60 (M46b 5g\,H[6]$^{\rm o}_{6}$\,--\,4f\,G[5]$_{5}$), is marginally detected in the spectrum of NGC\,7009, as is shown in Fig.\,\ref{4380-4445}. Multi-Gaussian profile fitting yields an intensity of 0.024$\pm$0.003, from which we derived an O$^{3+}$/H$^{+}$ abundance ratio that is higher than that deduced from the O~{\sc iii} M8 $\lambda$3265 (3d\,$^3$F$^{\rm o}$\,--\,3p\,$^3$D) multiplet by a factor of 20. Given the relatively more accurate measurements of the O~{\sc iii} M8 lines (see Section\,\ref{oiii_orls:v8}), this much higher O$^{3+}$/H$^{+}$ abundance ratio derived from the $\lambda$4434.60 line is questionable. The effective recombination coefficients calculated by Kisielius \& Storey \cite{ks1999} for the 5g\,--\,4f transitions of O~{\sc iii} are utilized to estimate the intensity contributions where necessary. \subsubsection{\label{oiii_orls:fluorescence} Fluorescence of the O~{\sc iii} permitted lines} In Table\,\ref{bowen_ratios} we compare the observed and predicted intensity ratios for several pairs of O~{\sc iii} Bowen fluorescence lines originating from the same upper level. The observed intensity ratios (column 2 of Table\,\ref{bowen_ratios}) are based on the measurements in NGC\,7009 described in Section\,\ref{oiii_orls}.
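The ratio uncertainties quoted here follow standard error propagation of the individual line-flux errors. A minimal sketch (the fluxes are arbitrary; only the $\sim$20\,--\,30 per cent fractional errors of the weak Bowen lines are taken from the text):

```python
import math

def ratio_with_error(f1, e1, f2, e2):
    """Ratio f1/f2 with its uncertainty from standard error propagation,
    assuming independent errors: (dR/R)^2 = (e1/f1)^2 + (e2/f2)^2."""
    r = f1 / f2
    dr = r * math.hypot(e1 / f1, e2 / f2)
    return r, dr

# A weak-line ratio where both fluxes carry ~20-30 per cent errors,
# as for lambda3774/lambda3757 (the fluxes here are illustrative)
r, dr = ratio_with_error(0.54, 0.54 * 0.32, 1.00, 1.00 * 0.20)
```

For fractional errors of 32 and 20 per cent the propagated error is close to 40 per cent of the ratio, which is why the quoted uncertainties on $\lambda$3791/$\lambda$3755 and $\lambda$3774/$\lambda$3757 are comparable to the ratio values themselves.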
The errors are estimated from the measurement errors, including the Gaussian profile fits and the noise level of the local continuum. The profiles of the strong O~{\sc iii} Bowen lines deviate slightly from exact Gaussians due to the charge-transfer effect; nevertheless, we used Gaussian profiles to fit these emission lines. The error from the Gaussian profile fit is 5\,--\,10 per cent for the strong O~{\sc iii} Bowen lines. These O~{\sc iii} Bowen lines are close to the blue end of the spectrum, and the uncertainties due to the poor S/N in this wavelength region are also taken into account. The errors are significant for the two ratios $\lambda$3791/$\lambda$3755 and $\lambda$3774/$\lambda$3757, because the flux errors are large for the three relatively weak lines $\lambda\lambda$3791.28 (about 21 per cent), 3757 (about 20 per cent) and 3774 (about 32 per cent), and thus the resulting uncertainties estimated from the error propagation formula are comparable to the ratio values. The observations of Liu \& Danziger \cite{ld1993a} for the same object are also presented for comparison. The radiative transition probabilities of O$^{2+}$ have been calculated since the late 1960s (Nussbaumer \citealt{nussbaumer1969}), and later by Saraph \& Seaton \cite{ss1980} and Luo et al. \cite{luo1989}. In these approaches {\it LS}\,coupling was assumed. Calculations of the O$^{2+}$ transition probabilities in the intermediate-coupling scheme were presented by Kastner et al. \cite{kastner1983}, and Kastner \& Bhatia \cite{kb1990} gave some improved values for cascading from the 2p3d~$^3$P$^{\rm o}_{2}$ level. Fischer \cite{fischer1994} carried out a non-relativistic configuration interaction calculation with relativistic corrections and showed a clear improvement over the previous work. Simultaneously, a fully relativistic configuration interaction calculation by Tong et al. \cite{tong1994} gave predicted O~{\sc iii} Bowen line ratios that were a further improvement, especially for the $\lambda$3133/$\lambda$3444 ratio.
Their results are in agreement with observed ratios within the accuracy of observations. A more recent variational Breit-Pauli calculation was done by Tachiev \& Fischer \cite{tff2001} for the carbon-like sequence. The current measurements agree better with the intermediate-coupling values. \begin{table*} \begin{minipage}{110mm} \caption{Comparison of the observed and predicted Bowen fluorescence line ratios.} \label{bowen_ratios} \centering \begin{tabular}{llllllll} \hline Line Ratio & Current obs. & LD93$^a$ & SS80$^b$ & KBB83$^c$ & FF94$^d$ & TZLL94$^e$ & TFF01$^f$\\ \hline $\lambda$3133/$\lambda$3444 & 3.261$\pm$0.071 & 3.140$\pm$0.440 & 3.610 & 4.450 & 3.170 & 3.290 & 3.342\\ $\lambda$3299/$\lambda$3341 & 0.252$\pm$0.117 & 0.285$\pm$0.022 & 0.201 & 0.228 & 0.264 & 0.260 & 0.268\\ $\lambda$3312/$\lambda$3341 & 0.698$\pm$0.086 & 0.651$\pm$0.048 & 0.606 & 0.656 & 0.728 & 0.717 & 0.736\\ $\lambda$3428/$\lambda$3444 & 0.122$\pm$0.027 & 0.204$\pm$0.023 & 0.336 & 0.164 & 0.150 & 0.149 & 0.154\\ $\lambda$3791/$\lambda$3755 & 0.271$\pm$0.213 & 0.232$\pm$0.037 & 0.330 & 0.309 & 0.296 & 0.301 & 0.299\\ $\lambda$3774/$\lambda$3757 & 0.539$\pm$0.379 & 0.569$\pm$0.127 & 0.750 & 0.715 & 0.708 & 0.701 & 0.704\\ \hline \end{tabular} \begin{description} \item [$^a$] Liu \& Danziger \cite{ld1993a}; \item [$^b$] Saraph \& Seaton \cite{ss1980}; \item [$^c$] Kastner et al. \cite{kastner1983}; \item [$^d$] Fischer \cite{fischer1994}; \item [$^e$] Tong et al. \cite{tong1994}; \item [$^f$] Tachiev \& Fischer \cite{tff2001}. \end{description} \end{minipage} \end{table*} \subsection{\label{oiv_orls} The O~{\sc iv} permitted lines} Measurements of the O~{\sc iv} lines, $\lambda\lambda$3403.52, 3411.69 and 3413.64 of the M2 3d\,$^2$D\,--\,3p\,$^2$P$^{\rm o}$ multiplet, have large uncertainties due to their weakness and line blending (Fig.\,\ref{3395-3475}). The intensity of the $\lambda$3411.69 line derived from multi-Gaussian profile fitting is 0.057$\pm$0.014.
The intensity ratio of the other two O~{\sc iv} M2 lines $\lambda\lambda$3403.52 and 3413.64, which share the same upper level $^{2}$D$_{3/2}$, is 1.318. This intensity ratio is significantly lower than the value in the {\it LS}\,coupling assumption, i.e. 5.0. So far the only effective recombination coefficients available for the O~{\sc iv} lines are the radiative recombination coefficients given by P\'{e}quignot, Petitjean \& Boisson \cite{ppb1991} and the dielectronic recombination coefficients given by Nussbaumer \& Storey \cite{ns1984}. Both calculations were carried out in the {\it LS}\,coupling scheme. The O~{\sc iv} M2 lines are the only lines detected for this ion in the spectrum of NGC\,7009, but are not used in abundance determinations. \section{\label{abundances} Ionic and elemental abundances} In this section, we present the ionic abundances of helium and heavy elements derived from ORLs. For heavy element ions, the electron temperatures deduced from the N~{\sc ii} and O~{\sc ii} recombination line ratios are assumed (Section\,\ref{diagnose:niioii}). For He$^{+}$/H$^{+}$, a value of 5100~K as deduced from the He~{\sc i} $\lambda$7281/$\lambda$6678 ratio is adopted. For He$^{++}$/H$^{+}$, an electron temperature of 10\,000~K, roughly the value deduced from the 5694\,{\AA} discontinuity of the He~{\sc ii} recombination continuum spectrum, is assumed. The average electron temperature derived from the various optical CEL ratios is $\sim$10\,000~K (Paper~I), and is used to determine the forbidden line abundances. Given that the electron densities derived from the N~{\sc ii} and O~{\sc ii} ORLs are close to those from the optical CELs, and that emissivities of the heavy element ORLs are only marginally sensitive to electron density, we have assumed a constant density of 4300~cm$^{-3}$ throughout the abundance determinations. 
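The ORL ionic abundances derived below follow the standard relation $N(\mathrm{X}^{i+})/N(\mathrm{H}^{+}) = [I(\lambda)/I(\mathrm{H}\beta)]\,(\lambda/4861)\,[\alpha_{\rm eff}(\mathrm{H}\beta)/\alpha_{\rm eff}(\lambda)]$. A minimal sketch (the effective recombination coefficients below are placeholders, not the adopted atomic data):

```python
def orl_ionic_abundance(i_line, wav_line, alpha_line, alpha_hbeta,
                        i_hbeta=100.0, wav_hbeta=4861.33):
    """Ionic abundance X^{i+}/H+ from an ORL measured relative to H-beta:
    N(X)/N(H+) = I(line)/I(Hbeta) * (wav_line/wav_Hbeta) * (alpha_Hbeta/alpha_line),
    with intensities on the I(H-beta) = 100 scale used in this paper."""
    return (i_line / i_hbeta) * (wav_line / wav_hbeta) * (alpha_hbeta / alpha_line)

# Placeholder effective recombination coefficients (cm^3 s^-1); the real
# analysis uses the adopted atomic data at the temperatures given above.
abund = orl_ionic_abundance(i_line=0.5, wav_line=4649.13,
                            alpha_line=1.0e-13, alpha_hbeta=3.0e-14)
```

The wavelength factor converts the energy-based intensity ratio into a photon ratio, so the result depends only weakly on the adopted density, as noted above.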
\subsection{\label{orl_abundances} Ionic abundances from ORLs} Ionic abundances of He, C, N, O, Ne, Mg and Si are derived in this section, using the extinction-corrected fluxes of ORLs given in Paper~I. A critical analysis of the C~{\sc ii}, N~{\sc ii}, O~{\sc ii} and Ne~{\sc ii} permitted lines detected in the spectrum of NGC\,7009 is presented in Section\,3, and is used to guide the determinations of ORL abundances of heavy element ions. Again, the new effective recombination coefficients for the N~{\sc ii} and O~{\sc ii} recombination spectra are utilized. \subsubsection{\label{orl_abundances:part1} He$^+$/H$^+$ and He$^{2+}$/H$^+$} Numerous calculations have been dedicated to the recombination spectrum of He~{\sc i} (e.g. Mathis \citealt{mathis57}; Burgess \& Seaton \citealt{bs60a}, \citealt{bs60b}; Pottasch \citealt{pottasch62}; Robbins \citealt{robbins68}, \citealt{robbins70}; Robbins \& Robinson \citealt{rr71}; Brocklehurst \citealt{b72}; Almog \& Netzer \citealt{an89}; Smits \citealt{smits91}, \citealt{smits96}; Benjamin, Skillman \& Smits \citealt{bss99}, \citealt{bss02}; Porter et al. \citealt{porter05}; Bauman et al. \citealt{bauman05}). The recombination-cascade spectrum of He~{\sc i}, with no collisions taken into account, was first computed by Brocklehurst \cite{b72}. A better treatment of the emissivities of He~{\sc i} in nebular environments is that of Smits \cite{smits96}, who used more accurate calculations of radiative transition rates and photoionization cross-sections, and corrected an error in Brocklehurst \cite{b72}. Benjamin, Skillman \& Smits \cite{bss99} combined the detailed He~{\sc i} recombination model of Smits \cite{smits96} with the collisional transitions of Sawey \& Berrington \cite{sb93} and calculated more accurate He~{\sc i} recombination line emissivities that include the effects of collisional excitation from both the 2s\,$^{3}$S and 2s\,$^{1}$S levels.
Benjamin, Skillman \& Smits \cite{bss02} studied the effects of the optical depth of the 2s\,$^{3}$S metastable level on the He~{\sc i} line intensities. The availability of these improved atomic data has made it possible to obtain secure measurements of the ionic and elemental abundances of helium, given high-quality spectroscopic data. Zhang et al. \cite{zhang05a} developed nebular plasma diagnostics based on the He~{\sc i} ORLs, using the theoretical emissivities of He~{\sc i} lines provided by Benjamin, Skillman \& Smits \cite{bss99}. In Paper~I, we derived electron temperatures from He~{\sc i} line ratios using the method of Zhang et al. \cite{zhang05a}. For consistency, we derive ionic and elemental abundances of helium in this Section using the atomic data of Benjamin, Skillman \& Smits \cite{bss99} and assuming the temperature yielded by the He~{\sc i} lines ($\sim$5000~K). Ionic and elemental abundances of helium relative to hydrogen derived from the He~{\sc i} and He~{\sc ii} ORLs are presented in Table\,\ref{abundances:i}. The adopted He$^+$/H$^+$ ratio (0.099) is an average of the values derived from the He~{\sc i} $\lambda\lambda$4471, 5876 and 6678 lines with weights of 1\,:\,3\,:\,1, roughly proportional to their intrinsic intensities (Benjamin, Skillman \& Smits \citealt{bss99}). The effective recombination coefficients were adopted from Benjamin, Skillman \& Smits \cite{bss99}. Case~A recombination was assumed for the triplet lines and Case~B for the singlets. An electron temperature of 5000~K, as derived from the He~{\sc i} $\lambda$7281/$\lambda$6678 ratio, and a density of 10\,000~cm$^{-3}$ were assumed. Under these physical conditions, the effective recombination coefficient for the $\lambda$4471 line given by Benjamin, Skillman \& Smits \cite{bss99} is 6 per cent higher than that of Brocklehurst \cite{b72}. This difference increases to 7 per cent when the temperature rises to 10\,000~K.
The differences between the calculations of Benjamin, Skillman \& Smits \cite{bss99} and Brocklehurst \cite{b72} for the other two He~{\sc i} lines are less than 2 per cent at 5100~K. The $\lambda$4471 line suffers most from line blending among the three He~{\sc i} lines: it is blended with two O~{\sc ii} M86c lines $\lambda\lambda$4469.46 and 4469.48, three O~{\sc iii} lines, M49c $\lambda\lambda$4471.02, 4475.17 and M45b $\lambda$4476.11, and one Ne~{\sc ii} line, M61b $\lambda$4468.91. Both wings of the $\lambda$4471 line are also affected by weak features (Fig.\,\ref{4444-4504}). All this blending introduces an uncertainty of $\sim$10 per cent into the intensity of the $\lambda$4471 line. The $\lambda$4471 line intensity adopted in the current analysis has been corrected for the contributions from the blended lines listed above, using the effective recombination coefficients available for the O~{\sc ii} and O~{\sc iii} lines. The intensity of the He~{\sc i} $\lambda$6678 line has also been corrected for the contribution from the blended He~{\sc ii} $\lambda$6683.20 (13h\,$^{2}$H$^{\rm o}$\,--\,5g\,$^{2}$G) line, which introduces an uncertainty of about 3 per cent. The measurement uncertainty of the He~{\sc i} $\lambda$5876 line is less than 2 per cent. The He$^+$/H$^+$ abundances listed in Table\,\ref{abundances:i} agree with each other reasonably well, except for those yielded by the triplet line $\lambda$7065 (3s\,$^{3}$S\,--\,2p\,$^{3}$P$^{\rm o}$) and the singlet line $\lambda$5016 (3p\,$^{1}$P$^{\rm o}$\,--\,2s\,$^{1}$S). The former is more than twice the average abundance, while the latter is nearly half of it. The abnormally high value is probably due to the $\lambda$7065 line being enhanced by self-absorption from the metastable 2s\,$^{3}$S level.
By comparing the observed intensities of the He~{\sc i} singlet lines of the $n$s\,$^{1}$S\,--\,2p\,$^{1}$P$^{\rm o}$, $n$p\,$^{1}$P$^{\rm o}$\,--\,2s\,$^{1}$S and $n$d\,$^{1}$D\,--\,2p\,$^{1}$P$^{\rm o}$ series, relative to the $\lambda$4922 (4d\,$^{1}$D$_{2}$\,--\,2p\,$^{1}$P$^{\rm o}_{1}$) line, with the theoretical predictions (cf. Section\,4.5 in Paper~I), we concluded that the singlet transitions of He~{\sc i} in NGC\,7009 are close to the Case~B assumption. Departure from Case~B of the He~{\sc i} singlet lines as a result of the He~{\sc i} Lyman photons being destroyed by photoionization of neutral hydrogen and/or absorption by dust grains (Liu et al. \citealt{liu2001b}) is unlikely to be significant. Thus the low ionic abundance yielded by the $\lambda$5016 line in Table\,\ref{abundances:i} is mainly due to self-absorption from the metastable 2s\,$^{1}$S level. The $\lambda$5016 line is blended with the N~{\sc ii} M19 3d\,$^3$F$^{\rm o}_{2}$\,--\,3p\,$^3$D$_{2}$ $\lambda$5016.39 line, whose contribution is negligible ($<$\,1.0 per cent). The He$^{2+}$/H$^+$ abundance ratios were derived from two He~{\sc ii} lines, $\lambda$3203 (5f\,$^{2}$F$^{\rm o}$\,--\,3d\,$^{2}$D) and $\lambda$4686 (4f\,$^{2}$F$^{\rm o}$\,--\,3d\,$^{2}$D). The effective recombination coefficients of the two lines were adopted from the hydrogenic calculation of Storey \& Hummer \cite{sh1995}. Although the electron temperature ($\sim$11\,000~K) derived from the discontinuity at 5694\,{\AA} of the He~{\sc ii} recombination continuum is of large uncertainty due to the weakness of the jump, we assumed an electron temperature of 10\,000~K when deriving the He$^{2+}$/H$^{+}$ abundance ratio. The total elemental abundance of helium relative to hydrogen is 0.112, calculated from He/H = He$^+$/H$^+$ + He$^{2+}$/H$^+$. This agrees well with the value of 0.109 given by LSBC.
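Explicitly, with the adopted values from Table\,\ref{abundances:i},
\begin{equation}
{\rm He/H} = \frac{{\rm He}^{+}}{{\rm H}^{+}} + \frac{{\rm He}^{2+}}{{\rm H}^{+}} = 0.099 + 0.013 = 0.112.
\end{equation}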
Several He~{\sc i} recombination line series have been observed in our spectrum, and relative intensities of these lines are presented in Table\,\ref{hei_lines}. Also presented in the table are the theoretical predictions given by Benjamin, Skillman \& Smits \cite{bss99}, Brocklehurst \cite{b72}, and Smits \cite{smits96}. Case~A was assumed for the triplet lines and Case~B for the singlets. The observed intensities of the $n$d\,$^{3}$D\,--\,2p\,$^{3}$P$^{\rm o}$ and $n$d\,$^{1}$D\,--\,2p\,$^{1}$P$^{\rm o}$ series of He~{\sc i}, relative to the $\lambda$4471 line, agree well with those predicted by recombination theory. The obvious weakness of the $n$p\,$^{1}$P$^{\rm o}$\,--\,2s\,$^{1}$S series, compared with the theoretical intensities, is caused by self-absorption from the metastable 2s\,$^{1}$S level. Such self-absorption should at the same time lead to an enhancement of the $n$s\,$^{1}$S\,--\,2p\,$^{1}$P$^{\rm o}$ series. However, as Table\,\ref{hei_lines} shows, we observe the opposite: the $\lambda$7281 line is weaker than the recent prediction. The $\lambda$3889 (3p\,$^{3}$P$^{\rm o}$\,--\,2s\,$^{3}$S) line is affected by self-absorption. Enhancement of the $n$s\,$^{3}$S\,--\,2p\,$^{3}$P$^{\rm o}$ series, in particular the $\lambda$7065 line, is clearly observed. Similar patterns in the relative intensities of the He~{\sc i} lines are also observed in NGC\,6153 (Liu et al. \citealt{liu2000}), M\,1-42 and M\,2-36 (Liu et al. \citealt{liu2001b}).
\begin{table} \begin{minipage}{65mm} \centering \caption{Recombination line helium abundances.} \label{abundances:i} \begin{tabular}{crl} \hline He$^{i+}$/H$^{+}$ & Line & Abundance\\ & (\AA) & \\ \hline Triplet lines & & \\ He$^{+}$/H$^{+}$ & He~{\sc i} $\lambda$3187.74 & 0.094\\ He$^{+}$/H$^{+}$ & He~{\sc i} $\lambda$3888.64 & 0.089\\ He$^{+}$/H$^{+}$ & He~{\sc i} $\lambda$4026.20 & 0.103\\ He$^{+}$/H$^{+}$ & He~{\sc i} $\lambda$4471.50 & 0.098\\ He$^{+}$/H$^{+}$ & He~{\sc i} $\lambda$5875.60 & 0.103\\ He$^{+}$/H$^{+}$ & He~{\sc i} $\lambda$7065.71 & 0.267\\ \\ Singlet lines & & \\ He$^{+}$/H$^{+}$ & He~{\sc i} $\lambda$4921.93 & 0.099\\ He$^{+}$/H$^{+}$ & He~{\sc i} $\lambda$5015.68 & 0.065\\ He$^{+}$/H$^{+}$ & He~{\sc i} $\lambda$6678.15 & 0.095\\ He$^{+}$/H$^{+}$ & He~{\sc i} $\lambda$7281.35 & 0.090\\ \\ He$^{+}$/H$^{+}$ & Mean & \textbf{0.099}\\ \\ He$^{2+}$/H$^{+}$ & He~{\sc ii} $\lambda$4685.68 & 0.013\\ He$^{2+}$/H$^{+}$ & He~{\sc ii} $\lambda$3203.17 & 0.012\\ \\ He/H & & \textbf{0.112}\\ \hline \end{tabular} \end{minipage} \end{table} \begin{table} \begin{minipage}{68mm} \centering \caption{The He~{\sc i} lines observed in NGC\,7009. Intensities are normalized to a scale where $I$(He~{\sc i}~$\lambda$4471) = 1.0. The theoretical predictions of Benjamin, Skillman \& Smits (\citealt{bss99}, \textbf{BSS99}), Brocklehurst (\citealt{b72}, \textbf{B72}) and Smits (\citealt{smits96}, \textbf{S96}), at $T_\mathrm{e}$ = 5000~K and $N_\mathrm{e}$ = 10$^{4}$~cm$^{-3}$, are presented for purpose of comparison. 
The Case~A recombination is assumed for the triplets and Case~B for the singlets.} \label{hei_lines} \begin{tabular}{lrlrlr} \hline $\lambda_{\rm lab}$ & $n$ & $I_{\rm obs}$ & $I_{\rm pred}$ & $I_{\rm pred}$ & $I_{\rm pred}$\\ (\AA) & & & BSS99 & B72 & S96 \\ \hline \multicolumn{6}{c}{$n$s\,$^{1}$S\,--\,2p\,$^{1}$P$^{\rm o}$ series}\\ 3935.91 & 8 & 0.002 & & 0.0024 & \\ 4023.99 & 7 & 0.005 & & 0.0037 & \\ 4437.55 & 5 & 0.014 & 0.013 & 0.0124 & \\ 5047.74 & 4 & 0.033 & 0.031 & 0.030 & \\ 7281.35 & 3 & 0.106 & 0.118 & 0.108 & 0.113\\ \\ \multicolumn{6}{c}{$n$p\,$^{1}$P$^{\rm o}$\,--\,2s\,$^{1}$S series}\\ 3354.55 & 7 & 0.036 & & 0.034 & \\ 3447.59 & 6 & 0.042 & & 0.054 & \\ 3613.64 & 5 & 0.060 & 0.094 & 0.097 & \\ 5015.68$^a$ & 3 & 0.312 & 0.498 & 0.512 & 0.486\\ \\ \multicolumn{6}{c}{$n$d\,$^{1}$D\,--\,2p\,$^{1}$P$^{\rm o}$ series}\\ 3926.53 & 8 & 0.029 & & 0.027 & \\ 4009.26 & 7 & 0.049 & & 0.041 & \\ 4143.76 & 6 & 0.059 & & 0.068 & \\ 4387.93 & 5 & 0.125 & 0.120 & 0.124 & 0.120\\ 4921.93 & 4 & 0.245 & 0.269 & 0.275 & 0.269\\ 6678.15$^b$ & 3 & 0.770 & 0.849 & 0.866 & 0.847\\ \\ \multicolumn{6}{c}{$n$s\,$^{3}$S\,--\,2p\,$^{3}$P$^{\rm o}$ series}\\ 3599.32 & 9 & 0.005 & & 0.0033 & \\ 3732.88 & 7 & 0.007 & & 0.0076 & \\ 4120.99 & 5 & 0.054 & 0.028 & 0.0260 & \\ 4713.20 & 4 & 0.123 & 0.076 & 0.0649 & 0.075\\ 7065.71 & 3 & 0.912 & 0.398 & 0.243 & 0.356\\ \\ \multicolumn{6}{c}{$n$p\,$^{3}$P$^{\rm o}$\,--\,2s\,$^{3}$S series}\\ 3187.74 & 4 & 0.742 & 0.748 & 0.747 & 0.748\\ 3888.64$^c$ & 3 & 1.714 & 1.891 & 1.895 & 1.865\\ \\ \multicolumn{6}{c}{$n$d\,$^{3}$D\,--\,2p\,$^{3}$P$^{\rm o}$ series}\\ 3456.86 & 19 & 0.006 & & 0.0077 & \\ 3461.01 & 18 & 0.008 & & 0.0089 & \\ 3465.94 & 17 & 0.009 & & 0.0105 & \\ 3471.81 & 16 & 0.011 & & 0.0126 & \\ 3478.97 & 15 & 0.018 & & 0.0153 & \\ 3487.72 & 14 & 0.021 & & 0.0187 & \\ 3498.64 & 13 & 0.026 & & 0.0234 & \\ 3512.51 & 12 & 0.034 & & 0.0297 & \\ 3530.49 & 11 & 0.040 & & 0.0385 & \\ 3554.41 & 10 & 0.055 & & 0.0513 & \\ 3587.27 & 9 & 
0.073 & & 0.0707 & \\ 3634.23 & 8 & 0.112 & & 0.101 & \\ 3705.00 & 7 & 0.187 & & 0.154 & \\ 3819.60 & 6 & 0.275 & & 0.251 & \\ 4026.20 & 5 & 0.504 & 0.452 & 0.459 & 0.451\\ 4471.50 & 4 & 1.000 & 1.000 & 1.000 & 1.000\\ 5875.60 & 3 & 3.018 & 2.916 & 3.010 & 2.952\\ \hline \end{tabular} \begin{description} \item [$^a$] Slightly overestimated due to the saturated [O~{\sc iii}] $\lambda$5007 line. \item [$^b$] Corrected for the contribution from the He~{\sc ii} $\lambda$6683.20 (13h\,$^{2}$H$^{\rm o}$\,--\,5g\,$^{2}$G) line. \item [$^c$] Corrected for the contribution from the H~{\sc i} H8 $\lambda$3889 line. \end{description} \end{minipage} \end{table} \subsubsection{\label{orl_abundances:part2} C$^{2+}$/H$^+$ abundances from ORLs} The C$^{2+}$/H$^+$ abundance ratios derived from the 3\,--\,3 and 4f\,--\,3d transitions as well as from the $n$g\,--\,4f transition array are presented in Table\,\ref{abundances:ii}. The effective recombination coefficients of Davey, Storey \& Kisielius \cite{davey2000} were used. Their calculation was carried out from 500 to 20\,000~K. As described earlier, an electron temperature of 1000~K, deduced from the N~{\sc ii} and O~{\sc ii} ORLs and probably prevalent in the postulated ``cold'' component where the ORLs of heavy elements arise (Liu et al. \citealt{liu2000}), was assumed in the calculations. Transitions between doublet states were assumed to be in Case~B, given the C~{\sc ii} ground term $^2$P$^{\rm o}$. For those doublet transitions whose Case~B effective recombination coefficients are not available in Davey, Storey \& Kisielius \cite{davey2000}, the more recent calculations of Bastin \cite{bastin2006} were adopted. Bastin \cite{bastin2006} calculated the effective recombination coefficients for doublet transitions in both Case~A and Case~B from 5000 to 50\,000~K.
Accurately extrapolating the effective recombination coefficients of Bastin \cite{bastin2006} to 1000~K is difficult because the recombination coefficients are not exactly a linear function of electron temperature. The H~{\sc i} Balmer jump temperature of 6500~K and a density of 4300~cm$^{-3}$ were assumed when we derived the C$^{2+}$/H$^+$ abundance using the coefficients of Bastin \cite{bastin2006}. The C$^{2+}$/H$^+$ abundance ratio derived from the C~{\sc ii} $\lambda$4267 line is 5.507$\times$10$^{-4}$. In the calculation of Davey, Storey \& Kisielius \cite{davey2000}, the Case~A effective recombination coefficient for the C~{\sc ii} $\lambda$4267 line differs from Case~B by only 0.8 per cent. If a temperature of 10\,000~K is assumed, the derived C$^{2+}$/H$^+$ ratio increases to 8.432$\times$10$^{-4}$, which then agrees with the abundance given by LSBC. At 10\,000~K, the effective recombination coefficient for the C~{\sc ii} $\lambda$4267 line given by Davey, Storey \& Kisielius \cite{davey2000} differs from that of Bastin \cite{bastin2006} by only 1.4 per cent in Case~B, and 1.8 per cent in Case~A. Thus we expect that the C$^{2+}$/H$^+$ ratio derived from the coefficients of Bastin \cite{bastin2006} should agree with that of LSBC. In Table\,\ref{abundances:ii}, the C$^{2+}$/H$^+$ abundance ratios derived from the three $n$g\,$^2$G\,--\,4f\,$^2$F$^{\rm o}$ lines (M17.02 $\lambda$9903, M17.04 $\lambda$6462, and M17.06 $\lambda$5342) and the M16.04 6f\,$^2$F$^{\rm o}$\,--\,4d\,$^2$D $\lambda$6151 line are based on the Case~B effective recombination coefficients of Bastin \cite{bastin2006}. The abundance ratios all agree with those given by LSBC.
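Since the derived abundance scales as $\alpha_{\rm eff}({\rm H}\beta)/\alpha_{\rm eff}(\lambda4267)$, the temperature sensitivity of the $\lambda$4267 result quoted above can be expressed compactly:
\begin{equation}
\frac{[N({\rm C}^{2+})/N({\rm H}^{+})]_{10\,000\,{\rm K}}}{[N({\rm C}^{2+})/N({\rm H}^{+})]_{1000\,{\rm K}}} = \frac{8.432}{5.507} \simeq 1.53,
\end{equation}
i.e. assuming 10\,000~K instead of 1000~K raises the derived C$^{2+}$/H$^{+}$ ratio by about 53 per cent.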
The Case~B effective recombination coefficients for the C~{\sc ii} $n$g\,$^2$G\,--\,4f\,$^2$F$^{\rm o}$ ($n$\,=\,5, 6 and 7) transitions calculated by Bastin \cite{bastin2006} are almost identical to the Case~A values at 6500~K; for the M16.04 6f\,$^2$F$^{\rm o}$\,--\,4d\,$^2$D $\lambda$6151 transition, the effective recombination coefficients of the two cases differ by only 2 per cent. The C~{\sc ii} M4 4s\,$^2$S\,--\,3p\,$^2$P$^{\rm o}$ $\lambda$3920 multiplet was not given by Davey, Storey \& Kisielius \cite{davey2000}, and the Case~B recombination coefficient of Bastin \cite{bastin2006} was used. The derived C$^{2+}$/H$^+$ abundance ratio agrees with those derived from the three $n$g\,--\,4f transitions. The adopted C$^{2+}$/H$^+$ abundance ratio in the current paper is 6.865$\times$10$^{-4}$, which is an average of the abundances derived from the transitions in Table\,\ref{abundances:ii}. \begin{table} \centering \caption{Recombination line C$^{2+}$/H$^{+}$ abundances. Intensities are normalized such that $I$(H$\beta$) = 100.} \label{abundances:ii} \begin{tabular}{lcll} \hline Line & Mult. 
& $I_{\rm obs}$ & C$^{2+}$/H$^{+}$\\ (\AA) & & & ($\times$10$^{-4}$)\\ \hline 4f $^2$F$^{\rm o}$ -- 3d $^2$D & M6 & & \\ $\lambda$4267 & & 0.875 & 5.507\\ \\ 4s $^2$S -- 3p $^2$P$^{\rm o}$ & M4 & & \\ $\lambda$3918.98 & & 0.015 & \\ $\lambda$3920.69 & & 0.033 & \\ sum & & 0.048 & 8.370$^a$\\ \\ 3d $^2$D -- 3p $^2$P$^{\rm o}$ & M3 & & \\ $\lambda$7231.32 & & 0.127 & \\ $\lambda$7236.42$^b$ & & 0.262 & \\ sum & & 0.437 & 3.198\\ \\ 5g $^2$G -- 4f $^2$F$^{\rm o}$ & M17.02 & & \\ $\lambda$9903.46 & & 0.200 & 8.197$^a$\\ \\ 6g $^2$G -- 4f $^2$F$^{\rm o}$ & M17.04 & & \\ $\lambda$6461.95 & & 0.087 & 8.275$^a$\\ \\ 7g $^2$G -- 4f $^2$F$^{\rm o}$ & M17.06 & & \\ $\lambda$5342.40 & & 0.037 & 6.771$^a$\\ \\ 6f $^2$F$^{\rm o}$ -- 4d $^2$D & M16.04 & & \\ $\lambda$6151.43 & & 0.034 & 7.734$^a$\\ \hline \end{tabular} \begin{description} \item [$^a$] Based on the C~{\sc ii} effective recombination coefficients of B06. An electron temperature of 6500~K, as derived from the H~{\sc i} Balmer jump, and a density of 4300~cm$^{-3}$, as derived from the CEL ratios (Paper~I), are assumed. \item [$^b$] Including the C~{\sc ii} M3 $\lambda$7237.17 line. \end{description} \end{table} \subsubsection{\label{orl_abundances:part3} N$^{2+}$/H$^+$ abundances from ORLs} The N$^{2+}$/H$^+$ abundance ratios derived from the N~{\sc ii} ORLs detected in the spectrum of NGC\,7009, with the most reliable measurements, are presented in Table\,\ref{abundances:iii}. The M3 multiplet of the 3p\,--\,3s configuration is the best observed amongst all the 3\,--\,3 transitions. The N~{\sc ii} effective recombination coefficients of Kisielius \& Storey \cite{ks2002} are used when we derive the N$^{2+}$/H$^{+}$ abundance ratios from the total intensity of an N~{\sc ii} multiplet, which is a sum of line intensities of all the fine-structure components. 
If some components of a multiplet are missing, i.e. not detected due to weakness or line blending, the total intensity of that multiplet is calculated by assuming that the relative intensities of the fine-structure components are as in {\it LS}\,coupling. The most recent N~{\sc ii} effective recombination coefficients calculated by FSL11 are used when we derive the N$^{2+}$/H$^{+}$ abundances from the fine-structure components of each N~{\sc ii} multiplet. Case~B recombination was assumed for the triplets and Case~A for the singlets. An electron temperature of 1000~K and a density of 4300~cm$^{-3}$ were assumed throughout the abundance determinations. Under such physical conditions, the adopted effective recombination coefficient for H$\beta$ is 1.86$\times$10$^{-13}$~cm$^3$\,s$^{-1}$, calculated from the hydrogenic theory of Storey \& Hummer \cite{sh1995}. The M3 3p\,$^3$D\,--\,3s\,$^3$P$^{\rm o}$ $\lambda$5680 multiplet is the strongest N~{\sc ii} permitted transition. At $T_\mathrm{e}$ = 1000~K, the {\it LS}\,coupling effective recombination coefficient of the N~{\sc ii} M3 $\lambda$5680 multiplet calculated by Kisielius \& Storey \cite{ks2002} is only weakly case-sensitive, with the Case~B value being just 20 per cent higher than that for Case~A. In the calculation of FSL11, the Case~B effective recombination coefficient for the $\lambda$5679.56 line, which is the strongest component of the N~{\sc ii} M3 multiplet, is 27 per cent higher than in Case~A. The Case~B effective recombination coefficients for another two N~{\sc ii} M3 lines, $\lambda\lambda$5666.63 and 5676.02, are higher than their corresponding Case~A values by 28 and 24 per cent, respectively. The Case~B coefficients of the other N~{\sc ii} M3 lines are about 24 to 28 per cent higher than for Case~A.
In Table\,\ref{abundances:iii}, the N$^{2+}$/H$^+$ abundance ratios derived from the N~{\sc ii} M3 $\lambda$5680 multiplet lines agree reasonably well with the value derived from the total intensity of this multiplet. The N~{\sc ii} singlet line 3p\,$^1$D\,--\,3s\,$^1$P$^{\rm o}$ $\lambda$3995 is case-insensitive, with its Case~B effective recombination coefficient being only 3.6 per cent higher than for Case~A. The calculation of FSL11 shows that the Case~B effective recombination coefficient for the $\lambda$3995 line is higher than the Case~A value by about 5 per cent at 1000~K. The M5 3p\,$^3$P\,--\,3s\,$^3$P$^{\rm o}$, M20 3d\,$^3$D$^{\rm o}$\,--\,3p\,$^3$D, M28 3d\,$^3$D$^{\rm o}$\,--\,3p\,$^3$P and M29 3d\,$^3$P$^{\rm o}$\,--\,3p\,$^3$P multiplets of N~{\sc ii} are all case-sensitive. The Case~B effective recombination coefficient for the M5 3p\,$^3$P\,--\,3s\,$^3$P$^{\rm o}$ $\lambda$4623 multiplet calculated by Kisielius \& Storey \cite{ks2002} is higher than in Case~A by a factor of 9. FSL11 shows that the Case~B effective recombination coefficient for the $\lambda$4630.54 line, which is the strongest component of the N~{\sc ii} M5 multiplet, is higher than for Case~A by a factor of 8. The Case~B effective recombination coefficients for another two N~{\sc ii} M5 lines, $\lambda\lambda$4601.48 and 4621.39, are 8\,--\,9 per cent higher than in Case~A. For the weakest lines of the N~{\sc ii} M5 multiplet ($\lambda\lambda$4607.16, 4613.87 and 4643.09), the Case~B effective recombination coefficients do not differ much from the Case~A values. In Table\,\ref{abundances:iii}, the N$^{2+}$/H$^+$ abundance ratios derived from the N~{\sc ii} M5 components under the Case~B assumption agree with those yielded by the case-insensitive N~{\sc ii} M3 $\lambda$5680 lines, which indicates that Case~B is the better assumption for the N~{\sc ii} M5 multiplet.
Kisielius \& Storey \cite{ks2002} show that the Case~B effective recombination coefficients for the N~{\sc ii} M20 3d\,$^3$D$^{\rm o}$\,--\,3p\,$^3$D $\lambda$4794, M28 3d\,$^3$D$^{\rm o}$\,--\,3p\,$^3$P $\lambda$5939 and M29 3d\,$^3$P$^{\rm o}$\,--\,3p\,$^3$P $\lambda$5479 multiplets are higher than those for Case~A by factors of 52, 51 and 25, respectively. FSL11 reveals that the Case~B effective recombination coefficients for the strongest fine-structure components of the above three multiplets are 48, 49 and 16 times higher than those for Case~A at a temperature of 1000~K. Given such large differences between the two cases, the N$^{2+}$/H$^+$ abundance ratios derived from the three N~{\sc ii} multiplets (Table\,\ref{abundances:iii}) suggest that Case~B is the better assumption for the 3\,--\,3 transitions of N~{\sc ii}. N$^{2+}$/H$^+$ abundances are also derived from the 4f\,--\,3d transitions, which are case-insensitive. The Case~B effective recombination coefficient for M39b 4f\,G[9/2]$_{5}$\,--\,3d\,$^3$F$^{\rm o}_{4}$ $\lambda$4041.31, the strongest 4f\,--\,3d transition of N~{\sc ii}, differs from the Case~A value by only 0.25 per cent (FSL11). The N$^{2+}$/H$^+$ abundance derived from the $\lambda$4041.31 line agrees well with that from N~{\sc ii} M39a 4f\,G[7/2]$_{4}$\,--\,3d\,$^3$F$^{\rm o}_{3}$ $\lambda$4043.53. Most of the other N~{\sc ii} 4f\,--\,3d transitions in Table\,\ref{abundances:iii} yield abundances close to those from the $\lambda\lambda$4041.31 and 4043.53 lines, except for M39b 4f\,G[9/2]$_{4}$\,--\,3d\,$^3$F$^{\rm o}_{4}$ $\lambda$4039.35, which gives an abnormally high abundance value. We attribute this to the large measurement error due to the weakness of this line.
N~{\sc ii} M58a 4f\,G[7/2]$_{4}$\,--\,3d\,$^1$F$^{\rm o}_{3}$ $\lambda$4552.53 also yields a relatively high abundance value, which could be due to the blended Si~{\sc iii} M2 4p\,$^3$P$^{\rm o}_{2}$\,--\,4s\,$^3$S$_{1}$ $\lambda$4552.62 and Ne~{\sc ii} M55d 4f\,2[2]$^{\rm o}_{5/2}$\,--\,3d\,$^4$F$_{3/2}$ $\lambda$4553.17 lines. The intensity of the N~{\sc ii} M61a 4f\,D[5/2]$_{2}$\,--\,3d\,$^1$P$^{\rm o}_{1}$ $\lambda$4694.64 line is probably overestimated due to an unknown blend, as can be seen from the profile of the feature. The three lines $\lambda\lambda$4039.35, 4552.53 and 4694.64 are excluded when calculating the total intensity and the average abundance ratio. The N$^{2+}$/H$^+$ abundances derived from the $\lambda\lambda$4041.31,\,4043.53 lines agree with those from the N~{\sc ii} M3 $\lambda\lambda$5666.63,\,5679.56 lines. The N~{\sc ii} M3 $\lambda$5676.02 line yields a relatively high abundance. Measurements of this line could be unreliable due to the blended $\lambda$5679.56 line, which is 4 times stronger (Fig.\,\ref{5650-5760}). The relatively low S/N in this wavelength region also affects the measurements. It has been known for decades that the N~{\sc ii} permitted lines from the low-lying 3d\,--\,3p and 3p\,--\,3s triplet arrays, whose upper levels are linked to the ground term 2p$^2$\,$^3$P by resonance lines, can be enhanced by fluorescence excitation.
Grandi \cite{grandi1976} used photoionization models to study the excitation mechanisms of permitted transitions from common heavy element ions observed in the spectra of the Orion nebula and the PNe NGC\,7027 and NGC\,7662, and found that while the N~{\sc ii} M28 3d\,$^3$D$^{\rm o}$\,--\,3p\,$^3$P $\lambda$5942 multiplet is excited by both recombination and continuum fluorescence of the starlight, emission of the N~{\sc ii} M3 3p\,$^3$D\,--\,3s\,$^3$P$^{\rm o}$ $\lambda$5680, M5 3p\,$^3$P\,--\,3s\,$^3$P$^{\rm o}$ $\lambda$4630 and M30 4s\,$^3$P$^{\rm o}$\,--\,3p\,$^3$P $\lambda$3838 multiplets is dominated by fluorescence excitation of the N~{\sc ii} 4s\,$^3$P$^{\rm o}_{1}$ level by the He~{\sc i} 1s8p\,$^1$P$^{\rm o}_{1}$\,--\,1s$^2$\,$^1$S$_{0}$ $\lambda$508.643 resonance line, which coincides in wavelength with the N~{\sc ii} 2p4s\,$^3$P$^{\rm o}_{1}$\,--\,2p$^2$\,$^3$P$_{0}$ $\lambda$508.668 line. Fluorescence excitation, by line or continuum, however, cannot excite the singlet transitions or the transitions of the 3d\,--\,4f configuration. Escalante \& Morisset \cite{em2005} analyzed the N~{\sc ii} spectrum of the Orion nebula using nebular and stellar atmosphere models. Their modeling shows that the intensities of most of the N~{\sc ii} permitted lines in Orion could be explained by fluorescence of the starlight continuum. Recombination of N$^{2+}$ contributes only a minor part of the observed intensities of lines from the 3p and 3d levels connected to the ground state. They constrained the effective temperature of the ionizing star to below 38\,000~K in order to reproduce the observed line intensities. Our current analysis shows that the N$^{2+}$/H$^{+}$ abundances derived from the 3p\,--\,3s transitions agree with those derived from the 4f\,--\,3d recombination lines (the values in boldface in Table\,\ref{abundances:iii}), which are unlikely to be affected by the fluorescence mechanisms.
This indicates that fluorescence excitation of the N~{\sc ii} 3p\,--\,3s lines, although it may still exist, is probably insignificant in NGC\,7009. Fluorescence enhancement of the N~{\sc ii} lines by starlight could also be negligible, given the physical conditions of NGC\,7009 (C. Morisset, private communication). However, physical parameters, e.g. the UV radiation field of the central star, the optical depths of the resonance transitions that connect the ground state 2p$^2$\,$^3$P and an excited state (e.g. 2p4s\,$^3$P$^{\rm o}$), and the column densities of the N$^{+}$ and N$^{2+}$ ions, are needed in order to estimate the enhancement of the N~{\sc ii} lines due to resonance fluorescence by starlight or by other emission lines. This means that consistent modeling of the central star and the nebula is needed. The average N$^{2+}$/H$^{+}$ abundance from the 3\,--\,3 transitions is 3.70$\times$10$^{-4}$, which agrees with the average value (3.71$\times$10$^{-4}$) from the 4f\,--\,3d transitions. Here the N~{\sc ii} transitions (e.g. M29) that yield abnormally high abundances are excluded from the averaging. The N$^{2+}$/H$^{+}$ abundance derived by co-adding the line intensities of the 4f\,--\,3d transitions is 3.42$\times$10$^{-4}$, which agrees well with the abundance calculated from the total intensity of the M3 multiplet of N~{\sc ii} (the values in boldface in Table\,\ref{abundances:iii}). The abundances derived from the total intensities of the 4f\,--\,3d transitions are preferred over the averages of the abundances from individual lines, since strong lines are better detected, with smaller (relative) flux uncertainties. We adopt the mean value (3.45$\times$10$^{-4}$) obtained by averaging the abundances from the M3 multiplet and from the total intensity of the 4f\,--\,3d transitions as the recombination line N$^{2+}$/H$^{+}$ abundance of NGC\,7009.
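Explicitly, the adopted value is the unweighted mean of the two preferred determinations (the boldface entries in Table\,\ref{abundances:iii}):
\begin{equation}
\frac{N({\rm N}^{2+})}{N({\rm H}^{+})} = \frac{3.482 + 3.419}{2}\times 10^{-4} \simeq 3.45\times 10^{-4}.
\end{equation}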
This value is about 10 per cent higher than 3.10$\times$10$^{-4}$ (slit position angle PA = 45$^{\rm o}$) given by LSBC, who used the N~{\sc ii} M39b $\lambda$4041.31 and M39a $\lambda$4043.53 lines to derive the N$^{2+}$/H$^+$ abundance of NGC\,7009. \begin{table} \centering \caption{Recombination line N$^{2+}$/H$^{+}$ abundances. Intensities are normalized such that H$\beta$ = 100. The N~{\sc ii} effective recombination coefficients of FSL11 and Kisielius \& Storey (\citealt{ks2002}, KS02) are both used for purpose of comparison.} \label{abundances:iii} \begin{tabular}{lclrr} \hline Line & Mult. & $I_{\rm obs}$ & \multicolumn{2}{c}{N$^{2+}$/H$^{+}$}\\ & & & \multicolumn{2}{c}{($\times$10$^{-4}$)}\\ (\AA) & & & FSL11 & KS02\\ \hline \multicolumn{5}{c}{3 -- 3 transitions}\\ $\lambda$5666.63 & M3 & 0.064 & 3.442 & \\ $\lambda$5676.02 & M3 & 0.036 & 4.075 & \\ $\lambda$5679.56 & M3 & 0.130 & 3.319 & \\ $\lambda$5686.21 & M3 & 0.024 & 4.695 & \\ $\lambda$5710.77 & M3 & 0.020 & 2.886 & \\ \textbf{M3 3p~$^3$D -- 3s~$^3$P$^{\rm o}$} & & 0.280 & \textbf{3.482} & \textbf{3.358}\\ \\ $\lambda$4601.48 & M5 & 0.016 & 3.087 & \\ $\lambda$4621.39$^a$ & M5 & 0.020 & 4.423 & \\ $\lambda$4630.54 & M5 & 0.067 & 3.549 & \\ \textbf{M5 3p~$^3$P -- 3s~$^3$P$^{\rm o}$} & & 0.102 & \textbf{3.598} & \textbf{2.242}\\ \\ $\lambda$3994.99 & M12 & 0.033 & 6.655 & \\ \textbf{M12 3p~$^1$D -- 3s~$^1$P$^{\rm o}$} & & 0.033 & \textbf{6.655} & \textbf{6.776}\\ \\ $\lambda$4803.29 & M20 & 0.032 & 2.723 & \\ \textbf{M20 3d~$^3$D$^{\rm o}$ -- 3p~$^3$D} & & 0.078 & \textbf{4.412} & \textbf{3.464}\\ \\ $\lambda$5941.65$^b$ & M28 & 0.030 & 1.795 & \\ \textbf{M28 3d~$^3$D$^{\rm o}$ -- 3p~$^3$P} & & 0.063 & \textbf{3.038} & \textbf{2.119}\\ \\ $\lambda$5480.06$^c$ & M29 & 0.012 & 9.409 & \\ \textbf{M29 3d~$^3$P$^{\rm o}$ -- 3p~$^3$P} & & 0.061 & \textbf{14.949} & \textbf{7.590}\\ \\ Average & & & \textbf{3.695} & \\ \\ \multicolumn{5}{c}{4f -- 3d transitions}\\ $\lambda$4035.08 & M39a & 0.035 & 2.926 &\\ 
$\lambda$4041.31$^d$ & M39b & 0.081 & 3.174 &\\ $\lambda$4043.53 & M39a & 0.035 & 3.168 &\\ $\lambda$4171.61 & M43b & 0.023 & 3.033 &\\ $\lambda$4176.16 & M43a & 0.021 & 3.739 &\\ $\lambda$4236.91$^e$ & M48a & 0.036 & 3.727 &\\ $\lambda$4241.78$^f$ & M48a & 0.093 & 3.689 &\\ $\lambda$4179.67 & M50a & 0.010 & 3.508 &\\ $\lambda$4432.74 & M55a & 0.036 & 3.089 &\\ $\lambda$4442.02 & M55a & 0.011 & 3.204 &\\ $\lambda$4552.53$^g$ & M58a & 0.032 & 6.381 &\\ $\lambda$4530.41 & M58b & 0.046 & 3.792 &\\ $\lambda$4678.14 & M61b & 0.012 & 3.049 &\\ $\lambda$4694.64 & M61a & 0.020 & 5.393 &\\ \\ Sum & & 0.466 & \textbf{3.419} &\\ Average & & & \textbf{3.705} &\\ \hline \end{tabular} \begin{description} \item [$^a$] Including O~{\sc ii} M92c lines 4f\,F[2]$^{\rm o}_{5/2}$\,--\,3d\,$^2$D$_{5/2}$ $\lambda$4621.27. Neglecting O~{\sc ii} M92c 4f\,F[2]$^{\rm o}_{3/2}$\,--\,3d\,$^2$D$_{5/2}$ $\lambda$4622.14. \item [$^b$] Neglecting the N~{\sc ii} M28 3d\,$^3$D$^{\rm o}_{1}$\,--\,3p\,$^3$P$_{1}$ $\lambda$5940.24 line, which contributes less than 1 per cent to the total intensity. \item [$^c$] Including N~{\sc ii} M29 3d\,$^3$P$^{\rm o}_{2}$\,--\,3p\,$^3$P$_{1}$ $\lambda$5478.10. \item [$^d$] Corrected for the contribution from the O~{\sc ii} M50c 4f\,F[2]$^{\rm o}_{5/2}$\,--\,3d\,$^4$F$_{5/2}$ $\lambda$4041.28 line (7 per cent). \item [$^e$] Corrected for the contribution from the N~{\sc ii} M48b 4f\,F[7/2]$_{3}$\,--\,3d\,$^3$D$^{\rm o}_{2}$ $\lambda$4237.05 line ($\sim$30 per cent). \item [$^f$] Including N~{\sc ii} M48b 4f\,F[7/2]$_{4}$\,--\,3d\,$^3$D$^{\rm o}_{3}$ $\lambda$4241.78. \item [$^g$] Including the contribution from the Ne~{\sc ii} M55d 4f\,2[2]$^{\rm o}_{5/2}$\,--\,3d\,$^4$F$_{3/2}$ $\lambda$4553.17 line. Neglecting Ne~{\sc ii} M55d 4f\,2[2]$^{\rm o}_{3/2}$\,--\,3d\,$^4$F$_{3/2}$ $\lambda$4553.40. 
\end{description} \end{table} \subsubsection{\label{orl_abundances:part4} O$^{2+}$/H$^+$ abundances from ORLs} In the spectrum of NGC\,7009, O~{\sc ii} has the richest optical recombination spectrum amongst all the heavy element ions detected. The most prominent multiplets of the O~{\sc ii} transitions are presented in Sections\,\ref{oii_orls} and \ref{appendix:c}. The spectrum analyzed in the current paper covers a very broad wavelength range (3040--11\,100\,{\AA}), is among the deepest CCD spectra ever taken for an emission line nebula, and is also of higher quality than that published by LSBC. In the wavelength ranges 3040--4048\,{\AA} and 3990--4980\,{\AA}, where the most prominent recombination lines of O~{\sc ii} are located, our data quality is as good as that of Liu et al. \cite{liu2000} for the PN NGC\,6153, which was observed using the same instruments mounted on the ESO 1.5~m telescope. The O$^{2+}$/H$^+$ abundance ratios derived from the O~{\sc ii} ORLs with the most reliable measurements are presented in Tables\,\ref{abundances:iv} (the 3d\,--\,3p and 3p\,--\,3s transitions) and \ref{abundances:v} (the 4f\,--\,3d transitions). The Case~B effective recombination coefficients of O~{\sc ii} calculated by PJS are adopted for the abundance determinations. An electron temperature of 1000~K is assumed. For the purpose of comparison, the effective recombination coefficients of Storey \cite{storey1994} for the 3p\,--\,3s transitions, and of LSBC for the 3d\,--\,3p and 4f\,--\,3d transitions, are also used. Case~B is assumed for the quartet transitions, and Case~A for the doublets. An electron temperature of 5000~K is assumed when using the data of Storey \cite{storey1994}, whose calculation is valid from 5000 to 20\,000~K.
Since the calculation of Storey \cite{storey1994} is only spectral term-resolved, we deduce the effective recombination coefficients for each fine-structure component of a multiplet under the assumption that their relative intensities follow {\it LS}\,coupling. As in the case of the N~{\sc ii} lines (Section\,\ref{orl_abundances:part3}), for each multiplet in Table\,\ref{abundances:iv}, we calculate the abundance value using the co-added intensities from all fine-structure components observed; for the O~{\sc ii} 4f\,--\,3d transitions in Table\,\ref{abundances:v}, we also calculate the abundance after co-adding the intensities of all detected lines. The O~{\sc ii} M1 3p\,$^4$D$^{\rm o}$\,--\,3s\,$^4$P $\lambda$4650 multiplet is case-insensitive. At 5000~K, the Case~B effective recombination coefficient of the M1 $\lambda$4650 multiplet given by Storey \cite{storey1994} is only 3.4 per cent higher than for Case~A. P\'{e}quignot, Petitjean \& Boisson \cite{ppb1991} show that the difference between the effective radiative recombination coefficients in the two cases for this multiplet is 2.5 per cent. The most recent calculation of PJS reveals that the difference between the effective recombination coefficients in the two cases for the strongest O~{\sc ii} M1 line $\lambda$4649.13 is less than 5 per cent. The differences for the other O~{\sc ii} M1 fine-structure lines are of similar magnitude. The O$^{2+}$/H$^+$ abundances derived from the individual O~{\sc ii} M1 lines agree with each other, except for $\lambda\lambda$4638.86 and 4673.73, which yield very high abundance values. The average O$^{2+}$/H$^+$ from O~{\sc ii} M1 is 16.2$\times$10$^{-4}$, which agrees well with the values derived from the recombination coefficients of P\'{e}quignot, Petitjean \& Boisson \cite{ppb1991} and Storey \cite{storey1994}.
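The distribution of a term-resolved multiplet coefficient over its fine-structure components under the {\it LS}\,coupling assumption can be sketched as (a schematic statement of the assumption, not of the numerical code)

```latex
\begin{equation}
  \alpha_{\rm eff}(J \rightarrow J') =
  \alpha_{\rm eff}(\mathrm{multiplet}) \times
  \frac{s(J,J')}{\sum_{J,J'} s(J,J')},
\end{equation}
```

where $s(J,J')$ is the relative intensity of the component within the multiplet predicted in pure {\it LS}\,coupling.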
The current measurement of the multiplet also agrees with LSBC, who give 15.3$\times$10$^{-4}$ (PA = 0$^{\rm o}$) and 13.5$\times$10$^{-4}$ (PA = 45$^{\rm o}$). Of the seven observed 3d\,--\,3p multiplets in Table\,\ref{abundances:iv}, the intensities of those from the upper terms 2p$^{2}$3d\,$^{4}$F and 2p$^{2}$3d\,$^{4}$D are almost independent of the assumption of Case~A or Case~B. The M2 3p\,$^4$P$^{\rm o}$\,--\,3s\,$^4$P $\lambda$4341, M6 3p\,$^2$P$^{\rm o}$\,--\,3s\,$^2$P $\lambda$3967, M10 3d\,$^4$F\,--\,3p\,$^4$D$^{\rm o}$ $\lambda$4075, M12 3d\,$^4$D\,--\,3p\,$^4$D$^{\rm o}$ $\lambda$3867, M20 3d\,$^4$D\,--\,3p\,$^4$P$^{\rm o}$ $\lambda$4111 and M26 3d\,$^2$D\,--\,3p\,$^2$D$^{\rm o}$ $\lambda$4385 multiplets of O~{\sc ii} are case-insensitive, with their Case~B or Case~C\footnote{Defined with reference to the O~{\sc ii} recombination spectrum. In Case~B, lines terminating on the 2p$^3$\,$^4$S$^{\rm o}$ term are assumed to be optically thick and no radiative decays are permitted to this state. In Case~C, radiative decays to both the 2p$^3$\,$^4$S$^{\rm o}$ and $^2$D$^{\rm o}$ terms are excluded.} effective recombination coefficients being 2.3--30 per cent higher than those for Case~A (Storey \citealt{storey1994}). The O$^{2+}$/H$^{+}$ abundance ratios derived from the M1 and M2 multiplets agree with those derived from the 3d\,--\,3p and 4f\,--\,3d transitions, while in LSBC and Liu et al. \cite{liu2000} for NGC\,6153, the abundances from those two multiplets are lower by about 40 per cent and a factor of 2, respectively. For Case~C to apply to the doublets, transitions to the 2p$^{3}$\,$^{2}$D$^{\rm o}$ state of the ground configuration must be optically thick, which is unlikely under the physical conditions of NGC\,7009. Thus the doublets (e.g.
the M5 3p\,$^2$D$^{\rm o}$\,--\,3s\,$^2$P $\lambda$4418, M6 3p\,$^{2}$P$^{\rm o}$\,--\,3s\,$^{2}$P $\lambda$3967 and M25 3d\,$^2$F\,--\,3p\,$^2$D$^{\rm o}$ $\lambda$4704 multiplets) can be regarded as case-insensitive. The M11 3d\,$^4$P\,--\,3p\,$^4$D$^{\rm o}$ $\lambda$3903, M19 3d\,$^4$P\,--\,3p\,$^4$P$^{\rm o}$ $\lambda$4152 and M28 3d\,$^4$P\,--\,3p\,$^4$S$^{\rm o}$ $\lambda$4913 multiplets, which share an upper term that can decay to the 2p$^{3}$\,$^{4}$S$^{\rm o}$ ground state of O~{\sc ii} via resonance transitions, are expected to be very case-sensitive: their Case~B effective recombination coefficients are more than 20 times higher than the Case~A values. Table\,\ref{abundances:iv} shows that in our current analysis, the O$^{2+}$/H$^{+}$ abundance ratios derived from the quartets M10, M12 and M20, all case-insensitive, are systematically higher than those derived from the case-sensitive multiplets M19 and M28, a phenomenon first found by Liu et al. \cite{liu2000} for NGC\,6153. As pointed out by Liu et al. \cite{liu2000}, it is possible that there is a small departure from the assumed Case~B towards Case~A, which would increase the derived abundances for the multiplets that decay from the 2p$^{2}$3d\,$^{4}$P upper term. The three doublets presented in Table\,\ref{abundances:iv} yield abundance values that are consistent with those derived from the case-insensitive quartets (M10, M12 and M20), except for the M6 doublet, which yields systematically higher abundances. The O~{\sc ii} M28 lines yield relatively lower abundance ratios, indicating that Case~B might not be a good approximation for this multiplet. The O$^{2+}$/H$^{+}$ abundance ratios derived from the $\lambda$4132.80 and $\lambda$4153.30 lines of the O~{\sc ii} M19 multiplet are noticeably lower than those from the other O~{\sc ii} multiplets.
The observed $\lambda$4156.53 line of M19 is too strong compared to the other components of this multiplet, with the derived abundance value being higher than those deduced from other multiplets by nearly a factor of two. This was also observed in NGC\,6153 (Liu et al. \citealt{liu2000}) and by LSBC for NGC\,7009. No convincing candidates for lines which might be blended with the O~{\sc ii} M19 $\lambda$4156.53 line, and thus cause the discrepancy, were found by LSBC. In NGC\,7009, the intensity ratio of the $\lambda$4156.53 ($J$ = 3/2\,--\,5/2) and $\lambda$4132.80 ($J$ = 3/2\,--\,1/2) lines of M19, which decay from the same upper level, is 1.06, which is about a factor of two lower than the values of 1.7 (slit position PA = 0$^{\rm o}$) and 2.1 (PA = 45$^{\rm o}$) found for the same object by LSBC, but agrees with the values of 0.6 (for the minor axis) and 1.1 (for the whole nebula) given by Liu et al. \cite{liu2000} for NGC\,6153. The $\lambda$4132.80 line detected in our spectrum of NGC\,7009 has an FWHM of 1.85\,{\AA}, broader than that of the $\lambda$4156.53 line (FWHM\,$\sim$1.60\,{\AA}). The extra width of the $\lambda$4132.80 line might be due to line blending, which would result in a line ratio lower than the previous measurements. In addition, differences in data quality might also contribute to the discrepancy between LSBC and our current measurements. For the $\lambda$4156.53 line, we still do not know which feature it is blended with. In general, the O$^{2+}$/H$^{+}$ abundance ratios deduced using the Case~B effective recombination coefficients of PJS are lower than those deduced using the radiative recombination coefficients of LSBC, by 30--50 per cent for the 3d\,--\,3p transitions, and by 20--30 per cent for most of the 4f\,--\,3d transitions.
The O$^{2+}$/H$^{+}$ abundance ratios calculated by co-adding the line intensities of each 3p\,--\,3s multiplet, using the effective recombination coefficients of PJS, do not differ much from those deduced using the coefficients of Storey \cite{storey1994}. However, the abundances deduced from the individual fine-structure components of the 3p\,--\,3s multiplets, using the effective recombination coefficients of PJS, are systematically lower than those deduced using the coefficients of Storey \cite{storey1994}. This difference is more obvious for the two doublets M5 and M6, with the abundances deduced based on the coefficients of PJS being lower by about 10 per cent. If we use the Case~A effective recombination coefficients of Storey \cite{storey1994}, the derived abundances will be lower than those deduced using the data of PJS by a factor of 7. The 4f\,--\,3d transitions are essentially case-insensitive, and the O$^{2+}$/H$^+$ abundance ratios derived from those lines using the data of PJS agree well. The abundance value calculated by co-adding the 4f\,--\,3d line intensities in Table\,\ref{abundances:v} is 1.330$\times$10$^{-3}$, which agrees with the average value (1.403$\times$10$^{-3}$) of the 3\,--\,3 transitions. The mean O$^{2+}$/H$^{+}$ abundance ratio derived by averaging the values from all 3\,--\,3 multiplets (excluding the values that are abnormally high) plus the co-added 4f\,--\,3d transitions is 1.417$\times$10$^{-3}$, which is slightly lower than the recombination line abundances given by LSBC: 17.0$\pm$1.0$\times$10$^{-4}$ (PA = 45$^{\rm o}$) and 17.6$\pm$1.7$\times$10$^{-4}$ (PA = 0$^{\rm o}$). This value is adopted as the recombination line O$^{2+}$/H$^{+}$ abundance in NGC\,7009. \begin{table} \centering \caption{Recombination line O$^{2+}$/H$^{+}$ abundances derived from the 3\,--\,3 transitions. Intensities are normalized such that $I$(H$\beta$) = 100.
The effective recombination coefficients of Storey \cite{storey1994} (for the 3p\,--\,3s transitions) and LSBC (for the 3d\,--\,3p and 4f\,--\,3d transitions) are also used for the purpose of comparison.} \label{abundances:iv} \begin{tabular}{lllrr} \hline Line & Mult. & $I_{\rm obs}$ & \multicolumn{2}{c}{O$^{2+}$/H$^{+}$}\\ & & & \multicolumn{2}{c}{($\times$10$^{-4}$)}\\ (\AA) & & & PJS & LSBC\\ \hline $\lambda$4638.86 & M1 & 0.335 & 25.002 & 33.581\\ $\lambda$4641.81 & M1 & 0.437 & 14.549 & 17.355\\ $\lambda$4649.13 & M1 & 0.666 & 13.677 & 13.886\\ $\lambda$4650.84 & M1 & 0.175 & 12.731 & 17.542\\ $\lambda$4661.63 & M1 & 0.217 & 14.490 & 16.946\\ $\lambda$4673.73 & M1 & 0.052 & 21.607 & 25.815\\ $\lambda$4676.24 & M1 & 0.158 & 15.125 & 14.642\\ $\lambda$4696.35 & M1 & 0.015 & 13.528 & 12.510\\ \textbf{M1 3p~$^4$D$^{\rm o}$ -- 3s~$^4$P} & & 2.056 & \textbf{15.241} & \textbf{17.369}\\ \\ $\lambda$4317.14 & M2 & 0.091 & 12.821 & 12.383\\ $\lambda$4319.63 & M2 & 0.063 & 14.010 & 7.938\\ $\lambda$4325.76 & M2 & 0.029 & 17.471 & 19.731\\ $\lambda$4336.86 & M2 & 0.053 & 18.008 & 22.536\\ $\lambda$4345.56 & M2 & 0.141 & 15.455 & 19.187\\ $\lambda$4349.43 & M2 & 0.195 & 15.876 & 10.530\\ $\lambda$4366.89 & M2 & 0.085 & 11.471 & 10.710\\ \textbf{M2 3p~$^4$P$^{\rm o}$ -- 3s~$^4$P} & & 0.657 & \textbf{14.589} & \textbf{12.577}\\ \\ $\lambda$4414.90 & M5 & 0.100 & 19.665 & 22.885\\ $\lambda$4416.97 & M5 & 0.064 & 17.370 & 26.363\\ $\lambda$4452.37 & M5 & 0.014 & 20.859 & 28.835\\ \textbf{M5 3p~$^2$D$^{\rm o}$ -- 3s~$^2$P} & & 0.178 & \textbf{18.855} & \textbf{24.440}\\ \\ $\lambda$3945.04 & M6 & 0.026 & 56.459 & 61.094\\ $\lambda$3954.36 & M6 & 0.030 & 31.310 & 35.245\\ $\lambda$3973.26 & M6 & 0.065 & 27.719 & 30.546\\ $\lambda$3982.71 & M6 & 0.010 & 20.411 & 23.497\\ \textbf{M6 3p~$^2$P$^{\rm o}$ -- 3s~$^2$P} & & 0.131 & \textbf{30.785} & \textbf{29.451}\\ \\ $\lambda$4069.89$^a$ & M10 & 0.635 & 16.777 & 23.914\\ $\lambda$4072.16$^b$ & M10 & 0.549 & 17.051 & 20.461\\
$\lambda$4075.86 & M10 & 0.687 & 16.687 & 19.241\\ $\lambda$4078.84 & M10 & 0.089 & 15.899 & 11.811\\ $\lambda$4085.11 & M10 & 0.107 & 16.746 & 23.176\\ $\lambda$4092.93$^c$ & M10 & 0.077 & 17.567 & 22.941\\ \textbf{M10 3d~$^4$F -- 3p~$^4$D$^{\rm o}$} & & 2.144 & \textbf{16.732} & \textbf{20.841}\\ \\ $\lambda$3907.45 & M11 & 0.010 & 16.364 & 9.291\\ \textbf{M11 3d~$^4$P -- 3p~$^4$D$^{\rm o}$} & & 0.026$^d$ & \textbf{15.861} & \textbf{8.411}\\ \\ $\lambda$3842.82 & M12 & 0.015 & 12.406 & 28.465\\ $\lambda$3851.03 & M12 & 0.017 & 12.351 & 19.397\\ $\lambda$3882.19$^e$ & M12 & 0.053 & 14.929 & 14.462\\ \textbf{M12 3d~$^4$D -- 3p~$^4$D$^{\rm o}$} & & 0.099 & \textbf{13.877} & \textbf{10.972}\\ \\ $\lambda$4129.32 & M19 & 0.032 & 21.489 & 45.444\\ $\lambda$4132.80 & M19 & 0.078 & 9.736 & 13.237\\ $\lambda$4140.70$^f$ & M19 & 0.003 & 45.123 & 23.923\\ $\lambda$4153.30 & M19 & 0.112 & 9.596 & 13.302\\ $\lambda$4156.53$^g$ & M19 & 0.079 & 35.714 & 59.101\\ $\lambda$4169.22 & M19 & 0.082 & 17.892 & 28.689\\ \textbf{M19 3d~$^4$P -- 3p~$^4$P$^{\rm o}$} & & 0.386 & \textbf{13.760} & \textbf{17.267}\\ \\ $\lambda$4110.79 & M20 & 0.025 & 14.198 & 9.655\\ \textbf{M20 3d~$^4$D -- 3p~$^4$P$^{\rm o}$} & & 0.259$^d$ & \textbf{14.162} & \textbf{10.973}\\ \hline \end{tabular} \end{table} \addtocounter{table}{-1} \begin{table} \centering \caption{Continued.} \label{abundances:iv} \begin{tabular}{lllrr} \hline Line & Mult. 
& $I_{\rm obs}$ & \multicolumn{2}{c}{O$^{2+}$/H$^{+}$}\\ & & & \multicolumn{2}{c}{($\times$10$^{-4}$)}\\ (\AA) & & & PJS & LSBC\\ \hline $\lambda$4699.22 & M25 & 0.024 & 25.809 & 33.771\\ $\lambda$4705.35 & M25 & 0.021 & 16.924 & 17.753\\ \textbf{M25 3d~$^2$F -- 3p~$^2$D$^{\rm o}$} & & 0.046$^h$ & \textbf{20.735} & \textbf{24.293}\\ \\ $\lambda$4890.86 & M28 & 0.013 & 6.360 & 10.377\\ $\lambda$4906.83 & M28 & 0.046 & 10.907 & 17.114\\ $\lambda$4924.53 & M28 & 0.074 & 10.609 & 16.190\\ \textbf{M28 3d~$^4$P -- 3p~$^4$S$^{\rm o}$} & & 0.133 & \textbf{10.049} & \textbf{15.311}\\ \\ Average & & & \textbf{14.032} & \textbf{21.896}\\ \hline \end{tabular} \begin{description} \item [$^a$] Including the O~{\sc ii} M10 3d\,$^4$F$_{3/2}$\,--\,3p\,$^4$D$^{\rm o}_{1/2}$ $\lambda$4069.62 line. \item [$^b$] Corrected for the contribution from the O~{\sc ii} M48a 4f\,G[5]$^{\rm o}_{9/2}$\,--\,3d\,$^{4}$F$_{7/2}$ $\lambda$4071.23 line ($\sim$10 per cent). Neglecting the N~{\sc ii} M38b 4f\,G[7/2]$_{3}$\,--\,3d\,$^{3}$F$^{\rm o}_{2}$ $\lambda$4073.05 line ($\sim$2 per cent). \item [$^c$] Overestimated due to N~{\sc iii} M1 3p\,$^{2}$P$^{\rm o}_{3/2}$\,--\,3s\,$^{2}$S$_{1/2}$ $\lambda$4097.33 line which is more than 20 times stronger. \item [$^d$] Assuming the relative intensities of this multiplet are the predicted values based on the effective recombination coefficients of PJS. \item [$^e$] Corrected for the contribution from the O~{\sc ii} M11 3d\,$^4$P$_{3/2}$\,--\,3p\,$^4$D$^{\rm o}_{3/2}$ $\lambda$3882.45 line ($\sim$17 per cent). Neglecting the O~{\sc ii} M12 3d\,$^{4}$D$_{5/2}$\,--\,3p\,$^{4}$D$^{\rm o}_{7/2}$ $\lambda$3883.13 line ($\sim$2 per cent). \item [$^f$] Overestimated due to the much stronger He~{\sc i} M53 6d\,$^{1}$D$_{2}$\,--\,2p\,$^{1}$P$^{\rm o}_{1}$ $\lambda$4143.76 line. 
\item [$^g$] Neglecting the N~{\sc ii} M50b 4f\,D[3/2]$_{2}$\,--\,3d\,$^3$D$^{\rm o}_{1}$ $\lambda$4156.39 and the N~{\sc ii} M50b 4f\,D[2]$_{1}$\,--\,3d\,$^3$D$^{\rm o}_{1}$ $\lambda$4157.01 lines ($\sim$4 per cent in total). \item [$^h$] Not including the O~{\sc ii} M25 3d\,$^2$F$_{5/2}$\,--\,3p\,$^2$D$^{\rm o}_{5/2}$ $\lambda$4741.71 line. \end{description} \end{table} \begin{table} \centering \caption{Recombination line O$^{2+}$/H$^{+}$ abundances from the 4f\,--\,3d transitions. Intensities are normalized such that $I$(H$\beta$) = 100. The O~{\sc ii} effective recombination coefficients of LSBC are also used for the purpose of comparison.} \label{abundances:v} \begin{tabular}{lllrr} \hline Line & Mult. & $I_{\rm obs}$ & \multicolumn{2}{c}{O$^{2+}$/H$^{+}$}\\ & & & \multicolumn{2}{c}{($\times$10$^{-4}$)}\\ (\AA) & & & PJS & LSBC\\ \hline $\lambda$4089.29$^a$ & M48a & 0.264 & 12.901 & 15.308\\ $\lambda$4083.90 & M48b & 0.094 & 13.310 & 19.077\\ $\lambda$4087.15$^b$ & M48c & 0.091 & 13.675 & 19.461\\ $\lambda$4062.94$^c$ & M50a & 0.036 & 13.975 & 16.592\\ $\lambda$4048.21$^d$ & M50b & 0.021 & 14.363 & 19.236\\ $\lambda$4303.83$^e$ & M53a & 0.120 & 13.844 & 16.847\\ $\lambda$4294.78$^f$ & M53b & 0.067 & 13.025 & 16.756\\ $\lambda$4307.23 & M53b & 0.029 & 12.415 & 16.053\\ $\lambda$4288.82$^g$ & M53c & 0.026 & 17.477 & 15.659\\ $\lambda$4282.96$^h$ & M67c & 0.063 & 17.615 & 23.690\\ $\lambda$4466.42$^i$ & M86b & 0.032 & 12.247 & 20.368\\ $\lambda$4489.49 & M86b & 0.022 & 12.958 & 19.542\\ $\lambda$4491.23 & M86a & 0.057 & 14.606 & 24.119\\ $\lambda$4609.44$^j$ & M92a & 0.128 & 13.969 & 17.309\\ $\lambda$4602.13$^k$ & M92b & 0.052 & 12.938 & 17.628\\ \\ Sum & & 1.103 & \textbf{13.303} & \textbf{17.556}\\ Average & & & \textbf{13.955} & \textbf{18.510}\\ \hline \end{tabular} \begin{description} \item [$^a$] Corrected for the contribution from Si~{\sc iv} M1 4p~$^{2}$P$^{\rm o}_{3/2}$\,--\,4s~$^{2}$S$_{1/2}$ $\lambda$4088.86, which is about 15 per cent.
Neglecting O~{\sc ii} M48a 4f~G[5]$^{\rm o}_{9/2}$\,--\,3d~$^{4}$F$_{9/2}$ $\lambda$4088.27 (less than 2 per cent). \item [$^b$] Neglecting N~{\sc ii} M38a 4f~F[5/2]$_{3}$\,--\,3d~$^3$F$^{\rm o}_{3}$ $\lambda$4087.30 (about 3 per cent). \item [$^c$] Including Ne~{\sc ii} M53 4f~0[3]$^{\rm o}_{7/2}$\,--\,3d~$^{4}$D$_{5/2}$ $\lambda$4062.97. \item [$^d$] Neglecting the contribution from O~{\sc ii} M50b 4f~F[3]$^{\rm o}_{5/2}$\,--\,3d~$^{4}$F$_{7/2}$ $\lambda$4047.80, which is probably 7 per cent. \item [$^e$] Corrected for the contribution from O~{\sc ii} M65a 4f~G[5]$^{\rm o}_{9/2}$\,--\,3d~$^4$D$_{7/2}$ $\lambda$4303.61, which is about 20 per cent. \item [$^f$] The contribution from O~{\sc ii} M53b 4f~D[2]$^{\rm o}_{3/2}$\,--\,3d~$^4$P$_{3/2}$ $\lambda$4294.92 to the blend at $\lambda$4295, which is estimated to be 28 per cent, has been subtracted. \item [$^g$] A blend of O~{\sc ii} M53c 4f~D[1]$^{\rm o}_{1/2}$\,--\,3d~$^4$P$_{1/2}$ $\lambda$4288.82 and O~{\sc ii} M53c 4f~D[1]$^{\rm o}_{3/2}$\,--\,3d~$^4$P$_{1/2}$ $\lambda$4288.82. \item [$^h$] Corrected for the contributions from O~{\sc ii} M67c 4f~F[2]$^{\rm o}_{3/2}$\,--\,3d~$^4$D$_{3/2}$ $\lambda$4283.73 (about 35 per cent) and O~{\sc ii} M67c 4f~F[2]$^{\rm o}_{5/2}$\,--\,3d~$^4$D$_{5/2}$ $\lambda$4283.25 (about 6 per cent). Neglecting O~{\sc ii} M78a 4f~F[4]$^{\rm o}_{7/2}$\,--\,3d~$^{2}$F$_{5/2}$ $\lambda$4282.02 (less than 2 per cent). Including Ne~{\sc ii} M57c 4f~1[3]$^{\rm o}_{7/2}$\,--\,3d~$^{4}$F$_{7/2}$ $\lambda$4283.73, whose contribution to the total intensity is unknown due to the lack of atomic data. \item [$^i$] Corrected for the contribution from O~{\sc ii} M86b 4f~D[2]$^{\rm o}_{3/2}$\,--\,3d~$^{2}$P$_{3/2}$ $\lambda$4466.59, which is about 20 per cent. \item [$^j$] Corrected for the contribution from O~{\sc ii} M92c 4f~F[2]$^{\rm o}_{5/2}$\,--\,3d~$^{2}$D$_{3/2}$ $\lambda$4610.20, which is about 20 per cent. 
\item [$^k$] Corrected for the contribution from N~{\sc ii} M5 3p~$^{3}$P$_{2}$\,--\,3s~$^{3}$P$^{\rm o}_{1}$ $\lambda$4601.48, which is about 25 per cent. \end{description} \end{table} \subsubsection{\label{orl_abundances:part5} Ne$^{2+}$/H$^+$ abundances from ORLs} Tables\,\ref{abundances:vi} and \ref{abundances:vii} present the recombination line Ne$^{2+}$/H$^+$ abundances derived from the 3\,--\,3 and 4f\,--\,3d transitions, respectively. For the 3d\,--\,3p and 3p\,--\,3s transitions, the effective recombination coefficients calculated under the {\it LS}\,coupling assumption by Kisielius et al. \cite{kisielius1998} are adopted. Case~A is assumed for the quartet transitions and Case~B for the doublets. The calculation of Kisielius et al. \cite{kisielius1998} is valid from 1000 to 20\,000~K, and four density cases, 10$^{2}$, 10$^{4}$, 10$^{5}$ and 10$^{6}$~cm$^{-3}$, were calculated. We assumed an electron density of 10$^4$~cm$^{-3}$ and a temperature of 1000~K in the abundance determinations. For the purpose of comparison, Tables\,\ref{abundances:vi} and \ref{abundances:vii} also present the recombination line Ne$^{2+}$/H$^{+}$ abundances derived by LLB01, who used the same CCD spectrum as analyzed in the current paper but assumed an electron temperature of 7100~K, as derived from the Balmer discontinuity. In general, the Ne$^{2+}$/H$^+$ abundance ratios derived from the 3d\,--\,3p and 3p\,--\,3s transitions in the current work are lower than those given by LLB01. This is mainly due to the different temperatures adopted. For the 4f\,--\,3d transitions, the abundances derived by us are systematically lower than those given by LLB01 by about 13 per cent, except for the strongest lines of this transition array, e.g. $\lambda$4391.99 (M55e 4f\,2[5]$^{\rm o}_{11/2}$\,--\,3d\,$^4$F$_{9/2}$) and $\lambda$4409.30 (M55e 4f\,2[5]$^{\rm o}_{9/2}$\,--\,3d\,$^4$F$_{7/2}$), which yield similar Ne$^{2+}$/H$^{+}$ abundance ratios in the two analyses.
The difference between the abundances given by the two analyses is also partly due to the different extinctions used: the logarithmic extinction at H$\beta$, $c$(H$\beta$), derived by LLB01 is 0.07, while ours is 0.174 (Paper~I). The Ne~{\sc ii} M21 3d\,$^2$D$_{5/2}$\,--\,3p\,$^2$D$^{\rm o}_{5/2}$ $\lambda$3416.91 line yields a relatively high abundance, probably due to blending with the Ne~{\sc ii} M19 3d\,$^4$F$_{7/2}$\,--\,3p\,$^2$D$^{\rm o}_{5/2}$ $\lambda$3417.69 line. The nearby O~{\sc iii} Bowen fluorescence line M15 3d\,$^3$P$^{\rm o}_{1}$\,--\,3p\,$^3$P$_{1}$ $\lambda$3415.26 also affects the measurement of the $\lambda$3416.91 line (Fig.\,\ref{3395-3475}). The Ne~{\sc ii} lines of the M9 3p$^{\prime}$\,$^{2}$F$^{\rm o}$\,--\,3s$^{\prime}$\,$^{2}$D multiplet, $\lambda\lambda$3568.50 and 3574.61, are detected in the spectrum of NGC\,7009 (Fig.\,\ref{3540-3595}). The Ne$^{2+}$/H$^{+}$ abundance derived from this multiplet is much higher than those yielded by other multiplets. Similar results were also found by LLB01 and in two other PNe, M\,1-42 and M\,2-36 (Liu et al. \citealt{liu2001b}). LLB01 pointed out that the high abundance yielded by M9 is possibly due to the underestimated effective recombination coefficients for this multiplet. The 3p\,$^{4}$D$^{\rm o}_{5/2}$\,--\,3s\,$^{4}$P$_{3/2}$ $\lambda$3355.02 line of multiplet M2 is blended with the He~{\sc i} M8 7p\,$^1$P$^{\rm o}_{1}$\,--\,2s\,$^1$S$_{0}$ $\lambda$3354.55 line, and is also partially blended with the [Cl~{\sc iii}] 3p$^{3}$\,$^{2}$P$^{\rm o}_{1/2}$\,--\,3p$^{3}$\,$^{4}$S$^{\rm o}_{3/2}$ $\lambda$3353.17 line, as shown in Fig.\,\ref{3345-3410}. The intensity of the [Cl~{\sc iii}] line was obtained from line fitting with two Gaussian profiles.
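The role of the different extinction corrections discussed above can be sketched with the standard logarithmic reddening law (an illustrative relation; the actual corrections are those of Paper~I):

```latex
\begin{equation}
  \frac{I_{\rm corr}(\lambda)}{I_{\rm corr}(\mathrm{H}\beta)} =
  \frac{I_{\rm obs}(\lambda)}{I_{\rm obs}(\mathrm{H}\beta)}
  \times 10^{\,c(\mathrm{H}\beta)\,f(\lambda)},
\end{equation}
```

where $f(\lambda)$ is the reddening function normalized so that $f(\mathrm{H}\beta) = 0$. Since $f(\lambda) > 0$ for the near-UV Ne~{\sc ii} lines, the two adopted values of $c$(H$\beta$), 0.174 versus 0.07, yield dereddened intensity ratios relative to H$\beta$ that differ by a factor of $10^{\,0.104\,f(\lambda)}$.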
The intensity contribution from the He~{\sc i} line was corrected for, as LLB01 did, using the observed intensity of the He~{\sc i} M6 5p\,$^1$P$^{\rm o}_{1}$\,--\,2s\,$^1$S$_{0}$ $\lambda$3613.64 line, assuming the line ratio $I$($\lambda$3354.55)/$I$($\lambda$3613.64) = 0.35 (under the Case~B assumption), as predicted by Brocklehurst \cite{b72}. Here an electron temperature of 5000~K, as derived from the He~{\sc i} line ratios (Paper~I), and a density of 10\,000~cm$^{-3}$ were assumed. The correction for the He~{\sc i} line amounts to 26 per cent, close to the result of LLB01 (30 per cent). We use the He~{\sc i} $\lambda$3613.64 line, instead of the He~{\sc i} M48 4d\,$^{1}$D$_{2}$\,--\,2p\,$^{1}$P$^{\rm o}_{1}$ $\lambda$4921.93 line, to correct for the intensity contribution from the He~{\sc i} $\lambda$3354.55 line because, given the small wavelength span between the $\lambda\lambda$3354.55 and 3613.64 lines, measurements of their intensity ratio are much less sensitive to any uncertainties in the reddening corrections and flux calibration. The Ne~{\sc ii} 4f\,--\,3d recombination lines in Table\,\ref{abundances:vii} are those for which preliminary effective recombination coefficients are available (P.~J. Storey, private communication). The Ne$^{2+}$/H$^{+}$ abundances derived from the 4f\,--\,3d transitions are systematically higher than those derived from the 3\,--\,3 transitions by about 50 per cent. This difference is mainly due to the inadequacy of the Ne~{\sc ii} effective recombination coefficients. The average Ne$^{2+}$/H$^+$ abundance from the 4f\,--\,3d transitions is 8.5$\times$10$^{-4}$, about 0.15 dex higher than the average value deduced from the individual lines of the 3\,--\,3 transition array.
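The 0.15 dex offset quoted above can be checked with a short calculation using the mean values from Tables\,\ref{abundances:vi} and \ref{abundances:vii} (a minimal numerical sketch, not part of the analysis pipeline):

```python
import math

# Mean Ne2+/H+ abundances, in units of 1e-4, averaged over the
# individual lines of the 3--3 and 4f--3d transition arrays.
mean_3_3 = 6.040
mean_4f_3d = 8.476

# Logarithmic (dex) offset between the two transition arrays.
offset_dex = math.log10(mean_4f_3d / mean_3_3)
print(f"offset = {offset_dex:.2f} dex")  # offset = 0.15 dex
```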
Here the two abnormally high abundances yielded by the Ne~{\sc ii} M52b $\lambda$4250.65 (4f\,2[3]$^{\rm o}_{5/2}$\,--\,3d\,$^{4}$D$_{3/2}$) and M61d $\lambda$4457.05 (4f\,2[2]$^{\rm o}_{5/2}$\,--\,3d\,$^{2}$D$_{3/2}$) lines are excluded from averaging. We adopt the abundance value 8.42$\times$10$^{-4}$, which is calculated by co-adding the intensities of the 4f\,--\,3d transitions, as the recombination line Ne$^{2+}$/H$^{+}$ abundance of NGC\,7009. The two Ne~{\sc ii} lines $\lambda\lambda$4250.65 and 4457.05 that yield abnormally high abundances are likewise excluded from this co-added calculation. \begin{table} \centering \caption{Recombination line Ne$^{2+}$/H$^+$ abundances derived from the 3\,--\,3 transitions. Intensities are normalized such that $I$(H$\beta$) = 100. The abundances of LLB01 are presented for the purpose of comparison.} \label{abundances:vi} \begin{tabular}{lclrr} \hline Line & Mult. & $I_{\rm obs}$ & \multicolumn{2}{c}{Ne$^{2+}$/H$^{+}$}\\ & & & \multicolumn{2}{c}{($\times$10$^{-4}$)}\\ (\AA) & & & Current & LLB01\\ \hline $\lambda$3694.21 & M1 & 0.254 & 7.932 & 8.97\\ $\lambda$3709.62 & M1 & 0.105 & 8.290 & 8.49\\ $\lambda$3777.14 & M1 & 0.048 & 3.859 & 3.54\\ \textbf{M1 3p~$^4$P$^{\rm o}$ -- 3s~$^4$P} & & 0.686 & \textbf{7.544} & \textbf{7.45}\\ \\ $\lambda$3334.84 & M2 & 0.414 & 5.847 & 5.14\\ $\lambda$3344.40 & M2 & 0.129 & 8.805 & \\ $\lambda$3355.02$^a$ & M2 & 0.195 & 5.278 & 5.89\\ \textbf{M2 3p~$^4$D$^{\rm o}$ -- 3s~$^4$P} & & 1.035 & \textbf{5.850} & \textbf{5.39}\\ \\ $\lambda$3713.08 & M5 & 0.297 & 6.141 & \\ \textbf{M5 3p~$^2$D$^{\rm o}$ -- 3s~$^2$P} & & 0.495 & \textbf{6.141} & \\ \\ $\lambda$3481.93 & M6 & 0.043 & 4.795 & 5.52\\ \textbf{M6 3p~$^2$S$^{\rm o}$ -- 3s~$^2$P} & & 0.066 & \textbf{4.795} & \textbf{5.52}\\ \\ $\lambda$3047.56 & M8 & 0.120 & 7.720 & \\ \textbf{M8 3d~$^4$D -- 3p~$^4$P$^{\rm o}$} & & 0.571 & \textbf{7.692} & \\ \\ $\lambda$3329.16 & M12 & 0.051 & & 7.67\\ $\lambda$3357.82 & M12 & 0.020 & & \\ $\lambda$3362.94 & M12 &
0.026 & & \\ $\lambda$3374.06 & M12 & 0.014 & & \\ $\lambda$3390.55 & M12 & 0.016 & & \\ \textbf{M12 3d~$^4$D -- 3p~$^4$D$^{\rm o}$} & & 0.149 & \textbf{8.051} & \textbf{7.67}\\ \\ $\lambda$3218.19 & M13 & 0.227 & 4.836 & 5.45\\ $\lambda$3244.09 & M13 & 0.072 & 2.253 & 2.02\\ \textbf{M13 3d~$^4$F -- 3p~$^4$D$^{\rm o}$} & & 0.636 & \textbf{4.840} & \textbf{3.68}\\ \\ $\lambda$3367.22 & M20 & 0.102 & 2.820 & 3.38\\ \textbf{M20 3d~$^2$F -- 3p~$^2$D$^{\rm o}$} & & 0.179 & \textbf{2.832} & \textbf{3.38}\\ \\ $\lambda$3416.91$^b$ & M21 & 0.075 & 11.818 & 13.00\\ $\lambda$3453.07 & M21 & 0.018 & 4.448 & 4.25\\ \textbf{M21 3d~$^2$D -- 3p~$^2$D$^{\rm o}$} & & 0.050 & \textbf{3.529} & \textbf{4.25}\\ \\ $\lambda$3542.85 & M34 & 0.030 & & 2.89\\ $\lambda$3565.82 & M34 & 0.024 & & \\ $\lambda$3594.16 & M34 & 0.011 & & \\ \textbf{M34 3d~$^4$P -- 3p~$^4$S$^{\rm o}$} & & 0.065 & \textbf{3.006} & \textbf{3.05}\\ \\ $\lambda$3568.50 & M9 & 0.168 & & \\ $\lambda$3574.61$^c$ & M9 & 0.053 & & \\ \textbf{M9 3p$^{\prime}$~$^{2}$F$^{\rm o}$ -- 3s$^{\prime}$~$^{2}$D} & & 0.221 & \textbf{36.125} & \\ \\ Average & & & \textbf{6.040}\\ \hline \end{tabular} \begin{description} \item [$^a$] Corrected for the contribution from the He~{\sc i} M8 7p\,$^1$P$^{\rm o}_{1}$\,--\,2s\,$^1$S$_{0}$ $\lambda$3354.55 line. \item [$^b$] Probably overestimated due to Ne~{\sc ii} M19 3d\,$^4$F$_{7/2}$\,--\,3p\,$^2$D$^{\rm o}_{5/2}$ $\lambda$3417.69. Excluded from calculating the average Ne$^{2+}$/H$^+$ abundance ratio. \item [$^c$] Including the Ne~{\sc ii} M9 3p$^{\prime}$\,$^{2}$F$^{\rm o}_{5/2}$\,--\,3s$^{\prime}$\,$^{2}$D$_{5/2}$ $\lambda$3574.18 line. \end{description} \end{table} \begin{table} \centering \caption{Recombination line Ne$^{2+}$/H$^+$ abundances from the 4f\,--\,3d transitions. Intensities are normalized such that $I$(H$\beta$) = 100. The abundances of LLB01 are presented for the purpose of comparison.} \label{abundances:vii} \begin{tabular}{lclrr} \hline Line & Mult.
& $I_{\rm obs}$ & \multicolumn{2}{c}{Ne$^{2+}$/H$^{+}$}\\ & & & \multicolumn{2}{c}{($\times$10$^{-4}$)}\\ (\AA) & & & Current & LLB01\\ \hline $\lambda$4391.99$^a$ & M55e & 0.072 & 7.451 & 7.53\\ $\lambda$4409.30 & M55e & 0.058 & 9.039 & 9.80\\ $\lambda$4219.75$^b$ & M52a & 0.049 & 9.135 & 12.60\\ $\lambda$4233.85 & M52a & 0.011 & 8.230 & 11.40\\ $\lambda$4231.64 & M52b & 0.009 & 7.084 & 19.20\\ $\lambda$4250.65$^c$ & M52b & 0.016 & 18.853 & 19.90\\ $\lambda$4397.99 & M57b & 0.024 & 7.174 & 6.56\\ $\lambda$4379.55$^d$ & M60b & 0.050 & 8.367 & \\ $\lambda$4428.64$^e$ & M60c & 0.037 & 8.775 & 11.50\\ $\lambda$4430.94$^f$ & M61a & 0.025 & 9.158 & 12.10\\ $\lambda$4457.05$^{c,g}$ & M61d & 0.026 & 27.305 & 26.30\\ $\lambda$4413.22$^h$ & M65 & 0.021 & 9.099 & 12.90\\ $\lambda$4421.39 & M66c & 0.008 & 9.723 & \\ \\ Sum & & 0.364$^i$ & \textbf{8.425} & \textbf{9.93}\\ Average & & & \textbf{8.476} & \textbf{9.77}\\ \hline \end{tabular} \begin{description} \item [$^a$] Neglecting the contribution from Ne~{\sc ii} M55e 4f\,2[5]$^{\rm o}_{9/2}$\,--\,3d\,$^4$F$_{9/2}$ $\lambda$4392.00. \item [$^b$] Neglecting the contribution from Ne~{\sc ii} M52a 4f\,2[4]$^{\rm o}_{7/2}$\,--\,3d\,$^4$D$_{7/2}$ $\lambda$4219.37. \item [$^c$] Not included in calculating the total intensity or the average abundance value. \item [$^d$] Neglecting the contribution from Ne~{\sc ii} M60b 4f\,1[4]$^{\rm o}_{7/2}$\,--\,3d\,$^2$F$_{7/2}$ $\lambda$4379.40. \item [$^e$] Neglecting the contribution from Ne~{\sc ii} M60c 4f\,1[3]$^{\rm o}_{5/2}$\,--\,3d\,$^2$F$_{5/2}$ $\lambda$4428.52. \item [$^f$] The contribution from Ne~{\sc ii} M57a 4f\,1[2]$^{\rm o}_{5/2}$\,--\,3d\,$^4$F$_{3/2}$ $\lambda$4430.90, which is about 30 per cent, has been subtracted. The contribution from Ne~{\sc ii} M57a 4f\,1[2]$^{\rm o}_{3/2}$\,--\,3d\,$^4$F$_{3/2}$ $\lambda$4431.11 is negligible. 
\item [$^g$] Including the contribution from the three Ne~{\sc ii} lines, M66c 4f\,1[3]$^{\rm o}_{5/2}$\,--\,3d\,$^4$P$_{5/2}$ $\lambda$4457.24, M61d 4f\,2[2]$^{\rm o}_{3/2}$\,--\,3d\,$^2$D$_{3/2}$ $\lambda$4457.24 and M66c 4f\,1[3]$^{\rm o}_{7/2}$\,--\,3d\,$^4$P$_{5/2}$ $\lambda$4457.36. \item [$^h$] Including the contribution from Ne~{\sc ii} M57c 4f\,1[3]$^{\rm o}_{5/2}$\,--\,3d\,$^4$F$_{3/2}$ $\lambda$4413.11. Neglecting Ne~{\sc ii} M65 4f\,0[3]$^{\rm o}_{5/2}$\,--\,3d\,$^4$P$_{5/2}$ $\lambda$4413.11. \item [$^i$] Excluding Ne~{\sc ii} M52b 4f\,2[3]$^{\rm o}_{5/2}$\,--\,3d\,$^{4}$D$_{3/2}$ $\lambda$4250.65 and Ne~{\sc ii} M61d 4f\,2[2]$^{\rm o}_{5/2}$\,--\,3d\,$^{2}$D$_{3/2}$ $\lambda$4457.05. \end{description} \end{table} \subsubsection{\label{orl_abundances:part7} ORL abundances of other ions} Table\,\ref{abundances:viii} presents the recombination line C$^{3+}$/H$^+$, C$^{4+}$/H$^+$, N$^{3+}$/H$^+$, O$^{3+}$/H$^+$, and O$^{4+}$/H$^+$ abundances. The C$^{3+}$/H$^+$ abundance ratios are derived from the M1 $\lambda$4650, M16 $\lambda$4069 and M18 $\lambda$4187 multiplets. The abundances from the three multiplets agree with each other, and the adopted C$^{3+}$/H$^+$ ratio in NGC\,7009 is the average of the three. The C$^{4+}$/H$^+$ abundance ratio 2.20$\times$10$^{-5}$ is derived from the C~{\sc iv} M8 $\lambda$4658 line, which is slightly contaminated by [Fe~{\sc iii}] $\lambda$4658. This abundance agrees with those given by LSBC: 0.239$\times$10$^{-4}$ (PA = 45$^{\rm o}$) and 0.182$\times$10$^{-4}$ (PA = 0$^{\rm o}$). The N$^{3+}$/H$^+$ abundance ratio is derived from N~{\sc iii} M18 $\lambda$4379. The N~{\sc iii} M17 lines, $\lambda\lambda$3998.63 and 4003.58,72, are observed, but only the effective dielectronic recombination coefficients for this multiplet are available (Nussbaumer \& Storey \citealt{ns1984}).
If we adopt the dielectronic data, the derived N$^{3+}$/H$^+$ abundance from the M17 multiplet is more than one order of magnitude higher than that from the N~{\sc iii} M18 $\lambda$4379 line (Table\,\ref{abundances:viii}). This indicates that the excitation of the N~{\sc iii} M17 lines is probably dominated by radiative recombination. We adopt the N$^{3+}$/H$^+$ ratio (1.31$\times$10$^{-4}$) derived from the N~{\sc iii} M18 $\lambda$4379 line as the abundance in NGC\,7009. This N$^{3+}$/H$^+$ abundance agrees with those given by LSBC, who also used the $\lambda$4379 line: 1.34$\times$10$^{-4}$ (PA = 45$^{\rm o}$) and 1.71$\times$10$^{-4}$ (PA = 0$^{\rm o}$). We observed the N~{\sc iii} M1 and M2 lines, which are excited by the secondary Bowen fluorescence mechanism. Also detected in the spectra of NGC\,7009 are the N~{\sc iii} transitions with the $^3$P$^{\rm o}$ parentage: M3 3p$^{\prime}$~$^4$D -- 3s$^{\prime}$~$^4$P$^{\rm o}$, M6 3p$^{\prime}$~$^2$D -- 3s$^{\prime}$~$^2$P$^{\rm o}$ and M9 3d$^{\prime}$~$^4$F$^{\rm o}$ -- 3p$^{\prime}$~$^4$D. Analyses of these lines are presented in Section\,\ref{niii_orls}. Some strong O~{\sc iii} lines from the 3\,--\,3 arrays are excited by the fluorescence mechanism or by the radiative charge-transfer reaction of O$^{3+}$ and H$^0$ (Liu \& Danziger \citealt{ld1993a}; Liu, Danziger \& Murdin \citealt{ldm93}). The O~{\sc iii} M14 3d~$^3$D$^{\rm o}$ -- 3p~$^3$P $\lambda$3713 lines cannot be excited by fluorescence or charge-transfer reactions, but are likely to be excited only by recombination, and they are therefore a useful abundance indicator for O$^{3+}$/H$^+$. Unfortunately, no recombination coefficients are available for this multiplet.
The O~{\sc iii} M8 3d~$^3$F$^{\rm o}$\,--\,3p~$^3$D $\lambda$3265 lines originate from radiative and dielectronic recombination (Liu \& Danziger \cite{ld1993a}), and the effective dielectronic and radiative recombination coefficients for this multiplet are available from Nussbaumer \& Storey \cite{ns1984} and P\'{e}quignot, Petitjean \& Boisson \cite{ppb1991}, respectively. Thus the O$^{3+}$/H$^+$ abundances are derived from two of the O~{\sc iii} M8 lines, $\lambda\lambda$3260.85 and 3265.32, which are observed in NGC\,7009 (Fig.\,\ref{3250-3305}). Another M8 line, $\lambda$3267.20, is blended with $\lambda$3265.32, which is expected to be the strongest line in M8; the contribution of $\lambda$3267.20 is probably negligible. The O$^{3+}$/H$^+$ abundance ratio derived from O~{\sc iii} M8 is about 6.6$\times$10$^{-5}$ (Table\,\ref{abundances:viii}). A very faint O~{\sc iv} line, M2 3d~$^2$D$_{5/2}$ -- 3p~$^2$P$^{\rm o}_{3/2}$ $\lambda$3411.69, is also observed, partially blended with O~{\sc iii} M15 $\lambda$3415.26 (Fig.\,\ref{3395-3475}). Another M2 line, $\lambda$3413.64, is blended in between. The remaining line, $\lambda$3403.52, is blended with O~{\sc iii} M15 $\lambda$3405.71 (Fig.\,\ref{3395-3475}), and its accurate measurement is difficult. Multi-Gaussian profile fits give an intensity of 0.0514 for the $\lambda$3411.69 line and 0.0567 for the $\lambda$3413.64 line, both with uncertainties of more than 20 per cent (Table\,$7$ in Paper~I). The intensity ratio of the two lines differs from the pure {\it LS}\,coupling ratio of 9\,:\,1. We use $\lambda$3411.69 to derive the O$^{4+}$/H$^+$ abundance ratio because its measurement is probably more reliable. Here the effective radiative and dielectronic recombination coefficients from P\'{e}quignot, Petitjean \& Boisson \cite{ppb1991} and Nussbaumer \& Storey \cite{ns1984}, respectively, are used.
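The conversion used throughout this section, from a measured ORL intensity to an ionic abundance, is the standard emissivity ratio with respect to H$\beta$, $N({\rm X}^{i+})/N({\rm H}^+) = (I_\lambda/I_{{\rm H}\beta})\,(\lambda/\lambda_{{\rm H}\beta})\,(\alpha_{\rm eff}({\rm H}\beta)/\alpha_{\rm eff}(\lambda))$. A minimal sketch in Python; the line intensity and its $\alpha_{\rm eff}$ below are purely illustrative, while $\alpha_{\rm eff}({\rm H}\beta) \simeq 3.03\times10^{-14}$~cm$^3$\,s$^{-1}$ is the approximate Case~B value at 10$^4$~K:

```python
def orl_abundance(i_line, wav_line, alpha_line,
                  i_hbeta=100.0, wav_hbeta=4861.33, alpha_hbeta=3.03e-14):
    """N(X^i+)/N(H+) from a recombination line.

    i_line, i_hbeta : intensities on the same scale (I(Hbeta) = 100 here);
    wavelengths in Angstrom; effective recombination coefficients alpha
    in cm^3 s^-1.  alpha_hbeta is the approximate Case B value at 10^4 K.
    """
    return (i_line / i_hbeta) * (wav_line / wav_hbeta) * (alpha_hbeta / alpha_line)

# Illustrative only: a multiplet with co-added intensity 0.3 (on the
# I(Hbeta) = 100 scale) near 4650 A and an assumed (hypothetical)
# alpha_eff of 1.0e-13 cm^3 s^-1.
example = orl_abundance(0.3, 4650.0, 1.0e-13)
```

For real abundances the $\alpha_{\rm eff}$ values are taken from the references listed in Table\,\ref{references:orl}.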
\begin{table} \centering \caption{Recombination line C$^{3+}$/H$^+$, C$^{4+}$/H$^+$, N$^{3+}$/H$^+$, O$^{3+}$/H$^+$, and O$^{4+}$/H$^+$ abundances. Intensities are normalized such that $I$(H$\beta$) = 100.} \label{abundances:viii} \begin{tabular}{lclr} \hline Line & Mult. & $I_{\rm obs}$ & C$^{3+}$/H$^+$\\ (\AA) & & & ($\times$10$^{-4}$)\\ \hline $\lambda$4647.42 & M1 & 0.170 & \\ $\lambda$4650.25 & M1 & 0.100 & \\ $\lambda$4651.47 & M1 & 0.034 & \\ \textbf{M1 3p~$^3$P$^{\rm o}$ -- 3s~$^3$S} & & 0.304 & \textbf{1.659}\\ $\lambda$4067.94 & M16 & 0.071 & \\ $\lambda$4068.91 & M16 & 0.093 & \\ $\lambda$4070.26 & M16 & 0.120 & \\ \textbf{M16 5g~$^3$G -- 4f~$^3$F$^{\rm o}$} & & 0.284 & \textbf{1.472}\\ \textbf{M18 5g~$^1$G -- 4f~$^1$F$^{\rm o}$} & $\lambda$4186.90 & 0.088 & \textbf{1.309}\\ \\ \hline Wavelength & Mult. & $I_{\rm obs}$ & C$^{4+}$/H$^+$\\ (\AA) & & & ($\times$10$^{-4}$)\\ \hline \textbf{M8 6h~$^2$H$^{\rm o}$ -- 5g~$^2$G} & $\lambda$4658.30$^a$ & 0.147 & \textbf{0.220}\\ \\ \hline Wavelength & Mult. & $I_{\rm obs}$ & N$^{3+}$/H$^+$\\ (\AA) & & & ($\times$10$^{-4}$)\\ \hline $\lambda$3998.63 & M17 & 0.025 & \\ $\lambda$4003.58$^b$ & M17 & 0.040 & \\ \textbf{M17 5f~$^2$F$^{\rm o}$ -- 4d~$^2$D} & & 0.065 & \textbf{12.390}\\ \textbf{M18 5g~$^2$G -- 4f~$^2$F$^{\rm o}$} & $\lambda$4379.11 & 0.310 & \textbf{1.313}\\ \\ \hline Wavelength & Mult. & $I_{\rm obs}$ & O$^{3+}$/H$^+$\\ (\AA) & & & ($\times$10$^{-4}$)\\ \hline $\lambda$3260.85 & M8 & 0.179 & \\ $\lambda$3265.32 & M8 & 0.139 & \\ \textbf{M8 3d~$^3$F$^{\rm o}$ -- 3p~$^3$D} & & 0.324 & \textbf{0.659}\\ \\ \hline Wavelength & Mult. & $I_{\rm obs}$ & O$^{4+}$/H$^+$\\ (\AA) & & & ($\times$10$^{-4}$)\\ \hline $\lambda$3411.69$^c$ & M2 & 0.057 & \\ \textbf{M2 3d~$^2$D -- 3p~$^2$P$^{\rm o}$} & & 0.114 & \textbf{0.158}\\ \hline \end{tabular} \begin{description} \item [$^a$] Including [Fe~{\sc iii}] $\lambda$4658.05. 
\item [$^b$] Including N~{\sc iii} M17 $\lambda$4003.72 (5f\,$^2$F$^{\rm o}_{5/2}$\,--\,4d\,$^2$D$_{5/2}$). \item [$^c$] Could be of large uncertainty due to line blending. \end{description} \end{table} The adopted C, N, O and Ne ionic abundances from optical recombination lines are summarized in Table\,\ref{orlabundances:adopted}. They are mostly averaged from the abundance ratios that are calculated by co-adding the line intensities of individual multiplets (or transition arrays). Atomic data references used for the ORL analysis are listed in Table\,\ref{references:orl}. \begin{table} \centering \caption{Adopted recombination line abundances for the C, N, O, and Ne ions.} \label{orlabundances:adopted} \begin{tabular}{lrr} \hline Ion & \multicolumn{2}{c}{Abundances}\\ & ($\times$10$^{-4}$) & $\log$[X$^{i+}$/H$^+$]+12\\ \hline C$^{2+}$/H$^+$ & 6.865 & 8.837\\ C$^{3+}$/H$^+$ & 1.480 & 8.170\\ C$^{4+}$/H$^+$ & 0.220 & 7.342\\ N$^{2+}$/H$^+$ & 3.450 & 8.538\\ N$^{3+}$/H$^+$ & 1.313 & 8.118\\ O$^{2+}$/H$^+$ & 14.176 & 9.152\\ O$^{3+}$/H$^+$ & 0.659 & 7.819\\ O$^{4+}$/H$^+$ & 0.158 & 7.198\\ Ne$^{2+}$/H$^+$ & 8.425 & 8.926\\ \hline \end{tabular} \end{table} Several permitted lines emitted by silicon ions were observed or deblended (Paper~I). As the first and second ionization potentials of atomic silicon are 8.15 and 16.35~eV, respectively, we expect the main ionization stages of silicon to be the doubly and triply ionized ones; the amount of Si$^+$ should be negligible, and Si$^{4+}$ should exist but at a much lower abundance than Si$^{2+}$ and Si$^{3+}$. The Si$^{3+}$/H$^+$ abundance ratios derived from the Si~{\sc iii} M2 and M5 lines are presented in Table\,\ref{abundances:ix}. The Si~{\sc iv} M1 $\lambda$4116.10 line is also observed (Fig.\,\ref{4110-4175}), which means that $\lambda$4088.86 of the same multiplet should also exist.
Since only the effective dielectronic recombination coefficients for a few selected Si~{\sc iii} transitions are available from Nussbaumer \& Storey \cite{ns1986}, we only present the Si$^{3+}$/H$^+$ abundance ratios. The averaged Si$^{3+}$/H$^+$ ratio is 6.04$\times$10$^{-6}$. The Mg~{\sc ii} M4 $\lambda$4481 line is observed (Fig.\,\ref{4444-4504}), and the Mg$^{2+}$/H$^+$ abundance derived is presented in Table\,\ref{abundances:ix}. Since the ionization potentials of neutral Mg$^0$ and Mg$^{2+}$ are 7.65 and 80.14~eV, respectively, we assume that magnesium in NGC\,7009 is mainly doubly ionized. Unfortunately no effective recombination coefficients for Mg~{\sc ii} lines are available. Given the similarity between the atomic structure of Mg~{\sc ii} and C~{\sc ii}, we assumed that the effective recombination coefficient of the Mg~{\sc ii} M4 4f~$^2$F$^{\rm o}$ -- 3d~$^2$D $\lambda$4481 line is equal to, or at least is close to, that of the C~{\sc ii} M6 4f~$^2$F$^{\rm o}$ -- 3d~$^2$D $\lambda$4267 transition. The effective recombination coefficient (in Case~B) for the C~{\sc ii} $\lambda$4267 line is adopted from Bastin \cite{bastin2006}, with the assumption of $T_\mathrm{e}$ = 10\,000~K and $N_\mathrm{e}$ = 10\,000~cm$^{-3}$. The calculation of Davey, Storey \& Kisielius \cite{davey2000} differs from that of Bastin \cite{bastin2006} by 1.5 per cent for the C~{\sc ii} $\lambda$4267 line. The Mg$^{2+}$/H$^+$ abundance derived from the $\lambda$4481 line is 3.18$\times$10$^{-5}$. \begin{table} \centering \caption{Recombination line Si$^{3+}$/H$^+$ and Mg$^{2+}$/H$^+$ abundances. Intensities are normalized such that $I$(H$\beta$) = 100.} \label{abundances:ix} \begin{tabular}{lcll} \hline Line & Mult. 
& $I_{\rm obs}$ & Si$^{3+}$/H$^+$\\ (\AA) & & & ($\times$10$^{-5}$)\\ \hline $\lambda$4552.62 & M2 & 0.0175 & \\ $\lambda$4567.82 & M2 & 0.0114 & \\ $\lambda$4574.76 & M2 & 0.0041 & \\ \textbf{M2 4p~$^3$P$^{\rm o}$ -- 4s~$^3$S} & & 0.0325 & \textbf{0.770}\\ $\lambda$3806.54 & M5 & 0.022 & \\ \textbf{M5 4d~$^3$D -- 4p~$^3$P$^{\rm o}$} & & 0.0396 & \textbf{0.438}\\ \\ \hline Wavelength & Mult. & $I_{\rm obs}$ & Mg$^{2+}$/H$^+$\\ (\AA) & & & ($\times$10$^{-5}$)\\ \hline $\lambda$4481.20$^a$ & M4 & 0.0303 & \\ \textbf{M4 4f~$^2$F$^{\rm o}$ -- 3d~$^2$D} & & 0.0309 & \textbf{3.179}\\ \hline \end{tabular} \begin{description} \item [$^a$] We assume that the Mg~{\sc ii} 4f\,--\,3d $\lambda$4481 line has an effective recombination coefficient equal to that of the C~{\sc ii} 4f\,--\,3d $\lambda$4267 line, given the similarity between the atomic structure of Mg~{\sc ii} and C~{\sc ii}. \end{description} \end{table} \begin{table*} \begin{minipage}{90mm} \caption{References for the ORL atomic data.} \label{references:orl} \centering \begin{tabular}{lll} \hline ion & \multicolumn{2}{c}{ORLs}\\ & Effec.recomb. 
coefficients & Comments\\ \hline H~{\sc i} & Storey \& Hummer \cite{sh1995} & Case B\\ He~{\sc i} & Benjamin, Skillman \& Smits \cite{bss99} & Case B; singlets\\ & Brocklehurst \cite{b72} & Case A; triplets\\ He~{\sc ii} & Storey \& Hummer \cite{sh1995} & Case B\\ C~{\sc i} & Escalante \& Victor \cite{ev1990} & Case A; singlets\\ & Escalante \& Victor \cite{ev1990} & Case B; triplets\\ C~{\sc ii} & Davey, Storey \& Kisielius \cite{davey2000} & Case B\\ C~{\sc iii} & P\'{e}quignot, Petitjean \& Boisson \cite{ppb1991} & Case A\\ & Nussbaumer \& Storey \cite{ns1984} & Dielectronic recombination\\ C~{\sc iv} & P\'{e}quignot, Petitjean \& Boisson \cite{ppb1991} & Case A\\ N~{\sc i} & P\'{e}quignot, Petitjean \& Boisson \cite{ppb1991} & Case A; doublets\\ & P\'{e}quignot, Petitjean \& Boisson \cite{ppb1991} & Case B; quartets\\ N~{\sc ii} & FSL11 & Case B\\ N~{\sc iii} & P\'{e}quignot, Petitjean \& Boisson \cite{ppb1991} & Case A\\ & Nussbaumer \& Storey \cite{ns1984} & Dielectronic recombination\\ O~{\sc i} & P\'{e}quignot, Petitjean \& Boisson \cite{ppb1991} & Case A\\ O~{\sc ii} & P.~J. Storey (PJS, private communication) & Case B\\ O~{\sc iii} & P\'{e}quignot, Petitjean \& Boisson \cite{ppb1991} & Case A\\ O~{\sc iv} & P\'{e}quignot, Petitjean \& Boisson \cite{ppb1991} & Case A\\ & Nussbaumer \& Storey \cite{ns1984} & Dielectronic recombination\\ Ne~{\sc ii} & Kisielius et al. 
\cite{kisielius1998} & Case B; doublets\\ & Storey (unpublished) & Case A; quartets\\ & Nussbaumer \& Storey \cite{ns1987} & Dielectronic recombination\\ Mg~{\sc ii} & Davey, Storey \& Kisielius \cite{davey2000}$^a$ & Case B\\ Si~{\sc ii} & Nussbaumer \& Storey \cite{ns1986} & Dielectronic recombination\\ Si~{\sc iii} & Nussbaumer \& Storey \cite{ns1986} & Dielectronic recombination\\ \hline \end{tabular} \begin{description} \item [$^a$] Given the similarity between the atomic structure of Mg~{\sc ii} and C~{\sc ii}, we have assumed that the Mg~{\sc ii} M4 4f~$^2$F$^{\rm o}$ -- 3d~$^2$D $\lambda$4481 line has an effective recombination coefficient equal to that of the C~{\sc ii} M6 4f~$^2$F$^{\rm o}$ -- 3d~$^2$D $\lambda$4267 line (Zhang et al. \citealt{zhang05b}). \end{description} \end{minipage} \end{table*} \subsection{\label{cel_abundances} Ionic abundances from CELs} \subsubsection{\label{cel_abundances:part1} Ionic abundances from the optical CELs} The ionic abundances derived from the optical CELs detected in the spectrum of NGC\,7009 are presented in Table\,\ref{abundances:cels}. An electron temperature of 10\,000~K, which is an average from different CEL diagnostic ratios (Paper~I), and a constant density of 4300~cm$^{-3}$, an average derived from a variety of optical CEL ratios, are assumed throughout the abundance determinations. In addition to the ionic abundances of N, O and Ne, abundances are also derived for ions of C, F, S, Cl, Ar and K from the detected CELs. The atomic data references used for the CEL analysis are listed in Table\,\ref{references:cel}. \begin{table*} \begin{minipage}{115mm} \caption{Ionic abundances derived from optical CELs.
Line intensities are normalized such that $I$(H$\beta$) = 100.} \label{abundances:cels} \centering \begin{tabular}{llrccc} \hline \multicolumn{2}{c}{Lines} & $I_{\rm obs}$ & X$^{i+}$/H$^+$ & \multicolumn{2}{c}{Abundances}\\ Ions & Lines (\AA) & & & X$^{i+}$/H$^+$ & $\log$[X$^{i+}$/H$^+$]+12\\ \hline $[$C~{\sc i}$]$ & $\lambda$$\lambda$9824.13,9850.26 & 0.035 & C$^{0}$/H$^+$ & 8.483$\times$10$^{-9}$ & 3.929\\ $[$N~{\sc i}$]$ & $\lambda$$\lambda$5197.90,5200.26 & 0.091 & N$^{0}$/H$^+$ & 8.435$\times$10$^{-8}$ & 4.926\\ $[$O~{\sc i}$]$ & $\lambda$$\lambda$6300.30,6363.78 & 0.740 & O$^{0}$/H$^+$ & 8.395$\times$10$^{-7}$ & 5.924\\ $[$N~{\sc ii}$]$ & $\lambda$5754.64 & 0.390 & N$^+$/H$^+$ & 2.876$\times$10$^{-6}$ & 6.459\\ $[$N~{\sc ii}$]$ & $\lambda$$\lambda$6548.04,6583.46 & 20.600 & N$^+$/H$^+$ & 2.725$\times$10$^{-6}$ & 6.435\\ $[$O~{\sc ii}$]$ & $\lambda$$\lambda$3726.03,3728.81 & 19.971 & O$^+$/H$^+$ & 8.779$\times$10$^{-6}$ & 6.943\\ $[$O~{\sc ii}$]$ & $\lambda$$\lambda$7319.99,7330.73 & 2.300 & O$^+$/H$^+$ & 1.996$\times$10$^{-5}$ & 7.300\\ $[$O~{\sc iii}$]$ & $\lambda$4363.21 & 7.300 & O$^{2+}$/H$^+$ & 2.435$\times$10$^{-4}$ & 8.387\\ $[$O~{\sc iii}$]$ & $\lambda$4931.23 & 0.120 & O$^{2+}$/H$^+$ & 2.455$\times$10$^{-4}$ & 8.390\\ $[$O~{\sc iii}$]$ & $\lambda$$\lambda$4958.91,5006.84 & 1550.753 & O$^{2+}$/H$^+$ & 3.231$\times$10$^{-4}$ & 8.509\\ $[$F~{\sc ii}$]$ & $\lambda$4789.45 & 0.017 & F$^+$/H$^+$ & 2.579$\times$10$^{-8}$ & 4.411\\ $[$F~{\sc iv}$]$ & $\lambda$4059.90 & 0.012 & F$^{3+}$/H$^+$ & 4.387$\times$10$^{-9}$ & 3.642\\ $[$Ne~{\sc iii}$]$ & $\lambda$3342.50 & 0.755 & Ne$^{2+}$/H$^+$ & 3.079$\times$10$^{-4}$ & 8.488\\ $[$Ne~{\sc iii}$]$ & $\lambda$3868.76 & 118.837 & Ne$^{2+}$/H$^+$ & 1.296$\times$10$^{-4}$ & 7.928\\ $[$Ne~{\sc iii}$]$ & $\lambda$4012.01 & 0.014 & Ne$^{2+}$/H$^+$ & 2.161$\times$10$^{-4}$ & 8.335\\ $[$Ne~{\sc iv}$]$ & $\lambda$$\lambda$4724.17,4725.67 & 0.042 & Ne$^{3+}$/H$^+$ & 1.501$\times$10$^{-5}$ & 7.176\\ $[$Ne~{\sc iv}$]$
& $\lambda$$\lambda$4714.17,4715.66 & 0.067 & Ne$^{3+}$/H$^+$ & 3.845$\times$10$^{-5}$ & 7.585\\ $[$S~{\sc ii}$]$ & $\lambda$$\lambda$4068.60,4076.35 & 0.960 & S$^+$/H$^+$ & 1.085$\times$10$^{-7}$ & 5.035\\ $[$S~{\sc ii}$]$ & $\lambda$$\lambda$6716.44,6730.82 & 3.700 & S$^+$/H$^+$ & 1.192$\times$10$^{-7}$ & 5.076\\ $[$S~{\sc iii}$]$ & $\lambda$3721.69 & 1.045 & S$^{2+}$/H$^+$ & 2.781$\times$10$^{-6}$ & 6.444\\ $[$S~{\sc iii}$]$ & $\lambda$6312.10 & 1.400 & S$^{2+}$/H$^+$ & 2.265$\times$10$^{-6}$ & 6.355\\ $[$S~{\sc iii}$]$ & $\lambda$$\lambda$9068.60,9530.60 & 64.000 & S$^{2+}$/H$^+$ & 1.965$\times$10$^{-6}$ & 6.293\\ $[$Cl~{\sc ii}$]$ & $\lambda$6161.84 & 0.006 & Cl$^+$/H$^+$ & 3.434$\times$10$^{-8}$ & 4.536\\ $[$Cl~{\sc ii}$]$ & $\lambda$8578.69,9123.60 & 0.075 & Cl$^+$/H$^+$ & 4.994$\times$10$^{-9}$ & 3.698\\ $[$Cl~{\sc iii}$]$ & $\lambda$3353.17 & 0.076 & Cl$^{2+}$/H$^+$ & 1.388$\times$10$^{-7}$ & 5.142\\ $[$Cl~{\sc iii}$]$ & $\lambda$$\lambda$5517.72,5537.89 & 1.000 & Cl$^{2+}$/H$^+$ & 5.513$\times$10$^{-8}$ & 4.741\\ $[$Cl~{\sc iii}$]$ & $\lambda$8480.85 & 0.020 & Cl$^{2+}$/H$^+$ & 8.319$\times$10$^{-8}$ & 4.920\\ $[$Cl~{\sc iv}$]$ & $\lambda$5323.28 & 0.012 & Cl$^{3+}$/H$^+$ & 2.525$\times$10$^{-8}$ & 4.402\\ $[$Cl~{\sc iv}$]$ & $\lambda$$\lambda$7530.80,8045.63 & 0.990 & Cl$^{3+}$/H$^+$ & 5.475$\times$10$^{-8}$ & 4.738\\ $[$Ar~{\sc iii}$]$ & $\lambda$3109.17 & 0.175 & Ar$^{2+}$/H$^+$ & 5.956$\times$10$^{-7}$ & 5.775\\ $[$Ar~{\sc iii}$]$ & $\lambda$5191.82 & 0.100 & Ar$^{2+}$/H$^+$ & 8.584$\times$10$^{-7}$ & 5.934\\ $[$Ar~{\sc iii}$]$ & $\lambda$$\lambda$7135.80,7751.10 & 18.300 & Ar$^{2+}$/H$^+$ & 1.027$\times$10$^{-6}$ & 6.012\\ $[$Ar~{\sc iv}$]$ & $\lambda$$\lambda$4711.37,4740.17 & 7.600 & Ar$^{3+}$/H$^+$ & 5.592$\times$10$^{-7}$ & 5.748\\ $[$Ar~{\sc iv}$]$ & $\lambda$$\lambda$7237.40,7262.76 & 0.372 & Ar$^{3+}$/H$^+$ & 2.713$\times$10$^{-6}$ & 6.433\\ $[$Ar~{\sc v}$]$ & $\lambda$$\lambda$6435.10,7005.67 & 0.065 & Ar$^{4+}$/H$^+$ & 6.654$\times$10$^{-9}$ 
& 3.823\\ $[$K~{\sc iv}$]$ & $\lambda$$\lambda$6101.83,6795.10 & 0.196 & K$^{3+}$/H$^+$ & 1.146$\times$10$^{-8}$ & 4.059\\ \hline \end{tabular} \end{minipage} \end{table*} \begin{table*} \begin{minipage}{90mm} \caption{References for the CEL atomic data.} \label{references:cel} \centering \begin{tabular}{lll} \hline ion & \multicolumn{2}{c}{CELs}\\ & Transition probabilities & Collision strengths\\ \hline C~{\sc ii} & Nussbaumer \& Storey \cite{ns1981a} & Blum \& Pradhan \cite{bp1992}\\ C~{\sc iii} & Keenan et al. \cite{keenan1992} & Keenan et al. \cite{keenan1992}\\ & Fleming et al. \cite{fleming1996} & \\ C~{\sc iv} & Wiese et al. \cite{wiese1966} & Gau \& Henry \cite{gh1977}\\ N~{\sc i} & Zeippen \cite{zeippen1982} & Berrington \& Burke \cite{bb1981}\\ N~{\sc ii} & Nussbaumer \& Rusca \cite{nr1979} & Stafford et al. \cite{stafford1994}\\ O~{\sc i} & Baluja \& Zeippen \cite{bz1988} & Berrington \cite{berrington1988}\\ & & Berrington \& Burke \cite{bb1981}\\ O~{\sc ii} & Zeippen \cite{zeippen1982} & Pradhan \cite{pradhan1976}\\ O~{\sc iii} & Nussbaumer \& Storey \cite{ns1981b} & Aggarwal \cite{aggarwal1983}\\ F~{\sc ii} & Baluja \& Zeippen \cite{bz1988} & Butler \& Zeippen \cite{bz1994}\\ F~{\sc iv} & Fischer \& Saha \cite{fs1985} & Lennon \& Burke \cite{lb1994}\\ Ne~{\sc ii} & Mendoza \cite{mendoza1983} & Bayes et al. \cite{bayes1985}\\ Ne~{\sc iii} & Mendoza \cite{mendoza1983} & Butler \& Zeippen \cite{bz1994}\\ Ne~{\sc iv} & Zeippen \cite{zeippen1982} & Giles \cite{giles1981}\\ Ne~{\sc v} & Fischer \& Saha \cite{fs1985} & Lennon \& Burke \cite{lb1994}\\ S~{\sc ii} & Mendoza \& Zeippen \cite{mz1982b} & Keenan et al. \cite{keenan1996}\\ & Keenan et al. 
\cite{keenan1993} & \\ S~{\sc iii} & Mendoza \& Zeippen \cite{mz1982a} & Mendoza \cite{mendoza1983}\\ S~{\sc iv} & Storey (unpublished) & Saraph \& Storey \cite{ss1999}\\ Cl~{\sc ii} & Mendoza \cite{mendoza1983} & Mendoza \cite{mendoza1983}\\ Cl~{\sc iii} & Mendoza \cite{mendoza1983} & Mendoza \cite{mendoza1983}\\ Cl~{\sc iv} & Mendoza \& Zeippen \cite{mz1982b} & Butler \& Zeippen \cite{bz1989}\\ Ar~{\sc ii} & Vujnovic \& Wiese \cite{vw1992} & Pelan \& Berrington \cite{pb1995}\\ Ar~{\sc iii} & Mendoza \& Zeippen \cite{mz1983} & Johnson \& Kingston \cite{jk1990}\\ Ar~{\sc iv} & Mendoza \& Zeippen \cite{mz1982b} & Zeippen et al. \cite{zeippen1987}\\ Ar~{\sc v} & Mendoza \& Zeippen \cite{mz1982a} & Mendoza \cite{mendoza1983}\\ Fe~{\sc iii} & Nahar \& Pradhan \cite{np1996} & Zhang \cite{zhang1996}\\ Fe~{\sc iv} & Garstang \cite{garstang1958} & Zhang \& Pradhan \cite{zp1997}\\ & Fischer \& Rubin \cite{fr2004} & \\ Fe~{\sc v} & & Berrington \cite{berrington1995}\\ Fe~{\sc vi} & Nussbaumer \& Storey \cite{ns1978} & Nussbaumer \& Storey \cite{ns1978}\\ Fe~{\sc vii} & Nussbaumer \& Storey \cite{ns1982} & Keenan \& Norrington \cite{kn1987}\\ & & Berrington et al. \cite{berrington2000}\\ \hline \end{tabular} \end{minipage} \end{table*} \subsubsection{\label{cel_abundances:part2} Ionic abundances from the IR and UV CELs} NGC\,7009 has been observed in wavelength ranges other than the optical: the {\it IUE}\,Short Wavelength Prime (SWP) and Long Wavelength Redundant (LWR) observations by Perinotto \& Benvenuti \cite{pb1981}, the {\it IRAS} Low Resolution Spectrometer (LRS) observations by Pottasch et al. \cite{pottasch1986}, the {\it ISO} Short Wavelength Spectrometer (SWS) and Long Wavelength Spectrometer (LWS) observations by Liu et al. \cite{liu2001a}, and the Kuiper Airborne Observatory ({\it KAO}) observations by Rubin et al. \cite{rubin1997}.
Ionic abundances derived from ten near- to far-infrared lines and seven ultraviolet lines are presented in Table\,\ref{abundances_ir_uv}. The dereddened and normalized intensities of three infrared lines, the [Ne~{\sc ii}] 12.8$\mu$m and the [Ne~{\sc iii}] 15.5 and 36.0$\mu$m, are adopted from LLB01. The Ne$^+$/H$^+$ abundance ratio is derived from the [Ne~{\sc ii}] 12.8$\mu$m line, assuming a temperature of 10\,020~K and a density of 4300~cm$^{-3}$. The derived Ne$^+$/H$^+$ ratio is 1.32$\times$10$^{-5}$, which agrees with 1.38$\times$10$^{-5}$ given by LLB01, as is expected. The critical densities of the [Ne~{\sc iii}] $^3$P$_{1}$ and $^3$P$_{0}$ levels are 2.1$\times$10$^5$ and 3.1$\times$10$^4$~cm$^{-3}$ (at $T_\mathrm{e}$ = 10\,000~K; Osterbrock \& Ferland \citealt{of2006}), respectively, much larger than the average electron density. The Ne$^{2+}$/H$^+$ abundance ratios deduced from the [Ne~{\sc iii}] 15.5 and 36$\mu$m lines are 1.67$\times$10$^{-4}$ and 1.48$\times$10$^{-4}$, respectively. The two ratio values agree with each other within errors. We adopt a value of 1.65$\times$10$^{-4}$, which is derived from the sum of the intensities of the two [Ne~{\sc iii}] infrared (IR) fine-structure lines, as the Ne$^{2+}$/H$^+$ ratio in NGC\,7009. It agrees with the value of 1.63$\times$10$^{-4}$ given by LLB01. An electron temperature of $9980$~K and a density of 3930~cm$^{-3}$ were assumed in LLB01. The observed line fluxes, in units of erg\,cm$^{-2}$\,s$^{-1}$, of the [N~{\sc iii}] 57$\mu$m and the [O~{\sc iii}] 52 and 88$\mu$m fine-structure lines are adopted from Liu et al. \cite{liu2001a}. These fluxes were normalized using the observed total H$\beta$ flux, 10$^{-9.63}$~erg\,cm$^{-2}$\,s$^{-1}$. The extinction of the three IR lines, as pointed out by Liu et al. \cite{liu2001a}, should be negligible. 
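The critical densities quoted here and below follow from the standard two-level balance between radiative decay and electron-impact de-excitation, $N_{\rm crit} = \sum_j A_{ij} / \sum_j q_{ij}$. A sketch with hypothetical atomic data (the $A$-value and collision strength below are not those of any real level):

```python
import math

def collisional_deexc_rate(omega, g_upper, t_e):
    # Electron-impact de-excitation rate coefficient (cm^3 s^-1) for a
    # summed collision strength omega and upper-level statistical weight
    # g_upper, using the standard 8.629e-6/sqrt(T) prefactor.
    return 8.629e-6 * omega / (g_upper * math.sqrt(t_e))

def critical_density(a_sum, omega_sum, g_upper, t_e):
    # N_crit: the electron density at which collisional de-excitation out
    # of a level matches its total radiative decay rate a_sum (s^-1).
    return a_sum / collisional_deexc_rate(omega_sum, g_upper, t_e)

# Hypothetical level: A = 1e-2 s^-1, summed collision strength 1.0, g = 3.
n_crit = critical_density(1.0e-2, 1.0, 3, 1.0e4)
```

Since the de-excitation rate coefficient scales as $T_{\rm e}^{-1/2}$, the critical density grows as $T_{\rm e}^{1/2}$, which is why the quoted values are tied to $T_{\rm e}$ = 10\,000~K.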
Since the critical density of the [N~{\sc iii}] $^2$P$^{\rm o}_{3/2}$ level is 1.5$\times$10$^3$~cm$^{-3}$, comparable to the density of NGC\,7009, we have assumed a density of $1260$~cm$^{-3}$, deduced from the [O~{\sc iii}] 52$\mu$m/88$\mu$m line ratio, in deriving the N$^{2+}$/H$^+$ abundance ratio from the [N~{\sc iii}] 57$\mu$m line. Here an electron temperature of 10\,020~K is again assumed. The derived N$^{2+}$/H$^+$ ratio is 4.97$\times$10$^{-5}$, in close agreement with 4.91$\times$10$^{-5}$ given by Liu et al. \cite{liu2001a}. The O$^{2+}$/H$^+$ abundance ratios derived from the [O~{\sc iii}] 52 and 88$\mu$m lines are 2.79$\times$10$^{-4}$ and 2.76$\times$10$^{-4}$, respectively. Here an electron temperature of 9800~K derived from the [O~{\sc iii}] $\lambda$4959/$\lambda$4363 ratio, and a density of 1260~cm$^{-3}$ derived from the [O~{\sc iii}] 52$\mu$m/88$\mu$m ratio, were assumed. Given that the critical densities of the [O~{\sc iii}] $^3$P$_{1}$ and $^3$P$_{2}$ fine-structure levels are 5.1$\times$10$^2$ and 3.6$\times$10$^3$~cm$^{-3}$, respectively (Osterbrock \& Ferland \citealt{of2006}), the density value of 1260~cm$^{-3}$ for the O~{\sc iii} IR-line abundances is appropriate. We adopt a value of 2.99$\times$10$^{-4}$, which is derived from the sum of the two [O~{\sc iii}] IR lines, as the O$^{2+}$/H$^+$ abundance ratio in NGC\,7009. This abundance ratio agrees well with the value of 2.96$\times$10$^{-4}$ given by Liu et al. \cite{liu2001a}. The flux of the [O~{\sc iv}] 25.9$\mu$m line was estimated from the $F$([O~{\sc iv}]~25.9$\mu$m)/$F$([S~{\sc iii}]~18.7$\mu$m) flux ratio given by Rubin et al. \cite{rubin1997}, who obtained far-IR observations of the PNe NGC\,7009, NGC\,7027 and NGC\,6210 with the Kuiper Airborne Observatory ({\it KAO}). The {\it ISO} SWS observations by X.-W.
Liu (unpublished) during the {\it ISO} Orbit\,$\#344$ in 1996 give a $F$([O~{\sc iv}]~25.9$\mu$m)/$F$([S~{\sc iii}]~18.7$\mu$m) ratio that differs from that of Rubin et al. \cite{rubin1997} by more than 30 per cent. The O$^{3+}$/H$^+$ abundance ratios derived from the above two observations are given in Table\,\ref{abundances_ir_uv}. Here the flux of the [S~{\sc iii}] 18.7$\mu$m line was adopted from Pottasch et al. \cite{pottasch1986}. The {\it IRAS} fluxes (in units of erg\,cm$^{-2}$\,s$^{-1}$) of the [Ne~{\sc v}] 14.3$\mu$m, the [S~{\sc iii}] 18.7$\mu$m and the [S~{\sc iv}] 10.52$\mu$m lines, as well as the total flux of H$\beta$, are adopted from Pottasch et al. \cite{pottasch1986}. The Ne$^{4+}$/H$^+$ abundance ratio derived from the [Ne~{\sc v}] 14.3$\mu$m line is 6.11$\times$10$^{-7}$. Here an electron temperature of 10\,020~K is assumed. The critical densities for the [Ne~{\sc v}] $^3$P$_{1}$ and $^3$P$_{2}$ fine-structure levels are 6.2$\times$10$^3$ and 3.5$\times$10$^4$~cm$^{-3}$ (Osterbrock \& Ferland \citealt{of2006}), respectively. Thus the electron density of 1260~cm$^{-3}$ from the [O~{\sc iii}] 52$\mu$m/88$\mu$m ratio is again assumed. If we adopt a density value of 4300~cm$^{-3}$, the derived Ne$^{4+}$/H$^+$ abundance ratio slightly increases to 6.87$\times$10$^{-7}$. The S$^{2+}$/H$^+$ abundance ratio derived from the [S~{\sc iii}] 18.7$\mu$m line is 8.36$\times$10$^{-7}$. Here an electron temperature of 10\,020~K and a density of 1260~cm$^{-3}$ are assumed. Since the critical densities of the [S~{\sc iii}] $^3$P$_{1}$ and $^3$P$_{2}$ fine-structure levels are 1.98$\times$10$^3$ and 1.54$\times$10$^4$~cm$^{-3}$, respectively, a density of 1260~cm$^{-3}$ is reasonable. If we increase the density value to 4300~cm$^{-3}$, the S$^{2+}$/H$^+$ ratio derived then increases to 9.93$\times$10$^{-7}$. The S$^{3+}$/H$^+$ abundance ratio derived from the [S~{\sc iv}] 10.52$\mu$m line is 7.34$\times$10$^{-6}$. The same temperature is assumed.
The observed fluxes (in units of erg\,cm$^{-2}$\,s$^{-1}$) for the seven ultraviolet (UV) lines in Table\,\ref{abundances_ir_uv} are adopted from Perinotto \& Benvenuti \cite{pb1981}. The fluxes are normalized using their H$\beta$ flux, which should be multiplied by a factor of 0.48, the fraction of the H$\beta$ flux entering the {\it IUE} aperture at the positions of the SWP and LWR images. Using the logarithmic reddening constant $c({\rm H}\beta)$ = 0.174 derived in Paper~I and the extinction curve of Howarth \cite{howarth1983}, we derived the dereddened intensities of those UV lines. The ionic abundances are presented in Table\,\ref{abundances_ir_uv}. Here an electron temperature of 10\,020~K and a density of 4300~cm$^{-3}$ are assumed in the abundance calculations. \begin{table*} \begin{minipage}{112mm} \caption{Ionic abundances derived from the UV and far-IR fine-structure CELs. Intensities are normalized such that $I$(H$\beta$) = 100.} \label{abundances_ir_uv} \centering \begin{tabular}{lrrllcl} \hline Ions & Lines & $I_{\rm obs}$ & & X$^{i+}$/H$^+$ & $\log$(X$^{i+}$/H$^+$)+12 & Ref.\\ \hline \multicolumn{2}{c}{IR Lines} & & & & & \\ \hline $[$N~{\sc iii}$]$ & 57~$\mu$m & 24.315 & N$^{2+}$/H$^{+}$ & 4.974$\times$10$^{-5}$ & 7.697 & (1)\\ $[$O~{\sc iii}$]$ & 52~$\mu$m & 165.940 & O$^{2+}$/H$^{+}$ & 2.792$\times$10$^{-4}$ & 8.446 & (1)\\ $[$O~{\sc iii}$]$ & 88~$\mu$m & 55.029 & O$^{2+}$/H$^{+}$ & 2.762$\times$10$^{-4}$ & 8.441 & (1)\\ $[$O~{\sc iv}$]$ & 25.9~$\mu$m & 20.29$\pm$1.10 & O$^{3+}$/H$^{+}$ & 1.275$\times$10$^{-5}$ & 7.105 & (2)$^a$\\ $[$O~{\sc iv}$]$ & 25.9~$\mu$m & 14.16$\pm$0.76 & O$^{3+}$/H$^{+}$ & 8.893$\times$10$^{-6}$ & 6.950 & (2)$^b$\\ $[$Ne~{\sc ii}$]$ & 12.8~$\mu$m & 9.9$\pm$3.0 & Ne$^+$/H$^{+}$ & 1.322$\times$10$^{-5}$ & 7.121 & (3)\\ $[$Ne~{\sc iii}$]$ & 15.5~$\mu$m & 250$\pm$12 & Ne$^{2+}$/H$^{+}$ & 1.667$\times$10$^{-4}$ & 8.222 & (3)\\ $[$Ne~{\sc iii}$]$ & 36.0~$\mu$m & 18.0$\pm$1.8 & Ne$^{2+}$/H$^{+}$ & 1.477$\times$10$^{-4}$ & 8.169 & (3)\\
$[$Ne~{\sc v}$]$ & 14.3~$\mu$m & 8.696 & Ne$^{4+}$/H$^{+}$ & 6.107$\times$10$^{-7}$ & 5.786 & (4)\\ $[$S~{\sc iii}$]$ & 18.7~$\mu$m & 7.826 & S$^{2+}$/H$^{+}$ & 8.356$\times$10$^{-7}$ & 5.922 & (4)\\ $[$S~{\sc iv}$]$ & 10.52~$\mu$m & 230.434 & S$^{3+}$/H$^{+}$ & 7.342$\times$10$^{-6}$ & 6.866 & (4)\\ \hline \multicolumn{2}{c}{UV Lines (\AA)} & & & & & \\ \hline C~{\sc ii}$]$ & $\lambda$2326 & 3.821 & C$^{+}$/H$^{+}$ & 4.555$\times$10$^{-6}$ & 6.658 & (5)\\ C~{\sc iii}$]$ & $\lambda$1908 & 44.832 & C$^{2+}$/H$^{+}$ & 1.308$\times$10$^{-4}$ & 8.117 & (5)\\ N~{\sc iii}$]$ & $\lambda$1751 & 6.959 & N$^{2+}$/H$^{+}$ & 5.535$\times$10$^{-5}$ & 7.743 & (5)\\ N~{\sc iv}$]$ & $\lambda$1486 & 6.197 & N$^{3+}$/H$^{+}$ & 6.151$\times$10$^{-5}$ & 7.789 & (5)\\ O~{\sc iii}$]$ & $\lambda$1663 & 4.313 & O$^{2+}$/H$^{+}$ & 2.387$\times$10$^{-4}$ & 8.378 & (5)\\ O~{\sc iv}$]$ & $\lambda$1403 & 0.705 & O$^{3+}$/H$^{+}$ & 3.977$\times$10$^{-5}$ & 7.600 & (5)\\ $[$Ne~{\sc iv}$]$ & $\lambda$2424 & 7.789 & Ne$^{3+}$/H$^{+}$ & 1.895$\times$10$^{-5}$ & 7.278 & (5)\\ \hline \end{tabular} \begin{description} \item [(1)] The observed flux is from Liu et al. \cite{liu2001a}. \item [(2)$^a$] The observed flux is estimated from the flux ratio $F$([O~{\sc iv}]~25.9$\mu$m)/$F$([S~{\sc iii}]~18.7$\mu$m) adopted from the ISO/SWS observations during ISO $\#$344 Orbit in 1996 (Liu et al., unpublished), using the [S~{\sc iii}] 18.7$\mu$m flux adopted from Pottasch et al. \cite{pottasch1986}. \item [(2)$^b$] The observed flux is estimated from the flux ratio $F$([O~{\sc iv}]~25.9$\mu$m)/$F$([S~{\sc iii}]~18.7$\mu$m) adopted from the KAO observations of Rubin et al. \cite{rubin1997}, using the [S~{\sc iii}] 18.7$\mu$m flux adopted from Pottasch et al. \cite{pottasch1986}. \item [(3)] The observed flux is from LLB01. \item [(4)] The observed flux is from Pottasch et al. \cite{pottasch1986}. \item [(5)] The observed flux is from Perinotto \& Benvenuti \cite{pb1981}.
\end{description} \end{minipage} \end{table*} \subsection{\label{compare} Comparison of the ORL and CEL abundances} \subsubsection{\label{compare:part1} Ionic abundances} In Fig.\,\ref{compare:orlcel}, the ionic abundances of C, N, O and Ne derived from ORLs are compared with the corresponding values derived from the optical, UV and far-IR CELs. Here the ionic abundances of C, N, O and Ne derived from ORLs are from Table\,\ref{orlabundances:adopted}, and the ionic abundances derived from CELs are from Tables\,\ref{abundances:cels} (optical) and \ref{abundances_ir_uv} (UV and IR). The IR fine-structure line fluxes are adopted from the recent {\it ISO}\,observations (Liu et al. \citealt{liu2001a}). We also make use of the {\it IRAS} line fluxes available in the literature. The UV line fluxes from the {\it IUE}\,observations are dereddened using the extinction derived in Paper~I, and they are used to derive ionic abundances for highly ionized heavy-element ions. The recombination line C$^{2+}$/H$^{+}$, N$^{2+}$/H$^{+}$, O$^{2+}$/H$^{+}$ and Ne$^{2+}$/H$^{+}$ abundances are all higher than the abundance ratios derived from CELs by nearly a factor of 5 (i.e. ADF$\sim$5), in agreement with what was observed by LSBC (for C, N and O) and LLB01 (for Ne). However, the ADF differs from 5 when the abundances derived from UV CELs are used: the N$^{3+}$/H$^{+}$ abundance derived from ORLs is higher than the value derived from the N~{\sc iv}$]$ $\lambda$1486 UV line by a factor of 2, and the recombination line O$^{3+}$/H$^{+}$ abundance is higher than the abundance derived from the O~{\sc iv}$]$ $\lambda$1403 UV line by only 65 per cent. This is probably mainly due to systematic differences in flux calibration, given that the UV data are adopted from the early {\it IUE} observations of Perinotto \& Benvenuti \cite{pb1981}. The ADF value of O$^{3+}$ is close to 5 when the abundance derived from the [O~{\sc iv}] 25.9$\mu$m IR line is used.
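The abundance discrepancy factors discussed above are direct ratios of the ORL and CEL ionic abundances. A sketch using the values collected in the tables of this section (CEL values from C~{\sc iii}$]$ $\lambda$1908, the [N~{\sc iii}] 57$\mu$m line, the [O~{\sc iii}] $\lambda\lambda$4959,5007 doublet, and the sum of the [Ne~{\sc iii}] IR fine-structure lines):

```python
# ORL ionic abundances (per H+) from the adopted-ORL table of this section.
orl = {"C2+": 6.865e-4, "N2+": 3.450e-4, "O2+": 1.4176e-3, "Ne2+": 8.425e-4}

# CEL ionic abundances from the optical, UV and IR tables of this section.
cel = {"C2+": 1.308e-4,   # C III] 1908 (UV)
       "N2+": 4.974e-5,   # [N III] 57um (IR)
       "O2+": 3.231e-4,   # [O III] 4959,5007 (optical)
       "Ne2+": 1.65e-4}   # [Ne III] 15.5 + 36um (IR)

# ADF(X^i+) = (X^i+/H+)_ORL / (X^i+/H+)_CEL
adf = {ion: orl[ion] / cel[ion] for ion in orl}
```

These ratios reproduce the "nearly a factor of 5" behaviour quoted above for all four doubly ionized species.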
\begin{figure} \begin{center} \includegraphics[width=8.0cm,angle=-90]{fig41.pdf} \caption{Comparison of the ionic abundances derived from ORLs and the optical, UV and IR CELs. Error bars on the ORL abundances are from the propagation calculations of the ORL measurement errors.} \label{compare:orlcel} \end{center} \end{figure} \subsubsection{\label{compare:part2} Total elemental abundances} Elemental abundances derived from ORLs and CELs are compared in Table\,\ref{abundances:total}. Abundance errors (numbers in brackets) are estimated from measurement uncertainties, which are calculated by quadratically adding the line flux errors from Gaussian profile fitting and the systematic uncertainties in line measurements, e.g. the subtraction of the continuum. Uncertainties introduced by the ionization correction method, i.e. the ionization correction factors ($icf$'s), are not taken into account in the error estimates. Also given in this table are the solar abundances from Asplund et al. \cite{asplund09} and average abundances of the Galactic disc and bulge PNe from the literature. The C/H, N/H, O/H and Ne/H elemental abundances derived from ORLs are higher than the corresponding values derived from CELs by factors of 5.4, 6.9, 4.7 and 5.3, respectively. This result is similar to that of LSBC, who derived ADF values of 6.1, 4.1 and 4.7 for C, N and O, respectively, in NGC\,7009. The Ne abundance discrepancy derived by LLB01 is about 4. In the analyses of LSBC, the elemental abundances of C and N derived from CELs were adopted from the earlier observations of Barker \cite{barker1983}, who first discussed the large discrepancy between the C$^{2+}$/H$^{+}$ abundances derived from the C~{\sc ii} $\lambda$4267 recombination line and from the UV CEL C~{\sc iii}$]$ $\lambda\lambda$1907,\,1909. Whenever available, the ionization correction factors ($icf$'s) given by Kingsburgh \& Barlow \cite{kb1994} were used.
In Section\,\ref{cel_abundances:part2}, we derived 16 ionic abundances from the UV and IR data available in the literature (Table\,\ref{abundances_ir_uv}). These ionic abundances can be used as an aid in deriving total elemental abundances. Where they are not strictly needed, they can still be used to check the elemental abundances derived with $icf$'s, by adding the UV and IR ionic abundances to the optical abundance ratios in Table\,\ref{abundances:cels} and comparing the sums with the ionization-corrected totals. The forbidden line O/H abundance ratio was calculated from the O$^+$/H$^+$ ratio derived from the [O~{\sc ii}] $\lambda\lambda$3726 and 3729 lines\footnote{Recombination excitation of the [O~{\sc ii}] $\lambda\lambda$3726, 3729 doublet is neglected. Given the small ionic concentration of O$^+$, compared to O$^{2+}$, the errors introduced to the total O/H elemental abundances deduced below, both from CELs and from ORLs, should be negligible.} and the O$^{2+}$/H$^+$ ratio derived from the [O~{\sc iii}] $\lambda\lambda$4959 and 5007 lines, correcting for the O$^{3+}$ unseen in the optical using \begin{equation} \label{icf1} \frac{\rm O}{\rm H} = icf(\rm O)\times(\frac{{\rm O}^{+}}{{\rm H}^{+}} + \frac{{\rm O}^{2+}}{{\rm H}^{+}})\\ = [\frac{{\rm He}^{+} + {\rm He}^{2+}}{{\rm He}^{+}}]^{2/3} \times(\frac{{\rm O}^{+}}{{\rm H}^{+}} + \frac{{\rm O}^{2+}}{{\rm H}^{+}}). \end{equation} From the He$^+$ and He$^{2+}$ abundances given in Table\,\ref{abundances:i}, we have $icf(\rm O)$ = 1.086. The O/H ratio thus derived is 3.603$\times$10$^{-4}$, close to the value of 3.716$\times$10$^{-4}$ obtained by summing the O ions from the UV, IR and optical lines, O/H = O$^+$/H$^+$ + O$^{2+}$/H$^+$ + O$^{3+}$/H$^+$. Here the O$^{3+}$/H$^+$ ratio is derived from the O~{\sc iv}$]$ $\lambda$1403 UV line (Section\,\ref{cel_abundances:part2}).
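The numerical step in Equation\,\ref{icf1} can be sketched as follows. The He$^+$/H$^+$ = 0.099 and He$^{2+}$/H$^+$ = 0.013 values are those implied by the helium abundances quoted later in the text, and the optical CEL ionic sum O$^+$/H$^+$ + O$^{2+}$/H$^+$ = 3.318$\times$10$^{-4}$ is back-read from the quoted total, so the inputs are illustrative rather than authoritative:

```python
# Sketch of the O icf of Eq. (icf1): icf(O) = [(He+ + He2+)/He+]^(2/3),
# applied to the optical CEL ionic sum O+/H+ + O2+/H+.
# The helium ionic fractions are as implied by the He abundances quoted
# in the text; the ionic O sum is back-read from the quoted total.

he_plus = 0.099         # He+/H+ (assumed)
he_2plus = 0.013        # He2+/H+ (assumed)
o_ionic_sum = 3.318e-4  # O+/H+ + O2+/H+ from optical CELs (assumed)

icf_o = ((he_plus + he_2plus) / he_plus) ** (2.0 / 3.0)
o_over_h = icf_o * o_ionic_sum
print(f"icf(O) = {icf_o:.3f}")     # ~1.086, as quoted in the text
print(f"O/H    = {o_over_h:.3e}")  # close to the quoted 3.603e-4
```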
The recombination line abundance O$^+$/H$^+$ is not available; thus, in order to make use of the above equation, we assume that the recombination line O$^+$/O$^{2+}$ ratio is the same as that derived from the CELs. Given the small ionic concentration of O$^+$ (less than 10 per cent in NGC\,7009), the errors introduced should be negligible. The O$^{2+}$/H$^+$, O$^{3+}$/H$^+$ and O$^{4+}$/H$^+$ abundance ratios derived from ORLs are available from Table\,\ref{orlabundances:adopted}. The recombination line O/H abundance ratio thus derived is 1.667$\times$10$^{-3}$. Both the C$^{2+}$/H$^+$ and C$^{3+}$/H$^+$ abundance ratios have been derived from ORLs and are presented in Table\,\ref{orlabundances:adopted}. The C$^{4+}$/H$^+$ abundance ratio has also been derived from the C~{\sc iv} M8 $\lambda$4658.30 line, but is probably unreliable because the line is blended with [Fe~{\sc iii}] $\lambda$4658.05. The unseen C$^+$ is corrected for using the equations of LSBC, \begin{equation} \label{icf2} \frac{\rm C}{\rm H} = \frac{{\rm C}^{2+} + {\rm C}^{3+}}{{\rm H}^{+}} \times icf(\rm C) \end{equation} and \begin{equation} \label{icf3} icf(\rm C) = [\frac{{\rm He}^{+} + {\rm He}^{2+}}{{\rm He}^+}]^{1/3} \times [\frac{{\rm O}^{+} + {\rm O}^{2+}}{{\rm O}^{2+}}]. \end{equation} The ionic abundances of O$^+$ and O$^{2+}$ in the above equation are assumed to be those derived from the forbidden line measurements. In NGC\,7009, most of the carbon should exist in the form of C$^{2+}$ and C$^{3+}$, and the correction required for the unobserved C$^+$ and C$^{4+}$ is quite small. Equation~\ref{icf3} gives $icf(\rm C)$ = 1.07. The recombination line C/H abundance ratio thus derived is 8.932$\times$10$^{-4}$, which is only 3 per cent lower than the value given by LSBC. For the collisionally excited lines, C$^+$/H$^+$ and C$^{2+}$/H$^+$ are derived from UV lines (Table\,\ref{abundances_ir_uv}). We assume C$^{3+}$/C$^{2+}$ = 0.216, as given by the ORL abundance ratios.
Thus the CEL ratio of carbon is C/H = C$^+$/H$^+$ + C$^{2+}$/H$^+$ + C$^{3+}$/H$^+$, which is 1.636$\times$10$^{-4}$. Recombination line abundances are available for the N$^{2+}$/H$^+$ and N$^{3+}$/H$^+$ ratios (Table\,\ref{orlabundances:adopted}) but not for N$^+$/H$^+$. The latter is available from the collisionally excited [N~{\sc ii}] $\lambda\lambda$6548 and 6584 lines. The N$^{2+}$/H$^+$ ratio derived from the UV collisionally excited N~{\sc iii}$]$ $\lambda$1751 line is about 10 per cent higher than that deduced from the [N~{\sc iii}] 57~$\mu$m far-IR fine-structure line (Table\,\ref{abundances_ir_uv}). Given the weakness of the $\lambda$1751 line and the relatively new observations of the 57~$\mu$m line, we adopt the N$^{2+}$/H$^+$ abundance ratio derived from the far-IR line. N$^{2+}$/H$^+$ from the 57~$\mu$m line and N$^+$/H$^+$ from the $\lambda\lambda$6548 and 6584 lines yield N$^+$/N$^{2+}$ = 0.0548. We assume that this ratio also holds for the corresponding abundances derived from ORLs. The N~{\sc iv} optical recombination lines are not clearly detected in the spectrum of NGC\,7009, but we detected the O~{\sc iv} M2 3d~$^2$D -- 3p~$^2$P$^{\rm o}$ $\lambda$3412 line, and the O$^{4+}$/H$^+$ abundance ratio is derived from it (Table\,\ref{abundances:viii}). Given that the ionization potential of N$^{3+}$ (77.472~eV) is nearly the same as that of O$^{3+}$ (77.4~eV), we assume N$^{4+}$/N = O$^{4+}$/O. Thus the total recombination line N/H abundance is given by \begin{equation} \label{icf4} \frac{\rm N}{\rm H} = (1.0548\times\frac{{\rm N}^{2+}}{{\rm H}^{+}} + \frac{{\rm N}^{3+}}{{\rm H}^{+}}) / (1 - \frac{{\rm O}^{4+}}{\rm O}). \end{equation} To obtain the forbidden-line N/H abundance, we correct for the unseen N$^{3+}$/H$^+$ assuming N$^{3+}$/N$^{2+}$ = 0.380, as given by the ORLs, so that \begin{equation} \label{icf5} \frac{\rm N}{\rm H} = \frac{{\rm N}^{+}}{{\rm H}^{+}} + 1.380\times\frac{{\rm N}^{2+}}{{\rm H}^{+}}.
\end{equation} The total CEL N/H abundance thus derived is 7.01$\times$10$^{-5}$. Here N$^{4+}$/H$^+$ is neglected, given its very low abundance if we assume that the N$^{4+}$/N$^{2+}$ ratio from CELs is the same as that given by the ORLs. If we take into account the N$^{3+}$/H$^+$ abundance ratio derived from the UV N~{\sc iv}$]$ $\lambda$1486 line (Table\,\ref{abundances_ir_uv}), the total CEL abundance ratio is then N/H = N$^+$/H$^+$ + N$^{2+}$/H$^+$ + N$^{3+}$/H$^+$ = 1.140$\times$10$^{-4}$, which is 64 per cent higher than the value derived with the ionization correction of Equation\,\ref{icf5}. Since the recent IR data should be more reliable, we adopt the ratio derived from Equation\,\ref{icf5}. The Ne$^+$/H$^+$, Ne$^{2+}$/H$^+$ and Ne$^{3+}$/H$^+$ ionic abundances are available from IR (Table\,\ref{abundances_ir_uv}) and optical CELs (Table\,\ref{abundances:cels}). For the Ne$^{2+}$/H$^+$ ratio, we adopt the abundance derived from the [Ne~{\sc iii}] $\lambda$3868\footnote{The other [Ne~{\sc iii}] nebular line, $\lambda$3967, is saturated in the higher resolution spectrum; in the low-resolution spectrum it is not saturated but is blended with the H~{\sc i} $\lambda$3970 line.} optical line, which is only about 21 per cent lower than that derived from the {\it ISO} SWS observation of the [Ne~{\sc iii}] 15.5+36~$\mu$m IR fine-structure lines. For the Ne$^{3+}$/H$^+$ ratio, we adopt the value derived from the [Ne~{\sc iv}] $\lambda\lambda$4724.17 and 4725.67 optical lines. From the {\it IUE} observation by Perinotto \& Benvenuti \cite{pb1981}, we also derived the Ne$^{3+}$/H$^+$ abundance ratio from the observed Ne~{\sc iv}$]$ $\lambda\lambda$2422 and 2424 lines (Table\,\ref{abundances_ir_uv}); it is 26 per cent higher than the optical value.
Thus the CEL Ne/H ratio obtained from the equation \begin{equation} \label{icf6} \frac{\rm Ne}{\rm H} = \frac{{\rm Ne}^{+}}{{\rm H}^{+}} + \frac{{\rm Ne}^{2+}}{{\rm H}^{+}} + \frac{{\rm Ne}^{3+}}{{\rm H}^{+}} \end{equation} is 1.578$\times$10$^{-4}$ (Table\,\ref{abundances:total}). This is only 9 per cent lower than the Ne/H ratio derived by LLB01. Only the Ne$^{2+}$/H$^+$ ratio is available from ORLs. The Ne$^+$/Ne$^{2+}$ ratio from CELs is 0.102, and the Ne$^{3+}$/Ne$^{2+}$ ratio from optical CELs is 0.116. Assuming that these ionization ratios from CELs are the same as those given by ORLs, we obtain a recombination line Ne/H abundance ratio of 8.398$\times$10$^{-4}$ by using \begin{equation} \label{icf7} \frac{\rm Ne}{\rm H} = (0.102 + 1.0 + 0.116)\times \frac{{\rm Ne}^{2+}}{{\rm H}^{+}}. \end{equation} This value is 21 per cent higher than that given by LLB01. CELs of [F~{\sc ii}] and [F~{\sc iv}] are detected, with the F$^+$/H$^+$ and F$^{3+}$/H$^+$ abundance ratios derived from the [F~{\sc ii}] $\lambda$4789 and [F~{\sc iv}] $\lambda$4060 lines, respectively (Table\,\ref{abundances:cels}). O$^+$ has an ionization potential comparable to that of F$^+$, and Ne$^{3+}$ to that of F$^{3+}$. Zhang \& Liu \cite{zl2005} suggested that F/O = F$^{2+}$/O$^{2+}$ for low-excitation PNe, and F/O = (F$^{3+}$/Ne$^{3+}$)(Ne/O) for high-excitation PNe. Here we adopt the assumptions of Zhang \& Liu \cite{zl2005}. Given that NGC\,7009 is a medium-excitation PN, we derive the F/H abundance ratio from the equation \begin{equation} \label{icf8} \frac{\rm F}{\rm H} = (\frac{{\rm F}^{+}}{{\rm H}^{+}} + \frac{{\rm F}^{3+}}{{\rm H}^{+}}) / (1 - \frac{{\rm O}^{2+}}{\rm O}). \end{equation} The derived F/H abundance ratio is 2.923$\times$10$^{-7}$. The F$^{4+}$/H$^+$ contribution should be negligible because the ionization potential of F$^{3+}$ is very high (87~eV). No recombination lines from fluorine ions are detected.
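The scaling in Equation\,\ref{icf7} can be sketched numerically; the Ne$^{2+}$/H$^+$ ORL value used below is back-read from the quoted total and is therefore illustrative, not a measured input:

```python
# Sketch of Eq. (icf7): the ORL Ne/H total follows from scaling the
# Ne2+/H+ ORL abundance by (Ne+/Ne2+ + 1 + Ne3+/Ne2+), with the ionic
# ratios taken from the CELs.  Ne2+/H+ here is back-read from the
# quoted total (illustrative value only).

ne2_over_h = 6.895e-4        # Ne2+/H+ from ORLs (assumed)
scale = 0.102 + 1.0 + 0.116  # Ne+/Ne2+ + 1 + Ne3+/Ne2+ from CELs

ne_over_h = scale * ne2_over_h
print(f"Ne/H (ORL) = {ne_over_h:.3e}")  # ~8.398e-04, as quoted
```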
For the elements heavier than Ne, we detected CELs emitted by S, Cl, Ar and K ions (Table\,\ref{abundances:cels}), and ORLs emitted by Si and Mg ions (Table\,\ref{abundances:ix}). Considering that the first ionization potential of the silicon atom is only 8.2~eV, and that the ionization potential of Si$^{4+}$ is 166.8~eV, a huge jump from that of Si$^{3+}$ (45~eV), we assume that the Si$^+$/H$^+$ abundance is negligible and that the main ionization stages of Si in NGC\,7009 are Si$^{2+}$, Si$^{3+}$ and Si$^{4+}$. Several Si~{\sc iii} multiplets and the Si~{\sc iv} multiplet V1 are detected in NGC\,7009. Only effective dielectronic recombination coefficients for Si~{\sc iii} multiplets are available, from Nussbaumer \& Storey \cite{ns1984}, thus only the Si$^{3+}$/H$^+$ abundance ratio is derived from Si~{\sc iii} ORLs. The Si$^{3+}$/H$^+$ ratio is derived from the Si~{\sc iii} multiplets M2 4p~$^3$P$^{\rm o}$ -- 4s~$^3$S and M5 4d~$^3$D -- 4p~$^3$P$^{\rm o}$ (Table\,\ref{abundances:ix}). Since the ionization potential of Si$^+$ (16.3~eV) is close to that of F (17.4~eV), and the ionization potential of Si$^{3+}$ (45~eV) is comparable to that of N$^{2+}$ (47.4~eV), we assume the relations Si$^{2+}$/Si = F$^+$/F = O$^+$/O and Si$^{4+}$/Si = N$^{3+}$/N. Thus the total Si/H abundance ratio from ORLs can be derived from the equation \begin{equation} \label{icf9} \frac{\rm Si}{\rm H} = \frac{{\rm Si}^{3+}}{{\rm H}^{+}} / (1 - \frac{{\rm O}^{+}}{\rm O} - \frac{{\rm N}^{3+}}{\rm N}) = \frac{{\rm Si}^{3+}}{{\rm H}^{+}} / (1 - \frac{{\rm F}^{+}}{\rm F} - \frac{{\rm N}^{3+}}{\rm N}). \end{equation} The Si/H abundance ratio thus derived is 8.278$\times$10$^{-6}$. In Equation\,\ref{icf9}, the O$^+$/O ratio is from optical CELs, and the N$^{3+}$/N ratio is from ORLs. The Mg$^{2+}$/H$^+$ abundance ratio is derived from the Mg~{\sc ii} M4 4f~$^2$F$^{\rm o}$ -- 3d~$^2$D $\lambda$4481 line.
The ionization potentials of Mg$^0$ and Mg$^{2+}$ are 7.65~eV and 80.1~eV, respectively; thus we assume that magnesium in NGC\,7009 is mainly doubly ionized, with Mg$^+$ and Mg$^{3+}$ negligible. No effective recombination coefficients for Mg~{\sc ii} lines are available. Given the similarity between the atomic structures of Mg~{\sc ii} and C~{\sc ii}, we assume that the Mg~{\sc ii} $\lambda$4481 line has an effective recombination coefficient equal to that of C~{\sc ii} M6 $\lambda$4267. Thus we have Mg/H = Mg$^{2+}$/H$^+$. For S, we have S$^+$/H$^+$ derived from the [S~{\sc ii}] $\lambda\lambda$6716 and 6731 optical lines and S$^{2+}$/H$^+$ derived from the [S~{\sc iii}] $\lambda$6312 optical line. The S$^{2+}$/H$^+$ ratio is also available from the [S~{\sc iii}] 18.7~$\mu$m IR line. S$^{3+}$/H$^+$ is derived from the [S~{\sc iv}] 10.5~$\mu$m IR fine-structure line, adopted from the {\it IRAS} observations of Pottasch et al. \cite{pottasch1986}. We use the S$^{2+}$/H$^+$ ratio derived from the $\lambda$6312 line. S$^{4+}$ is not observed. Since S$^{3+}$ has an ionization potential of 47.3~eV, very close to the value of 47.4~eV for N$^{2+}$, we assume that S$^{4+}$/S = N$^{3+}$/N. The N$^{3+}$/N ratio is from ORLs. Thus the S/H ratio is obtained from \begin{equation} \label{icf10} \frac{\rm S}{\rm H} = (\frac{{\rm S}^{+}}{{\rm H}^{+}} + \frac{{\rm S}^{2+}}{{\rm H}^{+}} + \frac{{\rm S}^{3+}}{{\rm H}^{+}}) / (1 - \frac{{\rm N}^{3+}}{\rm N}). \end{equation} The derived S/H ratio is 1.299$\times$10$^{-5}$. If we instead adopt the S$^{2+}$/H$^+$ ratio derived from the [S~{\sc iii}] 18.7~$\mu$m IR line, the S/H ratio is 1.10$\times$10$^{-5}$, about 15 per cent lower than when the optical value is adopted. The Cl$^+$/H$^+$, Cl$^{2+}$/H$^+$ and Cl$^{3+}$/H$^+$ abundances are derived from optical CELs (Table\,\ref{abundances:cels}).
Given that the ionization potential of Cl$^{3+}$ (53.5~eV) is similar to that of He$^+$ (54.4~eV), and that the ionization potential of Cl$^{4+}$ (67.7~eV) is comparable to that of Ne$^{2+}$ (63.5~eV), we assume Cl$^{4+}$/Cl = He$^{2+}$/He = 0.013/0.112 = 0.116 and Cl$^{5+}$/Cl = Ne$^{3+}$/Ne = 0.095. Here the Ne$^{3+}$/Ne ratio is from the CEL abundances. Thus the Cl/H abundance ratio can be obtained from the equation \begin{equation} \label{icf11} \frac{\rm Cl}{\rm H} = (\frac{{\rm Cl}^{+}}{{\rm H}^{+}} + \frac{{\rm Cl}^{2+}}{{\rm H}^{+}} + \frac{{\rm Cl}^{3+}}{{\rm H}^{+}}) / (1 - \frac{{\rm He}^{2+}}{\rm He} - \frac{{\rm Ne}^{3+}}{\rm Ne}). \end{equation} The derived total Cl/H abundance is 1.927$\times$10$^{-7}$. Ar$^{2+}$/H$^+$ is derived from the [Ar~{\sc iii}] $\lambda\lambda$7136 and 7751 lines, Ar$^{3+}$/H$^+$ from the [Ar~{\sc iv}] $\lambda\lambda$4711 and 4740 lines, and Ar$^{4+}$/H$^+$ from the [Ar~{\sc v}] $\lambda\lambda$6435 and 7006 lines (Table\,\ref{abundances:cels}). The unseen Ar$^+$ is corrected for by assuming Ar$^+$/Ar = N$^+$/N, where the N$^+$/N ratio is derived from optical CELs. The total Ar/H ratio can be obtained using \begin{equation} \label{icf12} \frac{\rm Ar}{\rm H} = (\frac{{\rm Ar}^{2+}}{{\rm H}^{+}} + \frac{{\rm Ar}^{3+}}{{\rm H}^{+}} + \frac{{\rm Ar}^{4+}}{{\rm H}^{+}}) / (1 - \frac{{\rm N}^{+}}{\rm N}). \end{equation} The derived Ar/H abundance ratio is 2.570$\times$10$^{-6}$. We have derived the K$^{3+}$/H$^+$ abundance ratio from the [K~{\sc iv}] $\lambda\lambda$6102 and 6795 lines. A very faint feature, which might be the [K~{\sc v}] $\lambda$4163 line, is detected in the spectrum of NGC\,7009, but its identification cannot be confirmed. K$^+$ is probably negligible in NGC\,7009, judging from the very low ionization potential of neutral potassium.
Since the ionization potential of K$^+$ (31.6~eV) is comparable to that of N$^+$ (29.6~eV), and the ionization potential of K$^{3+}$ (60.9~eV) is comparable to that of F$^{2+}$ (62.7~eV), we assume K$^{2+}$/K = N$^{2+}$/N and K$^{4+}$/K = F$^{3+}$/F. Here the N$^{2+}$/N ratio is from the ORLs, and the F$^{3+}$/F ratio is from CELs. We use the following equation to correct for the unseen K$^{2+}$ and K$^{4+}$ ions, \begin{equation} \label{icf13} \frac{\rm K}{\rm H} = \frac{{\rm K}^{3+}}{{\rm H}^{+}} / (1 - \frac{{\rm N}^{2+}}{\rm N} - \frac{{\rm F}^{3+}}{\rm F}). \end{equation} The derived total K/H abundance ratio is 4.242$\times$10$^{-8}$. \begin{table*} \begin{minipage}{150mm} \caption{Total elemental abundances derived from ORLs and CELs, in units such that $\log{N({\rm H})}$ = 12.} \label{abundances:total} \centering \begin{tabular}{lccrrrrrrrr} \hline Element & \multicolumn{2}{c}{X/H} & \multicolumn{2}{c}{$\log$[X/H]+12} & \multicolumn{2}{c}{TLW$^a$} & \multicolumn{2}{c}{TLW$^b$} & KB94$^c$ & Solar$^d$\\ & ORLs & CELs & ORLs & CELs & ORLs & CELs & ORLs & CELs & & \\ \hline He & 0.112 & & 11.049 & & 11.02 & & 11.06 & & 11.06 & 10.93\\ C & 8.93($\pm$0.45)$\times$10$^{-4}$ & 1.64($\pm$0.33)$\times$10$^{-4}$ & 8.95 & 8.21 & 9.09 & 8.52 & 9.03 & 8.56 & 8.74 & 8.43\\ N & 4.92($\pm$0.20)$\times$10$^{-4}$ & 7.14($\pm$0.35)$\times$10$^{-5}$ & 8.69 & 7.85 & 8.94 & 8.17 & 9.14 & 8.34 & 8.38 & 7.83\\ O & 1.67($\pm$0.06)$\times$10$^{-3}$ & 3.60($\pm$0.07)$\times$10$^{-4}$ & 9.22 & 8.56 & 9.22 & 8.60 & 9.32 & 8.70 & 8.66 & 8.69\\ F & & 2.92($\pm$0.58)$\times$10$^{-7}$ & & 5.47 & & & & & & 4.56\\ Ne & 8.40($\pm$0.42)$\times$10$^{-4}$ & 1.58($\pm$0.08)$\times$10$^{-4}$ & 8.92 & 8.20 & 9.06 & 7.99 & 9.07 & 8.13 & 8.06 & 7.93\\ Mg$^e$ & 3.18($\pm$0.16)$\times$10$^{-5}$ & & 7.50 & & 7.56 & & 7.71 & & & 7.60\\ Si$^f$ & 8.28($\pm$0.80)$\times$10$^{-6}$ & & 6.92 & & & & & & & 7.51\\ S & & 1.30($\pm$0.11)$\times$10$^{-5}$ & & 7.11 & & 6.84 & & 7.05 & 6.99 & 7.12\\ Cl & &
1.93($\pm$0.21)$\times$10$^{-7}$ & & 5.28 & & 5.35 & & 5.29 & & 5.50\\ Ar & & 2.57($\pm$0.38)$\times$10$^{-6}$ & & 6.41 & & 6.20 & & 6.34 & 6.51 & 6.40\\ K$^g$ & & 4.24($\pm$0.84)$\times$10$^{-8}$ & & 4.63 & & & & & & 5.03\\ \hline \end{tabular} \begin{description} \item [$^a$] Average abundances for 23 Galactic bulge PNe of Wang \& Liu \cite{wl2007} plus the Galactic bulge PNe M\,1-42 and M\,2-36 analyzed by Liu et al. \cite{liu2001b}. \item [$^b$] Average abundances given by Wang \& Liu \cite{wl2007} for 58 Galactic disc PNe which were selected from Tsamis et al. (\citealt{tsamis2003}, \citealt{tsamis2004}), Liu et al. (\citealt{liu2004a},\,b) and Wesson, Liu \& Barlow \cite{wlb2005}. \item [$^c$] Average abundances of Galactic disc and bulge PNe (Kingsburgh \& Barlow \citealt{kb1994}; Exter, Barlow \& Walton \citealt{exter04}), all based on CEL analyses except for helium, for which ORLs were used. \item [$^d$] Solar values from Asplund et al. \cite{asplund09}. \item [$^e$] Mg$^+$/H$^+$ and Mg$^{3+}$/H$^+$ are neglected in calculating the total ORL abundance. \item [$^f$] Si$^+$/H$^+$ is neglected in calculating the total ORL abundance. \item [$^g$] K$^+$/H$^+$ and K$^{4+}$/H$^+$ are neglected in calculating the total CEL abundance. \end{description} \end{minipage} \end{table*} \section{\label{discussion} Discussion} Average elemental abundances for Galactic disc and bulge PNe taken from Kingsburgh \& Barlow \cite{kb1994} and Exter, Barlow \& Walton \cite{exter04} are presented in Table\,\ref{abundances:total} for the purpose of comparison. Also presented in this table are the average abundances for the 23 Galactic bulge PNe of Wang \& Liu \cite{wl2007} plus the two bulge PNe M\,1-42 and M\,2-36 studied by Liu et al. \cite{liu2001b}, and the 58 Galactic disc PNe selected from Tsamis et al. (\citealt{tsamis2003}, \citealt{tsamis2004}), Liu et al. (\citealt{liu2004a},\,b) and Wesson et al. \cite{wlb2005}.
The helium abundance derived from the current analyses agrees well with the average value of the 58-object disc sample, but is 0.12 dex higher than the most recent solar value (Asplund et al. \citealt{asplund09}). The recombination line C/H abundance of 8.95 derived for NGC\,7009 also agrees with the average value (9.03) of the disc sample, but is 0.14 dex lower than that of the 23-object bulge sample of Wang \& Liu \cite{wl2007}. The forbidden-line C/H abundance for NGC\,7009 is lower than all the values quoted from the literature. The elemental N/H abundance derived from CELs for NGC\,7009 agrees well with that of the Sun, but is lower than all the other average values. Our recombination line N/H abundance is lower than the average values of the bulge and disc samples compiled by Wang \& Liu \cite{wl2007}, but is higher than the average value of the sample from Kingsburgh \& Barlow \cite{kb1994} plus Exter, Barlow \& Walton \cite{exter04}. The forbidden-line O/H abundance for NGC\,7009 is 0.13 dex lower than the solar value, and lower than the average value of the disc sample of Wang \& Liu \cite{wl2007} by the same amount. Our C/O abundance ratio derived from CELs is 0.96, slightly lower than all the other ratios from the literature, indicating that NGC\,7009 might be enriched in oxygen, consistent with the fact that the spectrum of NGC\,7009 is obviously rich in oxygen emission lines. The forbidden-line neon abundance of NGC\,7009 is 0.27 dex higher than the solar value, and agrees with the average value (8.13) of the disc sample of Wang \& Liu \cite{wl2007}. The forbidden-line Ne/O ratio observed in NGC\,7009 is higher than the solar ratio by a factor of 2.5. However, Wang \& Liu \cite{wl2008} suggested that the solar Ne/O ratio of Asplund, Grevesse \& Sauval \cite{asplund05} should be revised upwards by 0.22 dex, i.e. increased to 0.37. A similarly high forbidden-line Ne/O ratio (0.34--0.36) was also found in NGC\,6153 by Liu et al. \cite{liu2000}.
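The dex comparisons above are simple logarithmic conversions of the linear X/H values in Table\,\ref{abundances:total}; a minimal sketch:

```python
# Conversion of linear X/H abundances (values transcribed from the
# total abundance table) to the logarithmic scale 12 + log10(X/H),
# reproducing the dex values used in the comparisons above.

import math

def to_dex(x_over_h):
    """Return the abundance on the scale where log N(H) = 12."""
    return 12.0 + math.log10(x_over_h)

print(f"C (ORL):  {to_dex(8.93e-4):.2f}")  # 8.95
print(f"O (CEL):  {to_dex(3.60e-4):.2f}")  # 8.56
print(f"Ne (CEL): {to_dex(1.58e-4):.2f}")  # 8.20
# e.g. the Ne excess over solar: 8.20 - 7.93 = 0.27 dex
```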
The sulfur and argon abundances of NGC\,7009 both agree with the solar values. The Mg/H abundance of NGC\,7009 agrees with the average abundance of the bulge PNe sample of Wang \& Liu \cite{wl2007}, and is 0.1 dex lower than the solar value. Elemental abundances derived from the ORLs and CELs observed in NGC\,7009 are presented in Table\,\ref{abundances:total}. By using the IR and UV line fluxes from the literature and correcting them for extinction, we are able to derive the elemental abundances of C, N, O and Ne relative to hydrogen from both ORLs and CELs. The resultant ORL abundances of these four elements are all higher than the abundance ratios derived from CELs, by factors of 5--7. Following the deep spectroscopic observations of the ORLs in the spectrum of NGC\,7009 by LSBC (for C, N and O) and LLB01 (for Ne), the heavy-element abundance discrepancy problem is here studied again quantitatively, using the deepest CCD spectrum ever taken for a gaseous nebula and the new effective recombination coefficients calculated for the N~{\sc ii} and O~{\sc ii} recombination spectra under nebular conditions in the intermediate coupling scheme. The analyses of the spectrum are carried out in a self-consistent manner, i.e. an electron temperature of 1000~K, as derived from the N~{\sc ii} and O~{\sc ii} ORL ratios, is assumed throughout the analyses of the heavy-element ORLs, while a temperature of about 10\,000~K, as yielded by the CELs, is assumed in calculating the CEL abundances. In the previous deep spectroscopy of NGC\,7009 by LSBC, the electron temperature ($\sim$10\,000~K) derived from the [O~{\sc iii}] nebular-to-auroral line ratio was adopted in calculating the ORL abundances. In the analyses of the neon abundance in NGC\,7009 by LLB01, the electron temperature (7100~K) derived from the H~{\sc i} Balmer jump was used to calculate the recombination line neon abundance. Liu et al.
\cite{liu2000} assumed a temperature of 9100~K, as derived from the [O~{\sc iii}] CEL ratio, in calculating the recombination line abundances for the C, N, O and Ne ions in NGC\,6153. In the analyses of the Galactic bulge PNe M\,1-42 and M\,2-36, Liu et al. \cite{liu2001b} used the H~{\sc i} Balmer jump temperature (3560~K for M\,1-42 and 5900~K for M\,2-36) to derive the recombination line abundances. However, the recombination line abundances derived in the studies mentioned above are questionable, given that the temperatures assumed were not derived from the heavy-element ORLs. The effective recombination coefficients of the permitted transitions of heavy-element ions, which are mainly excited by radiative recombination, usually decrease as the electron temperature increases, except for the dielectronic transitions (with high-lying parents), whose effective recombination coefficients increase as the temperature increases. When we use recombination lines that are mainly excited by radiative recombination, such as those used by LSBC and LLB01, to calculate the ionic abundances of C, N, O and Ne, adopting a forbidden line temperature, e.g. $T_\mathrm{e}$([O~{\sc iii}]), or the H~{\sc i} Balmer jump temperature will result in overestimated recombination line abundances, because those two temperatures are both much higher than the temperatures yielded by the heavy-element ORLs, as revealed in the current analyses as well as many previous studies (e.g. Liu \citealt{liu2011} for a recent review). Here we take the O~{\sc ii} recombination lines as an example.
When the electron temperature increases from 1000 to 10\,000~K, the effective recombination coefficient for the O~{\sc ii} M1 $\lambda$4649.13 line decreases by a factor of 7.5 (PJS), while the effective recombination coefficient for H$\beta$ decreases by a factor of 6 (Storey \& Hummer \citealt{sh1995}); as a consequence, the O$^{2+}$/H$^{+}$ ionic abundance, which for a given observed intensity ratio scales with the ratio of the effective recombination coefficients of the two lines, $\alpha_{\rm eff}$(H$\beta$)/$\alpha_{\rm eff}$($\lambda$4649), will increase by about 22 per cent. The ADFs observed in NGC\,7009, although moderate compared with those found in the PNe NGC\,6153 (ADF$\sim$10, Liu et al. \citealt{liu2000}), M\,1-42 (ADF$\sim$20, Liu et al. \citealt{liu2001b}), NGC\,1501 (ADF$\sim$30, Ercolano et al. \citealt{ercolano04}) and Hf\,2-2 (ADF$\sim$70, Liu et al. \citealt{lbzbs06}), are high compared with those of most of the other Milky Way PNe so far spectroscopically studied (ADFs$\sim$1.6--3.2). Ever since the observation of an ADF of about 5 in the deep spectroscopy of NGC\,7009 by LSBC, it has been suspected that systematic effects in the classic plasma analysis procedure based on CEL measurements might cause the observed discrepancies, provided that inaccuracies in the effective recombination coefficients, or contamination of the recombination lines by processes such as stellar continuum resonance fluorescence, can be ruled out as the origins of the discrepancies. After observations of the [O~{\sc iii}] $\lambda$4931/$\lambda$4959 line ratio in seven Milky Way PNe and the Orion Nebula by Mathis \& Liu \cite{ml1999}, measurement errors in the CELs have also been ruled out as the culprit. Clearly the temperature and abundance discrepancies are real, and the two categories of emission lines probably arise from regions with different physical conditions. A detailed study of NGC\,6153 by Liu et al.
\cite{liu2000} yielded an ADF of about 10 for that object and, based on empirical composite models, Liu et al. \cite{liu2000} proposed a bi-abundance nebular model as a possible explanation of such a high abundance discrepancy. Ever since the detection of a Balmer jump temperature as low as 3560~K for M\,1-42, 5660~K lower than the [O~{\sc iii}] forbidden line temperature for the same nebula (Liu et al. \citealt{liu2001b}), it has become increasingly clear that PNe, at least those exhibiting large ADFs, must contain another component of previously unknown ionized gas. This component of gas mainly emits ORLs yet is essentially invisible in the strong CELs, because the electron temperature prevailing in the component is too low to excite any UV or optical CELs. The low-temperature condition in that component is probably due to the much enhanced cooling by the IR fine-structure lines of the heavy-element ions, which is a consequence of a very high metallicity (i.e. H-deficiency). This physical picture is supported by the detailed photoionization modeling of P\'{e}quignot et al. \cite{pequignot03} and Tylenda \cite{tylenda03}, and also by direct measurements of the average electron temperatures under which various types of emission lines are emitted (cf. Liu \citealt{liu2003}, \citealt{liu2006a},\,b and \citealt{liu2011} for reviews; Zhang et al. \citealt{zhang2004} for plasma diagnostics based on the H~{\sc i} recombination spectrum and Zhang et al. \citealt{zhang05a} for plasma diagnostics based on the He~{\sc i} recombination spectrum). The discovery of the dramatically high ADF value of $\sim$70 and the remarkably low Balmer jump temperature ($\sim$900~K) in the PN Hf\,2-2 by Liu et al. \cite{liu2006a} strengthens the validity of the bi-abundance nebular model. In order to reproduce the multi-waveband spectroscopic and imaging observations of NGC\,6153 and investigate the nature and origin of the H-deficient inclusions, Yuan et al.
\cite{yuan2011} constructed three-dimensional photoionization models, using the Monte Carlo photoionization code MOCASSIN developed by Ercolano et al. \cite{ercolano03}. Modeling of NGC\,6153 showed that chemically homogeneous models yielded small electron temperature fluctuations and failed to reproduce the strengths of the ORLs of heavy-element ions. In contrast, bi-abundance models incorporating a small amount of metal-rich inclusions ($\sim$1.3 per cent of the total nebular mass) are able to match all the observations within the measurement uncertainties. The metal-rich inclusions, cooled down to a very low temperature ($\sim$800~K) by the ionic IR fine-structure lines, dominate the emission of the heavy-element ORLs, but contribute almost nothing to the emission of most CELs. The current analyses of the optical recombination spectrum of NGC\,7009 are carried out in the context of the bi-abundance nebular model, and the results of the plasma diagnostics based on various types of emission lines and of the abundance determinations are consistent with that picture: the temperature sequence $T_\mathrm{e}$([O~{\sc iii}]) $\gtrsim$ $T_\mathrm{e}$(H~{\sc i}~BJ) $\gtrsim$ $T_\mathrm{e}$(He~{\sc i}) $\gtrsim$ $T_\mathrm{e}$(N~{\sc ii}~\&~O~{\sc ii}~ORLs) is consistent with the predictions of the bi-abundance model; the C$^{2+}$/H$^{+}$, N$^{2+}$/H$^{+}$, O$^{2+}$/H$^{+}$ and Ne$^{2+}$/H$^{+}$ ionic abundances derived from ORLs, using the new effective recombination coefficients and the electron temperature yielded by the N~{\sc ii} and O~{\sc ii} ORLs, are systematically higher, by about a factor of 5, than the corresponding abundances derived from CELs. It has been shown from optical observations that the ADF varies with position in several high-ADF PNe and is highest close to the central star. The ``cold'' inclusions should be cooled via the IR fine-structure lines of heavy-element ions.
Thus it is interesting to see whether the IR fine-structure line fluxes relative to the optical/UV CELs peak where the ADF peaks in PNe with large ADFs. Recently, Herschel and Hubble observations of NGC\,7009 have been carried out, and the results show that within the first $\sim$5~arcsec from the central star of NGC\,7009, the [O~{\sc iii}] 88\,$\mu$m/$\lambda$5007 flux ratio seems to increase towards the centre (R.~H. Rubin, private communication). With the very deep spectrum and the high-quality atomic data now available, we can derive more precise physical properties, e.g. the total mass and spatial distribution of the ``cold'', metal-rich inclusions in NGC\,7009, through three-dimensional photoionization modeling. Given that the ADFs found in PNe are all larger than unity, the metal-rich (H-deficient) inclusions are probably a real feature of PNe. However, the presence of those ``cold'' inclusions is not predicted by the current theories of stellar evolution. Iben, Kaler \& Truran \cite{iben83} proposed that an evolved star undergoing a very late helium flash (producing the so-called `born-again' PNe) may harbour H-deficient material, such as the H-deficient knots detected in the two `born-again' PNe Abell\,30 and Abell\,58. Wesson, Liu \& Barlow \cite{wlb2003} and Wesson et al. \cite{wesson08} found that the H-deficient knots in Abell\,30 and Abell\,58 are O-rich, in contradiction with the expectation of the `born-again' scenario. The ADFs are found to be high in PNe with Wolf-Rayet central stars, and that can be explained by the scenario of a single post-asymptotic giant branch (post-AGB) star experiencing a late helium shell flash; however, not all PNe with large ADFs have an H-deficient central star, as in the case of NGC\,7009 studied in the current paper. Garc\'{i}a-Rojas, Pe\~{n}a \& Peimbert \cite{garcia09} observed the faint ORLs in Galactic PNe with [WC] nuclei and found results that argue against the presence of H-deficient knots coming from a late thermal pulse event.
De~Marco \cite{demarco08} suggested a binary scenario to explain the observations that are in contradiction with the theory of single post-AGB evolution. In this respect, it is interesting to note that Abell\,58 has experienced a nova-like outburst (Clayton \& De~Marco \citealt{cd97}). Lutz et al. \cite{lutz98} have also found the central star of Hf\,2-2, a PN with the largest ADF value ($\sim$70, Liu et al. \citealt{liu2006a}) ever found for an emission-line nebula, to be a close binary system. An alternative scenario for the origin of the H-deficient inclusions is that they evolve from metal-rich planetary material, such as icy planetesimals left over from the debris of the planetary system of the progenitor star of the PN (Liu \citealt{liu2003}, \citealt{liu2006a}). Both high spectral- and spatial-resolution observations in the future, in combination with detailed three-dimensional photoionization modeling as has been carried out for NGC\,6153 (Yuan et al. \citealt{yuan2011}), will help to reveal the possible astrophysical origins of the H-deficient inclusions in NGC\,7009. \section{\label{summary} Summary and conclusions} Nearly two decades after the first analysis of the O~{\sc ii} optical recombination spectrum of the bright PN NGC\,7009 (LSBC), once again we focus on the rich ORLs of heavy element ions detected in the very deep CCD spectrum of the same object. Thanks to great advances in observational techniques, which enable accurate detection of the weak ORLs of heavy element ions, and to the steady improvements in atomic data, especially the recombination theories of heavy element ions under the physical conditions of photoionized nebulae, it is now clear that the long-standing dichotomy between nebular plasma diagnostics and abundance determinations using CELs on the one hand and ORLs on the other is real, rather than caused by, e.g., observational uncertainties or errors in atomic data. 
Unremitting efforts in nebular research over the past 40 years have gradually led to a new understanding of the problems in nebular astrophysics. Various mechanisms (e.g. temperature fluctuations and/or density inhomogeneities, abundance inhomogeneities, and non-Maxwell-Boltzmann equilibrium electrons, e.g. the $\kappa$-distribution of electron energies) have been proposed to explain the discrepancies in plasma diagnostics and abundance determinations, and the debate over these mechanisms is still ongoing. In the context of the bi-abundance nebular model postulated by Liu et al. \cite{liu2000}, we present a comprehensive and critical analysis of the rich optical recombination spectrum of NGC\,7009. Transitions from individual multiplets of heavy element ions, e.g. C~{\sc ii}, N~{\sc ii}, O~{\sc ii} and Ne~{\sc ii}, are checked carefully for line blending, and accurate dereddened line fluxes of the most prominent transitions of those ions are obtained through multi-Gaussian profile fitting. In addition to the accurate observations of ORLs, we have completed new calculations of the effective recombination coefficients for the N~{\sc ii} recombination spectrum. The new effective recombination coefficients for the nebular O~{\sc ii} lines calculated by P.~J. Storey (unpublished) help to enlarge our current atomic dataset for nebular recombination line study. Both calculations were carried out in the intermediate coupling scheme, and have taken into account the density-dependence of the relative populations of the ground fine-structure levels of the recombining ions (i.e. N$^{2+}$ $^{2}$P$^{\rm o}_{1/2}$ and $^2$P$^{\rm o}_{3/2}$ for N~{\sc ii}, and O$^{2+}$ $^{3}$P$_{0}$, $^{3}$P$_{1}$ and $^{3}$P$_{2}$ for O~{\sc ii}). The new effective recombination coefficients of N~{\sc ii} and O~{\sc ii} are of high quality and make nebular density diagnostics using the ORLs of heavy element ions possible for the first time. 
The observed relative intensities of ORLs are compared with theoretical predictions based on the new effective recombination coefficients. At the electron temperature ($T_\mathrm{e}$ = 1000~K) yielded by the ORL ratios of N~{\sc ii} and O~{\sc ii}, the predicted relative intensities of ORLs agree with the observed values. Plasma diagnostics based on the best observed N~{\sc ii} and O~{\sc ii} ORLs (i.e. the $I$(M3~$\lambda$5679)/$I$(M39b~$\lambda$4041) ratio of N~{\sc ii} and the $I$(M1~$\lambda$4649)/$I$(M48a~$\lambda$4089) ratio of O~{\sc ii}) both yield electron temperatures close to 1000~K, which is lower than those derived from the CEL ratios by nearly one order of magnitude. The low temperatures yielded by the N~{\sc ii} and O~{\sc ii} ORLs indicate that the recombination lines of heavy element ions originate from very cold regions. The electron temperatures derived from the intensity ratios of the O~{\sc ii} high-excitation recombination lines M15~$\lambda$4591 and M36~$\lambda$4189, which are formed from recombination of the excited-state parent (i.e. O$^{2+}$ 2p$^{2}$\,$^{1}$D), relative to the O~{\sc ii} M1~$\lambda$4649 line agree with each other ($\sim$3600~K), which is consistent with the existence of very cold ($\leq$1000~K) inclusions in the nebula. The electron temperature ($\sim$3000~K) yielded by the C~{\sc ii} $I$(M28.01~$\lambda$8794)/$I$(M6~$\lambda$4267) dielectronic-to-radiative recombination line ratio also agrees with the conjecture of very cold inclusions. The C$^{2+}$/H$^+$, N$^{2+}$/H$^+$, O$^{2+}$/H$^+$ and Ne$^{2+}$/H$^+$ ionic abundance ratios derived from ORLs, using the new effective recombination coefficients of N~{\sc ii} and O~{\sc ii}, are consistently higher than the corresponding values derived from CELs, by about a factor of 5. 
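For reference, the abundance discrepancy factor (ADF) quoted throughout follows its standard definition in the nebular literature (not restated elsewhere in this summary): for an ionic species X$^{i+}$,
\begin{equation}
\mathrm{ADF}(\mathrm{X}^{i+}) \equiv \frac{(\mathrm{X}^{i+}/\mathrm{H}^{+})_{\rm ORL}}{(\mathrm{X}^{i+}/\mathrm{H}^{+})_{\rm CEL}},
\end{equation}
so that the factor-of-5 discrepancies found here for C$^{2+}$, N$^{2+}$, O$^{2+}$ and Ne$^{2+}$ correspond to ADF $\approx$ 5 in NGC\,7009.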
An electron temperature of 1000~K, which is yielded by the best observed N~{\sc ii} and O~{\sc ii} recombination line ratios and which consequently presumably represents the physical conditions prevailing in the regions where the heavy element ORLs arise, has been assumed throughout the recombination-line abundance determinations. The results of the plasma diagnostics and abundance determinations for NGC\,7009 point to the existence of ``cold'', metal-rich inclusions in NGC\,7009, and are thus consistent with the context of the current spectral analyses, i.e. the bi-abundance nebular model. The recombination line analysis for NGC\,7009 also helps to assess the new atomic data. The agreement between the observed and predicted relative intensities of the N~{\sc ii} and O~{\sc ii} ORLs indicates that the current calculations of the recombination spectra of those two ionic species represent well the physical processes, i.e. radiative and dielectronic recombination, under nebular conditions. Our nebular analysis also shows that the recombination lines of different multiplets, or of different $J$-resolved fine-structure components of a multiplet, yield consistent ionic abundances (e.g. N$^{2+}$/H$^{+}$ and O$^{2+}$/H$^{+}$). This is further evidence that the new effective recombination coefficients are reliable. However, the Ne$^{2+}$/H$^{+}$ abundance ratio derived from the total intensity of the 4f\,--\,3d transitions is higher, by nearly 0.2 dex, than the average value derived from the multiplets of the 3\,--\,3 configuration. This indicates that new calculations of the effective recombination coefficients for the Ne~{\sc ii} lines are needed. \section*{Acknowledgements} We thank Dr. P.~J. Storey for making the effective recombination coefficients for the O~{\sc ii} lines and the Ne~{\sc ii} 4f\,--\,3d transitions available prior to publication. We thank Dr. R.~H. Rubin for fruitful discussions. We would also like to thank Dr. O. 
De~Marco for her valuable comments and suggestions which have greatly improved the quality of this paper. This work is supported by the Natural Science Foundation of China (No. 10933001).
\section{Introduction} \label{intro} The problem of finding a path in road networks is of great importance. Given the significance of the problem area, several researchers (e.g., \cite{tdrive, jing1998hierarchical,Demiryurek2011,Gunturi2015,byang18,Delling2008hb}) have explored it from different aspects. Among these, the most fundamental is computing a path between a source and a destination under a given preference metric. The preference metric of choice has typically been the minimization of distance (e.g., \cite{jing1998hierarchical}), time (e.g., \cite{Demiryurek2011,byang18}), or fuel (e.g., \cite{yanli2018}). However, the increasing proliferation of mobility-based Big Data \cite{mckinseynew,alireem2015} enables one to ask much more nuanced routing queries such as: ``Determine a path which goes through wide roads while constraining the total length of the path to be at most 5km longer than the shortest path'' or ``Determine a path which can be easily followed (see \cite{ramneekdexa}) while constraining the total length of the path to be at most 5km longer than the shortest path.'' The central theme of both queries is that they involve both an optimizing metric and a constraining metric. The aim is to determine a path that maximizes the optimizing metric (e.g., road width, navigability) while satisfying the constraining metric (e.g., distance). We refer to such problems as \emph{Constrained Path Optimization (CPO)} problems. \noindent \textbf{Importance of Problem:} \emph{Constrained Path Optimization} problems have recently gained interest in the database system community (e.g., \cite{ramneekdexa,ramneekj,shahabi2015,shahabi2017}) from the perspective of developing scalable solutions for some ``instantiations'' of the constrained path optimization problem. 
For instance, in \cite{shahabi2015,shahabi2017}, the goal was to determine a path that maximized the scenery (i.e., passed more beautiful viewpoints) while constraining the total length of the path in terms of distance. In contrast, \cite{ramneekdexa,ramneekj} focused on maximizing the ``navigability'' of the route (in terms of easily identifiable landmarks or easily navigable roads) while constraining the total distance traveled. Thus, it is conceivable that several other potential ``instantiations'' of the constrained path optimization problem will come forward in the modern age of Big Data, where different road parameters (e.g., road type, road quality, etc.) are increasingly being recorded. \subsection{Computational Challenges} \label{chal} An instance of the constrained path optimization (CPO) problem can be reduced to an instance of the arc orienteering problem (AOP), which is known to be NP-hard \cite{exact1, approximation1, Chekuri2005}. Moreover, any typical urban road network has hundreds of thousands of road segments and road intersections. Thus, scalability is vital for any potential algorithm for the CPO problem. More specifically, we believe that for any navigational system based on the CPO problem to be used in real life, the underlying path-finding algorithm should have a running time of at most a few seconds. \noindent \textbf{Challenges in adapting minimization-based approaches for the CPO problem:} There is a large body of work on the minimization of a preference metric (e.g., \cite{cola,Demiryurek11,foresthop}). However, it is important to note that these approaches cannot be easily modified to solve the CPO problem. A straightforward approach to reducing a maximization problem such as the CPO problem to a minimization problem is to change the sign of the score values (i.e., make them negative). 
However, given that most of the minimization-based approaches (e.g., \cite{cola,Demiryurek11,foresthop}) inherently use a variant of Dijkstra's algorithm for optimization, they would not be able to work with negative score values. Another approach is to use reciprocals of the score values on edges, i.e., replace each score value $x$ with $\frac{1}{x}$, and then use a minimization-based approach. However, this reduction does not yield a one-to-one mapping to the original maximization-based CPO problem instance. The challenge of this reciprocal-based approach was also detailed in \cite{ramneekj}. \subsection{Our Contributions} This paper makes the following contributions:\\ \noindent (a) To the best of our knowledge, this paper is the first to explore the challenges of developing a parallel approach for the Constrained Path Optimization (CPO) problem on road networks. \noindent (b) We propose a novel parallel approach, called the \emph{Parallel-Spatial-RG} algorithm, for the CPO problem on road networks. The \emph{Parallel-Spatial-RG} algorithm intelligently performs task assignment and obtains good CPU utilization while avoiding deadlocks. \noindent (c) We experimentally evaluate the \emph{Parallel-Spatial-RG} algorithm using real road networks and compare it with existing alternative techniques for the CPO problem. \noindent (d) Our experimental results show significantly better solution quality than the current heuristic algorithms \cite{ramneekdexa,ramneekj} while maintaining comparable running times. \noindent (e) Our experimental results also show that our proposed algorithm is able to \textbf{load balance and demonstrate an almost linear speed-up} with an increase in the number of cores.\\ \noindent \textbf{Outline:} The rest of the paper is organized as follows: Section \ref{bc} presents some basic concepts and formally defines the CPO problem. Section \ref{pa} presents our proposed approach. 
We evaluate our proposed approach and compare it with alternatives in Section \ref{exp}. Finally, we conclude the paper in Section \ref{con}. \section{Related Work} \label{rel-work} The research literature most relevant to our problem consists of the following: (a) work done in the area of theoretical computer science (e.g., \cite{Chekuri2005}), (b) work done in the area of database systems (e.g., \cite{ramneekdexa,shahabi2015,shahabi2017,ramneekj}), (c) work done in the area of parallel algorithms for shortest paths (e.g., \cite{dmsr,subgcenteric2,dsChakaravarthy}), and (d) work done in the area of the constrained shortest path problem (e.g., \cite{cola,cola-gpu,foresthop,old-csp}). Researchers from theoretical computer science have focused on developing approximation algorithms for the orienteering problem. Note that the orienteering problem can be reduced to the arc orienteering problem \cite{GAVALAS2015313}. The authors of \cite{Chekuri2005} proposed a quasi-polynomial-time algorithm that yields an $O(\log OPT)$ approximation for the orienteering problem. While the approximation ratio is impressive, the execution time of this algorithm is very high due to its high time complexity. Our adaptation of \cite{Chekuri2005} for the CPO problem yielded running times in hours (or, in some cases, days) on a typical road network dataset, which is not feasible in the real world. Database system researchers have been working on variants of the CPO problem (e.g., \cite{ramneekdexa,shahabi2015,shahabi2017,ramneekj}) from a scalability perspective. Their approaches already have running times in milliseconds, and parallelizing them cannot improve their solution quality. In contrast, our experimental analysis shows that our proposed approach obtains significantly better solution quality while maintaining comparable running times (more details in Section \ref{exp}). 
Research work done in the area of parallel algorithms for shortest paths (e.g., \cite{dmsr,subgcenteric2,gunturi2019,dsChakaravarthy}) has focused on developing path-finding algorithms that optimize routes based on a single preference metric (e.g., distance or time). Hence, these cannot be used for our CPO problem, which has an inherently different computational structure as it uses more than one preference metric. Lastly, it is important to acknowledge the work done on the constrained shortest path (CSP) problem \cite{cola,cola-gpu,foresthop,old-csp} while proposing a solution for the CPO problem. The CSP problem aims to determine the shortest path between a source-destination pair subject to certain constraints. Note that the CSP problem is a \emph{minimization-based problem}, and moreover, the existing techniques \cite{cola,cola-gpu,foresthop,old-csp} are based on Dijkstra's algorithm. Therefore, as mentioned in Section \ref{chal}, we cannot use them to solve a CPO problem instance. \section{Basic Concepts} \label{bc} \noindent \textbf{Road network:} A road network is represented as a directed graph $G=(V,E)$. Here, nodes (set $V$) represent the road intersections, whereas directed edges (set $E$) represent the road segments. Each edge in $E$ is associated with a cost value ($> 0$) and a score value ($\geq 0$). In this paper, the cost of an edge $e=(x,y)$ corresponds to the Euclidean distance between $x$ and $y$. \noindent \textbf{Optimizing Metric ($\Gamma()$):} Given any directed path $P_i$, $\Gamma(P_i)$ returns the total ``score'' collected by $P_i$. In its simplest form, $\Gamma(P_i)$ can be defined as the sum of the score values of all the edges constituting the path $P_i$. \noindent \textbf{Constraining Metric ($\Phi()$):} Given any directed path $P_i$, $\Phi(P_i)$ returns the total ``cost'' consumed by $P_i$. We define $\Phi(P_i)$ as the sum of the cost values of all the edges constituting the path $P_i$. 
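The two metrics $\Gamma()$ and $\Phi()$ are simple sums over edge attributes and can be sketched as follows; the toy graph, edge names, and values below are our own illustration, not from the paper:

```python
# Toy road network: each directed edge carries a cost (> 0) and a
# score (>= 0). All values here are illustrative placeholders.
edges = {
    ('a', 'b'): {'cost': 2.0, 'score': 5.0},
    ('b', 'c'): {'cost': 1.5, 'score': 0.0},
    ('a', 'c'): {'cost': 3.0, 'score': 1.0},
}

def gamma(path):
    """Optimizing metric Gamma: total score collected along the path."""
    return sum(edges[e]['score'] for e in path)

def phi(path):
    """Constraining metric Phi: total cost consumed by the path."""
    return sum(edges[e]['cost'] for e in path)

# The detour a -> b -> c collects more score (5.0) than the direct
# edge a -> c (1.0), at a slightly higher cost (3.5 vs 3.0).
```

The tension between the two metrics is already visible in this tiny instance: the higher-scoring path is also the costlier one, which is exactly why the CPO problem imposes a budget on $\Phi()$.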
\subsection{Problem definition} \noindent \textbf{Input:} The input consists of the following: \noindent (1) A road network, $G=(V,E)$, where each node $v \in V$ is associated with certain spatial coordinates. \noindent (2) A source $s$ $\in$ $V$ and a destination $d$ $\in$ $V$. \noindent (3) A positive value $overhead$ specifying the maximum permissible cost over the cost of the minimum cost path from $s$ to $d$. This paper uses the term $budget$ to denote the sum of $overhead$ and the cost of the minimum cost path between $s$ and $d$. \noindent \textbf{Output:} A directed path $P^*$ between $s$ and $d$. \noindent \textbf{Objective function:} Maximize $\Gamma(P^*)$ \noindent \textbf{Constraint:} $\Phi(P^*) \leq budget$ \section{Proposed Approach} \label{pa} This section details our proposed parallel algorithm for solving the CPO problem. In Section \ref{sec:filter}, we present our Spatial-RG algorithm, inspired by the approximation algorithm proposed in \cite{Chekuri2005} for the orienteering problem. The Spatial-RG algorithm has been optimized to solve CPO problem instances on road networks and is much more amenable to a good \textbf{load-balanced} parallelization (details in Section \ref{sec:parallel}). In Section \ref{sec:parallel}, we develop a parallel version of the Spatial-RG algorithm. \subsection{Spatial Recursive Greedy Approach for the CPO problem} \label{sec:filter} The key idea is to first find an initial seed path (the shortest path) between the given source and destination nodes and then recursively determine better replacements for it. The initial seed path can be determined using any standard shortest path algorithm. In our implementation, we used the A* algorithm with the \textit{Euclidean distance} as the heuristic function \cite{gunturibook}. The algorithm continues the recursion up to a maximum recursion depth $\theta$ (an input parameter given to the algorithm). 
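To make the budget in the problem definition concrete, here is a minimal sketch (our own illustrative code, not the paper's implementation, and using plain Dijkstra for brevity where the paper's implementation uses A*) that computes $budget$ as the minimum-cost path cost plus the overhead:

```python
import heapq

def min_cost(graph, s, d):
    """Dijkstra over a {node: [(neighbour, cost), ...]} adjacency map;
    returns the cost of the minimum-cost s -> d path (inf if none)."""
    dist = {s: 0.0}
    done = set()
    pq = [(0.0, s)]
    while pq:
        c, u = heapq.heappop(pq)
        if u in done:
            continue
        done.add(u)
        if u == d:
            return c
        for v, w in graph.get(u, []):
            if v not in done and c + w < dist.get(v, float('inf')):
                dist[v] = c + w
                heapq.heappush(pq, (c + w, v))
    return float('inf')

# Illustrative instance: the shortest s -> d path costs 5.0 (via 'a'),
# so with overhead 1.5 any feasible P* must satisfy Phi(P*) <= 6.5.
graph = {'s': [('a', 2.0), ('d', 6.0)], 'a': [('d', 3.0)]}
overhead = 1.5
budget = min_cost(graph, 's', 'd') + overhead
```

Any path whose $\Phi$ value exceeds this budget is infeasible, regardless of how much score it collects.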
\\ \noindent \textbf{Details of the Spatial-RG algorithm:} A pseudocode of the algorithm is presented in Algorithm \ref{alg2}. Each call to the algorithm primarily takes the following input: (i) a ``source node'' $u$, (ii) a ``destination node'' $v$ and (iii) the remaining budget $\beta$. In the first call to the Spatial-RG algorithm, $u$, $v$, and $\beta$ are set according to the input values given while defining the CPO query to be processed. Thereafter, $u$, $v$, and $\beta$ change during the course of the recursion calls. In each recursion, Algorithm \ref{alg2} first iterates (outer loop on line 9) over all edges $e$ which satisfy the following two filters: (a) $\Gamma(e) > 0$ and, (b) $e$ is inside the ellipse formed with $u$ and $v$ as the foci and $\beta$ as the major axis length. It is important to note that filter (a) may affect the quality of the final solution. Nevertheless, we use it to gain performance. Moreover, our experiments also reveal that Spatial-RG still has significantly better solution quality than the current heuristics for the CPO problem. In contrast, filter (b) maintains correctness. Its correctness was already explained in \cite{ramneekj,shahabi2015}, and thus we do not include the proof here. Suppose an edge $e=(x,y)\in E$ is considered in the outer loop (line 9). This means that the current path between $u$ and $v$ would be replaced by a path which is the combination of the following three sub-paths: (1) a path $P_1$ between $u$ and $x$, (2) the edge $e=(x,y)$ and, (3) a path $P_2$ between $y$ and $v$. Both $P_1$ and $P_2$ are determined recursively inside the inner loop on line 11, as described next. The inner loop of Algorithm \ref{alg2} loops over a set of possible budget values. In each iteration of this inner loop, $P_1$ and $P_2$ are determined recursively for different budget values (lines 12 and 13). 
For each pair of $P_1$ and $P_2$ (as returned by their respective recursive calls), we determine $P_{new}$ as $P_1 \cup e \cup P_2$ (joining $P_1$, $e$, and $P_2$ in that order). We store $P_{new}$ if it has a better score value than the original path between $u$ and $v$ (determined on line 1). Thus, at the termination of the inner while loop, we would have the ``best possible'' $P_1$ and $P_2$, which can be attached to $e$ (the current edge being considered in the outer loop) to obtain the ``best possible'' replacement for the current path between $u$ and $v$. It is important to note that the budget values sent into the recursive calls for determining $P_1$ and $P_2$ (lines 12 and 13) are not entirely independent of each other. More precisely, if a budget value of $b$ is given for determining $P_1$, then a maximum budget of $\beta-b-\Phi(e)$ can be given for determining $P_2$. Lastly, the set of feasible budget values ($b$) for the inner while loop ranges from $b=Euclidean\_Distance(u,x)$ to $b=\beta - \Phi(e)$ - $Euclidean\_Distance(y,v)$. The correctness of these values can be easily proved by considering the fact that the \textit{shortest network distance over a graph is always greater than or equal to the Euclidean distance; details are omitted due to lack of space}. 
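The ellipse filter and the feasible budget interval can be sketched as follows (illustrative helper functions of our own; the planar coordinates and function names are assumptions, not from the paper):

```python
import math

def euclid(p, q):
    """Euclidean distance between two coordinate pairs."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def inside_ellipse(u, v, beta, x, y):
    """An edge (x, y) can lie on a u -> v path of cost <= beta only if
    both of its endpoints fall inside the ellipse with foci u, v and
    major axis length beta (sum of focal distances <= beta)."""
    return (euclid(u, x) + euclid(x, v) <= beta and
            euclid(u, y) + euclid(y, v) <= beta)

def budget_range(u, x, y, v, beta, cost_e):
    """Feasible budgets b for the u -> x sub-path P1. Since Euclidean
    distances lower-bound network distances, budgets outside [lo, hi]
    cannot yield a feasible (P1, P2) pair for edge e = (x, y)."""
    lo = euclid(u, x)
    hi = beta - cost_e - euclid(y, v)
    return lo, hi  # the interval is empty when lo > hi
```

When the interval is empty, the edge can be skipped entirely, which is exactly the pruning effect the Euclidean lower bounds provide.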
\begin{algorithm}[ht] \caption{Spatial-RG Algorithm} \label{alg2} \begin{flushleft} \textbf{Input:} (a) Input graph $G(V,E)$; (b) source node $u$; (c) destination node $v$; (d) remaining budget $\beta$; (e) current $level$; (f) maximum recursion depth $\theta$.\\ \textbf{Output:} (a) A directed path $P$ between $u$ and $v$ \end{flushleft} \begin{algorithmic}[1] \STATE $P$ $\leftarrow$ minimum cost path between $u$ and $v$ \IF{$\Phi(P) > \beta$} \STATE Return Null \ENDIF \IF{$level = \theta$} /*Maximum recursion depth reached*/ \STATE Return $P$ \ENDIF \STATE $s_p$ $\leftarrow$ $\Gamma(P)$ /*stores value of optimizing metric of $P$ */ \FORALL{edge $e=(x,y)\in E$ with $\Gamma(e) > 0$ and $e$ inside ellipse(u,v,$\beta$)} \STATE $b\leftarrow$ Euclidean\_Distance(u,x) \WHILE{$b \leq \beta-\Phi(e)-$Euclidean\_Distance$(y,v)$} \STATE $P_1 \leftarrow$ Spatial-RG $(u,x,b,level+1)$ \STATE $P_2 \leftarrow$ Spatial-RG $(y,v,\beta-b-\Phi(e),level+1)$ \STATE $P_{new} \leftarrow P_1 \cup e \cup P_2$ \IF{$(P_1 \cap P_2) = \emptyset$ \& $\Gamma(P_{new}) > s_p$} \STATE $P \leftarrow P_{new}$ and $s_p$ $\leftarrow$ $\Gamma(P_{new})$ \ENDIF \STATE $b\leftarrow b+1$ \ENDWHILE \ENDFOR \STATE Return $P$ \end{algorithmic} \end{algorithm} \noindent\textbf{Spatial Indexing for implementing the Ellipse pruning:} We use \emph{Uniform Grid Indexing} \cite{grid} to efficiently determine the edges present in $ellipse(u,v,\beta)$ on line 9 of Algorithm \ref{alg2}. This is done using the concept of a spatial range query. \subsection{Parallel Algorithm for the CPO problem} \label{sec:parallel} While the Spatial-RG algorithm uses several spatial filtering strategies and spatial indexing to gain performance, its running time (as seen in our experiments) was still impractical for meeting the real-world expectation of getting a solution for the CPO problem within a few seconds. 
To this end, a parallel version of the Spatial-RG algorithm was developed, which can harness the increasingly available multi-core systems to \textbf{improve execution time while still maintaining the same solution quality as Spatial-RG.} This section describes the proposed parallel algorithm \emph{Parallel-Spatial-RG} for the CPO problem. On close inspection of \emph{Spatial-RG} (Algorithm \ref{alg2}), one may realize that the algorithm has inherent parallelism at the following three places: (a) the outer loop (for loop on line 9 in Algorithm \ref{alg2}), (b) the recursion calls on lines 12 and 13 (in Algorithm \ref{alg2}), and (c) the inner while loop (on line 11 in Algorithm \ref{alg2}). In the proposed algorithm, only options (a) and (b) are considered for parallelization. The while loop (option (c)) on line 11 in Algorithm \ref{alg2} is not considered for parallelization, as the thread which creates the tasks by unrolling the loop on line 9 in Algorithm \ref{alg2} would have no other work except the creation of tasks corresponding to the loop on line 11. As a result, this thread would sit idle while other threads undertake the job of iterating over the while loop. We first discuss our proposed technique for parallelizing the outer loop (for loop on line 9 in Algorithm \ref{alg2}), and then the recursion calls on lines 12 and 13. We start our discussion with an intuitive naive approach and highlight its limitations. Following that, we describe the proposed \emph{Parallel-Spatial-RG} algorithm. \begin{figure}[ht] \centering \includegraphics[height=30mm,width=90mm]{images/Search_space.png} \caption{Illustrating the search space of the Spatial-RG algorithm.} \label{figss} \end{figure} \subsubsection{\textbf{Challenges of a Naive Approach (Recursion Unpacking):}}\label{p-naive} Note that \emph{Spatial-RG} (Algorithm \ref{alg2}) has recursion calls inside a while loop. As a result, the search space of the algorithm is non-linear. 
More specifically, the search space of the \emph{Spatial-RG} algorithm resembles a \emph{tree} structure, and the algorithm essentially undertakes a depth-first traversal of this \emph{tree-structured} search space to obtain a solution. Figure \ref{figss} illustrates this \emph{tree-structured} search space for a hypothetical instance of Algorithm \ref{alg2}. In this instance, at the first level, the loop on line 9 in Algorithm \ref{alg2} has 4 edges ($e_1, e_2, e_3, e_4$). For each of these edges, the algorithm first has to execute the while loop (between lines 11 and 20 in Algorithm \ref{alg2}), and inside this while loop, the recursion calls take place. An intuitive way of parallelizing the \emph{Spatial-RG} algorithm would be to use multiple threads to explore its \emph{tree-structured} search space. As a naive approach, one can employ independent threads to explore the sub-trees (in Figure \ref{figss}) under the root in parallel and then determine the best solution amongst the individual solutions obtained in the different sub-trees. However, such an approach may not always guarantee good CPU utilization. For instance, consider an 8-core system (capable of running 16 threads). In the search space illustrated in Figure \ref{figss}, one may assign the sub-trees underneath the root to 4 different threads. Such an approach would leave the remaining 12 threads unused. Moreover, it may also happen that the work across different sub-trees is not uniformly distributed, due to variation in the properties (density of roads and their scores) of road segments across different spatial locations. Consequently, some threads may finish their work much sooner than others, causing even worse CPU utilization. A logical extension of this naive approach would be to determine an appropriate level in the \emph{tree-structured} search space (at runtime) based on the number of threads available and then continue exploring in parallel. 
However, this approach also has its limitations. Firstly, on any typical road network, \emph{Spatial-RG}'s search space tree has a high fan-out factor. In other words, the number of ``nodes'' increases almost exponentially with each level. For instance, level 1 may have 10 nodes, and level 2 may have 80 nodes. Secondly, the search space of \emph{Spatial-RG} (on any CPO problem instance) cannot be pre-determined precisely, as it is heavily dependent on the spatial distribution of the edges in the input dataset. \subsubsection{\textbf{Details of the Parallel-Spatial-RG algorithm:}} As mentioned in the previous subsection, any approach which is solely dependent on unpacking the recursion (followed by simultaneous independent exploration by threads) would have poor CPU utilization. To this end, we use the following key strategy: in each recursion call, the current thread is allowed to create further tasks for exploring its designated search space, but those tasks may not always be executed in parallel. More specifically, a separate task is created for each iteration of the outer loop (loop on line 9 in Algorithm \ref{alg2}) and for the recursive calls on lines 12 and 13 of the \emph{Spatial-RG} algorithm. These tasks are picked up by another thread only if it is idle. Otherwise, the tasks are executed serially by the thread that created them (details provided later in the section). It is important to note that the tasks created by different threads \emph{should not be put in a global job pool}. Such an approach may lead to \emph{deadlock}, as illustrated in the following example. Consider a case where the outer loop (line 9 of Algorithm \ref{alg2}) of \emph{Spatial-RG} and the recursion calls (inside the inner loop) are parallelized. In each recursion call, the current thread creates a separate task for each outer loop iteration. Along with that, the parallelization of the recursion calls also takes place. 
So, the tasks created previously are further divided into two sub-tasks. All these tasks are put into a global job pool, and whenever a free thread is found, it is assigned a job from the global job pool. \begin{figure}[ht] \centering \includegraphics[width=90mm,height=30mm]{images/Deadlock-Diagram.png} \caption{Illustrating deadlock in the naive approach for parallelizing the Spatial-RG algorithm.} \label{figdl} \vspace{-4mm} \end{figure} Figure \ref{figdl} illustrates a case of deadlock in such an approach. To simplify the example, we consider a very small instance and a limited number of resources (2 threads). In Figure~\ref{figdl}, $R$ is the recursion base. At the start of execution, $R$ is picked up by thread $T0$. Then $T0$ creates jobs corresponding to all the edges in the outer loop and puts those jobs into the global job pool. In this example, for the sake of simplicity, we assume that a single job is created, corresponding to a single edge $E1$ (in the outer loop). Now $T0$ goes into the waiting stage and waits for $E1$ to complete. The job corresponding to $E1$ is then picked up by $T1$, the free thread available in the thread pool. Following this, $T1$ creates two tasks $R1$ and $R2$ (corresponding to the two recursive calls in the inner loop) for $E1$ and puts them into the same global job pool. Now $T1$ also goes into the waiting stage and waits for its recursive tasks ($R1$ and $R2$) to complete. At this stage, there are no free threads in the thread pool. Each thread is waiting for its respective jobs to finish, and the jobs in the job pool are waiting for free threads. In Figure~\ref{figdl}, this situation is explained using two \emph{hold and wait} cycles, which is one of the well-known conditions for \emph{deadlock}. One cycle is formed by the thread pool, $E1$, and $R1$, while the other is formed by the thread pool, $E1$, and $R2$. 
In each of the cases, $E1$ holds one resource (a thread) and waits for its recursive task ($R1$ or $R2$) to complete; the recursive tasks ($R1$ and $R2$) are waiting for $R$ or $E1$ to release one of the resources (threads), while $R$ is waiting for $E1$ to finish. So the algorithm cannot proceed further, resulting in a \emph{deadlock}. One may obtain slightly better performance by making each thread hold some tasks for itself and putting the remaining ones in the global job pool. However, this strategy still cannot guarantee deadlock-free execution. Moreover, for this approach to work, one would have to decide the ``optimal number'' of tasks to be performed serially by each thread before any of their tasks starts execution. Determining this ``optimal number'' precisely would be challenging due to the variation in properties (density of roads and their scores) of road segments across different spatial locations. \begin{algorithm}[ht] \caption{Parallel-Spatial-RG($u, v, \beta, level, \theta$):} \label{alg4} \begin{algorithmic}[1] \STATE $P$ $\leftarrow$ minimum cost path between $u$ and $v$. \IF{$\Phi(P) > \beta$} \STATE Return Null \ENDIF \IF{$level = \theta$} /*Maximum recursion depth reached*/ \STATE Return $P$ \ENDIF \STATE Create a job pool $JP$ \STATE Create a result set $RS$ /*$RS$ would store results of jobs in $JP$*/ \FORALL{edge $e=(x,y)\in E$ with $\Gamma(e) > 0$ and $e$ inside the ellipse(u,v,$\beta$)} \STATE create job for Solve($P, u, v, e, \beta, level, \theta$) \STATE push the job into $JP$.
\ENDFOR \STATE $\Lambda \leftarrow$ Free threads in the $threadpool$ \FORALL{threads $\lambda_i \in \Lambda$} \STATE Assign a job from $JP$ to $\lambda_i$ \\ /*On finishing the job, $\lambda_i$ would record its solution in $RS$*/ \ENDFOR \WHILE{there exists an unassigned job $J_i$ in $JP$} \IF{there is a free thread $\lambda_j$ in $threadpool$} \STATE Assign $J_i$ to $\lambda_j$ \ELSE \STATE the current ``primary'' thread picks the job $J_i$ \ENDIF \ENDWHILE \WHILE{there exists an unfinished job in $JP$} \STATE Wait for completion. \ENDWHILE \STATE $P_{new} \leftarrow$ Best solution in $RS$ \IF{$\Gamma(P_{new}) > \Gamma(P)$} \STATE $P \leftarrow P_{new}$ \ENDIF \STATE Return $P$ \end{algorithmic} \end{algorithm} We address the problem of deadlocks through the following strategy: Each thread maintains its own \emph{local job pool}. After creating jobs, the current thread looks for free threads. If no free threads are found, the current thread starts picking up jobs from its own job pool (while actively looking for free threads). \begin{algorithm}[ht] \caption{Solve($P, u, v, e, \beta, level, \theta$):} \label{alg5} \begin{algorithmic}[1] \STATE $b\leftarrow$ Euclidean\_Distance(u,x) /*edge $e=(x,y)\in E$*/ \WHILE{$b \leq \beta-\Phi(e)-$Euclidean\_Distance$(y,v)$} \STATE Create a job pool $JP2$ \STATE create a job $k_1$ for Parallel-Spatial-RG$(u,x,b,level+1, \theta)$ \STATE add $k_1$ to $JP2$ \STATE create a job $k_2$ for Parallel-Spatial-RG$(y,v,\beta-b-\Phi(e),level+1, \theta)$ \STATE add $k_2$ to $JP2$ \STATE Create a result set $RS2$ /*$RS2$ would store results of jobs in $JP2$*/ \WHILE{there exists an unassigned job $k$ in $JP2$} \IF{there is a free thread $\lambda_j$ in $threadpool$} \STATE Assign $k$ to $\lambda_j$ \ELSE \STATE Current ``primary'' thread picks the job $k$ from $JP2$ \ENDIF \ENDWHILE \WHILE{there exists an unfinished job in $JP2$} \STATE Wait for completion. \ENDWHILE \STATE $P_1 \leftarrow$ path returned by job $k_1$.
\STATE $P_2 \leftarrow$ path returned by job $k_2$. \STATE $P_{new} \leftarrow P_1 \cup e \cup P_2$ /*Join $P_1$ and $P_2$ using edge $e$*/ \IF{$(P_1 \cap P_2) = null$ \& $\Gamma(P_{new}) > \Gamma(P)$} \STATE $P \leftarrow P_{new}$ \ENDIF \STATE $b\leftarrow b+1$ \ENDWHILE \STATE Return $P$ \end{algorithmic} \end{algorithm} The proposed parallel approach is primarily detailed across Algorithm \ref{alg4} and Algorithm \ref{alg5}. Algorithm \ref{alg5} is used inside Algorithm \ref{alg4}. We initialize Algorithm \ref{alg4} by passing the source node, the destination node, the budget, the current level (set to $0$), and the maximum depth of recursion allowed. We also create a worker pool (referred to as $threadpool$ in the pseudocode) to be used in Algorithm \ref{alg4} and Algorithm \ref{alg5}, sized according to the number of cores available. Typically, on modern multi-core processors with hyper-threading technology, we would set the number of workers (threads) to twice the number of cores available. Overall, Algorithm \ref{alg4} is structurally similar to the \emph{Spatial-RG} algorithm. Algorithm \ref{alg5} is used inside Algorithm \ref{alg4} to do the work corresponding to the while loop between lines 11--20 of \emph{Spatial-RG}. In each recursion call, Algorithm \ref{alg4} creates a task for each iteration of the outer loop (for loop on line 9 in Algorithm \ref{alg2}). This is done in lines 10--13 of Algorithm \ref{alg4}. Each of these tasks essentially attempts to compute the solution while including a particular edge $e=(x,y)$. With reference to the \emph{Spatial-RG} algorithm (Algorithm \ref{alg2}), this is the work done corresponding to the while loop between lines 11--20. The jobs created are put in a job pool $JP$, which is \emph{local} to the current recursion call of Algorithm \ref{alg4}. Jobs from $JP$ are assigned to idle threads. These threads report their respective results in the result set $RS$, which is shared among them.
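To make the scheduling strategy concrete, the following is a minimal Python sketch of the two ideas above: a job pool that is \emph{local} to each recursion call, whose creating (``primary'') thread runs jobs inline whenever no worker is free, and a preallocated result slot per job so that workers never contend on a shared critical section. This is our own illustration with hypothetical names (\texttt{run\_local\_job\_pool}, \texttt{best\_path}); the paper's actual implementation is in Java.

```python
import threading

# Models a pool of 4 worker threads; acquiring a permit = grabbing a free worker.
_free_slots = threading.Semaphore(4)

def run_local_job_pool(jobs):
    """Execute `jobs` (zero-argument callables); return their results in order.

    Each job gets a unique slot in `results`, so concurrent workers record
    their answers without any locking (no shared critical section).
    """
    results = [None] * len(jobs)
    threads = []
    for i, job in enumerate(jobs):
        if _free_slots.acquire(blocking=False):   # a worker is free: hand over
            def worker(i=i, job=job):
                try:
                    results[i] = job()
                finally:
                    _free_slots.release()         # return the worker slot
            t = threading.Thread(target=worker)
            t.start()
            threads.append(t)
        else:                                     # no free worker:
            results[i] = job()                    # primary thread runs it inline
    for t in threads:
        t.join()
    return results

def best_path(u, v, budget, depth):
    """Toy stand-in for one recursion call: spawns two sub-jobs per level
    (splitting the budget) and keeps the best sub-result."""
    if depth == 0:
        return budget
    sub = run_local_job_pool([
        lambda: best_path(u, v, budget // 2, depth - 1),
        lambda: best_path(u, v, budget - budget // 2, depth - 1),
    ])
    return max(sub)
```

Because every job is either handed to a free worker or executed immediately by its creator, no task ever blocks while waiting on a merely \emph{queued} task; the hold-and-wait cycles of the global-pool scheme therefore cannot form.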
In our implementation, each job in $JP$ is assigned a unique location in $RS$. This allows all the threads to access $RS$ simultaneously and avoids a critical section. Consequently, we get better CPU utilization. If the number of idle threads is less than the number of jobs in $JP$, the remaining jobs are picked up by the currently executing thread (which created the job pool) while it actively looks for free threads. This is done in lines 10--25 in Algorithm \ref{alg4}. To simplify notation, the term ``primary thread'' refers to the thread that creates the job pool in a recursion call. There is only one ``primary thread'' per call. As mentioned earlier, one can trivially modify this algorithm to hold out one task (from $JP$) for the ``primary thread''. This would give slightly better performance. After all the tasks in $JP$ are completed, the ``primary thread'' chooses the best result from $RS$ and returns it. As mentioned earlier, Algorithm \ref{alg5} focuses on the while loop between lines $11$--$20$ of the \emph{Spatial-RG} algorithm. It creates tasks for the two recursive calls to Algorithm \ref{alg4} and puts them into a job pool $JP2$. Similar to the previous case, this job pool is created locally by its ``primary thread''. Tasks in $JP2$ are assigned to free threads, which, in turn, report their solutions in a shared result set. If no free threads are found, the ``primary thread'' picks up the tasks in $JP2$ for processing. \noindent \textbf{Generalizing Parallel-Spatial-RG for Minimization:} Algorithm \ref{alg4} and Algorithm \ref{alg5} can be trivially generalized for a minimization case (e.g., minimize the \#potholes in the path) by making two small changes. First, reverse the ``if'' conditions (comparing $\Gamma(P_{new})$ and $\Gamma(P)$) on line 30 of Algorithm \ref{alg4} and line 22 of Algorithm \ref{alg5}.
Second, remove the restriction of using only the edges $e=(x,y)\in E$ with $\Gamma(e) > 0$ on line 10 of Algorithm \ref{alg4}. In general, we believe that our approach is suitable for both maximization and minimization cases. It is important to note that our generalized version of \emph{Parallel-Spatial-RG} would still not use Dijkstra-style enumeration (as the original \emph{Parallel-Spatial-RG} does not use it) while optimizing the score values. This makes it robust enough to handle edges with negative scores as well. In fact, the dependence on Dijkstra-style enumeration in the current state-of-the-art solutions (e.g., \cite{cola-gpu,cola,foresthop}) makes them unsuitable for the CPO problem; otherwise, one could simply reverse the sign of the score values (i.e., make them negative) and run a minimization-based approach. We plan to investigate this generalization further in our future work and compare it with the current state of the art for the CSP problem (e.g., \cite{cola,cola-gpu,foresthop}). \noindent \textbf{Comment on ForkJoinPool based implementation:} ForkJoinPool internally uses a \emph{work-stealing} scheduler and gives priority to the jobs created at lower levels of the recursion. Such a strategy avoids deadlocks. However, this scheduler does not allow the programmer to control the threads created within a particular task. As a result, race conditions cannot be controlled, which leads to low CPU utilization. Thus, a trivial parallelization of the \emph{Spatial-RG} algorithm using the ForkJoinPool library would not give good CPU utilization due to the creation of race conditions at lines 15, 16, and 17 of Algorithm \ref{alg2}. \noindent \textbf{Time complexity analysis of Parallel-Spatial-RG: } In the worst case, an instance of the \emph{Parallel-Spatial-RG} algorithm would iterate over all the edges $e=(x,y)\in E$ (in the outer loop), and for each such iteration, it would again iterate up to $\beta$ times in the inner loop.
Following this, it would make two recursion calls inside the inner loop. Thus, the time complexity of one recursion call is $O(2m\beta)$ (where $m$ is the total number of edges in the network). For a maximum recursion depth of $\theta$, the total time complexity of \emph{Parallel-Spatial-RG} would be $O((2m\beta)^{\theta})$. Despite the high worst-case time complexity, the combination of spatial filters (Section \ref{sec:filter}) and good CPU utilization (via our parallel approach) helps \emph{Parallel-Spatial-RG} achieve low execution times in practice. \section{Experimental Analysis} \label{exp} In this section, we experimentally evaluate \emph{Parallel-Spatial-RG} and compare it with the current state-of-the-art. \begin{table}[ht] \caption{DATASETS} \label{tab::dataset} \centering \begin{tabular}{|c@{}|c@{}|c@{}|} \hline \textbf{Road Network } & \textbf{\#Nodes } & \textbf{\#Edges } \\ \hline Delhi & 52576 & 150488 \\ Buenos Aires & 263783 & 864408 \\ London & 285050 & 749382 \\ \hline \end{tabular} \end{table} \subsubsection{Datasets:} Our experimental analysis is done on three real-world road network datasets from \cite{datacite}. The details of the datasets are summarised in Table \ref{tab::dataset}. In each of these datasets, some edges were selected uniformly at random from across the road network and were randomly assigned a score value between 1 and 15. All other edges have a score value of zero. We also varied the number of edges with non-zero score values (details provided later). \subsubsection{Candidate Algorithms:} We compare our proposed \emph{Parallel-Spatial-RG} algorithm against the following candidates: \emph{(a) ILS*(CEI)\cite{shahabi2015}, (b) MSWBS\cite{ramneekdexa}}. We also adapted and implemented \cite{Chekuri2005} for our CPO problem. However, it showed execution times of several hours (sometimes even a day). Therefore, we did not include it in our experiments.
\subsubsection{Variable Parameters:} The following parameters were varied in our experiments in subsection \ref{sec:sensitivity}. \textit{Budget ($\beta$):} Recall that the budget has been defined as the sum of the overhead and the cost of the shortest path. As the budget increases, the working space of \emph{Parallel-Spatial-RG} also increases. \noindent \textit{Path Length:} Path length affects the running time of all the candidate algorithms. However, the notion of path length can have multiple interpretations in a weighted graph. To ensure the interpretability of the results, for our experiments, we define the path length as the number of edges in the shortest path between the given source and destination. We report the average over $100$ source--destination pairs for each path length range. \noindent \textit{Recursion Depth ($\theta$):} Different recursion depths ($\theta=1,2,3$) were tried to determine an ``optimal'' recursion depth for \emph{Parallel-Spatial-RG}. This ``optimal'' recursion depth strikes a good balance between solution quality and running time. \noindent \textit{Number of Threads ($n$):} \emph{Parallel-Spatial-RG} is run for different numbers of threads ($8, 16, 32, 64, 96, 128$) to show the near-linear speed-up with an increase in the number of cores. \noindent \textit{Number of edges with non-zero score:} The proposed algorithm's runtime depends on the road network's density, more specifically, the density of the edges with non-zero score values in that road network. \subsubsection{Metrics measured:} We measured the following in our experiments: \textit{(a) Execution time of the algorithm.} \textit{(b) Score gain over the score of the shortest path.} Score gain is defined as the difference between the total score of the solution obtained by the candidate algorithm and that of the shortest path. \subsubsection{Experimental Setup:} All the algorithms (including the candidate algorithms) were implemented in JAVA11.
We used an Ubuntu machine with Intel Xeon Platinum 8280M processors (a total capacity of 128 hardware threads) and 2048GB RAM. Also, note that our processor had a base frequency of 2.70GHz (with a max boost frequency of 4.00GHz). \begin{figure}[ht] \centering \subfigure[comparison of run-time.]{\label{fig:recursion-depth-a}\includegraphics[height=30mm,width=40mm]{images/level-time.png}} \quad \subfigure[comparison of score gain.]{\label{fig:recursion-depth-b}\includegraphics[height=30mm,width=40mm]{images/level-score.png}} \caption{Illustrating performance of Parallel-Spatial-RG for different recursion depths and different overheads. Path lengths $10$--$20$. $40\%$ of edges with non-zero score value. Y-axis is in $log_4$ scale.} \label{fig:recursion-depth} \end{figure} \subsection{Sensitivity Analysis} \label{sec:sensitivity} \subsubsection{Varying Recursion Depth ($\theta$):} \label{sec:recursion-depth} In this experiment, \emph{Parallel-Spatial-RG} is compared for three different recursion depths ($\theta = 1, 2, 3$) at three different overhead values: $30\%$, $40\%$, and $50\%$. The runtime of \emph{Parallel-Spatial-RG} increases exponentially with the recursion depth. Therefore, for this experiment, we considered a smaller instance of the Delhi road network ($9401$ nodes and $25941$ edges) in which $40\%$ of the edges had a non-zero score value. In Fig. \ref{fig:recursion-depth-a}, the run-time of \emph{Parallel-Spatial-RG} is shown for different values of $\theta$ at different overheads. Fig. \ref{fig:recursion-depth-b} shows the corresponding score gain. Overall, the figures show that as the recursion depth is increased, the execution time of \emph{Parallel-Spatial-RG} increases exponentially, whereas the score gain improves only marginally. To this end, we set the recursion depth to $1$ in our remaining experiments.
\subsubsection{Varying number of cores:} \label{sec:cpu-util} In this experiment, the execution time of \emph{Parallel-Spatial-RG} is analyzed for different numbers of cores ($8, 16, 32, 64, 96, 128$) and for different path length ranges ($11$--$20$, $21$--$30$, $31$--$40$, $41$--$50$). As per Fig. \ref{fig:CPU-utilization}, \emph{Parallel-Spatial-RG} gives an almost linear scale-up with an increasing number of cores. This shows the scalability of the \emph{Parallel-Spatial-RG} algorithm. Though we could not run \emph{Parallel-Spatial-RG} on a bigger system due to resource constraints, the scalability demonstrated by \emph{Parallel-Spatial-RG} suggests even better performance on a system with a higher number of cores. Hence, we fix the number of cores to 128 for the later part of the experiments. \begin{figure*}[ht] \centering \subfigure[comparison of runtime in Delhi road network. Y-axis is in $log_4$ scale.]{\label{fig:CPU-utilization-a}\includegraphics[height=45mm,width=57mm]{images/cpu-delhi.png}} \quad \subfigure[comparison of runtime in Buenos Aires road network. Y-axis is in $log_4$ scale.]{\label{fig:CPU-utilization-b}\includegraphics[height=45mm,width=57mm]{images/cpu-BA.png}} \quad \subfigure[comparison of runtime in London road network. Y-axis is in $log_4$ scale.]{\label{fig:CPU-utilization-c}\includegraphics[height=45mm,width=57mm]{images/cpu-london.png}} \caption{Evaluating Parallel-Spatial-RG for different numbers of threads. Overhead=$30\%$, $40\%$ edges with non-zero score values and recursion depth $\theta=1$.} \label{fig:CPU-utilization} \vspace{-5mm} \end{figure*} In the Delhi road network, \emph{Parallel-Spatial-RG} had a maximum runtime of $1.32$ seconds and an average runtime of $0.81$ seconds across all the path length ranges. For both the Buenos Aires and London road networks, the performance of \emph{Parallel-Spatial-RG} is quite similar, as both have nearly the same network size.
For the Buenos Aires dataset, \emph{Parallel-Spatial-RG} had an average runtime of $2.7$ seconds and a maximum of $4.2$ seconds. For the London network, those values were $2.9$ seconds and $4.15$ seconds, respectively. To summarize, the proposed \emph{Parallel-Spatial-RG} produces high solution quality within an acceptable time limit across different datasets. For the bigger networks, it incurs a somewhat higher execution time (more than $3$ seconds). This is because multiple shortest path computations are performed on-the-fly for a single CPO problem instance. In our implementation, we used the A* algorithm (with Euclidean distance as the heuristic function) for computing shortest paths. One may trivially improve the performance of \emph{Parallel-Spatial-RG} by using the latest shortest path computation techniques such as \cite{phast,jing1998hierarchical,sanders2005}. However, experimental evaluation with these techniques is beyond the scope of this paper and will be investigated in the future. An interesting fact to note is that in all the datasets, for the shorter path lengths ($11$--$20$), the improvement with the increase in the number of cores gets saturated. This is due to the following two reasons: (a) the runtime was already around one second; (b) the number of sub-tasks created during the run was less than the total number of cores provided for the run. \begin{figure}[ht] \centering \subfigure[run-time comparison for London road network]{\label{fig:budget-time-c}\includegraphics[width=40mm,height=35mm]{images/budget-time-london.png}} \quad \subfigure[score-gain comparison for London road network]{\label{fig:budget-score-c}\includegraphics[width=40mm,height=35mm]{images/budget-score-london.png}} \caption{Comparing runtime of Parallel-Spatial-RG for $30\%$, $40\%$, and $50\%$ overhead over the shortest path cost in the road network. $40\%$ of the edges have a non-zero score value and recursion depth $1$.
Y-axis is in $log_2$ scale.} \label{fig:budget} \end{figure} \subsubsection{Varying overhead over the cost of the shortest path:} \label{sec:budget} The performance of \emph{Parallel-Spatial-RG} depends on the budget value. To this end, we analyzed the behavior of \emph{Parallel-Spatial-RG} with varying overhead values of $30\%$, $40\%$, and $50\%$. Note that budget = cost of the shortest path + overhead over the shortest path cost. Fig. \ref{fig:budget} reveals that both the runtime and the solution quality of \emph{Parallel-Spatial-RG} increase with the overhead, i.e., with the budget. As the behavior of \emph{Parallel-Spatial-RG} is quite similar across the datasets, we only include the results for the London dataset for this experiment, and we fixed the overhead value to $30\%$ over the cost of the shortest path for the later parts of the experiments. \begin{figure}[ht] \centering \subfigure[run-time comparison for London road network]{\label{fig:nav-edges-time-c}\includegraphics[width=40mm,height=35mm]{images/navigable-edges-time-london.png}} \quad \subfigure[score-gain comparison for London road network]{\label{fig:nav-edges-score-c}\includegraphics[width=40mm,height=35mm]{images/navigable-edges-score-london.png}} \caption{Comparing runtime of Parallel-Spatial-RG for $30\%$, $40\%$, and $50\%$ edges with non-zero score values in the road network, an overhead of $30\%$ over the shortest path cost, and recursion depth $1$. Y-axis is in $log_2$ scale.} \label{fig:nav-edges} \end{figure} \subsubsection{Varying number of edges with non-zero score values in the road network:} \label{sec:nav-edges} The performance of \emph{Parallel-Spatial-RG} also depends on the density of the edges with non-zero score values. Therefore, we analyzed its behavior with varying densities of the edges with non-zero score values ($30\%$, $40\%$, and $50\%$). Fig.
\ref{fig:nav-edges} demonstrates that the runtime and the solution quality of \emph{Parallel-Spatial-RG} increase with the density of edges with non-zero score values. Here also, we only consider the results for the London dataset, and we fixed the density of edges with non-zero score values to $40\%$ for the later parts of the experiments. \begin{figure*}[ht] \centering \subfigure[average score comparison for Delhi road network]{\label{fig:candidate-a}\includegraphics[width=57mm,height=35mm]{images/candidate-delhi.png}} \quad \subfigure[average score comparison for Buenos Aires road network]{\label{fig:candidate-b}\includegraphics[width=57mm,height=35mm]{images/candidate-BA.png}} \quad \subfigure[average score comparison for London road network]{\label{fig:candidate-c}\includegraphics[width=57mm,height=35mm]{images/candidate-london.png}} \caption{Comparison of Parallel-Spatial-RG, MSWBS, and ILS*(CEI) for $30\%$ overhead over the shortest path cost, $40\%$ of edges with a non-zero score, and recursion depth $1$. Y-axis is in $log_2$ scale.} \label{fig:candidate} \end{figure*} \subsection{Comparison with Candidate Algorithms} \label{sec:MSWBS-vs-pspatial} This experiment compares \emph{Parallel-Spatial-RG} with the candidate algorithms \emph{MSWBS} and \emph{ILS*(CEI)} for various path length ranges ($11$--$20$, $21$--$30$, $31$--$40$, $41$--$50$) on each of our three road network datasets. Fig. \ref{fig:candidate} shows the average score comparison between \emph{Parallel-Spatial-RG}, \emph{MSWBS}, and \emph{ILS*(CEI)}. In terms of achieved score, \emph{Parallel-Spatial-RG} outperforms both \emph{MSWBS} and \emph{ILS*(CEI)} by a huge margin in all three datasets. With regard to execution time, \emph{MSWBS} had an average (across different path lengths) run-time of $0.12$sec, $0.13$sec, and $0.11$sec on the Delhi, Buenos Aires, and London datasets, respectively.
In comparison, \emph{Parallel-Spatial-RG} demonstrated an average (across different path lengths) runtime of $0.9$ seconds, $2.7$ seconds, and $2.9$ seconds, respectively, for the same parameters. \emph{ILS*(CEI)} allows us to fix the execution time for each query and return the path obtained within that particular threshold. In this experiment, we fixed this threshold to $3$ seconds (as \emph{Parallel-Spatial-RG} had a maximum execution time of $3$ seconds). Note that \emph{Parallel-Spatial-RG} could have obtained a much lower execution time on a system with more available threads due to its ``almost'' linear scale-up. \emph{MSWBS} already has a running time in milliseconds, and \emph{ILS*(CEI)} has an almost fixed runtime (due to its threshold). Therefore, due to lack of space, we did not include the runtime comparison between \emph{Parallel-Spatial-RG} and the candidate algorithms. \section{Conclusion} \label{con} This paper studied the Constrained Path Optimization (CPO) problem on road networks. The CPO problem has value-addition potential in the domain of urban navigation. However, the current state-of-the-art solutions (approximation algorithms or heuristic solutions) either fail to scale up to real-world road networks or have poor solution quality. In contrast, our proposed parallel algorithm \emph{Parallel-Spatial-RG} shows promising results in terms of both scalability and solution quality. In the future, we will continue working on the \emph{Parallel-Spatial-RG} algorithm to improve its scalability even further and also establish a formal approximation ratio for the algorithm. More specifically, we plan to explore the potential of hierarchical routing techniques for improving the scalability of our \emph{Parallel-Spatial-RG} algorithm. \bibliographystyle{IEEEtran}
\section{Introduction} One of the least well understood features of hadronic physics is the $\Delta I=1/2$ rule in non-leptonic kaon decays. Decays in which isospin changes by $\Delta I=1/2$ are greatly enhanced over those with $\Delta I=3/2$. The particular example we focus on here is the ratio of amplitudes for $K\to\pi\pi$ decays: \begin{equation} {{\cal A}(K\to\pi\pi[I=0]) \over {\cal A}(K\to\pi\pi[I=2]) } \approx 22 \,. \label{eq:dirule} \end{equation} Although the origin of this large enhancement is not well understood, we do know that, in a QCD-based explanation, most of the enhancement must come from long distance, non-perturbative physics. This is because the contribution from scales where perturbation theory is reliable, say $p \gtrsim 2\;$GeV, is known to enhance the $\Delta I=1/2$ amplitude by only a factor of about two. Attempts to understand the remainder of the enhancement using models of non-perturbative physics have had partial success \cite{bbg}, but we are far from having a convincing demonstration that QCD does explain the $\Delta I=1/2$ rule. In principle, lattice QCD is well suited to study this issue~\cite{acient}. Indeed, practical approaches have been developed for both Wilson \cite{MMRT,direct} and staggered fermions \cite{toolkit,sharpepatel}. The problem is that these methods have not yet yielded useful results. Indirect methods, based on using $K\to\pi$ and $K\to$vacuum amplitudes, have large statistical errors \cite{GMlat89,SSlat90}, while direct calculations of $K\to\pi\pi$ amplitudes have been done at unphysically large quark masses, for which the presence of a scalar resonance may distort the two pion signal~\cite{GMlat89,BSlat89,gavela}. There has been little work on the problem in recent years. In this paper we revisit the lattice approach for the case of Wilson fermions (or $O(a)$ improved versions thereof). We reevaluate the existing methods, and propose a variety of new approaches.
These vary from a method of comparable simplicity to that of Bernard {\em et al.}~\cite{direct}, to more speculative ideas that are likely to require much smaller lattice spacings than those presently available. The latter are needed to calculate the CP violating parts of the $K\to\pi\pi$ amplitudes. We also reappraise the calculation of $K\to\pi$ matrix elements in the light of the recent progress made in non-perturbative renormalization techniques. Existing methods rely on lowest order chiral perturbation theory to connect the unphysical amplitudes which are calculated on the lattice to the desired physical amplitude. One advantage of some of our new methods is that they are not dependent on chiral perturbation theory. This means that they can, in principle, also be used to study $B$-meson decays. The outline of this paper is as follows. In the next section we summarise the problems faced by lattice calculations, and then, in sec.~\ref{sec:direct}, recall how these are avoided by the direct method of Bernard {\em et al.} \cite{direct}. With the stage thus set, in sec.~\ref{sec:new} we then present the first three of our new methods, which also involve the calculation of $K\to\pi\pi$ amplitudes. This is followed in sec.~\ref{sec:pp} by a reappraisal of the indirect method using $K\to\pi$ amplitudes. In sec.~\ref{sec:jj}, we present a more speculative method for studying the $\Delta I=1/2$ rule, which is based on the short distance expansion of the $T$-product of two weak currents. Finally in sec.~\ref{sec:top} we suggest a possible idea for a non-perturbative determination of the CP-violating part of the $\Delta S=1$ weak Hamiltonian by introducing a fictitious top quark. Section~\ref{sec:concs} contains our conclusions. \section{The problem}\label{sec:problem} In this section we briefly review the source of the difficulty in calculating ${\cal A}(K\to\pi\pi)$ with Wilson fermions.
Further details can be found in refs.~\cite{MMRT,BernardTasi} and references therein. For scales below $M_{_W}$, but above the charm quark mass, the $\Delta S=1$ part of the effective weak Hamiltonian can be written as \begin{eqnarray} {\cal H}_{\rm eff}^{\Delta S=1} &=& \lambda_u {G_F \over \sqrt2} \left[ C_+(\mu, M_{_W}) O^{(+)}(\mu) + C_-(\mu,M_{_W}) O^{(-)}(\mu) \right]\,, \label{eq:HW}\\ O^{(\pm)} &=& \left[ (\bar s \gamma_\mu^L d)(\bar u \gamma_\mu^L u) \pm (\bar s \gamma_\mu^L u)(\bar u \gamma_\mu^L d) \right] - \left[ u \leftrightarrow c \right] \,, \label{eq:oplmidef} \end{eqnarray} where $\gamma_\mu^L=\gamma_\mu (1\!-\!\gamma_5)/2$ and $\lambda_u=V_{ud}V_{us}^*$. Here and in the following we use the Euclidean metric. We are ignoring for the moment the contribution which arises when the top quark is integrated out. This is suppressed by $\lambda_t/\lambda_u$, where $\lambda_t=V_{td}V_{ts}^*$, and, in the CP conserving sector, makes a very small contribution to the decay amplitudes. The operators $O^{(\pm)}$ have different transformation properties under isospin. In particular, $O^{(-)}$ is pure $I=1/2$, whereas $O^{(+)}$ contains parts having both $I=1/2$ and $I=3/2$. An explanation of the $\Delta I=1/2$ rule\ thus requires that the $K\to\pi\pi$ matrix element of $C_- O^{(-)}$ be substantially enhanced compared to that of $C_+ O^{(+)}$. The short distance, perturbative part of the enhancement is that provided by the ratio of Wilson coefficients, $C_-/C_+$, while the long distance, non-perturbative contribution comes from the ratio of matrix elements of the operators. Consider first the short distance contribution. At $\mu=M_{_W}$, the two Wilson coefficients have nearly equal magnitudes, $|C_-/C_+|=1 + O[\alpha_s(M_{_W})]$. The enhancement arises from the renormalization group evolution down to $\mu \sim 2\;$GeV, at which scale one finds $|C_-/C_+| \approx 2$. This factor is too small by an order of magnitude to explain the $\Delta I=1/2$ rule. 
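For orientation, the size of this short distance enhancement can be made explicit. At leading logarithmic order with four active flavours the renormalization group evolution gives (we quote the standard exponents only as an illustration; conventions and higher orders are not discussed here)
\begin{equation}
C_\pm(\mu) = \left[ \frac{\alpha_s(\mu)}{\alpha_s(M_{_W})} \right]^{a_\pm}\,,
\qquad a_- = \frac{12}{25}\,, \quad a_+ = -\frac{6}{25}\,,
\end{equation}
so that $C_+^2\, C_- = 1$ and $|C_-/C_+| = [\alpha_s(\mu)/\alpha_s(M_{_W})]^{18/25} \approx 2$ at $\mu \sim 2\;$GeV, consistent with the factor of about two quoted above.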
The remainder of the enhancement must come from the matrix elements of the operators, and these are the quantities that we wish to calculate on the lattice. In fig.~\ref{fig:wick} we show the Wick contractions that contribute to such matrix elements. The only part of ${\cal H}_{\rm eff}^{\Delta S=1}$ which gives rise to $\Delta I=3/2$ transitions is the $I=3/2$ part of $O^{(+)}$, and for this operator only contractions (A) and (C) contribute. In contrast, all four contractions are non-vanishing for the $I=1/2$ parts of $O^{(\pm)}$. Thus, in order to reproduce the $\Delta I=1/2$ rule, the sum of the contributions of (B) and (D) must be an order of magnitude larger than those of (A) and (C). In fact, as discussed below, contractions (C) and (D) give non-leading contributions in chiral perturbation theory compared to (A) and (B). Thus we expect contraction (B) to be the ``source'' of the $\Delta I=1/2$ rule. \begin{figure}[t] \vspace{0.1cm} \centerline{\epsfig{figure=deltai.eps,height=10cm}} \caption{{\it Contractions contributing to the $\Delta I=1/2$ amplitude. The lines represent propagators on a background gauge field, and the resulting contraction is to be averaged over gauge configurations with the appropriate measure. The dot represents the operators $O^{(\pm)}$. Different colour contractions are not distinguished.}} \label{fig:wick} \end{figure} There are two major difficulties which arise in the calculation of these diagrams using lattice QCD. \begin{enumerate} \item Decay amplitudes into two or more particles cannot be calculated directly in Euclidean space. This follows from the theorem of Maiani and Testa \cite{MT}. One must, instead, use a model to analytically continue correlation functions from Euclidean to physical momenta. The only exception is for final state particles at rest relative to each other, in which case there is no phase generated by final state interactions.
This problem afflicts both $\Delta I=1/2$ and $3/2$ amplitudes, although it is likely to be worse for the former since final state interactions are stronger for two pions having $I=0$ \cite{isgur}. \item The ``penguin'' diagrams (B) and (D) allow the $I=1/2$ operators to mix with operators of lower dimension with coefficients which diverge as inverse powers of the lattice spacing. This mixing leads to contributions to the amplitudes which are lattice artifacts, and must be subtracted. One must also account for the mixing with other operators of dimension six, although this is less difficult because the mixing coefficients do not diverge in the continuum limit. \end{enumerate} In the remainder of this section we expand upon the latter problem. We consider only the negative parity parts of $O^{(\pm)}$, since the parts with positive parity do not contribute to $K\to\pi\pi$ amplitudes. Lower dimension operators which can appear on the lattice must have the same flavour and CPS parity as $O^{(\pm)}$. CPS is the transformation obtained by combining CP with $(s\leftrightarrow d)$ interchange \cite{politzer}. Both $O^{(\pm)}$ have positive CPS parity. These symmetries allow mixing with only two lower dimension operators (aside from operators which vanish by the equations of motion): \begin{eqnarray} O_p &=& (m_s-m_d) \bar s \gamma_5 d \,, \label{eq:opdef}\\ \widetilde{O}_\sigma &=& g_0 (m_s-m_d) \bar s \sigma_{\mu\nu} \widetilde G_{\mu\nu} d \,. \end{eqnarray} Here $g_0$ is the bare coupling constant and $\widetilde G_{\mu\nu}$ the dual field strength. The factors of $m_s\!-\!m_d$ are required by CPS symmetry. At dimension six, CPS forbids mixing with other four-fermion operators, and SU(4) flavour forbids mixing between $O^{(+)}$ and $O^{(-)}$. It is noteworthy that the allowed mixing for the negative parity parts of $O^{(\pm)}$ is, up to this stage, exactly as in the continuum, despite the fact that chiral symmetry is explicitly broken on the lattice \cite{MMRT}.
The mixing with $O_p$ and $\widetilde{O}_\sigma$ is further constrained by the GIM mechanism, which implies that the mixing coefficients vanish identically when $m_c=m_u$. In continuum perturbation theory, chiral symmetry requires that the mixing coefficients are quadratic functions of the quark masses and thus vanish as $m_c^2-m_u^2$. On the lattice, by contrast, chiral symmetry is broken, and the GIM cancellation gives rise only to the factor $m_c-m_u$. This additional factor is sufficient, however, to make the mixing of $O^{(\pm)}$ with $\widetilde{O}_\sigma$ an effect of $O(a)$. Since in this work we are not attempting to remove $O(a)$ effects from the matrix elements of $O^{(\pm)}$, we do not need to consider mixing with $\widetilde{O}_\sigma$. We therefore arrive at the following form of the renormalized operator \begin{equation} O^{(\pm)}(\mu)= Z^{(\pm)}(\mu a, g^2_0) \left[ O^{(\pm)}(a) + (m_c -m_u) \frac{C^{(\pm)}_p}{a} O_p(a) \right] + O(a)\,. \label{eq:rops} \end{equation} Here $O^{(\pm)}(a)$ and $O_p(a)$ are bare lattice operators, $C^{(\pm)}_p$ are the mixing coefficients, and $Z^{(\pm)}$ are the renormalization constants which cancel the logarithmic divergence of the bare four-fermion operator. The precise definition of the lattice quark masses in eq.~(\ref{eq:rops}) is unimportant, since in all practical methods the entire coefficient of $O_p$ is determined non-perturbatively. We now show that, in spite of the fact that it multiplies a linear divergence (see eq.~(\ref{eq:rops})), it is sufficient to determine $C^{(\pm)}_p$ with an error of $O(a)$, to obtain the physical amplitudes to the same precision. We first note that, in the continuum, $O_p$ is proportional to the divergence of the axial current, apart from terms which vanish by the equations of motion. Thus, in the continuum, it does not contribute to on-shell matrix elements for which the momentum inserted by the operator, $\Delta p$, vanishes. 
This is why the mixing of $O^{(\pm)}$ with $O_p$ in the continuum is irrelevant to physical amplitudes. On the lattice, however, the breaking of chiral symmetry leads to $O(a)$ corrections to the PCAC equation even after renormalization~\cite{boc}: \begin{equation} \langle h_1|\partial_\mu A_\mu - (m_s+m_d) P|h_2\rangle = \langle h_1| \bar X_A |h_2 \rangle = O(a) \,. \label{eq:PCAClat} \end{equation} Here $A_\mu=Z_A \bar s \gamma_\mu \gamma_5 d$ and $P=Z_P \bar s\gamma_5 d$ are the renormalized operators, $m_d$ and $m_s$ are renormalized quark masses, $\partial_\mu$ is a lattice derivative, and $h_{1,2}$ represent hadronic states. Thus the $K\to\pi\pi$ matrix element of the subtraction term is \begin{equation} \frac{C_p^{(\pm)}}{a} (m_c -m_u) (m_s - m_d) \langle \pi \pi \vert \bar s \gamma_5 d \vert K \rangle = \frac{C_p^{(\pm)}}{a} (m_c -m_u) \frac{m_s - m_d}{m_s + m_d} {1\over Z_P} \langle \pi \pi \vert \partial_\mu A_\mu - \bar X_A \vert K \rangle \,. \label{eq:problem} \end{equation} The $\partial_\mu A_\mu$ term leads to the divergent contribution proportional to $\Delta p/a$. But when $\Delta p=0$, the term proportional to $\bar X_A/a$ remains, and is of $O(1)$ up to logarithms. Thus the subtraction is still necessary, but the magnitude of the subtraction does not diverge in the continuum limit, implying that it is sufficient to know $C_p^{(\pm)}$ with an error of $O(a)$. In summary, to calculate the physical $K\to\pi\pi$ matrix elements of $O^{(\pm)}$ one needs both a model to do the analytic continuation from Euclidean to Minkowski space, and a method to subtract the operator $(m_c-m_u) O_p$ with appropriate coefficients $C_p^{(\pm)}$. Having summarised the problem, the remainder of the paper is devoted to illustrating several methods which can be used to extract the physical amplitudes. \section{Calculations with ${\mathbf m_s=m_d}$}\label{sec:direct} Bernard {\em et al.} proposed an ingenious solution to both problems \cite{direct}.
They suggest working with $m_s=m_d$, and calculating the Euclidean amplitude in which all three particles are at rest: \begin{equation} {\cal A}_{\rm m_s=m_d}^{(\pm)} = \langle \pi(\vec p_1\!=\!0) \pi(\vec p_2\!=\!0) | O^{(\pm)}(\mu) | K(\vec p_K\!=\!0) \rangle\bigg|_{m_s=m_d} \,. \label{eq:b+s} \end{equation} Setting $m_s=m_d$ causes $O_p$ to vanish identically, and so removes the need for the subtraction. Working with the two pions at rest solves the problem of final state interactions. The final step in the method is to extrapolate from the unphysical amplitudes ${\cal A}_{\rm m_s=m_d}^{(\pm)}$ to the physical amplitudes ${\cal A}_{\rm phys}^{(\pm)}$ using lowest order chiral perturbation theory. The result is that \begin{equation} {\cal A}_{\rm phys}^{(\pm)} = { m_{K,\rm phys}^2 - m_{\pi,\rm phys}^2 \over 2 m_{K,\rm lat}^2}\ {\cal A}_{\rm m_s=m_d}^{(\pm)} \left[ 1 + O\left(m_K^2 \over \Lambda_\chi^2\right) \right] \,, \label{eq:direct} \end{equation} where $m_{K,\rm lat}$ is the mass of the lattice ``kaon'' used in calculating ${\cal A}_{\rm m_s=m_d}^{(\pm)}$. Due to the choice $m_s=m_d$, this is also the mass of the lattice ``pion''. $\Lambda_\chi\approx 4 \pi f_\pi$ is the scale which determines the size of higher order terms in chiral perturbation theory. Such terms typically give $25\%$ corrections if $m_{K,\rm lat} \approx m_{K,\rm phys}$. The direct method should thus be adequate to test the $\Delta I=1/2$ rule\ at a semiquantitative level. The main advantage of this method is its simplicity. It determines the coefficient of the contribution to ${\cal A}_{\rm phys}^{(\pm)}$ of leading order in chiral perturbation theory without the need for a subtraction. A further simplification is that one only needs to calculate the contractions (A) and (B) in fig.~\ref{fig:wick}, because, due to CPS symmetry, the other two contractions vanish identically when $m_s=m_d$.
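Applying the rescaling of eq.~(\ref{eq:direct}) is elementary once ${\cal A}_{\rm m_s=m_d}^{(\pm)}$ has been measured; a minimal sketch, with physical meson masses in GeV and a hypothetical lattice amplitude and kaon mass as inputs:

```python
def a_phys_direct(a_lat, mk_lat, mk_phys=0.4937, mpi_phys=0.1396):
    """Leading-order ChPT extrapolation of the m_s = m_d amplitude:
    A_phys = (m_K,phys^2 - m_pi,phys^2) / (2 m_K,lat^2) * A_lat.
    The neglected corrections are of O(m_K^2 / Lambda_chi^2)."""
    return (mk_phys ** 2 - mpi_phys ** 2) / (2.0 * mk_lat ** 2) * a_lat
```

For $m_{K,\rm lat}\approx m_{K,\rm phys}$ the rescaling factor is about $0.46$, with the higher order chiral corrections expected at the $25\%$ level quoted above.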
In fact, since one can show that the physical amplitude depends on quark masses only at non-leading order in chiral perturbation theory, the contributions of diagrams (C) and (D), which are proportional to $m_s-m_d$, must be of non-leading order. The main disadvantage of the method is its dependence on chiral perturbation theory. Such dependence is unavoidable once one has set $m_s=m_d$. Of particular concern is the fact that the final state interaction phase in the $I=0$ two pion channel is substantial at the physical point $s=m_K^2$ (where $s$ is the square of the two-pion centre-of-mass energy), even though it is a non-leading effect in chiral perturbation theory. Thus it is possible that the corrections in eq.~(\ref{eq:direct}) are larger than the estimate $m_K^2 / \Lambda_\chi^2$ \cite{isgur}. In principle one could reduce this error by using non-leading order chiral perturbation theory. This, however, requires knowledge of coefficients which are unavailable from experiment \cite{kambor}. \section{Alternative methods} \label{sec:new} We wish to develop methods which allow us to reduce, and ultimately remove, the dependence on chiral perturbation theory. To do this we must use quark masses which are closer to their physical values, and in particular we must consider $m_s > m_d$. Working with non-degenerate quarks implies that the mixing with $O_p$ is present, and must either be subtracted, or shown to be unimportant. We have devised a number of new methods which accomplish this goal. In this section we describe, in order of increasing complexity, the three which require relatively small changes in methodology compared to the proposal of section~\ref{sec:direct}. More speculative methods are described in subsequent sections. Before describing our methods, we first comment on the overall renormalization factors $Z^{(\pm)}(\mu a, g_0^2)$. These must be calculated for all the methods described in this section and in sections~\ref{sec:direct} and~\ref{sec:pp}.
In the past $Z^{(\pm)}$ have been determined using one-loop perturbation theory. Although this may be satisfactory for a test of the $\Delta I=1/2$ rule, a non-perturbative determination is clearly preferable. We wish to point out that such a determination can be made by a straightforward extension of the non-perturbative methods used to normalize $\Delta S=2$ operators using quark states~\cite{talevi}. The key observation is that the anomalous dimensions of $O^{(\pm)}$ are unchanged by mixing with the pseudoscalar density, and thus coincide with those of the operators \begin{equation} \bar O^{(\pm)}= \Bigl(\bar \psi_1 \gamma_\mu \psi_2 \, \bar\psi_3 \gamma_\mu \gamma_5 \psi_4 + \bar \psi_1 \gamma_\mu \gamma_5 \psi_2 \, \bar\psi_3 \gamma_\mu \psi_4 \Bigr) \pm \Bigl( 2 \leftrightarrow 4 \Bigr) \, , \end{equation} where the subscripts $1,2,3,4$ label four different quark flavours. Only open current-current diagrams need be calculated; penguin-type diagrams with quark loops do not contribute. The non-perturbative calculation of the renormalized $\bar O^{(\pm)}$ is currently underway~\cite{contil}. \subsection{Method 1} We first describe the method and then explain why it is valid. The ingredients are \begin{itemize} \item Work with a non-perturbatively $O(a)$ improved fermion action, for which there are no errors of $O(a)$ in the spectrum, and the on-shell amplitudes of the improved currents obey the continuum chiral Ward identities up to $O(a^2)$. This can be accomplished using the methods of ref.~\cite{luscher}. \item Use the lattice four-fermion operator, suitably normalized, but do not subtract the $O_p$ term. In other words, set $C_p^{(\pm)}=0$. \item Choose quark masses such that $m_K= 2m_\pi$. \item Calculate the $K\to\pi\pi$ amplitudes with all particles at rest. Given the choice of meson masses this means that $\Delta p=0$. The resulting amplitudes are denoted ${\cal A}_{m_K=2m_\pi}^{(\pm)}$. 
\item Determine the physical amplitudes using lowest order chiral perturbation theory, which gives \begin{equation} {\cal A}_{\rm phys}^{(\pm)} = { m_{K,\rm phys}^2 - m_{\pi,\rm phys}^2 \over m_{K,\rm lat}^2 - m_{\pi,\rm lat}^2}\, {\cal A}_{m_K=2m_\pi}^{(\pm)} \left[ 1 + O\left(m_K^2 \over \Lambda_\chi^2\right) \right] \,. \label{eq:N1} \end{equation} \end{itemize} The validity of this method is based on the following observations. If one uses fully $O(a)$ improved fermions, and the fully $O(a)$ improved axial current, $A_\mu^I$, and pseudoscalar density, $P^I$, then the corrections to the PCAC equation (\ref{eq:PCAClat}) are of $O(a^2)$ rather than of $O(a)$ \cite{luscher}. Furthermore, the improved pseudoscalar density is simply proportional to the bare lattice operator $P^I=Z_P^I \bar s\gamma_5 d$. Thus the argument following eq.~(\ref{eq:PCAClat}) now implies that the matrix element of the subtraction term is \begin{equation} \frac{C_p^{(\pm)}}{a} (m_c -m_u) (m_s - m_d) \langle \pi \pi \vert \bar s \gamma_5 d \vert K \rangle = \frac{C_p^{(\pm)}}{a} (m_c -m_u) \frac{m_s - m_d}{m_s + m_d} {1\over Z_P^I} \langle \pi \pi \vert \partial_\mu A_\mu^I \vert K \rangle + O(a) \,. \label{eq:soln1} \end{equation} The important point is that the discretization error on the r.h.s. of this result is now of $O(a)$ rather than of $O(1)$ [cf. eq.~(\ref{eq:problem})]. This means that if we can set $\Delta p=0$, so that the term involving $\partial_\mu A_\mu^I$ vanishes, the subtraction leads to corrections only of $O(a)$. Since these are of the size that we are neglecting, it is not necessary to do the subtraction. Note that to make this argument, we do not need to know the form of $A_\mu^I$, nor the value of the renormalization constants $Z_A^I$ and $Z_P^I$. Since we have chosen quark masses such that $m_K=2 m_\pi$, and chosen all particles to have $\vec p=0$, we do have $\Delta p=0$. 
Furthermore, the use of pions at rest allows us to avoid final state interactions, as in the $m_s=m_d$ method. We view this new method as complementary to the method of section~\ref{sec:direct}. It is only slightly more difficult to implement practically. One complication is the need to use fully $O(a)$ improved Wilson fermions. This is, however, becoming standard in numerical simulations, now that the necessary ``clover'' coefficient $c_{SW}$ has been determined non-perturbatively for quenched QCD \cite{luscher}. Note that the method of section~\ref{sec:direct} can be used equally well with improved fermions, but that this does not provide any particular advantage over unimproved fermions. The results for $K\to\pi\pi$ amplitudes will have errors of $O(a)$ in both cases. To remove these errors would require not only improving the action but also improving the operators $O^{(\pm)}$. A second potential technical complication is the need to include all four Wick contractions of fig.~\ref{fig:wick}. One can, however, consistently drop diagrams (C) and (D) since they are non-leading in chiral perturbation theory, while we are relying in eq.~(\ref{eq:N1}) on the leading order term. A possible advantage of this method is that it requires a smaller extrapolation to the physical point, since one is using quark masses which are closer to those of the physical quarks. Furthermore, if one includes diagrams (C) and (D), one can perform a partial test of the convergence of the chiral expansion. Diagram (C) is straightforward to calculate, while the disconnected diagram (D) presents more technical difficulties. We expect, however, that since (D) is Zweig forbidden, its contribution will be smaller than that of (C). \subsection{Method 2} A potential problem with the previous method is that the momentum inserted will not, in practice, be exactly zero. 
If so, the part of the $O_p$ term proportional to $\partial_\mu A_\mu^I$ will give a ``small'' but divergent contribution proportional to $\Delta E/a$, where $\Delta E=m_K- 2 m_\pi$. We can mitigate this potential problem by determining $C_p^{(\pm)}$ non-perturbatively. We propose to do so by applying the condition \begin{equation} \langle 0 \vert O^{(\pm)}(\mu) \vert K \rangle = 0 \,. \label{eq:subt} \end{equation} We then proceed as in the previous method, except now keeping the subtraction term. Note that implementation of this condition does not require knowledge of the overall normalizations $Z^{(\pm)}$. For this relatively small increase in effort we effectively determine the $O(1)$ part of $C_p^{(\pm)}$, and thus reduce the error from the incomplete subtraction of the $O_p$ term by one power of $a$. In other words, whereas in Method 1 ignoring the subtraction term leads to an error $\sim \Delta E/a + O(a)$, in Method 2 the residue after implementing eq.~(\ref{eq:subt}) is $\sim \Delta E + O(a^2)$. Since there are other sources of $O(a)$ errors, the total error in the $K\to\pi\pi$ matrix element is $\sim\Delta E + O(a)$. Thus the extrapolation to $\Delta E=0$ will be considerably less delicate. Furthermore, since the method no longer relies on the improved PCAC relation, it can be used with unimproved, or tree-level improved, Wilson fermions. We now explain how the condition (\ref{eq:subt}) is justified. At leading order in chiral perturbation theory, the $K\to$vacuum matrix element takes the form~\cite{politzer} ($f_\pi \sim 132$ MeV) \begin{equation} \<0\vert O^{(\pm)}(\mu) \vert K^0\> = i\,\delta^{(\pm)}\, \frac{m^2_K - m^2_\pi}{f_\pi} \,. \label{eq:SPT1} \end{equation} The coefficients $\delta^{(\pm)}$ do not contribute to the physical $K\to\pi\pi$ matrix element (see eq.~(\ref{eq:SPT3}) below), and thus can be set to zero. A similar unphysical arbitrariness remains away from the chiral limit. 
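Once the two vacuum-to-kaon matrix elements have been measured, the condition of eq.~(\ref{eq:subt}) determines the coefficient by simple algebra, since the overall factor $Z^{(\pm)}$ drops out of the homogeneous condition. A schematic sketch, in which all inputs are hypothetical lattice numbers (in lattice units):

```python
def c_p_from_vacuum_condition(me_bare, me_op, m_c, m_u, a=1.0):
    """Solve <0|O^(pm)(mu)|K> = 0 for the mixing coefficient C_p^(pm),
    using the operator form of eq. (rops):
        <0|O_bare|K> + (m_c - m_u) * (C_p / a) * <0|O_p|K> = 0.
    me_bare, me_op: measured vacuum-to-kaon matrix elements of the bare
    four-fermion operator and of O_p (hypothetical inputs)."""
    return -a * me_bare / ((m_c - m_u) * me_op)
```

The returned coefficient, inserted back into the subtracted operator, makes the vacuum-to-kaon matrix element vanish by construction.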
The point is that we can always redefine the renormalized operators as follows~\cite{pate} \begin{equation} O^{(\pm)} \longrightarrow O^{(\pm)} + F^{(\pm)} (m_c-m_u) O_p \,, \label{eq:ambig} \end{equation} where the finite coefficients $F^{(\pm)}$ are of $O(\Lambda_{\rm QCD})$ but otherwise arbitrary. This redefinition does not change the physical matrix elements of $O^{(\pm)}$ since $O_p$ is a total divergence. But it does affect matrix elements in which momentum is inserted, such as those appearing in the condition of eq.~(\ref{eq:subt}). This condition can thus be fulfilled by an appropriate choice of $F^{(\pm)}$. In particular, it can also be used if the strange quark is replaced by the bottom quark. The possibility of redefining the renormalized operators by finite terms, as in eq.~(\ref{eq:ambig}), implies that the dependence of the $K\to\pi\pi$ matrix elements on $\Delta E$ is of $O(1)$ and does not vanish with the lattice spacing as one would na\"\i vely expect. \subsection{Method 3} The previous methods require the use of quark masses whose ratio $m_s/m_d$ differs from its physical value. To move closer to physical quark masses while holding $\Delta E=0$ we must work with final states in which $\vec p_\pi \ne 0$. This forces us to deal with the Maiani-Testa theorem. We propose doing so using the method of Ciuchini {\em et al.} (CFMS) \cite{cfms}. Given an analytic parameterization of the two pion scattering amplitude (e.g. resonance dominance) CFMS show how, in principle, one can extract the magnitude and phase of the physical amplitude by studying the Euclidean amplitude as a function of time for a variety of final pion momenta. An assumption made by CFMS is the ``smoothness'' of the off-shell amplitudes. In the absence of resonances coupled to the final-state mesons, this requires that the $K\to\pi\pi$ amplitudes do not vary rapidly with $\Delta p$. If there is a nearby resonance, CFMS assume that it dominates the momentum dependence of the amplitudes.
The ``smoothness'' hypothesis is, in this case, the assumption that the couplings of the resonance to the final-state particles are smooth functions of the external momenta. It is then possible to find a simple parametrization of the amplitudes which describes their rapid variation (due to the presence of the resonance) with $\Delta p$. Thus we propose using the method of CFMS after fixing $C_p^{(\pm)}$ with the condition of eq.~(\ref{eq:subt}). This removes the terms proportional to $\Delta p/a$ which might violate the smoothness hypothesis. Since this method makes no use of chiral perturbation theory, it can be applied also to $B$ decays. The only restriction is that the GIM mechanism must be operative, which limits one to processes which do not involve top quark loops. \section{${\mathbf K\to\pi}$ Matrix Elements} \label{sec:pp} In this section we reconsider the method suggested in ref.~\cite{MMRT} which uses the $K\to\pi$ matrix elements of the positive parity part of the weak Hamiltonian. This method relies on chiral perturbation theory, and in this respect we expect it to be of comparable accuracy to the direct method of Bernard {\em et al.} and the first two of the new methods introduced in the previous section. Since only single-particle states are involved, the advantage of the ``$K\to\pi$ method'' is that it is technically easier to extract the relevant matrix elements. The disadvantage is that the operator mixing problem is much more complicated than for the negative parity part of ${\cal H}_{\rm eff}^{W}$, making an accurate evaluation of the matrix element of the renormalized operator difficult. We begin by recalling how the physical amplitude is obtained from the knowledge of the properties of the $K\to\pi$ amplitude exploiting the Soft-Pion Theorems (SPTs).
At leading order in chiral perturbation theory the physical amplitude takes the form (for $\Delta p=0$) \begin{equation} \<\pi^+\pi^-\vert O^{(\pm)}(\mu)\vert K^0\> = i\,\gamma^{(\pm)}\, {m^2_K - m^2_\pi\over f_\pi} \,. \label{eq:SPT3} \end{equation} The coefficients $\gamma^{(\pm)}$ appear also in the expression for the $K\to\pi$ matrix element \begin{equation} \<\pi^+(p)\vert O^{(\pm)}(\mu) \vert K^+(q)\> = -\delta^{(\pm)}\, {m^2_K\over f_\pi^2} + \gamma^{(\pm)}\,p\cdot q \,.\label{eq:SPT2} \end{equation} By studying this matrix element as a function of $p\cdot q$ one can, in principle, determine $\gamma^{(\pm)}$, from which we obtain the $K\to\pi\pi$ matrix elements up to corrections of order $m_K^2/\Lambda_\chi^2$. The expression for the positive-parity components of the renormalized $O^{(\pm)}$ is \cite{MMRT}~\footnote{Although, for convenience, we use the same symbol, $Z^{(\pm)}$, the value of the overall renormalization constant for the positive parity part is in general different from that for the parity violating part, as a result of the explicit chiral symmetry breaking on the lattice. The operators $O^{(\pm)}$ in this section also refer to their positive parity components.} \begin{eqnarray} O^{(\pm)}(\mu) &\equiv & Z^{(\pm)}(\mu a, g_0^2)\,O^{(\pm)}_{\mathrm sub} \nonumber\\ & = & Z^{(\pm)}(\mu a, g_0^2) \left[O^{(\pm)}(a) + \sum_{i=1}^4 C_i^{(\pm)} O^{(\pm)}_i(a) \right. \nonumber\\ &&\left. +(m_c-m_u) C^{(\pm)}_\sigma O_\sigma(a) +(m_c-m_u) {C^{(\pm)}_s \over a^2} O_s(a) \right] \,. \label{eq:pcmixing} \end{eqnarray} Here $O^{(\pm)}_i$ are four-fermion operators of dimension 6 which have different chirality from $O^{(\pm)}$. They are listed in refs. \cite{BernardTasi}, \cite{talevi}--\cite{jlqcd}. The remaining operators are of lower dimension \begin{eqnarray} O_\sigma &=& g_0 \bar s \sigma_{\mu\nu} G_{\mu\nu} d \,,\label{eq:osigmadef}\\ O_s &=& \bar s d \,.
\label{eq:osdef}\end{eqnarray} Comparing the result in eq.~(\ref{eq:pcmixing}) with that for the negative parity parts of $O^{(\pm)}$, eq.~(\ref{eq:rops}), we see that CPS symmetry provides much weaker constraints for the parity conserving parts. In particular, the loss of the factor of $m_s-m_d$ means that the mixing with the scalar density diverges like $1/a^2$. Reference~\cite{MMRT} suggested the following approach for determining the mixing coefficients: use perturbative values for the coefficients of the dimension 6 operators and $O_\sigma$, but determine $C^{(\pm)}_s$ non-perturbatively. The latter determination is to be made by adjusting $C^{(\pm)}_s$ until the momentum independent part of the $K\to\pi$ matrix element in eq.~(\ref{eq:SPT2}) vanishes, i.e. $\delta^{(\pm)}=0$. This is the positive parity analogue of the condition~(\ref{eq:subt}), and can be justified by similar arguments. One problem with this approach is that a perturbative determination of mixing coefficients is often not reliable. This is exemplified by the case of $\Delta S=2$ operators, where the operator with non-perturbatively determined coefficients has, to a very good approximation, the expected chiral behaviour, while that with perturbative coefficients does not \cite{talevi,jlqcd}. Nevertheless, this approach should be pursued, because it could provide semiquantitative results for the $\Delta I=1/2$ rule. To date, no useful results have been obtained. There are three approaches which have been proposed to determine the subtraction coefficients of four-fermion operators in a non-perturbative way: Gauge Invariant Ward Identities (GIWIs)~\cite{MMRT}, Ward Identities on quark states~\cite{jlqcd} and non-perturbative renormalization between quark states at large external momenta~\cite{talevi}. We start with a discussion of the GIWIs, reformulated in the light of the recent progress in the exploitation of Ward identities in the context of improved actions~\cite{luscher}. 
As we shall explain below, this is the most promising approach in the presence of mixing with lower dimensional operators (and is more general than the present case). Consider the following Ward identity in the chiral limit: \begin{equation} \sum_{y \in \cal{R}} \langle \partial_\mu A^f_\mu(y)\, O^{(\pm)}_{\mathrm sub}(0)\, \Phi(x_1, x_2, \cdots, x_n)\rangle = -i \langle\frac{\delta O^{(\pm)}_{\mathrm sub}(0)}{\delta\alpha^f}\, \Phi(x_1, x_2, \cdots, x_n)\rangle \ , \label{eq:giwi}\end{equation} where ${\cal R}$ is a region of space-time containing the origin bounded by two hyperplanes $y_4=-t_{a}$ and $y_4=t_{b}$ and $f$ labels the flavour component of the axial transformation. $\Phi$ represents a multilocal gauge-invariant operator, with $x_1, x_2, \cdots,x_n$ all lying outside ${\cal R}$. $\delta O^{(\pm)}_{\mathrm sub}/\delta\alpha^f$ denotes the variation of the operators under infinitesimal axial rotations of the fields. As shown in ref.~\cite{MMRT}, in the chiral limit, there is a unique choice of the coefficients of the operators which belong to different chiral representations, i.e. $C_i^{(\pm)}$, $C_\sigma^{(\pm)}$, $C_s^{(\pm)}$ and $C_p^{(\pm)}$, such that the subtracted operators $O^{(\pm)}_{\mathrm sub}$ satisfy this Ward identity. By varying the points $x_1, x_2, \cdots, x_n$, eq.~(\ref{eq:giwi}) corresponds to an overdetermined set of linear inhomogeneous equations which, in principle, allow for the determination of all the mixing coefficients. Using the property that the axial current is conserved in the chiral limit, it is convenient to rewrite eq.~(\ref{eq:giwi}) as follows: \begin{equation} \sum_{\vec y} \langle \left(A^f_4(\vec y, t_b) - A^f_4(\vec y, -t_a)\right) O^{(\pm)}_{\mathrm sub}(0)\,\Phi(x_1, x_2, \cdots, x_n)\rangle = -i \langle\frac{\delta O^{(\pm)}_{\mathrm sub}(0)}{\delta\alpha^f} \,\Phi(x_1, x_2, \cdots, x_n)\rangle \ . 
\label{eq:giwi2}\end{equation} The above equation shows that there are no contact terms arising upon integration over $y$~\cite{luscher}. The absence of contact terms implies that we do not have to include the mixing with operators that vanish by the equations of motion. Although these operators do not contribute to physical matrix elements, in general (see below) they must be taken into account in the determination of the mixing coefficients. Notice that the coefficients determined in the chiral limit are sufficient to predict unambiguously the physical $K\to\pi\pi$ amplitudes. The other two non-perturbative methods mentioned above use quark and gluon correlators in a fixed gauge, and either impose normalization conditions at large Euclidean momenta \cite{talevi}, or enforce the Ward Identities on quark Green functions~\cite{jlqcd}. The problem with these methods is that they require the inclusion of two additional classes of operator\footnote{For a more detailed discussion of why such operators appear see ref.~\cite{dawson}.}: \begin{enumerate} \item Gauge invariant operators which vanish by the equation of motion. These do not contribute to on-shell matrix elements, but do contribute to the off-shell correlators used in these methods. In the present case there are three such operators of low enough dimension \begin{eqnarray} &&\bar s ({\overrightarrow{\slash D}} + m_d) d + \bar s (-{\overleftarrow{\slash D}} + m_s) d \,,\nonumber\\ && \bar s ({\overrightarrow{\slash D}} + m_d)^2 d + \bar s (-{\overleftarrow{\slash D}} + m_s)^2 d \,,\\ && \bar s (-{\overleftarrow{\slash D}} + m_s) ({\overrightarrow{\slash D}} + m_d) d \,.\nonumber \end{eqnarray} The coefficient of the first is proportional to $(m_c-m_u)/a$, while those of the latter two are proportional to $(m_c-m_u)$. These operators also appear in the continuum, but they can be removed in perturbation theory by studying how the form factors behave as one goes on-shell.
This is not possible in a numerical simulation. \item Operators which are not gauge invariant. These are, however, constrained to be either BRST invariant, or to vanish by the equations of motion \cite{zuber}. Note that there is an exact BRST symmetry on the lattice after gauge fixing \cite{gaugefix}. There are two such operators of low enough dimension, and with positive CPS parity, \begin{eqnarray} &&(m_c-m_u) \left[\bar s {\overleftarrow{\slash \partial}} ({\overrightarrow{\slash D}} + m_d) d - \bar s (-{\overleftarrow{\slash D}} + m_s) {\overrightarrow{\slash \partial}} d\right] \,,\\ &&(m_c-m_u) \left[\bar s {\overrightarrow{\slash \partial}} ({\overrightarrow{\slash D}} + m_d) d - \bar s (-{\overleftarrow{\slash D}} + m_s) {\overleftarrow{\slash \partial}} d\right] \,. \end{eqnarray} \end{enumerate} By varying the external momenta, and using suitable projectors~\cite{talevi,jlqcd}, it may be possible, in principle, to separate the contributions of these operators. The inclusion of five additional operators, when the normalization conditions are imposed on quark states in momentum space, seems, however, to make the possibility of a non-perturbative determination of the mixing coefficients quite remote in practice. These problems can be avoided by working in configuration space, rather than in momentum space. This can be achieved by studying (unamputated) quark Green functions of the form \begin{equation} \langle \psi_1(x_1)\psi_2(x_2)O^{(\pm)}(0)\bar\psi_3(x_3)\bar\psi_4(x_4) \rangle\ , \end{equation} where all the points are separated. In this case none of the operators which vanish by the equation of motion can appear. To enforce the Ward identities, one uses eq.~(\ref{eq:giwi2}) with a non-gauge invariant $\Phi(x_1,\cdots,x_4) = \psi_1(x_1)\psi_2(x_2)\bar\psi_3(x_3)\bar\psi_4(x_4)$. The expectation value has to be evaluated in a fixed gauge.
Although, in this case, we have the same number of coefficients to determine, we prefer the GIWI method, since it requires the evaluation of gauge invariant correlation functions only. \section{Construction of ${\mathbf{\cal H}^{W}_{\rm eff}}$ from first principles} \label{sec:jj} In this section we propose a method which, in principle, avoids the difficulties caused by mixing with lower dimension operators, and which automatically gives the effective weak Hamiltonian with the correct normalization. In addition, it allows one to construct an improved weak Hamiltonian, i.e. one having errors of $O(a^2)$, given only the improved versions of the weak currents. The method does not use chiral perturbation theory, and thus applies equally well to the $\Delta S=1$, $\Delta C=1$ and $\Delta B=1$ parts of the weak Hamiltonian. The method is speculative in the sense that it is likely to require more computational power than is presently available, although we expect it to be practical with the advent of Teraflops machines. The standard construction of the non-leptonic weak Hamiltonian begins with the expression \begin{equation} {\cal H}_{\rm eff}^{W}=g^2_{W}\int d^4x\,D_{\rho\nu}^W(x;M_{_W}) T\left[J_{\rho L}(x) J^\dagger_{\nu L}(0)\right] \, , \label{eq:HEFF} \end{equation} where \begin{equation} D_{\rho\nu}^{W}(x;M_{_W})=\int d^4p\,\frac{\mbox{e}^{ipx}} {p^2+M_{_W}^2} (\delta^{\rho\nu}-\frac{p^\rho p^\nu}{M_{_W}^2}) \label{WPROP} \end{equation} is the $W$-boson propagator and $J_{\rho L}$ is the (left-handed) hadronic weak current. One then performs an operator product expansion (OPE) on the product of the two currents in eq.~(\ref{eq:HEFF}), which is justified by the observation that the dominant contribution to the integral comes from distances $|x| \ll M_{_W}^{-1}$. 
For physical amplitudes, one obtains in this way \begin{equation} \langle h|{\cal H}_{\rm eff}^{W}|h'\rangle = \frac{G_{F}}{\sqrt{2}} \sum_i C_i(\mu,M_{_W}) M_{_W}^{6-d_i} \langle h|{O}^{(i)}(\mu)|h'\rangle\ , \label{eq:HEFFOPE} \end{equation} where $d_i$ is the dimension of the operator ${O}^{(i)}(\mu)$, and the functions $C_i(\mu,M_{_W})$ result from the integration of the Wilson expansion coefficients, $c_i(x;\mu)$ (defined in eq.~(\ref{eq:ME}) below), with the $W$-propagator. Schematically, suppressing Lorentz indices, one has \begin{equation} C_i(\mu,M_{_W}) M_{_W}^{6-d_i} = \int d^4x\,D^{W} (x;M_{_W}) c_i(x;\mu)\ . \label{eq:WILCOEF} \end{equation} The ${O}^{(i)}(\mu)$ are quark and/or gluon operators renormalized at the subtraction point $\mu$. The functions $C_i(\mu,M_{_W})$ are evaluated in perturbation theory and their running with $\mu$ is dictated by the renormalization group equation which follows from the $\mu$-independence of the l.h.s. of eq.~(\ref{eq:HEFFOPE}). The sum in the expansion~(\ref{eq:HEFFOPE}) is over operators of increasing dimension. We consider in the following only operators with dimensions $d_i \le 6$, since the contribution from operators with $d_i>6$ is suppressed by powers of $1/M_{_W}$. All the intricacies of operator mixing in the definition of the finite and renormalized operators, ${O}^{(i)}(\mu)$, come about because the integrals in~(\ref{eq:HEFF}) and~(\ref{eq:WILCOEF}) are extended down to the region of extremely small $x$. The complicated mixing for the ${O}^{(i)}(\mu)$'s in terms of bare operators arises from contact terms when the separation of the two currents goes to zero (i.e. when $|x|$ is of the order of $a$). The problem is particularly bad because chiral symmetry is broken by the lattice regularization.
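The leading-log solution of this renormalization group equation can be made concrete with a short stand-alone sketch (our own illustration, not part of the original analysis: one-loop running only, with the reference values $\alpha_s(M_Z)\simeq 0.118$ and $n_f=5$, and a generic anomalous dimension $\gamma_0$, all taken as illustrative assumptions):

```python
import math

def alpha_s(Q, Q0=91.2, a0=0.118, nf=5):
    """One-loop running coupling, normalized by a0 = alpha_s(Q0)."""
    beta0 = 11.0 - 2.0 * nf / 3.0
    return a0 / (1.0 + a0 * beta0 / (2.0 * math.pi) * math.log(Q / Q0))

def C_leading_log(mu, MW=80.4, gamma0=4.0, nf=5):
    """Leading-log Wilson coefficient,
       C(mu, MW) = (alpha_s(MW) / alpha_s(mu)) ** (gamma0 / (2 beta0))."""
    beta0 = 11.0 - 2.0 * nf / 3.0
    return (alpha_s(MW, nf=nf) / alpha_s(mu, nf=nf)) ** (gamma0 / (2.0 * beta0))
```

At $\mu = M_{_W}$ the coefficient is 1 by construction; lowering $\mu$ suppresses or enhances it according to the sign of $\gamma_0$. None of this running is problematic in itself; the difficulties emphasized above stem entirely from the contact-term region $|x|\sim a$.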
This observation suggests that a simple way to avoid these complications is to implicitly define the renormalized operators by enforcing the OPE on the lattice for distances $|x|$ much larger than the lattice spacing $a$. We imagine proceeding in the following way: \begin{enumerate} \item Take the $T$-product of two properly normalized weak currents, $J_{\rho L}(x) J^\dagger_{\rho L}(0)$. If required these currents can be improved. \item Measure the hadronic matrix element $\< h\vert T[J_{\rho L}(x) J^\dagger_{\rho L}(0)]\vert h'\>$ in a Monte Carlo simulation, as a function of $x$ for $|x|\rightarrow 0$ in the region \begin{equation} a\ll |x| \ll \Lambda_{QCD}^{-1} \,.\label{eq:COND} \end{equation} \item Extract the numbers $\< h\vert {O}^{(i)}(\mu)\vert h'\>$ by fitting in the region~(\ref{eq:COND}) the $x$-behaviour of $\< h\vert T[J_{\rho L}(x) J^\dagger_{\rho L}(0)]\vert h'\>$ to the formula \begin{equation} \< h\vert T\left[J_{\rho L}(x) J^\dagger_{\rho L}(0)\right]\vert h' \> = \sum_i c_i(x;\mu) \< h\vert {O}^{(i)}(\mu)\vert h' \> \,,\label{eq:ME} \end{equation} where the Wilson coefficients $c_i(x;\mu)$ are determined by continuum perturbation theory using any standard renormalization scheme. The scale $\mu$ should be chosen so that $1/\mu$ lies in the range defined by eq.~(\ref{eq:COND}). Since we only consider operators of dimension 6 or lower, the $T$-product differs from the right-hand side of eq.~(\ref{eq:ME}) by terms of $O(|x|^2\Lambda_{\rm QCD}^2)$, which is an estimate of the size of the systematic errors in this procedure. Note that in eq.~(\ref{eq:ME}) an average over the points $x$ and $-x$ (schematically $J(x)J(0)\to 1/2(J(x)J(0) + J(-x)J(0))$) is implied in order to eliminate from the OPE terms which do not appear in physical amplitudes because of the integration over $x$ in eq.~(\ref{eq:HEFF}). These terms, however, would appear on the r.h.s. of eq.~(\ref{eq:ME}) were we to not perform this average. 
\item Insert the numbers $\< h\vert {O}^{(i)}(\mu)\vert h'\>$ determined in this way into the expression for the matrix elements of ${\cal H}_{\rm eff}^{W}$, finally obtaining eq.~(\ref{eq:HEFFOPE}). \end{enumerate} For the implementation of this procedure, what is required is the existence of a window, eq.~(\ref{eq:COND}), in which the distance between the two currents is small enough so that perturbation theory can be used to determine the expected form of the OPE, but large enough that lattice artifacts are small. These artifacts will be suppressed by powers of $a/x$. Clearly the existence of such a window requires that we have a sufficiently small lattice spacing. At the same time the physical volume of the lattice must be sufficiently large to allow the formation of hadrons. A few remarks may be useful at this point: \begin{itemize} \item The method determines directly the ``physical'' matrix elements of the operators appearing in the OPE of the two currents, i.e. the matrix elements of the finite, renormalized operators ${O}^{(i)}(\mu)$, without any reference to the magnitude of the $W$-mass. Thus we do not need to probe distances of $O(1/M_{_W})$ with lattice calculations. \item If the action and currents are improved, then the resulting matrix elements of ${O}^{(i)}(\mu)$, and thus of ${\cal H}_{\rm eff}^{W}$, will also be improved. \item The $\mu$-dependence of the matrix elements of the operators ${O}^{(i)}(\mu)$ is given trivially by that of the (perturbative) Wilson coefficients, $c_i(x;\mu)$. It compensates the related $\mu$-dependence of the functions $C_i(\mu,M_{_W})$ in such a way that the l.h.s of eq.~(\ref{eq:HEFFOPE}) is independent of the choice of the subtraction point. A similar comment holds for the dependence on renormalization scheme. 
\item Unlike the methods discussed in previous sections, this approach automatically yields hadronic amplitudes that are properly normalized (in the renormalization scheme in which the Wilson coefficients appearing in eq.~(\ref{eq:ME}) are computed). \end{itemize} We now discuss the feasibility of the method in more detail. The critical step is fitting to the form predicted by the OPE, eq.~(\ref{eq:ME}). Typically more than one operator contributes to the sum, so one must be able to separate the contributions using their different dependence on $x$. The operators of interest are of dimension 6, and thus have Wilson coefficients which vary logarithmically with $x$. At leading order the form is \begin{equation} c_i(x;\mu) \propto \left(\alpha_s(1/x) \over \alpha_s(\mu)\right)^{\gamma_0^{(i)}\over 2 \beta_0} = 1 + \frac{\alpha_s}{4\pi}\gamma_0^{(i)} \log(x \mu) + \dots \,, \label{eq:formofci} \end{equation} where $\gamma_0^{(i)}$ is the one-loop anomalous dimension of the operator $O^{(i)}$, and $\beta_0$ is the coefficient of the one-loop term in the $\beta$-function. By contrast, the coefficients of lower dimension operators diverge as powers of $1/x$ (up to logarithmic corrections). Thus if lower dimension operators are present they will dominate at short distances, and it will be very difficult to pick out the matrix elements of the dimension 6 operators. If, on the other hand, only dimension 6 operators appear then it may be possible to separately determine their matrix elements. How feasible this is depends on how large a range of $x$ we can use, and on the magnitude of the differences between the anomalous dimensions. Fortunately, in the cases of interest, there are no operators of dimension lower than 6. Consider, for example, the $\Delta S=1$ part of ${\cal H}_{\rm eff}^{W}$.
The operators which can appear in the OPE are $O^{(\pm)}$ [defined in eq.~(\ref{eq:oplmidef})], and in addition \begin{equation} O' = (m_c^2-m_u^2) \, \bar s (\overrightarrow{D_\mu} - \overleftarrow{D_\mu}) \gamma_\mu^L d \,. \end{equation} The GIM mechanism requires $O'$ to vanish when $m_c=m_u$, while chiral symmetry requires both the quarks to be left-handed and that the GIM factor be quadratic in the quark masses. Although this operator looks new, its negative parity part is, by the equations of motion, proportional to $(m_c^2-m_u^2)\,O_p$, where $O_p$ is defined in eq.~(\ref{eq:opdef}), and its positive parity part is proportional to $(m_c^2-m_u^2)\, (m_s + m_d)\, O_s$. So these are the same operators we encountered in sections~\ref{sec:problem} and \ref{sec:pp}, except for the overall factors. Since $O'$ has dimension 6, its coefficient function depends only logarithmically on $x$.\par To determine the matrix elements using eq.~(\ref{eq:ME}) we need the anomalous dimensions, which for the three operators are \begin{equation} \gamma^{(+)}_0 = 4, \qquad \gamma^{(-)}_0 = -8, \qquad \gamma '_0 = 16\ . \label{eq:ANOMALDIM} \end{equation} In fact, the contribution of $O'$ to the r.h.s. of eq.~(\ref{eq:ME}) can be determined separately since its matrix element does not require any subtraction and can be calculated directly. As for $O^{\pm}$, since their anomalous dimensions are well separated from one another, it may be possible to extract the corresponding matrix elements and then construct the physical amplitude of ${\cal H}_{\rm eff}^{\Delta S=1}$. An important element of the procedure proposed in this section is that since it is the continuum OPE which determines the operators which appear, these are restricted by continuum symmetries. This is because, for $|x|\gg a$, the lattice OPE matches that of the continuum with discretization errors suppressed by powers of $a/x$.
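To see how the separation works in practice, consider a small stand-alone sketch (entirely illustrative: the choice $n_f=4$, the value of $\alpha_s$ in the fit window, and the synthetic ``data'' are our own assumptions). It computes the leading-log exponents $\gamma_0/2\beta_0$ for the anomalous dimensions quoted above and then performs the linear fit of eq.~(\ref{eq:ME}) at leading order, $c_i(x;\mu) \simeq 1 + \frac{\alpha_s}{4\pi}\gamma_0^{(i)}\log(x\mu)$, for the pair $O^{(\pm)}$:

```python
import math

ALPHA_S = 0.2            # illustrative value of the coupling in the fit window
NF = 4                   # active flavours assumed in the window
BETA0 = 11.0 - 2.0 * NF / 3.0

def ll_exponent(gamma0):
    """Leading-log exponent gamma0 / (2 beta0) governing c_i(x; mu)."""
    return gamma0 / (2.0 * BETA0)

def c_lo(x, mu, gamma0):
    """Leading-order coefficient c_i(x; mu) = 1 + (a_s / 4 pi) gamma0 log(x mu)."""
    return 1.0 + ALPHA_S / (4.0 * math.pi) * gamma0 * math.log(x * mu)

def fit_two_operators(xs, T, mu, g1, g2):
    """Normal-equations least-squares fit of <h|O1|h'>, <h|O2|h'> from
       T(x) = c1(x) M1 + c2(x) M2 sampled at the points xs."""
    a11 = sum(c_lo(x, mu, g1) ** 2 for x in xs)
    a12 = sum(c_lo(x, mu, g1) * c_lo(x, mu, g2) for x in xs)
    a22 = sum(c_lo(x, mu, g2) ** 2 for x in xs)
    b1 = sum(c_lo(x, mu, g1) * t for x, t in zip(xs, T))
    b2 = sum(c_lo(x, mu, g2) * t for x, t in zip(xs, T))
    det = a11 * a22 - a12 * a12
    return (b1 * a22 - b2 * a12) / det, (a11 * b2 - a12 * b1) / det
```

With $n_f=4$ the exponents for $O^{(+)}$, $O^{(-)}$ and $O'$ come out as $0.24$, $-0.48$ and $0.96$; it is this separation between the exponents, rather than their absolute size, that the fit exploits.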
The previous discussion shows how we can, in principle, remove the contribution of the scalar and pseudoscalar densities from any hadronic matrix element. Not only does the method work for both positive and negative parity parts of ${\cal H}_{\rm eff}^{W}$, but it also works if the weak Hamiltonian carries momentum $\Delta p$. Thus the simplest way to test the method may be to calculate $K\to\pi$ matrix elements and then use chiral perturbation theory to relate these to the physical amplitude, as described in the previous section. The problems of operator mixing described in sec.~\ref{sec:pp} do not apply to the new method. Of course, for physical matrix elements one does not need to worry about the subtraction of $O'$. This is because the matrix elements determined by this method are those of the continuum operator, up to discretization errors. We end this section with an observation on the computational feasibility of this approach. The main difficulty is to have a sufficiently large range of values of $|x|$ in order to separate the contributions from the different operators, and yet to satisfy the condition~(\ref{eq:COND}). These constraints make the method difficult at present, and it will only be fully exploited when Teraflops machines become available. \section{A propagating top quark} \label{sec:top} For CP violating processes in kaon decays, or for $B$ decays where top-penguin diagrams enter at a Cabibbo-allowed level, the strategies described in sec.~\ref{sec:new} and \ref{sec:jj} for the negative parity operators fail, because the GIM mechanism is not operative. In particular $O^{(\pm)}$ mix with all the penguin operators (see below). This makes the calculation of the mixing matrix of comparable difficulty to that for the positive parity operators described in sec.~\ref{sec:pp}. For positive parity operators, the analysis of section~\ref{sec:pp} still applies. 
The difference is that the mixing coefficients of the magnetic operator and scalar density become more divergent~\cite{MMRT} and this can make the numerical determination of the renormalized operators less precise. In order to circumvent these problems we propose two methods involving a fictitious top quark (with mass $\widetilde m_t$) which is light enough to propagate on the lattice. The basic idea is to work with two different scales: the first, $\mu$, is larger than $\widetilde m_t$, so that the corresponding operator basis is as in the previous sections ($O^{(\pm)}$); the second, $\mu^\prime$, is smaller than $\widetilde m_t$ so that a full set of penguin operators is generated. The matrix elements of the operators renormalized at the scale $\mu$ are computed numerically following the strategies explained in secs.~\ref{sec:new}--\ref{sec:jj}. By matching the result to the amplitude expressed in terms of operators renormalized at $\mu'$, we extract their matrix elements. In this way, at least in principle, we can obtain the matrix elements of the penguin operators without directly computing them. We now present the details of this procedure. At scales $\mu'$ below $m_t$, when GIM is not operative, the form of the $\Delta S=1$ effective Hamiltonian is \begin{eqnarray} {\cal H}_{eff}^{\Delta S=1}&=&\frac {G_F} {\sqrt{2}} \Bigl[ \lambda_u\, \Bigl( C_1(\mu^\prime,M_{_W})\left( Q^u_1(\mu^\prime) - Q_1(\mu^\prime) \right) + C_2(\mu^\prime,M_{_W}) \left( Q^u_2(\mu^\prime) - Q_2(\mu^\prime) \right) \Bigr)\nonumber\\ &-&\lambda_t \, \vec C(\mu^\prime,M_{_W}, {m}_t) \cdot\vec Q(\mu^\prime) \Bigr] \protect\label{eq:ehmu} \end{eqnarray} where $\lambda_q=V_{qd} V_{qs}^*$, $q=u,c,t$ ($\lambda_c$ is eliminated by using the unitarity relation $\lambda_c = -\lambda_u-\lambda_t$). 
Here $\vec Q$ contains the ``penguin'' operators \begin{equation} \vec Q(\mu^\prime)\equiv \left(Q_1(\mu^\prime),Q_2(\mu^\prime),\dots, Q_{6}(\mu^\prime)\right) \end{equation} and $\vec C$ are the corresponding coefficients \begin{equation} \vec C(\mu^\prime)\equiv( C_1(\mu^\prime,M_{_W}),C_2(\mu^\prime,M_{_W} ), C_3(\mu^\prime,M_{_W},{m}_t), \dots, C_{6}(\mu^\prime,M_{_W}, {m}_t)) \,. \end{equation} A convenient basis of operators when QCD corrections are taken into account is~\cite{russi}--\cite{lus} \begin{eqnarray} Q_{ 1}&=&({\bar s}d)_{ (V-A)} ({\bar c}c)_{ (V-A)}\,, \nonumber\\ Q_{ 2}&=&({\bar s}c)_{ (V-A)} ({\bar c}d)_{ (V-A)}\,, \nonumber\\ Q_{ 3,5} &=& ({\bar s}d)_{ (V-A)} \sum_{q}({\bar q}q)_{ (V\mp A)} \protect\label{eq:basis}\\ Q_{ 4} &=&\sum_{q} ({\bar s}q)_{ (V-A)} ({\bar q}d)_{ (V - A)} \nonumber\\ Q_{6} &=& -2 \sum_{q} ({\bar s}q)_{ (S+P)} ({\bar q}d)_{ (S-P)} \nonumber \,.\end{eqnarray} $Q_1^{u,t}$ and $Q_2^{u,t}$ are the operators analogous to $Q_1$ and $Q_2$, with the up- and top-quark replacing the charmed one. Here the subscripts $(V \pm A)$ and $(S \pm P)$ indicate the chiral structures, and the sum over quarks $q$ runs over the active flavours at the scale $\mu^\prime$. For simplicity we ignore electroweak penguin and magnetic operators; it is straightforward to generalize the following discussion to include them. The coefficient functions appearing above have been calculated up to non-leading order in perturbation theory in refs.~\cite{noi}--\cite{ciuz}. The effective Hamiltonian relevant for $\Delta B=1$ decays is simply obtained by replacing the $s$ quark with the $b$ quark. The part of ${\cal H}_{\rm eff}^{\Delta S=1}$ proportional to $\lambda_u$ is the same as that considered above in eq.~(\ref{eq:HW}), and has been the focus of discussion for much of the paper.
We have simply re-expressed it here in the new operator basis, in terms of which \begin{eqnarray} O^{\pm} &=& (Q_1^u - Q_1) \pm (Q_2^u - Q_2) \,, \\ C_\pm(\mu, M_{_W}) &=& \frac12 \left[ C_1(\mu, M_{_W}) \pm C_2(\mu, M_{_W}) \right] \,. \end{eqnarray} This part of ${\cal H}_{\rm eff}^{\Delta S=1}$ gives the dominant contribution to CP conserving $K\to\pi\pi$ amplitudes. These amplitudes can be calculated using the methods presented in secs.~\ref{sec:new}--\ref{sec:jj}. The difficulties arise for the part of ${\cal H}_{\rm eff}^{\Delta S=1}$ proportional to $\lambda_t$, which gives rise to CP violation in kaon decays. This part contains the penguin operators $Q_i$, whose matrix elements are not protected by the GIM mechanism. To write a renormalized version of these operators requires subtracting $O_p$ and $\widetilde O_\sigma$ with appropriate coefficients, and accounting for mixing with all the other operators $Q_j$. This is true for both positive and negative parity sectors. The methods of this section are designed to avoid these difficulties. We do so by introducing a dynamical top quark so as to keep the GIM mechanism operative. This will, however, be a fictitious top quark with mass satisfying \begin{equation} 1/a \gg \widetilde{m}_t \gg m_c \gg \Lambda_{\rm QCD} \,. \end{equation} In other words, our top is light enough to propagate on the lattice, but, like the physical top, it is heavier than the charm quark. As explained below, by using the fictitious top we can extract the matrix elements $ \langle h \vert Q_i(\mu') \vert h^\prime \rangle$ which can then be inserted into the expression for ${\cal H}_{\rm eff}^{\Delta S=1}$, eq.~(\ref{eq:ehmu}). In this respect the method is similar to that of sec.~\ref{sec:jj}. For purposes of illustration we will restrict the discussion below to negative parity operators. 
We begin with the two matrix elements \begin{equation} {\cal M}_i(\mu,\widetilde m_t) \equiv \langle h \vert Q_i(\mu) - Q^t_i(\mu) \vert h^\prime \rangle \, , \quad i=1,2 \label{eq:match1}\end{equation} evaluated at a renormalization scale satisfying $a^{-1} \sim \mu \gg \widetilde m_t$. Since GIM is operative, the analysis of sec.~\ref{sec:problem} applies (with $m_u\to m_t$). Thus we can define $Q_i(\mu) - Q^t_i(\mu)$ in terms of bare lattice operators by \begin{equation} Q_i(\mu) - Q^t_i(\mu) = Z_{ij}(\mu a,g_0^2) \left[ Q_j(a) - Q^t_j(a) + (m_c-\widetilde{m}_t) {C_p^{(j)} \over a} O_p(a) \right] \,, \end{equation} where $i,j=1,2$ and the subtraction coefficients $C_p^{(j)}$ are determined by enforcing \begin{equation} \langle 0 \vert Q_j(a) - Q^t_j(a) + (m_c-\widetilde{m}_t) {C_p^{(j)} \over a} O_p(a) \vert K \rangle = 0 \,. \end{equation} The $Z_{ij}$ are related by a simple change of basis to the $Z^{(\pm)}$ of eq.~(\ref{eq:rops}), and can be calculated either perturbatively or non-perturbatively. In this way we can obtain ${\cal M}_i(\mu,\widetilde m_t)$ from a lattice calculation, as a function of $\widetilde{m}_t$, for some choice of $\mu$. A simple choice is $\mu \approx 1/a$. On the other hand, we can also consider the same matrix elements for a renormalization scale $\widetilde{m}_t\gg \mu' \gg m_c$. In this case, the GIM mechanism is not operative, and the matrix elements can be expressed in terms of the six operators which appear in eq.~(\ref{eq:basis}) \begin{equation} {\cal M}_i(\mu,\widetilde m_t) = \sum_{j=1,6} \widehat Z^{-1}_{ij}(\mu^\prime,\mu, \widetilde{m}_t) \langle h \vert Q_j(\mu^\prime) \vert h^\prime \rangle \, . \label{eq:match2} \end{equation} The rectangular matrix $\widehat Z^{-1}$ can be calculated perturbatively by matching the theory with and without the fictitious top quark. 
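Since the condition fixing $C_p^{(j)}$ is linear, it can be solved directly from two measured vacuum-to-kaon matrix elements. A schematic sketch (all numerical inputs below are placeholders for Monte Carlo measurements, not real data):

```python
def subtraction_coefficient(me_Qdiff, me_Op, m_c, m_t_tilde, a):
    """Solve  <0| Q_j - Q_j^t |K> + (m_c - m_t~) (C_p / a) <0| O_p |K> = 0
       for the subtraction coefficient C_p."""
    return -a * me_Qdiff / ((m_c - m_t_tilde) * me_Op)
```

Plugging $C_p^{(j)}$ back into the condition gives a vanishing residual by construction, which serves as a numerical cross-check. In the full procedure one still needs the rectangular matrix $\widehat Z^{-1}$, obtained perturbatively by matching the theory with and without the fictitious top quark.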
This is exactly the method used to calculate the coefficients $\vec C$ in the theory with the physical top quark mass, except that in the physical case one must simultaneously integrate out both the $W$ boson and the top quark. Here we are effectively integrating out the $W$ first and then the fictitious top quark. The results for $\widehat Z^{-1}$ at non-leading order can be reconstructed from those computed in refs.~\cite{noi}--\cite{ciuz}. The running in the $1$--$2$ submatrix is particularly simple (since the operators $Q_{3}$--$Q_{6}$ do not feed back into $Q_1$ and $Q_2$) and one can show that \begin{equation} Z_{ij}(\mu^\prime a , g_0^2)= \sum_{k=1,2} \widehat Z_{ik}(\mu^\prime, \mu ) Z_{kj}(\mu a , g_0^2) \, , \label{eq:matchzz}\end{equation} with $(i,j)=1,2$. Here $\widehat Z_{ik}$ is the inverse of the $2\times 2$ sub-block of the mixing matrix appearing in eq.~(\ref{eq:match2}). This submatrix does not depend on $\widetilde m_t$, but only on $\mu'$ and $\mu$. We now choose a value of $\mu^\prime$ and vary $\widetilde m_t$. The six matrix elements of interest, $\langle h \vert Q_j(\mu^\prime) \vert h^\prime \rangle$, are obtained by fitting the right hand side of eq.~(\ref{eq:match2}) to ${\cal M}_i(\mu,\widetilde m_t)$ computed numerically as in eq.~(\ref{eq:match1}), and using the renormalization matrix $\widehat Z^{-1}$ calculated perturbatively. Since the dependence on $\widetilde m_t$ is logarithmic, this will not be easy. The procedure is analogous to our use of the $x$-dependence in sec.~\ref{sec:jj} to separate the renormalized matrix elements of operators appearing in the weak Hamiltonian. Having determined these renormalized matrix elements, we can insert them into the expression (\ref{eq:ehmu}) for the effective Hamiltonian. 
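The scale evolution entering eq.~(\ref{eq:matchzz}) can be sketched for the $1$--$2$ sub-block, which at leading order is diagonal in the $\pm$ combinations $Q_1 \pm Q_2$. The following stand-alone illustration is our own (one-loop running with illustrative reference values, the leading-order anomalous dimensions $\gamma^{(\pm)}_0 = 4, -8$ quoted earlier, $n_f=4$, and no feed-down from $Q_3$--$Q_6$, which as noted above is absent for this sub-block):

```python
import math

def alpha_s(Q, Q0=91.2, a0=0.118, nf=4):
    """One-loop running coupling with illustrative reference values."""
    beta0 = 11.0 - 2.0 * nf / 3.0
    return a0 / (1.0 + a0 * beta0 / (2.0 * math.pi) * math.log(Q / Q0))

def evolve_Q12(M1, M2, mu_from, mu_to, nf=4):
    """Leading-log evolution of (<Q1>, <Q2>): rotate to the diagonal basis
       Q1 +/- Q2, scale by (alpha_s(from)/alpha_s(to))^(gamma/(2 beta0))
       with gamma = +4 and -8, then rotate back."""
    beta0 = 11.0 - 2.0 * nf / 3.0
    r = alpha_s(mu_from, nf=nf) / alpha_s(mu_to, nf=nf)
    wp, wm = r ** (4.0 / (2.0 * beta0)), r ** (-8.0 / (2.0 * beta0))
    Mp, Mm = (M1 + M2) * wp, (M1 - M2) * wm
    return (Mp + Mm) / 2.0, (Mp - Mm) / 2.0
```

It is this perturbative evolution which allows the matrix elements extracted at one scale to be inserted into eq.~(\ref{eq:ehmu}) at another.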
At this point the constraint $M_{_W} \gg \widetilde{m}_t$ can be removed since the Wilson coefficients of the operators appearing in ${\cal H}_{eff}^{\Delta S=1}$ can be computed perturbatively for arbitrary values of $\widetilde m_t$, including $\widetilde m_t=m_t$. Before discussing the errors involved in this procedure, we make the following observation. In the effective theory where the top quark has been removed, provided that $\mu' \gg \Lambda_{\rm QCD}$, we can evolve the renormalized operators from one scale to another using perturbation theory. In particular we can obtain $\langle h \vert Q_j(\mu) \vert h^\prime \rangle$ from $\langle h \vert Q_j(\mu') \vert h^\prime \rangle$ for all six operators using the $6\times 6$ anomalous dimension matrix computed in perturbation theory with the effective Hamiltonian where the top quark has been integrated out. Thus we can directly extract the matrix elements of the operators $\vec Q(\mu)$ from ${\cal M}_i(\mu, \widetilde m_t)$. Note, however, that an accurate determination of the matrix elements of $\vec Q(\mu)$ (or of $\vec Q(\mu^\prime)$) requires that the typical scale, $\Lambda_{hh'}$, of masses and external momenta appearing in the physical process $h^\prime \to h$ be much smaller than $\widetilde m_t$. This is because in the matching procedure we neglect terms of $O(\Lambda_{hh'}/\widetilde m_t)$. An alternative method, in the same spirit as the approach followed in sec.~\ref{sec:jj}, is the following. We can avoid the need for any non-perturbative subtraction by separating the two currents and using a fictitious propagating top quark. 
Thus we directly match $\langle h\vert T(J_{\rho L}(x) J^\dagger_{\rho L}(0)) \vert h'\rangle_{\rm top}$, where the subscript indicates the presence of the fictitious top, to the formula \begin{equation} \langle h\vert T(J_{\rho L}(x) J^\dagger_{\rho L}(0)) \vert h' \rangle_{\rm top} = \sum_{i=1,6} c_i(x; \mu^\prime, \widetilde{m}_t) \langle h\vert {O}^{(i)}(\mu^\prime)\vert h' \rangle \,,\label{eq:MEtop} \end{equation} where \begin{eqnarray} J_{\rho L}(x) J^\dagger_{\rho L}(0)&=& \bar s(x) \gamma_\rho (1 -\gamma_5) t(x) \bar t(0) \gamma_\rho (1 -\gamma_5) d(0) \nonumber \\ &-& \bar s(x) \gamma_\rho (1 -\gamma_5) c(x) \bar c(0) \gamma_\rho (1 -\gamma_5) d(0) \, .\end{eqnarray} The coefficients $c_{1,2}(x; \mu^\prime, \widetilde{m}_t)\equiv c_{1,2}(x; \mu^\prime)$ are the same as those in sec.~\ref{sec:jj}. The coefficients $c_i(x; \mu^\prime, \widetilde{m}_t)$, with $i=3$--$6$ are complicated functions of the anomalous dimension matrix which can be worked out from the results of refs.~\cite{buras2} and \cite{noi2} and computed numerically. Both methods proposed in this section require small enough lattice spacings to accommodate a number of scales. Like the method of sec.~\ref{sec:jj} their full implementation is likely to require the next generation of supercomputers. \section{Conclusion} \label{sec:concs} In this paper we have suggested a number of new approaches with which to study the $\Delta I=1/2$ rule\ using Wilson-like fermions. These methods can also be used for staggered fermions. In order to obtain the physical $K\to\pi\pi$ amplitude without relying on chiral perturbation theory, or to study decays such as $B\to\pi\pi$, one must learn how to extract information on final state interactions from Euclidean amplitudes. The method of ref.~\cite{cfms} might make this possible, but detailed numerical studies are needed to assess whether it is practical. The calculation of the CP violating part of $K\to\pi\pi$ amplitudes with Wilson quarks is very difficult. 
A completely nonperturbative method may require the addition of a fictitious top quark. We have also reevaluated the method of ref.~\cite{MMRT} involving $K\to\pi$ amplitudes. This approach is likely to be more difficult because of the large number of mixing coefficients which have to be determined non-perturbatively. It may however provide complementary information to the results obtained with the $K\to\pi\pi$ method, and a check of the accuracy of chiral relations. Only numerical studies will be able to confirm or refute our present intuition that the $K\to\pi\pi$ methods are likely to provide the better results in the near future. \section*{Acknowledgements} We would like to thank R. Gupta and A. Vladikas for discussions. C.D., G.M., G.C.R., C.T.S., M.Ta. and M.Te. acknowledge partial support from EU contract CHRX-CT92-0051. M.Ta. thanks the Universit\`a di Roma ``La Sapienza'' where part of the work for this paper was carried out and acknowledges INFN for partial support and EPSRC for its support through grant GR/K41663. C.D., M.Ta. and C.T.S. acknowledge the PPARC for its support through studentship 9530247X and grants GR/L22744 and GR/J21569, respectively. G.M., G.C.R., M.Ta. and M.Te. acknowledge partial support by M.U.R.S.T. C.D. and S.S. thank the University of Rome ``La Sapienza'' for its hospitality, and the INFN for partial support. S.S. was also partially supported by the U.S. Department of Energy grant DE-FG03-96ER40956.
\section{Introduction} In recent years, neural networks (NNs) have become state of the art in processing various kinds of data types, such as high-dimensional numerical data \cite{}, images \cite{}, time series \cite{}, or language data \cite{jones1995,collobert2011,martinez2013,cambria2012,lawrence2000}, to name but a few. Standard applications are classification or regression tasks, and in many cases NNs are outperforming classical approaches significantly. Also in the field of anomaly detection \cite{taylor2016,sakurada2014,zimek2012,erfani2016} and object recognition \cite{felzenszwalb2010,viola2001,sung2002,ohnbar2016,wojek2012,kobatake1996,bai2010}, NNs have proven to be powerful approaches. Recently, it has been shown that NNs can also be used as data generators, especially in settings of generative adversarial networks (GANs) \cite{goodfellow2014,denton2015,radford2015,salimans2016,zhao2016}. Therein, two networks -- the generator and the discriminator -- compete with each other in a way that the generator learns to generate synthetic data that exhibits the specific properties and characteristics of the training data. A similar task can also be performed by variational autoencoders \cite{kingma2014}. Although the mathematical background of NNs has been known for decades, some of the biggest development steps and successful applications have been presented only in recent years. These breakthroughs are mainly due to two reasons: On the one hand, we are today equipped with the required computational power, especially in the form of GPUs, in order to perform the network training on a reasonable time scale. On the other hand, substantial knowledge about network architectures has been developed, e.\,g.~concerning convolutional (CNN) or recurrent (RNN) neural networks.
A special field of application which could not be thought of without the progress mentioned above is that of style transfer networks \cite{gatys2016,pix2pix,CycleGAN,UNIT,MUNIT,huang2017,sanakoyeu2018,wang2017, li2018,ledig2016}. These algorithms have been developed with special regard to the data type of digital images. Their task is to translate an image of a certain domain $\mathcal{A}$ (e.\,g.~a photo) into the style of a different domain $\mathcal{B}$ (e.\,g.~an artistic painting). This problem can be approached from different perspectives and in the literature, one can find methods using direct optimization procedures \cite{gatys2016}, methods working on paired images \cite{pix2pix}, and approaches performing unpaired image translations \cite{CycleGAN,UNIT,MUNIT} (see also Sec.~\ref{sec:related-work}). Unpaired frameworks have the advantage that they do not require one-to-one training examples from both domains, which are often not easily available or do not even exist. Instead, they use a special kind of NN arrangement in an extended GAN setting which makes it possible to train the translation using unpaired image examples. Unpaired domain translation settings are also the subject of this paper and we focus on the special case of high-resolution images, by which we understand the several to high megapixel regime. This is in contrast to many previous machine learning papers on image tasks which have demonstrated applications on publicly available data sets. Typical representatives of these image data sets are MNIST ($28 \times 28$ pixels), CIFAR10 ($32 \times 32$ pixels), Caltech101 (about $300 \times 200$ pixels), or others with typical image resolutions on the order of some hundred to some ten thousand pixels altogether. However, these data sets fall far short of the resolutions that are typical for today's camera systems. Even simple smartphone cameras easily reach the double-digit megapixel regime and they can record videos in Full HD (about 2 megapixels).
Video systems with 4K resolution (about 8 megapixels) are commercially available and today's standard of DSLR cameras is on the order of 20 megapixels and above. Today, there is a clear demand for such highly resolved images to capture relevant image details. As an example, video systems developed for the use case of autonomous driving work on high-resolution images to provide a sufficiently detailed view of the car's surroundings. These numbers demonstrate the clear need to develop machine learning algorithms that are capable of handling today's high-resolution image data. In this paper, we address the problem of unpaired domain translation with special regard to this issue of being able to process high-resolution images. We discuss that current methods suffer from a high peak memory consumption during training and translation which sets a natural limit to the largest processable image size on a given GPU. To solve this issue, we introduce a scalable method which is able to work on arbitrarily high resolutions without increasing the peak memory consumption of the NN. We achieve this goal with a simple idea: instead of processing the whole image at once, we train and apply the domain translation on the level of small, overlapping image subsamples. For the training of the underlying generators, each of the existing methods can be applied, and we use a \textsc{Unit}-like framework~\cite{UNIT} for our investigations in this paper. A question arising with the task of image translation is how the styles of the domains $\mathcal{A}$ and $\mathcal{B}$ are defined. Since there is a high variability in real-world data samples, NNs are usually trained on huge data sets representing this high variance. By contrast, there can also be the need to handle the opposite case of low-variance data: An example is the case that one of the domains corresponds to simulated images. Such simulated images often work on textures with low spatial variability.
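The subsample idea introduced above can be made concrete with a minimal tile-and-blend sketch (a schematic stand-alone illustration in plain Python on a single-channel image stored as a list of rows; `translate` stands for any trained generator, and the tile size and overlap are free parameters chosen here only for demonstration):

```python
def tile_starts(length, tile, overlap):
    """Start indices of overlapping 1-D tiles covering [0, length);
       assumes overlap < tile <= length."""
    step = tile - overlap
    starts = list(range(0, length - tile + 1, step))
    if starts[-1] + tile < length:      # make sure the border is covered
        starts.append(length - tile)
    return starts

def translate_tiled(img, tile, overlap, translate):
    """Translate a large image tile-by-tile; overlapping outputs are averaged,
       so peak memory is set by the tile size, not by the image size."""
    h, w = len(img), len(img[0])
    acc = [[0.0] * w for _ in range(h)]
    cnt = [[0] * w for _ in range(h)]
    for ys in tile_starts(h, tile, overlap):
        for xs in tile_starts(w, tile, overlap):
            patch = [row[xs:xs + tile] for row in img[ys:ys + tile]]
            res = translate(patch)      # the generator only ever sees one tile
            for dy in range(tile):
                for dx in range(tile):
                    acc[ys + dy][xs + dx] += res[dy][dx]
                    cnt[ys + dy][xs + dx] += 1
    return [[acc[y][x] / cnt[y][x] for x in range(w)] for y in range(h)]
```

As noted above, simulated domains in particular often have low spatial variability, so a small number of tiles can already be representative of the whole style.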
As a consequence, different images in the domain change their content but hardly vary in their appearance. This question is closely related to our goal of developing a high-resolution style transfer, in the sense that one large image with all its details may contain a variance that is similar to that of a big data set of small images. This reflects the fact that the actually relevant measure of the data set used for training is less its size in terms of megapixels than its information content in the sense of the Shannon entropy. Our paper is organized as follows: In Sec.~\ref{sec:related-work}, we provide a brief overview of some selected style transfer algorithms and discuss some of their advantages and shortcomings. In Sec.~\ref{sec:method}, we introduce our method and explain how we work on the level of image subsamples. Results on high-resolution images are presented in Sec.~\ref{sec:results} with examples covering the range from similar to different domains. We demonstrate that our method works well even for ``single-shot'' translations, where the target style is defined by only \emph{one} image, and we present results obtained from images with up to more than 50 megapixels resolution. After concluding in Sec.~\ref{sec:conclusion}, we provide additional information such as the NN's architectural design and image details in the appendix. \section{Related Work} \label{sec:related-work} \begin{figure}[t] \includegraphics[width=\columnwidth]{fig1.pdf} \caption{% Observed peak memory consumption of the \textsc{Unit} framework in the evaluation phase when translating an image of a certain overall number of pixels. The memory consumption grows roughly linearly with the number of pixels from a certain threshold. In addition, we visualize the hardware limits of two GPUs [4GB of an Nvidia Quadro M2000 (green) and 11GB of an Nvidia GTX 1080Ti (blue)] as vertical lines.
} \label{fig:memory_consumption} \end{figure} As mentioned in the introduction, there are several approaches in the literature to perform domain translation on images. Each of them has its advantages and shortcomings, and we provide a brief overview of a selection of methods in this section. (We refer the reader to the respective publications and references therein for details.) One of the very early approaches to unpaired style transfer is the method of Gatys~\textit{et~al.}~\cite{gatys2016}. Their approach is based on a single, pretrained multi-layer CNN which takes the image to be transferred as input. Going deeper and deeper into this CNN, the filters in each layer are activated by different image properties, such as colors, structures, and local and global features. Based on the idea that images with a similar style should lead to similar activation patterns across the layers of the CNN, they proceed as follows: Given a target style image and its corresponding activations, the layer activations are also determined for the image to be translated. From the differences in the activations, they determine the gradient with respect to the input image and apply this information to change the image. In this way, the input image comes to resemble the desired target more and more. An advantage is that this method yields high-quality images and that -- by setting different gradient weights with respect to different layer depths of the CNN -- the style can be adjusted to cover more local or global features. A drawback of this method, however, is that it requires an optimization procedure for each image transformation, which makes this procedure computationally very expensive. A second approach to style transfer of images is the \textsc{Pix2Pix} framework by Isola~\textit{et~al.}~\cite{pix2pix}. Their method is based on a NN in an encoder-decoder configuration whose latent space covers the relevant features of the images to be translated.
Compared to the method of Gatys~\textit{et~al.}, the advantage of this approach is that, once the network is trained, it can be applied directly to new images without additional optimization steps. This makes the evaluation phase significantly less expensive concerning the computational requirements. However, a shortcoming of this method is that for the network training, \emph{paired} images of both domains $\mathcal{A}$ and $\mathcal{B}$ are required, which are often not available in real-world applications. The much more challenging problem of domain translation in \emph{unpaired} image settings has been addressed by the \textsc{CycleGAN} \cite{CycleGAN} and \textsc{Unit} \cite{UNIT} frameworks. Both of them are based on extended GAN settings and apply the crucial requirement of cycle consistency for training. In the unpaired setting, a direct transformation from domain $\mathcal{A}$ to domain $\mathcal{B}$ is not possible, since for an image in $\mathcal{A}$ there is no counterpart in $\mathcal{B}$. Instead, the transformations $\mathcal{A}\to\mathcal{B}\to\mathcal{A}$ and $\mathcal{B}\to\mathcal{A}\to\mathcal{B}$ are performed with the goal to reproduce the respective original images. In both these frameworks, each of the translations $\mathcal{A}\to\mathcal{B}$ and $\mathcal{B}\to\mathcal{A}$ is applied by a generator made of a deep CNN in encoder-decoder arrangement, and there are two separate discriminators for the domains $\mathcal{A}$ and $\mathcal{B}$ which distinguish real and fake images. The most important difference between the two frameworks is that the generators in \textsc{CycleGAN} are completely independent of each other, while they share part of their latent space in \textsc{Unit}. Both these frameworks have been shown to yield very good results. A challenge of these frameworks, however, is the huge overall network size, which consists of four different CNNs (two generators and two discriminators; see Tab.~\ref{tab:net-config} in the appendix).
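The cycle-consistency requirement described above can be illustrated with a minimal, framework-independent sketch. The toy generators below are hypothetical stand-ins for the CNN generators; only the structure of the loss (comparing an image to its $\mathcal{A}\to\mathcal{B}\to\mathcal{A}$ round trip) follows the text:

```python
import numpy as np

def cycle_consistency_loss(x_a, g_ab, g_ba):
    # Translate A -> B -> A and compare the round trip to the original
    # image with an L1 (mean absolute error) criterion.
    x_aba = g_ba(g_ab(x_a))
    return np.mean(np.abs(x_a - x_aba))

# Toy "generators": simple invertible pixel transforms standing in
# for the encoder-decoder CNNs of CycleGAN/UNIT.
g_ab = lambda x: 2.0 * x + 1.0        # A -> B
g_ba = lambda x: (x - 1.0) / 2.0      # B -> A (exact inverse here)

x_a = np.random.rand(4, 64, 64, 3)    # a small batch of domain-A images
loss = cycle_consistency_loss(x_a, g_ab, g_ba)
print(loss)                            # close to 0 for a consistent pair
```

In the actual \textsc{CycleGAN} and \textsc{Unit} trainings, this term is combined with the adversarial losses of the two discriminators.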
This results in a substantial computational effort and long computation times for training and evaluation. In addition, it is required to store all four networks and at least some intermediate network results for training on the GPU, which sets additional hardware requirements. The latter point is directly related to our main goal of processing high-resolution images, so we discuss it in more detail: We have discovered from our investigations of, e.\,g., the \textsc{Unit} framework that storing an image in the MB range on the GPU is, of course, not a problem. However, when the image is propagated as float tensors through the network and intermediate results are stored for loss functions or gradients for backpropagation, the peak memory consumption of the whole network is significantly higher, with an especially large contribution from the deep hidden layers. We illustrate this observation in Fig.~\ref{fig:memory_consumption}, which shows that, beyond a certain threshold, the peak memory consumption increases linearly with the input image's number of pixels. From the vertical lines, which indicate the GPU memory hardware limit of two GPUs, it becomes clear that this framework cannot be executed, e.\,g., on a standard Nvidia Quadro M2000 for images with more than about one megapixel, and the limit on an Nvidia GTX 1080Ti is less than three megapixels. Even if code improvements might reduce the memory consumption, this basic limit to the processable image resolution remains, and even special high-priced compute GPUs which provide more GPU memory can only push this boundary but cannot overcome it. \begin{figure}[t] \includegraphics[width=\columnwidth]{fig2.pdf} \caption{% (a) From the high-resolution image, training samples are extracted from random positions and with random size. The extracted samples are scaled down to a common resolution $x_\text{batch} \times y_\text{batch}$ and combined into a training batch.
(b) With this extracted batch, a training iteration of the GAN setting is performed. For the GAN, known configurations like the \textsc{CycleGAN} \cite{CycleGAN} or \textsc{Unit} \cite{UNIT} frameworks can be used and we use the latter with shared latent space in this paper. (c) To transform the whole high-resolution image after training, single samples are extracted and translated by the generator separately. The translated images are finally merged together. We emphasize that in the translation step, the single samples may overlap and also be of different size. } \label{fig:method} \end{figure} \section{Method} \label{sec:method} \begin{figure*}[p] \includegraphics[width=\textwidth]{fig3a.jpg} \\ \includegraphics[width=\textwidth]{fig3b.jpg} \\ \textcolor{white}{x}\hfill \includegraphics[width=.14\textwidth]{fig3c.jpg} \caption{% High-resolution style transfer of an image of the Swiss Alps (top) towards the style of the Scottish Highlands (bottom). Domain $\mathcal{A}$ is defined by the top image and the target style is defined by the small image on the bottom right. The original image has a resolution of $15,884 \times 3,271$ pixels and the target style image of $5,472 \times 3,648$ pixels. (The resolution has been downscaled for the presentation in this paper.) Image details can be seen in the two top rows of Fig.~\ref{fig:landscape2_details}. } \label{fig:landscape2} \vspace{3em} \includegraphics[width=\textwidth]{fig4a.jpg} \\ \includegraphics[width=\textwidth]{fig4b.jpg} \\ \textcolor{white}{x}\hfill \includegraphics[width=.14\textwidth]{fig4c.jpg} \caption{% High-resolution style transfer of a street scene between very similar-looking domains. The original image (top) and the target style image (bottom right) show the same street, but with images taken from different positions and under different weather and lighting conditions. The transformed image (center) well exhibits the style of the target domain. 
(The original image has a size of $12,895 \times 3,472$ pixels and the target image one of $4,785 \times 3,508$ pixels.) Image details can be seen in Fig.~\ref{fig:landscape2_details}. } \label{fig:street1} \end{figure*} It is the purpose of this paper to introduce an approach by which arbitrarily high-resolution images can be processed on today's standard GPUs. The basic idea is simple and can be stated in one sentence: Instead of processing the whole image at once, perform the network training as well as its evaluation on small subsamples of the image. From a mathematical point of view, the justification of this procedure lies in the analogous functional principle of the CNN's filters, which stride over the input image: With a usual size of three to seven pixels, these filters are very small compared to the actual image and they only see a very small part of it in every step. Our procedure of extracting subsamples can, therefore, be regarded as an abstract intermediate interface to the NN which works on a level between the whole image size and the filter size. In more detail, the procedure is described in the following two subsections and illustrated in Fig.~\ref{fig:method}. \subsection{Training} To train an unpaired domain translation network for high-resolution images, we start from a training set consisting of one or several such images and extract a batch out of the high-resolution image as shown in Fig.~\ref{fig:method}(a): According to the desired batch size $b$, we extract image subsamples of different size out of the original image. Both position and size of each subsample can be chosen arbitrarily, and all extracted samples are then scaled down to a common resolution \begin{equation} x_\text{batch} \times y_\text{batch} \,. \end{equation} Including $n_\text{color}$ color channels, the respective batch tensor then has the size \begin{equation} b \otimes x_\text{batch} \otimes y_\text{batch} \otimes n_\text{color} \,.
\end{equation} With this tensor, we perform a single training iteration of the underlying unpaired domain translation network as illustrated in Fig.~\ref{fig:method}(b) (e.\,g.~a \textsc{CycleGAN} \cite{CycleGAN} or \textsc{Unit} \cite{UNIT} framework). This step of sample extraction and update iteration is repeated until the algorithm has converged or another stopping criterion is reached. \subsection{Translation} Analogously to the training procedure, the evaluation phase is performed on the level of small image subsamples. As illustrated in Fig.~\ref{fig:method}(c), in a first step samples are extracted from the image that is to be transformed. Second, each of them is translated to the other domain by the generator separately. Finally, the translated images are merged into the translated high-resolution image. \subsection{Remarks} The mechanism to train and evaluate NNs for the domain translation task described above leaves some freedom in the actual application. We therefore want to extend this scheme by some remarks: First, for high-resolution images of pixel size $x_\text{full} \times y_\text{full}$ and extracted small batches of size $x_\text{batch} \times y_\text{batch}$, the number of possible, different image subsamples is \begin{equation} (x_\text{full} - x_\text{batch}) \times (y_\text{full} - y_\text{batch}) \gg 1 \,. \label{eq:number_of_possibilities} \end{equation} Taking as an example an image of size $5000 \times 3000$ pixels and working on subsamples of size $128 \times 128$, the value on the left-hand side of Eq.~\eqref{eq:number_of_possibilities} is almost 14 million. Further taking into account the different sizes of the extracted samples and possible horizontal or vertical image flips, this number becomes even larger. This makes clear that, by this procedure, one single high-resolution image can effectively act as a huge training set.
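To make this count concrete, the following short sketch (plain arithmetic, using only the numbers quoted above) evaluates Eq.~\eqref{eq:number_of_possibilities} for the example image:

```python
def num_subsample_positions(x_full, y_full, x_batch, y_batch):
    # Number of distinct top-left corner positions of an
    # x_batch x y_batch crop inside an x_full x y_full image,
    # as counted by the left-hand side of Eq. (3).
    return (x_full - x_batch) * (y_full - y_batch)

# Example from the text: a 5000 x 3000 image with 128 x 128 subsamples.
n = num_subsample_positions(5000, 3000, 128, 128)
print(n)  # 13992384, i.e. almost 14 million distinct crops
```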
Using random positions and sizes, it is also very unlikely that the NN sees exactly the same image more than once during the whole training process, which prevents overfitting. Second, in the extraction phase during training, we use random positions over the whole image and random sizes in the range between the small batch ($x_\text{batch} \times y_\text{batch}$) and the whole image ($x_\text{full} \times y_\text{full}$). This corresponds to different zoom levels of the image; hence, we cover different length scales of the image and help the generator to learn both global and local image properties. The different zoom levels of the extracted images can also be interpreted as effectively changing the distance from camera to object, which helps the generator to generalize better along the optical axis of the camera in addition to the axes perpendicular to it. Third, an advantage of only processing small subsamples is that the peak memory consumption during the training and evaluation phase is set by the subsample size $x_\text{batch} \times y_\text{batch}$, but not by the size $x_\text{full} \times y_\text{full}$ of the full high-resolution image. Larger images ``only'' lead to larger computation times during the evaluation phase, because more subsamples need to be processed, but they do not increase the GPU memory requirements. Nevertheless, it is, of course, possible to fully parallelize the processing of the image subsamples if more computational resources are available. Each training batch can be extracted from only one or, of course, also from several independent images. In the translation phase, we use overlapping subsamples and average the color value of each pixel over the different samples. Our experiments have shown that this improves image quality by reducing noise and preventing neighboring samples from showing discontinuities of objects at their borders.
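The merging step just described can be sketched in a few lines. This is a minimal NumPy version in which the tile contents and positions are illustrative placeholders for translated generator outputs:

```python
import numpy as np

def merge_overlapping_tiles(tiles, positions, out_shape):
    # Merge translated tiles into one image, averaging the color value
    # of every pixel over all tiles that cover it.
    acc = np.zeros(out_shape, dtype=np.float64)
    cnt = np.zeros(out_shape[:2] + (1,), dtype=np.float64)
    for tile, (y, x) in zip(tiles, positions):
        h, w = tile.shape[:2]
        acc[y:y + h, x:x + w] += tile
        cnt[y:y + h, x:x + w] += 1.0
    return acc / np.maximum(cnt, 1.0)  # avoid division by zero where uncovered

# Two overlapping 4x4 "translated" tiles on a 4x6 canvas.
t0 = np.full((4, 4, 3), 0.2)
t1 = np.full((4, 4, 3), 0.6)
out = merge_overlapping_tiles([t0, t1], [(0, 0), (0, 2)], (4, 6, 3))
print(out[0, 3, 0])  # overlap pixel: averages 0.2 and 0.6 to (about) 0.4
```

In the actual method the tiles are the generator outputs for the extracted subsamples; the averaging suppresses noise and seams between neighboring samples.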
Let us finally remark that, due to the evaluation on the level of subsamples, the image sizes in the two domains can be chosen independently. It is also possible to train models on different image sizes than those on which they are evaluated later. Of course, a trained generator can be applied to further images. \section{Results and Discussion} \label{sec:results} \begin{figure*}[t] \includegraphics[width=.499\textwidth]{fig5a.jpg} \includegraphics[width=.499\textwidth]{fig5b.jpg} \\ \textcolor{white}{x}\hfill \includegraphics[width=.15\textwidth]{fig5c.jpg} \caption{% Domain translation of a high-resolution street scene from sunny to darker lighting conditions. In this case, original and target image are of the same size of $5,472 \times 3,648$ pixels. Image details can be seen in Fig.~\ref{fig:landscape2_details}. } \label{fig:street2} \end{figure*} In this section, we present results of the domain translation obtained with different high-resolution image sizes and styles. In all cases, the original and transformed images are too large to be properly presented in a pdf document, so we have downscaled them to a reasonable file size. To provide a detailed view of some images, we present characteristic image details in the appendix. Concerning the style of the images, we present different domain adaptation ranges, meaning that in some images the domains $\mathcal{A}$ and $\mathcal{B}$ are very different and in others they are very similar. All domain translations in this paper have been performed in a low-variance ``one-shot'' setting, in which both domains are defined by only one high-resolution image. All images in this paper have been translated by the procedure described in Sec.~\ref{sec:method} with a subsample size of \mbox{$128 \times 128$} pixels. Moreover, a \textsc{Unit}-like GAN setting \cite{UNIT} with shared latent space and a configuration as listed in Tab.~\ref{tab:net-config} in the appendix has been used.
In order to underline the small memory consumption of the procedure, we emphasize that we have trained and evaluated all the NNs for the images shown in this paper on a ``small'' standard desktop GPU (Nvidia Quadro M2000) with only 4GB of GPU memory. As a first example to demonstrate the performance of the presented method, we show in Fig.~\ref{fig:landscape2} the domain translation of a panorama image taken in the Swiss Alps towards the style of the Scottish Highlands, which are very different-looking image domains. The original image has a resolution of \mbox{$15,884 \times 3,271$} pixels (more than 50 megapixels altogether) and the target style is defined by an image with \mbox{$5,472 \times 3,648$} pixels (about 20 megapixels). We see that the target style is well adopted by the translation on both local and global length scales. The clear blue sky is transformed throughout into a cloudy one, and grass as well as rocks receive the style of the brownish Scottish Highland landscape. Interestingly, only some snow fields are interpreted as water while others are translated to brown earth, which nicely demonstrates that the NN does not merely repaint the different areas but takes their surroundings and meaning into account. Also, the image details are very well preserved, as we show by the subsamples presented in the appendix (see Fig.~\ref{fig:landscape2_details}). Even single trees, small paths, and blades of grass keep their structure. A second panorama image style transfer is presented in Fig.~\ref{fig:street1}, showing a street scene. With this example, we address the situation of very similar domains $\mathcal{A}$ and $\mathcal{B}$. Both the original as well as the target image show the same street, but the images have been taken from different positions and under different weather and lighting conditions.
The original image is of size \mbox{$12,895 \times 3,472$} pixels (about 45 megapixels) and the target image has a resolution of \mbox{$4,785 \times 3,508$} pixels. Also in this case of similar styles, the procedure performs well, and it preserves both local and global image information. Image details are, again, provided in Fig.~\ref{fig:landscape2_details} in the appendix. Figure~\ref{fig:street2} shows a street scene as well, but with more complex image content. Beyond the street, grass and trees, this image also contains a sidewalk, more complex road markings, a car, traffic lights and signs, and some houses in the background. Here, both the original and the target image have the same resolution of \mbox{$5,472 \times 3,648$} pixels (about 20 megapixels). Again, we focus on the case of rather similar domains and take as target an image showing the same street, but photographed shortly before sunset and from another position, whereas the source was shot in bright daylight. Again, the image is well transformed to the target style and contains all details which define the scene. (See Fig.~\ref{fig:landscape2_details} in the appendix for image details.) In Fig.~\ref{fig:street5}, we demonstrate once more that our method is capable of dealing with larger domain differences. To this end, we show the cross-translation of a street and a dirt road, both images having a size of \mbox{$5,472 \times 3,648$} pixels (about 20 megapixels). The top row of this figure shows the original photographs and the bottom row shows the translated images in the domain that is defined by the respective other photograph. The generator has learned to transform the asphalt street into a dirty and stony surface, while the grass and the sky keep their structure and are only translated in color. (See Fig.~\ref{fig:landscape2_details} in the appendix for image details.)
\begin{figure*}[p] \includegraphics[width=.49\textwidth]{fig6a.jpg} \hfill \includegraphics[width=.49\textwidth]{fig6b.jpg} \\[1ex] \includegraphics[width=.49\textwidth]{fig6c.jpg} \hfill \includegraphics[width=.49\textwidth]{fig6d.jpg} \caption{% Cross-domain translation between a high-resolution street image and one of a dirt road. Both images are of the same size of $5,472 \times 3,648$ pixels and the target domain is the respective other scene. Image details can be seen in the two center rows of Fig.~\ref{fig:landscape2_details}. } \label{fig:street5} \vspace{3em} \includegraphics[width=.47\columnwidth]{fig7a.jpg} \includegraphics[width=.47\columnwidth]{fig7b.jpg} \hfill \includegraphics[width=.47\columnwidth]{fig7c.jpg} \includegraphics[width=.47\columnwidth]{fig7d.jpg} \\ \includegraphics[width=.47\columnwidth]{fig7e.jpg} \includegraphics[width=.47\columnwidth]{fig7f.jpg} \hfill \includegraphics[width=.47\columnwidth]{fig7g.jpg} \includegraphics[width=.47\columnwidth]{fig7h.jpg} \caption{% High-resolution domain transfer of traffic signs: The four images on the left belong to one translation process, and the four on the right to another. The top row shows the original images and the bottom row the translated ones. In this figure, we show a cross-translation, i.\,e.~the images are pairwise translated to the domain which is defined by the other image. (The translations have been performed on images with resolution $2,448 \times 2,448$ pixels.) } \label{fig:trafficsign} \end{figure*} We conclude the results by focusing on cross-translations of traffic signs in Fig.~\ref{fig:trafficsign}. Here, we show two groups of traffic sign images, on the left and on the right, respectively. The top row shows the original images and the bottom row the translated ones.
Here, the images in the top row are assigned pairwise to the domains $\mathcal{A}$ and $\mathcal{B}$, and in the bottom row, the same traffic signs are shown in the domain defined by the respective other image. Each of the translations has been performed on images of size \mbox{$2,448 \times 2,448$} pixels (about 6 megapixels). In all the cases, the structure of the images and the content is well translated. The only exception is that the colors yellow and green of the traffic light symbol are not preserved (left column). However, this is to be expected, since the color yellow does not occur in the target domain (second column). From this perspective, this observation is in agreement with the goal of reaching the target domain, but it also makes clear how important the choice of the target data set is for the translation task. We note that, despite the huge resolution that has finally been processed, the training time of the whole GAN setting was on the order of only one day even on the small Nvidia Quadro GPU. One of our general observations is also that using smaller image subsamples helps to improve the \emph{local} microstructure in the translated image while larger subsamples rather keep \emph{global} information. Finally, we want to note that we have verified the generalization of our procedure by training the NN only on a smaller part of the high-resolution image, but finally transforming the whole one. By this procedure, there were areas in the big image which the NN had never seen during training. In all our test cases, the procedure generalized very well with no visible artifacts or mis-translations as long as no crucial image content had been cut off for the training process. \section{Conclusion} \label{sec:conclusion} In this paper, we have introduced a method which makes it possible to perform unpaired domain translation on high-resolution images.
It is based on the idea of not processing the whole image at once but applying training and evaluation to subsamples of random size (and/or position) which are downscaled to a fixed small image size. With this method, we were able to create high-quality domain translations which fulfill micro- and macro-consistency and preserve image details well. We performed training and evaluation of the GAN setting on a ``small'' standard desktop GPU, which underlines that high-resolution domain transfer does not require large and expensive GPU hardware or clusters. We have applied the method to various domain translations covering the range from very similar to very different domains. We see potential applications of this procedure especially in the case of high-quality and high-resolution (test) data generation for different use cases such as autonomous driving. We generally propose to apply the idea of processing data with NNs not as a whole but on (random) overlapping subsamples also to other kinds of data types and other application fields. For example, one might use similar approaches in the field of 3-dimensional objects or meshes \cite{3dgan,jiang2017,chang2015}, where parts of the object could be processed separately. Another field can be graph data \cite{dai2016,dai2017,hamilton2017,grover2016}, where one could work on single subgraphs instead of the whole graph. \section*{Appendix} \subsection*{Neural Network Architecture} For the domain translations performed in this paper, a \textsc{Unit}-like GAN architecture \cite{UNIT} with shared latent space has been used, with the generators and discriminators set up as listed in Tab.~\ref{tab:net-config}. \begin{table}[h!]
\centering \small \begin{tabular}{lrrr} \toprule \textbf{Generator} & Filter size (num), Norm, Activ.\\ \midrule Down-Convolution & $7\times7$ (64), --, LeakyReLU \\ Down-Convolution & $3\times3$ (128), --, LeakyReLU \\ Down-Convolution & $3\times3$ (256), --, LeakyReLU \\ Down-Convolution & $3\times3$ (512), --, LeakyReLU \\ Residual ($3\times$) & $3\times3$ (512), Inst.-Norm, ReLU \\ Residual ($2\times$, shared) & $3\times3$ (512), Inst.-Norm, ReLU \\ Residual ($3\times$) & $3\times3$ (512), Inst.-Norm, ReLU \\ Up-Convolution & $3\times3$ (256), --, LeakyReLU \\ Up-Convolution & $3\times3$ (128), --, LeakyReLU \\ Up-Convolution & $3\times3$ (64), --, LeakyReLU \\ Up-Convolution & $3\times3$ (3), --, Tanh\\ \bottomrule \\ \toprule \textbf{Discriminator} & Filter size (num), Norm, Activ.\\ \midrule Down-Convolution & $3\times3$ (64), --, LeakyReLU \\ Down-Convolution & $3\times3$ (128), --, LeakyReLU \\ Down-Convolution & $3\times3$ (256), --, LeakyReLU \\ Down-Convolution & $3\times3$ (512), --, LeakyReLU \\ Down-Convolution & $1\times1$ (1), --, LeakyReLU \\ \bottomrule \end{tabular} \caption{% Network architecture of the GAN's generator and discriminator used for the images presented in this paper. } \label{tab:net-config} \end{table} \subsection*{Image details} Table~\ref{tab:resolutions} summarizes the image geometries and resolutions of the images presented in this paper. Further image details are presented in Fig.~\ref{fig:landscape2_details}. \begin{table}[h!] \centering \small \begin{tabular}{rrrr} \toprule Figure & Width [px] & Height [px] & Total [Mio. px] \\ \midrule 3 & 15,884 & 3,271 & 51.96\\ 4 & 12,895 & 3,472 & 44.77\\ 5 & 5,472 & 3,648 & 19.96\\ 6 & 5,472 & 3,648 & 19.96\\ 7 & 2,448 & 2,448 & 5.99\\ \bottomrule \end{tabular} \caption{% Resolution of the original and transformed images in Figs.~\ref{fig:landscape2}--\ref{fig:trafficsign} and the corresponding total number of pixels. 
} \label{tab:resolutions} \end{table} \begin{figure*}[p] Image details of Fig.~\ref{fig:landscape2}\\ \includegraphics[width=.12\textwidth]{fig8a1.jpg} \includegraphics[width=.12\textwidth]{fig8a2.jpg} \includegraphics[width=.12\textwidth]{fig8a3.jpg} \includegraphics[width=.12\textwidth]{fig8a4.jpg} \includegraphics[width=.12\textwidth]{fig8a5.jpg} \includegraphics[width=.12\textwidth]{fig8a6.jpg} \includegraphics[width=.12\textwidth]{fig8a7.jpg} \includegraphics[width=.12\textwidth]{fig8a8.jpg} \\ \includegraphics[width=.12\textwidth]{fig8b1.jpg} \includegraphics[width=.12\textwidth]{fig8b2.jpg} \includegraphics[width=.12\textwidth]{fig8b3.jpg} \includegraphics[width=.12\textwidth]{fig8b4.jpg} \includegraphics[width=.12\textwidth]{fig8b5.jpg} \includegraphics[width=.12\textwidth]{fig8b6.jpg} \includegraphics[width=.12\textwidth]{fig8b7.jpg} \includegraphics[width=.12\textwidth]{fig8b8.jpg} \\[2em] Image details of Fig.~\ref{fig:street1}\\ \includegraphics[width=.12\textwidth]{fig8c1.jpg} \includegraphics[width=.12\textwidth]{fig8c2.jpg} \includegraphics[width=.12\textwidth]{fig8c3.jpg} \includegraphics[width=.12\textwidth]{fig8c4.jpg} \includegraphics[width=.12\textwidth]{fig8c5.jpg} \includegraphics[width=.12\textwidth]{fig8c6.jpg} \includegraphics[width=.12\textwidth]{fig8c7.jpg} \includegraphics[width=.12\textwidth]{fig8c8.jpg} \\ \includegraphics[width=.12\textwidth]{fig8d1.jpg} \includegraphics[width=.12\textwidth]{fig8d2.jpg} \includegraphics[width=.12\textwidth]{fig8d3.jpg} \includegraphics[width=.12\textwidth]{fig8d4.jpg} \includegraphics[width=.12\textwidth]{fig8d5.jpg} \includegraphics[width=.12\textwidth]{fig8d6.jpg} \includegraphics[width=.12\textwidth]{fig8d7.jpg} \includegraphics[width=.12\textwidth]{fig8d8.jpg} \\[2em] Image details of Fig.~\ref{fig:street2}\\ \includegraphics[width=.12\textwidth]{fig8e1.jpg} \includegraphics[width=.12\textwidth]{fig8e2.jpg} \includegraphics[width=.12\textwidth]{fig8e3.jpg} 
\includegraphics[width=.12\textwidth]{fig8e4.jpg} \includegraphics[width=.12\textwidth]{fig8e5.jpg} \includegraphics[width=.12\textwidth]{fig8e6.jpg} \includegraphics[width=.12\textwidth]{fig8e7.jpg} \includegraphics[width=.12\textwidth]{fig8e8.jpg} \\ \includegraphics[width=.12\textwidth]{fig8f1.jpg} \includegraphics[width=.12\textwidth]{fig8f2.jpg} \includegraphics[width=.12\textwidth]{fig8f3.jpg} \includegraphics[width=.12\textwidth]{fig8f4.jpg} \includegraphics[width=.12\textwidth]{fig8f5.jpg} \includegraphics[width=.12\textwidth]{fig8f6.jpg} \includegraphics[width=.12\textwidth]{fig8f7.jpg} \includegraphics[width=.12\textwidth]{fig8f8.jpg} \\[2em] Image details of Fig.~\ref{fig:street5}\\ \includegraphics[width=.12\textwidth]{fig8g1.jpg} \includegraphics[width=.12\textwidth]{fig8g2.jpg} \includegraphics[width=.12\textwidth]{fig8g3.jpg} \includegraphics[width=.12\textwidth]{fig8g4.jpg} \includegraphics[width=.12\textwidth]{fig8g5.jpg} \includegraphics[width=.12\textwidth]{fig8g6.jpg} \includegraphics[width=.12\textwidth]{fig8g7.jpg} \includegraphics[width=.12\textwidth]{fig8g8.jpg} \\ \includegraphics[width=.12\textwidth]{fig8h1.jpg} \includegraphics[width=.12\textwidth]{fig8h2.jpg} \includegraphics[width=.12\textwidth]{fig8h3.jpg} \includegraphics[width=.12\textwidth]{fig8h4.jpg} \includegraphics[width=.12\textwidth]{fig8h5.jpg} \includegraphics[width=.12\textwidth]{fig8h6.jpg} \includegraphics[width=.12\textwidth]{fig8h7.jpg} \includegraphics[width=.12\textwidth]{fig8h8.jpg} \caption{% Details of the original and transformed images in Figs.~\ref{fig:landscape2}, \ref{fig:street1}, \ref{fig:street2}, and \ref{fig:street5} presented in their full resolution. } \label{fig:landscape2_details} \end{figure*} \footnotesize
\section{Notation and terminology}\label{sec1} We consider a complex $m$-convex Fr\'echet algebra, that is an algebra $X$ over the complex numbers that at the same time is a (locally convex) Fr\'echet space whose topology is induced by an increasing sequence $(\|\cdot\|_q)_{q\geq 1}$ of seminorms that are submultiplicative, i.e., \begin{equation}\label{eq:subm} \|xy\|_q\leq \|x\|_q \|y\|_q \end{equation} for all $x,y\in X$, $q\geq 1$. For brevity we will call $X=(X, (\|\cdot\|_q)_q)$ simply a \textit{Fr\'echet algebra}; see \cite{Fra}. The space of all complex sequences is denoted, as usual, by $$ \omega=\{x=(x_{n})_{n\geq 0}:x_{n}\in\mathbb C, n\in\mathbb N_0\}. $$ We endow $\omega$ with the product topology, that is, the topology of coordinatewise convergence. A \textit{sequence space} is a subspace of $\omega$. As for a multiplicative structure one may endow $\omega$ either with the coordinatewise product of sequences, see Section \ref{pointwise}, or with the Cauchy product of sequences, see Section \ref{cauchy}. A \textit{sequence algebra} is a subalgebra of $\omega$ in either of the two senses. The sequence $e_n$, $n\geq 0$, is defined as $e_n=(0,\ldots,0,1,0,\ldots)$ with 1 at index $n$. Furthermore, we write $e=(1,1,1\ldots)$. We denote by $$ \varphi=\Big\{\sum_{n=0}^{N}x_{n}e_{n}:x_0,\ldots,x_N\in\mathbb{C}, N\in\mathbb{N}_0\Big\} $$ the set of all \textit{finite sequences}. When a sequence space, respectively sequence algebra, $X$ carries the additional structure of a Fr\'echet space, resp. Fr\'echet algebra, such that the canonical embedding into $\omega$ is continuous we speak of a \textit{Fr\'echet sequence space}, resp. \textit{Fr\'echet sequence algebra}. A weighted backward shift on $\omega$ is an operator $B_{w}$ given by $$ B_{w}(x_{0}, x_{1}, x_{2}, \ldots) = (w_{1}x_{1}, w_{2}x_{2}, w_{3}x_{3}, \ldots),\quad x\in\omega, $$ where $w = (w_{n})_{n\geq 0}$ is a sequence of non-zero complex numbers, called a \textit{weight sequence}. 
The unweighted shift is denoted by $B=B_e$. The forward shift associated to a weight $w$ is the operator given by $$ F_{w}(x_{0}, x_{1}, x_{2}, \ldots) = (0, w_{1}x_{0}, w_{2}x_{1}, w_{3}x_{2}, \ldots),\quad x\in\omega. $$ Naturally we have that $B_{w}F_{w^{-1}}=I$, where $I$ is the identity map on $\omega$ and $w^{-1}=(w_n^{-1})_n$. Since the element $w_{0}$ is not relevant for the definition of the operators $B_{w}$ and $F_{w}$ we will assume that $w_{0}=1$ for any weight $w$. Throughout the paper we will write, for a given weight $w=(w_n)_n$, \[ v_{n}=\prod_{k=0}^{n}w_{k}, \quad n\geq 0. \] Note that the closed graph theorem implies that as soon as $B_w$ or $F_w$ maps a Fr\'echet sequence space $X$ into itself then it defines a (continuous, linear) operator on $X$. Apart from the notion of hypercyclicity we will need the stronger property of mixing. An operator $T$ on a separable Fr\'echet space $X$ is called \textit{mixing} if, for any non-empty open subsets $U,V$ of $X$, the set $\{n\geq 0: T^n(U)\cap V\neq \varnothing\}$ is co-finite. For monographs on linear dynamics we refer to \cite{EtMAt} and \cite{GrPe11}. \section{Fr\'echet sequence algebras under coordinatewise multiplication} \label{pointwise} In this section we study algebrability of the set of hypercyclic vectors by considering dynamical systems where the underlying space $X$ is a Fr\'echet sequence algebra and the multiplicative structure is the coordinatewise multiplication of sequences. So given two sequences $x=(x_n)_{n}$ and $y=(y_n)_{n}$ in $X$ we define $xy = (x_ny_n)_{n}$. We will assume that the weighted backward shift $B_w$ is an operator on~$X$. To start, it will be useful to consider the following variant of the well-known characterization of hypercyclicity of weighted backward shifts, see \cite[Theorem 4.8]{GrPe11}. \begin{proposition} \label{prop:increasingsequence} Let $X$ be a Fr\'echet sequence space in which $(e_{n})_{n}$ is a basis. 
Suppose that the weighted backward shift $B_{w}$ is an operator on $X$. Then $B_{w}$ is hypercyclic if and only if there exists an increasing sequence $(p_{k})_{k}$ of natural numbers such that \begin{equation}\label{eq1} \text{for each $n\geq 0$,}\quad v_{p_{k}+n}^{-1}e_{p_{k}+n}\to 0 \end{equation} in $X$ as $k\to \infty$. \end{proposition} \begin{proof} We have that $B_w$ is hypercyclic if and only if there exists an increasing sequence $(m_{k})_{k}$ of natural numbers such that \begin{equation}\label{eq2} v_{m_{k}}^{-1}e_{m_{k}}\to 0 \end{equation} in $X$ as $k\to \infty$, see \cite[Theorem 4.8]{GrPe11}. Thus condition \eqref{eq1} is sufficient. For the necessity, suppose that \eqref{eq2} holds. It follows from continuity of $B_w$ and the fact that $B_we_k= w_ke_{k-1}$, $k\geq 1$, that \[ v_{m_{k}-n}^{-1}e_{m_{k}-n}= B_w^n v_{m_{k}}^{-1}e_{m_{k}} \to 0 \] as $k\to\infty$, for any $n\geq 0$. By \cite[Lemma 4.2]{GrPe11} there is an increasing sequence $(p_k)_k$ of natural numbers such that \[ v_{p_{k}+n}^{-1}e_{p_{k}+n} \to 0 \] as $k\to\infty$, for any $n\geq 0$, which had to be shown. \end{proof} \begin{definition}\label{def:propertyCS} Let $(X, (\|\cdot\|_q)_q)$ be a Fr\'echet sequence space that contains the finite sequences. We say that $(e_n)_n$ has \textit{Property A} if, for any $r\geq 1$, there is some $q\geq 1$ and some $C>0$ such that, for all $n\geq 0$, \begin{equation}\label{eq3} \|e_n\|_r^2\leq C \|e_n\|_q. \end{equation} \end{definition} This is less of a restriction than it might at first appear. \begin{example} \label{propA} (a) If $(e_n)_n$ is bounded in the space $X$ then it has Property A; simply consider $q=r$. In particular, the classical sequence spaces $\ell^p$, $1\leq p<\infty$, and $c_0$ are Banach sequence algebras under their usual norms and coordinatewise multiplication (\eqref{eq:subm} is easily verified) for which their bases $(e_n)_n$ have Property A. 
(b) The space $H(\mathbb{C})$ of entire functions can be considered as a sequence space via Taylor coefficients at 0. Its natural topology of uniform convergence on compact sets can be induced by the seminorms \[ \|(a_n)_{n\geq 0}\|_q = \sum_{n=0}^\infty |a_n|q^n,\quad q\geq 1. \] This turns $H(\mathbb{C})$ into a Fr\'echet sequence algebra under coordinatewise multiplication of the sequences (\eqref{eq:subm} is easily verified). Moreover, its basis $(e_n)_{n}$ has Property A since $\|e_n\|_r^2= \|e_n\|_{r^2}$ for $n\geq 0$. (c) The product topology of the space $\omega$ of all sequences is generated by the increasing sequence of seminorms $$ \|x\|_q = \sup_{0\leq n\leq q} \vert x_{n}\vert,\quad q\geq 1. $$ Then $\omega$ is a Fr\'echet sequence algebra under coordinatewise multiplication whose basis $(e_n)_{n}$ has Property A. \end{example} We will need the following improvement of Property A. \begin{lemma}\label{lem:propertyCS} Let $(X, (\|\cdot\|_q)_q)$ be a Fr\'echet sequence space for which $(e_n)_n$ has Property A. Then, for any $m\geq 1$ and $r\geq 1$, there is some $q\geq 1$ and some $C>0$ such that, for all $n\geq 0$, \begin{equation}\label{eq3b} \|e_n\|_r^m\leq C \|e_n\|_q. \end{equation} \end{lemma} \begin{proof} It follows in view of the definition of Property A that the result holds for $m=2^N$, $N\geq 0$. If $2^N\leq m< 2^{N+1}$, then the result follows from the fact that $\|e_n\|_r^m\leq \max\{\|e_n\|_r^{2^N}, \|e_n\|_r^{2^{N+1}}\}$. \end{proof} As an application we obtain an improvement of \eqref{eq1} under Property A. \begin{lemma}\label{coro:increasingsequenceroots} Let $(X, (\|\cdot\|_q)_q)$ be a Fr\'echet sequence space for which $(e_n)_n$ has \textit{Property A}. Let $w=(w_n)_n$ be a weight. 
If $(p_{k})_{k}$ is an increasing sequence of natural numbers that satisfies \eqref{eq1} then, for any $m\geq 1$, $n\geq 0$, \begin{equation*} v_{p_{k}+n}^{-\frac{1}{m}}e_{p_{k}+n}\to 0 \end{equation*} in $X$ as $k\to \infty$, where $v_{n}^{-\frac{1}{m}}$ is any $m$-th root of $v_{n}^{-1}$ in $\mathbb C$. \end{lemma} \begin{proof} Let $m\geq 1$ and $r\geq 1$. Then there are $q\geq 1$ and $C>0$ such that \eqref{eq3b} holds for all $n\geq 0$. Hence we have, for any $n\geq 0$, \begin{align*} \|v_{p_{k}+n}^{-\frac{1}{m}}e_{p_{k}+n}\|_r&=|v_{p_{k}+n}|^{-\frac{1}{m}}\|e_{p_{k}+n}\|_r\\ &=\big(|v_{p_{k}+n}|^{-1}\|e_{p_{k}+n}\|_r^m\big)^{\frac{1}{m}}\\ &\leq\big(|v_{p_{k}+n}|^{-1}\ C \|e_{p_{k}+n}\|_q\big)^{\frac{1}{m}}\quad(\text{by \eqref{eq3b}})\\ &=\big(C\|v_{p_{k}+n}^{-1}e_{p_{k}+n}\|_q\big)^{\frac{1}{m}}\to 0\quad(\text{by \eqref{eq1}}) \end{align*} as $k\to \infty$. \end{proof} Now we present our first result on the existence of algebras of hypercyclic vectors for weighted backward shifts on Fr\'echet sequence algebras. \begin{theorem}\label{thrm:algebra1} Let $(X, (\|\cdot\|_q)_q)$ be a Fr\'echet sequence algebra under coordinatewise multiplication in which $(e_{n})_{n}$ is a basis with Property A. Let $B_{w}$ be a hypercyclic weighted backward shift on $X$. If there exists an increasing sequence $(p_k)_k$ of natural numbers satisfying \eqref{eq1} such that \[ \text{for any $n\geq 0$,}\quad\prod_{\nu=0}^{p_k+n}w_{\nu}^{-1}\to 0 \quad\text{as $k\to\infty$}, \] then there exists a point $x\in HC(B_{w})$ such that the algebra generated by $x$, except zero, is contained in $HC(B_{w})$. \end{theorem} \begin{proof} To simplify our notation we will denote by $T$ the weighted backward shift $B_{w}$ on $X$. We will associate each number $r\in\mathbb{N}$ with the $r$-th element of a fixed order in the set $\mathbb{N}\times \mathbb{N}$, and we simply write $r=(m,l)$.
For each natural number $m$ let us fix an $m$-th root of $w_{n}$, $n\geq 0$, which we denote by $w_{n}^{\frac{1}{m}}$; the $j$-th power of the latter number is denoted by $w_{n}^{\frac{j}{m}}$. Note that one has to distinguish, for example, $w_n^\frac{1}{2}$ from $w_n^\frac{2}{4}$. Since $(e_{n})_{n}$ is a basis of $X$, $\varphi$ is dense in $X$. Let $(y^{(l)})_{l\geq 1}\subset \varphi$ be a dense sequence of non-zero points in $X$ such that for each $l_{0}\in\mathbb N$ the element $y^{(l_{0})}$ appears infinitely many times in the sequence $(y^{(l)})_{l}$. Let $s_{l}$ be the largest index of the non-zero coordinates of $y^{(l)}$. As before, for any $m\geq 1$, we fix an $m$-th root of $y^{(l)}_{n}$, $l\geq 1$, $n\geq 0$, written $(y^{(l)}_{n})^{\frac{1}{m}}$, and we denote the $j$-th power of that number by $(y^{(l)}_{n})^{\frac{j}{m}}$. Let $a,j,m,l\geq 1$. We will in the sequel denote by \[ ( S^{a}y^{(l)})^{\frac{j}{m}} \] the $j$-th power of the point $F^{a}_{w^{-\frac{1}{m}}}(y^{(l)})^{\frac{1}{m}}$, where $w^{-\frac{1}{m}}=(1/w_{0}^{\frac{1}{m}}, 1/w_{1}^{\frac{1}{m}},\ldots)$ and $(y^{(l)})^{\frac{1}{m}}= ((y^{(l)}_{0})^{\frac{1}{m}}, (y^{(l)}_{1})^{\frac{1}{m}},\ldots)$. In other words, \begin{equation}\label{eq:0} ( S^{a}y^{(l)})^{\frac{j}{m}} = \sum_{n=0}^{s_{l}} \frac{1}{w_{n+1}^{\frac{j}{m}}\cdots w_{n+a}^{\frac{j}{m}}} (y_n^{(l)})^{\frac{j}{m}}e_{n+a}; \end{equation} in particular, $( S^{a}y^{(l)})^{\frac{m}{m}}=F^{a}_{w^{-1}}y^{(l)}$, so that \begin{equation}\label{eq:00} T^a( S^{a}y^{(l)})^{\frac{m}{m}}=y^{(l)}.
\end{equation} We will now construct an increasing sequence of natural numbers $(a_{r})_{r\geq 1}$ with $a_{r}\in \{p_k: k\geq 1\}$ and such that, if $r=(m,l)\in \mathbb N$, then \begin{enumerate}[label=\textbf{A.\arabic*}] \item \label{condition1} $\| (S^{a_{r}} y^{(l)})^{\frac{1}{m}}\|_r< 2^{-r}$, \end{enumerate} and if $r\geq 2$ then \begin{enumerate}[label=\textbf{A.\arabic*},resume] \item \label{condition2} $\| T^{a_{t}}( S^{a_{r}}y^{(l)})^{\frac{\nu}{m}}\|_r< 2^{-r}$ for $1\leq t<r$ and $1\leq \nu \leq d_{r}$, \item \label{condition3} $a_{r}-a_{r-1}> s_{\widetilde{l}}$ with $r-1=(\widetilde{m},\widetilde{l})$, \end{enumerate} where we set $d_{r}=\max_{(\widetilde{m},\widetilde{l})< r}\widetilde{m}$. Let $(p_{k})_{k\geq 1}$ be an increasing sequence of natural numbers such that \eqref{eq1} holds (which exists by Proposition \ref{prop:increasingsequence}). By Lemma \ref{coro:increasingsequenceroots} we have that, for all $m\geq 1$, $n\geq 0$, \begin{equation}\label{eq:0b} \frac{1}{w_{n+1}^{\frac{1}{m}}\cdots w_{n+p_k}^{\frac{1}{m}}}e_{n+p_{k}}=\frac{w_{0}^{\frac{1}{m}}\cdots w_{n}^{\frac{1}{m}}}{w_{0}^{\frac{1}{m}}\cdots w_{n+p_k}^{\frac{1}{m}}}e_{n+p_{k}} \to 0 \end{equation} as $k\to\infty$. In view of \eqref{eq:0}, there exists $a_{1}\in \{p_k: k\geq 1\}$ that satisfies condition \ref{condition1}. Let us now assume that we have fixed $a_{1},\ldots, a_{r-1}$ $(r\geq 2)$ satisfying conditions \ref{condition1}, \ref{condition2} and \ref{condition3}. Assume $r=(m,l)$. Since multiplication is continuous in $X$, \eqref{eq:0b} implies that, for any $m,j\geq 1$, $n\geq 0$, \[ \frac{1}{w_{n+1}^{\frac{j}{m}}\cdots w_{n+p_k}^{\frac{j}{m}}}e_{n+p_{k}} \to 0 \] as $k\to\infty$. Again by \eqref{eq:0}, and by continuity of $T$ at $0$, there is then some $a_r\in \{p_k: k\geq 1\}$, $a_r>a_{r-1}$, such that \ref{condition1}, \ref{condition2} and \ref{condition3} hold, and the induction process is completed. 
In order to produce a hypercyclic algebra, we define \begin{equation}\label{eq:point} x=\sum_{m=1}^{\infty} \sum_{l=1}^{\infty} ( S^{a_{(m,l)}}y^{(l)})^{\frac{1}{m}}=\sum_{r=1}^{\infty}(S^{a_{r}}y^{(l)})^{\frac{1}{m}}. \end{equation} By property \ref{condition1}, the series \eqref{eq:point} is convergent in $X$, so that $x\in X$. We first show that, for any $j\geq 1$, the $j$-th power of the point $x$ is hypercyclic for $T$. Fix a natural number $l_{0}$. Let us consider the number $t=(j,l_{0})$. Then \begin{align}\label{eq:pointb} \begin{split} T^{a_{t}}x^{j}&=T^{a_{t}}\Big(\sum_{r=1}^{\infty} (S^{a_{r}}y^{(l)})^{\frac{1}{m}} \Big)^{j}\\ &=T^{a_{t}}\sum_{r=t}^{\infty} (S^{a_{r}}y^{(l)})^{\frac{j}{m}} \ \ \ (\text{by property \ref{condition3}})\\ &=T^{a_{t}}( S^{a_{t}}y^{(l_{0})})^{\frac{j}{j}}+\sum_{r=t+1}^{\infty} T^{a_{t}}(S^{a_{r}}y^{(l)})^{\frac{j}{m}} \\ &=y^{(l_{0})} +\sum_{r=t+1}^{\infty} T^{a_{t}}(S^{a_{r}}y^{(l)})^{\frac{j}{m}}.\ \ \ (\text{by } \eqref{eq:00}) \end{split} \end{align} Note that since $t=(j,l_{0})$ then $j\leq d_{r}$ for all $r>t$. Therefore, by property \ref{condition2}, \begin{equation} \label{equation:2minusk} \| T^{a_{t}}x^j - y^{(l_{0})}\|_t\leq \sum_{r=t+1}^{\infty} \| T^{a_{t}}(S^{a_{r}}y^{(l)})^{\frac{j}{m}}\|_r < 2^{-t}. \end{equation} Since the sequence $(y^{(l)})_{l}$ is dense in $X$ we have that $x^{j}\in HC(T)$ for any $j\geq 1$. To conclude, we show that any point $z\in X$ of the form \[ z=\sum_{\nu=j}^{N}c_{\nu}x^{\nu} \] with $j\geq 1$ and $c_{j},\ldots,c_{N}\in \mathbb C$, $c_{j}\ne 0$, is hypercyclic for $T$. Since non-zero multiples of hypercyclic vectors are hypercyclic, we may assume that $c_{j}=1$, whence \begin{equation}\label{eq:1} T^{a_t}z=T^{a_t}x^j+\sum_{\nu=j+1}^{N}c_{\nu}T^{a_t}x^{\nu},\quad t\geq 1. \end{equation} Let us fix a natural number $l_{0}$. 
Since the element $y^{(l_{0})}$ is repeated infinitely many times in the sequence $(y^{(l)})_{l}$ there exists an increasing sequence of natural numbers $(l_{i})_{i}$ with $y^{(l_{i})}=y^{(l_{0})}$ for all $i\geq 1$. By \eqref{equation:2minusk} we have for each $t=(j,l_{i})\in\mathbb N$, $i\geq 1$, \begin{equation} \label{eqtozeroj} \|T^{a_{t}}x^{j} - y^{(l_{0})}\|_t < 2^{-t}. \end{equation} Also, we have for any $\nu\geq 1$ \begin{equation}\label{eq:2} T^{a_{t}}x^{\nu} =T^{a_{t}}( S^{a_{t}}y^{(l_{0})})^{\frac{\nu}{j}}+\sum_{r=t+1}^{\infty} T^{a_{t}}(S^{a_{r}}y^{(l)})^{\frac{\nu}{m}}, \end{equation} see \eqref{eq:pointb}. For the first term, we obtain from \eqref{eq:0} that \[ T^{a_{t}}( S^{a_{t}}y^{(l_{0})})^{\frac{\nu}{j}} = \sum_{n=0}^{s_{l_0}} \frac{w_{n+1}\cdots w_{n+a_t}}{w_{n+1}^{\frac{\nu}{j}}\cdots w_{n+a_t}^{\frac{\nu}{j}}} (y_n^{(l_0)})^{\frac{\nu}{j}}e_n. \] By hypothesis we have that $|v_{n+p_k}|\to\infty$ as $k\to\infty$, for all $n\geq 0$. Since, by construction, $(a_t)_t$ is a subsequence of $(p_k)_k$ we have for any $n\geq 0$ and $\nu>j$, \[ \Big|\frac{w_{n+1}\cdots w_{n+a_t}}{w_{n+1}^{\frac{\nu}{j}}\cdots w_{n+a_t}^{\frac{\nu}{j}}}\Big| = \frac{|v_n|^{\frac{\nu}{j}-1}}{|v_{n+a_t}|^{\frac{\nu}{j}-1}}\to 0 \] as $(t=(j,l_{i}))_{i}$ goes to infinity. This implies that, for $\nu>j$, \begin{equation}\label{eq:4} T^{a_{t}}( S^{a_{t}}y^{(l_{0})})^{\frac{\nu}{j}}\to 0. \end{equation} For the second term in \eqref{eq:2}, it follows from property \ref{condition2} that whenever $d_t\geq \nu$ then \begin{equation}\label{eq:5} \Big\|\sum_{r=t+1}^{\infty} T^{a_{t}}(S^{a_{r}}y^{(l)})^{\frac{\nu}{m}} \Big\|_t \leq \sum_{r=t+1}^{\infty} \|T^{a_{t}}(S^{a_{r}}y^{(l)})^{\frac{\nu}{m}} \|_r < 2^{-t}. \end{equation} Therefore, by equations \eqref{eq:1}, \eqref{eqtozeroj}, \eqref{eq:2}, \eqref{eq:4} and \eqref{eq:5} we have that \begin{align*} T^{a_{t}}z\to y^{(l_{0})} \end{align*} as $(t=(j,l_{i}))_{i}$ goes to infinity. 
The proof is completed by the density of the sequence $(y^{(l)})_{l}$. \end{proof} We now make a refinement of the previous proof to obtain that $HC(B_{w})$ contains an algebra that is not finitely generated. \begin{theorem}\label{genresl} Let $(X, (\|\cdot\|_q)_q)$ be a Fr\'echet sequence algebra under coordinatewise multiplication in which $(e_{n})_{n}$ is a basis with Property A. Let $B_{w}$ be a hypercyclic weighted backward shift on $X$. If there exists an increasing sequence $(p_k)_k$ of natural numbers satisfying \eqref{eq1} such that \[ \text{for any $n\geq 0$,}\quad\prod_{\nu=0}^{p_k+n}w_{\nu}^{-1}\to 0 \quad\text{as $k\to\infty$}, \] then $HC(B_{w})$ contains an algebra, except zero, that is not finitely generated. In other words, $HC(B_w)$ is algebrable. \end{theorem} \begin{proof} We begin the proof as in Theorem \ref{thrm:algebra1}. With the notation defined there we obtain again a dense sequence $(y^{(l)})_l\subset \varphi$ of non-zero points in $X$ and an increasing sequence of natural numbers $(a_{r})_{r\geq 1}$ with $a_{r}\in \{p_k: k\geq 1\}$ such that, if $r=(m,l)\in \mathbb N$, then \begin{enumerate}[label=\textbf{A.\arabic*}] \item \label{condition1b} $\| (S^{a_{r}} y^{(l)})^{\frac{1}{m}}\|_r< 2^{-r}$, \end{enumerate} and if $r\geq 2$, then \begin{enumerate}[label=\textbf{A.\arabic*},resume] \item \label{condition2b} $\| T^{a_{t}}( S^{a_{r}}y^{(l)})^{\frac{\nu}{m}}\|_r< 2^{-r}$ for $1\leq t<r$ and $1\leq\nu \leq d_{r}$, \item \label{condition3b} $a_{r}-a_{r-1}> s_{\widetilde{l}}$ with $r-1=(\widetilde{m},\widetilde{l})$, \end{enumerate} where $d_{r}=\max_{(\widetilde{m},\widetilde{l})< r}\widetilde{m}$. Let us now consider a partition of the natural numbers into an infinite number of infinite sets $\mathbb N_{k}$, $k\geq 1$, such that the sequence $(y^{(l)})_{l\in\mathbb N_{k}}$ is dense in $X$ for any $k\geq 1$. 
We can assume that for each $l_{0}\in\mathbb N_k$ the element $y^{(l_{0})}$ appears infinitely many times in the sequence $(y^{(l)})_{l\in\mathbb{N}_k}$. For each natural number $k$ we consider the vector \[ x^{(k)}=\sum_{m=1}^{\infty} \sum_{l\in\mathbb N_{k}} ( S^{a_{(m,l)}}y^{(l)})^{\frac{1}{m}}. \] It follows from condition \ref{condition1b} that these series converge in $X$, so that $x^{(k)}\in X$. Note that, by condition \ref{condition3b} and the fact that the sets $\mathbb{N}_k$ are pairwise disjoint, we have that \begin{equation}\label{eq:20} x^{(k)}x^{(k')}=0\text{ if $k\ne k'$.} \end{equation} Let $\mathcal{A}$ be the algebra generated by $(x^{(k)})_k$. Since finitely many elements of $\mathcal{A}$ only involve a finite number of the elements $x^{(k)}$, $k\geq 1$, \eqref{eq:20} shows that $\mathcal{A}$ is not finitely generated. Thus, to complete the proof, it suffices to show that any non-zero point in $\mathcal{A}$ is hypercyclic for $T$. Let $z\in\mathcal{A}\setminus\{0\}$. We can write \begin{equation}\label{eq:21} z=\sum_{\substack{\beta\in I \subset\mathbb{N}_0^s\\ \beta\neq 0}}c_{\beta}(x^{(1)})^{\beta_{1}}\cdots (x^{(s)})^{\beta_{s}} \end{equation} for some $s\geq 1$ and $I$ finite, where $(x^{(k)})^0=e$. By \eqref{eq:20}, this reduces to $$ z=\sum_{\nu=j}^{N}Q_{\nu} $$ with $1\leq j\leq N$ and $Q_j\neq 0$, where $Q_{\nu}$ is the $\nu$-homogeneous part of $z$, \[ Q_{\nu}=\sum_{k=1}^{s}c_{\nu,k}(x^{(k)})^{\nu}. \] Since $Q_j$ is not zero, we may assume that there is some $k'$ such that $c_{j,k'}=1$. Let us fix $l_{0}\in\mathbb N_{k'}$. 
Then, for $t=(j,l_{0})$, a calculation as in \eqref{eq:pointb} together with condition \ref{condition2b} shows that \begin{align*} \Vert T^{a_{t}}(Q_{j}) - y^{(l_{0})}\Vert_t & \leq \sum_{k=1}^{s} \vert c_{j,k}\vert\sum_{\substack{r\geq t+1\\r=(m,l),\ l\in\mathbb N_{k}}} \Vert T^{a_{t}}(S^{a_{r}}y^{(l)})^{\frac{j}{m}} \Vert_r\\ & \leq \sum_{k=1}^{s} \vert c_{j,k}\vert \sum_{r=t+1}^{\infty} \Vert T^{a_{t}}(S^{a_{r}}y^{(l)})^{\frac{j}{m}} \Vert_r\\ &< 2^{-t} \sum_{k=1}^{s} \vert c_{j,k}\vert. \end{align*} Since there exists an increasing sequence of natural numbers $(l_{i})_{i}\subset\mathbb N_{k'}$ with $y^{(l_{i})}=y^{(l_{0})}$ for all $i$, we have that \[ T^{a_{t}}(Q_{j}) \to y^{(l_{0})} \] as $(t=(j,l_{i}))_{i}$ goes to infinity. The same argument as in \eqref{eq:2}, \eqref{eq:4} and \eqref{eq:5} shows that, for any $\nu>j$, $T^{a_{t}}(Q_{\nu})\to 0$ when the sequence $(t=(j,l_{i}))_{i}$ goes to infinity. Hence, $$ T^{a_{t}}z\to y^{(l_{0})} $$ when $(t=(j,l_{i}))_{i}$ goes to infinity. The density of the sequence $(y^{(l)})_{l\in\mathbb{N}_{k'}}$ implies that $z\in HC(T)$, which had to be shown. \end{proof} The hypothesis on the weight $w$ in Theorems \ref{thrm:algebra1} and \ref{genresl} is slightly technical. However, it allows us to treat general hypercyclic operators in the two cases of greatest interest. \begin{corollary}\label{corrolalg} Let $B_w$ be a hypercyclic weighted backward shift on $\ell^p$, $1\leq p<\infty$, or $c_0$, which we consider as Banach sequence algebras under the coordinatewise multiplication. Then the set $HC(B_w)$ of hypercyclic vectors for $B_w$ is algebrable. This applies, in particular, to the Rolewicz operators $\lambda B$, $|\lambda|>1$. \end{corollary} Indeed, since $\|e_n\|=1$ for all $n\geq 0$, the hypothesis on $w$ follows immediately from Proposition \ref{prop:increasingsequence}, that is, from hypercyclicity. Property A follows from part (a) of Example \ref{propA}. 
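As a small numeric sanity check (illustrative only; the value $\lambda=2$ is an arbitrary choice), for the Rolewicz operators $\lambda B$ both the quantity in Proposition \ref{prop:increasingsequence} and the hypothesis on the weight reduce to $|\lambda|^{-n}$, since $\|e_n\|=1$ and $v_n=\lambda^n$, and this indeed tends to $0$ exactly when $|\lambda|>1$:

```python
# Illustrative numerics for the Rolewicz operator λB (λ = 2 is an arbitrary
# choice). Here w_0 = 1 and w_n = λ for n >= 1, so v_n = λ^n. Since
# ||e_n|| = 1 in ℓ^p and c_0, both ||v_n^{-1} e_n|| -> 0 (hypercyclicity)
# and |v_n|^{-1} -> 0 (the hypothesis on w) reduce to |λ|^{-n} -> 0.

lam = 2.0
v = [lam**n for n in range(60)]           # v_n = w_0 * ... * w_n = λ^n

norms = [1.0 / v[n] for n in range(60)]   # ||v_n^{-1} e_n|| = |λ|^{-n}
print(norms[10], norms[50])               # both tiny, as expected for |λ| > 1
```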
The space $H(\mathbb{C})$ of entire functions is a Fr\'echet algebra when endowed with the Hadamard product \[ (f*g)(z) =\sum_{n=0}^\infty a_nb_nz^n,\quad z \in \mathbb C \] for $f(z)=\sum_{n=0}^\infty a_nz^n$ and $g(z)=\sum_{n=0}^\infty b_nz^n$, see \cite{RS96}. When we identify entire functions with their sequence of Taylor coefficients at 0 then $H(\mathbb{C})$ turns into a Fr\'echet sequence algebra. Again, since $\|e_n\|_1=1$ for all $n\geq 0$, the hypothesis on $w$ follows immediately from hypercyclicity via Proposition \ref{prop:increasingsequence}. Property A follows from part (b) of Example \ref{propA}. \begin{corollary}\label{corrolalg2} Let $B_w$ be a hypercyclic weighted backward shift on $H(\mathbb{C})$, which we consider as a Fr\'echet sequence algebra under the Hadamard product. Then the set $HC(B_w)$ of hypercyclic vectors for $B_w$ is algebrable. This applies, in particular, to the MacLane operator $D$ of differentiation. \end{corollary} Finally, on the space $\omega$ of all sequences, condition \eqref{eq1} holds trivially for any weighted backward shift, so it no longer implies the hypothesis in the above theorems. We only state here a special case. Recall that Property A holds for the space $\omega$ by part (c) of Example \ref{propA}. \begin{corollary}\label{corrolalg3} Let $B_w$ be a weighted backward shift on $\omega$, which we consider as a Fr\'echet sequence algebra under coordinatewise multiplication. If $\prod_{k=0}^n w_k^{-1}\to 0$ as $n\to\infty$, then the set $HC(B_w)$ of hypercyclic vectors for $B_w$ is algebrable. \end{corollary} \section{Fr\'echet sequence algebras under the Cauchy product} \label{cauchy} In this section we focus on the study of dynamical systems where the underlying sequence space is a Fr\'echet algebra whose multiplicative structure is given by the Cauchy product. The Cauchy product is the natural structure that appears when we multiply two power series. 
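As an illustrative aside, the regrouping of coefficients that underlies the Cauchy product is the discrete convolution $c_n=\sum_{k=0}^n a_k b_{n-k}$; a minimal sketch on truncated coefficient lists (the example polynomial is an arbitrary choice):

```python
# Illustrative sketch only: the Cauchy product of two coefficient sequences
# is their discrete convolution, matching multiplication of power series.

def cauchy_product(x, y):
    """Return the truncated Cauchy product (x * y)_n = sum_k x_k y_{n-k}."""
    n_max = min(len(x), len(y))
    return [sum(x[k] * y[n - k] for k in range(n + 1)) for n in range(n_max)]

# Example: (1 + z)^2 = 1 + 2z + z^2, read off from the coefficients.
p = [1, 1, 0, 0]
print(cauchy_product(p, p))  # [1, 2, 1, 0]
```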
Given $\sum_{n=0}^\infty a_n z^{n}$ and $\sum_{n=0}^\infty b_n z^{n}$ we have formally, after regrouping the terms with the same degree, that $(\sum_{n=0}^\infty a_n z^{n}) \cdot (\sum_{n=0}^\infty b_n z^{n}) = \sum_{n=0}^\infty c_n z^{n}$ where $c_n=\sum_{k=0}^n a_k b_{n-k}$. Even more, if the power series $\sum_{n=0}^\infty a_n z^{n}$ has radius of convergence $R_{1}$ and the power series $\sum_{n=0}^\infty b_n z^{n}$ has radius of convergence $R_{2}$, then the resulting power series $\sum_{n=0}^\infty c_n z^{n}$ has a radius of convergence of at least $\min\{R_{1},R_{2}\}$. In general, the Cauchy product of two sequences $x=(x_n)_n$ and $y=(y_n)_n$ is defined by the discrete convolution \begin{equation*} x \ast y = (z_n)_n,\quad\text{where } z_n=\sum_{k=0}^n x_k y_{n-k}, \quad n\geq 0. \end{equation*} For the sake of clarity we will write the Cauchy product of two sequences $x$, $y$ as $x\ast y$, while the $n$-fold Cauchy product will be written as $x^n=x\ast\ldots\ast x$ to avoid a more cumbersome notation. \begin{example}\label{seqalg} (a) The most natural example of a Fr\'echet algebra in this scenario is the Fr\'echet space $H(\mathbb{C})$ of entire functions, which we consider again as a sequence space via Taylor coefficients at 0, see Example \ref{propA}. Its natural topology is induced by the family of seminorms \[ \|(a_n)_{n\geq 0}\|_q = \sup_{|z|\leq q}\Big|\sum_{n=0}^\infty a_n z^n\Big|,\quad q\geq 1. \] Then $H(\mathbb{C})$ becomes a Fr\'echet sequence algebra under the Cauchy product. (b) The sequence space $\ell^1$ is a Banach sequence algebra under its usual norm when endowed with the Cauchy product. (c) The product topology of the space $\omega$ of all sequences is generated by the increasing sequence of seminorms $$ \|x\|_q = \sum_{n=0}^q \vert x_{n}\vert,\quad q\geq 1. $$ Then $\omega$ is a Fr\'echet sequence algebra under the Cauchy product.
\end{example} In order to translate the results obtained in Section \ref{pointwise} to algebras that are defined by Cauchy products we need again to impose conditions on the weight $w$ that defines the weighted backward shift $B_{w}$ and on the basis $(e_n)_n$, as we did in Theorems \ref{thrm:algebra1} and \ref{genresl}. As for the weight, we will demand that $B_w$ is mixing. Recall that a weighted backward shift $B_w$ on a Fr\'echet sequence space in which $(e_{n})_{n}$ is a basis is mixing if and only if \begin{equation}\label{eq:mix} \frac{1}{\prod_{k=0}^n w_k}e_n\to 0 \end{equation} in $X$ as $n\to\infty$, see \cite[Theorem 4.8]{GrPe11}. As for $(e_n)_n$, we introduce a new property. \begin{definition}\label{def:propertyB} Let $(X, (\|\cdot\|_q)_q)$ be a Fr\'echet sequence space that contains the finite sequences. We say that $(e_n)_n$ has \textit{Property B} if the following conditions hold: \begin{enumerate} \item[(i)] there is some $q\geq 1$ such that $\|e_n\|_q>0$ for all $n\geq 0$; \item[(ii)] for any $r\geq 1$ there is some $q\geq 1$ and some $C_1>0$ such that, for all $n,k\geq 0$, \[ \|e_n\|_r\cdot \|e_k\|_r\leq C_1 \|e_{n+k}\|_q; \] \item[(iii)] for any $m\geq 2$, $M\geq 1$, $r\geq 1$ there is some $\rho\geq 1$ such that for any $t\geq 1$ there is some $\tau\geq 1$ and some $C_2>0$ such that, for any $0\leq k\leq M$, $n\geq M$, \[ \|e_{mn}\|_t\cdot\|e_{n-k}\|_r\leq C_2\|e_{mn}\|_{\tau}^{\frac{1}{m}}\cdot\|e_{mn-k}\|_{\rho}. \] \end{enumerate} \end{definition} \begin{lemma}\label{buildingblocks} Let $(X,(\|\cdot\|_q)_q)$ be a Fr\'echet sequence space in which $(e_{n})_{n}$ is a basis with Property B, and let $B_{w}$ be a mixing weighted backward shift on $X$. 
Then, for any point \[ y=\sum_{j=0}^{s}y_je_{j}\in \varphi, \] any $m\geq 1$, $r\geq 1$, $N\geq 0$ and $\varepsilon>0$ there are $\eta\geq N$, $\gamma> \eta+2s$, and complex numbers $c_0,\ldots,c_s$ and $b$ such that the point \begin{equation*} p=q+b e_{\gamma}\quad\text{with}\quad q=\sum_{j=0}^{s}c_{j}e_{\eta+j} \end{equation*} satisfies: \begin{enumerate}[label=\textbf{\emph{C.\arabic*}}] \item \label{cauchycond2} $\|p\|_r<\varepsilon$; \item \label{cauchycond3} $mq\ast b^{m-1}e_{(m-1)\gamma}=F_{w^{-1}}^{\eta+(m-1)\gamma}y$; \item \label{cauchycond4} $\|B_w^{\eta+(m-1)\gamma}(b^{m}e_{m\gamma})\|_r<\varepsilon$. \end{enumerate} \end{lemma} \begin{proof} For $m=1$ the assertion is trivial. Indeed, take $b=0$ and $c_{j}=\frac{v_jy_j}{v_{\eta+j}}$ for $j=0,\ldots,s$. Then \emph{\ref{cauchycond3}} and \emph{\ref{cauchycond4}} hold trivially. By \eqref{eq:mix} we may take $\eta\geq N$ so large that \[ \|p\|_r=\Big\|\sum_{j=0}^{s}\frac{v_jy_j}{v_{\eta+j}}e_{\eta+j}\Big\|_r<\varepsilon, \] hence \emph{\ref{cauchycond2}}. Finally choose any $\gamma> \eta+2s$. Fix $m$ bigger than one. Let $b\in\mathbb{C}$ be given, where $b\neq 0$. Setting \begin{equation}\label{eq:alph} c_{j}=\frac{1}{mb^{m-1}}\frac{v_jy_j}{v_{\eta+j+(m-1)\gamma}}, \quad j=0,\ldots,s, \end{equation} we see that condition \emph{\ref{cauchycond3}} holds. Now let $r\geq 1$, $N\geq 0$ and $\varepsilon\in (0,1]$. It remains to choose $\eta\geq N$, $\gamma>\eta+2s$ and $b$ so that \emph{\ref{cauchycond2}} and \emph{\ref{cauchycond4}} hold. In condition (i) of Property B we may assume that $q=1$, so that $\|e_n\|_r>0$ for all $r\geq 1$ and $n\geq 0$. By condition (ii) of Property B, repeated $m-1$ times, there is some $q\geq r$ and some $C_3>0$ such that, for any $n,k\geq 0$, \begin{equation}\label{eq:iinew} \|e_n\|_r^{m-1}\cdot \|e_k\|_r\leq C_3 \|e_{(m-1)n+k}\|_q. \end{equation} Let \[ C_4=\frac{1}{m}\sum_{j=0}^s |v_j||y_j|+1\quad\text{and}\quad \widetilde{\varepsilon}=\Big(\frac{\varepsilon}{2C_4}\Big)^2. 
\] In view of \eqref{eq:mix} there is some $M> 2s$ such that \begin{equation}\label{eq:mix2} \frac{1}{|v_{n}|}\|e_n\|_q <\min\{1, C_3^{-1}\}\widetilde{\varepsilon}^m \end{equation} whenever $n\geq M$. Next, let $\rho\geq 1$ be chosen according to condition (iii) of Property B. Since $B_w$ is continuous and \[ B_w^ke_{m\gamma}= \frac{v_{m\gamma}}{v_{m\gamma-k}}e_{m\gamma-k},\quad \gamma\geq 0, k\leq m\gamma \] there is some $t\geq 1$ and $C_5>0$ such that \[ \frac{|v_{m\gamma}|}{|v_{m\gamma-k}|}\|e_{m\gamma-k}\|_\rho\leq C_5\|e_{m\gamma}\|_t,\quad k\leq M, \gamma\geq M. \] We now take $\tau\geq 1$ as in condition (iii) of Property B, which implies that \[ \frac{|v_{m\gamma}|}{|v_{m\gamma-k}|}\|e_{\gamma-k}\|_r\leq C_2C_5\|e_{m\gamma}\|_{\tau}^\frac{1}{m},\quad k\leq M, \gamma\geq M. \] We deduce that \[ \frac{|v_{m\gamma}|^{m-1}}{|v_{m\gamma-k}|^m}\|e_{\gamma-k}\|_r^m\leq (C_2C_5)^m\frac{1}{|v_{m\gamma}|}\|e_{m\gamma}\|_{\tau},\quad k\leq M, \gamma\geq M, \] which tends to zero as $\gamma\to \infty$ by \eqref{eq:mix}. Thus there is some $\gamma\geq N+M$ so that with $\eta=\gamma-M$ and $j=0,\ldots,s$ we have that \[ \frac{|v_{m\gamma}|^{m-1}}{|v_{\eta+j+(m-1)\gamma}|^m}\|e_{\eta+j}\|_r^m\leq 1 \] and hence, in view of \eqref{eq:mix2} and the fact that $r\leq q$, \begin{equation}\label{eq:lem1} \frac{1}{|v_{\gamma-\eta}|^{\frac{1}{m}}}\|e_{\gamma-\eta}\|_r^{\frac{1}{m}}\frac{|v_{m\gamma}|^{\frac{1}{m}}}{|v_{\eta +j+(m-1)\gamma}|^{\frac{1}{m-1}}}\|e_{\eta+j}\|_r^{\frac{1}{m-1}} <\widetilde{\varepsilon}. \end{equation} Note that $\eta\geq N$ and $\gamma>\eta+2s$. 
From \eqref{eq:iinew} and \eqref{eq:mix2} we obtain that for these $\gamma$ and $\eta$ and any $j=0,\ldots,s$, \[ \frac{1}{|v_{\eta+j+(m-1)\gamma}|}\|e_\gamma\|_r^{m-1}\|e_{\eta+j}\|_r \leq \frac{C_3}{|v_{\eta+j+(m-1)\gamma}|}\|e_{\eta+j+(m-1)\gamma}\|_q <\widetilde{\varepsilon}^m, \] hence \begin{equation}\label{eq:lem2} \frac{1}{|v_{\eta+j+(m-1)\gamma}|^{\frac{1}{m-1}}}\|e_\gamma\|_r\|e_{\eta+j}\|_r^{\frac{1}{m-1}} <\widetilde{\varepsilon}. \end{equation} So, let finally \[ b = \Big(\max_{0\leq j\leq s} \frac{\|e_{\eta+j}\|_r^{\frac{1}{m-1}}}{|v_{\eta+j+(m-1)\gamma}|^{\frac{1}{m-1}}}\cdot \min\Big\{\frac{1}{\|e_\gamma\|_r }, \frac{|v_{\gamma-\eta}|^{\frac{1}{m}}}{|v_{m\gamma}|^{\frac{1}{m}}\|e_{\gamma-\eta}\|_r^{\frac{1}{m}}} \Big\}\Big)^{\frac{1}{2}}, \] which is strictly positive. Then we have with \eqref{eq:alph}, \eqref{eq:lem1} and \eqref{eq:lem2} \begin{align*} \|q\|_r &\leq \sum_{j=0}^s |c_j|\|e_{\eta+j}\|_r = \frac{1}{mb^{m-1}}\sum_{j=0}^s |v_j||y_j|\frac{\|e_{\eta+j}\|_r}{|v_{\eta+j+(m-1)\gamma}|}\\ &\leq C_4 \frac{1}{b^{m-1}}\max_{0\leq j\leq s} \frac{\|e_{\eta+j}\|_r}{|v_{\eta+j+(m-1)\gamma}|}\\ &= C_4 \Big(\max_{0\leq j\leq s} \frac{\|e_{\eta+j}\|_r^{\frac{1}{m-1}}}{|v_{\eta+j+(m-1)\gamma}|^{\frac{1}{m-1}}}\cdot \max\Big\{\|e_\gamma\|_r , \frac{|v_{m\gamma}|^{\frac{1}{m}}\|e_{\gamma-\eta}\|_r^{\frac{1}{m}}}{|v_{\gamma-\eta}|^{\frac{1}{m}}} \Big\}\Big)^{\frac{m-1}{2}}\\ &<C_4\widetilde{\varepsilon}^{\frac{m-1}{2}}\leq \tfrac{\varepsilon}{2}. \end{align*} Moreover, \eqref{eq:lem2} implies that \[ \|b e_\gamma\|_r =b \|e_\gamma\|_r \leq \max_{0\leq j\leq s}\Big(\frac{1}{|v_{\eta+j+(m-1)\gamma}|^{\frac{1}{m-1}}}\|e_\gamma\|_r\|e_{\eta+j}\|_r^{\frac{1}{m-1}}\Big)^{\frac{1}{2}}<\widetilde{\varepsilon}^{\frac{1}{2}}\leq\tfrac{\varepsilon}{2}. \] Altogether we have that \[ \|p\|_r\leq\|q\|_r+\|b e_\gamma\|_r<\varepsilon, \] so that \emph{\ref{cauchycond2}} holds. 
On the other hand, \eqref{eq:lem1} implies that \begin{align*} \|B_w^{\eta+(m-1)\gamma}(b^{m}e_{m\gamma})\|_r &= \Big(b\frac{|v_{m\gamma}|^{\frac{1}{m}}\|e_{\gamma-\eta}\|_r^{\frac{1}{m}}}{|v_{\gamma-\eta}|^{\frac{1}{m}}}\Big)^m\\ &\leq \max_{0\leq j\leq s}\Big(\frac{\|e_{\eta+j}\|_r^{\frac{1}{m-1}}}{|v_{\eta+j+(m-1)\gamma}|^{\frac{1}{m-1}}}\frac{|v_{m\gamma}|^{\frac{1}{m}}\|e_{\gamma-\eta}\|_r^{\frac{1}{m}}}{|v_{\gamma-\eta}|^{\frac{1}{m}}}\Big)^{\frac{m}{2}}<\widetilde{\varepsilon}^{\frac{m}{2}}\leq\varepsilon. \end{align*} Thus, \emph{\ref{cauchycond4}} holds as well. \end{proof} \begin{remark}\label{remomega} For the space $\omega$, the sequence $(e_n)_n$ does not have Property B because it does not satisfy condition (i). However, the conclusion of Lemma \ref{buildingblocks} holds trivially by choosing $\eta$ and $\gamma$ so large that the value of the seminorms in \emph{\ref{cauchycond2}} and \emph{\ref{cauchycond4}} is zero. Since the remaining results of this section only rely on this conclusion, they also hold for all (not necessarily mixing) weighted backward shifts on $\omega$. \end{remark} We can now obtain the analogue of Theorem \ref{thrm:algebra1} for Fr\'echet algebras defined by Cauchy products. \begin{theorem} \label{thrm:algebra1cauchy} Let $(X,(\|\cdot\|_q)_q)$ be a Fr\'echet sequence algebra under the Cauchy product in which $(e_{n})_{n}$ is a basis with Property B, and let $B_{w}$ be a mixing weighted backward shift on $X$. Then there exists a point $x\in HC(B_{w})$ such that the algebra generated by $x$, except zero, is contained in $HC(B_{w})$. \end{theorem} \begin{proof} To simplify our notation we will denote, as before, by $T$ the weighted backward shift operator $B_{w}$ on $X$ and by $S$ the weighted forward shift operator $F_{w^{-1}}$. Recall that $TS=I$ on $\omega$.
As in the proof of Theorem \ref{thrm:algebra1} we fix a correspondence $r=(m,l)$ between $\mathbb{N}$ and $\mathbb{N}\times\mathbb{N}$, and we fix a dense sequence $(y^{(l)})_l\subset \varphi$ of non-zero points in $X$. We define a partition of the set of all non-zero multi-indices by setting \begin{equation*} I_{m,t}=\{\alpha\in\mathbb N_0^{t}:\vert \alpha\vert=m, \alpha_t>0\},\quad m,t\geq 1, \end{equation*} where $|\alpha|=\sum_{j=1}^t \alpha_j$. Given $\alpha\in I_{m,t}$ and $p_1,\ldots,p_t\in X$ we will write $$ P^{\alpha}=p_{1}^{\alpha_1}\ast\cdots\ast p_{t}^{\alpha_t}; $$ and \[ \binom{m}{\alpha}= \frac{m!}{\alpha_1!\cdots \alpha_t!} \] denotes the corresponding multinomial coefficient. Let us now construct an increasing sequence $(a_{r})_{r\geq 0}$ of natural numbers and a sequence $(p_{r})_{r\geq 0}$ in $\varphi$ satisfying that, if $r=(m,l)\in \mathbb N$, then \begin{enumerate}[label=\textbf{D.\arabic*}] \item \label{Cs1} $\Vert p_{r}\Vert_r<2^{-r},$ \item \label{Cs2} $T^{a_{r}}P^{\alpha}=0$ for all $\alpha\in I_{\mu,t}$, $1\leq \mu<m$, $1\leq t\leq r$, or $\mu=m$, $1\leq t< r$, and for all $\alpha\in I_{m,r}$, $\alpha\neq (0,\ldots, 0,m)$, \item \label{Cs3} $\Vert T^{a_{r}}p_{r}^{m}-y^{(l)}\Vert_{r} < 2^{-r}$, \item \label{Cs4} $\sum_{\alpha\in I_{\mu, r}} \binom{\mu}{\alpha}\Vert T^{a_{t}}P^{\alpha}\Vert_{r}< 2^{-r}$ for $1\leq t<r$ and $1\leq \mu\leq \widetilde{m}$, where $t=(\widetilde{m},\widetilde{l})$. \end{enumerate} We proceed by induction on $r\geq 0$. For $r=0$ we set $a_0=1$ and $p_0=e_0$; there is nothing else to do. Let $r\geq 1$, and assume that we have constructed natural numbers $a_0 < a_{1}<\ldots< a_{r-1}$ and points $p_{0},\ldots, p_{r-1}$ in $\varphi$ satisfying conditions \ref{Cs1}, \ref{Cs2}, \ref{Cs3} and \ref{Cs4}. Consider $r=(m,l)$. Let $\varepsilon\leq 2^{-r}$ be a positive number and $\rho\geq r$ an integer, both to be specified later; write \[ y^{(l)} = \sum_{j=0}^{s_l} y_j^{(l)} e_j. 
\] Let $N$ be the largest index of the non-zero coordinates in any of the points $p_{0},\ldots, p_{r-1}$. By Lemma \ref{buildingblocks} there are $\eta > \max\{N,a_{r-1}\}$, $\gamma> \eta+2s_l$ and complex numbers $d_0,\ldots,d_{s_l}$ and $b$ such that the point \[ p=q+b e_{\gamma}\quad\text{with}\quad q=\sum_{j=0}^{s_l}d_j e_{\eta+j} \] satisfies \begin{enumerate}[label=\textbf{E.\arabic*}] \item \label{conda} $\|p\|_\rho<\varepsilon$; \item \label{condb} $mq\ast b^{m-1}e_{(m-1)\gamma}=S^{\eta+(m-1)\gamma}y^{(l)}$; \item \label{condc} $\|T^{\eta+(m-1)\gamma}(b^{m}e_{m\gamma})\|_r<2^{-r}$. \end{enumerate} We define \[ p_r=p, \quad a_r=\eta+(m-1)\gamma. \] Then $a_r\geq \eta > a_{r-1}$. Moreover, \ref{conda} implies condition \ref{Cs1} since $\rho\geq r$ and $\varepsilon\leq 2^{-r}$. Let $\alpha\in I_{\mu,t}$, $1\leq \mu\leq m$, $1\leq t\leq r$. If $\mu<m$ then the largest index of the non-zero coordinates of $P^\alpha$ is at most $(m-1) \gamma<a_r$ (note that $N\leq \gamma$); now, if $\mu=m$ and $t<r$ then this index is at most $mN=N+(m-1)N< \eta+(m-1)\gamma=a_r$; if $\mu=m$, $t=r$ and $\alpha\neq (0,\ldots,0,m)$ then this index is at most $N+(m-1)\gamma< \eta+(m-1)\gamma=a_r$. Thus, in any case, we have that $T^{a_r}P^\alpha=0$, hence \ref{Cs2}. Next, we have that \begin{align*} p_{r}^{m} &=\sum_{k=0}^{m}\tbinom{m}{k}q^{m-k}\ast(b e_\gamma)^{k}\\ &=\sum_{k=0}^{m-2}\tbinom{m}{k}q^{m-k}\ast(b e_\gamma)^{k} + mq\ast b^{m-1} e_{(m-1)\gamma} + b^me_{m\gamma}. \end{align*} The largest index of the non-zero coordinates of the first sum is at most \[ 2(\eta+s_l)+(m-2)\gamma = \eta+(\eta+2s_l) +(m-2)\gamma < \eta +(m-1)\gamma=a_r, \] so that $T^{a_r}$ sends the sum to 0. Hence \[ T^{a_r} p_r^m = T^{a_r}(mq\ast b^{m-1} e_{(m-1)\gamma}) + T^{a_r}(b^me_{m\gamma})= y^{(l)}+ T^{a_r}(b^me_{m\gamma}), \] where we have applied \ref{condb} and the fact that $TS=I$. Thus, \ref{condc} implies condition \ref{Cs3}.
Finally, condition \ref{Cs4} consists of a finite number of inequalities (in fact, for $r=1$ the condition is empty). Now, if $\alpha\in I_{\mu,r}$, then $P^\alpha$ is of the form \[ p_1^{\alpha_1}\ast\cdots\ast p_r^{\alpha_r} \] with $\alpha_r\neq 0$. Since $p_1,\ldots, p_{r-1}$ are known and both the Cauchy product and the operator $T$ are continuous on $X$, there exist $\rho\geq r$ and $\varepsilon\leq 2^{-r}$ such that all the inequalities in \ref{Cs4} are satisfied as soon as $\|p_r\|_\rho<\varepsilon$. We choose $\rho$ and $\varepsilon$ so that these inequalities hold. This completes the induction process. Consider now \begin{equation*} x=\sum_{r=1}^{\infty}p_{r}. \end{equation*} As a consequence of \ref{Cs1}, the series converges and $x\in X$. We claim that the algebra generated by $x$ is contained in $HC(T)$, except for zero. Thus let \[ z=\sum_{\mu=1}^{m}c_{\mu}x^{\mu} \] with $c_{1},\ldots,c_{m}\in \mathbb C$ and $c_{m}\ne 0$. We may assume that $c_{m}=1$. Let $l\geq 1$. Since \[ x^m = \sum_{t=1}^\infty \sum_{\alpha\in I_{m,t}}\tbinom{m}{\alpha} P^\alpha \] we have for $r=(m,l)$ that, in view of condition \ref{Cs2}, \begin{align*} T^{a_{r}}x^{m}-y^{(l)} &= \sum_{t<r} \sum_{\alpha\in I_{m,t}}\tbinom{m}{\alpha} T^{a_{r}}P^\alpha + \sum_{\substack{\alpha\in I_{m,r}\\ \alpha\neq (0,\ldots,0,m)}}\tbinom{m}{\alpha} T^{a_{r}}P^\alpha\\ &\phantom{xxxxxxxxxxxxx} + T^{a_{r}}p_{r}^{m}-y^{(l)} + \sum_{t>r} \sum_{\alpha\in I_{m,t}}\tbinom{m}{\alpha} T^{a_{r}}P^\alpha\\ &= T^{a_{r}}p_{r}^{m}-y^{(l)} + \sum_{t>r} \sum_{\alpha\in I_{m,t}}\tbinom{m}{\alpha} T^{a_{r}}P^\alpha, \end{align*} and hence \begin{equation*}\label{degreem} \begin{split}\|T^{a_{r}}x^{m}-y^{(l)}\|_r &< 2^{-r}+\sum_{t>r} \sum_{\alpha\in I_{m,t}}\tbinom{m}{\alpha}\| T^{a_{r}}P^\alpha\|_r\quad\text{(by condition \ref{Cs3})}\\ &< 2^{-r}+ \sum_{t>r}2^{-t}\quad\text{(by condition \ref{Cs4})}\\ &= 2^{-r+1}. 
\end{split} \end{equation*} In the same way we obtain by conditions \ref{Cs2} and \ref{Cs4} for $\mu<m$ \begin{equation*} \label{goestozero} \|T^{a_{r}}x^{\mu}\|_r \leq\sum_{t>r} \sum_{\alpha\in I_{\mu,t}}\tbinom{\mu}{\alpha}\|T^{a_{r}}P^\alpha\|_r< 2^{-r}. \end{equation*} Altogether we have that \[ \|T^{a_{r}}z-y^{(l)}\|_r < \sum_{\mu=1}^{m-1}|c_\mu|2^{-r} + 2^{-r+1} = \Big(\sum_{\mu=1}^{m-1}|c_\mu|+2\Big)2^{-r}. \] By the density of the sequence $(y^{(l)})_{l}$ the result follows and the proof is complete. \end{proof} We next want to show that the set $HC(B_w)$ is even algebrable. Thus we need to pass from an algebra generated by a single point $x$ to one generated by infinitely many points $x^{(k)}$, $k\geq 1$. The building blocks will be essentially the same points $p_r$ as in the previous proof. However, we need to ensure that the algebra generated by the $x^{(k)}$ is not finitely generated. This can be achieved by choosing suitable coefficients for the $p_r$. \begin{theorem} \label{thrm:algebraseveralcauchy} Let $(X,(\|\cdot\|_q)_q)$ be a Fr\'echet sequence algebra under the Cauchy product in which $(e_{n})_{n}$ is a basis with Property B, and let $B_{w}$ be a mixing weighted backward shift on $X$. Then $HC(B_{w})$ contains an algebra, except zero, that is not finitely generated. In other words, $HC(B_w)$ is algebrable. \end{theorem} \begin{proof} The proof of Theorem \ref{thrm:algebra1cauchy} will be modified in certain ways. We write again $T=B_w$, and we let $(y^{(l)})_l\subset \varphi$ be a dense sequence of non-zero points in $X$. We will here identify $\mathbb{N}$ with $\mathbb{N}^3$, so that we write $r=(m,l,\nu)\in \mathbb{N}$ with $m,l,\nu\geq 1$. Moreover, let $A$ be a countable dense subset of the set of finite sequences of norm at most 1 in $\ell^\infty(\mathbb{N})$. Let $\Lambda=(\lambda_{k,\nu})_{k,\nu\geq 1}$ be a matrix so that each column belongs to $A$, and each element of $A$ appears infinitely often as a column. 
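Such a matrix exists; the following is one concrete construction among many (the surjection $\pi$ is our choice, not forced by the proof). Enumerate $A$ as $(a^{(i)})_{i\geq 1}$ and let $\pi\colon\mathbb{N}\to\mathbb{N}$ be any map that takes every value infinitely often, for instance $\pi(2^{i-1}(2j-1))=i$ for $i,j\geq 1$. Then set \[ \lambda_{k,\nu}=a^{(\pi(\nu))}_k,\qquad k,\nu\geq 1. \] Since every fibre of $\pi$ is infinite, each element of $A$ appears infinitely often as a column of $\Lambda$.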
Let $I_{m,t}$, $m,t\geq 1$, and $P^{\alpha}$, $\alpha\in I_{m,t}$, be defined as in the proof of Theorem \ref{thrm:algebra1cauchy}. Following that proof we can then construct an increasing sequence $(a_{r})_{r\geq 1}$ of natural numbers and a sequence $(p_r)_{r\geq 1}$ in $\varphi$ such that, if $r=(m,l,\nu)\in \mathbb N$, then \begin{enumerate}[label=\textbf{F.\arabic*}] \item \label{Ds1} $\Vert p_{r}\Vert_r<2^{-r},$ \item \label{Ds2} $T^{a_{r}}P^{\alpha}=0$ for all $\alpha\in I_{\mu,t}$, $1\leq \mu<m$, $1\leq t\leq r$, or $\mu=m$, $1\leq t< r$, and all $\alpha\in I_{m,r}$, $\alpha\neq (0,\ldots, 0,m)$, \item \label{Ds3} $\Vert T^{a_{r}}p_{r}^{m}-y^{(l)}\Vert_{r} < 2^{-r}$, \item \label{Ds4} $\Vert T^{a_{t}}P^{\alpha}\Vert_r< 2^{-r}$ for all $\alpha\in I_{\mu,r}$ with $1\leq t<r$ and $1\leq\mu\leq \widetilde{m}$, where $t=(\widetilde{m},\widetilde{l},\widetilde{\nu})$. \end{enumerate} We may achieve, in addition, that if $p_r=\sum_{j=\eta_r}^{\gamma_r}d_je_j$ with $d_{\gamma_r}\neq 0$ then $a_{r}\leq m\gamma_{r}<\eta_{r+1}$, where $r=(m,l,\nu)$. We now define, for any $k\geq 1$, \begin{equation}\label{eq:avoid0} x^{(k)} = \sum_{r=1}^{\infty}\lambda_{k,\nu_r} p_{r}. \end{equation} Since the elements of the matrix $\Lambda$ are bounded (by 1), \ref{Ds1} implies that these series converge, so that $x^{(k)}\in X$, $k\geq 1$. Let $\mathcal{A}$ be the algebra generated by the points $x^{(k)}$, $k\geq 1$. We first show that any non-zero point $z\in\mathcal{A}$ is hypercyclic for $T$. We can write \[ z=\sum_{\substack{\beta\in I \subset \mathbb{N}_0^s\\ \beta\neq 0}}c_{\beta}(x^{(1)})^{\beta_{1}}\ast\cdots\ast (x^{(s)})^{\beta_{s}} \] for some $s\geq 1$ and $I$ finite. Let \[ m= \max\{|\beta| : c_\beta\neq 0\}. \] Thus \[ z= \sum_{\mu=1}^m \sum_{|\beta|=\mu}c_{\beta}(x^{(1)})^{\beta_{1}}\ast\cdots\ast (x^{(s)})^{\beta_{s}}. \] One reason for introducing the $\lambda_{k,\nu}$ in \eqref{eq:avoid0} is that one cannot be sure that $\sum_{|\beta|=m}c_{\beta}\neq 0$. 
But since the polynomial $P(a_1,\ldots,a_s)= \sum_{|\beta|=m}c_{\beta}a_1^{\beta_{1}}\cdots a_s^{\beta_{s}}$ is non-zero and since the first $s$ coordinates of the elements of $A$ are dense in the polydisk of $\mathbb{C}^s$, there is an element $a=(a_n)_n\in A$ such that \[ \sum_{|\beta|=m}c_{\beta}a_1^{\beta_{1}}\cdots a_s^{\beta_{s}}=:\rho\neq 0. \] Now, in order to show that $z$ is hypercyclic, let $l\geq 1$. By the definition of the matrix $\Lambda$ there is some $\nu\geq 1$, arbitrarily large, such that \[ \lambda_{k,\nu}=a_k,\quad k=1,\ldots,s. \] Let $r=(m,l,\nu)$, which can be made arbitrarily large by choosing $\nu$ large. After expansion, taking account of the continuity of the Cauchy product, we see that there are complex numbers $d_{\alpha}$, $\alpha\in I_{m,t}$, $t\geq 1$, such that \begin{equation}\label{eq:exp} \begin{split} \sum_{|\beta|=m}c_{\beta}(x^{(1)})^{\beta_{1}}\ast\cdots\ast (x^{(s)})^{\beta_{s}}&=\sum_{t=1}^{r-1}\sum_{\alpha\in I_{m,t}} d_\alpha P^\alpha + \sum_{\substack{\alpha\in I_{m,r}\\\alpha\neq (0,\ldots,0,m)}} d_{\alpha} P^{\alpha}\\ &\phantom{xxxxxxxxxxxxxx}+ \rho p_r^m + \sum_{t>r}\sum_{\alpha\in I_{m,t}} d_{\alpha} P^{\alpha}; \end{split} \end{equation} note that the coefficient of $p_r^m$ is \[ d_{(0,\ldots,0,m)}=\sum_{|\beta|=m}c_{\beta}\lambda_{1,\nu_r}^{\beta_{1}}\cdots \lambda_{s,\nu_r}^{\beta_{s}}=\sum_{|\beta|=m}c_{\beta}a_1^{\beta_{1}}\cdots a_s^{\beta_{s}} = \rho \] since $\nu_r=\nu$. Let \[ C_\mu= (1+\mu)^s\max_{|\beta|=\mu}|c_\beta|,\quad 1\leq \mu\leq m. \] Now, each $P^\alpha$ comes from one of the $t^m$ terms without any power of $p_{t+1}, p_{t+2},\ldots$ in the expansion of $(x^{(1)})^{\beta_{1}}\ast\cdots\ast (x^{(s)})^{\beta_{s}}$, and there are at most $(1+m)^s$ choices of $\beta$ with $|\beta|=m$; moreover, the elements of $\Lambda$ are bounded by 1. Altogether we obtain as a very rough estimate that \begin{equation}\label{eq:delta} |d_{\alpha}|\leq C_{m} t^m,\quad \alpha\in I_{m,t}, t\geq 1. 
\end{equation} It follows from \eqref{eq:exp} with condition \ref{Ds2} that \[ T^{a_r}\Big(\sum_{|\beta|=m}c_{\beta}(x^{(1)})^{\beta_{1}}\ast\cdots\ast (x^{(s)})^{\beta_{s}}\Big) = \rho T^{a_r}p_r^m + \sum_{t>r}\sum_{\alpha\in I_{m,t}} d_{\alpha} T^{a_r} P^{\alpha}, \] and therefore, by conditions \ref{Ds3} and \ref{Ds4} with \eqref{eq:delta}, \[ \Big\|T^{a_r}\Big(\sum_{|\beta|=m}c_{\beta}(x^{(1)})^{\beta_{1}}\ast\cdots\ast (x^{(s)})^{\beta_{s}}\Big) - \rho y^{(l)}\Big\|_r < \rho 2^{-r} + \sum_{t>r}\text{card} (I_{m,t})C_m t^m 2^{-t}. \] In the same way, for $1\leq \mu <m$, there are complex numbers $d_{\alpha}$, $\alpha\in I_{\mu,t}$, $t\geq 1$, such that \begin{equation}\label{eq:exp2} \begin{split} \sum_{|\beta|=\mu}c_{\beta}(x^{(1)})^{\beta_{1}}\ast\cdots\ast (x^{(s)})^{\beta_{s}}&=\sum_{t=1}^{r}\sum_{\alpha\in I_{\mu,t}} d_\alpha P^\alpha + \sum_{t>r}\sum_{\alpha\in I_{\mu,t}} d_{\alpha} P^{\alpha}. \end{split} \end{equation} From \ref{Ds2}, \ref{Ds4} we thus obtain that \[ \Big\|T^{a_r}\Big(\sum_{|\beta|=\mu}c_{\beta}(x^{(1)})^{\beta_{1}}\ast\cdots\ast (x^{(s)})^{\beta_{s}}\Big)\Big\|_r < \sum_{t>r}\text{card} (I_{\mu,t})C_{\mu} t^{\mu} 2^{-t}. \] Altogether we have that \[ \|T^{a_r}z-\rho y^{(l)}\|_r < \rho 2^{-r}+\sum_{\mu=1}^m\sum_{t>r}\text{card} (I_{\mu,t})C_{\mu} t^{\mu} 2^{-t}. \] Since \[ \text{card}(I_{\mu,t}) \leq \tbinom{\mu+t-1}{\mu}\leq \tfrac{(\mu+t)^\mu}{\mu!} \] the above series converge. Thus, for any $N\geq 1$ and $\varepsilon>0$ we can find an $r\geq N$ such that \[ \|T^{a_r}z-\rho y^{(l)}\|_N <\varepsilon. \] Since the sequence $(\rho y^{(l)})_l$ is dense in $X$ we deduce that $z$ is hypercyclic for $T$. The choice of the $\lambda_{k,\nu}$ also ensures that $\mathcal{A}$ is not finitely generated. 
Indeed, if it were finitely generated, we would have that, for some $s\geq 1$, \[ x^{(s+1)}=\sum_{\substack{\beta\in I \subset \mathbb{N}_0^s\\ \beta\neq 0}}c_{\beta}(x^{(1)})^{\beta_{1}}\ast\cdots\ast (x^{(s)})^{\beta_{s}} \] with complex numbers $c_\beta$, where $I$ is a finite set. Let again $m= \max\{|\beta| : c_\beta\neq 0\}$. As above we can then find some $\nu\geq 1$ such that \[ \sum_{|\beta|=m}c_{\beta}\lambda_{1,\nu}^{\beta_{1}}\cdots \lambda_{s,\nu}^{\beta_{s}}=:\rho\neq 0. \] In view of \eqref{eq:exp} and \eqref{eq:exp2}, we can then write \[ x^{(s+1)}=\sum_{\substack{1\leq\mu\leq m\\1\leq t\leq r\\(\mu,t)\neq (m,r)}}\sum_{\alpha\in I_{\mu,t}} d_\alpha P^\alpha + \sum_{\substack{\alpha\in I_{m,r}\\\alpha\neq (0,\ldots,0,m)}} d_{\alpha} P^{\alpha}+ \rho p_r^m + \sum_{\mu=1}^m\sum_{t>r}\sum_{\alpha\in I_{\mu,t}} d_{\alpha} P^{\alpha}, \] where $r=(m,l,\nu)$; note that $l\geq 1$ can be chosen freely. On the right-hand side, the first two terms represent a sequence whose non-zero coordinates have index less than $a_r$ by \ref{Ds2}, while the non-zero coordinates of the fourth term have index at least $\eta_{r+1}$. Since $a_r\leq m\gamma_r<\eta_{r+1}$, it follows that \[ x^{(s+1)}_{m\gamma_r}\neq 0. \] However, for the same reason and by the definition of $x^{(s+1)}$, we have that $x^{(s+1)}_{m\gamma_r} =0$ whenever $m\geq 2$. Thus we must have that $m=1$. But then there are complex numbers $c_k$ such that \[ x^{(s+1)} = \sum_{k=1}^s c_k x^{(k)} =\sum_{r=1}^\infty \Big(\sum_{k=1}^s c_k \lambda_{k,\nu_r}\Big) p_r. \] Now, by the choice of the matrix $\Lambda$ there is some $\nu\geq 1$ such that $\sum_{k=1}^s c_k \lambda_{k,\nu}-\lambda_{s+1,\nu}\neq 0$. This contradicts the fact that $x^{(s+1)}=\sum_{r=1}^\infty \lambda_{s+1,\nu_r} p_r$. Therefore $\mathcal{A}$ cannot be finitely generated. \end{proof} We spell out the two cases of greatest interest. In each case Property B is verified (see Example \ref{seqalg}); note that $\|e_n\|_q=q^n$ in $H(\mathbb{C})$.
\begin{corollary}\label{corrolalgCP1} Let $B_w$ be a mixing weighted backward shift on $\ell^1$, which we consider as a Banach sequence algebra under the Cauchy product. Then the set $HC(B_w)$ of hypercyclic vectors for $B_w$ is algebrable. This applies, in particular, to the Rolewicz operators $\lambda B$, $|\lambda|>1$. \end{corollary} \begin{corollary}\label{corrolalgCP2} Let $B_w$ be a mixing weighted backward shift on $H(\mathbb{C})$, which we consider as a Fr\'echet sequence algebra under the pointwise product of functions. Then the set $HC(B_w)$ of hypercyclic vectors for $B_w$ is algebrable. This applies, in particular, to the MacLane operator $D$ of differentiation. \end{corollary} Finally, any weighted backward shift $B_w$ on $\omega$ satisfies \eqref{eq:mix}. Thus, in view of Remark \ref{remomega}, we have the following. \begin{corollary}\label{corrolalgCP3} For any weighted backward shift $B_w$ on $\omega$, considered as a Fr\'echet sequence algebra under the Cauchy product, the set $HC(B_w)$ of hypercyclic vectors for $B_w$ is algebrable. \end{corollary}
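As a small computational illustration of the objects appearing in these corollaries (a sketch for the reader, not part of the theory; the function names are ours), the Cauchy product and a weighted backward shift can be realized on finitely supported sequences as follows.

```python
# Illustrative sketch: Cauchy product and weighted backward shift on
# finitely supported sequences, represented as lists of coefficients
# with index 0 first.  Names are ours, not from the paper.

def cauchy_product(x, y):
    """(x * y)_n = sum_{k=0}^{n} x_k y_{n-k}."""
    out = [0.0] * (len(x) + len(y) - 1)
    for i, xi in enumerate(x):
        for j, yj in enumerate(y):
            out[i + j] += xi * yj
    return out

def backward_shift(x, w):
    """(B_w x)_n = w_{n+1} x_{n+1}, where w[n] is the weight w_n."""
    return [w[n + 1] * x[n + 1] for n in range(len(x) - 1)]

# Rolewicz operator 2B: all weights equal to 2.
x = [0.0, 1.0, 0.5]           # x = e_1 + 0.5 e_2
w = [2.0] * 10
print(backward_shift(x, w))   # [2.0, 1.0]

# The Cauchy product multiplies the generating series:
# (1 + t)(1 + t) = 1 + 2t + t^2
print(cauchy_product([1.0, 1.0], [1.0, 1.0]))  # [1.0, 2.0, 1.0]
```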
\section{Introduction} In order to develop a practical natural language processing (NLP) system, it is essential to deal with ill-formed sentences that cannot be parsed correctly according to the grammar rules in the system. In this paper, an ``ill-formed sentence'' means one that cannot be parsed as a unified structure. A syntactic parser with general grammar rules is often unable to analyze not only sentences with grammatical errors and ellipses, but also long sentences, owing to their complexity. Thus, ill-formed sentences include not only ungrammatical sentences, but also some grammatical sentences that cannot be parsed as unified structures owing to the presence of unknown words or to a lack of completeness in the syntactic parser. In texts from a restricted domain, such as computer manuals, most sentences are grammatically correct. However, even a well-established syntactic parser usually fails to generate a unified parsed structure for about 10 to 20 percent of all the sentences in such texts, and the failure to generate a unified parsed structure in syntactic analysis leads to a failure in the output of an NLP system. Thus, it is indispensable to establish a correct analysis for such a sentence. To handle such sentences, most previous approaches apply various heuristic rules \cite{Jensen:1983,Douglas:1992,Richardson:1988}, including \begin{itemize} \item Relaxing constraints in the condition part of a grammatical rule, such as number and gender constraints \item Joining partial parses by using meta rules. \end{itemize} Either way, the output reflects the general plausibility of an analysis that can be obtained from information in the sentence; however, the interpretation of a sentence depends on its discourse, and a recovered parse that analyzes a phrase differently from its analyses in other sentences of the discourse often results in odd outputs of the NLP system.
Starting from the viewpoint that an interpretation of a sentence must be consistent in its discourse, we worked on completing incomplete parses by using information extracted from complete parses in the discourse. The results were encouraging. Since most words in a sentence are repeatedly used in other sentences in the discourse, the complete parses of well-formed sentences usually provided some useful information for completing incomplete parses in the same discourse. Thus, rather than trying to enhance a syntactic parser's grammar rules in order to support ill-formed sentences, which seems to be an endless task after the parser has obtained enough coverage to parse general grammatical sentences, we treat the syntactic parser as a black box and complete incomplete parses, in the form of partially parsed chunks that a bottom-up parser outputs for ill-formed sentences, by using information extracted from the discourse. In the next section, we demonstrate the effectiveness of using information extracted from the discourse to complete the syntactic analysis of ill-formed sentences. After that, we propose an algorithm for completing incomplete parses by using discourse information, and give the results of an experiment on completing incomplete parses in technical documents. \section{Discourse information for completing incomplete parses} \begin{table*}[ht] \caption{Frequency of morphologically identical words in computer manuals} \label{Table-Fre-of-MIW} \begin{center} {\small \begin{tabular}{|c|c|c||c|c|} \hline Part & \multicolumn{2}{c||}{Freq. of morph.
identical words} & \multicolumn{2}{c|}{Proportion of all content words} \\ \cline{2-5} of & Two or more & Five or more & Total number of & Proportion \\ speech & times (\%) & times (\%) & appearances (words) & (\%) \\ \hline Noun & 90.7 & 76.2 & 99047 & 59.8 \\ \hline Verb & 94.9 & 83.6 & 35622 & 21.5 \\ \hline Adjective & 88.9 & 71.0 & 16941 & 10.2 \\ \hline Adverb & 85.9 & 68.8 & 4993 & 3.0 \\ \hline Pronoun & 98.0 & 94.8 & 8911 & 5.4 \\ \hline \hline Total & 91.6 & 78.0 & 165514 & --- \\ \hline \end {tabular} } \end{center} \end{table*} In this section, we use the word ``discourse'' to denote a set of sentences that forms a text concerning related topics. Gale \cite{Gale:1992b} and Nasukawa \cite{Nasukawa:1993} reported that polysemous words within the same discourse have the same word sense with a high probability (98\% according to \cite{Gale:1992b}), and the results of our analysis indicate that most content words are frequently repeated in the discourse, as is shown in Table \ref{Table-Fre-of-MIW}; moreover, collocation (modifier-modifiee relationship) patterns are also repeated frequently in the same discourse, as is shown in Figure \ref{Figure-Size-and-MIW}. \begin{figure*}[htbp] \begin{center} \epsfile{file=Col-Freq.ps,width=150mm} \end{center} \caption{Rate of finding identical or similar collocation patterns in relation to the size of the discourse} \label{Figure-Size-and-MIW} \end{figure*} This figure reflects the analysis of structurally ambiguous phrases in a computer manual consisting of 791 consecutive sentences for discourse sizes ranging from 10 to 791 sentences.
For each structurally ambiguous phrase, more than one candidate collocation pattern was formed by associating the structurally ambiguous phrase with its candidate modifiees \footnote{For example, in the sentence\\ {\it You can use the folder on the desktop,}\\ the ambiguous phrase, {\it on the desktop}, forms two candidate collocation patterns:\\ ``use --(on)-- desktop'' and ``folder --(on)-- desktop.''}, and a collocation pattern identical with or similar to each of these candidate collocation patterns was searched for in the discourse. An identical collocation pattern is one in which both modifiee and modifier sides consist of words that are morphologically identical with those in the sentence being analyzed, and that stand in an identical relationship. A similar collocation pattern is one in which either the modifiee or modifier side has a word that is morphologically identical with the corresponding word in the sentence being analyzed, while the other has a synonym. Again, the relationship of the two sides is identical with that in the sentence being analyzed. Except in the case where all 791 sentences were referred to as a discourse, the results indicate the averages obtained by referring to each of several sample areas as a discourse. For example, to obtain data for the case in which the size of a discourse was 20 sentences, we examined 32 areas each consisting of 20 sentences, such as the 1st sentence to the 20th, the 51st to the 70th, and the 701st to the 720th. Thus, Figure \ref{Figure-Size-and-MIW} indicates that a collocation pattern either identical with or similar to at least one of the candidate collocation patterns of a structurally ambiguous phrase was found within the discourse in more than 70\% of cases, provided the discourse contained more than 300 consecutive sentences. 
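The identical-versus-similar distinction described above can be sketched as follows; the function name, the tuple representation of a collocation, and the synonym table are our illustrative choices, not the paper's implementation.

```python
# Sketch of the identical-vs-similar collocation test described above.
# A collocation pattern is (modifiee, relation, modifier); `synonyms`
# is an assumed synonym table (the paper does not specify its source).

def match_type(candidate, stored, synonyms):
    """Return 'identical', 'similar', or None for two collocations."""
    (mee1, rel1, mer1), (mee2, rel2, mer2) = candidate, stored
    if rel1 != rel2:                       # the relationship must agree
        return None
    same_mee = mee1 == mee2
    same_mer = mer1 == mer2
    if same_mee and same_mer:
        return 'identical'
    # similar: one side morphologically identical, the other a synonym
    if same_mee and mer2 in synonyms.get(mer1, set()):
        return 'similar'
    if same_mer and mee2 in synonyms.get(mee1, set()):
        return 'similar'
    return None

synonyms = {'desktop': {'screen'}}
cand = ('use', 'on', 'desktop')          # from "use ... on the desktop"
print(match_type(cand, ('use', 'on', 'desktop'), synonyms))    # identical
print(match_type(cand, ('use', 'on', 'screen'), synonyms))     # similar
print(match_type(cand, ('folder', 'in', 'desktop'), synonyms)) # None
```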
On the assumption that this feature of words in a discourse provides a clue to improving the accuracy of sentence analysis, we conducted an experiment on sentences for which a syntactic parser generated more than one parse tree, owing to the presence of words that can be assigned to more than one part of speech, or to the presence of complicated coordinate structures, or for various other reasons. If the constituent words tend to be associated in identical modification patterns with an identical part of speech and identical modifiee-modifier relationship when an identical phrase (a set of consecutive words) is repeated in different sentences within the discourse, the candidate parse that shares the most collocation patterns with other sentences in the discourse should be selected as the correct analysis. Out of 736 consecutive sentences in a computer manual, the ESG parser \cite{McCord:1991} generated multiple parses for 150 sentences. In this experiment, we divided the original 736 sentences into two texts, one a discourse of 400 sentences and the other a discourse of 336 sentences. Of the 150 sentences with multiple parses, 24 were incorrectly analyzed in all candidate parses or had identical candidate parses; we therefore focused on the other 126 sentences. In each candidate parse of these sentences, we assigned a score for each collocation that was repeated in other sentences in the discourse (in the form of either an identical collocation or a similar collocation), and added up the collocation scores to assign a preference value to the candidate parse. Out of the 126 sentences, different preference values were assigned to candidate parses in 54 sentences, and the highest value was assigned to a correct parse in 48 (88.9\%) of the 54 sentences. 
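The preference computation used in this experiment can be sketched roughly as follows; the numeric weights and the flat-list representation of parses and discourse collocations are our simplification of the description above, not the exact values of the system.

```python
# Rough sketch of the parse-preference computation described above:
# each candidate parse is a list of collocations, a collocation that is
# repeated in the discourse contributes a score, and the candidate with
# the highest total is selected.  The weights below are illustrative
# assumptions, not the paper's exact values.

IDENTICAL_SCORE = 1.0
SIMILAR_SCORE = 0.5   # assumed: similar collocations count for less

def preference(parse, discourse, synonyms):
    total = 0.0
    for (mee, rel, mer) in parse:
        for (dmee, drel, dmer) in discourse:
            if rel != drel:
                continue
            if (mee, mer) == (dmee, dmer):
                total += IDENTICAL_SCORE
            elif (mee == dmee and dmer in synonyms.get(mer, set())) or \
                 (mer == dmer and dmee in synonyms.get(mee, set())):
                total += SIMILAR_SCORE
    return total

def best_parse(candidates, discourse, synonyms):
    return max(candidates, key=lambda p: preference(p, discourse, synonyms))

discourse = [('use', 'on', 'desktop'), ('open', 'OBJ', 'folder')]
candidates = [
    [('use', 'on', 'desktop')],     # attach "on the desktop" to "use"
    [('folder', 'on', 'desktop')],  # attach it to "folder"
]
print(best_parse(candidates, discourse, {}))  # [('use', 'on', 'desktop')]
```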
Thus, there is a strong tendency for identical collocations to be actually repeated in the discourse, and when an identical phrase (a set of consecutive words) is repeated in different sentences, their constituent words tend to be associated in identical modification patterns. \begin{figure*}[ht] \begin{quote} \begin{quote} {\footnotesize \baselineskip=0.9\normalbaselineskip \begin{verbatim} ((XXXX (COMMENT(CONJ "as") (NP (PRON* "you" ("you" (SG PL)))) (AUXP (VERB* "can" ("can" PS))) (VERB* "see" ("see" PS))) (PUNC ",") (VP (NP (PRON* "you" ("you" (SG PL)))) (AUXP (VERB* "can" ("can" PS))) (VERB* "choose" ("choose" PS)) (PP (PP (PREP* "from")) (QUANP (ADJ* "many" ("many" BS))) (NOUN* "topics" ("topic" PL)))) (VP* (INFCL (INFTO (PREP* "to")) (VERB* "find" ("find" PS)) (COMPCL (COMPL "") (VERB* "out" ("out" PS)) (NP (PRON* "what" ("what" (SG PL)))))) (NP (NOUN* "information" ("information" SG))) (VERB* "is" ("be" PS)) (AJP (ADJ* "available" ("available" BS)) ? (PP (PP (PREP* "about")) (DETP (ADJ* "the" ("the" BS))) (NP (NOUN* "AS/400" ("AS/400" (SG PL)))) (NOUN* "system" ("system" SG))))) (PUNC ".")) 0) \end{verbatim} \baselineskip=1.8\normalbaselineskip} \end{quote} \end{quote} \caption{Example of an incomplete parse obtained by the PEG parser} \label{Fig-PEG-output} \end{figure*} Figure \ref{Fig-PEG-output} shows the output of the PEG parser \cite{Jensen:1992} for the following sentence: \begin{description} \item[(2.1)]{\sl As you can see, you can choose from many topics to find out what information is available about the AS/400 system.} \end{description} This is the 53rd sentence in Chapter 6 of a computer manual \cite{IBM:1992}, and every word of it is repeatedly used in other sentences in the same chapter, as shown in Table \ref{Table-DiscInfo-assign}. 
\begin{table*}[ht] \caption{Selecting POS candidates on the basis of discourse information} \label{Table-DiscInfo-assign} \begin{flushleft} {\small \begin{tabular}{|c| c c c c c c c c c c c c c|} \hline & As & you & can & see, & you & can & choose & from & many & topics & to & find & out \\ \hline Candidates & CJ & PN & N & N & PN & N & V & PP & AJ & N & PP & N & PP \\ for the POS & AV & & V & V & & V & & & N & & & V & N \\ of each word & PP & & & & & & & & PN & & & AV & PP \\ & & & & & & & & & & & & & V \\ \hline & As & you & can & see, & \multicolumn{9}{l|}{{\it appears in sentences 39, 175.}} \\ \cline{6-14} Phrases & CJ & PN & V & V & \multicolumn{1}{|c}{you} & can & choose & \multicolumn{6}{l|}{{\it appears in sentences 179.}} \\ \cline{10-14} repeated & & & & & \multicolumn{1}{|c}{PN} & V & V & & \multicolumn{1}{|c}{many} & \multicolumn{4}{l|}{{\it appears in sentences 49.}} \\ \cline{6-9} \cline{11-14} within the & & & & & & & & & \multicolumn{1}{|c|}{AJ} & \multicolumn{1}{c|}{topics} & & \multicolumn{2}{c|}{find out what} \\ \cline{2-10} \cline{12-13} discourse & \multicolumn{9}{l}{{\it appears in sentences 39, 140, 145, 160, 161 167 169...}} & N & \multicolumn{1}{|c}{to} & \multicolumn{1}{c|}{find} & \\ \cline{2-11} & \multicolumn{10}{l}{{\it appears in sentences 236.}} & PP & \multicolumn{1}{c|}{V} & \\ \cline{2-13} & \multicolumn{11}{l}{{\it appears in sentences 32.}} & \multicolumn{2}{c|}{V \ PP \ (PN)} \\ \hline POS & CJ & PN & V & V & PN & V & V & PP & AJ & N & PP & V & PP \\ \hline \end{tabular} \smallskip \begin{tabular}{|c| c c c c c c c c |} \hline & what & information & is & available & about & the & AS/400 & system. 
\\ \hline Candidates & AJ & N & V & AJ & AJ & DET & N & N \\ for the POS & AV & & & & AV & & & \\ of each word & PN & & & & PP & & & \\ \hline Phrases & what & information & is & available & about & the & \multicolumn{2}{l|}{{\it appears in sentences 49.}} \\ repeated & AJ & N & V & AJ & PP & DET & & \\ \cline{2-9} within the & & & & & & the & AS/400 & system. \\ discourse & \multicolumn{5}{l}{{\it appears in sentences 6, 109, 115.}} & DET & N & N \\ \hline POS & PN & N & V & AJ & PP & DET & N & N \\ & AJ & & & & & & & \\ \hline \end{tabular} \smallskip N={\it noun}\ \ PN={\it pronoun}\ \ V={\it verb}\ \ AJ={\it adjective}\ \ AV={\it adverb}\ \ CJ={\it conjunction}\ \ PP={\it preposition}\ \ DET={\it determiner} } \end{flushleft} \end{table*} For example, the 39th sentence in the same chapter contains ``As you can see,'' as shown in Figure \ref{Fig-PEGInfo-assign1}. \begin{figure*}[htbp] \begin{quote} \begin{quote} {\normalsize \baselineskip=1.5\normalbaselineskip {\sl As you can see, the help display provides additional information about the menu options \vspace{1mm}\\ available, as well as a list of related topics.}} {\footnotesize \baselineskip=0.9\normalbaselineskip \begin{verbatim} ((DECL (SUBCL (CONJ "as") (NP (PRON* "you" ("you" (SG PL)))) (AUXP (VERB* "can" ("can" PS))) (VERB* "see" ("see" PS)) (PUNC ",")) (NP (DETP (ADJ* "the" ("the" BS))) (NP (NOUN* "help" ("help" SG))) (NOUN* "display" ("display" SG))) (VERB* "provides" ("provide" PS)) : : : \end{verbatim} \baselineskip=1.8\normalbaselineskip} \end{quote} \end{quote} \caption{Thirty-ninth sentence of Chapter 6 and a part of its parse} \label{Fig-PEGInfo-assign1} \end{figure*} The sentences that contain some words in common with sentence (2.1) provide information that is very useful for deriving a correct parse of the sentence. 
Table \ref{Table-DiscInfo-assign} also shows that the parts of speech (POS) for most words in sentence (2.1) can be derived from words repeated in other sentences in the same chapter. In this table, the uppercase letters below the top sentence indicate the parts of speech that can be assigned to the words above. Underneath the candidate part of speech, repeated phrases in other sentences are presented along with the part of speech of each word in those sentences; thus, the first word of sentence (2.1), ``As,'' can be a conjunction, an adverb, or a preposition, but complete parses of the 39th and 175th sentences indicate that in this discourse the word is used as a conjunction when it is used in the phrase ``As you can see.'' Furthermore, information on the dependencies among most words in sentence (2.1) can be extracted from phrases repeated in other sentences in the same chapter, as shown in Figure \ref{PRECOMPLETE-DS}.\footnote{Thick arrows indicate dependencies extracted from the discourse information.} \begin{figure}[htbp] \begin{center} \epsfile{file=precomplete-ds.ps,width=74mm} \end{center} \caption{Constructing a dependency structure by combining dependencies existing within phrases that occur in other sentences of the same chapter} \label{PRECOMPLETE-DS} \end{figure} \section{Implementation} \subsection{Algorithm} As we showed in the previous section, information that is very useful for obtaining correct parses of ill-formed sentences is provided by complete parses of other sentences in the same discourse in cases where a parser cannot construct a parse tree by using its grammar rules. In this section, we describe an algorithm for completing incomplete parses by using this information. The first step of the procedure is to extract from an input text discourse information that the system can refer to in the next step in order to complete incomplete parses.
The procedure for extracting discourse information is as follows: \begin{enumerate} \item Each sentence in the whole text given as a discourse is processed by a syntactic parser. Then, except for sentences with incomplete parses and multiple parses, the results of each parse are stored as discourse information. To be precise, the position and the part of speech of each instance of every lemma are stored along with the lemma's modifiee-modifier relationships with other content words extracted from the parse data. Table \ref{Table-Dframe} shows an example of such information. \begin{table*}[ht] \caption{Discourse information on modifiees and modifiers of a noun ``cursor''} \label{Table-Dframe} \begin{center} {\small \begin{tabular}{|c|c|l|} \hline \multicolumn{3}{|c|}{Modifiers} \\ \hline POS & Relation & Word ({\tt CFRAME}s \ \ preference value) \\ \hline Noun & of & display ({\tt CFRAME106873} 0.1) \\ \cline{2-3} & in & protected area ({\tt CFRAME106872} 1) \\ \cline{2-3} & to & left ({\tt CFRAME106407} 0.1) \ \ right({\tt CFRAME106338} 0.1) \\ \cline{2-3} & {\tt DIRECT} & position ({\tt CFRAME106405} 1) \\ \hline Adjective & up & line ({\tt CFRAME106295} 0.1) \\ \cline{2-3} & {\tt DIRECT} & your ({\tt CFRAME106690 CFRAME106550} 2) \\ \hline \hline \multicolumn{3}{|c|}{Modifiees} \\ \hline POS & Relation & Word ({\tt CFRAME}s \ \ preference value) \\ \hline Verb & with & play ({\tt CFRAME106928} 0.1) \ \ be ({\tt CFRAME106927} 0.1) \\ \cline{2-3} & up & move ({\tt CFRAME106688} 1) \\ \cline{2-3} & {\tt SUBJ} & stop ({\tt CFRAME106572} 1) \ \ reach ({\tt CFRAME106346} 1) \ \ move ({\tt CFRAME106248} 1) \\ \cline{2-3} & {\tt OBJ} & move ({\tt CFRAME106402 CFRAME106335 CFRAME106292} 3) confuse ({\tt CFRAME106548} 1) \\ \cline{2-3} & {\tt RECIPIENT} & move ({\tt CFRAME106304} 1) \\ \hline \end {tabular} } \end{center} \end{table*} In this table, {\tt CFRAME\verb*+ +} indicates an instance of {\it cursor} in the discourse; information on the position and on the whole 
sentence can be extracted from each occurrence of {\tt CFRAME}. In accumulating discourse information, a score of 1.0 is awarded for each definite modifiee-modifier relationship. A lower score, 0.1, is awarded for each ambiguous modifiee-modifier relationship, since such relationships are less reliable. \item When all the sentences have been parsed, the discourse information is used to select the most preferable candidate for sentences with multiple possible parses, and the data of the selected parse are added to the discourse information. \end{enumerate} After all the sentences except the ill-formed sentences that caused incomplete parses have provided data for use as discourse information, the parse completion procedure begins. The initial data used in the completion procedure are a set of partial parses generated by a bottom-up parser as an incomplete parse tree. For example, the PEG parser generated three partial parses for sentence (2.1), consisting of ``As you can see,'' ``you can choose from many topics,'' and ``to find out what information is available about the AS/400 system,'' as shown in Figure \ref{Fig-PEG-output}. Since partial parses are generated by means of grammar rules in a parser, we decided to restructure each partial parse and unify them according to the discourse information, rather than construct the whole parse tree from discourse information. The completion procedure consists of two steps: \subsubsection*{Step 1: Inspecting each partial parse and restructuring it on the basis of the discourse information} For each word in a partial parse, the part of speech and the modifiee-modifier relationships with other words are inspected. If they are different from those in the discourse information, the partial parse is restructured according to the discourse information. 
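The scoring scheme just described (1.0 for each definite modifiee-modifier relationship, 0.1 for each ambiguous one) can be sketched as follows. This is a hypothetical simplification; the record format is an assumption and does not reproduce the actual {\tt CFRAME} representation:

```python
# Hypothetical sketch of accumulating discourse information: each
# modifiee-modifier relationship taken from a complete parse contributes
# 1.0 if it is definite and 0.1 if it is ambiguous.
from collections import defaultdict

class DiscourseInfo:
    def __init__(self):
        # (modifier lemma, relation, modifiee lemma) -> preference value
        self.relations = defaultdict(float)

    def add(self, modifier, relation, modifiee, ambiguous=False):
        self.relations[(modifier, relation, modifiee)] += 0.1 if ambiguous else 1.0

    def preference(self, modifier, relation, modifiee):
        return self.relations[(modifier, relation, modifiee)]

# Entries like those shown for the noun "cursor" in the table above:
info = DiscourseInfo()
info.add("cursor", "in", "protected area")           # definite: +1.0
info.add("cursor", "of", "display", ambiguous=True)  # ambiguous: +0.1
for _ in range(3):                                   # e.g. "move the cursor"
    info.add("cursor", "OBJ", "move")                # three definite uses

print(info.preference("cursor", "OBJ", "move"))      # 3.0
```

The accumulated preference values can then serve as tie-breakers when a partial parse is restructured in Step 1, favoring the relationships most strongly attested in the discourse.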
\begin{figure}[t] \begin{center} \psbox[width=74mm]{parse1d.ps} \end{center} \caption{Example of an incomplete parse by the ESG parser} \label{Fig-ESG-output} \end{figure} For example, Figure \ref{Fig-ESG-output} shows an incomplete parse of the following sentence, which is the 43rd sentence in a technical text that consists of 175 sentences.\footnote{This structure resulting from an incomplete parse does not indicate that the grammar of the parser lacks a rule for handling a possessive case indicated by an apostrophe and an s. When the parser fails to generate a unified parse, it outputs partial parses in such a manner that the smallest possible number of partial parses covers every word in the input sentence.} \begin{description} \item[(3.1)]{\sl Fig. 3 is an isometric view of the magazine taken from the operator's side with one cartridge shown in an unprocessed position and two cartridges shown in a processed position.} \end{description} In the second partial parse, the word ``side'' is analyzed as a verb. The same word appears fifteen times in the discourse information extracted from well-formed sentences, and is analyzed as a noun every time it appears in complete parses; furthermore, there are no data on the noun ``operator'' modifying the verb ``take'' through the preposition ``from,'' while there is information on the noun ``operator's'' modifying the noun ``side,'' as in sentence (3.2), and on the noun ``side'' modifying the verb ``take,'' as in sentence (3.3). \begin{description} \item[(3.2)]{\sl In the operation of the invention, an operator loads cartridges into the magazine from \underline{the operator's side} as seen in Figs. 3 and 12.} (151st sentence) \item[(3.3)]{\sl Fig.
4 is an isometric view of the magazine \underline{taken from the machine side} with one cartridge shown in the unprocessed position and two cartridges shown in the processed position.} (44th sentence) \end{description} Therefore, these two partial parses are restructured by changing the part of speech of the word ``side'' to noun, and the modifiee of the noun ``operator'' to the noun ``side,'' while at the same time changing the modifiee of the noun ``side'' to the verb ``take.'' As a result, a unified parse is obtained, as shown in Figure \ref{Fig-recovered-output}. \begin{figure}[t] \begin{center} \psbox[width=84mm]{parse2c.ps} \end{center} \caption{Example of a completed parse} \label{Fig-recovered-output} \end{figure} \subsubsection*{Step 2: Joining partial parses on the basis of the discourse information} If the partial parses are not unified into a single structure in the previous step, they are joined together on the basis of the discourse information until a unified parse is obtained. Partial parses are joined as follows: first, the possibility of joining the first two partial parses is examined; next, either the unified result of the first two parses or the second parse alone is examined to determine whether it can be joined to the third parse; the examination then moves on to the next parse, and so on. Two partial parses are joined if the root (head node) of either parse tree can modify a node in the other parse without crossing the modification of other nodes. To examine the possibility of modification, discourse information is applied at three different levels. First, for a candidate modifier and modifiee, an identical pattern containing the modifier word and the modifiee word in the same part of speech and in the same relationship is searched for in the discourse information. Next, if there is no identical pattern, a modification pattern with a synonym \cite{COLLINS:1984} of the node on one side is searched for in the discourse information.
Then, if this also fails, a modification pattern containing a word that has the same part of speech as the word on one side of the node is searched for. Since the discourse information consists of modification patterns extracted from complete parses, it reflects the grammar rules of the parser, and a matching pattern with a part of speech rather than an actual word on one side can be regarded as a relaxation rule, in the sense that its syntactic and semantic constraints are less restrictive than those of the corresponding grammar rule in the parser. These matching conditions at different levels are applied in such a manner that partial parses are joined through the most preferable nodes. \subsection{Results} We have implemented this method on an English-to-Japanese machine translation system called Shalt2 \cite{Takeda:1992}, and conducted experiments to evaluate its effectiveness. Table \ref{Table-result} gives the result of our experiments on two technical documents of different kinds, one a patent document (text 1), and the other a computer manual (text 2). Since text 1 contained longer and more complex sentences than text 2, our ESG parser failed to generate unified parses more often for text 1; on the other hand, the frequency of morphologically identical words and collocation patterns was also higher in text 1, so our method proved more effective there. In both texts, the discourse information provided enough information to unify partial parses of an incomplete parse in more than half of the cases. However, the resulting unified parses were not always correct. Since sentences with incomplete parses are usually quite long and contain complicated structures, it is hard to obtain a perfect analysis for those sentences.
Thus, in order to evaluate the improvement in the output translation rather than the improvement in the rate of success in syntactic analysis, in which only perfect analyses are counted, we compared output translations generated with and without the application of our method. When our method was not applied, partial parses of an incomplete parse were joined by means of some heuristic rules, such as the one that joins a partial parse with ``NP'' in its root node to a partial parse with ``VP'' in its root node, and the root node of the second partial parse was joined to the last node of the first partial parse by default. When our method was applied but the discourse information did not provide enough information to unify the partial parses, the heuristic rules were applied. In such cases the default rule of joining the root node of the second partial parse to the last node of the first partial parse was usually the one applied, since the least restrictive matching patterns in our method were similar to the heuristic rules. Thus, the system generated a unified parse for each sentence regardless of the discourse information. The results of the comparison are shown in Table \ref{Table-result}. The translations were compared by checking how well the output Japanese sentence conveyed the meaning of the input English sentence. Since most unified parses contained various errors, such as incorrect modification patterns and incorrect parts of speech assigned to some words, fewer errors generally resulted in better translations, while incorrect parts of speech in particular resulted in worse translations.
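The three matching levels of Step 2 and the default fallback can be sketched in Python. The pattern store, the synonym table, and the node representation below are simplified assumptions for illustration, not the system's actual data structures:

```python
# Hypothetical sketch of Step 2's three-level matching for deciding whether
# a candidate modifier may attach to a candidate modifiee:
#   level 1: an identical word+POS pattern exists in the discourse;
#   level 2: the same, with a synonym substituted on one side;
#   level 3: the same, with only the POS matching on one side (the least
#            restrictive "relaxation rule").
def can_modify(modifier, modifiee, patterns, synonyms):
    """Return the best (lowest) matching level for the (word, pos) pair
    `modifier` attaching to `modifiee`, or None if no pattern matches."""
    best = None
    for word, pos, relation, head, head_pos in patterns:
        exact_mod = (word, pos) == modifier
        exact_head = (head, head_pos) == modifiee
        syn_mod = modifier[0] in synonyms.get(word, ()) and pos == modifier[1]
        syn_head = modifiee[0] in synonyms.get(head, ()) and head_pos == modifiee[1]
        if exact_mod and exact_head:
            level = 1
        elif (syn_mod and exact_head) or (exact_mod and syn_head):
            level = 2
        elif (pos == modifier[1] and exact_head) or (exact_mod and head_pos == modifiee[1]):
            level = 3
        else:
            continue
        best = level if best is None else min(best, level)
    return best

# Pattern extracted from a complete parse, e.g. "taken from the machine side":
patterns = {("side", "NOUN", "from", "take", "VERB")}
synonyms = {"side": {"flank"}}   # toy synonym table

print(can_modify(("side", "NOUN"), ("take", "VERB"), patterns, synonyms))   # 1
print(can_modify(("flank", "NOUN"), ("take", "VERB"), patterns, synonyms))  # 2
print(can_modify(("edge", "NOUN"), ("take", "VERB"), patterns, synonyms))   # 3
# When no level matches, the baseline default applies instead: the root of
# the second partial parse is attached to the last node of the first.
```

Levels are tried from the most to the least restrictive, so partial parses are joined through the most preferable nodes, as described above.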
\begin{table*} \caption{Results of completing incomplete parses on the basis of discourse information} \label{Table-result} \begin{center} {\small \begin{tabular}{|c|c|c|c|} \hline \multicolumn{2}{|c|}{} & Text 1 & Text 2 \\ \hline \multicolumn{2}{|c|}{Number of sentences in discourse} & 175 & 354 \\ \hline \multicolumn{2}{|c|}{Incomplete parses} & 32 & 31\\ \hline \hline \multicolumn{2}{|c|}{Unified into a single parse} & 18 (56.3\%) & 17 (54.8\%) \\ \hline Improvement & Better & 7 & 7 \\ \cline{2-4} in & Even & 10 & 7\\ \cline{2-4} translation & Worse & 1 & 3 \\ \hline \multicolumn{2}{|c|}{Partially joined or restructured} & 12 (37.5\%) & 8 (25.8\%) \\ \hline Improvement & Better & 4 & 2 \\ \cline{2-4} in & Even & 7 & 3 \\ \cline{2-4} translation & Worse & 1 & 3 \\ \hline \multicolumn{2}{|c|}{Not changed} & 2 (6.3\%) & 6 (19.4\%) \\ \hline \end{tabular} } \end{center} \end{table*} \section{Conclusion} We have proposed a method for completing partial parses of ill-formed sentences on the basis of information extracted from complete parses of well-formed sentences in the discourse. Our approach to handling ill-formed sentences is fundamentally different from previous ones in that it reanalyzes the part of speech and modifiee-modifier relationships of each word in an ill-formed sentence by using information extracted from analyses of other sentences in the same text, thereby attempting to generate the analysis most appropriate to the discourse. The results of our experiments show the effectiveness of this method; moreover, implementation of this method on a machine translation system improved the accuracy of its translation. Since this method has a simple framework that does not require any extra knowledge resources or inference mechanisms, it is robust and suitable for a practical natural language processing system.
Furthermore, with regard to the turn-around time (TAT) of the whole translation procedure, the improvement in the parses achieved by using this method together with other disambiguation methods based on discourse information, as described in another paper \cite{Nasukawa:1995}, shortened the TAT in the late stages of the translation procedure; this compensated for the extra TAT required to process the discourse information, provided that the size of the discourse was kept to between 100 and 300 sentences. In this paper, the term ``discourse'' is used to mean a set of words in a text together with the usage of each of those words in that text -- namely, a part of speech and modifiee-modifier relationships with other words. The basic idea of our method is to improve the accuracy of sentence analysis simply by maintaining consistency in the usage of morphologically identical words within the same text. Thus, the effectiveness of this method is highly dependent on the source text, since it presupposes that morphologically identical words are likely to be repeated in the same text. However, the results have been encouraging, at least with technical documents such as computer manuals, where words with the same lemma are frequently repeated in a small area of text. Moreover, our method improves the translation accuracy, especially for frequently repeated phrases, which are usually considered to be important, and thus leads to an improvement in the overall accuracy of the natural language processing system. \section*{Acknowledgements} I would like to thank Michael McDonald for invaluable help in proofreading this paper. I would also like to thank Taijiro Tsutsumi, Masayuki Morohashi, Koichi Takeda, Hiroshi Maruyama, Hiroshi Nomiyama, Hideo Watanabe, Shiho Ogino, Naohiko Uramoto, and the anonymous reviewers for their comments and suggestions.